Your CRM tells you what happened. These metrics tell you what is about to happen — and what to do about it.
We built Speedrun's analytics engine on a premise borrowed from quantitative finance: the metrics everyone tracks are the wrong ones, and the metrics that actually predict outcomes are hiding in data that is already being collected. What follows is the complete framework — 15 composite metrics organized into five categories, each designed to answer a specific question that activity reports and pipeline reviews cannot.
Deal Intelligence
Is this deal real? Is it healthy? Will it close?
1. Deal Engagement Velocity (DEV)
The speedometer, not the odometer.
DEV is a weighted composite of engagement events per deal per week, with exponential recency decay. An email reply is worth more than an open. A meeting held is worth more than a meeting booked. And an event from yesterday carries more weight than one from three weeks ago.
The number is divided by days in the current stage, so engagement intensity is read relative to how long the deal has been sitting there. A deal in Discovery for 5 days with a raw weighted engagement of 40 is healthy. A deal in Technical Evaluation for 30 days with the same raw engagement of 40 is stalling — the normalization is what surfaces the difference.
What it replaces: "Last activity: 3 days ago." A binary, memoryless metric that tells you nothing about trend, intensity, or direction.
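A minimal sketch of the DEV calculation, using the standard library. The event weights and the one-week half-life are illustrative assumptions, not Speedrun's actual parameters; the shape — weighted events, exponential recency decay, normalization by days in stage — follows the description above.

```python
import math
from datetime import date

# Hypothetical event weights and decay half-life, chosen for illustration only.
EVENT_WEIGHTS = {"email_open": 1, "email_reply": 4, "meeting_booked": 6, "meeting_held": 10}
HALF_LIFE_DAYS = 7.0  # an event loses half its weight every week

def deal_engagement_velocity(events, days_in_stage, today):
    """events: list of (event_type, event_date) tuples."""
    decay = math.log(2) / HALF_LIFE_DAYS
    raw = sum(
        EVENT_WEIGHTS[kind] * math.exp(-decay * (today - when).days)
        for kind, when in events
    )
    return raw / max(days_in_stage, 1)  # normalize by time in current stage

today = date(2025, 1, 31)
events = [
    ("meeting_held", date(2025, 1, 30)),  # yesterday: near-full weight
    ("email_reply",  date(2025, 1, 10)),  # three weeks ago: heavily decayed
]
fresh_deal = deal_engagement_velocity(events, days_in_stage=5, today=today)
stale_deal = deal_engagement_velocity(events, days_in_stage=30, today=today)
assert fresh_deal > stale_deal  # same raw engagement, very different reading
```

The division by days in stage is what turns identical raw engagement into different scores for the 5-day and 30-day deals.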
2. Deal Momentum Score (DMS)
Three independent signals combined into a single "is this deal on track?" reading.
Stage velocity — how fast is the deal moving through stages relative to the historical median for this deal type? A ratio above 1.0 means faster than normal. Below 0.5 means significantly behind pace.
Engagement trend — is DEV increasing or decreasing week over week? The direction matters more than the level.
Activity cadence consistency — are interactions evenly spaced (healthy bilateral rhythm) or clustered in bursts followed by silences (rep-driven activity without buyer reciprocation)?
Deals with DMS above 70 close within the forecasted timeframe approximately 80% of the time. Deals below 30 slip 75% of the time. The score provides a 2-4 week advance warning of forecast misses.
3. Response Latency Gradient (RLG)
The early warning system that no one else has.
RLG fits a linear regression to the timestamps of a buyer's last N responses. Positive slope means responses are getting slower — the deal is cooling. Negative slope means they are getting faster — engagement is intensifying.
A buyer who replied in 2 hours on Day 1, 8 hours on Day 7, and 36 hours on Day 14 is still "responsive" by any binary measure. The gradient tells you the deal is dying — 30 to 60 days before it appears as a missed forecast. No pipeline review, no rep self-assessment, and no CRM dashboard captures this signal. It lives in email timestamps that every platform collects and no platform analyzes.
4. Deal Health Composite (DHC)
The one score. A weighted composite of every deal-level metric:
| Component | Weight | Signal |
|---|---|---|
| DEV (Engagement Velocity) | 20% | Is the buyer actively engaged? |
| BGC (Group Coverage) | 20% | Are we talking to the right people? |
| DMS (Momentum) | 15% | Is this deal on track? |
| RLG (Response Gradient) | 15% | Is engagement accelerating or fading? |
| CSI (Champion Strength) | 10% | Can our champion actually close? |
| END (Network Density) | 10% | Is the buyer group aligned? |
| AQS (Activity Quality) | 10% | Are we doing the right things? |
Score of 90-100 means high-confidence close. 70-89 means healthy and on track. 50-69 means at risk — intervention needed. 30-49 means critical — likely to stall without significant change. Below 30 means terminal — recommend disqualification to focus resources elsewhere.
DHC replaces gut feel with a composite vital sign. Like a single reading on a patient's chart, any one metric can mislead; the composite is diagnostic.
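The composite itself is a straightforward weighted sum. This sketch uses the weights from the table above and the score bands from the text; the component scores fed in are illustrative inputs.

```python
# Weights taken from the table above; component scores (0-100) are illustrative.
DHC_WEIGHTS = {
    "DEV": 0.20, "BGC": 0.20, "DMS": 0.15, "RLG": 0.15,
    "CSI": 0.10, "END": 0.10, "AQS": 0.10,
}

def deal_health_composite(scores):
    """scores: dict mapping component name -> 0-100 score."""
    return sum(DHC_WEIGHTS[name] * scores[name] for name in DHC_WEIGHTS)

def dhc_band(score):
    if score >= 90: return "high-confidence close"
    if score >= 70: return "healthy"
    if score >= 50: return "at risk"
    if score >= 30: return "critical"
    return "terminal"

scores = {"DEV": 85, "BGC": 70, "DMS": 60, "RLG": 75, "CSI": 80, "END": 50, "AQS": 65}
dhc = deal_health_composite(scores)   # 70.75
band = dhc_band(dhc)                  # "healthy"
```

Note how the example deal clears the "healthy" bar despite a weak END score: no single component dominates, which is the point of a composite.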
Buyer Intelligence
Are we talking to the right people? Can they actually get this done?
5. Buyer Group Coverage (BGC)
The single most predictive metric in B2B sales.
BGC measures the ratio of engaged buying roles to required buying roles for the deal type, weighted by seniority and engagement depth. Five contacts who all report to the same engineering manager score lower than three contacts spanning VP of Engineering, Director of IT, and Head of Procurement.
The required roles vary by deal size: a $50K deal may only need a champion and an economic buyer. A $500K deal needs economic buyer, technical evaluator, champion, end user representation, and procurement. BGC tells you exactly which roles are covered and which are missing.
Multi-threaded deals close at 2-3x the rate of single-threaded deals. ^1^ BGC turns that research finding into an actionable diagnostic.
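A sketch of the coverage calculation. The required-role list and the per-role seniority weights are illustrative assumptions for a large deal, not the actual model; the mechanic — engagement-weighted coverage of required roles, plus a list of gaps — follows the description above.

```python
# Required roles and seniority weights for a large deal: illustrative assumptions.
REQUIRED_ROLES = {
    "large": {"economic_buyer": 3, "technical_evaluator": 2,
              "champion": 3, "end_user": 1, "procurement": 1},
}

def buyer_group_coverage(engaged_roles, deal_size="large"):
    """engaged_roles: dict of role -> engagement depth in [0, 1].
    Returns (coverage ratio, list of missing roles)."""
    required = REQUIRED_ROLES[deal_size]
    total = sum(required.values())
    covered = sum(weight * engaged_roles.get(role, 0.0)
                  for role, weight in required.items())
    return covered / total, [r for r in required if r not in engaged_roles]

# Three contacts spanning distinct senior roles...
broad = {"economic_buyer": 0.8, "champion": 1.0, "technical_evaluator": 0.6}
# ...outscore more contacts clustered in one function.
narrow = {"end_user": 1.0, "technical_evaluator": 1.0}

broad_score, broad_missing = buyer_group_coverage(broad)
narrow_score, narrow_missing = buyer_group_coverage(narrow)
assert broad_score > narrow_score
```

The second return value is the actionable part: it names exactly which required roles are still unengaged.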
6. Champion Strength Index (CSI)
Not all champions are created equal. CSI quantifies whether your internal champion can actually drive the deal to close.
Three dimensions:
Engagement depth — meeting attendance, email responsiveness, content consumption. How invested is the champion in the evaluation?
Internal advocacy — email forwards to colleagues, internal meetings scheduled on your behalf, new stakeholders introduced into the conversation. These are the signals that the champion is selling internally, not just evaluating externally.
Organizational influence — title seniority, tenure at the company, centrality in the communication network. A passionate champion with no organizational power is as dangerous as no champion at all.
A deal with a high BGC and a low CSI has the coverage but lacks the engine. A deal with a low BGC and a high CSI has the engine but will hit a wall when stakeholders outside the champion's reach weigh in.
7. Engagement Network Density (END)
Are the people in the buyer group talking to each other?
END measures the ratio of actual connections between buyer contacts to the maximum possible connections. A "connection" exists when two buyer contacts are CC'd on the same email thread, attend the same meeting, or are referenced together in conversation.
High END means the buyer group is communicating internally about the deal — consensus is forming. Low END means siloed evaluation — the VP doesn't know what the engineer thinks, procurement hasn't talked to IT, and the misalignment will surface as a late-stage objection or a silent stall.
This is social network analysis applied to the buyer group — a technique borrowed from quantitative finance, where hedge funds analyze corporate board interlocks and executive communication networks to predict M&A activity.
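The density calculation is the classic graph-density ratio: observed pairwise connections over the n(n-1)/2 possible pairs. A minimal sketch, where each "interaction" is a set of buyer contacts seen together:

```python
from itertools import combinations

def engagement_network_density(contacts, interactions):
    """contacts: list of contact names.
    interactions: list of sets of contacts observed together
    (CC'd on a thread, in the same meeting, referenced together)."""
    n = len(contacts)
    possible = n * (n - 1) // 2
    connected = set()
    for group in interactions:
        for pair in combinations(sorted(group), 2):
            connected.add(pair)
    return len(connected) / possible if possible else 0.0

contacts = ["vp_eng", "director_it", "procurement", "engineer"]
interactions = [
    {"vp_eng", "engineer"},                # same meeting
    {"director_it", "vp_eng", "engineer"}, # CC'd on one thread
]
density = engagement_network_density(contacts, interactions)  # 3 of 6 pairs -> 0.5
```

Procurement appears in no interaction at all here, so half the possible connections are missing: exactly the siloed-evaluation pattern the metric is built to expose.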
8. Time-to-Power (TTP)
How fast are you getting to the person who can actually say yes?
TTP measures the number of days from first outbound touch to the first meaningful engagement with an economic buyer (VP level or above). "Meaningful" means they replied to an email, attended a meeting, or proactively requested information — not that they were CC'd on a thread they never read.
Deals where TTP is below the company-wide median close 40% faster and at 1.5x the win rate. Deals where TTP exceeds twice the median have close rates below 10%.
The metric isolates the single most important early-deal activity: reaching the person who controls the budget and the decision. Everything else is preparation.
Forecast Intelligence
What is our pipeline actually worth?
9. Pipeline Risk-Adjusted Value (PRAV)
The number the CFO actually needs.
PRAV calculates probability-weighted expected revenue for every deal in the pipeline — not using crude stage-based percentages ("Proposal = 50%"), but a dynamic probability model derived from the five deal factors: engagement momentum, political breadth, temporal fit, competitive position, and process velocity.
Deals past their expected close date are penalized with exponential time decay. A $500K deal that was supposed to close three months ago and is still "in negotiation" is not a half-million-dollar opportunity. The model treats it accordingly.
The sum across all deals gives a single portfolio-level number — the realistic expected value of the pipeline, stripped of optimism bias and adjusted for risk. It replaces hope-based forecasting with factor-based estimation.
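A sketch of the risk adjustment for past-due deals. The daily decay rate is an illustrative assumption, and a real model would derive `close_probability` from the five factors rather than take it as an input; the structure — probability weighting times exponential time decay, summed across the pipeline — follows the description above.

```python
import math

# Decay rate per day past the expected close date: illustrative assumption.
OVERDUE_DECAY = 0.03

def risk_adjusted_value(amount, close_probability, days_past_due=0):
    decay = math.exp(-OVERDUE_DECAY * max(days_past_due, 0))
    return amount * close_probability * decay

pipeline = [
    # (amount, model probability, days past expected close)
    (500_000, 0.50, 90),  # "in negotiation" three months past due
    (120_000, 0.70, 0),   # on schedule
]
prav = sum(risk_adjusted_value(*deal) for deal in pipeline)
```

Under these assumptions the overdue $500K deal contributes under $20K of expected value — the model treats a stale "half-million-dollar opportunity" as what it has become.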
10. Forecast Confidence Score (FCS)
Which forecasts should you trust?
FCS measures the alignment between a rep's stated forecast (their probability estimate and close date) and the model's independent assessment using DHC and the five factors.
A score near 1.0 means the rep and the model agree — the rep has well-calibrated judgment about this deal. A score near 0.0 means complete disagreement — either the rep sees something the model doesn't, or the rep is wrong.
Tracked historically, FCS reveals each rep's forecasting calibration. Reps who are consistently optimistic (FCS consistently below 0.5, with actual results below their forecasts) need their pipeline haircut. Reps whose FCS is consistently high have earned forecast trust.
Rep Intelligence
Who is actually good? At what? Why?
11. Revenue Above Replacement (RAR)
Baseball's WAR (wins above replacement), applied to sellers. The most important rep-level metric.
RAR asks: how much more revenue does this rep generate than a replacement-level rep would generate in the same territory, with the same pipeline quality, against the same competition?
The calculation decomposes revenue into context and skill:
- Territory expected yield — historical revenue for this territory, adjusted for market size and customer base
- Pipeline quality adjustment — weighting for self-sourced vs. inbound pipeline
- Degree of difficulty — average competitive displacement difficulty of the rep's deals
- Marketing support level — campaign, event, and content investment in the territory
What remains after these adjustments is the rep's actual contribution. It is the first metric that fairly compares reps across unequal conditions — and the only one that should inform compensation, promotion, and territory design decisions.
12. Activity Quality Score (AQS)
Quality over quantity.
AQS weights each activity by its outcome, the seniority of the buyer involved, the depth of the interaction, and whether it produced forward progress. A 60-minute meeting with a VP that moved the deal to the next stage scores dramatically higher than a 15-minute check-in call with an IC that maintained the status quo.
The aggregate AQS — total weighted quality divided by total activity count — reveals efficiency. A rep with 20 activities and AQS of 80 is more valuable than a rep with 50 activities and AQS of 30. Activity metrics are universally gamed; AQS is nearly impossible to game because it requires actual buyer engagement and deal progression.
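A sketch of the per-activity weighting. The seniority multipliers, the depth proxy, and the progression bonus are all illustrative assumptions; the shape — multiplying depth, buyer seniority, and forward progress — follows the description above.

```python
# Multipliers are illustrative assumptions, not the article's actual weights.
SENIORITY = {"ic": 1.0, "director": 2.0, "vp": 3.0}

def activity_quality(minutes, buyer_level, advanced_stage):
    depth = minutes / 15               # depth proxy: 15-minute units
    progress = 3.0 if advanced_stage else 1.0
    return depth * SENIORITY[buyer_level] * progress

def aggregate_aqs(activities):
    scored = [activity_quality(*a) for a in activities]
    return sum(scored) / len(scored)

# The article's contrast: a 60-minute VP meeting that advanced the deal
# vs. a 15-minute IC check-in that maintained the status quo.
vp_meeting = activity_quality(60, "vp", True)   # 4 * 3.0 * 3.0 = 36.0
ic_checkin = activity_quality(15, "ic", False)  # 1 * 1.0 * 1.0 = 1.0
assert vp_meeting > ic_checkin
```

Because the score multiplies rather than adds, an activity that is long but junior and unproductive cannot buy its way up — which is what makes the metric hard to game with volume.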
13. Rep Skill Decomposition Profile (RSDP)
Seven independent skill dimensions, scored 0-100:
- Prospecting efficiency — meetings booked per 100 outreach attempts, adjusted for persona difficulty
- Discovery depth — qualification thoroughness, pain identification, next-step creation
- Multi-threading ability — average BGC across the rep's active deals
- Deal progression — average DMS across deals
- Competitive win rate — win rate in deals with 2+ competitors, adjusted for displacement difficulty
- Negotiation — average discount, deal slip rate, close date accuracy
- Expansion — net revenue retention contribution, upsell and cross-sell rate
The profile replaces vibes-based performance reviews with a data-driven diagnostic. It tells the manager exactly which skills to coach, which deals to staff with complementary reps, and which territories to assign based on the rep's strengths.
Outbound Intelligence
What actually works?
14. Sequence Effectiveness Quotient (SEQ)
Not just open rates. Revenue per hour of rep time invested.
SEQ traces the full funnel from outbound sequence enrollment through to pipeline creation and revenue. It combines positive outcome rate (replies + meetings booked as a percentage of total sends), response quality (sentiment, length, next-step creation), and cost per touch (rep time invested per step).
The metric answers the question that open rates and reply rates cannot: which sequence produces the most revenue per unit of effort, for which personas, with which messaging?
Compared across different sequences, different target personas, different timing patterns, and different rep execution quality, SEQ reveals the combinations that work — and retires the combinations that don't before they waste another quarter of outbound capacity.
15. Competitive Displacement Difficulty (CDD)
The park-adjusted statistic for deal difficulty.
CDD combines three dimensions: incumbent entrenchment (years as customer, integration depth, remaining contract term), estimated switching costs (implementation time, user migration, data migration), and competitive density (number of vendors in active evaluation).
Score of 1 means easy greenfield — no incumbent, no competitors. Score of 10 means entrenched incumbent with deep integration, long remaining contract, and multiple active competitors.
CDD adjusts raw win rates for context. A rep with a 30% win rate against CDD-8 deals is significantly more valuable than one with a 45% win rate against CDD-2 deals. Without this adjustment, territory luck and deal assignment masquerade as skill.
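A sketch of the adjustment. Averaging the three dimension scores into CDD, and scaling win rate linearly against a baseline difficulty, are illustrative assumptions — any monotone difficulty adjustment would serve — but they reproduce the comparison in the text:

```python
def cdd(entrenchment, switching_cost, competitive_density):
    """Each dimension scored 1-10; CDD here is their mean (illustrative)."""
    return (entrenchment + switching_cost + competitive_density) / 3

def difficulty_adjusted_win_rate(win_rate, avg_cdd, baseline_cdd=5.0):
    """Scale a raw win rate by relative deal difficulty (illustrative)."""
    return win_rate * (avg_cdd / baseline_cdd)

rep_a = difficulty_adjusted_win_rate(0.30, 8.0)  # 30% against CDD-8 deals
rep_b = difficulty_adjusted_win_rate(0.45, 2.0)  # 45% against CDD-2 deals
assert rep_a > rep_b
```

After adjustment, the 30%-against-hard-deals rep outscores the 45%-against-easy-deals rep — the "park adjustment" strips territory luck out of the comparison.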
Building the edge
These 15 metrics share a common architecture: they are composites built from signals that already exist in the data — email timestamps, meeting attendee lists, stage change logs, engagement events, buyer group memberships, and sequence enrollment records.
No new data collection is required. The data is already captured. What changes is the analysis — moving from counting activities to measuring the factors that actually predict outcomes.
The teams that adopt this framework will have the same structural advantage that transformed baseball in 2002 and quantitative finance in the decades that followed: not better intuition, but better math applied to better questions.
In a market where 86% of B2B technology purchases stall during the buying process, ^2^ the edge belongs to the teams that can see the stall coming — and act before it arrives.
Notes
^1^ Ebsta Revenue Intelligence Report, 2024. Analysis of 22 million sales interactions found that deals with 3+ engaged stakeholders close at 2.4x the rate of single-threaded deals.
^2^ Gartner, "The New B2B Buying Journey," 2024. 86% of B2B technology purchases experience at least one stall during the buying process.
