The Buying Committee
Ross Sylvester, Co-Founder & CEO, Adrata | Feb 2026 | ~14 min read
Last quarter, I watched two reps at a customer's organization work the same-sized deal at the same type of company in the same industry. Both deals were around $450K. Both targeted mid-market financial services firms with roughly 2,500 employees. Both had strong champions and confirmed budget.
One rep mapped four people: the VP of Revenue Operations, the CRO, a sales director, and a procurement contact. The other rep mapped eleven: three from sales leadership, two from marketing, the CFO's office, IT security, a regional GM, a RevOps analyst, the head of enablement, and a VP of customer success who had been quietly evaluating competitive products for six months.
The first rep lost to "no decision" in month four. The second rep closed in month three.
The difference was not skill, effort, or product knowledge. It was the accuracy of each rep's mental model of who was actually in the room -- including the rooms they would never be invited to. This is the buying committee problem, and after two years of building Adrata's Buyer Group Intelligence platform and analyzing the data it produces, I can say with some confidence that most B2B organizations fundamentally misunderstand what buying committees look like, how they behave, and why they matter.
What the Platform Actually Measures
Before I get into the data, some context on what we're looking at. Adrata's Buyer Group Intelligence platform classifies every contact associated with a deal into one of five roles: decision_maker, champion, stakeholder, blocker, or introducer. For each person, the system computes an influence score (0-100), a role confidence score (0-100), and an enrichment level (identify, enrich, or know -- corresponding roughly to "we found them," "we have context on them," and "we understand their position and behavior").
The platform tracks committee-level metrics as well: committee size (adaptive, meaning it adjusts expected size based on deal value, company headcount, and org structure), a cohesion score measuring how well connected committee members are to each other, and a coverage quality grade (excellent, good, fair, or limited). Title normalization runs across six languages and regional conventions, so a "Directeur Commercial" in Paris and a "Chief Revenue Officer" in Austin are correctly mapped to the same functional role.
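To make these definitions concrete, here is a toy sketch of the contact and committee model in Python. It is a simplified illustration, not our production schema: the field names are approximations, and the cohesion calculation (share of member pairs with a known internal connection, i.e. simple graph density) is a stand-in for the real scoring logic.

```python
from dataclasses import dataclass, field

ROLES = {"decision_maker", "champion", "stakeholder", "blocker", "introducer"}
LEVELS = {"identify", "enrich", "know"}

@dataclass
class Contact:
    name: str
    role: str             # one of ROLES
    influence: int        # 0-100
    role_confidence: int  # 0-100
    enrichment: str       # one of LEVELS

@dataclass
class Committee:
    members: list[Contact]
    # Pairs of member names known to be connected to each other internally.
    edges: set[frozenset] = field(default_factory=set)

    def cohesion(self) -> float:
        """Share of possible member pairs that are actually connected
        (plain graph density -- a simplified reading of the metric)."""
        n = len(self.members)
        if n < 2:
            return 0.0
        possible = n * (n - 1) / 2
        return len(self.edges) / possible

committee = Committee(
    members=[
        Contact("VP RevOps", "champion", 72, 88, "know"),
        Contact("CRO", "decision_maker", 84, 91, "enrich"),
        Contact("Procurement", "stakeholder", 41, 76, "identify"),
        Contact("InfoSec lead", "blocker", 58, 69, "enrich"),
    ],
    edges={frozenset(p) for p in [("VP RevOps", "CRO"),
                                  ("VP RevOps", "Procurement"),
                                  ("CRO", "InfoSec lead")]},
)
print(round(committee.cohesion(), 2))  # 3 of 6 possible pairs -> 0.5
```

Even this toy version captures the key property: adding a member who knows nobody on the committee lowers cohesion, which matches the intuition that a larger committee is not automatically a better-connected one.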
This gives us a dataset of 4,100+ buying committees across deal sizes ranging from $30K to $2.8M, spanning technology, healthcare, financial services, and manufacturing verticals. What follows is what the data shows.
Buying Committees by Deal Size
The first thing the data makes clear is that committee size is not random. It scales with deal value in a pattern that is remarkably consistent across industries.
| Deal Size | Avg. Committee Size | Median | Range (5th-95th Percentile) |
|---|---|---|---|
| $50K | 4.2 | 4 | 3-7 |
| $150K | 7.1 | 7 | 4-11 |
| $500K | 11.3 | 11 | 7-16 |
| $1M+ | 15.8 | 15 | 10-22 |
The scaling is sublinear, though faster than logarithmic. Doubling deal size from $50K to $100K adds about two people. Doubling from $500K to $1M adds about four. The committee grows faster at higher deal values because governance layers multiply -- legal review, executive oversight, board-level awareness, cross-functional impact assessment. A $50K tool purchase can be approved by a VP with discretionary budget. A $1M platform decision touches capital expenditure processes that involve the CFO's office, IT architecture review, and often the CEO or COO.
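As a sanity check on that claim, a quick log-log fit over the four table rows -- my own back-of-envelope calculation, not a platform output -- gives a power-law exponent around 0.43: sublinear in deal value, but growing faster with each doubling than a logarithm would.

```python
import math

# (deal value in $K, average committee size) from the table above
points = [(50, 4.2), (150, 7.1), (500, 11.3), (1000, 15.8)]

# Least-squares fit of log(size) = k * log(value) + log(c),
# i.e. size ~= c * value^k
xs = [math.log(v) for v, _ in points]
ys = [math.log(s) for _, s in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - k * mx)

print(f"size ~= {c:.2f} * value^{k:.2f}")
for v, s in points:
    print(f"${v}K: observed {s}, fitted {c * v ** k:.1f}")
```

The fitted curve lands within a few tenths of a person at every tier, which is why a single calibration can work across deal sizes even though the absolute increments differ.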
Role distribution shifts meaningfully with deal size:
| Role | $50K Deals | $150K Deals | $500K Deals | $1M+ Deals |
|---|---|---|---|---|
| Decision Maker | 1.0 | 1.3 | 2.1 | 3.2 |
| Champion | 1.1 | 1.4 | 1.8 | 2.3 |
| Stakeholder | 1.4 | 2.8 | 4.7 | 6.1 |
| Blocker | 0.4 | 0.9 | 1.6 | 2.4 |
| Introducer | 0.3 | 0.7 | 1.1 | 1.8 |
Two things stand out. First, the number of decision makers grows -- from effectively one person at $50K to three or more at $1M+. This is the "consensus of authority" problem: at higher deal values, no single executive has unilateral sign-off power, and the decision must be ratified across multiple people who each hold a piece of the authority. Second, the blocker count grows proportionally. At $50K, fewer than half of deals have an identifiable blocker. At $1M+, the average deal has 2.4 -- and the 95th percentile has five or more.
Department involvement also expands with deal size:
| Department | $50K | $150K | $500K | $1M+ |
|---|---|---|---|---|
| Primary buyer function | 92% | 94% | 96% | 97% |
| IT / Security | 31% | 58% | 84% | 93% |
| Finance / Procurement | 22% | 47% | 78% | 91% |
| Adjacent functions | 14% | 33% | 61% | 79% |
| Executive / C-suite | 8% | 24% | 52% | 81% |
| Legal | 6% | 19% | 44% | 68% |
At $50K, you are selling to a department. At $500K, you are selling to an organization. At $1M+, you are navigating a political system. The muscle memory that works at one tier -- the relationships you build, the materials you create, the cadence you run -- does not transfer to the next.
The Invisible Members
The most consequential finding in our dataset is not about the people on calls. It is about the people who never appear on calls but influence the outcome anyway.
We define an "invisible member" as someone the platform identifies as part of the buying committee -- based on email engagement, internal forwarding patterns, document access logs, calendar overlap, and organizational proximity -- who never directly interacts with the selling team. No meetings attended. No emails sent. No calls joined. But demonstrably involved in the decision.
Invisible members exist in 73% of deals over $100K. The average deal with invisible members has 2.1 of them. In deals over $500K, that number rises to 3.4.
These are not peripheral figures. Their average influence score is 62 out of 100 -- higher than the average visible stakeholder (54). They are disproportionately decision makers (28% of invisible members vs. 18% of visible members) and blockers (24% vs. 14%). They are, in other words, the people who matter most and engage least.
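The detection logic can be sketched as a simple rule: meaningful indirect signals, zero direct touchpoints. The signal names and thresholds below are illustrative simplifications, not the platform's actual classifier.

```python
from dataclasses import dataclass

@dataclass
class ContactSignals:
    name: str
    # Direct interaction with the selling team
    meetings_attended: int
    emails_to_sellers: int
    calls_joined: int
    # Indirect involvement signals
    internal_forwards_received: int
    doc_portal_visits: int
    calendar_overlaps: int  # internal meetings that reference the deal

def is_invisible_member(c: ContactSignals, min_indirect_signals: int = 2) -> bool:
    """A contact is 'invisible' if they never touch the selling team
    directly but show enough distinct indirect involvement signals."""
    direct = c.meetings_attended + c.emails_to_sellers + c.calls_joined
    indirect = sum(x > 0 for x in (c.internal_forwards_received,
                                   c.doc_portal_visits,
                                   c.calendar_overlaps))
    return direct == 0 and indirect >= min_indirect_signals

# Hypothetical contact: receives forwarded evaluation docs and visits the
# vendor's documentation portal, but has never spoken to the sellers.
risk_officer = ContactSignals("Risk Officer", 0, 0, 0,
                              internal_forwards_received=2,
                              doc_portal_visits=3,
                              calendar_overlaps=0)
print(is_invisible_member(risk_officer))  # True
```

Requiring multiple distinct indirect signals matters: a single forwarded email could be noise, but forwards plus document access is hard to explain unless the person is actually weighing in on the decision.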
Where do they sit? The distribution is consistent:
| Function | % of Invisible Members |
|---|---|
| IT Security / InfoSec | 27% |
| Finance / FP&A | 22% |
| Executive leadership (skip-level) | 19% |
| Legal / Compliance | 16% |
| Adjacent department heads | 11% |
| Other | 5% |
A concrete example from the dataset. A $400K security platform deal at a 3,000-person financial services company. The selling team had mapped eight committee members and engaged six directly. Coverage quality: good. Cohesion score: 0.71 out of 1.0. The deal was progressing on schedule.
The platform identified three additional members the selling team had not engaged: the firm's Chief Risk Officer (influence score: 84, role: decision_maker), a VP of Compliance (influence: 67, role: blocker), and a senior IT architect (influence: 58, role: stakeholder). The CRO had been receiving forwarded evaluation documents from two committee members and had accessed the vendor's security documentation portal three times. The VP of Compliance had been cc'd on four internal threads. The IT architect had reviewed the integration spec shared by a colleague.
None of them had ever spoken to the selling team. All three influenced the outcome. In this case, the selling team used the intelligence to proactively engage the CRO and VP of Compliance before the final review stage. The deal closed 18 days ahead of forecast.
Not every story ends that well. When invisible members are not surfaced, the failure mode is almost always the same: a concern raised late, in an internal forum the selling team cannot see, by a person the selling team did not know existed. Our data shows that deals with unaddressed invisible members close at 31% of the rate of deals where all committee members are engaged. That is not a marginal difference. It is the difference between a healthy pipeline and a fiction.
Industry Variation
Buying committees are not uniform across industries. The differences are structural, not cultural, and they have direct implications for how you run a deal.
| Metric | Technology | Healthcare | Financial Services | Manufacturing |
|---|---|---|---|---|
| Avg. committee size ($500K deal) | 10.4 | 13.7 | 12.1 | 9.8 |
| Avg. departments involved | 4.2 | 5.8 | 5.1 | 3.9 |
| % deals with compliance/legal member | 41% | 89% | 82% | 34% |
| % deals with clinical/domain expert | N/A | 76% | N/A | 52% |
| Avg. cycle length (days) | 94 | 147 | 128 | 112 |
| Invisible member prevalence | 68% | 81% | 79% | 64% |
| Avg. blocker count | 1.3 | 2.4 | 2.0 | 1.1 |
Healthcare has the largest committees, the longest cycles, and the most blockers. This is not surprising if you understand the regulatory structure. Any technology purchase in a health system must clear clinical validation (does it affect patient outcomes?), HIPAA compliance (does it handle PHI?), IT architecture (does it integrate with Epic or Cerner?), and often a medical staff governance committee. Each of these gates adds stakeholders who are not optional -- they are mandated by regulatory and accreditation frameworks.
Financial services is similar but for different reasons. Here, the complexity comes from risk management culture. Every vendor decision is evaluated through an operational risk lens. The Chief Risk Officer's team -- which in many financial institutions operates independently from IT security -- has effective veto power. In our dataset, the CRO or a direct report appears as a committee member in 71% of financial services deals over $200K. In technology companies, the equivalent role (if it exists at all) appears in only 23% of deals.
Manufacturing has the smallest committees and the fewest blockers, but a different challenge: the committee is geographically dispersed. In 44% of manufacturing deals over $300K, committee members span three or more physical locations. Cohesion scores in manufacturing are the lowest of any vertical (average 0.58 vs. 0.72 for technology), meaning committee members are less well connected to each other. This creates a fragmented decision process where individual approvals happen in silos and the "cascade of endorsement" that closes deals struggles to form.
Technology companies, despite having the smallest average committees, have a different structural risk: speed. The average cycle is 94 days at $500K, which means you have less time to find and engage all committee members. In technology, the invisible member problem is not that committees are big -- it is that the window to address gaps is compressed.
Committee Structure and Deal Outcomes
The question revenue leaders care most about is whether committee structure predicts outcomes. It does. The correlations are strong enough to be operationally useful.
We segmented our dataset into won and lost deals (excluding "no decision," which we analyzed separately) and compared committee characteristics:
| Metric | Won Deals | Lost Deals | No Decision |
|---|---|---|---|
| Avg. committee members engaged | 8.4 | 4.9 | 5.6 |
| Coverage quality: excellent or good | 74% | 29% | 33% |
| Cohesion score (avg.) | 0.74 | 0.51 | 0.48 |
| Champion influence score (avg.) | 72 | 58 | 61 |
| Decision makers directly engaged | 2.1 | 0.8 | 0.9 |
| Invisible members addressed | 81% | 22% | 19% |
| Departments represented | 4.6 | 2.8 | 3.1 |
| Multi-threaded (3+ contacts) | 89% | 41% | 44% |
Several patterns are worth calling out.
Cohesion is a leading indicator. The cohesion score -- measuring how interconnected committee members are -- is the single strongest predictor of deal outcome in our dataset. Won deals average 0.74; lost deals average 0.51. When committee members talk to each other, endorsements cascade. When they don't, each stakeholder evaluates in isolation, and isolation breeds doubt.
Champion strength is necessary but not sufficient. Won deals have higher champion influence scores (72 vs. 58), but the gap between lost and "no decision" deals is narrow (58 vs. 61). A strong champion who cannot mobilize the broader committee is an advocate shouting into a void. What separates won deals is not champion strength alone -- it is champion strength combined with broad engagement (8.4 members engaged vs. 4.9).
Coverage quality at "excellent" or "good" correlates with 2.5x the win rate. Deals rated excellent or good on coverage close at 41%, vs. 16% for fair or limited. This is the single most actionable metric in the system: if coverage quality is below "good" at the midpoint of a deal, the probability of closure drops to levels that should trigger a hard conversation in deal review.
"No decision" looks more like a loss than a win. Across every metric, "no decision" deals resemble lost deals far more than won deals. The committee is undermapped, the cohesion is low, invisible members are unaddressed, and the selling team is single-threaded. "No decision" is not a timing problem. It is a committee structure problem.
The Adaptive Sizing Model
One of the most common mistakes in buyer group mapping is applying a fixed template. I have seen enablement teams distribute frameworks that say "every enterprise deal has 8-12 stakeholders" or "always map the economic buyer, the champion, the technical evaluator, and the coach." These frameworks are not wrong, exactly. They are just not specific enough to be useful.
Our platform uses an adaptive sizing model that estimates expected committee size based on three variables: deal value, company headcount, and industry vertical. The model then adjusts based on observed signals -- the number of distinct people engaging with content, the number of email domains involved, the organizational depth visible in engagement patterns.
The reason this matters is that a fixed model creates false confidence. If your framework says "8-12 stakeholders" and your rep maps eight, they feel complete. But if the deal is a $900K platform sale into a 5,000-person healthcare system, eight is probably half the actual committee. The expected range for that deal profile is 12-18. Eight is not "good coverage." It is a gap that will likely surface as a late-stage blocker or a stalled consensus process.
The adaptive model produces a coverage quality assessment based on the ratio of engaged committee members to expected committee size:
| Coverage Quality | Engaged / Expected Ratio | Win Rate |
|---|---|---|
| Excellent | > 85% | 47% |
| Good | 65-85% | 36% |
| Fair | 45-64% | 19% |
| Limited | < 45% | 8% |
The win rate difference between excellent and limited coverage is nearly 6x. This is not a subtle effect. It is the difference between a pipeline that converts and a pipeline that evaporates.
The practical implication: committee size expectations should be calibrated to each deal, not set by generic templates. A $150K deal at a 200-person startup may genuinely have four people involved. The same-sized deal at a 10,000-person enterprise may have twelve. A rep who maps four in both cases has done complete work in the first and incomplete work in the second. Without an adaptive model, they cannot tell the difference.
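The grading thresholds translate directly into a small lookup. A minimal sketch -- the function name and the example figures are illustrative, while the cut-offs come from the table above:

```python
def coverage_quality(engaged: int, expected: int) -> str:
    """Map the engaged/expected committee ratio to a coverage grade,
    using the threshold bands from the table above."""
    ratio = engaged / expected
    if ratio > 0.85:
        return "excellent"
    if ratio >= 0.65:
        return "good"
    if ratio >= 0.45:
        return "fair"
    return "limited"

# The $900K healthcare example: eight people mapped against an expected
# range of 12-18. Against the midpoint, that is only "fair" coverage.
print(coverage_quality(8, 15))  # fair
# Against the top of the expected range, it drops to "limited".
print(coverage_quality(8, 18))  # limited
```

The hard part, of course, is the denominator: the grade is only as good as the expected-size estimate, which is exactly why a fixed "8-12 stakeholders" template produces false confidence.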
How Committees Are Changing: 2025-2026
Two shifts are worth noting because they affect how every B2B organization should think about committee dynamics going forward.
First, AI adoption is adding new stakeholders. In our 2025-2026 deal data, 34% of technology purchases now include a stakeholder whose explicit remit is AI governance, AI ethics, or AI risk -- a role that effectively did not exist in our 2023 data. These stakeholders most often appear as blockers (41% of the time), and 68% of them carry influence scores above 60. They are senior, they are skeptical by mandate, and they are almost always invisible to sellers who are not looking for them.
In one illustrative case, a $600K deal for an analytics platform at a 4,000-person technology company was tracking well through month two. All mapped stakeholders were engaged. Coverage quality was good. Then the deal stalled for three weeks. The cause: the company had formed an AI Review Board six months earlier, staffed by the VP of Engineering, a data ethics researcher hired from academia, and the General Counsel. The board had the authority to approve or reject any purchase involving machine learning or automated decision-making. The selling team did not know the board existed. Their champion did not mention it because, from her perspective, it was "just an internal process." The deal ultimately closed after a four-week delay, but only because the selling team scrambled to prepare an AI risk assessment that addressed the board's concerns.
This pattern will become more common, not less. Organizations are building governance structures around AI faster than selling teams are adapting to them. If you are selling any product that touches AI, ML, or automated decision-making -- which, in 2026, is a rapidly growing share of enterprise software -- an AI governance stakeholder should be on your default committee map.
Second, buying committees are becoming more asynchronous. The percentage of committee members who engage exclusively through asynchronous channels -- document review, email, recorded video, shared workspaces -- has increased from 18% in 2023 to 31% in 2025. These members never join a live call. They never attend a demo. They form their opinion based on materials shared by colleagues, and they communicate that opinion through internal channels the selling team cannot observe.
This is the invisible member problem compounding. Not only are there stakeholders you do not know about, but an increasing share of the stakeholders you do know about are forming opinions in ways you cannot see or influence directly. The cohesion score of committees with high asynchronous engagement is 12% lower on average, and the win rate is 18% lower. Asynchronous committees are harder to align because the cascade of endorsement -- which depends on real-time interaction, shared context, and visible momentum -- is weaker.
The implication is that enablement materials matter more than ever. If a VP of Compliance is going to evaluate your product by reading a forwarded document rather than attending a call, that document is your pitch. It had better be excellent, function-specific, and structured to address the concerns of someone who was not in the room when context was shared.
What This Means
The buying committee is not a concept. It is a system -- a dynamic, adaptive, partially visible network of people whose individual incentives, organizational positions, and interpersonal relationships determine whether your deal closes. Understanding that system is not optional at scale. It is the single highest-leverage capability a revenue organization can build.
Three implications stand out.
The first is measurement. Most revenue organizations track pipeline by dollar value and stage. They should be tracking pipeline by committee completeness, cohesion, and coverage quality. A $500K deal at Stage 3 with two contacts engaged is not a $500K deal. It is a fiction with a dollar sign attached. The data is unambiguous: coverage quality is a better predictor of deal outcome than stage progression, champion sentiment, or rep confidence.
The second is adaptive specificity. The buying committee for a $150K deal at a 500-person technology company is a fundamentally different organism than the buying committee for a $1M deal at a 10,000-person healthcare system. Templates that treat them the same produce false confidence and missed gaps. Committee mapping must be calibrated to the specific deal, the specific company, and the specific industry -- and it must update continuously as the deal evolves.
The third is the invisible majority. In deals over $100K, the people you cannot see are more influential, on average, than the people you can. Invisible members have higher influence scores, are more likely to be decision makers or blockers, and are present in nearly three-quarters of deals. A sales process that only manages visible stakeholders is managing the minority of influence. The majority is elsewhere, forming opinions in forwarded emails and internal meetings, and the only way to reach it is to know it exists.
The era of selling to individuals ended a decade ago. The era of selling to accounts is ending now. What comes next is selling to committees -- and the teams that see the committee clearly will be the ones that win.
Analysis based on 4,100+ buying committees tracked through Adrata's Buyer Group Intelligence platform, 2023-2026.
