GTM Agents for CROs
Ross Sylvester, Co-Founder & CEO, Adrata | Feb 2026 | ~12 min read
Every CRO I talk to is getting the same pitch from twelve different vendors: "Our AI agent will replace your SDR team." The demos are spectacular. The results, mostly, are not.
This is the most important technology shift in go-to-market since Salesforce moved CRM to the cloud. It is also the most overhyped. The gap between those two truths is where CROs will either build durable competitive advantage or waste eighteen months and seven figures chasing demos.
This article is an attempt to separate signal from noise -- with data, frameworks, and the uncomfortable specifics that vendor decks leave out.
The Market Is Real. The Hype Is Also Real.
Let's start with what the data actually says.
Gartner predicts that 40% of enterprise applications will feature task-specific AI agents by 2026, up from less than 5% in 2025.1 That is an 8x increase in a single year. By 2028, they project 33% of enterprise software will include agentic AI, up from less than 1% in 2024 -- a 33x increase in four years.2
The AI SDR market alone is projected to grow from $4.1 billion in 2025 to $15 billion by 2030, a 29.5% CAGR.3 Salesforce's Agentforce product hit 18,500 enterprise customers and crossed $540 million in ARR by Q3 FY2026, growing 330% year over year.4 These are not experimental numbers. This is a market forming in real time.
But here is the counterweight every CRO needs tattooed on their forearm: Gartner also predicts that over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.5 Of the thousands of vendors selling "agentic AI," Gartner estimates only about 130 are real. The rest are engaged in what analysts politely call "agent washing" -- rebranding existing chatbots, RPA tools, and automation scripts with agentic language.6
BCG's September 2025 research is even more sobering: 60% of organizations generate no material value from AI despite significant investment. Only 5% create substantial value at scale.7 McKinsey reports that just 39% of organizations see impact on EBIT from AI deployments.8
The technology is transformative. The execution success rate is abysmal. That gap is the CRO's problem to solve.
The Five Agent Types That Matter
Not all GTM agents are created equal. There are five categories that a CRO should understand, each at a different stage of maturity and each carrying a different risk profile.
1. Research and Enrichment Agents
Maturity: High. Deploy with confidence.
These agents gather, synthesize, and structure information about accounts, contacts, market signals, and buying intent. They pull from dozens of data sources, cross-reference signals, and produce account briefs that would take a human researcher hours to compile.
Clay is the defining example. Founded in 2017, Clay spent six years building product before hitting inflection. It went from $1M to $100M ARR in roughly two years. Its August 2025 Series C valued the company at $3.1 billion; by January 2026, secondary tender offers pushed that to approximately $5 billion.9 Over 10,000 customers including OpenAI, Anthropic, Canva, and Rippling use it in production daily.10
Clay works because it does not try to replace humans. It replaces the manual, repetitive work that humans do poorly -- stitching together data from LinkedIn, 10-K filings, job postings, technographic databases, and news feeds. The agent augments the seller's judgment rather than substituting for it.
Why research agents succeed: The failure modes are low-consequence. A slightly inaccurate account brief gets corrected by a human before it matters. The output is consumed by a person who applies judgment. The agent handles volume and speed; the human handles nuance and strategy.
2. SDR and Outbound Agents
Maturity: Mixed. Highest hype-to-reality gap.
These agents autonomously send outbound emails, handle initial prospect responses, and attempt to book meetings. This is where the most money has been spent, the most promises have been made, and the most damage has been done.
The 11x debacle is the cautionary tale every CRO should study. Backed by a16z and Benchmark at a $350M valuation, 11x claimed $14M ARR. The actual figure was closer to $3M. ZoomInfo and Airtable demanded their logos be removed from the website. Former employees reported 70-80% customer churn within three months.11 One customer used the product for six months and had zero meetings to show for it.12
The most consistent complaint across every AI SDR product is the same: the outreach doesn't feel personal enough. Despite providing detailed ICPs, persona descriptions, and value propositions, the output reads like what it is -- AI-generated email at scale. And buyers have developed antibodies. When every inbox is flooded with AI-written sequences, the bar for what earns a response goes up, not down.
But there are real results emerging from companies that use AI SDRs as part of a system rather than as a replacement for humans. Demandbase reported a 2x increase in pipeline and 3x more meetings compared to human SDRs using the AI agent Piper, saving $80,000 in staff costs while booking 37% more meetings with tier-1 accounts.13 The difference: Piper handles inbound chat -- a constrained, high-intent environment where the buyer has already raised their hand.
The pattern: AI SDR agents work best in high-intent, constrained environments (inbound chat, warm lead follow-up, re-engagement of closed-lost). They fail most often in cold outbound at scale, where the quality of the message matters more than the speed of sending it.
3. Forecasting and Revenue Intelligence Agents
Maturity: High. Measurably better than the alternative.
These agents analyze deal signals, pipeline health, historical patterns, and conversation data to predict revenue outcomes. They are the most immediately valuable agent type for CROs because the incumbent process -- spreadsheets, gut feel, and Monday morning pipeline reviews -- is so demonstrably broken.
Gong's 2025 data tells the story: sales rep quota attainment fell from 52% to 46% year over year, while the share of U.S. companies using AI for forecasting jumped 50%.14 Companies adopting AI forecasting aren't doing it because they're optimistic about technology. They're doing it because their existing process is failing.
Clari claims 98% forecast accuracy by week two of the quarter using AI models trained on historical deal data.15 Gong leverages 300+ unique signals to predict deal outcomes with 20% more precision than algorithms based on CRM data alone.16 Companies using AI "second opinions" on forecasting see 10-15% better accuracy because the prediction is evidence-based rather than sentiment-based.
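The "second opinion" effect is easy to quantify once you define accuracy consistently. A minimal sketch, with entirely hypothetical dollar figures (not data from Clari, Gong, or any vendor), using absolute percentage error as the accuracy measure:

```python
# Illustrative sketch: comparing forecast accuracy with and without an
# AI "second opinion." All figures here are hypothetical examples.

def forecast_accuracy(forecast: float, actual: float) -> float:
    """Accuracy as 1 minus absolute percentage error, floored at zero."""
    return max(0.0, 1.0 - abs(actual - forecast) / actual)

# Hypothetical quarter: rep-committed number vs. a model-informed number.
actual_bookings = 10_000_000
rep_commit = 11_500_000        # sentiment-based: optimistic stage calls
model_informed = 10_400_000    # evidence-based: signal-weighted pipeline

acc_commit = forecast_accuracy(rep_commit, actual_bookings)     # 0.85
acc_model = forecast_accuracy(model_informed, actual_bookings)  # 0.96
```

Tracking one number like this per quarter, per method, is how you know within 90 days whether the agent's forecast beat the room's.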
Why forecasting agents work: They solve the CRO's most existential problem -- standing in front of the board with a number they believe. They don't replace human judgment; they inform it with data the human cannot manually process. And the feedback loop is tight -- you know within 90 days whether the forecast was right.
4. Coaching and Enablement Agents
Maturity: Medium. Results are real but adoption is uneven.
These agents analyze sales conversations, provide real-time guidance, score interactions, and surface coachable moments for managers. They represent the shift from post-mortem call reviews to continuous, data-driven coaching at scale.
Gong's data shows organizations implementing AI coaching report a 15-20% increase in deal velocity, a 12-18% improvement in win rates, and 25-30% reduction in forecast variance.17 Companies using AI task prioritization have reported 60% increases in rep capacity as administrative tasks disappear, with revenue per rep jumping 30%.18
Gong's landmark 2025 study found that sales teams using AI generate 77% more revenue per rep.19 That is not a marginal improvement. That is a structural advantage.
The challenge is adoption. Sales reps resist tools that feel like surveillance. The companies that get results deploy coaching agents as tools that serve the rep, not tools that monitor the rep. The distinction is subtle in product design and enormous in adoption rates.
5. Orchestration and Workflow Agents
Maturity: Emerging. The highest ceiling and the most complexity.
These are meta-agents -- agents that coordinate other agents, route work between AI and humans, and manage multi-step GTM workflows end-to-end. Salesforce's Agentforce 360, released in October 2025, is the enterprise bet on this category.20
This is also where the "GTM Engineer" role has emerged -- a hybrid of software engineer, RevOps architect, and GTM strategist who builds and manages fleets of interoperable AI agents. LinkedIn showed over 1,400 GTM Engineer postings in mid-2025, growing to 3,000+ by January 2026, with salaries ranging well into six figures.21 Clay takes credit for catalyzing this role, and the data supports it.
Orchestration agents represent the future but carry the highest implementation risk. They require clean data, well-defined processes, and organizational maturity that most revenue teams do not yet have.
What CROs Should Actually Do
The framework I recommend to CROs is built on a simple principle: deploy where the failure cost is low and the feedback loop is fast. Then expand.
Step 1: Start With Research and Forecasting (Months 1-3)
These two agent categories have the highest maturity and lowest deployment risk. Research agents (Clay, or similar) improve pipeline quality immediately. Forecasting agents (Clari, Gong Forecast) give you better visibility within one quarter.
Neither requires organizational change. Neither threatens existing team dynamics. Both produce measurable results within 90 days.
Metrics to track: Time-to-account-brief (should drop 80%+), forecast accuracy vs. prior quarters, pipeline coverage ratio improvement.
Step 2: Add Coaching Agents to Your Highest-Performing Team (Months 3-6)
Deploy coaching agents with your best team first, not your worst. Your top performers will adopt faster, generate the proof points, and create internal pull from other teams. Deploying with struggling reps first is the most common mistake -- it conflates a coaching tool with a remediation tool and breeds resentment.
Metrics to track: Win rate delta (coached vs. uncoached deals), deal velocity, rep NPS on the tool itself.
Step 3: Test SDR Agents in Constrained Environments (Months 4-8)
Deploy AI SDR agents for inbound lead response, closed-lost re-engagement, and event follow-up -- environments where intent is established and the cost of a mediocre message is low. Do not deploy them for cold outbound to your tier-1 accounts. The downside risk of a bad AI-generated email to your most important prospects is not worth the efficiency gain.
Metrics to track: Speed-to-lead (should be near-instant), meeting conversion rate vs. human SDR baseline, negative reply rate (the most undertracked metric in AI SDR deployments).
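The three Step 3 metrics fall out of a simple event log. A sketch on hypothetical data -- the field names (`ts_created`, `ts_first_touch`, `disposition`) are my own convention, not any vendor's schema:

```python
# Sketch of the Step 3 metrics on a hypothetical lead event log.
from datetime import datetime

leads = [
    {"ts_created": datetime(2026, 1, 5, 9, 0),
     "ts_first_touch": datetime(2026, 1, 5, 9, 0, 40),
     "disposition": "meeting_booked"},
    {"ts_created": datetime(2026, 1, 5, 10, 0),
     "ts_first_touch": datetime(2026, 1, 5, 10, 1, 10),
     "disposition": "negative_reply"},
    {"ts_created": datetime(2026, 1, 6, 14, 0),
     "ts_first_touch": datetime(2026, 1, 6, 14, 0, 55),
     "disposition": "no_reply"},
]

# Speed-to-lead: median seconds from lead creation to first touch.
speeds = sorted((l["ts_first_touch"] - l["ts_created"]).total_seconds()
                for l in leads)
median_speed_to_lead = speeds[len(speeds) // 2]

# Negative reply rate -- the undertracked one -- and meeting conversion.
negative_reply_rate = (sum(l["disposition"] == "negative_reply" for l in leads)
                       / len(leads))
meeting_rate = (sum(l["disposition"] == "meeting_booked" for l in leads)
                / len(leads))
```

If the negative reply rate climbs while meetings hold flat, the agent is burning addressable market to hit its meeting number -- exactly the failure the aggregate dashboard hides.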
Step 4: Build the Orchestration Layer (Months 6-12)
Once you have individual agents producing measurable results, begin connecting them. Your research agent feeds your SDR agent. Your forecasting agent informs your coaching agent. Your coaching agent's insights feed back into your research agent's account prioritization.
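The heart of the orchestration layer is the routing logic that decides which work goes to an agent and which stays with a human. A minimal sketch under the constraints argued earlier in this article -- the tiers, source names, and rules are illustrative assumptions, not a reference design:

```python
# Minimal routing sketch: send each lead to a human seller or an AI SDR
# based on account tier and intent source. Thresholds are illustrative.

def route(lead: dict) -> str:
    """Return which 'agent' should own the next touch on this lead."""
    if lead["tier"] == 1:
        return "human"      # never risk a mediocre AI email on tier-1
    if lead["source"] in {"inbound_chat", "closed_lost", "event"}:
        return "ai_sdr"     # constrained, high-intent environments
    return "human"          # cold outbound stays with people
```

Calling `route({"tier": 2, "source": "closed_lost"})` yields `"ai_sdr"`; a tier-1 inbound chat still routes to a human. The value is not the three lines of logic -- it is that the rules are explicit, versioned, and owned by your team rather than buried in a vendor's defaults.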
This is where you either hire a GTM Engineer or develop the capability internally. Do not outsource this. Gartner's prediction that 40% of agentic AI projects will be canceled is heavily weighted toward companies that treated AI deployment as a vendor problem rather than an operational capability.
Build vs. Buy: The Honest Answer
The build-vs-buy question in GTM agents is a false binary. The real answer is: buy the foundation, build the differentiation.
Buy: Data enrichment engines (Clay), conversation intelligence (Gong), forecasting platforms (Clari), CRM-native agents (Salesforce Agentforce). These are horizontal capabilities where the vendor's data network effects and R&D investment will always exceed what you can build internally.
Build: The orchestration logic, the prompt engineering, the workflow design, the integration layer that connects these tools to your specific GTM motion. This is where competitive advantage lives. Two companies can use the same Clay instance and get radically different results depending on how they structure their enrichment waterfall, their scoring logic, and their routing rules.
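An enrichment waterfall is conceptually simple: try providers in priority order and stop at the first hit. A sketch with stand-in lookup tables in place of real providers -- the source names and fetchers are placeholders, not a Clay API:

```python
# Sketch of an enrichment waterfall: query sources in priority order,
# return the first non-empty answer. Providers here are local dicts
# standing in for real data vendors.

def make_source(data: dict):
    """Build a lookup function over a local dict (provider stand-in)."""
    return lambda domain: data.get(domain)

waterfall = [
    ("provider_a", make_source({"acme.com": "ACME Corp"})),
    ("provider_b", make_source({"globex.com": "Globex"})),
]

def enrich(domain: str):
    """Return (source_name, value) from the first source with coverage."""
    for name, lookup in waterfall:
        value = lookup(domain)
        if value is not None:
            return name, value
    return None, None
```

The differentiation lives in the ordering: two teams with identical providers get different match rates and different costs depending on which source they try first and when they stop.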
The companies getting the best results treat AI agents like a new hire: you don't build the person, but you absolutely build the onboarding, the playbook, the feedback loops, and the performance management system.
Forrester reports that organizations achieved 210% ROI over a three-year period on AI agent investments, with payback periods under six months.22 But that is the mean, not the median. The distribution is bimodal -- companies that operationalize agents well see extraordinary returns, and companies that don't see essentially nothing.
The Companies Getting It Right vs. Wrong
Getting it right: Clay. Six years of product development before hypergrowth. A tool that augments human sellers rather than replacing them. An ecosystem of 100+ agencies and a new job category. Revenue tripling year over year because the product works, not because the contracts are structured to obscure churn.23
Getting it right: Gong. Named a Leader in Gartner's 2025 Magic Quadrant for Revenue Action Orchestration.24 Rather than promising to replace sellers, Gong made existing sellers 77% more productive. The coaching and forecasting agents work because they serve the human workflow rather than circumventing it.
Getting it wrong: 11x. The demo-to-production gap was a chasm. The metrics were engineered to impress investors rather than reflect customer value. The product required manual human correction that defeated the purpose of buying it. Cautionary tale for every CRO evaluating an AI SDR: ask for cohort retention data, not contracted ARR.25
Getting it wrong: The "agent washing" vendors. Gartner estimates that of thousands of agentic AI vendors, only ~130 are legitimate.26 The rest have rebranded existing automation as "agents." The CRO's filter: Does this product make autonomous decisions and learn from outcomes? Or does it execute pre-programmed rules and call itself an agent?
A Framework for Evaluating GTM Agents
Before you sign another vendor contract, score every GTM agent on these six dimensions:
| Dimension | Question | Red Flag |
|---|---|---|
| Autonomy Level | What decisions does the agent make without human approval? | Vendor can't clearly define the human-AI boundary |
| Feedback Loop | How does the agent learn from outcomes? | "It uses the latest GPT model" (no custom learning) |
| Failure Mode | What happens when the agent is wrong? | Wrong answer goes directly to a prospect or customer |
| Data Dependency | What data does it need, and do you have it? | Requires CRM hygiene you don't have |
| Integration Depth | How does it connect to your existing stack? | Standalone product with CSV export |
| Measurement Clarity | Can you A/B test agent vs. no-agent? | "ROI is hard to isolate" |
If a vendor cannot give you clear answers on all six, they are selling you a demo, not a product.
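The six dimensions above can be run as a literal scoring rubric in a spreadsheet or a few lines of code. A sketch using my own illustrative convention -- 0-2 points per dimension, with any red flag acting as a veto; the thresholds are assumptions, not a standard:

```python
# Scoring sketch for the six-dimension vendor framework. The 0-2 scale,
# the 9-point bar, and the red-flag veto are illustrative conventions.

DIMENSIONS = ["autonomy", "feedback_loop", "failure_mode",
              "data_dependency", "integration", "measurement"]

def evaluate(scores: dict, red_flags: set) -> str:
    """Score 0-2 per dimension; any missing answer or red flag is a veto."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing or red_flags:
        return "demo, not a product"
    total = sum(scores[d] for d in DIMENSIONS)  # max 12
    return "pilot-worthy" if total >= 9 else "keep looking"

vendor = {d: 2 for d in DIMENSIONS}
verdict = evaluate(vendor, red_flags=set())     # "pilot-worthy"
```

The veto matters more than the sum: a vendor that scores well on five dimensions but routes wrong answers directly to prospects still fails the evaluation.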
The Uncomfortable Truth
Here is what the vendor decks won't tell you: the bottleneck in GTM is not automation. It is judgment.
The reason most AI SDRs produce mediocre results is not that the language models are bad. It is that great outbound requires understanding why a specific person at a specific company would care about your product at this specific moment. That is a judgment problem, not a generation problem.
The reason most AI forecasting tools require months to calibrate is not that the algorithms are weak. It is that most organizations have CRM data so polluted with optimistic stage progressions and stale opportunities that no model can extract reliable signal. That is a data discipline problem, not a technology problem.
The reason Gartner predicts 40% of projects will be scrapped is not that agentic AI doesn't work. It is that most organizations lack the operational maturity -- clean data, defined processes, clear ownership, realistic expectations -- to deploy it successfully.27
AI agents will not fix a broken go-to-market motion. They will accelerate whatever motion you already have. If your ICP is wrong, AI will help you reach the wrong people faster. If your messaging doesn't resonate, AI will send messages that don't resonate at higher volume. If your pipeline inspection is theater, AI will generate more sophisticated theater.
The CROs who win in 2026 and 2027 will not be the ones who deployed the most agents. They will be the ones who understood that agents are an amplifier, not a strategy -- and who invested in getting the underlying motion right before turning up the volume.
What Comes Next
Three predictions for the next eighteen months:
1. The GTM Engineer becomes a mandatory hire. By end of 2026, every revenue organization above $50M ARR will have at least one dedicated GTM Engineer managing their agent ecosystem. CROs who wait will find themselves managing fragmented point solutions instead of a coherent system.
2. Agent-to-agent selling becomes real. Forrester predicts that 20% of B2B sellers will be forced to engage in agent-led quote negotiations in 2026.28 When your buyer's AI agent is negotiating with your seller's AI agent, the quality of your data, your pricing logic, and your competitive positioning becomes the product. The human selling layer becomes strategic, not transactional.
3. The vendor landscape consolidates violently. With only ~130 legitimate agentic AI vendors out of thousands, and 40% of projects facing cancellation, the AI GTM vendor market will compress dramatically. The survivors will be platforms (Clay, Gong, Salesforce) and deeply vertical solutions. Everything in between gets acquired or goes to zero.
The window for CROs to build institutional knowledge in agent deployment is open now. It will not stay open long. The organizations that treat this as a capability to develop -- not a product to purchase -- will compound that advantage for years.
Start with research and forecasting agents. Measure relentlessly. Expand deliberately. And never, ever confuse a good demo with a good product.
Footnotes
- Gartner, "Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026," August 2025.
- Gartner, "Strategic Predictions for 2026: How AI's Underestimated Influence Is Reshaping Business."
- MarketsandMarkets, "AI SDR Market Size, Share and Global Forecast to 2030."
- Futurum Group, "Salesforce Q3 FY 2026: AI Agents, Data 360 Lift Bookings and FY26 Outlook."
- Gartner, "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027," June 2025.
- Ibid.
- Crunchbase, "AI-Powered Sales Automation Startup Clay More Than Doubles Valuation To $3.1B."
- BusinessWire, "AI GTM Leader Clay Raises $100M Series C to Fuel GTM Engineering Roles Industrywide," August 2025.
- TechCrunch, "a16z- and Benchmark-backed 11x has been claiming customers it doesn't have," March 2025.
- Enginy, "11x Reviews 2026: Is this AI sales tool really worth it?"
- Landbase, "AI SDR Dream Teams: Multi-Agent Strategies for 7x ROI (2026)."
- VentureBeat, "Gong study: Sales teams using AI generate 77% more revenue per rep."
- Clari, "AI Sales Forecasting & Revenue Insights Solution."
- Gong, "Achieve Accurate Sales Forecasting with Gong's AI Software."
- Oliv AI, "Gong vs Clari: Real User Reviews Reveal Which Delivers Better ROI in 2025."
- Gong, "AI Sales Task Prioritization 2025: Boost Rep Productivity."
- VentureBeat, "Gong study: Sales teams using AI generate 77% more revenue per rep."
- Salesforce, "Welcome to the Agentic Enterprise: With Agentforce 360," October 2025.
- ScaleVP, "The rise of AI operations and the GTM Engineer."
- Forrester, "B2B Marketing Predictions for 2026," via Demand Gen Report.
- OpenAI, "Achieving 10x growth with agentic sales prospecting" (Clay case study).
- VentureBeat, "Gong study: Sales teams using AI generate 77% more revenue per rep."
- Gartner, "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027."
- Ibid.