Token Economics
Ross Sylvester, Co-Founder & CEO, Adrata | Feb 2026 | ~12 min read
Sometime in the last eighteen months, a new line item appeared in the cost structure of every revenue organization that uses AI. Nobody named it. Nobody budgets for it. Most finance teams have no idea it exists.
It is the token.
Every time an AI agent drafts an email, it consumes tokens. Every time your platform analyzes a buyer group, classifies a stakeholder, or generates a deal coaching recommendation, it consumes tokens. Every time a rep asks an AI assistant to research a prospect, summarize a call transcript, or prepare a meeting brief -- tokens.
Tokens are the atomic unit of AI computation. They are to AI-powered revenue what API calls were to SaaS, what compute hours were to cloud infrastructure, what minutes were to telecom. And just like those predecessors, tokens are about to reshape the unit economics of selling.
Most CROs have not looked at this number. They should. Because tokens are quietly becoming the second-largest variable cost in AI-augmented sales, behind compensation. And unlike compensation, nobody is managing them.
What a Token Actually Is
A token is a fragment of text -- roughly three-quarters of a word, or about four characters. The sentence "Can you summarize this deal?" is approximately 7 tokens. A 500-word email is about 670 tokens. A 45-minute call transcript is 8,000 to 12,000 tokens.
When an AI model processes information, it reads tokens (input) and generates tokens (output). Both have a cost. The costs differ by model, by provider, and by capability tier.
Here is the current landscape as of early 2026:
| Model Tier | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) | Use Case |
|---|---|---|---|
| Frontier (Claude Opus, GPT-4.5) | $15 | $75 | Complex reasoning, strategy, analysis |
| Standard (Claude Sonnet, GPT-4o) | $3 | $15 | Email drafting, summarization, coaching |
| Fast (Claude Haiku, GPT-4o mini) | $0.25 | $1.25 | Classification, routing, simple tasks |
| Reasoning (o3, extended thinking) | $10-15 + thinking tokens | $40-60 | Multi-step analysis, planning |
The prices are falling. They have dropped roughly 10x in two years and will likely drop another 5-10x in the next two. But volume is growing faster than prices are falling. The number of tokens consumed per deal is increasing as organizations deploy AI across more stages of the revenue process.
This creates a dynamic that every CRO should understand: the marginal cost per token is declining, but the total token cost per deal is rising.
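The per-request arithmetic behind these numbers is simple. A minimal sketch, using the illustrative tier prices from the table above (not live provider pricing):

```python
# Illustrative per-1M-token prices from the tier table above (not live pricing).
PRICING = {
    "frontier": {"input": 15.00, "output": 75.00},
    "standard": {"input": 3.00, "output": 15.00},
    "fast":     {"input": 0.25, "output": 1.25},
}

def request_cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single AI request at the given model tier."""
    p = PRICING[tier]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A typical deal-coaching request: 15,000 tokens of context in, 2,000 out.
print(f"${request_cost('standard', 15_000, 2_000):.4f}")  # → $0.0750
```

Run the same request at the frontier tier and the cost is $0.375, a 5x premium for the same token counts.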
The Four Token Categories
Not all tokens are equal. Understanding the categories changes how you think about cost management.
1. Input Tokens (Context)
Input tokens are what you feed the AI: CRM data, email threads, call transcripts, company intelligence, buyer profiles, deal history. The AI reads this context before it generates anything.
Input tokens are the largest category by volume and the cheapest per unit. A typical deal coaching request might send 15,000 input tokens (the deal context, stakeholder profiles, engagement history, and similar deal patterns) and receive 2,000 output tokens (the coaching recommendation).
The strategic question is how much context to provide. More context generally produces better output, but the relationship is not linear. There is a point of diminishing returns. The best AI implementations are precise about what context they include -- not "send everything" but "send the signals that matter for this specific decision."
2. Output Tokens (Generation)
Output tokens are what the AI produces: the drafted email, the deal coaching recommendation, the stakeholder analysis, the meeting brief. These cost 3-5x more than input tokens because generating tokens one at a time is more computationally expensive than reading them in parallel.
Output tokens are where the value lives. A well-crafted 500-token coaching recommendation that identifies a missing stakeholder in a $400K deal is worth orders of magnitude more than its $0.008 cost. But a 2,000-token email that gets ignored costs the same whether it works or not.
The unit economics favor precision. Short, targeted outputs that drive action are dramatically more cost-effective than verbose outputs that get skimmed.
3. Reasoning Tokens (Thinking)
This is the newest category and the least understood. Models like OpenAI's o3 and Anthropic's extended thinking generate internal reasoning tokens -- chains of thought that the model produces to work through complex problems before generating a visible output.
Reasoning tokens can multiply costs by 3-10x. A standard deal analysis might cost $0.05 with a frontier model. The same analysis with extended reasoning might cost $0.30 as the model works through multiple hypotheses, considers counterarguments, and evaluates confidence levels.
When is the premium worth it? For high-stakes, complex decisions: which of these five deals should the VP of Sales engage on? What is the optimal entry strategy for a new enterprise account? Should we restructure the buying committee approach given this stakeholder departure? These are not classification problems. They are judgment problems. And judgment is where reasoning tokens earn their cost.
4. Cached Tokens (Memory)
Most AI providers now offer context caching -- the ability to store and reuse common context across multiple requests at a 50-90% discount on input token costs. This matters enormously for revenue applications.
Consider a deal that generates 50 AI interactions over its lifecycle: initial research, buyer group analysis, email drafts, meeting preps, coaching recommendations, forecast updates. Without caching, each interaction re-reads the full deal context. With caching, the stable context (company profile, stakeholder map, deal history) is read once and referenced cheaply thereafter.
Organizations that implement caching well reduce their input token costs by 40-70%. Most have not implemented it at all.
Tokens in the P&L
Here is a model for what tokens actually cost in a revenue organization. These are illustrative numbers drawn from real usage patterns.
Per-Rep Token Cost
| Activity | Frequency | Tokens per Instance | Monthly Token Cost |
|---|---|---|---|
| Email personalization | 40/week | 3,000 (in+out) | $7.20 |
| Meeting prep briefs | 8/week | 8,000 | $4.80 |
| Call transcript analysis | 6/week | 15,000 | $5.40 |
| Deal coaching recommendations | 5/week | 12,000 | $3.60 |
| Buyer group analysis | 3/week | 20,000 | $3.60 |
| Prospect research | 10/week | 5,000 | $3.00 |
| Forecast intelligence | 2/week | 10,000 | $1.20 |
| Total per rep | -- | -- | ~$29/month |
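The model above is just frequency × tokens × price. A sketch that approximately reproduces the monthly total; the blended $15/M rate and four-week month are assumptions backed out of the table's rows, so the result differs slightly from the ~$29 total due to rounding in the illustrative figures:

```python
# Activities from the table above: (instances per week, tokens per instance).
ACTIVITIES = {
    "email_personalization":    (40, 3_000),
    "meeting_prep_briefs":      (8, 8_000),
    "call_transcript_analysis": (6, 15_000),
    "deal_coaching":            (5, 12_000),
    "buyer_group_analysis":     (3, 20_000),
    "prospect_research":        (10, 5_000),
    "forecast_intelligence":    (2, 10_000),
}

BLENDED_RATE_PER_M = 15.00  # assumed blended $/1M tokens implied by the table
WEEKS_PER_MONTH = 4         # assumed for the monthly roll-up

def monthly_cost_per_rep() -> float:
    tokens = sum(freq * per_instance * WEEKS_PER_MONTH
                 for freq, per_instance in ACTIVITIES.values())
    return tokens * BLENDED_RATE_PER_M / 1_000_000

print(f"${monthly_cost_per_rep():.2f}")  # → $27.84, close to the table's ~$29
```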
At $29 per rep per month, tokens look trivially cheap. And they are, today. But three dynamics change this calculation:
Scale. A 200-rep organization spends $5,800 per month, $70,000 per year. Not trivial for a growth-stage company managing burn rate.
Agent multiplication. When you add AI SDR agents that operate continuously -- prospecting, qualifying, engaging -- the token volume increases by an order of magnitude. An AI SDR agent processing 500 prospects per day at 10,000 tokens per prospect consumes 5 million tokens daily. At standard model pricing, that is $15-75 per day per agent, or $450-2,250 per month per agent.
Quality escalation. As teams discover that frontier models produce better emails, sharper coaching, and more accurate analysis, they migrate from fast models to standard to frontier. The per-token cost increases 10-20x. The quality improvement is real, but so is the cost increase.
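The agent-multiplication math above is worth making explicit. A sketch using the article's illustrative agent, with the $3-15/M range as an assumed span from input-heavy to output-heavy standard-tier usage:

```python
def agent_daily_cost(prospects_per_day: int, tokens_per_prospect: int,
                     rate_per_m: float) -> float:
    """Daily token cost for a continuously running AI SDR agent."""
    return prospects_per_day * tokens_per_prospect * rate_per_m / 1_000_000

# 500 prospects/day at 10,000 tokens each = 5M tokens/day.
low = agent_daily_cost(500, 10_000, 3.00)    # input-heavy standard rate
high = agent_daily_cost(500, 10_000, 15.00)  # output-heavy standard rate
print(f"${low:.0f}-{high:.0f}/day, ${low*30:.0f}-{high*30:.0f}/month")
# → $15-75/day, $450-2250/month
```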
Tokens as a Component of CAC
This is the framing that should matter to every CRO.
Customer Acquisition Cost has always been: (Sales & Marketing Expense) / (New Customers Acquired). The numerator included compensation, tools, travel, events, content, advertising. It did not include tokens.
Now it does. And the proportion is growing.
| CAC Component | Traditional | AI-Augmented (2026) | Projected (2028) |
|---|---|---|---|
| Compensation | 55% | 45% | 35% |
| Software/tools | 20% | 18% | 15% |
| Marketing/content | 15% | 12% | 10% |
| Travel/events | 8% | 5% | 5% |
| Tokens | 0% | 8% | 20% |
| AI agent labor | 0% | 10% | 15% |
| Other | 2% | 2% | 0% |
The token share of CAC is still small. But the trajectory matters. As AI agents take on more of the revenue workflow -- qualifying leads, conducting initial outreach, researching accounts, drafting proposals, following up on stalled deals -- the token cost grows in proportion to the human cost it displaces.
The critical insight: tokens should be evaluated against the compensation they replace, not in isolation. If an AI SDR agent costs $2,000 per month in tokens and replaces $8,000 per month in human SDR compensation, the ROI is obvious. But if you are not tracking the token cost, you cannot compute the ROI. And if you cannot compute the ROI, you cannot optimize it.
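That framing reduces to a single ratio. A minimal sketch of the AI SDR example above:

```python
def agent_roi(token_cost_monthly: float, replaced_comp_monthly: float) -> float:
    """Dollars of displaced compensation per dollar of token spend."""
    return replaced_comp_monthly / token_cost_monthly

# The article's example: $2,000/month in tokens replacing $8,000/month
# in human SDR compensation.
print(agent_roi(2_000, 8_000))  # → 4.0
```

The point is not the division; it is that the numerator and denominator both have to be tracked before the ratio exists at all.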
Seven Strategies for Token-Aware Revenue Operations
1. Route by Complexity
The single highest-leverage optimization. Not every AI interaction requires the same model. Email classification (is this a positive reply, objection, or noise?) can run on a fast model at $0.25/M tokens. Buyer group analysis should run on a standard model at $3/M tokens. Strategic deal coaching might warrant a frontier model at $15/M tokens.
Most AI platforms use a single model for everything. The ones that route by complexity reduce costs 60-80% with negligible quality loss on the simpler tasks.
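Routing by complexity can be as simple as a task-to-tier lookup. A sketch following the article's examples; the task categories and mapping are illustrative, and a production router would key off measured task difficulty rather than a static table:

```python
# Illustrative task-to-tier routing table, per the examples above.
ROUTES = {
    "email_classification":    "fast",      # $0.25/M -- reply vs objection vs noise
    "buyer_group_analysis":    "standard",  # $3/M
    "strategic_deal_coaching": "frontier",  # $15/M
}

def route(task_type: str) -> str:
    # Default unknown task types to the standard tier rather than frontier,
    # so new tasks do not silently run at a 5x cost premium.
    return ROUTES.get(task_type, "standard")

print(route("email_classification"))  # → fast
```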
2. Cache Aggressively
Company profiles, stakeholder maps, deal histories, product positioning -- this context is stable across interactions. Cache it. Reuse it. Pay the full input cost once, the cached cost (10-50% of full) every subsequent time.
The implementation is straightforward: identify the context that changes less than once per day and cache it. For most revenue applications, this is 60-80% of the total input context.
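The savings are easy to estimate up front. A sketch using the discount range cited above (cached reads at 10-50% of full input cost); the monthly volume and stable-context fraction in the example are hypothetical:

```python
def cached_input_cost(total_input_tokens: int, stable_fraction: float,
                      rate_per_m: float, cache_discount: float) -> float:
    """Monthly input cost when a stable fraction of context is served from cache.

    cache_discount: 0.5-0.9 per the provider discounts cited above.
    """
    fresh = total_input_tokens * (1 - stable_fraction)
    cached = total_input_tokens * stable_fraction * (1 - cache_discount)
    return (fresh + cached) * rate_per_m / 1_000_000

# Hypothetical: 10M input tokens/month, 70% stable context, standard tier,
# 90% cache discount.
full = 10_000_000 * 3.00 / 1_000_000  # $30.00 with no caching
with_cache = cached_input_cost(10_000_000, 0.7, 3.00, 0.9)
print(f"${with_cache:.2f} vs ${full:.2f}")  # → $11.10 vs $30.00
```

That is a 63% reduction, inside the 40-70% range cited above.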
3. Measure Token ROI by Activity
Track tokens consumed per activity type (email drafting, research, coaching, etc.) and correlate with outcomes. You will discover that some activities have exceptional token ROI -- buyer group analysis that costs $0.12 and identifies a $500K influence path -- and some have poor token ROI -- verbose email drafts that get rewritten by the rep anyway.
Invest tokens where the ROI is highest. Cut tokens where the output is ignored.
4. Set Token Budgets by Deal Size
A $50K deal does not warrant the same AI investment as a $500K deal. Set tiered token budgets: fast models and basic analysis for SMB deals, standard models and full buyer group intelligence for mid-market, frontier models and extended reasoning for enterprise.
This mirrors how you allocate human time. You do not send a VP on a discovery call for a $30K deal. You should not run $5 of frontier reasoning on one either.
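A tiered budget is a threshold lookup. The thresholds, tiers, and dollar budgets below are hypothetical illustrations of the principle, not recommendations:

```python
# (minimum deal value, model tier, per-deal token budget) -- hypothetical tiers,
# ordered from largest threshold to smallest.
TIERS = [
    (500_000, "frontier", 25.00),  # enterprise: frontier + extended reasoning
    (150_000, "standard", 5.00),   # mid-market: full buyer group intelligence
    (0,       "fast",     1.00),   # SMB: fast models, basic analysis
]

def deal_budget(deal_value: float) -> tuple:
    """Return (model tier, token budget in dollars) for a deal of this size."""
    for threshold, tier, budget in TIERS:
        if deal_value >= threshold:
            return tier, budget
    return TIERS[-1][1], TIERS[-1][2]

print(deal_budget(400_000))  # → ('standard', 5.0)
```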
5. Compress Context, Not Capability
The art of token management is providing the AI with the right context in the fewest tokens. This does not mean providing less information. It means providing better-structured information.
A 15,000-token CRM dump produces worse results than a 4,000-token structured brief that highlights the key signals. The AI does not need every email ever sent. It needs the three emails that reveal the stakeholder's priorities.
This is an engineering problem, not a usage problem. The platform should handle context compression. If it does not, you are paying for waste.
6. Track the Token-to-Revenue Ratio
Just as you track CAC payback period and LTV/CAC ratio, track the ratio of token cost to revenue influenced. This metric tells you whether your AI investment is becoming more or less efficient as you scale.
A healthy trajectory: token cost grows linearly while revenue influenced grows exponentially (because the AI gets better at routing tokens to high-impact activities). An unhealthy trajectory: both grow linearly, meaning you are scaling AI usage without scaling AI impact.
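The healthy-versus-unhealthy distinction shows up directly in the ratio over time. A sketch with hypothetical quarterly figures illustrating the healthy trajectory (linear token spend, compounding revenue influenced):

```python
def token_to_revenue_ratio(token_cost: float, revenue_influenced: float) -> float:
    """Token dollars spent per dollar of revenue influenced (lower is better)."""
    return token_cost / revenue_influenced

# Hypothetical quarters: (token spend, revenue influenced).
quarters = [(5_000, 1_000_000), (10_000, 3_000_000), (15_000, 9_000_000)]
for spend, revenue in quarters:
    print(f"{token_to_revenue_ratio(spend, revenue):.5f}")
# The ratio should fall each quarter if AI impact scales faster than AI usage.
```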
7. Plan for the Price Curve
Token prices have dropped 10x in two years. They will drop further. Strategies that are uneconomical today at frontier pricing may become profitable at next year's prices. Build the infrastructure now -- the routing, caching, measurement, and budgeting systems -- so you can capture the value as costs decline.
The organizations that treat tokens as an operating expense to be managed, rather than a cost to be minimized, will have a structural advantage. They will invest more aggressively in AI capabilities because they understand the unit economics. And when prices drop by another 5x, they will scale their AI operations faster than competitors who are still figuring out how to track the spend.
The Token-Aware CRO
The CRO who understands token economics has a new lever that their peers do not. They can make informed decisions about which AI capabilities to deploy, at what model tier, for which deal sizes, and with what expected return. They can have an honest conversation with their CFO about AI costs that goes beyond "we use ChatGPT" and into the territory of unit economics, marginal returns, and investment optimization.
This is not a technology conversation. It is a P&L conversation. Tokens are a variable cost that scales with revenue activity. They should be managed with the same rigor as any other variable cost in the revenue organization.
The companies that figure this out first will not just save money on tokens. They will deploy AI more aggressively, more precisely, and more profitably than their competitors. They will know exactly what each token buys them. And in a world where AI capability is increasingly commoditized, that operational intelligence -- knowing where tokens produce value and where they produce noise -- may be the most durable competitive advantage a revenue organization can build.
