Three AI Signals Worth Sharing with Your Board This Month
Dispatches from the Agentic Frontier is a new regular intelligence briefing for leaders in knowledge-intensive sectors. Each dispatch translates evidence from the frontier of agentic AI — practitioner experience, investor signals, strategy research, market events — into what it means for enterprise leaders building for competitive advantage. Some dispatches are field reports from practitioners who are 12–18 months ahead. Others synthesise emerging research. All are filtered through the Intelligence Capital framework developed in The AI Your Competitors Can’t Buy.
Two weeks ago I published The AI Your Competitors Can’t Buy, arguing that most enterprise AI investment is buying parity — and that the only durable advantage comes from building Intelligence Capital: institutional judgement that compounds over time and cannot be replicated by purchasing the same software as your competitors later.
The response from senior leaders across insurance, banking, wealth management and other knowledge-intensive sectors has been strong — and the most common question is some version of: “How fast is this really moving?”
This week I’m drawing on two conversations that represent two very different vantage points on that question.
The first is a discussion hosted by Peter Diamandis (XPRIZE / Singularity University) with Ben Horowitz (Andreessen Horowitz), Salim Ismail (OpenExO), Dave Blundin (Link Ventures) and Dr. Alexander Wissner-Gross (Reified). These are people whose capital allocation decisions are shaping the AI infrastructure landscape that every enterprise will operate within. Their predictions matter not because they are necessarily right, but because the bets they are placing will determine what capabilities arrive, how fast, and at what cost.
The second is a conversation between Azeem Azhar (a leading AI analyst, hands-on power user and early adopter of AI agents) and Rohit Krishnan (Strange Loop Canon) — a hedge fund manager and engineer who sits at the intersection of economics, technology and systems thinking. Like many in AI Risk’s own team, both run multiple persistent AI agents as part of their daily workflows. They bring something the investor group cannot: hard usage data and lived operational experience from inside the agentic transition.
Neither group is focused on the enterprise context. That’s where my translation comes in.
1. Demand will explode — and your budgets are wrong
Azhar reports that his personal AI consumption went from roughly one million tokens per day to 100 million tokens per day within weeks of deploying a persistent agent. (A token is roughly three-quarters of a word, so 100 million tokens is about 75 million words — the text of roughly a thousand books, processed every day.)
Krishnan corroborates at a different scale:
17 billion tokens consumed across Q4 2025, then 50 billion in January 2026 alone — nearly three times the entire previous quarter’s consumption, in a single month.
This is not a curiosity about power users. It is an early indicator of what happens when AI shifts from reactive (a person asks a question, gets an answer) to agentic (a system works persistently on your behalf, delegating tasks to other AI agents, pulling data from multiple sources, iterating until the job is done). Consumption doesn’t grow linearly. It grows by orders of magnitude.
Most enterprise AI budgets are still structured around per-seat licensing, fixed platform fees, or projected query volumes based on current usage patterns. None of these models accounts for the shift to agentic consumption, and all of them are going to be wrong — potentially by 50 to 100x — once agentic systems are deployed at scale. The CFOs who approve budgets based on today’s usage patterns will inadvertently create the bottleneck that strangles value creation.
Simultaneously, cost per unit of intelligence is falling fast. Krishnan states bluntly: what costs $1,000 per day today will cost $1 per day within two years. A thousand-fold reduction in 24 months.
Put those together and you get the strategic picture: massively more intelligence consumed as your knowledge workers become more proficient, at radically lower unit cost.
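For readers who want the arithmetic, here is a rough sketch of how the two trends combine. The multipliers are illustrative assumptions drawn from the estimates quoted above (100x consumption growth, 1,000x unit-cost reduction), not forecasts:

```python
# Back-of-envelope combination of the two trends discussed above.
# Both multipliers are illustrative, taken from the estimates quoted in this piece.

consumption_growth = 100        # reactive -> agentic usage growth, per the token data
unit_cost_reduction = 1_000     # $1,000/day of work falling to $1/day within two years

# Relative total spend: far more tokens consumed, each token far cheaper.
relative_spend = consumption_growth / unit_cost_reduction

print(f"Relative AI spend after the transition: {relative_spend:.0%} of today's")
# Total spend can fall even as consumption grows 100x -- but only for
# organisations architected to absorb the extra intelligence.
```

The point of the sketch is the direction, not the precision: under these assumptions, total spend falls roughly tenfold even as consumption explodes, which is why absorption capacity, not budget, becomes the binding constraint.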
But only organisations that have built the capacity to absorb that intelligence will benefit. In practice, that means data that agents can actually access (not locked in departmental silos), governance frameworks that allow agents to act within defined authority boundaries, and institutional memory systems that capture what agents learn so the next decision is better than the last.
Without that architecture, intelligence becomes nearly free — but you can’t use it. This is the first-mover dynamic I described in The AI Your Competitors Can’t Buy, now with hard consumption data behind it.
2. The trust boundary is the new organisational boundary
Azhar makes a point that deserves more attention. He argues that the boundary of the firm — the classic question of what an organisation does internally versus externally — is being redrawn. But the defining cost is no longer transaction costs or communication costs. It is security and verifiability costs.
Internal-facing agents will operate with broad data access and minimal friction. External-facing agents — those that interact with brokers, counterparties, customers, or the AI agents of other organisations — will operate on tighter “contracts,” within stricter sandboxes, spending resources to verify identity and actions.
This maps directly onto the architecture I described in my original paper. The Coordination Layer sits inside the trust boundary: Level 4 Agentic Teams that deliberate, capture reasoning, and generate Intelligence Capital. The Level 5 Ecosystem Agents sit at the boundary and beyond, interacting across trust perimeters with cryptographically verifiable identity.
The investor group in the Diamandis podcast arrives at the same conclusion from a completely different direction. They argue that consumer-grade synthetic video and voice are reaching a point where any security approach relying on recognising a person by sight or sound is, in their words, compromised. If you lead an insurer, bank, or wealth manager, this is not a media story. It is a claims, fraud, identity and dispute-resolution story.
Any process that currently accepts a video, voice recording, or screenshot as “strong evidence” needs to be reviewed — now, not after the first major loss.
These are converging signals. Authentication based on “recognition” — voice, video, even documents — is weakening. Authentication based on cryptographic verification is becoming essential. And the architecture of your agentic systems needs to reflect this from the start — which is why, in our own advanced agentic deployments with clients, we have embedded cryptographic agent identity and immutable audit trails into the architecture from day one.
Retrofitting trust onto agents designed without it is significantly more expensive than building it in.
3. The best agentic systems eliminate adoption friction — but that doesn’t mean your workflows stay the same
Krishnan makes a deceptively simple observation. Over 20 years he has tried virtually every to-do list and productivity application, and failed with all of them, because they required him to change his behaviour. His AI agent works because it requires zero change from him — it reads his email, assembles his priorities, and surfaces them through a channel he already uses. He didn’t adapt to the tool. The tool adapted to him.
For enterprise leaders, this is a critically important design principle.
The most successful agentic deployments will meet employees where they already work — inside their existing email, messaging, and workflow tools. The adoption problem that has plagued every previous wave of enterprise technology can be designed out from the start.
But — and this is where I want to be precise — this is a principle about the user interface, not the organisational architecture. The front door should be frictionless. The plumbing underneath must be redesigned.
This is the General Motors versus Toyota distinction I shared in my original paper. Deploying a beautifully frictionless agent interface on top of workflows riddled with unnecessary approvals, redundant handoffs, and exception-laden processes is the GM mistake in new clothes. The agent will either push work back to humans or produce outcomes that erode trust. Process redesign is a prerequisite to agent deployment, not a consequence of it.
Zero friction for the user. Radical redesign underneath. Both are required.
The board question
Silicon Valley will always sound breathless. But behind the breathlessness is a consistent signal from both investors and practitioners: capability is improving non-linearly, costs are falling by orders of magnitude, and trust assumptions are breaking.
The board-level question shouldn’t be “what AI tools are we deploying?” but rather: which institutional capabilities are we building that will still matter when everyone has the same tools — and the cost of intelligence is effectively zero?
In my view, the two capabilities that create compounding advantage are: (1) faster organisational learning in the decisions that drive your economics, and (2) stronger trust and authority infrastructure that lets you operate safely at higher autonomy than competitors.
Both take time to build. Neither can be purchased off the shelf. And every month of delay is a month of Intelligence Capital not generated.