Six months ago, I wrote about why AI is an accelerant, not a fixer — that AI multiplies whatever is already present in your organization, good or bad. Aligned teams get accelerated progress. Fragmented teams get accelerated confusion.
That thesis hasn't changed. But the landscape has.
AI agents — software that doesn't just answer questions but actually does work — are no longer a futuristic concept. They're scheduling meetings, triaging support tickets, managing outreach pipelines, writing code, and making real decisions inside real companies, right now. According to a 2025 PwC survey of 300 senior executives, 79% reported that agentic AI was already being adopted in their organizations, and 88% planned to increase their AI budgets specifically because of it.
This isn't a trend. It's a structural shift in how work gets done.
So if you're a CEO watching this unfold and wondering what to actually do about it — not in theory, but tomorrow morning — this post is for you. I'm going to share practical lessons from deploying agentic workflows inside our own firm, hard-won insights from working with growth-stage companies, and data from the organizations leading this charge.
And I'm going to make the case — again — that the single most important thing you can do to succeed with AI agents isn't picking the right tool. It's building a business that can actually absorb and direct that power.
What "Agentic AI" Actually Means (Without the Jargon)
Let's start simple. Traditional AI is like a very smart intern who answers questions when you ask them. You type a prompt, you get a response. Useful, but passive.
Agentic AI is different. Think of it more like a capable team member who can take a goal, break it into steps, use the tools available, make judgment calls along the way, and come back with results — checking in with you when they hit something they shouldn't decide alone.
Here's a concrete example from our own operations: We have an AI agent named Clio that manages parts of our business development pipeline. Clio reads our strategy documents, researches prospective companies, drafts outreach sequences, maintains our project status documents, monitors for blockers, and flags decisions that need a human. She doesn't wait to be asked — she works through the queue, reports progress, and escalates when something is stuck.
That's what agentic means in practice: AI that operates, not just AI that responds.
The important nuance is that Clio doesn't run unsupervised. She works within clearly defined boundaries — what she can draft versus what she can send, what she can research versus what she can commit to. The system has explicit approval gates, escalation triggers, and a human always makes the final call on anything that leaves the building.
The Data: Where Companies Actually Stand
The adoption numbers are striking, but the reality underneath them is more nuanced than the headlines suggest.
The momentum is real:
- EY's 2025 Technology Pulse Poll found that nearly half of tech executives (48%) were already deploying agentic AI, with 50% predicting that over half their AI operations would be agentic within two years.
- Dynatrace's Pulse of Agentic AI 2026 report, surveying 919 senior global leaders, found that almost half anticipated budget increases of at least $2 million for agentic AI alone.
- KPMG's Global Tech Report 2026 reported that 88% of companies were investing in building agentic AI into their systems.
But the execution gap is real, too:
- A Chief Executive/Salesforce survey of 330 CEOs found that only 13% said their AI strategies were clearly defined, while 42% described themselves as still "exploring possibilities."
- MIT Sloan's 2026 analysis from researchers Thomas Davenport and Randy Bean warns that agentic AI "isn't ready for prime time yet" — citing ongoing hallucinations, security vulnerabilities, and the irony that keeping humans in the loop undermines the productivity advantage agents are supposed to deliver.
- Perhaps most sobering: Deloitte's State of AI 2026 report revealed that while nearly three-quarters of organizations planned to deploy autonomous agents, only 21% had proper governance in place.
What emerges from these numbers is a pattern I see constantly: Companies are investing heavily in AI agents but struggling to make them productive — not because the technology doesn't work, but because their organizations aren't structured to direct and absorb that capability.
Sound familiar? It should. That's the same problem we've always had with powerful new technology.
Five Practical Lessons from Running Agentic Workflows
Here's what we've actually learned from deploying AI agents in our own operations. Not theory — experience.
1. Start With Structure, Not Software
Before we gave our AI agent a single task, we defined: What's the goal? What are the boundaries? What requires human approval? What gets escalated, and when?
We built what amounts to an operating plan for the agent — a state document that tracks priorities, a decision log, a blocker log with escalation timers, and approval gates for anything customer-facing. The agent works within that structure. Without it, you get a very productive system doing the wrong things very quickly.
The lesson: If you don't have clear priorities, defined processes, and accountability structures before you deploy AI agents, you'll amplify your existing confusion at machine speed.
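To make "structure before software" tangible, here's a minimal sketch of one piece of that operating plan — a blocker log with escalation timers. The names (`Blocker`, `escalate_after`, `overdue_blockers`) are illustrative, not our actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Blocker:
    """One entry in the agent's blocker log."""
    description: str
    raised_at: datetime
    escalate_after: timedelta  # escalation timer: how long before a human is pinged

    def needs_escalation(self, now: datetime) -> bool:
        return now - self.raised_at >= self.escalate_after

def overdue_blockers(log: list[Blocker], now: datetime) -> list[Blocker]:
    """Return the blockers whose escalation timer has expired."""
    return [b for b in log if b.needs_escalation(now)]
```

The point isn't the code — it's that "escalate when stuck" stops being a vague intention and becomes a rule the agent can follow mechanically.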
2. Define Roles the Way You Would for a Human
We explicitly defined our AI agent's role as "Integrator" — responsible for execution, research, drafting, quality assurance, and status reporting. The human is "Visionary" — setting strategy, making judgment calls, and approving anything external.
This isn't just organizational theater. It's how you prevent the two most common agentic AI failures: the agent doing too much without oversight (risk), or the agent doing too little because nobody defined its scope (waste).
The lesson: Treat your AI agents like team members. Give them a job description, clear authority boundaries, and explicit escalation paths.
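A job description for an agent can be as concrete as an explicit authority map: every action it might take is either something it may do alone, something that needs approval, or something outside its scope. This is a hypothetical sketch — the action names and tiers are invented for illustration:

```python
# Each action the agent might take maps to an authority tier.
AGENT_AUTHORITY = {
    "research_prospect": "autonomous",
    "draft_outreach":    "autonomous",
    "update_status_doc": "autonomous",
    "send_outreach":     "requires_approval",  # anything external gets a gate
    "commit_budget":     "out_of_scope",       # escalate to the human Visionary
}

def route_action(action: str) -> str:
    """Decide how an action is handled; unknown actions escalate by default."""
    return AGENT_AUTHORITY.get(action, "out_of_scope")
```

Note the default: anything not explicitly granted escalates. That one design choice prevents both failure modes — the agent can't quietly overreach, and gaps in its scope surface immediately instead of being silently skipped.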
3. Governance Is Not Optional — It's the Whole Game
The Deloitte finding that only 21% of organizations have proper governance for their AI agents should alarm every CEO reading this. Without governance, you get what the industry calls "shadow AI" — employees creating their own agents, automating their own workflows, with no coordination, no security review, and no accountability.
In our system, every action the agent takes is logged. Every decision that crosses a threshold requires approval. Every deliverable goes through a review queue before it reaches anyone outside our walls. This isn't bureaucracy — it's how you run a reliable operation.
The lesson: Build your governance framework before you scale your agents, not after something goes wrong.
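The mechanics of log-everything-gate-external are not complicated. Here's a deliberately simplified sketch of the pattern; the function and queue names are assumptions, not our production code:

```python
from datetime import datetime, timezone

audit_log = []      # every action the agent takes is recorded here
review_queue = []   # external deliverables wait here for human sign-off

def submit(action: str, payload: str, external: bool) -> str:
    """Log every action; anything leaving the building queues for review."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "payload": payload,
    })
    if external:
        review_queue.append((action, payload))
        return "queued_for_review"
    return "executed"
```

Twenty lines of discipline like this is the difference between an accountable system and shadow AI: when something goes wrong, you can answer who approved what, and when.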
4. Channel Discipline Prevents Chaos
One of our early mistakes was letting operational information scatter across too many surfaces — chat messages, documents, email threads, status updates all living in different places. When an AI agent is producing work at high volume, this problem compounds fast.
We solved it by enforcing strict channel discipline: operational state lives in structured documents, final deliverables go to a review queue, and messaging channels carry only short signals and links. The agent follows the same rules.
The lesson: Agentic AI produces a lot of output. If you don't have a clear system for where things live and how information flows, you'll drown in your own productivity.
5. Human-in-the-Loop Is a Feature, Not a Bug
MIT Sloan's Davenport and Bean make the fair point that keeping humans in the loop can undermine the productivity advantage of agentic AI. But I'd argue the opposite: for most organizations today, the human approval layer is what makes agentic AI safe enough to actually use.
The goal isn't to remove humans from the loop — it's to move them to the right place in the loop. Let the agent do the research, drafting, analysis, and status tracking. Let the human make the calls that carry real consequence: sending an email to a customer, committing budget, changing strategy.
Over time, as you build trust and the agent demonstrates reliability, you can expand its autonomy incrementally. But starting with broad autonomy is how pilots fail.
The lesson: Design for "human-on-the-loop" — humans monitoring and intervening at key decision points — rather than "human-in-every-loop" or "no human at all."
Why Your Business Operating System Is the Real Bottleneck
Here's where I come back to the core argument from my earlier post, because the data has only reinforced it.
The companies struggling with agentic AI aren't struggling because the technology is immature (though it has rough edges). They're struggling because they lack the organizational infrastructure to make it work:
- No clear priorities → the agent works on the wrong things
- No defined processes → the agent invents its own, inconsistently
- No accountability structures → nobody knows who approved what
- No regular check-in cadence → problems fester until they're crises
- No culture of surfacing issues → the agent can't escalate what people won't name
These aren't AI problems. They're business operating system problems. And every CEO has them to some degree, whether they're using AI or not.
A business operating system is simply the set of habits, cadences, and structures that determine how your company sets direction, stays focused, solves problems, and holds people accountable. It's not software. It's not a methodology you buy. It's the way your company actually runs.
When it's working well:
- Everyone knows the handful of priorities that actually matter this quarter
- Teams meet with purpose, make decisions, and follow through
- Data flows to the right people at the right time
- Issues get surfaced, solved, and closed — not buried
- People trust the system enough to be honest about what's broken
When it's not working well, you see the symptoms I hear about constantly: strategic whiplash, team fatigue, shadow experimentation, and the quiet skepticism of veteran employees who've seen too many initiatives fizzle.
A Framework That Works
At Meritage, we've seen one framework deliver these outcomes more consistently than any other: the Entrepreneurial Operating System (EOS). It's used by over 250,000 companies worldwide, and it's become core to our investment thesis because we keep seeing how effectively it de-risks execution and drives velocity for our portfolio companies.
EOS works by giving you a simple, practical structure across six components that every business has:
- Vision — Getting everyone aligned on where you're going, why it matters, and how you'll get there. Not a 50-page strategy document, but a clear, shared understanding that fits on one page.
- People — Making sure you have the right people in the right roles, and that everyone understands what "right" means. This becomes even more critical when some of your "team members" are AI agents.
- Data — Identifying the small number of leading indicators that actually tell you whether you're on track, and assigning clear ownership for each one. Too much data is the same as no data. You need the right few numbers.
- Issues — Building a culture where people surface real problems quickly, and a disciplined process for solving them at the root rather than slapping on band-aids.
- Process — Documenting the essential steps of your core operations so they're consistent, trainable, and — critically — automatable. This is where agentic AI becomes incredibly powerful: a well-documented process is exactly the kind of work an AI agent can take on.
- Traction — Breaking your long-term vision into annual goals, quarterly priorities, and weekly rhythms that create accountability and forward motion.
None of this is complicated. All of it is hard to sustain. But here's what makes it especially relevant right now: every one of these components directly determines how effectively you can deploy AI agents.
The Operating System + AI Agent Equation
Let me make this concrete.
- Clear Vision & Priorities: Agents work on what matters, not what's loudest
- Right People in Right Seats: Humans and agents have defined, complementary roles
- Leading Indicator Data: Agents can monitor, alert, and report on what actually drives the business
- Issue Resolution Discipline: Agents can surface and escalate issues; humans can solve root causes
- Documented Processes: Agents can follow, execute, and eventually optimize your core workflows
- Quarterly Execution Rhythm: Agents operate within predictable cycles of plan → execute → review → adjust
When all six components are working, AI agents become a force multiplier. When they're not, agents amplify whatever dysfunction already exists — exactly the accelerant effect I described in my last post.
What to Do Monday Morning
If you've read this far, here's my practical recommendation:
This week:
- Audit your operating system honestly. Score yourself 1-10 on each of the six components above. Where are you weakest? That's where AI will cause the most problems — and where you should focus first.
- Pick one process to document end-to-end. Not your most complex one. Pick something repetitive, well-understood, and high-volume. Write down every step. This is your future AI agent's first job.
- Define your governance guardrails. Before anyone deploys an AI agent, answer: What can it decide? What requires approval? What's off-limits? Who's accountable when it makes a mistake?

This quarter:
- Run a controlled pilot. Deploy one AI agent on one documented process with clear success metrics. Keep a human in the loop for every external action. Log everything.
- Strengthen your operating rhythm. If you don't have a regular cadence of priority-setting, progress review, and issue resolution — weekly, quarterly, annually — build one. This is the infrastructure that makes AI agents productive instead of chaotic.

Over the coming year:
- Invest in your people. The organizations succeeding with agentic AI aren't replacing their teams — they're upskilling them. Help your people understand what AI agents can do, how to work alongside them, and where human judgment remains irreplaceable.
- Expand incrementally. As your pilot proves out, add agents to additional processes. Increase their autonomy gradually as they demonstrate reliability. Build your internal capability to create, test, and govern agents.
- Consider a formal operating system. If you don't already run on one, explore frameworks like EOS that provide the structural foundation for disciplined execution. The companies that thrive in the agentic era won't be the ones with the most sophisticated AI — they'll be the ones with the strongest organizational immune systems.
The Bottom Line
The agentic workforce is here, and it's going to reshape how every company operates. The technology will only get more capable, more autonomous, and more pervasive.
But the lesson from every previous technology wave holds true: the winners won't be the companies that adopt the fastest. They'll be the companies that are most ready to absorb what they adopt.
That readiness comes from having clear direction, disciplined processes, the right people, honest data, a culture that surfaces problems, and a rhythm that turns intention into execution.
AI agents will accelerate whatever you already are. Make sure what you are is worth accelerating.
If your company is experimenting with AI agents but struggling to translate effort into results, let's talk.
Sources & Further Reading
- PwC, "AI Agent Survey," May 2025 — Survey of 300 senior executives on agentic AI adoption.
- EY, "Technology Pulse Poll," April 2025 — Survey of 500+ senior tech leaders.
- Dynatrace, "The Pulse of Agentic AI 2026," 2026 — Survey of 919 senior global leaders.
- KPMG, "Global Tech Report 2026," January 2026.
- Chief Executive/Salesforce, "The CEO's Guide to Agentic AI," 2025 — Survey of 330 CEOs.
- MIT Sloan Management Review, "Action Items for AI Decision Makers in 2026," 2026.
- Deloitte, "State of AI 2026," March 2026.
- CrewAI, "2026 State of Agentic AI Survey Report" — Survey of 500 senior executives at large enterprises.
- Meritage, "AI Is an Accelerant, Not a Fixer," 2025.
- EOS Worldwide, "Entrepreneurial Operating System."







