Where Is Your Business Actually At With AI? A Practical Maturity Check.

Most leaders I talk to describe their AI adoption the same way: "We're experimenting." It's the safe answer, but it's rarely the honest one. Some are further along than they realise. Many are further behind. Almost none have a clear-eyed picture of where they actually sit.
I've been reading Microsoft's new partner playbook for Copilot and Agents — a document written for IT consultancies, but with a genuinely useful model buried inside it: a three-level maturity ladder describing how businesses actually progress with AI, from first use to real transformation. Stripped of the vendor language, it's one of the clearest self-assessment tools I've seen this year for non-technical leaders.
Let me translate it into something you can actually use.
Level 1: You're using AI as a productivity tool
At this level, AI is a product. You've turned on Copilot or Claude. Some of your people use it. Most don't, or they use it for low-stakes things like drafting emails, summarising meetings, polishing a first draft. The conversations in your organisation about AI are framed around features and demos. "Did you know Copilot can do X?"
The work being done on AI in your organisation is largely IT-driven. Readiness checks, permissions, basic rollout. Value is talked about qualitatively — "people seem to like it" — rather than measured.
This is where most businesses actually are in 2026, regardless of what they say publicly. It's not a bad place to be. It's the starting point everyone goes through. The mistake is staying here while believing you've moved on.
The honest signal you're at Level 1: If someone asked you what measurable business outcome your AI spending has produced in the last six months, you couldn't give a specific answer.
Level 2: AI changes how specific processes work
At Level 2, the conversation shifts from "what can AI do?" to "which workflows should we redesign around it?" The work is led by business leaders — heads of sales, ops, customer service — not just IT. Specific processes have been targeted and reworked: proposal generation, customer triage, onboarding, reporting.
You have scenario libraries. You have a few custom agents doing real work. You can point to specific workflows where the before-and-after is measurable. You've had your first honest conversations about which processes benefited and which didn't.
This is where the real ROI starts appearing. It's also where most organisations hit their first serious friction because changing workflows means changing roles, and changing roles means change management. The businesses that stall at Level 2 are almost always the ones that treated AI as an IT project rather than an operations one.
The honest signal you're at Level 2: You can name three specific workflows that have been redesigned around AI, and you have numbers, even rough ones, on what changed.
Level 3: The operating model itself changes
Level 3 is where most of the coverage you read about AI is aimed, and where almost no businesses actually are. At this level, AI isn't just embedded in processes. The organisation has been rebuilt around the assumption that AI is present.
Roles change. New ones appear — people who manage agents the way managers used to manage employees. Old ones shift — individual contributors become orchestrators of AI work. Executive sponsorship is assumed. KPIs shift from "AI adoption" to business outcomes where AI happens to be a lever. The question isn't whether you're using AI; it's how you compete given that your competitors are too.
This is genuinely hard. It takes years, not quarters. It requires operating model redesign, not just tooling. Microsoft calls this "Frontier Firm" status. The name is marketing; the underlying shift is real. A small handful of companies are genuinely here. Most businesses that claim Level 3 are actually a well-resourced Level 2.
The honest signal you're at Level 3: Your org chart has changed because of AI, not just your tool list.
Why this ladder matters
The reason I'm writing this isn't to rank companies. It's that each level has a different critical question, and getting the question wrong wastes years.
| Level | The critical question | Common mistake |
|---|---|---|
| 1 → 2 | Which specific workflows justify the investment? | Adding more tools instead of picking workflows |
| 2 → 3 | What would our operating model look like if AI were assumed? | Scaling Level 2 wins without rethinking structure |
| Staying at 1 | Is the usage we have producing any measurable value? | Declaring victory because adoption is up |
If you're at Level 1 and asking Level 3 questions ("how do we become an AI-first organisation?"), you'll design a transformation programme that collapses under its own ambition. If you're at Level 2 and stuck asking Level 1 questions ("which tools should we buy next?"), you'll plateau on tooling and never get to the operating model conversation.
The most common mismatch I see in Atlantic Canada businesses is Level 1 reality with Level 3 ambition. The second most common is Level 2 reality with Level 1 measurement — real process change happening, but no one counting it properly.
A practical self-check
Four questions I'd ask any leader trying to place themselves honestly on this ladder.
One: When you describe your AI adoption, are you naming tools or naming workflows? Tools means Level 1. Workflows means Level 2. Operating-model changes means Level 3.
Two: Who leads the AI conversations in your organisation? IT or a vendor rep means Level 1. Business leaders means Level 2. The CEO and COO means Level 3.
Three: What's the last AI-related metric you looked at? Usage and adoption means Level 1. Process cycle time or error rate means Level 2. Revenue, margin, or cost-of-service means Level 3.
Four: What does "progress" mean in your next six months? More seats activated means Level 1. A new process redesigned means Level 2. A new way of organising work means Level 3.
Answer these quickly and without flattering yourself. Most leaders overestimate by one level. That's normal, and it's fixable. What isn't fixable is planning a Level 3 transformation while still operating at Level 1.
What I'd do this quarter
If you're honestly at Level 1, your job isn't to plan an AI strategy. It's to pick one workflow (the most boring, well-understood, high-friction one you have) and run a real pilot with measurement attached. The pilot is the asset, not the plan.
If you're honestly at Level 2, your job isn't to add more workflows. It's to answer the operating-model question before you scale: if this way of working becomes the default across the business, what changes about how we're organised, how we hire, and what we measure? Answering that is a board-level conversation, and it's the one most businesses skip.
If you're honestly at Level 3, you already know what to do, and you're probably reading this for entertainment rather than guidance.
The deeper thing
The value of any maturity model isn't in the label. It's in what the label forces you to admit. The Microsoft playbook I pulled this from is written to help IT consultancies sell up the ladder. That's fair; it's their business. But the ladder itself predates the vendor version. It's how every previous technology adoption curve has worked, from spreadsheets to cloud to SaaS. AI isn't different in structure. It's just faster.
The businesses that do well in the next three years won't be the ones at Level 3. They'll be the ones who knew they were at Level 1 and acted accordingly. Clarity about where you are beats ambition about where you want to be, every time.