The AI Implementation Process That Survives Real Organisations

A practical guide to AI implementation from discovery through to embedding. What works in real organisations, what fails, and the steps most teams skip.

The typical AI implementation begins with a vendor demo. Someone sees a shiny tool, gets excited, and starts imagining use cases.

This is backwards to me.

Imagine buying the first project management tool that was pitched to you, without checking whether it was compatible with your teams’ existing software, matched your existing workflows, or could account for the quirks of your business. No chance it’d happen, right?

Buying an off-the-shelf AI tool just because it looks cool is the same mistake. Looking impressive in a demo doesn’t mean it’ll fit.

Finding the best tools means beginning with the implementation process, then selecting the tools that fit the job, the same way you would approach any project or transformation.

Start with discovery

Discovery means understanding three things before you touch any technology:

  • What processes are causing the most friction for your people right now?
  • Where is repetitive work blocking higher-value activity?
  • What would improve if those frictions were removed?

I worked with a professional services firm last year. They wanted AI for document generation. During discovery, we found that document generation was not the bottleneck. It was actually version control and approval workflows. The documents were fine. The chaos around who had edited what and when was the problem.

We implemented simpler workflow automation with light AI assistance for flagging conflicts. It solved the actual problem, whereas a full AI document generator would have created more work.

Discovery is about finding the real problem, not the one someone’s selling a solution for.

Select use cases using a decision framework

Once you understand the problems, you need a way to decide which ones to tackle with AI. Most organisations skip this step and go straight to piloting whatever sounds impressive.

Here is a simple framework I use with clients:

  • Impact: Does solving this problem materially improve performance, productivity or revenue?
  • Feasibility: Can we implement this with current technology and our existing systems?
  • Adoption risk: Will people use it, or will it be a tool nobody touches?

You are looking for high impact, medium-to-high feasibility, and manageable adoption risk. If a use case scores poorly on all three, do not pilot it. If impact is low but feasibility is high, you are just automating something that does not matter.
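
If it helps to make the framework concrete, here is a minimal scoring sketch. The 1–5 scales, the thresholds and the example use cases are illustrative assumptions, not part of the framework itself:

    # Illustrative only: a minimal way to score candidates against the
    # framework. The 1-5 scales and thresholds are assumptions, not part
    # of the framework itself.
    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        impact: int         # 1 (marginal) to 5 (material improvement)
        feasibility: int    # 1 (needs new technology) to 5 (works with current systems)
        adoption_risk: int  # 1 (team is asking for it) to 5 (nobody will touch it)

    def worth_piloting(uc: UseCase) -> bool:
        """High impact, medium-to-high feasibility, manageable adoption risk."""
        return uc.impact >= 4 and uc.feasibility >= 3 and uc.adoption_risk <= 3

    candidates = [
        UseCase("AI document generator", impact=2, feasibility=4, adoption_risk=4),
        UseCase("Approval-conflict flagging", impact=4, feasibility=4, adoption_risk=2),
    ]
    for uc in candidates:
        print(f"{uc.name}: {'pilot' if worth_piloting(uc) else 'skip'}")

The exact thresholds matter less than the discipline of scoring every candidate the same way before committing to a pilot.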

The best early use cases are ones where the team doing the work is actively asking for help. They know the pain. They want it solved. They will use the solution if it works.

Run a structured pilot with clear success criteria

A pilot is a contained experiment with defined inputs, outputs and success criteria.

Before you start, define what success looks like in measurable terms (time saved, error reduction, tasks completed). Set a timeframe (usually 4–8 weeks for a first pilot). Identify who will use it, how often, and in what context. Agree on how you will measure whether it worked.

During the pilot, you are testing whether people will use it when they are busy, which will depend on whether it fits into their existing workflow and whether the output is trustworthy.

Most pilots fail because the success criteria are vague. Avoid simple, generic questions like 'Did people like it?'. A more effective metric might be: 'Did it reduce invoice processing time by 30% without increasing errors?'.
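
As a sketch of what checking a criterion like that can look like, with baseline and pilot figures invented for the example:

    # Illustrative only: checking a pilot against pre-agreed success
    # criteria. The baseline and pilot figures are invented for the example.
    baseline = {"avg_minutes_per_invoice": 20.0, "error_rate": 0.040}
    pilot = {"avg_minutes_per_invoice": 13.0, "error_rate": 0.035}

    time_reduction = 1 - pilot["avg_minutes_per_invoice"] / baseline["avg_minutes_per_invoice"]
    errors_increased = pilot["error_rate"] > baseline["error_rate"]

    # Success criterion agreed before the pilot started: at least a 30%
    # reduction in processing time without an increase in errors.
    success = time_reduction >= 0.30 and not errors_increased
    print(f"Time reduction: {time_reduction:.0%}, errors increased: {errors_increased}")
    print("Pilot met its criteria" if success else "Pilot did not meet its criteria")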

Build guardrails that people will follow

This is where most AI implementations break down. You have a working pilot. You have proven the use case. Now you need to roll it out safely.

Guardrails ensure people have clear boundaries so they can experiment confidently without creating risk. They might include what data can and cannot be put into AI tools, what outputs need human review before being used, what to do if the AI produces something wrong or problematic, and who to ask when you are not sure.
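
To show how compact this can be, here is the same set of categories expressed as a one-page-style sketch. The specific rules are assumptions, not recommendations:

    # Illustrative only: guardrails compact enough to fit on one page,
    # expressed as data. The specific rules are assumptions; adapt them
    # to your own risk profile.
    GUARDRAILS = {
        "never_input": [
            "client personal data",
            "unpublished financials",
            "passwords or credentials",
        ],
        "human_review_before_use": [
            "anything client-facing",
            "anything quoting figures or legal terms",
        ],
        "if_output_is_wrong": "flag it to a named owner and do not use it",
        "when_unsure": "ask the named owner before proceeding",
    }

    for rule, detail in GUARDRAILS.items():
        print(rule, "->", detail)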

The test of good guardrails is whether frontline staff can follow them without needing to ask permission every time. If your guardrails require three approvals and a policy check before someone can use the tool, you’re making life harder. People will eventually find workarounds.

I have seen organisations write 40-page AI governance documents, and others use a one-page checklist that gets printed and stuck on desks. Guess which one people follow.

Embed the new workflow into operations

A successful pilot does not mean anyone will use it next month. You need to give it time to embed, which means making your new AI-enabled workflow the default way of working instead of an optional extra.

This requires training that focuses on the workflow, so people understand when and why to use it, as well as how. To make this successful, you’ll need to adjust existing processes so the AI tool fits naturally into the flow of work. Remove the old way of working after a bedding-in period to avoid bad habits.

Perhaps most important is identifying champions in each team who receive advanced training. They’ll not only help answer questions and troubleshoot issues, they’ll be advocates. It makes a big difference to your teams’ confidence when some of their colleagues have already made the leap.

Embedding takes longer than the pilot. It is also harder to measure. This is where the value gets realised. A tool that five people use brilliantly is less valuable than a tool that 50 people use adequately every day.

Measure what matters and iterate

You need to know whether the AI implementation is working. Track usage rates, task completion metrics, error or rework rates, and time or cost saved. It should be pretty easy to infer adoption, effectiveness and efficiency from there.
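
A minimal sketch of what that tracking might look like, with field names and figures that are assumptions rather than prescriptions:

    # Illustrative only: a minimal post-rollout metrics snapshot. Field
    # names and figures are assumptions; use whatever your systems record.
    from dataclasses import dataclass

    @dataclass
    class RolloutMetrics:
        active_users: int    # people who used the tool this month
        eligible_users: int  # people who could have used it
        tasks_completed: int
        tasks_reworked: int
        hours_saved: float

        @property
        def adoption_rate(self) -> float:
            return self.active_users / self.eligible_users

        @property
        def rework_rate(self) -> float:
            return self.tasks_reworked / self.tasks_completed

    month_one = RolloutMetrics(active_users=32, eligible_users=50,
                               tasks_completed=410, tasks_reworked=29,
                               hours_saved=85.0)
    print(f"Adoption: {month_one.adoption_rate:.0%}, "
          f"rework: {month_one.rework_rate:.1%}, "
          f"hours saved: {month_one.hours_saved:.0f}")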

Most organisations stop measuring after the pilot, which is a mistake. The first three months post-rollout are where you’ll learn what needs fixing, and I can guarantee it’ll be more than you think. Don’t shy away from it; start making changes. Iteration is a good thing: it’s a sign you’re paying attention.

What most AI implementations get wrong

Having run this process with health organisations, professional services firms and operations teams, I see the same failure patterns over and over. Most implementations fail because they:

  • start with technology rather than a problem;
  • pilot use cases that sound impressive but do not matter to the people doing the work;
  • skip the guardrails conversation until someone creates a risk incident;
  • assume that if the pilot worked, rollout will be straightforward;
  • don’t resource the embedding phase properly because they think the hard part is over.

The AI implementation process is not complicated, but it requires discipline. You cannot skip discovery. You cannot pilot everything. You cannot ignore adoption risk and hope people will just use it.

The organisations that succeed with AI treat it like any other operational change: clear problem, simple plan, measurable success, and time to make it stick.

Martin Sandhu

Fractional CTO & Product Consultant

Product & Tech Strategist helping founders and growing companies make better technology decisions.