What Are the Practical Implications of AI?
The practical implications of AI aren't about job losses or magic solutions. They're about governance, process, and who owns decisions. Here's what actually matters.

I sat in three board meetings in the last eight weeks where the question was the same.
"What's our AI plan?"
The answers were all over the place. One organisation wanted an AI strategy document. Another wanted to ban ChatGPT. A third wanted to copy whatever its competitors were doing.
None of them were asking the right question.
The practical implications of AI aren't about whether your team uses ChatGPT or Copilot. They're about who makes decisions, how work gets designed, and what happens when your guardrails conflict with how people work. Let me break down what I've seen in organisations over the last 18 months.
The Governance Question: Who Decides What Gets Automated?
The first practical implication is that someone has to own the decision about where AI goes and where it doesn't.
Most organisations I work with have skipped this step. They've either let every team experiment, or they've banned everything (and people use it anyway). Unsurprisingly, neither approach works.
What works is a simple framework that answers three questions:
- Where can teams experiment freely?
- Where do we need approval first?
- Where is AI banned outright?
This decision-making framework is vital. If you don't create one, your staff will create their own — and you won't know about it until something goes wrong.
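To make the three-tier framework concrete, here's a minimal sketch of it as a policy lookup. This is an illustration only: the tier names, the example activities, and the choice to default unclassified activities to "approval required" are my assumptions, not a standard.

```python
from enum import Enum

class Tier(Enum):
    FREE = "experiment freely"
    APPROVAL = "approval required"
    BANNED = "banned outright"

# Hypothetical mapping from activity to tier; every organisation's
# version of this table will look different.
POLICY = {
    "drafting internal meeting notes": Tier.FREE,
    "drafting client proposals": Tier.APPROVAL,
    "pasting client data into public tools": Tier.BANNED,
}

def check(activity: str) -> Tier:
    # Default anything not yet classified to "approval required",
    # so new use cases surface instead of going underground.
    return POLICY.get(activity, Tier.APPROVAL)
```

The useful part isn't the code; it's that the default forces new use cases into the approval path rather than leaving them invisible.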
I worked with a professional services firm where junior staff were using ChatGPT to draft client proposals. No one had said they could or couldn't. When leadership found out, the response was panic and an immediate ban.
Two weeks later, the same staff were still using it. Except now it was on personal accounts, copying and pasting client data. The ban made the risk worse, not better.
The practical implication of AI is that you need governance that channels behaviour, not rules that get ignored.

The Process Question: What Actually Changes in How Work Gets Done?
The second implication is process design.
At one organisation I worked with, the finance team used to spend two days a month reconciling supplier invoices. They automated most of it with an AI-enabled tool. The work dropped to half a day.
The question then became: what do you do with the other day and a half?
Some organisations answer this by cutting headcount. Others answer it by moving people to higher-value work (in this case, supplier relationship management and contract renegotiation).
AI creates capacity, but you have to decide what to do with it. If you automate a task and then just pile more tasks on top, you haven't improved performance. You've just made people busier.
The other process change is workflow redesign. When you introduce AI into a process, you often have to redesign the whole process, not just bolt the AI on the end.
For example: a client services team I worked with wanted to use AI to draft responses to common customer queries. Simple enough. But when we mapped the process, we found that customer queries were being routed through five different people before anyone responded.
The AI didn't fix that; it just automated the final step of a broken process. They needed to fix the routing and triage first. Otherwise the AI just made bad responses faster.
AI exposes bad processes. That's actually useful, but only if you're willing to redesign them.
The Skills Question: What Do People Need to Learn?
The third practical implication is skills, but not the ones people think.
Teams need to know when to use AI, how to frame a good prompt, how to check the output, and when to ignore it entirely. That's judgment, and it takes deliberate practice.
I ran a session with a marketing team recently. Half the room was already using AI for copywriting. The other half was sceptical. They weren't wrong to be. The output I saw was bland, generic, and full of AI-tell phrases.
The people using AI hadn't learned how to edit it. They'd just learned how to generate it.
As with any tool, you have to learn to work with its strengths and limitations. That means teaching people to:
- Write clear, specific prompts
- Recognise when AI output is usable vs when it's garbage
- Know which tasks AI handles well and which it doesn't
- Check outputs for accuracy, bias, and tone
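Some of that checking can be partially automated. Here's a deliberately shallow first-pass filter for AI-tell phrases, as an illustration: a script can flag the obvious tells, but accuracy, bias, and tone still need a human reader. The phrase list is an assumption on my part, not any kind of standard.

```python
# Common giveaway phrases in unedited AI copy (illustrative list).
AI_TELLS = [
    "delve into",
    "in today's fast-paced world",
    "unlock the power of",
    "it's important to note that",
]

def flag_tells(text: str) -> list[str]:
    """Return any AI-tell phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLS if phrase in lowered]
```

A filter like this catches the surface problems. The harder skill, recognising when the output is confidently wrong, doesn't automate.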

The Risk Question: What Could Go Wrong?
As with any new technology, AI introduces risks that didn't exist before.
The obvious ones are data leakage and hallucinations, but the less obvious one is over-reliance. When people start trusting AI without checking its work, mistakes slip through.
I've seen this in three organisations in the last six months:
- A team using AI to summarise meeting notes that missed a critical action because the AI didn't understand context
- A sales team using AI-generated proposals that included a competitor's product name
- A finance team using AI to categorise expenses that misclassified a supplier payment as travel costs
None of these were catastrophic. But all of them happened because someone assumed the AI was right and didn't check.
You need a 'trust but verify' culture. AI outputs should be treated like junior staff outputs: useful, often good, but always in need of review.
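That review rule can be sketched as a simple gate: nothing AI-generated goes out without a named reviewer, while human-written work follows the normal process. The field names here are illustrative assumptions, not a real system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    source: str                       # "ai" or "human" (illustrative)
    reviewed_by: Optional[str] = None

def ready_to_send(draft: Draft) -> bool:
    # AI-generated drafts need a named human reviewer before release;
    # human-written drafts follow the existing process.
    if draft.source == "ai":
        return draft.reviewed_by is not None
    return True
```

The point of making the reviewer a named field is accountability: "someone checked it" becomes "this person checked it".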
What This Means for Leadership
If you're a CEO, board member, or senior leader:
- You need a decision framework, not a strategy document. Who decides what gets automated? Who reviews AI outputs? What's the escalation path when something goes wrong?
- You need to redesign processes, not just automate tasks. Bolting AI onto a broken process just makes it broken faster.
- You need to invest in judgment, not just tools. The skill gap is knowing when to use AI and how to check it, not how to click a button.
- You need to communicate what happens to capacity. If you automate a task, what do people do with the time they get back? If you don't answer that, they'll assume redundancy.
- You need to treat AI outputs like junior staff outputs. Always check. Always review. Never assume it's right.
The organisations that get AI right think hardest about where it fits, who owns it, and what changes as a result.
If you're navigating AI adoption in your organisation and want a second pair of eyes on your approach, I run half-day AI workshops that map out where AI fits, what guardrails you need, and how to get started without the chaos. Get in touch and we'll talk through what makes sense for your situation.

Martin Sandhu
Fractional CTO & Product Consultant
Product & Tech Strategist helping founders and growing companies make better technology decisions.
Connect on LinkedIn



