AI Safety and the Future of Work: What UK Leaders Need to Know Now
In the last month, Anthropic, the company behind Claude and one of the most safety-focused AI labs on the market, sued the US Department of Defense (DoD) after the department terminated all of the company's business relationships with US federal agencies.

This follows a months-long battle between Anthropic and the Pentagon, in which Anthropic has been trying to prohibit its AI models from being used for domestic mass surveillance and fully autonomous lethal weapons. The company argued that these uses would violate its founding safety principles, a concern borne out when it discovered Claude had been used in Venezuela without its knowledge.
The company that built the tool didn't know how, when or where it was being deployed, and it has since faced immense pressure from the DoD to meet the department's demands.
If a company like Anthropic no longer has control over how its own technology gets used, what chance do the rest of us have?
I keep thinking about this when I work with organisations adopting AI. The conversation usually starts with enthusiasm ("We need an AI strategy") and ends with a harder question: "How do we make sure people use this safely?"
AI and the future of work isn't just about what's technically possible. It's about what happens when smart tools meet real organisations with real people making quick decisions under pressure.

What's Actually Happening Right Now
The gap between AI demos and production is wider than most people think.
Take AI agents. You've probably seen the demos: agents that book meetings, analyse documents, run workflows. It looks impressive. And yet Irregular, an AI security lab, found that AI agents will bypass security measures to publish password information, override antivirus protections to download files they know contain malware, and even forge credentials.
Why? Because the default behaviour of many AI agents is to overshare information when prompted. They're designed to be helpful, not secure.
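That default is fixable, but only if you wrap the agent rather than trust it. Here's a minimal Python sketch of what a deny-by-default guardrail could look like; the tool names, function names and regex are illustrative assumptions, not any vendor's real API:

```python
import re

# Approved actions only; anything not on this list is refused by default.
ALLOWED_TOOLS = {"search_docs", "summarise"}

# Crude credential detector: catches strings like "password: hunter2" or "api_key=abc123".
SECRET_PATTERN = re.compile(r"(password|api[_-]?key)\s*[:=]\s*\S+", re.IGNORECASE)

def guarded_call(tool_name, payload, run_tool):
    """Refuse unapproved tools and redact credential-like strings from the output."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not on the approved list")
    result = run_tool(tool_name, payload)
    # Redact anything credential-shaped before it leaves the agent.
    return SECRET_PATTERN.sub("[REDACTED]", result)
```

The point isn't this particular filter; it's that the safe behaviour sits outside the agent, where the agent can't talk its way around it.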
This isn't theoretical: the 26 experts behind the report ‘The Malicious Use of Artificial Intelligence’ predict that AI will enable large-scale, finely targeted and highly efficient attacks exploiting speech synthesis, automated hacking, and vulnerabilities in AI systems themselves.
People are building and deploying these tools faster than they're thinking about what could go wrong.
The pattern I see in organisations: someone tries ChatGPT or Claude, gets impressive results, and shares it with their team. Within weeks, half the company is using AI tools in ways IT doesn't know about. Shadow AI spreads because people want to work faster and the tools genuinely help.
Until they don't.
The Questions Leaders Should Be Asking
Will AI replace jobs?
Not the way most people think. We shouldn’t expect the disappearance of white-collar roles, but we can anticipate a shift. The repetitive, process-heavy parts of jobs will be increasingly handled by AI while human workers focus on judgment, context and decision-making.
The risk we should focus on isn't mass unemployment; it's the rising demand for new skills. People who spend their time on tasks AI can automate will find their roles shrinking, whereas those who learn to work with AI will become more valuable. Expect a sharp rise in workers upskilling to meet that demand.
What does good AI governance actually look like?
Most governance frameworks I see are either multi-page processes nobody follows, or thin guidance with a ‘just be sensible’ sentiment and no guardrails. The first is too heavy, the second too light.
Good governance is practical - simple enough that people can and will follow it, robust enough that it catches real risks. Here’s what it looks like:
- Clear rules about what data can go into external AI tools, and what happens when the rules are broken (a minimal sketch of such a check follows this list).
- A short list of approved tools that have been security-reviewed, with a clearly stated, easy process for adding new ones.
- Training that focuses on real scenarios.
- Regular check-ins to see how AI is actually being used.
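To show what that first rule can look like in practice, here's a minimal Python sketch. The tool names and patterns are placeholders, and the regexes are deliberately simplified, not production-grade detectors:

```python
import re

# Your security-reviewed shortlist of external tools (placeholders).
APPROVED_TOOLS = {"claude", "chatgpt"}

# Rough, illustrative patterns for data that must never leave the building.
RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),            # UK bank sort codes
    re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),           # National Insurance numbers (simplified)
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # documents labelled confidential
]

def check_prompt(tool, prompt):
    """Raise before a prompt containing restricted data reaches an external tool."""
    if tool not in APPROVED_TOOLS:
        raise ValueError(f"'{tool}' has not been security-reviewed")
    for pattern in RESTRICTED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain restricted data")

# Example: this raises rather than sending the sort code to the external tool.
# check_prompt("claude", "Pay the supplier, sort code 20-00-00")
```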
As we saw with AI agents, even with the best intentions and the most robust frameworks, things will slip through. Your governance framework needs to assume that bad things will happen and that people will make mistakes. Build in checks accordingly.
Where should we start?
Find three workflows where AI could genuinely help your team. Look for the boring, repetitive, functional tasks that matter to the business but pull time away from higher-value work. Lay out their processes, then identify the steps AI could automate or assist with. Remember that AI should slot into your existing workflow, not diverge from it.
Once you’ve identified where AI sits, it’s time to implement. Test it properly before deploying, and ensure guardrails are in place for go-live.
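One way to make "test it properly" concrete is a small known-answer check that must pass before go-live. A minimal sketch, assuming a callable `ask_model` that wraps whichever AI service you've chosen; the questions and expected answers are placeholders for your own:

```python
# A tiny "golden set": questions whose correct answers you already know.
GOLDEN_SET = [
    ("What is our refund window?", "30 days"),
    ("Which regulation covers personal data in the UK?", "GDPR"),
]

def preflight(ask_model):
    """Block go-live if the model misses any known-answer case."""
    failures = [
        (question, expected)
        for question, expected in GOLDEN_SET
        if expected.lower() not in ask_model(question).lower()
    ]
    for question, expected in failures:
        print(f"FAIL: {question!r} should mention {expected!r}")
    return not failures
```

A dozen of these cases, re-run whenever you change the prompt or the tool, catches more regressions than any amount of ad-hoc poking.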
Watch what happens once it’s out there - does it work? Do people actually use it? What breaks, and what risks emerge? You’ll learn more from three real implementations than from six months of strategy papers.

The Accuracy Trap
There’s a tendency to over-delegate to AI because it seems smart. It gives confident, well-structured answers, so we assume it’s checked its own work and stop verifying.
We know that AI hallucinates and makes mistakes, yet over-delegation happens at every level. Junior staff assume senior leadership has it covered. Senior leadership assumes the same of the IT team. IT assumes people will use the tools properly. Nobody is checking, because everyone thinks someone else is accountable for doing so.
It’s crucial to have a human in the loop for every AI process, checking the quality of the output. The chain of responsibility should be clearly laid out in your governance and communicated to everyone involved.
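What that chain can look like in code, as a minimal sketch (the class and function names are illustrative, not a prescribed implementation): every AI draft carries a named reviewer, and nothing proceeds without their sign-off.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    content: str
    reviewer: str                 # the accountable human, named up front
    approved: bool = False
    notes: list = field(default_factory=list)

def sign_off(draft, reviewer, ok, note=""):
    """Record who checked the AI's work and whether it can proceed."""
    if reviewer != draft.reviewer:
        raise PermissionError("Only the named reviewer can sign this off")
    draft.approved = ok
    if note:
        draft.notes.append(note)
    return draft
```

The useful part is the audit trail: when something goes wrong, you can see exactly who was meant to check it, rather than discovering that everyone assumed someone else had.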
The future of work will be in humans deciding what AI should do, when to trust it, and where human judgment still matters. That’s a question for leadership.
What This Means for 2030
By 2030, the organisations that thrive will be the ones that adopted AI best, not fastest.
That means people using AI to do the right work, not just the same work faster. Leadership will understand where AI helps and where it introduces risk, and will develop easy-to-follow governance based on that understanding. The organisation will have a culture of testing, learning and adjusting to get the best out of AI.
The hardest part won’t be the technology; it’ll be the organisational change: the conversations with staff about how their roles will shift, the discipline to kill AI experiments that aren’t working, the nerve to resist pressure to do something just because competitors are. Organisations with a curious, open and iterative approach will be the ones that succeed in adoption.
AI and the future of work is less about what's possible and more about what's practical. Most organisations already have access to AI tools; they struggle because they can’t govern them. Remember that demos don’t tell the whole story: start with real use cases, and build governance that people will actually follow.
Remember: the goal isn't to adopt AI faster, it's to adopt it better.
If you're a UK leader navigating AI adoption and want a practical, no-hype approach to getting this right, I run AI implementation programmes designed for organisations like yours. Let's talk.
Book a free discovery call
Martin Sandhu
Fractional CTO & Product Consultant
Product & Tech Strategist helping founders and growing companies make better technology decisions.
Connect on LinkedIn



