December 16, 2025

AI for Founders: How to Use AI in Your Product and Team

A couple of years ago, “AI” meant impressive demos, viral screenshots and pitch decks full of big promises. Today it’s quietly baked into almost every tool your team touches. Your CRM, your helpdesk platform, your analytics stack and even your calendar are all already doing small bits of machine learning in the background.

The question for founders in 2025–2026 is no longer “Should we use AI?” but “How do we make AI actually move the numbers?” Investors have stopped being impressed by vague AI slides. Customers are less excited by chatbots and more interested in products that feel faster, smarter and easier to use. Teams want tools that reduce their cognitive load, not one more thing to log into.

In my piece on why 2026 is the year AI becomes boring (and useful), I talk about this shift. The real opportunity is in the unglamorous, operational side of AI: removing friction, automating repetitive work, and letting a small, senior team behave like a much larger one. This article is a practical guide to doing that. It’s written for founders and leaders who want to move beyond AI theatre and use it to improve their product and team.

The wrong way to start with AI

Let’s start with the mistake I see most often. A leadership team becomes convinced they “need an AI strategy”. Someone suggests a chatbot. A prototype appears on a website. A few people click it, play for a minute, and then go back to emailing support. Six months later, nobody can really explain what that AI work achieved.

The pattern is always the same. The starting point is the technology rather than a specific problem. There is no clear definition of success beyond having “an AI feature”. The project is judged on how clever it looks, not on what it changes in the business. Approaching AI this way almost guarantees disappointment, because there is nothing meaningful to measure and no real user need sitting underneath it.

In my 5-Step AI Validation Framework, I suggest flipping the process. The starting point should be a single, concrete business problem that already exists in your product or operations. You then define what success would look like in numbers, before you touch any models or vendors. Only when you know what you are trying to improve do you move on to choosing tools and implementation details. AI becomes one possible way of solving the problem, not the justification for the project.

Start with one ugly workflow

Almost every company has a few workflows that everyone complains about but nobody properly owns. It might be how support tickets get triaged, how leads are qualified and routed, how management reports are assembled, or how data moves between systems. You can usually spot these processes because they are repetitive, involve a lot of copying and pasting, and depend heavily on one or two people’s “knowledge of how things work”.

If you get your core team in a room and ask which parts of their week feel the most tedious and mechanical, people will not be shy about sharing. What you are looking for are tasks that happen frequently enough to matter, follow a reasonably consistent pattern, and currently burn time or create delays. You are not looking for the most glamorous use case; you are looking for the one where the improvement will be obvious to everyone when it lands.

This is where AI starts to earn its keep. The goal is not to replace the team; it is to remove the work that is clearly beneath their pay grade. When you put that lens on, the first AI project stops being “let’s add a chatbot to the homepage” and becomes “let’s halve the time it takes to get a qualified lead in front of a salesperson” or “let’s eliminate the manual effort involved in preparing weekly operations reports”.

Choosing how to build: app builders, assistants or a custom stack

Once you know the workflow you want to improve, the next decision is how to build the solution. This is where many founders get stuck, because the tooling landscape is noisy. Underneath the noise, there are three broad patterns: no-code and AI app builders, AI-assisted coding, and more traditional custom development on top of a modern backend.

If you are non-technical or you want to test an idea very quickly, AI app builders and no-code tools are usually the best first move. In my comparison of builders like Lovable, Bolt, v0 and Replit and in my guide to Vibe coding, I show how you can describe screens, flows and behaviours in natural language and let the system scaffold the app for you. For internal tools, prototypes and early versions of customer-facing products, this is often enough to validate that the idea has legs.

If you already have engineers on the team, AI coding assistants become your leverage point. Tools in this space, which I cover in AI Coding Assistants Compared, are very good at writing boilerplate, refactoring existing code and helping you explore implementation options. They are not a replacement for proper architectural thinking, but they dramatically reduce the amount of time your developers spend on repetitive work. This approach makes sense when you need deeper integration with your existing systems, more precise control over performance, or when you are dealing with sensitive data.

As your AI features become more central to the product, you will hit questions around data models, permissions, latency and real-time behaviour. That is where using a modern backend such as Supabase comes in. In Supabase for AI Builders and Supabase + AI, I walk through how a small team can get a robust database, authentication, storage and real-time messaging without building it all from scratch. At that point, your AI features sit on top of infrastructure that can grow with you.
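
To make that concrete, here is a rough sketch of what an AI feature sitting on Supabase can look like. It uses the supabase-js v2 client; the leads table, its columns and the scoring stub are hypothetical stand-ins for your own schema and model calls.

```ts
// A minimal sketch of an AI feature on a Supabase backend, using the
// supabase-js v2 client. The "leads" table, its columns and the scoring
// stub below are hypothetical stand-ins for your own schema and model.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,      // your project URL
  process.env.SUPABASE_ANON_KEY!  // anon key; row-level security still applies
);

// Placeholder for a real model call; returns a naive score for the sketch.
async function scoreLead(lead: { source: string | null }): Promise<number> {
  return lead.source === "referral" ? 0.9 : 0.5;
}

// Fetch recent leads that have not yet been scored.
const { data: leads, error } = await supabase
  .from("leads")
  .select("id, email, source, created_at")
  .is("ai_score", null)
  .order("created_at", { ascending: false })
  .limit(20);

if (error) throw error;

// Write the model's output back into the same tables the team already
// queries, so review happens where the work already happens.
for (const lead of leads ?? []) {
  const score = await scoreLead(lead);
  await supabase.from("leads").update({ ai_score: score }).eq("id", lead.id);
}
```

The useful property is that the model’s output lands in the same tables and dashboards your team already uses, so review and measurement happen in tools you already trust.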

The pattern that emerges is simple: very early on, you stay close to Vibe coding and app builders; as you see traction, you bring in AI-assisted coding and a proper backend; when you reach scale, you have a foundation that can handle more complex requirements.

Moving beyond chatbots: thinking in agents and workflows

Most people’s first experience of “AI in a product” is a chatbot. There is nothing wrong with that, but it is usually a shallow integration. The bot might answer simple questions or route people to help articles, yet the real work still happens elsewhere. Useful, but not transformative.

The bigger opportunity lies in treating AI as a colleague that can move work across your tools. In Beyond Chatbots: Automating Boring Workflows with AI Agents and How AI Agents Can Automate Complex Tasks and Boost Efficiency, I describe agents that can pull data from multiple systems, make simple decisions within clear guardrails, and hand over results to humans for review.

Imagine, for example, lead qualification. Instead of a salesperson or SDR manually looking up information on every new enquiry, an agent can gather context from your CRM, email threads, website behaviour and third-party enrichment tools, then propose a priority and suggested next step. A person still decides what to do, but they start from a much richer picture, assembled in seconds rather than minutes. The same pattern applies to support triage, regular reporting, invoice processing and a long list of operational chores.
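
Sketched in code, that pattern looks something like the following. Every helper here is a hypothetical stand-in for your real CRM, email and enrichment APIs; the point is the shape, gather then propose then hand over, not the specific calls.

```ts
// Sketch of an agent that assembles context from several systems, makes a
// bounded decision, and hands the result to a human. Every helper is a
// hypothetical stand-in for real CRM, email and enrichment APIs.

type Lead = { id: string; email: string };
type Proposal = {
  leadId: string;
  priority: "high" | "medium" | "low";
  nextStep: string;
};

// Stubbed gathering steps; in practice these call real APIs.
async function fromCrm(lead: Lead) { return { pastDeals: 0 }; }
async function fromEmail(lead: Lead) { return { lastReplyDays: 3 }; }
async function fromEnrichment(lead: Lead) { return { companySize: 120 }; }

async function qualify(lead: Lead): Promise<Proposal> {
  // 1. Gather context from every system in parallel: the multi-system,
  //    mechanical work that used to take a person many minutes.
  const [crm, email, firmo] = await Promise.all([
    fromCrm(lead),
    fromEmail(lead),
    fromEnrichment(lead),
  ]);

  // 2. Decide within clear guardrails. A real agent would hand this
  //    context to a model; a fixed rule keeps the sketch deterministic.
  const priority =
    firmo.companySize >= 100 && email.lastReplyDays <= 7 ? "high"
      : crm.pastDeals > 0 ? "medium"
      : "low";

  // 3. Propose, don't act: a person still decides what happens next.
  return { leadId: lead.id, priority, nextStep: "Book a 15-minute intro call" };
}
```

Notice that the agent never acts on the lead itself; it only assembles context and proposes, which is what makes it safe to put in front of the team early.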

When you design AI this way, you are not building a novelty. You are redesigning workflows so that humans and agents both do what they are best at. The agents handle the mechanical, multi-system work. The humans handle judgement, exceptions and relationships. Over time, you can give the agents slightly more autonomy as you gain confidence in their behaviour.

The new AI plumbing: why MCP and integration layers matter

If you have ever built a product that talks to several external APIs, you will know how fragile integrations can be. Authentication changes, rate limits, version bumps and subtle differences between vendors create a lot of engineering drag. That drag only gets worse when you start wiring AI into the mix, because you are not just calling APIs; you are orchestrating conversations between models, tools and your own systems.

In Why MCP Is About to Change Everything for Founders, I describe Model Context Protocol (MCP) as a way of simplifying this landscape. Rather than writing bespoke glue code for every integration, MCP gives you a standard way for AI tools to talk to data sources and services. It does not magically solve every problem, but it reduces the friction involved in adding or changing integrations, and it makes it easier to test new ideas without committing huge engineering time.
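
As a taste of what that looks like in practice, here is a minimal MCP server exposing a single tool, written against the official TypeScript SDK as I understand it; the tool name and its canned response are hypothetical.

```ts
// Minimal MCP server exposing one tool over stdio, written against the
// official @modelcontextprotocol/sdk for TypeScript. The "lookup_lead"
// tool and its canned response are hypothetical; a real server would
// query your CRM or database behind this interface.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "crm-tools", version: "0.1.0" });

server.tool(
  "lookup_lead",
  { email: z.string().email() }, // input schema, validated by the SDK
  async ({ email }) => ({
    // Tools return content blocks; here, a JSON summary as text.
    content: [
      { type: "text" as const, text: JSON.stringify({ email, status: "new" }) },
    ],
  })
);

// Any MCP-capable client (an IDE assistant, an agent runtime) can now
// discover and call this tool without bespoke glue code on either side.
await server.connect(new StdioServerTransport());
```

Once a tool is exposed this way, swapping the client on the other end, or adding a second data source, stops being a bespoke engineering project.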

When you combine this kind of integration layer with a backend like Supabase and careful validation work from the 5-step framework, you get something powerful: a stack in which one or two engineers can safely experiment with AI-driven features across your product. Instead of each experiment becoming a major architectural project, it becomes a configuration and orchestration exercise. That is the difference between AI being a lab project and AI being an everyday part of how you ship.

AI as a tool for capital efficiency

All of this sits inside a bigger shift in how companies grow. In The future of startups: lean, agile and capital-efficient and How tech advancements are enabling startups to scale with less funding, I talk about the end of the “grow at all costs” era. The teams that do well now tend to be smaller, more senior and much more careful about what they build.

AI is one of the reasons that shift is possible. If agents can take on a chunk of the repetitive operational work, and AI-assisted tooling can reduce the time engineers spend on boilerplate, then each person on the team can have a much bigger impact. That changes your hiring plan, your burn, your funding needs and even your strategy. It also changes how you think about roles such as fractional leadership and flexible expert engagements, which I cover in more detail in my leadership and capital-efficiency content.

The important point is that AI becomes part of your financial story. When you talk to your board or investors, you are not just saying “we have an AI feature”. You are showing how AI has allowed you to ship more experiments per quarter, to deliver the same service with fewer manual steps, or to expand the product without doubling the team. That is what capital efficiency looks like in practice.

A realistic 90-day AI plan

Founders often ask how to “get started with AI” in a way that doesn’t turn into a never-ending side project. A useful way to think about it is as a 90-day cycle aimed at getting one meaningful use case into production and learning from it.

The first month is mostly about discovery and decision. You spend time with your team to identify the workflows that feel the most painful and repetitive, then pick one that has clear business impact and is technically feasible. You define what success would look like in concrete terms: for example, reducing response times in support, cutting manual hours in reporting, or improving the speed and quality of lead qualification. You resist the urge to start building until that is clear.

The second month focuses on building the smallest viable version of the solution using the lightest tools that make sense for your context. If you are non-technical, that might mean leaning on an AI app builder or Vibe coding workflow, drawing on the principles from What Is Vibe Coding? and the app builders comparison. If you have engineers, you let them move fast with AI coding assistants and a simple Supabase backend. The aim is not to build a perfect system; it is to get something in front of a small group of real users and start measuring.

The third month is about integration, iteration and decision. If the early signs are good, you start wiring the solution more deeply into your stack, perhaps introducing a simple agent to orchestrate the workflow end to end, using ideas from Beyond Chatbots and How AI Agents Can Automate Complex Tasks. You pay attention to edge cases, to how the team feels about the change, and to whether the numbers match your expectations. At the end of the 90 days, you decide whether to scale this pattern to other workflows, to refine it further, or to park it and move on.

The point of working this way is that AI becomes a series of focused, bounded experiments, not a vague transformation programme. Each cycle leaves you with more knowledge about where AI helps, where it doesn’t, and how your organisation reacts to it.

Where to go next

If you want to go deeper after this article, the most useful next reads are the pieces referenced throughout: the 5-Step AI Validation Framework for scoping your first project, Beyond Chatbots: Automating Boring Workflows with AI Agents for the agent patterns, Why MCP Is About to Change Everything for Founders for the integration layer, and Supabase for AI Builders for the backend.

AI doesn’t need to sit in a separate “innovation” lane. It is simply one of the tools you have for building a better product and a more effective team. If you keep your focus on real problems, measurable outcomes and the lightest stack that gets the job done, you will already be ahead of most of the market.
