Hiring an “AI Team” Is the Wrong Move

Job board. “Head of AI, reporting to CEO, owns the company’s artificial intelligence strategy across all functions.” Competitive salary. Equity. The post gets 400 applications in 48 hours.

Somewhere in that same company, the marketing team is copy-pasting outputs from ChatGPT into a Google Doc. The support team has a Zapier workflow that someone built on a Friday afternoon and nobody fully understands. The product team is having a separate, parallel conversation about an AI-assisted onboarding flow that nobody told the new Head of AI about, because they weren’t hired yet when it started.

This is what AI adoption actually looks like inside most companies. Messy, distributed, already happening everywhere, driven by whoever was curious enough to start experimenting. And the response from founders, almost universally, is to hire a person or build a team to own it centrally. To bring order to the chaos. To have someone accountable for the strategy.

It feels sensible. It is, in almost every practical sense, the wrong call.

The Silo Problem

The logic behind a centralised AI team is borrowed from how companies handled data science a decade ago. Build a specialist unit, let it develop the capability, then let the business pull from it as needed. That approach produced data teams that spent most of their time fielding requests from other functions, struggling to prioritise, and slowly becoming a bottleneck that everyone complained about and nobody wanted to dismantle.

AI is replicating that pattern at speed, and the consequences are likely to be worse because the surface area is larger. Data science touched a handful of functions. AI touches everything: every function, every workflow, every person who produces anything. Centralising ownership of that into a single team doesn’t consolidate capability. It creates a queue.

The teams waiting in that queue don’t sit still. They either work around the AI team entirely, which produces fragmented, ungoverned adoption, or they defer to it and slow down. Neither outcome is what the founder had in mind when they posted the job.

What Gets Lost in Translation

Something more fundamental than resource allocation is going wrong here. When AI capability lives in a separate team, the people closest to the actual work stop developing their own relationship with the tools. They become consumers of AI output rather than practitioners. The marketing manager who could have spent six months developing genuine fluency with how AI fits into their specific workflow instead raises a ticket and waits.

This matters because the most valuable AI applications inside any company are almost never the obvious, generic ones. They’re the specific, contextual ones. The way a particular sales team structures discovery calls. The specific tone and format that retains customers at the renewal stage. The edge cases in the support queue that a model could be tuned to handle differently. That knowledge lives in the people doing the work, and it requires their direct involvement to translate into anything useful.

A centralised AI team, however talented, is always working at one remove from that context. They can build tools. They can set standards. They can run evaluations. What they can’t do is replicate the intuition of someone who has been doing the job for two years and knows exactly where the friction is.

Why Founders Build Them Anyway

Part of it is the same instinct at work whenever a company makes a premature senior hire: it feels like seriousness. Having an AI team signals to investors, to the board, and to the market that the company is treating this as a strategic priority rather than a collection of Friday afternoon experiments.

Part of it is genuine uncertainty. AI is moving fast enough that founders don’t always feel equipped to make decisions about it themselves. Hiring someone who appears to know more about it is a way of outsourcing that discomfort.

And part of it is a real governance problem that does need solving. Ungoverned AI adoption creates security risks, inconsistent outputs, compliance exposure. These are legitimate concerns. The mistake is conflating governance with ownership. A centralised team that owns AI strategy tends to accumulate control over things it shouldn’t, and governs the things it should through bureaucracy rather than enablement.

What Embedded Actually Looks Like

The alternative is less tidy but considerably more effective. AI fluency gets built inside each function, by the people in that function, with support rather than direction from the centre.

In practice, this means the marketing team owns its own AI workflows. The sales team owns theirs. Support, product, finance, each develops capability that is specific to its context, accountable to its own outcomes, and iterated by the people with the most relevant knowledge. A light central function, maybe two or three people, handles security standards, model governance, shared infrastructure, and cross-functional learning. It shares and enables. It does not own.

The early employees who develop genuine AI fluency inside their function become something valuable: practitioners who understand both the domain and the tools well enough to push both forward. That combination is hard to hire from outside and compounds quickly when it’s cultivated deliberately.

The Bottleneck You Built Yourself

Two years from now, the companies with centralised AI teams are going to be dealing with a familiar set of problems. A backlog of requests the team can’t clear. Functions that have quietly built their own shadow workflows to get around the queue. A Head of AI who is spending most of their time in stakeholder meetings and not enough time building anything. Political tension between the AI team and the functions it was supposed to serve.

The companies that embedded the capability early, that made AI fluency a functional skill rather than a specialist one, will have something different. Not a team that owns AI. A company that knows how to use it.

That’s a harder thing to build deliberately. It requires tolerating a messier, less legible kind of progress for longer than feels comfortable. But the alternative is a bottleneck with a headcount and a strategy deck, and those tend to age poorly.