A founder comes to me wanting to deploy AI. We talk for a while about their company, processes, data. Then I ask: "Who at your company is going to make these decisions?"
Silence.
Most companies approach AI like buying software — pick a vendor, sign the contract, deploy. But deploying AI is a series of decisions that fundamentally change how the company operates. And these decisions can't be delegated to a vendor or to IT. They belong with leadership.
Build, Buy, or Integrate?
The first strategic fork. Three options:
Buy — a ready-made SaaS solution. Quick to deploy, lower upfront investment, limited customization.
Build — develop your own solution. Full control, higher investment, longer timeline.
Integrate — take existing AI tools and connect them through an automation layer. A flexible middle path.
In practice most projects combine all three — and the buy → integrate → build progression is usually safer than building from scratch.
→ A complete Build vs Buy vs Integrate guide for AI
Tooling — specialization over generalism
ChatGPT is excellent for personal productivity. Gemini fits work with Google Workspace documents. Neither is enough for automated company processes — you need specialized tools connected into a functional whole.
Instead of one generic model, consider a set of specialized tools:
- Voice communication — speech-to-text for transcription, text-to-speech for synthesis, a separate language model for processing. Each layer can be optimized independently.
- Knowledge base and company data — a RAG architecture where AI draws from your own data, not from generic training. Results are more accurate and auditable.
- Orchestration and automation — tools like n8n connect AI components to your existing systems (CRM, ERP, email) without custom code for every integration.
- Monitoring and logging — a separate layer for tracking what AI is doing, how it answers, and where it fails.
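The knowledge-base layer is the easiest to see in code. Here is a minimal sketch of the RAG pattern, with a toy word-overlap scorer standing in for a real embedding model; all names are illustrative, not from any specific library:

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query words appear in the document.
    A production RAG system would use vector embeddings instead."""
    query_words = set(query.lower().split())
    return sum(1 for word in set(doc.lower().split()) if word in query_words)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant company documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """The model answers from retrieved company data, not generic training."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The point of the pattern is the last function: the model only ever sees your own documents as context, which is what makes answers auditable.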
Concrete example: We wanted to automatically audit customer support conversations — identify recurring issues, measure answer quality, flag exceptions. Instead of one comprehensive solution, we connected three specialized components: transcription, an analysis model, and reporting into a dashboard. Each part could be swapped out without touching the others.
An architecture that lets you swap one component without rewriting the whole pays off — AI tools evolve fast.
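The audit pipeline above can be sketched as three narrow interfaces, so any single component can be replaced without touching the others. This is a sketch of the idea, not a specific implementation; all class and method names are assumptions:

```python
from typing import Protocol

class Transcriber(Protocol):
    def transcribe(self, audio_id: str) -> str: ...

class Analyzer(Protocol):
    def analyze(self, transcript: str) -> dict: ...

class Reporter(Protocol):
    def report(self, findings: dict) -> None: ...

def audit_conversation(audio_id: str, t: Transcriber,
                       a: Analyzer, r: Reporter) -> dict:
    """Each step depends only on an interface, so swapping a vendor
    (a new transcription API, a new analysis model) is a one-line change
    at the call site, not a rewrite of the pipeline."""
    findings = a.analyze(t.transcribe(audio_id))
    r.report(findings)
    return findings
```

The structural typing here means a new vendor's client class never has to inherit from anything; it just has to expose the right method.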
Where to bring a human in, and where to let AI decide?
Most companies, on first contact with AI, want to automate everything. That's a mistake — not because AI isn't good enough, but because zero human oversight creates uncontrolled risk.
Where human oversight makes sense:
- Customer communication in sensitive situations — complaints, escalations, complex inquiries
- Financial and legal documents where a mistake has measurable consequences
- Decisions about customers where there's regulatory risk (the EU AI Act 2024 explicitly defines categories where human oversight is legally required)
Where it slows you down without value:
- Answers to recurring questions with a clear correct answer
- Classification and routing where a mistake has minimal impact
- Internal drafts and summaries used as working material
A practical test: what happens if AI answers wrong? If correction is easy — you probably don't need oversight. If a legal or reputational problem arises — oversight is mandatory.
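That test can be encoded as a simple routing rule. The category names and the confidence threshold below are assumptions you would tune per process, not fixed values:

```python
# Assumed sensitive categories; in practice this list comes from
# your own risk review, not from any standard.
HIGH_RISK = {"complaint", "legal", "financial"}

def needs_human(category: str, confidence: float,
                threshold: float = 0.85) -> bool:
    """Escalate when the topic is sensitive or the model is unsure;
    let AI answer routine, low-impact questions on its own."""
    return category in HIGH_RISK or confidence < threshold
```

A routine FAQ with a confident answer goes straight out; a complaint always goes to a human, no matter how confident the model is.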
Governance — who runs this?
This is the decision companies most often postpone. And then they pay for it.
A typical pattern: the AI project works well for the first six months, while everything is run by an enthusiastic champion. Then the champion picks up other priorities, the knowledge base goes stale, and nobody knows who can approve what. The project quietly dies.
Minimal governance framework:
- AI owner — a specific name, specific accountability. Not "IT", not "everyone".
- Review cadence — how often outputs and metrics are reviewed. At least monthly in the first year.
- Data policy — what can go into external models, what can't. One page is enough.
- Incident process — who handles an AI mistake and how.
- Update process — who refreshes the knowledge base and when.
These decisions belong with leadership
A vendor can propose the architecture. IT can implement. But decisions about where to put a human in the loop, what can go to an external model, who's accountable for AI outputs — those are business decisions with material impact.
Companies that make them deliberately and upfront have a markedly higher chance of success. Companies that postpone or delegate them usually come back with a project that works technically but doesn't help the business.
If you want to walk through these decisions with someone who's seen them play out across different companies — I'd be glad to take a look together.