When people talk about AI in marketing, the conversation tends to go in one of two directions.
Direction one: AI as a tool you prompt. You ask it to write something, it writes it, you edit it. You're still doing most of the work. The AI is fast but dependent.
Direction two: AI as autonomous operator. You set goals, the AI runs the campaigns, you check the results occasionally. The AI is fast and independent, but you're disconnected from what's actually happening.
Neither of these felt right to us.
The problem with each model
The first model doesn't solve the core problem. In-house marketing teams are stretched not because they lack writing ability. They're stretched because there's too much to monitor, too many small decisions to make, too many actions that need to happen for campaigns to improve. A writing assistant helps with one task. It doesn't reduce the cognitive load of the job.
The second model creates a different problem. When something goes wrong, and eventually something will, you have to debug a system you weren't watching closely enough to understand. You also lose the feedback loop. Marketing judgment gets sharper when you're making decisions and seeing the results. Hand everything to an autonomous system and you stop developing that judgment.
What human-in-the-loop actually means
The phrase gets used loosely. For us, it has a specific structure.
The AI does the monitoring, the analysis, and the preparation. It finds the search terms that need review. It drafts the social posts. It identifies the pages where a title tag change would lift click-through rate. It prepares the budget reallocation, complete with the reasoning. Then it stops. It places everything in an approval queue and waits.
You make the decision. You approve or reject. If you reject, you tell the system why. That calibrates the next round of recommendations.
When you approve, the AI executes. The keyword gets added. The post gets scheduled. The budget shifts. No manual follow-through required.
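In code terms, the loop is small. Here's a minimal sketch of that structure, with entirely hypothetical names (ProposedAction, ApprovalQueue, execute) rather than our actual implementation: the AI proposes an action with its reasoning attached, the queue holds it until a person decides, and a rejection carries the reason back as feedback.

```python
# Minimal sketch of the approval-queue flow described above.
# All names and fields are illustrative, not the product's real API.
from dataclasses import dataclass, field


@dataclass
class ProposedAction:
    action_type: str          # e.g. "add_keyword", "schedule_post", "shift_budget"
    payload: dict             # what would change if approved
    reasoning: str            # why the system is proposing it
    status: str = "pending"   # pending -> approved | rejected
    rejection_reason: str = ""


@dataclass
class ApprovalQueue:
    pending: list[ProposedAction] = field(default_factory=list)
    feedback: list[ProposedAction] = field(default_factory=list)

    def propose(self, action: ProposedAction) -> None:
        # The AI prepares the action and its reasoning, then stops here.
        self.pending.append(action)

    def approve(self, action: ProposedAction) -> None:
        # The human decides; only then does execution happen.
        action.status = "approved"
        self.pending.remove(action)
        execute(action)

    def reject(self, action: ProposedAction, reason: str) -> None:
        # A rejection carries the "why", which calibrates later recommendations.
        action.status = "rejected"
        action.rejection_reason = reason
        self.pending.remove(action)
        self.feedback.append(action)


def execute(action: ProposedAction) -> None:
    # Placeholder for the follow-through: add the keyword, schedule the post,
    # shift the budget. No manual execution step for the approver.
    print(f"executing {action.action_type}: {action.payload}")


# Illustrative usage: the system proposes, the person decides.
queue = ApprovalQueue()
action = ProposedAction(
    action_type="add_keyword",
    payload={"campaign": "Search - Brand", "term": "example term"},
    reasoning="Recurring search term with strong engagement.",
)
queue.propose(action)
queue.approve(action)   # or: queue.reject(action, "Not relevant to our ICP")
```

The shape is the point: execution only ever happens downstream of an approval, and every rejection leaves a reason the system can learn from.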
Why this model works
Decisions stay with the person who has context. You know things about your business that no AI does. A competitor just entered your market. There's a product change coming next month. Your most important customer is in a specific industry. These things affect whether a recommendation is right or wrong. The AI can't know them. You can, and you should be the one deciding.
Accountability stays clear. When a campaign change works, you know why. When it doesn't, you know what you approved and can adjust your thinking for the next decision. The feedback loop stays intact. You get smarter at the same time as the system does.
Trust builds in both directions. You trust the system more as you see that what it proposes is well-reasoned and based on real data. The system learns your preferences as it sees what you approve and what you reject. Over time, the recommendations get closer to what you'd actually do.
Where autonomy fits
Full autonomy is still in the product. For recurring, low-stakes actions, once you've built enough confidence in the system's judgment, you can turn off the approval step for specific action types. But it's opt-in, not default. You choose to extend autonomy, not have it assumed.
The reason for this design: autonomy works better when it's earned. A system that's been running in approval mode for three months, with a clear track record of what it recommends and what you actually want, is in a much better position to act autonomously than one you handed the keys to on day one.
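As a sketch, that opt-in looks like a per-action-type allowlist that starts empty. The names below (AutonomyPolicy, route) are illustrative assumptions, not the product's actual configuration surface:

```python
# Sketch of opt-in autonomy layered on the approval queue above.
# Illustrative only; the real rules and names may differ.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AutonomyPolicy:
    # Empty by default: every action type waits for approval until the user
    # explicitly opts that type out of the queue.
    auto_approved_types: set[str] = field(default_factory=set)

    def enable(self, action_type: str) -> None:
        self.auto_approved_types.add(action_type)


def route(action_type: str, payload: dict, policy: AutonomyPolicy,
          execute: Callable[[str, dict], None],
          enqueue: Callable[[str, dict], None]) -> None:
    # Opted-in, low-stakes types execute directly; everything else still
    # lands in the approval queue and waits for a human decision.
    if action_type in policy.auto_approved_types:
        execute(action_type, payload)
    else:
        enqueue(action_type, payload)
```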
That's the model. Not because autonomy is bad, but because autonomy works better when it starts with trust.