
When I started in marketing a decade ago, I wanted to do strategy right away. That was my jam, even if I had very little experience in it (except board/video games).
But there’s a catch-22: you usually need to be a manager to lead strategy. But you also need management experience to become a manager.
That’s actually how I lost my SEO internship btw 😅 I told them I wanted to help with strategy/branding, and stopped working there a few weeks later…it wasn’t renewed because I wasn’t “loving” SEO enough (their words, not mine)
In hindsight, that makes sense. Why would you let someone manage work they’ve never done themselves? Which is why I think product marketing is a pivotal role for putting the 2nd M of PMM front and center of the job.
The new generation of PMMs will be managers of AI agents, here’s what the future will look like 👇
We’re tinkerers at heart
We surveyed 100 PMMs to create the PMM Software Report. 91% of them said they want at least one AI tinkering tool. Most of them think the job is using one. They're already wrong.
From the data, Claude beats GPT 4:1 in product marketers’ preferences. But as the models get better, they will automatically handle the smaller, more concrete problems, which just gives you more bandwidth.
If there’s a problem in a competitor analysis, a lower-level frame requires you to describe it in detail (e.g. inadequate competitor classification, hallucinated feature parity, a misunderstanding of the product, etc.) and then suggest possible solutions. But with a higher frame, you can stay abstract: “there seems to be a problem, can you fix it?”
The PMM role will be less about communicating the mechanics of a problem and more about defining what the most important problem is. The higher the frame, the more possible solutions unfold, and the more room you have to decide what even counts as a solution in the first place.
That means: less "write the battle card" and more "specify what a battle card needs to be, then audit the agent's draft."
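Here’s a minimal sketch of what “specify, then audit” could look like in practice. The spec fields and the `audit_draft` helper are hypothetical, just to show the shift from writing the deliverable to defining and checking it:

```python
# Hypothetical example of "spec first, audit second": the PMM defines
# what a battle card must contain, then checks the agent's draft
# against that spec instead of writing the card by hand.

BATTLE_CARD_SPEC = {
    "sections": ["positioning", "pricing", "objection handling"],
    "max_words": 400,
}

def audit_draft(draft, spec=BATTLE_CARD_SPEC):
    """Return the list of spec violations (empty list means it passes)."""
    issues = []
    for section in spec["sections"]:
        if section not in draft.lower():
            issues.append(f"missing section: {section}")
    if len(draft.split()) > spec["max_words"]:
        issues.append("draft exceeds word budget")
    return issues

draft = "Positioning: we win on speed. Pricing: flat. No objections covered."
print(audit_draft(draft))  # ['missing section: objection handling']
```

The manager’s work lives in the spec; the agent’s work gets measured against it.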
What do AI agents actually look like?
Well, to answer this question, we first need to understand what actually makes an AI agent.

The anatomy of an AI agent
There are four components, and the harness is what makes them work together to be agentic (AKA holds your content, runs your frameworks, enforces your standards, and produces a structured output).
Agents can handle sophisticated tasks, but their implementation is often straightforward. They are typically just LLMs using tools based on environmental feedback in a loop.
Let’s dig deeper into each of them:
Session: This is where your discussion gets its context and state injected. A good setup removes the early back-and-forth needed to understand what matters to you. This is how you avoid context rot or loss of progress.
Tools: When you’re using MCP servers, skills, or software, this is where the agent gets clear instructions on what to use and when.
Sandbox: By having everything in a sandbox environment, with clear access and permissions, you’re protecting yourself from the agent going rogue (and sharing confidential data).
Orchestration: The agent then executes specific actions, such as running code, searching databases, or browsing the web, giving the model "hands" to interact with the real world.
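The four components above can be sketched as one short loop. Everything here is a toy stand-in (the `fake_llm` function, the `search_docs` tool, the message format), not any real model API, but it shows how an LLM plus tools plus a loop becomes an agent:

```python
# Minimal agent loop sketch: an LLM picks tools based on environmental
# feedback until it decides it's done. `fake_llm` is a stand-in for a
# real model call; the tool and message format are illustrative only.

def search_docs(query):
    # Toy "tool": a real harness would hit a database or the web here.
    return f"3 results for '{query}'"

TOOLS = {"search_docs": search_docs}        # the Tools component

def fake_llm(history):
    # Stand-in for a model call. A real Session injects context/state here.
    if not any(role == "tool" for role, _ in history):
        return {"action": "tool", "name": "search_docs", "args": ["battle cards"]}
    return {"action": "finish", "answer": "Draft ready for review."}

def run_agent(task, max_steps=5):
    history = [("user", task)]              # the Session: context + state
    for _ in range(max_steps):              # the Orchestration loop
        decision = fake_llm(history)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = TOOLS[decision["name"]]      # Sandbox: only whitelisted tools run
        result = tool(*decision["args"])
        history.append(("tool", result))    # environmental feedback
    return "Stopped: step budget exhausted."

print(run_agent("Summarize competitor battle cards"))
# → Draft ready for review.
```

The sandbox shows up as the whitelist: the model can only call tools you registered, with the permissions you gave them.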
The Harness
Keep in mind that there are no bad agents, only bad context.
While prompt engineering focuses on how to communicate with the model, harness engineering focuses on the systems, constraints, and orchestration logic that make autonomous AI agents reliable and actionable.
It’s the loop that makes everything work together.
The Approval
Of course, that also means that you’re responsible for everything your agents ship (and hallucinate), so it’s important to have guardrails in place to make sure you don’t push bad data or decisions.
Here’s an easy workflow to make sure the quality stays there:
1. Finish the task, then ask for a confidence score. Once Claude Code has a working solution, type: “How confident are you in this, 1–100?” If it comes back above 90, move on. If not, go to step 2.
2. Send it back. Tell it: “Find improvements and get to 90+.” Claude will catch edge cases, tighten logic, or flag assumptions it glossed over the first time. Repeat until it crosses the threshold.
3. Ship at 90. Don’t chase 100. That’s where you burn tokens on diminishing returns. At 90, it’s checked its own work and flagged what it wasn’t sure about.
With this in place, you can put the Manager back in Product Marketing Managers.
Why this hits PMMs harder than other functions
The role of product marketing is already a “manager without reports” role. On our podcast We’re Not Marketers, we’ve been debating where the job lands (product, marketing, or someplace else like strategy).
You usually brief sales, product, or content. And since we’ve been doing IC (individual contributor) work for our entire careers, managing an agent is just one more report.
Most other roles need to learn this, while PMMs are already doing it. The transition and window are shorter than you think.
Here’s an example of what agent management looks like in Discord, from when I was supervising two agents helping me do outbound.
The post-AI PMM will be a manager running a one-person team that scales by spec, not headcount. You’ll own a system instead of overpaying to rent a service.
The PMM is the protagonist who hires the agent, not the worker the agent replaces.
That’s why we need to build with AI, not just use it.
Merci, salut 👋
