Can you orchestrate trust with AI agents?

Inside the hidden challenge of the agentic revolution and its implications for buyers

There is a question that most B2B marketers haven’t yet stopped to ask. They’ve heard of agentic AI — the term is everywhere right now — and many have a working sense of what it means. But few have mapped what happens to buyer trust when an AI agent starts acting on their behalf. Initiating contact. Handling responses. Making decisions. Often without the buyer knowing.

That question is the starting point for the latest episode of the Trust & Influence in B2B podcast, in which Joel Harrison spoke with Andy Johnson, founder and director of client strategy at HUT3, one of the most respected ABM agencies operating in B2B today — and one of the earliest to move from talking about agentic AI to actually building with it.

What agentic AI actually is — and why this moment is different

Before the trust implications can land, the definition needs to be clear. Andy Johnson offers one of the most accessible framings available: think of a conductor and an orchestra. A conductor alone, tapping a baton, produces nothing. Start adding sections — woodwind, brass, strings, percussion — and something remarkable begins to emerge. Agentic AI works the same way. It isn’t a new AI tool. It’s the orchestration of multiple specialist AI tools, working in sequence, each performing a defined function, all directed toward a single outcome.

“Agentic is basically when we’re pulling together all of those tools,” Andy explains. “It enables us to solve much more complex problems and challenges that a single AI tool would be able to do — particularly in a B2B space.” At HUT3, workflows can involve eight or nine different AI tools operating in concert. The analogy isn’t decorative. It’s structurally accurate — and it carries a warning. An orchestra without a conductor doesn’t just underperform. It can produce something actively harmful.
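For readers who think in code, the conductor-and-orchestra idea can be sketched as a pipeline of specialist steps directed toward one outcome. This is a conceptual illustration only — the step names and checks below are hypothetical and don't reflect HUT3's actual stack or tools:

```python
# Conceptual sketch of an agentic workflow: each step is a specialist with
# one defined function, and a "conductor" directs them in sequence.
# All step names are hypothetical illustrations, not real tools.

def research_account(account):
    # Placeholder for an intent-data / research agent.
    return {"account": account, "signal": "evaluating vendors"}

def draft_message(context):
    # Placeholder for a content-generation agent.
    return f"Hi {context['account']}, we noticed you are {context['signal']}."

def review_message(message):
    # Placeholder for a guardrail agent running a human-defined check.
    return "evaluating" in message

def conduct(account):
    """The conductor: runs each specialist toward a single outcome."""
    context = research_account(account)
    message = draft_message(context)
    if not review_message(message):
        raise ValueError("Guardrail failed: message did not pass review")
    return message

print(conduct("Acme Corp"))
```

The point of the structure, as Andy's warning suggests, is that the conductor (`conduct`, with its guardrail check) is what stops the specialists from producing something actively harmful at volume.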

The trust infrastructure nobody is mapping

In traditional B2B, trust is built through human consistency. Timely follow-ups. Relevant content. Personal accountability. These are the instruments through which relationships are built and maintained over time. When AI agents start handling parts of that relationship, the infrastructure shifts — and most marketers haven’t yet mapped how.

Andy’s starting position is deliberately counterintuitive. “Agentic AI can actually be a real trust accelerator,” he argues. His reasoning: trust in B2B is already being broken — by slow responses, inconsistent communications, wrong content delivered to the right person at the wrong time. These aren’t failures caused by AI. They’re human failures that AI, properly deployed, is well positioned to address. “All those gaps where at the moment we might not be particularly good at building trust because there’s lots of challenges around volume, time” — that’s where agentic workflows can genuinely play.

But the accelerator only works if the orchestration is right. And that’s where the hidden challenge lives.

Scaling bad decisions fast

The most significant trust risk in agentic AI isn’t the technology. It’s what happens when organisations deploy it too quickly, on weak foundations. Andy is direct about this: “One of the biggest risks is scaling bad decisions fast — or scaling bad content fast.” A workflow that picks up wrong information, or draws from fragmented data systems, doesn’t just deliver one poor experience. It replicates that failure at volume, at speed, across every account it touches.

The governance gap compounds the problem. Andy points to a striking parallel: when organisations built web environments, they developed staging and sandbox disciplines as a matter of course — test environments that stood between development and live deployment. Those disciplines largely don’t exist yet for agentic AI. “We are seeing a significant lack of governance across the board when using AI tools,” he observes. Employees using ChatGPT with training mode enabled. Disconnected systems feeding incomplete data into workflows designed to act on it. Organisations moving fast because the competitive pressure to adopt is real — and governance feels like friction.

It isn’t friction. It’s the conductor’s baton.

Accountability doesn’t transfer to the agent

When something goes wrong — and in any system operating at scale, something eventually will — the question of accountability becomes urgent. Who owns the mistake when there’s no individual human who made the call? Andy’s answer is unambiguous: “The accountability always sits with the human and the governance.” The agentic workflow was built by people who understood the problem they were trying to solve. The guardrails were their responsibility to put in place. The outcome — good or bad — belongs to the orchestrator.

This isn’t merely a philosophical point. It has practical implications for how agentic systems should be designed. A well-built workflow, Andy argues, should be capable of catching errors faster than a purely human process — surfacing the problem, signposting the severity, and routing it appropriately. “If it’s an issue that really can’t be dealt with via Stream A, it needs somebody to actually pick up the phone and have that conversation — and the workflow can signpost that a lot quicker than possibly by the time it’s gone through customer services.” The agent flags the failure. The human makes the apology. That division of labour isn’t a limitation of agentic AI. It’s a feature of it, if you design for it.

The buyer’s perspective: relevance over mechanism

There is a legitimate question about transparency — whether buyers should know, or need to know, that they’re interacting with an AI-orchestrated workflow rather than a human. Andy’s position is pragmatic. “Ultimately, buyers care about relevance,” he says. “If the content is the right content, delivered in the right way at the right time, then I think it doesn’t really matter whether it’s coming from an AI.”

The analogy he reaches for is marketing automation. No B2B buyer stops to question whether a HubSpot sequence sent their email. What they notice is whether it was relevant and timely, or whether it wasn’t. The same filter will apply to agentic AI. Poor execution won’t just underperform — it will invite exactly the scrutiny that well-executed agentic marketing never attracts. “If it’s really poorly delivered, then it matters,” Andy notes. “It gives the opportunity for the end user to ask: who’s writing this? How am I receiving this?” Relevance is the shield. Governance is what keeps it in place.

The marketer’s role: shrinking middle, growing edges

One of the more striking claims in the conversation concerns the impact of agentic AI on marketing roles — a topic that generates considerable anxiety in the profession. Andy pushes back on the dominant fear narrative with evidence from his own organisation. HUT3 has grown its team over the last twelve months more than in any previous equivalent period — specifically because of agentic AI, and specifically into new roles that didn’t previously exist.

His framing of what’s happening is worth sitting with. The middle ground of marketing — activation, content generation, routine execution — will compress. That compression is real and shouldn’t be minimised. But the strategic edges on either side of that middle are expanding. “The orchestration of an agentic workflow is incredibly strategic,” Andy argues. “And I think it’s really quite an interesting space.” The marketers best placed to thrive are those who understand what good looks like in the execution layer — because that knowledge is precisely what’s needed to direct the AI that will increasingly handle it.

The skills required are shifting from task competency toward problem-solving, strategic judgement, and the ability to design and oversee complex orchestrated systems. “There’s a lot more work to be done at the front end now than ever before,” Andy observes. The conductor role isn’t being automated. It’s becoming more important.

Who owns trust? The human always does.

The question the title poses has a clear answer, and Andy gives it consistently throughout the conversation. Trust — the building of it, the maintenance of it, the recovery of it when it breaks — belongs to the human orchestrator. The agent is the instrument. The human is responsible for what it plays, how it plays, and what happens when it hits a wrong note.

That isn’t a constraint on what agentic AI can achieve. It’s the condition that makes it trustworthy. B2B marketers who approach the agentic revolution with that understanding — who invest in data foundations, governance frameworks, connected systems and clear role delineation before they scale — are the ones who will find that Andy’s counterintuitive claim holds true. Agentic AI, done properly, really can be a trust accelerator. The orchestra is already assembled. The question is whether someone credible is standing at the podium.

Practical takeaways: orchestrating trust with agentic AI

For B2B marketing leaders looking to deploy agentic AI with confidence, Andy Johnson’s experience at HUT3 points to five conditions that need to be in place before you scale.

1. Build on connected data, not fragmented systems. Agentic workflows are only as good as the data they draw on. Disconnected CRM systems, incomplete intent data, and siloed organisational intelligence are the primary source of bad outputs. Data foundations aren’t a precondition to starting — but they are a precondition to scaling.

2. Define the AI/human role boundary explicitly. Know which parts of your workflow the agent handles and which require human judgement. This isn’t a one-time decision — it should be revisited as capability evolves. The clearer the boundary, the easier accountability becomes.

3. Treat governance as infrastructure, not overhead. Build your staging disciplines now. Test before you scale. Establish who owns the workflow, who reviews outputs, and what the escalation path looks like when something goes wrong. The organisations that will struggle are those that treated governance as a blocker rather than a foundation.

4. Design for error detection, not just error prevention. No system operating at scale will be error-free. Build workflows that surface problems quickly and route them to the right human. The agent should flag the failure; the human should resolve it.

5. Invest in orchestration skills, not just AI tools. The competitive advantage won’t come from which tools you use — it will come from how well you orchestrate them. Strategic thinking, problem-solving capability and a clear understanding of what good execution looks like are the skills that will define the next generation of B2B marketing leaders.
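Takeaways 2 through 4 — an explicit AI/human boundary, governance as infrastructure, and designing for error detection — can be pictured together in a small sketch. The severity labels, keyword triage, and escalation actions below are illustrative assumptions, not a real framework:

```python
# Hypothetical sketch of "design for error detection": the agent classifies
# and signposts a failure; the human resolves it. Thresholds and labels are
# illustrative assumptions only.

def classify_severity(error_message):
    # A guardrail the humans defined up front: crude keyword triage.
    critical_terms = ("wrong account", "data breach", "pricing error")
    return "critical" if any(t in error_message for t in critical_terms) else "routine"

def route(error_message):
    """The agent flags the failure; accountability stays with the human."""
    severity = classify_severity(error_message)
    if severity == "critical":
        # Signpost: somebody needs to pick up the phone, per the escalation path.
        return {"severity": severity, "handler": "human", "action": "call the account"}
    # Routine issues can flow through the automated stream.
    return {"severity": severity, "handler": "agent", "action": "auto-correct and log"}

print(route("content sent to wrong account"))
```

The design choice mirrors Andy's division of labour: the workflow surfaces and signposts the problem faster than a purely human process could, but the critical path always terminates in a person.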
