

The first wave of interest in enablement agents centred on a straightforward question: what are they?
We covered the foundations in our earlier piece on AI enablement agents. What they are. How they work. Where they add value across sales, coaching, onboarding, and complex problem-solving.
The questions have shifted.
Now teams want specifics. What data do these agents use? How do they integrate with existing coaching workflows? How do they identify skill gaps across a team in practice, not in theory? These are great questions, and they signal organisations moving from curiosity to serious evaluation.
This piece goes deeper into the mechanics. If you haven't read the first piece, start there. This one assumes you already understand what an enablement agent is and want to know how the pieces fit together.
We covered the data architecture in detail in the first piece. The short version: enablement agents draw from four primary sources — conversation intelligence, CRM records, the LMS, and manager coaching notes.
This assumes your data is clean and connected. For many organisations, it isn't. We'll address the reality of this later.
The point worth emphasising here is how these sources connect. The agent doesn't query each system in isolation. It cross-references across all four to build a complete picture.
In practice, this might look like the following: the agent sees a rep's discovery calls averaging three questions per call (conversation data). Their conversion rate from discovery to proposal sits 22% below the team average (CRM data). The rep completed the advanced discovery module three months ago (LMS data).
The agent now has a specific, data-backed coaching recommendation. Not "improve discovery skills." Instead: "Despite completing discovery training, this rep's call behaviour hasn't shifted. Recommend targeted coaching on open-ended questioning with call review."
That cross-referencing is what separates an enablement agent from a dashboard. And it's what makes everything below possible.
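To make that concrete, here is a minimal sketch of how such a cross-reference rule might look in code. Everything in it is hypothetical: the `RepSnapshot` fields, the thresholds, and the rule itself are invented for illustration, not drawn from any particular platform.

```python
from dataclasses import dataclass

@dataclass
class RepSnapshot:
    """Hypothetical cross-system view of one rep; all fields are illustrative."""
    avg_discovery_questions: float   # from conversation data
    conversion_vs_team_pct: float    # from CRM: delta against team average
    discovery_training_done: bool    # from the LMS
    days_since_training: int         # from the LMS

def coaching_flag(rep: RepSnapshot) -> str | None:
    """Cross-reference the three signals; None means nothing to flag."""
    behaviour_flat = rep.avg_discovery_questions < 5        # illustrative threshold
    underperforming = rep.conversion_vs_team_pct <= -15
    training_had_time = rep.discovery_training_done and rep.days_since_training >= 60

    if behaviour_flat and underperforming and training_had_time:
        return ("Despite completing discovery training, call behaviour hasn't "
                "shifted. Recommend targeted coaching on open-ended questioning "
                "with call review.")
    return None

# The rep from the example: three questions per call, 22% below team average,
# training completed roughly three months ago.
print(coaching_flag(RepSnapshot(3.0, -22.0, True, 90)))
```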
The most frequent question we hear: "How does this fit into the coaching we already do?"
Technically, an enablement agent is capable of handling the entire coaching flow. Identify the skill gap from performance data. Deliver feedback directly to the rep. Run AI roleplay to practise the skill. Assign learning content. Track whether behaviour shifts. No human manager involved at any point.
Some organisations are doing exactly this, and for certain use cases it works. Practising objection handling against an AI customer at 9am before a big meeting has genuine value.
But "technically possible" and "best approach" aren't the same thing. In BetterUp's 2025 AI coaching pilot, only 15% of employees preferred AI-only coaching. 51% preferred a hybrid model with both AI and human involvement. The Conference Board's research found AI handles around 90% of daily coaching functions, but humans are still needed for conversations involving emotion, politics, or values. The stuff most critical to real development.
Our recommendation: AI for preparation and follow-through. Humans for delivery.
Before the conversation, the agent does the preparation most managers don't have time for. It pulls the rep's recent call recordings and analyses communication patterns against top performer benchmarks. It checks deal progression data for stalls or regression. It reviews learning history to see whether relevant training was completed and whether it produced a behaviour change. It looks at activity trends, meeting frequency, and pipeline health. Then it synthesises all of this into a coaching brief the manager reads in two minutes before walking into the conversation. The brief includes skill gaps from recent data, focus areas ranked by business impact, and suggested approaches based on what's worked across the team. The manager decides the focus. The agent removes the homework.
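As a rough sketch, the brief itself could be a small structured object assembled from those inputs. The shape below is hypothetical, but each field maps to a data source the manager would otherwise check by hand, and the ranking mirrors the "focus areas ranked by business impact" described above.

```python
from dataclasses import dataclass, field

@dataclass
class FocusArea:
    skill: str
    business_impact: float    # e.g. pipeline value at risk; purely illustrative
    suggested_approach: str   # based on what has worked across the team

@dataclass
class CoachingBrief:
    rep_name: str
    skill_gaps: list[str]                # from call analysis vs benchmarks
    deal_flags: list[str]                # from CRM progression data
    training_without_change: list[str]   # LMS completions with no behaviour shift
    focus_areas: list[FocusArea] = field(default_factory=list)

    def ranked_focus(self) -> list[FocusArea]:
        # Surface the most consequential item first, so a two-minute read
        # starts with what matters.
        return sorted(self.focus_areas, key=lambda f: f.business_impact,
                      reverse=True)
```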
During the conversation, the agent stays out of the way. Coaching remains human-led. This is where trust, judgment, and the ability to read the room do the work AI still isn't equipped for.
After the conversation, the agent picks back up. It tracks follow-through. Did the behaviour change? Did deal velocity shift? If a coaching approach isn't producing results, the agent flags it early. No more waiting three months for a quarterly review to surface the issue.
This gives you the scalability and consistency of AI with the judgment and empathy of a human coach. All managers get the same quality of insight, and every rep gets coaching based on their actual data rather than their manager's availability.
Traditional skill gap analysis follows a predictable pattern: run an assessment, compare results to a competency framework, build a development plan.
The process isn't the problem. The data feeding it is.
Self-assessments are unreliable. Manager assessments are inconsistent. Competency frameworks describe ideal states but rarely connect to measurable business outcomes. The result? Development plans built on opinions rather than evidence.
Enablement agents approach this differently. They identify gaps through observed behaviour, not self-reporting.
Here's what this looks like in practice. The agent pulls a rep's call recordings and analyses them against top performer benchmarks for things like discovery question depth, talk-to-listen ratio, and objection handling. It cross-references these patterns against CRM data to see which behaviours correlate with wins and losses. It checks learning history to see if relevant training was completed and whether it changed anything. It reviews manager coaching notes and one-to-one frequency.
From this, the agent builds a skill profile for each individual. Not a point-in-time snapshot, but a trend over weeks and months. Which skills are improving. Which are declining. Which have stayed flat despite development investment.
A rep who rates themselves highly on negotiation but consistently discounts beyond authority limits has an observed gap the self-assessment missed entirely. The agent sees the discount patterns in CRM data, cross-references them against call recordings where pricing conversations occur, and identifies the specific breakdown point.
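A toy version of that benchmark comparison might look like this. The metric names, benchmark values, and 20% tolerance are all invented; real benchmarks would come from the top-performer analysis described above.

```python
# Invented benchmarks, stated so that higher is always better.
TOP_PERFORMER_BENCHMARKS = {
    "discovery_question_depth": 7.0,    # avg probing questions per discovery call
    "listen_share": 0.55,               # fraction of the call spent listening
    "discount_within_authority": 1.0,   # share of deals discounted within limits
}

def observed_gaps(rep_metrics: dict[str, float], tolerance: float = 0.20) -> list[str]:
    """Return the metrics where a rep trails the benchmark by more than `tolerance`."""
    return [metric for metric, benchmark in TOP_PERFORMER_BENCHMARKS.items()
            if metric in rep_metrics
            and rep_metrics[metric] < benchmark * (1 - tolerance)]

# The self-assured negotiator from above: call metrics look fine, but CRM data
# shows only 60% of their deals stay within discount authority.
print(observed_gaps({
    "discovery_question_depth": 7.5,
    "listen_share": 0.60,
    "discount_within_authority": 0.60,
}))  # -> ['discount_within_authority']
```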
At the team level, the same process surfaces systemic patterns. If seven of twelve reps struggle with multi-stakeholder deal management, the response is different from if one rep struggles. A team-wide gap signals a training need. An individual gap signals a coaching need. The agent distinguishes between them.
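The team-versus-individual distinction then reduces to a simple threshold. A hypothetical rule of thumb:

```python
def classify_gap(reps_with_gap: int, team_size: int,
                 team_threshold: float = 0.5) -> str:
    """Hypothetical rule: a gap shared by most of the team points to training;
    an isolated gap points to coaching. The 50% cut-off is illustrative."""
    if reps_with_gap / team_size >= team_threshold:
        return "team-wide gap: training need"
    return "individual gap: coaching need"

print(classify_gap(7, 12))   # -> team-wide gap: training need
print(classify_gap(1, 12))   # -> individual gap: coaching need
```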
Connecting gaps to business outcomes is where this gets powerful. Traditional approaches tell you a rep needs to improve their presentation skills. An enablement agent tells you their presentation-stage conversion rate is 18% below benchmark, and call analysis shows they spend 74% of presentations on features with minimal tie-back to discovery findings. Specific. Quantified. Actionable.
This is also what makes the business case for L&D investment more concrete. "We delivered 4,000 hours of training this quarter" is an activity metric. "Reps who received targeted coaching on multi-stakeholder selling improved their win rate by 11% over 90 days" is a business outcome. The enablement agent provides the data trail connecting one to the other.
"Personalised learning" has been an L&D aspiration for over a decade. Most implementations amount to role-based content libraries or manager-curated playlists. The personalisation in most organisations is categorical, not individual.
Enablement agents change the mechanics. The difference is how content reaches the right person at the right time.
Here's how the agent builds a learning path. It starts with the individual's skill profile (the one built from observed data, not self-assessment). It maps identified gaps against available content in the LMS, tagged by skill, level, and format. It checks what the person has already completed and whether it produced a behaviour change. It factors in role requirements, current deal complexity, and business context. A new enterprise rep ramping into a territory with a dominant competitor receives different content than a tenured rep expanding into a new vertical. The agent knows the difference because it reads the data. Nobody manually created a learning path for each scenario.
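In code, the core of that mapping could be as simple as the sketch below. The catalogue entries and tags are hypothetical; the point is that the path is computed from the skill profile rather than curated by hand.

```python
# Hypothetical catalogue: modules tagged by skill, level, and format, as the
# paragraph above assumes the LMS already provides.
CATALOGUE = [
    {"id": "disc-201", "skill": "discovery", "level": 2, "format": "video"},
    {"id": "comm-101", "skill": "commercial_acumen", "level": 1, "format": "case_study"},
    {"id": "comm-201", "skill": "commercial_acumen", "level": 2, "format": "case_study"},
]

def build_path(gaps: list[str], completed: set[str]) -> list[str]:
    """Map observed gaps to uncompleted modules, highest-priority gap first,
    easier levels before harder ones."""
    matches = [m for m in CATALOGUE
               if m["skill"] in gaps and m["id"] not in completed]
    matches.sort(key=lambda m: (gaps.index(m["skill"]), m["level"]))
    return [m["id"] for m in matches]

# Weak commercial acumen, intro module already done: path starts at level 2.
print(build_path(["commercial_acumen"], completed={"comm-101"}))  # -> ['comm-201']
```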
The sequencing isn't static either. If a rep demonstrates strong discovery skills but weak commercial acumen, the agent deprioritises discovery content and accelerates commercial training. The path adjusts as the data changes. It's not set-and-forget.
The agent also builds in spaced reinforcement. A rep completes negotiation training. Two weeks later, the agent prompts a micro-assessment. Retention strong? Path moves forward. Retention dropped? Targeted refresher before advancing. This isn't new science. Spaced repetition has decades of research behind it. The enablement agent operationalises it at individual scale.
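A minimal scheduler for that reinforcement loop, with the two-week window and pass mark as assumed parameters:

```python
from datetime import date, timedelta

def reinforcement_step(completed_on: date, retention_score: float | None,
                       today: date, check_after_days: int = 14,
                       pass_mark: float = 0.8) -> str:
    """Hypothetical spaced-reinforcement scheduler: prompt a micro-assessment
    two weeks after completion, advance on a strong score, refresh on a weak one."""
    if today < completed_on + timedelta(days=check_after_days):
        return "wait"
    if retention_score is None:
        return "prompt micro-assessment"
    return "advance" if retention_score >= pass_mark else "assign refresher"

done = date(2025, 1, 6)
print(reinforcement_step(done, None, date(2025, 1, 20)))   # -> prompt micro-assessment
print(reinforcement_step(done, 0.65, date(2025, 1, 21)))   # -> assign refresher
```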
Format matters too. A field-based rep travelling between client meetings benefits from short audio or mobile-optimised micro-modules. A desk-based account manager absorbs longer-form case studies more effectively. The agent factors in role, location, tenure, and even calendar density when recommending format and timing. The same skill gap produces different learning recommendations for different people. The delivery method matters as much as the content itself.
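A format picker in the same spirit, with role and calendar density as the hypothetical inputs:

```python
def recommend_format(role: str, minutes_free_today: int) -> str:
    """Hypothetical format selection: the same skill gap, different delivery."""
    if role == "field" or minutes_free_today < 20:
        return "short audio or mobile micro-module"
    return "long-form case study"

print(recommend_format("field", 90))   # -> short audio or mobile micro-module
print(recommend_format("desk", 90))    # -> long-form case study
```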
Compare this to a traditional LMS recommending content based on role and completion history. The inputs are fundamentally different, and so are the outcomes.
Everything above connects into a single closed loop. Data identifies a gap. The agent prepares a coaching brief. The manager coaches. The agent assigns targeted learning. Then it monitors whether the behaviour shifts and adjusts accordingly. If it doesn't shift, it escalates.
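Stitching the toy helpers from earlier together, one cycle of that loop might read like this. It's a sketch of the control flow only; the coaching conversation itself stays with the manager, outside the code.

```python
def run_cycle(rep_metrics: dict[str, float], completed: set[str]) -> list[str]:
    """One pass of the hypothetical closed loop, reusing observed_gaps() and
    build_path() from the sketches above. Returns the agent's actions; an
    empty list means no gap was found this cycle."""
    gaps = observed_gaps(rep_metrics)                        # 1. data identifies a gap
    if not gaps:
        return []
    return [
        f"prepare coaching brief for manager: {gaps}",       # 2. human coaches
        f"assign learning: {build_path(gaps, completed)}",   # 3. targeted content
        "schedule follow-up measurement",                    # 4. monitor; escalate
    ]                                                        #    if nothing shifts
```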
The agent handles the pattern matching, the cross-referencing, and the monitoring. The humans handle the coaching, the relationship building, and the judgment calls.
And it compounds. Early recommendations are based on general patterns. Six months in, they reflect your organisation's specific data. The longer it runs, the more precise it gets.
Building the agent is a small part of the work. The thinking you need to do before, during, and after deployment is where organisations succeed or fail.
The technology questions are the straightforward ones. Data cleanliness. System connectivity. Content structure. Coaching workflow readiness.
The harder questions sit around the technology. Risk assessment. Stakeholder approvals. Security implications of connecting conversation intelligence, CRM, and performance data into a single agent. Privacy legislation and employee data governance, particularly in regions with strict regulatory frameworks. Change management across teams who've never worked with AI-assisted coaching. Defining success metrics before you deploy, not after. And the quality of every recommendation the agent produces depends on how well its prompts, logic, and guardrails are designed. Bad prompt design, bad outputs.
Organisations getting this right treat enablement agent adoption as an organisational initiative, not an IT project. The build takes weeks. The preparation, alignment, and change management around it take months.
There's also a governance question most organisations haven't addressed yet. AI tools are now accessible enough for anyone to build an agent. A manager creates one to summarise meeting notes. A team lead builds one to track activity metrics. An L&D coordinator sets one up to recommend learning content.
Some of these are low-risk. Summarising an email inbox or consolidating meeting actions isn't going to cause harm.
An agent recommending coaching needs, identifying skill gaps, or flagging performance patterns is a different category entirely. These outputs influence how people are perceived, developed, and evaluated. They affect reputations and careers. If the data feeding the agent is incomplete, the model is biased, or the recommendations are taken as directives rather than inputs, the consequences are real and personal.
Without governance, organisations end up with a patchwork of agents built to varying standards, drawing from inconsistent data, with no oversight on accuracy or fairness. The people affected by the outputs have no visibility into how those outputs were generated.
Strong rigour needs to be in place before any agent touches people-related decisions. Who approved the agent's design. What data it accesses and why. How recommendations are reviewed before action. What recourse exists when the output is wrong. These aren't optional extras. For agents operating in the enablement and coaching space, they're foundational.
Get this wrong and the risks aren't theoretical. They're reputational, legal, and deeply personal for the people on the receiving end.
None of this is designed to discourage. It's designed to direct preparation. The organisations getting the most from enablement agents are the ones who worked through the hard questions before they selected a platform.
Understanding how enablement agents work is step one. Building the organisational readiness to deploy them effectively is step two. Having someone in the room who's done it before makes both steps faster and significantly less risky.
If your team is trying to figure out where enablement agents fit and what needs to be true before you invest, we work with organisations across ANZ to cut through the noise and build practical AI capability into L&D and enablement operations.
Get in touch to start the conversation.