
September 2023 Releases
Whether you’re spot-checking performance after a prompt tweak, a Knowledge Base update, or a switch to a new LLM, you need confidence your AI Agents aren’t slipping.
But for high-volume contact centers, there hasn't been a clean way to track intraday AI Agent performance without heavy manual lift.
It’s too time-consuming to manually spot-check call outcomes on a daily basis.
So, you're either left in the dark waiting for aggregate reporting to update, or you're acting on small sample sizes, reviewing conversation quality and customer comfort with the AI and hoping those small-batch trends hold across the rest of your conversations.
Real-time Agent Stats bring live visibility into the performance of your human and AI Agents, with metrics for the current day updated every 10 seconds.
So, whether you’re spot-checking daily performance, validating the impact of a workflow change, or benchmarking against human agents, you can use these metrics to confidently deploy, measure, and optimize AI Agents—while maintaining call quality and addressing small issues before they impact customer satisfaction or KPIs.
These metrics, including AI Receptiveness, Transfer Rate, Conversations > 15s, Completed Tasks, and Activity Status, are updated every 10 seconds in the “Agents” page of the Regal platform.
Admins often rely on manually vetting transcripts to confirm things are running smoothly. But when your AI Agents are handling thousands of calls a day, manual review alone won’t scale. It also doesn’t give you a complete picture of the data (since you can’t review every single call).
With live metrics, you get a complete picture of intraday performance, broken down by agent.
So, you can quickly identify whether you need to manually review calls, debug, or test new iterations of AI Agents.
The faster you spot a performance trend (like a drop in engaged conversations), the faster you can act to protect conversion rates and overall customer sentiment.
In an instant, you can now spot these trends and act on them.
For example:
An admin at a health insurance company opens the Agents page to check on their inbound lead qualification agents.
They review their AI Agent and notice the Transfer Rate is 42%, when it usually sits around 25-30%. Normally, for qualification, this would be a good sign: a transfer means the contact is qualified and is going to speak with a licensed agent (i.e., their closer).
However, they also notice that Conversations > 15s and AI Receptiveness are dropping dramatically.
In real time, they're able to identify this abnormal performance trend.
They dive into a few recordings and hear that the AI is giving vague, unhelpful answers to questions regarding a recently launched promotional campaign that offers new policyholders a discounted 6-month premium when switching over from a competitor.
This immediately signals that the AI needs more information and more specific direction on how to handle questions about the promotion while still properly qualifying contacts.
That could be as simple as a quick prompt update for question and objection handling, or it might require adding new Knowledge Base content about current promotions.
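If you also pull these live stats into your own monitoring, the spot-check this admin ran by eye is easy to automate. Below is a minimal Python sketch of the pattern, assuming you export the metrics yourself; the `AgentStats` class, baseline values, and thresholds are illustrative, not part of the Regal platform.

```python
# Minimal sketch: flag an AI Agent when its Transfer Rate spikes while
# conversation-quality metrics sag, suggesting transfers are driven by
# unanswered questions rather than qualified contacts.
# All names and numbers are illustrative assumptions, not Regal APIs.
from dataclasses import dataclass

@dataclass
class AgentStats:
    transfer_rate: float      # fraction of calls transferred (0.42 = 42%)
    convos_over_15s: float    # fraction of conversations lasting > 15s
    ai_receptiveness: float   # fraction of contacts receptive to the AI

# Baseline drawn from your own historical reporting (illustrative values).
baseline = AgentStats(transfer_rate=0.275, convos_over_15s=0.60, ai_receptiveness=0.70)

def needs_review(live: AgentStats, base: AgentStats, tol: float = 0.10) -> bool:
    """True when transfers spike but quality drops: time to pull recordings."""
    transfer_spike = live.transfer_rate > base.transfer_rate + tol
    quality_drop = (live.convos_over_15s < base.convos_over_15s - tol
                    or live.ai_receptiveness < base.ai_receptiveness - tol)
    return transfer_spike and quality_drop

# Today's numbers from the scenario above: 42% transfers, quality falling.
live = AgentStats(transfer_rate=0.42, convos_over_15s=0.41, ai_receptiveness=0.52)
if needs_review(live, baseline):
    print("Transfer spike with quality drop: review this agent's calls.")
```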
When you make an update to an AI Agent (a prompt revision, a Knowledge Base update, a voice change, a routing adjustment, or a switch to a different LLM), you're almost always going to see an immediate impact on performance.
For example, an LLM switch is always made with the goal of improving performance. However, the actual impact of a switch can be unpredictable.
Why? Because different models can respond very differently to the same prompt, and some models are worse than others at executing complex logic or invoking actions.
With Real-time Agent Stats, you can monitor whether your updates are improving or regressing performance, then quickly roll back your changes or drill into calls to identify what needs to change next.
After deploying changes, you can watch each of these metrics respond in real time.
For example:
A financial services company updates their competitor pricing Knowledge Base for an AI renewals agent, along with the prompt's objection handling around competitor pricing.
Within a few hours, they see AI Receptiveness holding steady alongside the number of conversations and conversation length, but also see that Transfer Rates are 12% higher than usual.
In this case, a transfer means the contact is being passed to a licensed agent to finalize the terms of their new contract. This is a clear signal that the updates are driving positive change.
In many cases, seeing that the new (or updated) AI Agent is performing to a similar level as humans also signals success.
Now, if AI Receptiveness and Transfer Rates both dropped after deploying these updates, you'd be able to immediately revert the changes and dive into live recordings to see where the AI Agent is mishandling competitor pricing questions or objections.
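If you log these stats before and after each deployment, that revert decision reduces to a simple comparison. Here's a minimal sketch under the same caveats as the snippet above (illustrative numbers and a hypothetical helper, not Regal's API):

```python
# Minimal sketch: decide whether an AI Agent update should be reverted by
# comparing post-deployment stats to the pre-change baseline. Per the
# scenario above, a higher Transfer Rate with steady AI Receptiveness is a
# win; both metrics regressing together is the revert signal.
# All values are illustrative.
pre_change  = {"ai_receptiveness": 0.70, "transfer_rate": 0.25}
post_change = {"ai_receptiveness": 0.71, "transfer_rate": 0.37}

def should_revert(pre: dict, post: dict, max_drop: float = 0.05) -> bool:
    """Revert only when AI Receptiveness and Transfer Rate both regress."""
    return (post["ai_receptiveness"] < pre["ai_receptiveness"] - max_drop
            and post["transfer_rate"] < pre["transfer_rate"] - max_drop)

print(should_revert(pre_change, post_change))  # False: the update is a keeper
```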
If you have a history of human-driven calls, you can validate the performance of your AI Agents against human benchmarks to gain confidence before scaling. With Real-time Agent Stats, you can filter by Agent Team and Agent Type to compare AI and human agents side by side.
Combined with historical human agent benchmarks, this shows you at a glance whether your AI Agents are ready to take on more volume.
For example:
Let’s say you’re an insurance company with an inbound call flow that requires agents to field highly complex, person-specific questions (e.g. eligibility across multiple plan types, state-specific policy rules, or combining benefit riders).
For your AI Agent to succeed, its prompt and connected knowledge bases must be comprehensive enough to make decisions or transfer intelligently, as a human would. Since AI Agents can retrieve contextual information from a Knowledge Base faster than humans, you might even expect it to answer questions faster and more accurately.
Upon deployment, you can compare Completed Tasks, Transfer Rate, and Conversations > 15s to human benchmarks. In real time, you'll know whether the AI is fielding questions and handling them as well as human agents (i.e., Transfer Rate), and doing so efficiently (i.e., Completed Tasks and Conversations > 15s).
Below, you can see an AI Agent performing very similarly to a human agent of the same type (inbound support), which signals the AI is doing its job.
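If you track these benchmarks in your own tooling, the side-by-side check is a per-metric comparison. A minimal sketch with illustrative benchmark values (not Regal data):

```python
# Minimal sketch: check whether the AI Agent is within tolerance of the
# human benchmark on each metric from the comparison above.
# Metric names mirror those in this post; the values are illustrative only.
human_benchmark = {"completed_tasks": 0.88, "transfer_rate": 0.27, "convos_over_15s": 0.61}
ai_live         = {"completed_tasks": 0.85, "transfer_rate": 0.29, "convos_over_15s": 0.58}

def comparable_to_humans(ai: dict, human: dict, tol: float = 0.05) -> dict:
    """Per-metric verdict: True if the AI is within `tol` of the benchmark."""
    return {metric: abs(ai[metric] - human[metric]) <= tol for metric in human}

for metric, ok in comparable_to_humans(ai_live, human_benchmark).items():
    print(f"{metric}: {'on par with humans' if ok else 'needs review'}")
```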
With live visibility into the metrics that determine business success for your AI Agents, you can deploy and improve AI Agents more quickly and confidently, whether it’s validating a workflow change, reallocating volume, or addressing emerging performance issues.
From high-value AI metrics like AI Receptiveness and Transfer Rate to operational indicators like Completed Tasks and Activity Status, you get a complete, up-to-the-second view of performance.
That means less risk of issues going unaddressed and more time making informed decisions that protect KPIs, improve customer experience, and scale AI with certainty.
Looking for live visibility into the performance of your agents? Get a live demo to see these metrics in action.