
At HumanX 2026 in San Francisco, Lex Sivakumar, VP of Growth at Regal, took the stage to share what his team has learned from analyzing more than 400 million voice calls across Regal's customer base. Lex oversees Marketing, Business Development, and Rev Ops, which means he sits at the intersection of what buyers are asking for and what real deployments are actually delivering.
His talk centered on a finding most contact center leaders are not yet ready to hear: customers are not just accepting well-built voice AI agents. They are opening up to them more than they open up to humans.
The most common objection Lex hears at conferences and in sales conversations is some version of the same sentence: "Our customers won't talk to a bot." It is the objection that follows the category everywhere.
The data points the other way, and Lex was direct about why:
"Humans have a high bar for trust in order to have a conversation with you. Regardless of if it's a human or a voice AI agent."
That bar applies in both directions. Drop a human contact center rep on the phones with no training, no QA cycles, and no familiarity with your customer base, and customers reject the call. The same physics applies to a poorly built AI voice agent. If it is not trained on your real calls, scoped to a clear use case, monitored for failure points, and continuously refined, customers reject it.
The rejection has nothing to do with the customer's tolerance for AI. It has everything to do with the customer's tolerance for a bad experience. When the underlying system is designed properly, the result is the inverse of the objection. Customers tell AI voice agents things they will not tell your reps.
Payment collections is the cleanest case study for this dynamic, because it sits on top of human psychology in a way most contact center use cases do not.
When a human collections agent makes the call, the consumer on the other end is often delinquent, embarrassed, or in a difficult financial moment. Listen to those calls and a pattern emerges quickly. The customer is hesitant. They do not want to share their situation. They do not want to be judged. They give the minimum information needed to end the call.
Now listen to the same conversation when an AI voice agent introduces itself as a virtual assistant. The judgment falls out of the room. People say things like "it's been a tough month," "I get paid in two weeks," and "can you help me set up an installment plan?" They volunteer the context that the human agent could not extract.
That is not a magic trick. It is the predictable consequence of removing the social pressure of being judged by another person. When customers feel safe, they tell you what is actually going on. When they tell you what is actually going on, you can resolve the call.
The same psychology shows up at the top of the funnel.
Across financial services, healthcare, insurance, and home services, the standard motion looks the same. A consumer fills out a web form with a few details, abandons partway through, or lands on a phone number to ask questions. A human agent calls them back, or they call in.
Listen to those calls and the customer is editing themselves in real time. They ask the high-level questions they think are worth a human's time. They skip the simpler questions, the ones they are afraid will sound uninformed. They underweight the things they actually need to know.
When the same call is handled by a voice AI agent, the editing stops. People ask the simple questions. They mention they are remodeling a kitchen because their spouse really wants it. They ask whether their dependent is covered. They give a budget. The AI agent does not raise an eyebrow, does not seem rushed, does not feel like a stranger whose time they are wasting.
The result is dramatically richer customer context, captured at the start of the relationship, where it can shape every interaction that follows. Discovery is the foundation of any sales or service motion, and AI voice agents are now the channel where customers do their best discovery for you.
The richer discovery is not the only shift. Lex's second observation, drawn from the same body of calls, is about what AI voice agents are now being trusted to do once the conversation begins.
A year ago, the impressive demo was an informational Q&A bot that gathered name, age, and location, then transferred to a human. That is now table stakes. AI voice agents are owning outcomes that ladder directly into business-critical metrics: scheduling appointments, processing payments, dispatching roadside assistance providers, even running negotiations with labor in two-sided marketplaces.
In Lex's framing:
"Now you have an AI voice agent that quite literally has P&L responsibility, which is really cool that we're seeing."
That shift reframes voice AI from a cost reduction tool into a growth tool. The roadside assistance dispatch use case cuts time-to-resolution roughly in half. The labor marketplace use case automates a multi-step negotiation that frontline humans were never trusted to handle, with randomized negotiation logic that protects contribution margin from being gamed. These are not call deflection plays. They are P&L plays.
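Regal has not published its negotiation logic, but the idea behind randomized counter-offers can be sketched. In this hypothetical Python sketch, `ceiling` stands for the highest rate that still protects contribution margin (a real system would derive it per job). Because the counter is sampled from a band below that ceiling rather than fixed, a repeat negotiator cannot probe their way to the exact threshold:

```python
import random

def counter_offer(ask: float, ceiling: float, rng: random.Random) -> float:
    """Counter a pay ask in a two-sided marketplace negotiation.

    ceiling is the highest rate that still protects contribution
    margin (an assumed input). Rather than countering at a fixed
    fraction of the ceiling, which repeat negotiators can learn and
    exploit, sample the counter from a randomized band below it.
    """
    if ask <= ceiling:
        return ask  # the ask already protects margin; accept as offered
    fraction = rng.uniform(0.85, 0.97)  # randomized band below the ceiling
    return round(ceiling * fraction, 2)
```

The function names and the 0.85-0.97 band are illustrative only; the design point is that the randomness, not the specific band, is what makes the threshold unlearnable.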
This finding is liberating, and it is dangerous. It is liberating because the central objection to voice AI dissolves once you accept it. It is dangerous because it is easy to misread as "customers will accept whatever AI we put in front of them."
They will not. The bar has not gone down. It has moved.
The customer is no longer asking whether they are talking to a human. They are asking whether the experience they are getting is competent, personalized, and respectful of their time. An AI voice agent that introduces itself, knows who is calling, asks for the minimum verification needed, and resolves the issue without making the customer repeat themselves clears that bar. An AI voice agent that runs the same script a poorly trained human would run will not.
That is an architectural problem, not a tone-of-voice problem. It depends on a unified customer profile, integrated data, scoped prompts, simulated testing against your real production calls, and a continuous improvement loop. It is not solved by making the voice sound friendlier. It is solved by building a system the customer has reason to trust.
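The architectural requirements above can be made concrete. This is a hypothetical sketch, not Regal's API: the field names are illustrative, and the point is that trust is configured structurally — one scoped use case, minimal verification, and explicit failure handling — and can be validated before the agent ever takes a call:

```python
from dataclasses import dataclass

@dataclass
class VoiceAgentConfig:
    """Hypothetical shape of a well-scoped voice agent deployment."""
    use_case: str                  # one scoped job, e.g. "payment_collections"
    profile_fields: list[str]      # unified-profile data loaded before dialing
    verification_fields: list[str] # minimum identity checks, nothing extra
    allowed_actions: list[str]     # outcomes the agent may own end to end
    escalate_on: list[str]         # failure points routed to a human

def validate(cfg: VoiceAgentConfig) -> list[str]:
    """Flag scoping problems before deployment."""
    problems = []
    if len(cfg.verification_fields) > 2:
        problems.append("over-verifying: customers repeat themselves")
    if not cfg.escalate_on:
        problems.append("no escalation paths: failures have nowhere to go")
    if not cfg.allowed_actions:
        problems.append("no owned outcome: agent is Q&A-only table stakes")
    return problems
```

The thresholds here (at most two verification fields, at least one escalation path) are assumptions chosen to illustrate the principle that a trustworthy agent is one whose scope is checked before launch, not tuned after complaints.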
The contact center leaders who win the next cycle will stop framing voice AI as a threat to customer experience and start using it as the channel where customer experience compounds. They will treat customer experience not as a guardrail on cost reduction, but as a dependent variable they can grow alongside efficiency.
The first move is small. Pick a high-volume, sensitive use case where the human bottleneck is also a discovery bottleneck. Payment collections. Lead qualification. Eligibility questions. Build the AI voice agent properly, monitor every call, and watch what your customers volunteer.
Then ask the harder question: what conversations are you not having today, on any channel, that you could have once cost and trust are no longer the blockers?
Looking to take advantage of what voice AI agents can do in 2026? Let's build together.
Will customers actually talk to an AI voice agent?
Yes, when the agent is built properly. Across more than 400 million analyzed calls, customers not only accept well-designed AI voice agents but open up to them in ways they often will not with human agents, especially on sensitive topics like payment collections and lead qualification. Rejection of voice AI almost always traces back to a poorly scoped or undertrained agent, not to a categorical refusal of AI.
Which use cases should a team start with?
The strongest early wins come from high-volume calls where the human script is already standardized and where customer hesitation reduces information capture. Payment collections, lead follow-up, appointment confirmations, eligibility questions, and routing in regulated industries all see immediate impact. From there, more complex outcome-owning use cases like roadside assistance dispatch and labor marketplace negotiation become possible.
How does an AI voice agent earn customer trust?
Trust is earned through architecture, not tone. Use a unified customer profile so the agent knows who is calling, scope each agent to a clear use case, simulate against real production calls before deployment, and run a continuous QA loop on every call. A friendly voice on top of a broken flow makes the experience worse, not better.
What makes Regal different?
Regal is built on a CCaaS foundation and trained on more than 400 million real contact center calls, giving every deployment access to patterns the rest of the market has not seen. Regal Forward Deployed Engineers work alongside customer teams to scope, deploy, and continuously improve every agent, and Regal Copilot collapses what used to be a multi-week deployment cycle into a conversational interface that gets agents into production in hours.
How long does deployment take?
Traditional voice AI deployments take weeks of coordination between vendor and customer teams. Regal Copilot brings build, simulation testing, deployment, and continuous improvement into one conversational system, compressing time to value to hours for many use cases. Complex multi-stakeholder deployments still benefit from Forward Deployed Engineering support, but the bottleneck has shifted.
Ready to see Regal in action?
Book a personalized demo.



