Mixed Outlook for Automated Voice Agents for Customer Service

If you took recent coverage of agentic artificial intelligence at face value, you'd conclude that AI-informed agents that talk are already widely employed for customer care. It starts with the ubiquity of voice assistants. The venerable Siri got a serious upgrade for Mac and iPhone users who activated Apple Intelligence. Google's voice AI capabilities are the basis of Gemini, an AI-based personal assistant for Android phones, and of NotebookLM, which most recently surfaced as a personalized podcast in which two synthesized hosts reviewed listeners' musical tastes as part of Spotify's Wrapped.

Voice assistants also figure prominently in the product offerings of a variety of solution providers, from AWS with Amazon Connect to a number of smaller participants like GridSpace, Cognigy, Talkdesk, and many others.

In spite of high levels of awareness and interest, voice AI must overcome several near-term challenges. In no particular order, these include latency, security, and trust. Customers using the chat channel or interacting with bots will tolerate slow responses in a way they won't when talking to an agent (automated or otherwise).

In terms of latency, the new generation of voice AI bots has overcome many of the challenges, such as detecting when a person has stopped talking so the bot knows when to start responding. But most people have not experienced this improvement yet.
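To illustrate why that end-of-speech detection matters for latency, here is a minimal sketch in Python of silence-based "endpointing." The frame size, energy threshold, and silence budget are illustrative assumptions, not the values used by any particular voice AI platform.

```python
# Minimal sketch of silence-based end-of-speech detection ("endpointing").
# Frame size, energy threshold, and silence budget are illustrative values only.

FRAME_MS = 20            # audio arrives in 20 ms frames
ENERGY_THRESHOLD = 0.02  # below this, a frame is treated as silence
END_SILENCE_MS = 400     # respond after roughly 400 ms of continuous silence

def detect_end_of_speech(frame_energies):
    """Return the index of the frame at which the bot may start responding,
    or None if the caller is still talking (or never started)."""
    silence_ms = 0
    heard_speech = False
    for i, energy in enumerate(frame_energies):
        if energy >= ENERGY_THRESHOLD:
            heard_speech = True
            silence_ms = 0          # caller is still talking; reset the timer
        elif heard_speech:
            silence_ms += FRAME_MS
            if silence_ms >= END_SILENCE_MS:
                return i            # enough trailing silence: hand off to the bot
    return None

# Example: a burst of speech followed by silence
frames = [0.1] * 50 + [0.0] * 30
print(detect_end_of_speech(frames))
```

The shorter the silence budget, the snappier the bot feels, but the greater the risk of interrupting a caller who has merely paused; tuning that trade-off is exactly where the latest voice AI stacks have improved.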

As for security, both IT managers and compliance professionals have to be assured that the systems supporting voice AI conform to all privacy directives and address security concerns. Personal data must be encrypted end to end and stored on resources that are not shared with third parties. Abiding by these first two requirements is key to building the level of trust required to foster mass appeal.
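As a concrete illustration of the encryption requirement, the sketch below uses Python's widely used cryptography package to encrypt a call transcript before it is persisted. The transcript contents and storage step are hypothetical, and a real deployment would layer on managed key storage, rotation, and access controls.

```python
# Illustrative only: encrypting a piece of personal data before storage.
# Requires the third-party "cryptography" package (pip install cryptography).
# Key handling is deliberately simplified; production systems would use a
# key-management service rather than generating keys inline.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, fetched from a key-management service
cipher = Fernet(key)

transcript = b"Caller: my account number is 12345678"
encrypted = cipher.encrypt(transcript)   # this ciphertext is what gets persisted
# ... write `encrypted` to storage that is not shared with third parties ...

restored = cipher.decrypt(encrypted)     # only holders of the key can read it back
assert restored == transcript
```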

Mixed Signals from Enterprise Decision-Makers

Vendors of voice AI know that enterprise decisions to develop (dare I say hire) AI agents to assist customers or live agents are increasingly made by the IT folks in charge of the systems that support both contact centers and the back office: the CRM, inventory management, billing, recordkeeping, and general operations systems that a savvy AI agent must consult. During the past two years or so, they've been forced to up their working knowledge of conversational AI, then generative AI, and now agentic AI.

During those years, the cost of capital was very low, and there was ample budget for staff and resources for multiple digital transformation initiatives. When COVID drove many employees, including customer care reps, to work from home, investment in multiple cloud-based solutions proved timely. Platforms for ubiquitous access to unified communications (UCaaS), contact centers (CCaaS), and a host of tools, databases, APIs, and algorithms to support customer experience-as-a-service (CXaaS) were invaluable.

2025 marks the return of financial reality. Now that interest rates are above zero, the cost of capital is a real consideration. IT managers can no longer afford to support a multiplicity of point solutions or chase the latest fad. Their first-order concern is to avoid the technology debt that arises from too many AI-driven initiatives.

AI Agents Do the Heavy Lifting, But That's Not Enough

Agentic AI has tremendous appeal. Ideally, the term describes an approach to automating customer care or agent assistance in which the hard work is carried out autonomously, as if by magic, by AI-infused agents. They create workflows or build API connections to back-office systems and databases to carry out instructions that are embedded in well-crafted prompts. This does not eliminate the need for expensive humans for development; instead, it greatly reduces the person-hours required to get automated assistants up and running.
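To make the pattern concrete, here is a minimal sketch in Python of how a prompt's instructions can be mapped onto calls into back-office systems. The tool names, the system prompt, and the stubbed-out planning step are entirely hypothetical; a real agent would call a language model where the stub sits, and this is a sketch of the pattern rather than any vendor's implementation.

```python
# Minimal sketch of the agentic pattern: instructions live in the prompt, and
# the agent maps model decisions onto calls into back-office systems.
# All tool names, the system prompt, and plan_next_step() are hypothetical.

SYSTEM_PROMPT = (
    "You are a customer-care agent. Only use the tools provided. "
    "If the request is outside order status, hand off to a human."
)

def look_up_order(order_id: str) -> dict:
    """Hypothetical wrapper around an order-management API."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"look_up_order": look_up_order}

def plan_next_step(prompt: str, user_turn: str) -> dict:
    """Stand-in for the LLM call that decides which tool to invoke."""
    if "order" in user_turn.lower():
        return {"tool": "look_up_order", "args": {"order_id": "A-100"}}
    return {"tool": None, "args": {}}

def handle_turn(user_turn: str) -> str:
    step = plan_next_step(SYSTEM_PROMPT, user_turn)
    tool = TOOLS.get(step["tool"])
    if tool is None:
        return "Let me connect you with a live agent."   # guardrail: stay on task
    result = tool(**step["args"])
    return f"Your order {result['order_id']} is {result['status']}."

print(handle_turn("Where is my order?"))
```

Note that the human work has not disappeared; it has shifted to writing the prompt, defining the tools, and deciding when the agent should hand off, which is the point made below about conversation design and guardrails.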

Yet, given the dearth of real-world implementations, it is clear that decision-makers do not see agentic AI as a magic bullet or a clear path to adoption of large language models and voice AI. To overcome inertia, solution providers must respect the roles and value of current personnel and processes. The most important roles for in-house personnel who had previously been dedicated to development, testing, and maintenance of language models and bots now include conversation design and prompt writing, including the embedding of guardrails to minimize hallucinations and keep autonomous agents on task and on track.

The keepers of self-service platforms are vested in the current solutions they have developed and still manage. It is difficult to move them to a new approach, even one that employs autonomous AI agents to do the heavy lifting. Even though the cost of interacting with LLMs is declining, it is still significant. Plus, dipping into remote LLMs to support conversational responses introduces concerns about security, trust, and latency that scripted services do not. Now that agentic AI is informing voice-based systems, replacing interactive voice response systems with voice agents will put those lingering latency issues in the spotlight.

I remain very bullish that agentic AI and voice agents will assume a major role in automated customer care across a broad spectrum of verticals. Travel, utilities, financial services, and government still attract billions of voice calls globally. The migration should be swift, but it will be tempered by the known speed bumps described in this column.


Dan Miller is founder of Opus Research and a retired industry analyst.