Companies Need to Set AI Agent Interaction Policies and Procedures

An individual [name redacted] turns to one of the new "Cancel My X" apps. You know, the ones with names like Trim, Hiatus, Bobby, Hubby, or Subby… whatever. He asks his personal virtual assistant (bot) to cancel one of many streaming services, which the bot dutifully tries to do. As an example of the power of voice AI, the bot calls into the service's contact center. Then, as an example of the equally powerful anti-fraud resources of the contact center, the request is denied because, as a matter of policy, the company requires direct contact with its customers for cancellations.

But the bot will not be denied. According to its programming, it calls the company back. Again it is denied. So it calls again… 247 more times.

It might just be a triumph for the company's retention strategy, making cancellation as difficult as possible. But it illustrates an instance where a personal virtual assistant's actions morph into a set of interactions that are indistinguishable from a denial-of-service attack. Considering that Google's Gemini or Apple's Siri could be calling on behalf of users to make purchases or book flights (rather than cancel services), it is clear that both sides have to iron out the details surrounding situations where customers delegate responsibility to personal virtual agents, or AI agents.

Suffice it to say that momentum has been building over many years to make these types of transactions more commonplace. For more than 40 years, I've sought to discover, understand, and encourage adoption of the technologies that improve conversations, especially between customers and the companies with which they want to do business. In the 1980s, toll-free numbers were a big thing, giving rise to the need for call centers capable of receiving thousands of calls simultaneously. That gave rise to interactive voice response (IVR) systems to discover the purpose of the call, and automatic call distributors (ACDs) to route the calls to the agents standing by. Back in those days, all agents were live.

As time passed, we've seen constant technological improvement. Yet solutions have consistently fallen short for those of us who expected that, by now, we would have access to our own virtual personal agent (VPA): one that understands what we're trying to accomplish and then carries out tasks on our behalf. A succession of new technologies came along to fulfill those goals, and it took on biblical proportions: natural language processing begat speech recognition, which begat conversational AI, which begat generative AI.

Each generation improved on the performance of its predecessors but also exposed gaps that required new technologies to fill. In the case of conversational commerce, VPAs might act autonomously to log onto company websites and take the necessary steps to complete transactions, or a company virtual agent might be called upon to build its own API connection to a product database to check inventory status. These are the sorts of tasks that agentic AI, and its anthropomorphized offspring, AI agents, are prepared to perform.

The real-world example above was described in a post on LinkedIn by John Walter, a tech CEO and president of the Contact Center AI Association (CCAIA). He went on to enumerate the "host of operational and legal issues" that bot-to-contact-center-rep interchanges pose. He notes, "The cost of each attempt is negligible for a proxy with a bot… But the cost to the contact center is great. And the company receiving the request is exposed to legal risk under negative option marketing regulations."

In other words, failure to "set healthy boundaries with proxies" (Walter's words) leads to significant costs that enterprises can avoid. Walter has established a company that will soon introduce the tools that set healthy boundaries. In the meantime, firms that include DataGrail, OneTrust, Ethyca, Transcend, Sourcepoint, and WireWheel.io have stepped in to handle tasks surrounding customer control of personal data, privacy protection, and regulatory compliance.
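To make "healthy boundaries" concrete, here is a minimal sketch of one such boundary: a per-caller rate limiter a contact center could place in front of a cancellation workflow. Everything here is hypothetical — the class, the policy thresholds, and the caller identifier are illustrative assumptions, not Walter's product or any vendor's API.

```python
import time
from collections import defaultdict, deque

class ProxyRateLimiter:
    """Caps how often a single bot/proxy identity may retry a request.

    Hypothetical policy: at most `max_attempts` per `window_seconds`
    per caller identity (e.g., a verified agent credential or caller ID).
    """

    def __init__(self, max_attempts=3, window_seconds=3600):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self._attempts = defaultdict(deque)  # caller_id -> attempt timestamps

    def allow(self, caller_id, now=None):
        now = time.monotonic() if now is None else now
        attempts = self._attempts[caller_id]
        # Drop attempts that have aged out of the window.
        while attempts and now - attempts[0] > self.window_seconds:
            attempts.popleft()
        if len(attempts) >= self.max_attempts:
            return False  # decline politely; don't burn live-agent time
        attempts.append(now)
        return True

limiter = ProxyRateLimiter(max_attempts=3, window_seconds=3600)
# Simulate the anecdote's retry storm: 250 calls in quick succession.
results = [limiter.allow("cancel-bot-001", now=t) for t in range(250)]
```

Under this toy policy, the 248-call retry storm collapses to three served attempts, after which the bot's requests are refused before they ever reach a live agent.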

It starts with understanding AI agents and their features and functions. My bulleted shorthand is as follows:

  • They can act autonomously, powered by one or more language models, especially large action models (LAMs), which enable the AI agent to understand and address complex tasks.
  • They break down problems into sequential steps/sub-tasks, handling each individually.
  • Through iterative cycles of thought, action, and observation, they adapt responses based on feedback.
  • They use tools for interacting with systems like APIs or web searches, enabling them to handle diverse tasks and execute intricate workflows.
  • Each tool has a natural-language description; the AI agent matches the sub-task at hand against those descriptions to decide which tool to use for which sub-task.
  • Tools can include functionality like web search APIs, data retrieval APIs, code execution environments, browser automation tools, natural language processing (NLP) APIs, file management systems, ACI-tools, vision, etc.

The key takeaway is that AI agents understand instructions in plain English and then carry out all the sub-tasks required to get a job done.
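The bulleted behavior above — decompose a task, match each sub-task to a tool by its natural-language description, then act — can be sketched in a few lines. This is a toy illustration using naive word overlap for the matching step; the tool names, descriptions, and plan are all invented for the example and production agents use language models rather than keyword counting.

```python
# Toy agent loop: match each sub-task to a tool via description overlap.
# Tools, descriptions, and the plan are illustrative, not a real framework.

TOOLS = {
    "web_search": "search the web for current information and pages",
    "inventory_api": "query a product database to check inventory status",
    "browser_automation": "drive a website to fill forms and complete transactions",
}

def pick_tool(subtask: str) -> str:
    """Score each tool by how many words its description shares with the sub-task."""
    words = set(subtask.lower().split())
    scores = {name: len(words & set(desc.lower().split()))
              for name, desc in TOOLS.items()}
    return max(scores, key=scores.get)

plan = [
    "search the web for the product page",
    "check inventory status in the product database",
    "fill the order form on the website and complete the purchase",
]

for step in plan:
    tool = pick_tool(step)
    # In a real agent, the tool would execute here and its observation
    # would feed back into the next cycle of thought and action.
    print(f"{step!r} -> {tool}")
```

Even this crude matcher routes each sub-task to the sensible tool; real agents replace the word-overlap heuristic with a language model reasoning over the same kind of tool descriptions.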

Things will be different this time, right?

Agentic AI could eliminate the time it takes for Siri and its cohort of AI agents to establish relationships with companies and their sales or customer care systems. But a lot of loose pieces need to fall into place. AI agents are just that, agents, which is also a legal term for someone authorized to act on another's behalf (a proxy). Terms and conditions need to be established to define just what they are empowered to do. There will also be significant issues surrounding the security of personal data (such as preferences) provided to those agents, and many details remain to be worked out about the conditions under which legal departments will recognize instructions given by a live person's digital representative.

It is not too early for CX and contact center planners to account for AI agent-to-AI agent communications as they define the personnel, policies, technologies, and procedures they employ in their customer care contact centers.


Dan Miller is founder and an analyst emeritus at Opus Research.