One of Valoir's big predictions for 2024 was that we would see significant advances and disasters with artificial intelligence (AI). As 2024 draws to a close, we're a lot closer to agentic AI that completely automates customer interactions from a technology perspective, but early fear of missing out (FOMO) with AI led to experiences that put FOMU—fear of messing up—front and center. From vendors that released products that clearly weren't ready for prime time, to vendors and customers who played it a little too loose with customer data, to developers who had the hubris to think they were prompt engineers, the past 12 months have given contact center leaders plenty of reasons for pause about AI.
AI, and specifically agentic AI, has great potential for automating customer interactions and improving both agent and customer experience, but only if it can be deployed with minimal FOMU risk. To ensure a balance between delivering value and managing risk, organizations need to consider a number of factors in their AI strategy.
Here are a few ways to keep things in check:
AI is only as good as your knowledge.
Organizations that already have robust, complete, and current knowledge bases will have more to ground whatever AI they choose and will be more likely to get accurate results. However, we're finding that even organizations that think they have pretty solid knowledge bases discover that they still need to rewrite and rechunk articles and do some testing and fine-tuning to get response accuracy up to acceptable levels. A knowledge base audit can help you determine how ready you are to deploy AI. Any vendor that says you can turn on its AI—agentic or otherwise—without tweaking and tuning your knowledge base is likely setting unrealistic expectations.
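As a rough illustration of what that audit and rechunking work can look like, here is a minimal sketch in Python. The field names, chunk size, and freshness cutoff are all hypothetical, not drawn from any particular vendor's tooling:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    title: str
    body: str
    last_reviewed: date

MAX_CHUNK_WORDS = 150    # hypothetical target chunk size
STALE_AFTER_DAYS = 365   # hypothetical freshness cutoff

def rechunk(article: Article) -> list[str]:
    """Split an article into paragraph-based chunks of roughly
    MAX_CHUNK_WORDS each, so every chunk covers one retrievable idea."""
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in article.body.split("\n\n"):
        words = len(para.split())
        if current and count + words > MAX_CHUNK_WORDS:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def audit(articles: list[Article]) -> list[str]:
    """Flag articles likely to drag down AI answer accuracy:
    stale content and walls of text that retrieve poorly."""
    findings = []
    for a in articles:
        if (date.today() - a.last_reviewed).days > STALE_AFTER_DAYS:
            findings.append(f"STALE: {a.title}")
        chunks = rechunk(a)
        if len(chunks) > 1:
            findings.append(f"RECHUNK: {a.title} ({len(chunks)} chunks)")
    return findings
```

Even a simple pass like this surfaces the stale and oversized articles that tend to produce the inaccurate responses organizations discover during testing.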
Transparency and consent are key.
We have not seen the last of lawsuits from customers over contact centers using their data inappropriately or illegally for AI training purposes. Organizations should prioritize transparency about how AI systems operate and what data is being collected. Providing clear, easily accessible privacy notices and obtaining explicit consent before recording or analyzing calls is essential. Informing customers of the specific ways their data will be used fosters trust and reduces the likelihood of legal challenges. Contact center leaders should be asking for transparent and explainable data policies and practices from their vendors as well. Trust is essential, but as former President Ronald Reagan said, "Trust, but verify."
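In practice, explicit consent should be a gate the system checks before any transcript is analyzed or retained, not a policy that lives only in a document. Here is a minimal sketch of that idea; the record structure and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CallSession:
    customer_id: str
    ai_consent: bool  # set True only after an explicit opt-in on the call

def maybe_analyze(session: CallSession, transcript: str) -> None:
    """Analyze or retain a transcript only when the customer has
    explicitly consented; otherwise discard it entirely."""
    if not session.ai_consent:
        # No consent: do not store or use the transcript for AI.
        return
    submit_for_analysis(session.customer_id, transcript)  # hypothetical downstream step

def submit_for_analysis(customer_id: str, transcript: str) -> None:
    print(f"analyzing transcript for {customer_id} (consented)")
```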
Expect to be iterative.
As AI technology continues to get better, expect that you will need to continually evaluate and evolve your solution. At the same time, you'll want to do regular testing to ensure the solution can handle high volumes of interactions without compromising accuracy and security. Ongoing monitoring and updates to AI models can prevent the accumulation of errors over time.
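One lightweight way to make that ongoing testing concrete is a regression check against a golden set of known-good question-and-answer pairs, rerun on every model or knowledge base update. This sketch assumes a hypothetical `answer()` function and uses a deliberately crude containment check; real evaluation would be more sophisticated:

```python
from typing import Callable

# Hypothetical golden set of questions with known-good answers.
GOLDEN_SET = [
    ("What is your return window?", "30 days"),
    ("Do you ship internationally?", "yes"),
]

def evaluate(answer: Callable[[str], str], threshold: float = 0.95) -> bool:
    """Return True if accuracy on the golden set meets the threshold;
    print every miss so a human can review it."""
    hits = 0
    for question, expected in GOLDEN_SET:
        got = answer(question)
        if expected.lower() in got.lower():  # crude containment check
            hits += 1
        else:
            print(f"MISS: {question!r} -> {got!r} (expected {expected!r})")
    accuracy = hits / len(GOLDEN_SET)
    print(f"accuracy: {accuracy:.0%}")
    return accuracy >= threshold
```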
Guardrails are a must.
Yes, everyone's talking about governance and guardrails, but what does that actually mean? If you're considering agentic AI, expect that any gray area in policies and practices, where human agents are expected to determine the best course of action, will be a pitfall for the AI, and plan accordingly. You'll also want a way to monitor the quality and outputs of AI in real time, automated thresholds and alerts if accuracy starts to dip, and a virtual shutoff valve. Obviously, your solution should ensure the agent doesn't answer questions it shouldn't, but it should also be clear to you at which accuracy threshold your solution provides a response (versus saying it doesn't know or escalating to an agent). You might also need different levels of accuracy for different kinds of interactions.
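Put together, those guardrails amount to a routing decision the system makes on every interaction. The sketch below shows the shape of that logic, with per-topic confidence bars and a shutoff valve; the topics, thresholds, and messages are hypothetical:

```python
KILL_SWITCH = False  # the "virtual shutoff valve": flip to route everything to humans

# Different interaction types can demand different confidence bars.
THRESHOLDS = {
    "billing": 0.98,   # high stakes: answer only when very confident
    "shipping": 0.90,
    "general": 0.80,
}

def route(topic: str, confidence: float, draft_answer: str) -> str:
    """Decide whether the AI answers, admits it doesn't know,
    or escalates the interaction to a human agent."""
    if KILL_SWITCH:
        return "ESCALATE: shutoff valve engaged"
    bar = THRESHOLDS.get(topic, THRESHOLDS["general"])
    if confidence >= bar:
        return draft_answer
    if confidence >= bar - 0.10:
        return "I'm not sure about that one; let me connect you with an agent."
    return "ESCALATE: confidence below threshold"
```

The design point is that the threshold for answering is yours to set per interaction type, and the escape hatches (saying "I don't know" and escalating) are built in rather than bolted on.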
There are clear benefits to deploying AI to streamline customer interactions, accelerate response times, and free up agent time for more complex requests requiring more human empathy, but only if you can manage its potential risk. The key is a strategy that is realistic about your knowledge hygiene and currency, is clear about transparency and consent, and has the appropriate guardrails. Getting those things right and ensuring you have an ongoing strategy for optimizing and managing your AI over time will help you leverage AI with less FOMU.
Rebecca Wettemann is founder and CEO of Valoir.