Customer service, among plenty of other industries, is often singled out as a key domain for artificial intelligence (AI) to thrive. In fact, it is one of the few industries where the use of generative AI is starting to take root.
However, adoption rates are slower than you might think for a sector that was one of the first to adopt early forms of the technology, particularly in chatbots and automation. Our recent survey found that while companies recognize the value of AI, as few as one in 10 have robust and clear implementation plans.
What's causing this slow adoption rate of AI? The data points heavily to concerns about security.
The proliferation of generative AI has made people more aware of the benefits and utility of AI, creating an increased demand for it in our daily lives. In customer service, the productivity gains of effective AI are well documented; something as simple as automating call summary notes can save 1,000 agent hours a month. More broadly, the U.S. healthcare industry could save $18 billion by 2026 by using AI to automate administrative tasks like ordering prescriptions. Doctors are using AI to process mountains of data to help them deliver better, more efficient treatments.
It's not just businesses that recognize the potential of AI. Consumers, with ready access to generative AI, are also aware of the savings it brings. It likely wouldn't surprise many readers that Reuters estimates AI will save the average person four hours a week this year and 12 hours a week in five years. Perhaps this is why 80 percent of U.K. customers want to see AI within customer service pathways to help speed resolutions (39 percent) and ensure agents have the proper knowledge (37 percent). Globally, customers want AI support available in legacy industries like insurance; a third of Americans think AI could help them understand complex insurance information, and 45 percent of Australians want AI to help them compare insurance plans.
But experiences, so far, have been sub-par. Underscoring this is the reality that for most people, chatbots are their only experience of AI in customer service, and they tend to leave a bad taste in their mouths.
At its worst, businesses actively use chatbots and maze-like, multi-step processes to stall customers and divert them from human agents in order to cut running costs. The more typical experience is one of generic responses, having to repeat information already provided, and endless loops that take customers back to the beginning.
Stuck on Repeat: The AI Implementation Loop
Ironically, AI in customer service has, for the most part, been stuck in its own endless loop, which has stunted meaningful, deep implementation. We've observed a recurring pattern: AI is superficially implemented without sufficient training, proves unable to resolve complex issues independently, and further development is naturally abandoned. These poor introductory experiences have contributed to a plateau that leaves everyone with AI 1.0 rather than the far more advanced AI we can access today, and the cycle repeats itself.
Our recent survey offers some clues: a third of CX professionals hold off on AI implementation because of cybersecurity concerns. When 87 percent of consumers won't do business with companies that have had data breaches, minimizing your risk exposure is incredibly important. However, so is efficiency and ensuring your customers get the best experience possible: lest we forget that after bad customer experiences, 51 percent of customers avoid spending money with a company, while a further 30 percent seek out competitors.
Ultimately, businesses need to implement AI to support their agents and meet customers' growing expectations without compromising cybersecurity.
Today, advanced chatbots can go beyond pre-set answers and pathways to independently resolve queries by fetching relevant and specific information. Equally, they can filter through swaths of company information for agents and return the most up-to-date advice from the company's knowledge center. Authentication processes need to be established throughout the knowledge-gathering process to ensure that the AI is doing this securely.
These advanced capabilities, and the amounts of data these chatbots can consequently access and hold, make them a bigger target for hackers. Ensuring your tools are certified to security standards such as ISO 27001, following digital hygiene practices like proper authentication procedures, and implementing tools correctly can secure your chatbots and any data they hold. By using certified tools, your operations remain secure while your customers are satisfied. There needn't be a trade-off.
Dvir Hoffman is CEO of CommBox.