Why Incomplete AI Hurts CX and Productivity

Incomplete artificial intelligence solutions carry high costs in terms of customer experience (CX) and worker productivity. When an AI solution fails to deliver a good CX, consumers become frustrated and lose trust in AI's capabilities. That has a ripple effect: in addition to damaging brand reputation, it hurts productivity and hinders workflows, because workers end up spending more time intervening to complete tasks. The benefits of AI are overshadowed by the costs of implementing it. In effect, the solution doesn't just fail; its failure creates new problems. But the issue isn't caused by generative AI; it's caused by the implementation.

A recent McKinsey report determined that more than 95 percent of customer service interactions can be handled via AI on digital channels, as opposed to in person or over the phone. But there's an important caveat: that goal is achievable only for companies operating at the highest level of AI mastery. Mastery takes time in any field, and because generative AI is relatively young, few firms have achieved it so far. That may be why Deloitte found that while nearly all CX leaders are confident AI could improve customer experience, only three in ten said their companies were regularly using it. That's a big disconnect between the exciting promise of AI and the reality of implementation.

To be successful, AI solutions must be able to complete the task and close the loop, facilitating the full customer journey from inquiry to resolution. That's a complicated mission in any industry, but it's especially complex in heavily regulated sectors like healthcare, where bad CX outcomes have both human and financial costs.

Understanding AI Limitations & Benefits Based on Use Case

A good place to start is by understanding what AI can and can't do. Unfortunately, there isn't a universal bright line here. As a researcher who embedded with Boston Consulting Group to chronicle the firm's early experiments with AI told Harvard Business Review, "[identifying the] jagged technological frontier [between what AI can and can't do] requires studying it with your people, deeply in your context." If that same researcher had embedded with a healthcare company, they would have found that AI can be beneficial in a number of contexts throughout the healthcare journey, but that the benefits vary widely from one context to the next.

Already, healthcare providers are using AI to improve scheduling, route patients to the correct provider, and instruct patients on how to prepare for routine procedures. When thoughtfully deployed, these kinds of solutions improve the patient experience. They also boost productivity by freeing support staff to address non-routine issues.

During a visit to a provider, AI transcription tools can help doctors focus on their patients and provide better personal care. In effect, AI runs in the background—a big improvement over talking to a doctor who is preoccupied with inputting patient data into a tablet or desktop. But while AI transcription tools are increasingly common, medical professionals need Health Insurance Portability and Accountability Act (HIPAA)-compliant solutions that make patients feel their privacy is of the utmost concern. In this context, trust is an essential part of implementation for AI transcription tools: without it, the patient experience suffers, and providers end up spending time addressing a failed AI deployment on top of the patient's needs.
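
To make that concrete, here is a minimal, illustrative sketch of one small building block of such a pipeline: scrubbing obvious identifiers from a transcript before it is stored or used for training. The patterns and names below are hypothetical; real HIPAA de-identification covers all eighteen identifier categories and requires validated tooling, not a handful of regular expressions.

```python
import re

# Illustrative patterns only; a production de-identification pipeline is far
# more thorough than these three regexes.
PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(transcript: str) -> str:
    """Replace obvious identifiers with placeholders before storage or training."""
    for placeholder, pattern in PATTERNS.items():
        transcript = pattern.sub(placeholder, transcript)
    return transcript

print(scrub("Call me at 617-555-0134 or jane.doe@example.com."))
# -> "Call me at [PHONE] or [EMAIL]."
```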

AI can also make a big difference for patients after they leave the doctor's office or hospital. Reviewing discharge instructions or directions for taking medication is time-consuming and impersonal. If patients don't have questions, they waste time waiting for an overworked pharmacist to read boilerplate information; if they do have questions, it's difficult to get timely answers. AI can walk patients through these instructions at their own pace, using language they understand. But the quality, appropriateness, and completeness of the input data is what makes or breaks the experience.

AI's Impact on CX Depends on Completeness

AI solutions require complete data. When AI systems are trained on datasets with missing information, several things can go wrong: the introduction of bias, reduced model accuracy, less reliable decision-making, overfitting (learning noise rather than the true underlying patterns), and answers that are factually correct but industry-inappropriate.
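
As a rough illustration, a training pipeline can refuse to proceed when required fields are missing. The sketch below assumes a pandas DataFrame of support records; the column names, file path, and the 2 percent threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical columns for a healthcare support dataset; names are illustrative.
REQUIRED_COLUMNS = ["patient_question", "approved_answer", "source_document", "last_reviewed"]
MAX_MISSING_RATE = 0.02  # reject data with >2% missing values in any required field

def check_completeness(df: pd.DataFrame) -> list[str]:
    """Return a list of problems that should block training on this dataset."""
    problems = []
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            problems.append(f"missing required column: {col}")
            continue
        missing_rate = df[col].isna().mean()
        if missing_rate > MAX_MISSING_RATE:
            problems.append(f"{col}: {missing_rate:.1%} missing (limit {MAX_MISSING_RATE:.0%})")
    return problems

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")  # hypothetical input file
    issues = check_completeness(df)
    if issues:
        raise SystemExit("Refusing to train:\n" + "\n".join(issues))
```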

Horizontal generative AI initiatives often try to circumvent this challenge by using many datasets from a broad array of sources scraped from various pages on the internet, even if those sources are unverified or unknown to the developer. Many people believe that providing more data can enhance completeness and remove biases. However, when AI solutions are built and trained on unverified data sources, the result can be hallucinations, biased or prejudiced answers, and responses that are inappropriate for the industry. Organizations that leverage AI trained on unverified sources also open the door to legal trouble if the resulting output infringes on copyrighted materials (art, literature, proprietary information, etc.).

To truly make a positive impact on CX and productivity, we need end-to-end AI solutions that continuously learn and improve. AI systems should be designed to handle complete processes, from initial customer interaction to final resolution. This requires integrating with back-end systems and databases to access necessary information and perform required actions. It also means that the AI system should be capable of learning from interactions and improving performance over time.
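
Here is a minimal sketch of what "closing the loop" might look like in code, with entirely hypothetical names: every inquiry ends in either a grounded answer or a human handoff (never a dead end), and each outcome is logged as a learning signal for the next refinement cycle.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resolution:
    response: str
    escalated: bool = False

class ClosedLoopAssistant:
    """Hypothetical assistant that owns the full journey: answer, escalate, learn."""

    def __init__(self) -> None:
        self.feedback_log: list[dict] = []

    def answer(self, inquiry: str) -> Optional[str]:
        # Stand-in for a model call grounded in verified back-end data.
        known = {"refill status": "Your refill will be ready for pickup tomorrow."}
        return known.get(inquiry.lower())

    def handle(self, inquiry: str) -> Resolution:
        response = self.answer(inquiry)
        if response is None:
            # Closing the loop means a human finishes the task, not a dead end.
            return Resolution("Connecting you with a staff member.", escalated=True)
        return Resolution(response)

    def record_feedback(self, inquiry: str, result: Resolution, satisfied: bool) -> None:
        # Logged outcomes become the training signal for the next refinement cycle.
        self.feedback_log.append(
            {"inquiry": inquiry, "escalated": result.escalated, "satisfied": satisfied}
        )

assistant = ClosedLoopAssistant()
result = assistant.handle("Refill status")
assistant.record_feedback("Refill status", result, satisfied=True)
print(result.response)
```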

Complete AI Is Most Productive When Used as a Tool for Workers

Humans don't just have a role to play in creating and maintaining end-to-end AI solutions; we have a big responsibility here. On a tactical level, regular updates and refinements are necessary to address gaps in AI capabilities and ensure task completion. Healthcare data is doubling roughly every 75 days, and each doubling brings relevant data points that must be incorporated to keep an AI solution current. Those refinements must be measured against key performance indicators: an AI tool without metrics to track things like task completion rates, customer satisfaction, and efficiency is an incomplete AI solution.
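
As a sketch of what that measurement could look like, assuming hypothetical interaction-log fields:

```python
from statistics import mean

# Hypothetical interaction-log records; the field names are illustrative.
interactions = [
    {"completed": True,  "csat": 5, "seconds_to_resolve": 42,  "escalated": False},
    {"completed": False, "csat": 2, "seconds_to_resolve": 310, "escalated": True},
    {"completed": True,  "csat": 4, "seconds_to_resolve": 58,  "escalated": False},
]

def kpi_report(log: list[dict]) -> dict:
    """Roll raw outcomes into the metrics a complete deployment must track."""
    return {
        "task_completion_rate": mean(r["completed"] for r in log),
        "avg_csat": mean(r["csat"] for r in log),
        "avg_seconds_to_resolve": mean(r["seconds_to_resolve"] for r in log),
        "escalation_rate": mean(r["escalated"] for r in log),
    }

print(kpi_report(interactions))
```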

Completeness for AI isn't simply a matter of data, analytics, and technological capabilities. No AI system is complete without human oversight. As a general principle, tools serve humans, not the other way around. But human oversight isn't simply a matter of striking a balance with automation and double-checking the outcomes. Human oversight is essential because humans have something we can't build with an AI model—a conscience.

Humans Are Essential to the AI Equation

Without a conscience, AI risks compromising values like transparency, fairness, consent, and privacy. A machine without a conscience will hurt humans rather than help them. Building space for a human in the loop will improve CX and productivity. But more importantly, humans in the loop, supported by complete AI systems, will ensure that we deliver on the promise to lift up humanity and improve patient care.


Paul Chang is CEO of Brand Engagement Network (BEN).