What Customer Service and Support Leaders Should Know About ChatGPT

Since the launch of ChatGPT late last year, social media and the press have been abuzz with discussions of the possibilities and dangers of this innovation, ranging from its ability to debug code to its potential to write essays for college students. It has captured the world's attention because it represents the first widely known artificial intelligence technology to challenge the one trait humans always assumed they would hold over machines: creativity.

ChatGPT is a chatbot that combines advances in AI with fine-tuned large language models (LLMs). It was trained on 300 billion words drawn from books, online text, Wikipedia articles, and code libraries, then fine-tuned with human feedback. The technology can do everything from generating, classifying, and summarizing content to answering questions on a wide variety of topics, admitting its mistakes, and challenging incorrect premises.

ChatGPT is clearly raising the bar for general-purpose chatbots. But the question remains: Can ChatGPT be leveraged in service organizations, or is it just hype?

ChatGPT's current adoption is driven by the curiosity of customers and service organizations eager to generate content. At present, ChatGPT is available free of charge via the direct interface provided by OpenAI. The ChatGPT model can also be accessed through tools that help create, tune, and evaluate prompts (the user input) and ChatGPT's outputs.

ChatGPT's free direct access might not continue indefinitely; a premium tier is already available for $20 per month. In the near future, ChatGPT will also be accessible for a fee via APIs, allowing it to be incorporated into enterprise workflows.
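
As a rough illustration, an API integration might look like the sketch below. It follows OpenAI's published API conventions, but the endpoint, model name, and payload shape shown here are assumptions and may differ once enterprise access is generally available.

    # Illustrative sketch only: assumes an OpenAI-style chat completion endpoint.
    # The endpoint URL, model name, and response format are assumptions, not a
    # documented ChatGPT enterprise API.
    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
    API_KEY = os.environ["OPENAI_API_KEY"]                   # assumed credential

    def draft_customer_reply(case_notes: str) -> str:
        """Ask the model to draft a customer-facing reply from internal case notes."""
        payload = {
            "model": "gpt-3.5-turbo",  # assumed model identifier
            "messages": [
                {"role": "system",
                 "content": "You are a customer service assistant. Be concise and accurate."},
                {"role": "user",
                 "content": f"Draft a reply to the customer based on these case notes:\n{case_notes}"},
            ],
        }
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]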

Service and support organizations can leverage the direct interface to generate content such as emails, FAQs, knowledge articles, and self-service guidance, as well as to summarize case or issue information and classify or rewrite content. Service and support teams should establish governance and processes to review ChatGPT's outputs for accuracy before adopting them.

Service and support organizations can also use prompts to incorporate ChatGPT into their workflows.
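
For example, a workflow step might wrap a prompt template around case data and route the model's draft to a human reviewer before anything reaches a customer. The minimal sketch below assumes a hypothetical generate() callable (such as a wrapper around the API call shown earlier); the prompt wording and field names are illustrative, not prescribed.

    # Minimal sketch of a prompt-driven workflow step: summarize and classify a
    # support case, then flag the draft for human review (governance step).
    # The generate() callable and field names are hypothetical.

    SUMMARY_PROMPT = (
        "Summarize the following support case in three bullet points, then "
        "classify it as one of: billing, technical, account.\n\nCase:\n{case_text}"
    )

    def summarize_case(case_text: str, generate) -> dict:
        """Build the prompt, call the model via generate(), and flag the draft for review."""
        draft = generate(SUMMARY_PROMPT.format(case_text=case_text))
        return {
            "draft_summary": draft,
            "needs_human_review": True,  # every output is reviewed for accuracy before use
        }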

Vendors will continue to build domain-customized models on GPT-2, GPT-3, or other available LLMs, which service and support organizations can then adopt. As evidenced by Microsoft's recent announcement of an AI-powered Bing search engine and Edge browser, companies will begin to integrate GPT models into their products and platforms.

Service leaders should partner with other leaders in the organization and initiate projects to do the following:

  • Understand and evaluate GPT technology (including OpenAI's GPT and other companies' LLMs) and the best mode of access for the organization.
  • Identify workflows and use cases that can benefit from the technology.
  • Establish a governance body and a training and coaching plan for employees.

While ChatGPT is creating excitement and hype, service leaders will need to address the following significant concerns before applying the technology:

  • Accuracy. Large models with billions of parameters carry a high probability of confidently generating incorrect responses, also called hallucinations.
  • Influence. The models learn continuously, and bad actors can manipulate them into delivering responses that are potentially damaging to the business and its customers.
  • Verbosity. ChatGPT generates contextual text when answering user questions, and its responses can be verbose at times.
  • Concentration of Knowledge/Information. Given the high cost of training and running the model, only a few companies with deep pockets, such as OpenAI (which currently spends more than $100,000 per day to run ChatGPT), can host and operate these models. As companies use these models and the models learn from every interaction, ownership of the resulting enhanced models (depending on where it lies) could create a significant imbalance in where knowledge is concentrated in the future.
  • Regulatory. From an IP perspective, the current ChatGPT model is trained on vast amounts of data from the internet, and it is unclear whether reuse of this information carries legal implications.

Regarding information security, companies will need to provide their own data to the base models in order to customize them. How the base models will use that data should be reviewed and understood to ensure the security of the company's information.

Service leaders can get started with ChatGPT by investing time to understand the technology and its potential applications, assessing available models, and putting in place the right talent to drive eventual ChatGPT adoption.

The important takeaway is that ChatGPT won't replace existing expert human agents or curated self-service applications but will augment the service and support ecosystem with a critical capability that will continuously learn and improve with each interaction.


Uma Challa is a senior director analyst within Gartner's Customer Service and Support Practice, covering digital customer service, CX, and customer service and support strategy/leadership.