How Leaders Measure the Success of Customer Service and Support

Customer service and support leaders have more than 200 metrics available to measure performance, but some are used far more than others. While it is easy to follow the crowd, our data suggests that doing so can mean missed opportunities to demonstrate the function's value and to diagnose performance issues.

Customer satisfaction score (CSAT), Net Promoter Score (NPS), and average handle time (AHT) are the three most commonly used metrics in customer service and support. Yet none of these metrics best articulates the function's objectives of increasing customer loyalty, reducing organizational costs, or cross-selling and upselling. Furthermore, most metrics are derived from a single input source, such as operational data or post-transaction surveys, and when they are used cross-functionally, definitions and calculations are often inconsistent. This can lead to misinterpretation and make it harder to demonstrate functional value to stakeholders or to build a compelling case for investment.

As a result, we have three key recommendations:

  1. Enhance the value and actionability of loyalty and cost reduction metrics by adding, replacing, or removing metrics to better align them with each business objective.
  2. Increase measurement accuracy and confidence by adopting multisourced metrics calculations.
  3. Improve internal calibration, collaboration, and the perceived value of service and support business outcomes by standardizing metrics definitions and calculations across the enterprise.

Service and support leaders' most common objective is to mitigate customer disloyalty caused by product or service issues and to grow customer loyalty through increased retention, wallet share, and advocacy after the interaction. However, the most commonly deployed metrics focus on the interaction itself rather than on the outcomes it drives for the customer.

Service leaders frequently use CSAT and NPS in executive presentations, and C-suite members use them for peer benchmarking. However, these measures focus on the quality of the service interaction or on customers' relationships with the company or brand as a whole. Their ability to demonstrate service and support's contribution to customer loyalty is limited.

Our research has shown that, while commonly used, CSAT and NPS are less accurate in predicting service and support's impact on loyalty than other metrics. Specifically, the customer effort score (CES) and value enhancement score (VES) have greater predictive accuracy for assessing how a service and support interaction affects customers' likelihood to be retained, increase their wallet share, and advocate for the organization.

Service and support cost reduction efforts focus on channel-specific measures of efficiency rather than on broader opportunities to reduce the cost of customers' multichannel, multi-interaction journeys.

It is notable that AHT, the most commonly used cost reduction metric, measures time rather than money. Fifty-seven percent of organizations in our sample measured AHT with a cost reduction focus. The metric is somewhat more prevalent in B2C organizations (68 percent), but it remains the most used cost metric in B2B organizations as well (49 percent). While AHT does correlate with the cost of specific interactions, it fails to account for the many other factors that drive cost at the organizational level. Adopting a cross-channel financial measure such as cost per resolution could help leaders better understand where costs are incurred and take steps toward longer-term cost reduction across the customer service journey.
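To make the contrast concrete, the following sketch compares a per-channel view with a cross-channel cost per resolution. All channel names and figures are hypothetical, invented for illustration; the source does not prescribe a specific calculation.

```python
# Hypothetical channel data: total cost, contacts handled, and cases resolved.
# These numbers are illustrative only, not drawn from the research.
channels = {
    "phone":        (500_000, 40_000, 30_000),
    "chat":         (120_000, 25_000, 18_000),
    "self_service": (60_000, 100_000, 55_000),
}

total_cost = sum(cost for cost, _, _ in channels.values())
total_resolutions = sum(res for _, _, res in channels.values())

# Cross-channel financial measure: every dollar spent, divided by every
# case actually resolved, regardless of which channel did the resolving.
cost_per_resolution = total_cost / total_resolutions
print(f"Cross-channel cost per resolution: ${cost_per_resolution:.2f}")

# Per-channel view for comparison, showing where resolutions cost the most.
for name, (cost, _, resolutions) in channels.items():
    print(f"  {name}: ${cost / resolutions:.2f} per resolution")
```

A measure like this surfaces journeys where cheap individual contacts (e.g., self-service) still generate expensive outcomes because many contacts never resolve the issue, which channel-level AHT hides.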

Leaders Derive Their Metrics from Survey and Operational Data

Leaders use a wide variety of data types to inform their metrics, though some are more common than others. Our survey data suggests that, across all the metrics leaders reported measuring, operational data is the most common data source. This is the case for CX metrics (e.g., CSAT), financial metrics (e.g., cost per contact), and channel-specific metrics (e.g., average speed of answer or self-service containment). However, for CX metrics, leaders also rely heavily on post-transactional service and support voice of the customer surveys.

A challenge we observed, however, is over-reliance on a single data source for each metric. This is risky given low survey response rates, internal data management challenges that prevent leaders from understanding customers' end-to-end journeys in detail, and similar limitations. The most progressive customer service and support organizations are exploring how to multisource their metrics to increase accuracy.
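One simple way to multisource a metric is to weight the survey-based score by its response rate and fill the remaining weight with an operationally derived proxy (for example, a score inferred from repeat contacts or escalations). The weighting scheme below is an assumption for illustration, not a formula from the research.

```python
def multisource_score(survey_score: float,
                      response_rate: float,
                      operational_score: float) -> float:
    """Blend a survey metric with an operational proxy.

    The survey score is weighted by its response rate, so a
    low-response survey contributes less; the operational proxy
    fills the remaining weight. This linear blend is a
    hypothetical scheme chosen for illustration.
    """
    w = max(0.0, min(1.0, response_rate))  # clamp to [0, 1]
    return w * survey_score + (1 - w) * operational_score

# A 4.2/5 survey CSAT with only a 15% response rate, blended with an
# operational proxy of 3.6 derived from repeat-contact data.
print(multisource_score(4.2, 0.15, 3.6))  # 3.69
```

The design point is that confidence in each source, here approximated by response rate, should determine how much that source moves the reported number.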

Many metrics are shared either cross-functionally with the wider company (e.g., CSAT or NPS might also be used by marketing, sales, supply chain, etc.) or across channel teams within the service and support function (e.g., cost per case or first contact resolution).

Consistency of measurement is key to the success of cross-functional collaboration and efforts to break down channel silos. Specifically, functions and teams within the organization should define and calculate a metric in the same way. However, service and support leaders report that many of their metrics are measured inconsistently, either cross-functionally or within service and support.

Therefore, service and support leaders should work with their teams and cross-functional peers to adopt best-in-class measurements that are defined and calculated consistently across the organization. Failure to do this can lead to confusion, making it more difficult for service organizations to clearly demonstrate their value to stakeholders or make a compelling case for investment.


Christopher Sladdin is a senior principal analyst in Gartner's Customer Service & Support Practice. J.J. Moncus is a principal researcher in Gartner's Customer Service & Support Practice.