Date Published July 11, 2018 - Last Updated December 13, 2018
Each month, I highlight one Key Performance Indicator (KPI) for service and support. I define the KPI, provide recent benchmarking data for the metric, and discuss key correlations and cause-and-effect relationships for the metric. The purpose of the column is to familiarize you with the KPIs that really matter to your organization and to provide you with actionable insight on how to leverage these KPIs to improve your performance! This month, I depart from my usual format. Instead of discussing a single metric, I will explore the cause-and-effect relationships between desktop support KPIs. A companion article to this one explored the cause-and-effect relationship of service desk KPIs.
Cause-and-Effect for Desktop Support KPIs
Many of us have heard the sage advice “You can’t manage what you don’t measure.” This is particularly true for desktop support, where effective performance measurement is a prerequisite for effective decision-making. Despite the widespread belief in this statement, few desktop support groups use KPIs to their full potential. In fact, the vast majority of desktop support groups use metrics to track and trend their performance, but nothing more! Unfortunately, in this mode desktop support misses the real value of performance measurement by failing to exploit the diagnostic and prescriptive capabilities of KPIs. The true potential of KPIs is unlocked only when they are used holistically: not just to measure performance, but also to diagnose the underlying drivers of performance gaps and then take steps to close or mitigate those gaps.
The key to using KPIs diagnostically and prescriptively is to understand their cause-and-effect relationships. You can think of these relationships as a linkage in which all of the KPIs are interconnected. When one KPI moves up or down, other KPIs invariably move with it. Understanding this linkage is enormously powerful because it provides insight into the levers you can pull to effect continuous improvement and achieve desired outcomes.
The diagram below represents the desktop support KPI linkage and will be central to our discussion. The metrics shown in red have been the subject of past Metric of the Month articles.
The Foundation Metrics
Virtually everything undertaken by desktop support can be viewed through the lens of cost and quality. Will this new technology reduce my costs? Will this new process improve customer satisfaction? This insight is crucial because it greatly simplifies decision-making for desktop support. Any undertaking that does not have the long-term effect of improving customer satisfaction, reducing costs, or both is simply not worth doing. This is why cost per ticket and customer satisfaction are known as the foundation metrics.
These metrics are also very helpful for telling the story of desktop support performance. Most people instinctively understand cost per ticket and customer satisfaction, so it is easy to have a discussion about the performance of desktop support in the context of these two metrics. It is important to note, however, that the foundation metrics cannot be directly controlled. Instead, they are controlled through their underlying drivers.
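To make the two foundation metrics concrete, here is a minimal sketch of how they are typically calculated. The formulas and all figures are illustrative assumptions for this example, not MetricNet's official definitions.

```python
# Illustrative sketch of the two foundation metrics. The formulas shown
# (total cost / ticket volume; satisfied responses / total responses) are
# common conventions, and every number below is a hypothetical example.

def cost_per_ticket(total_monthly_cost: float, tickets_handled: int) -> float:
    """Fully loaded monthly desktop support cost divided by monthly ticket volume."""
    return total_monthly_cost / tickets_handled

def customer_satisfaction(satisfied_responses: int, total_responses: int) -> float:
    """Fraction of survey respondents who rated the service satisfactory."""
    return satisfied_responses / total_responses

# Hypothetical month: $62,000 in fully loaded costs, 1,000 tickets,
# 846 satisfied responses out of 900 surveys returned.
print(round(cost_per_ticket(62_000, 1_000), 2))    # 62.0
print(round(customer_satisfaction(846, 900), 2))   # 0.94
```

Tracking these two numbers month over month is what most groups already do; the point of this article is that the linkage beneath them is where the leverage lies.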
The Underlying Drivers
Every KPI in desktop support is either directly or indirectly connected to cost per ticket and customer satisfaction. Those that directly impact the foundation metrics are called the underlying drivers. These include technician utilization, the ratio of technicians to total headcount, first contact resolution rate for incidents, technician job satisfaction, and mean time to resolve (MTTR). Improvements in any of these metrics produce corresponding improvements in the foundation metrics. But unlike the foundation metrics, which cannot be directly controlled, the underlying drivers can be. In fact, this is where you have the greatest leverage to impact the cost and quality of desktop support!
If a desktop support group is struggling with high costs, for example, reducing the cost per ticket can often be achieved by increasing technician utilization or by reducing technician absenteeism and turnover. Likewise, if the goal of desktop support is to improve customer satisfaction, you can achieve this by improving the primary service level metric, mean time to resolve. The cause-and-effect relationship between incident MTTR and customer satisfaction was discussed in my Metric of the Month: Incident Mean Time to Resolve and is shown below.
The Bellwether Metrics
Technician job satisfaction and technician training hours are considered bellwether metrics because they are at the base of the KPI cause-and-effect diagram and impact virtually every other metric in desktop support. Any movement in the bellwether metrics will be felt throughout the KPI linkage and will eventually have an impact on the foundation metrics. If I know the technician satisfaction and training hours for a desktop support group, I can almost always predict what the cost and customer satisfaction will be.
High levels of technician job satisfaction translate into lower absenteeism and turnover, which in turn translate into lower cost. Likewise, above-average training hours almost always produce a higher first contact resolution rate for incidents, which then drives higher customer satisfaction. Moreover, training hours are one of the key drivers of technician job satisfaction, and therefore represent a high-leverage opportunity for desktop support to improve performance on both cost per ticket and customer satisfaction. The figure below shows the impact of training hours on technician job satisfaction.
Once you become familiar with the cause-and-effect relationships of desktop support KPIs, you will be in a much better position to identify, diagnose, and act upon any performance gaps in desktop support. This includes positive performance gaps, which you want to perpetuate, as well as negative performance gaps, which you can mitigate or eliminate by improving the underlying drivers.
Please join me for the next Metric of the Month where I will discuss technician turnover and the key drivers of turnover in IT service and support.
Jeff Rumburg is the winner of the 2014 Ron Muns Lifetime Achievement Award, and was named to HDI’s Top 25 Thought Leaders list for 2016. As co-founder and CEO of MetricNet, Jeff has been retained as an IT service and support expert by some of the world’s largest corporations, including American Express, Hewlett Packard, Coca-Cola, and Sony. He was formerly CEO of the Verity Group and a Vice President at Gartner. Jeff received his MBA from Harvard University and his MS in Operations Research from Stanford University. Contact Jeff at [email protected]. Follow MetricNet on Twitter @MetricNet.