Date Published December 1, 2020 - Last Updated November 30, 2020
This article first appeared on No Jitter.
Artificial Intelligence (AI) is no longer just a buzzword discussed in theory – we’ve reached a point where AI is delivering real, measurable impact across a wide range of industries. The AI solutions in use today automatically capture and make decisions based on enormous amounts of data – from supporting healthcare diagnoses and detecting fraud in financial services to improving how organizations engage with customers.
The application of AI to improve employee and customer experiences is, in my opinion, one of its most powerful use cases. Many organizations already have a trove of customer interaction data at their fingertips – whether from call center interactions, chatbots, or other channels – and AI has the power to turn that data into actionable intelligence that boosts customer retention, improves consumer loyalty, and engages employees.
But when AI is fed a flood of data with no human oversight, it can go in directions we don’t expect – and the unintended consequences aren’t always benign. Remember Tay, the Microsoft Twitter bot that accidentally learned to act racist?
Particularly in the customer service segment, it’s inevitable that AI will ingest data reflecting the ugliest parts of society, like bias, racism, and sexism. Companies know this – and many decide the best solution is to put guardrails in place to mitigate or eliminate data bias.
I believe there is a better way. This data is a reflection of what is, but it doesn’t have to be. What if, instead of editing it out, we kept it (for now), operationalized it and actually learned something from it? It’s possible that AI might even make us more human.
Taking a Different Approach to Biased Data
Most organizations – at least those with the capability to spot abuse, bias, and racism in real time – default to forgoing analysis of these instances. That means such interactions aren’t factored into go-forward decisions about customer service and the employee experience. On some level, this makes sense. But what about the employee who was on the receiving end of the abuse or racism? What about the bottom-line impact on the entire organization and its brand reputation?
Imagine you’re a customer service representative. You take a call and the words you hear from the consumer are, “You sound Black. I want to speak to a white person.” If that sounds hypothetical to you, it’s not. It actually happened to me when I was a contact center agent. And I’m definitely not alone.
Interactions like this occur more often than we like to admit – and they have a significant impact on employees. In my experience, a racist interaction directly affects an agent for 30 to 45 minutes after the call has concluded; yet during that time, they are still being graded and scored on their performance.
Resentment festers, and agents lose trust that the next caller will be more human. Many think, “Did no one hear what this customer just said?” They stay on emotional guard, able to give only the bare minimum to each call – and they get the bare minimum in performance scores as a result. Talk about a one-two punch.
When harmful interactions occur, an organization’s top priority must be to protect and defend frontline employees. Leaders must recognize if, when, and why their team is facing these situations so they can take real, difference-making action. These insights can help brands make important decisions – like banning abusive customers, equipping employees with real-time, situational guidance for responding to abuse, and flagging when a supervisor needs to intervene. The insights can also tell us when to give employees a break and extra support after difficult engagements.
None of this is possible unless a brand monitors 100% of its customer interactions in real time. The unfortunate reality: most organizations still analyze only 3% to 10% of their customer interactions. That is a mistake from a business perspective – and dangerous, potentially negligent, from an employee experience perspective.
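To make that concrete, here is a minimal sketch, in Python, of what retaining and operationalizing flagged interactions might look like. Everything in it is hypothetical – the Utterance record, the phrase list, and the alert_supervisor and offer_break hooks are illustrative placeholders, and a real deployment would replace the naive lexicon with a trained classifier scoring every live interaction.

from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical lexicon for illustration only; a production system would use
# a trained abuse/bias classifier, not a phrase list.
ABUSIVE_PHRASES = {
    "i want to speak to a white person",
}

@dataclass
class Utterance:
    call_id: str
    agent_id: str
    text: str

def is_abusive(text: str) -> bool:
    """Naive stand-in for a real-time abuse/bias detection model."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in ABUSIVE_PHRASES)

def monitor(stream: Iterable[Utterance],
            alert_supervisor: Callable[[Utterance], None],
            offer_break: Callable[[str], None]) -> list[Utterance]:
    """Scan 100% of utterances as they arrive; flagged interactions are
    retained for downstream analysis rather than discarded."""
    flagged: list[Utterance] = []
    for utt in stream:
        if is_abusive(utt.text):
            flagged.append(utt)        # keep the data so we can learn from it
            alert_supervisor(utt)      # real-time intervention
            offer_break(utt.agent_id)  # post-call support for the agent
    return flagged

The design choice worth noticing is that flagged utterances are kept and returned for analysis instead of being filtered out of the record – which is exactly the argument above.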
AI For Good: How Organizations Can Drive Change
Your customer engagement and interaction platforms – the call center, chat, social media, and more – are the epicenter of any business. The interactions that occur between your brand, employees, bots, and customers have far-reaching implications for customer experience, brand reputation, employee retention, and the entire organization. Unfortunately, they are also a platform for ugliness.
Being transparent about bias, racism and abuse leads to greater awareness of existing problems and, eventually, solutions. While many experts have rightfully pointed out the potential bias associated with some AI algorithms, AI can also be an enabler in our journey to spark change – when deployed correctly and thoughtfully. The correct way isn’t blinding yourself to what’s bad and ugly; it’s generating deep understanding to drive meaningful action, from improved employee experience to an organization built on ethics and fairness.
When humans learn how and why these problems occur, we have a better avenue for instigating change – improving the customer and employee experience and addressing systemic issues at large.
Conrad is currently a feature engineer on the CallMiner Research Team, where he leverages his 10-plus years of call center experience and programming skills to help move projects from research to reality. He is the creator of the CallMiner Chrome extensions and has a strong technology background, specializing in business process optimization through web application extensions and various technology suites and frameworks.