We live in a world where database and analytical technologies are maturing at an amazing pace. Capabilities that were once the sole province of extremely large companies and governments are trickling down to our own companies and even becoming stock functionality in the software we use every day, in IT, on our service desks, and even socially.
How do we take advantage of the power of software without crossing ethical and privacy lines? In this article, we’ll explore some real-world examples where the line was crossed. I’ll then suggest some guiding principles and questions we should be asking ourselves and our fellow employees when using these technologies. All in all, our goal is to ask ourselves this: How do we prevent ourselves from turning our everyday software into creepyware?
We’ve all heard about various recent cases of identity theft, account hijacking, credit card fraud, and even cases of photos and videos of people’s private moments being made public. These things are obviously bad, but those kinds of incidents are more about nefarious people doing bad things and breaking into systems they shouldn’t be able to access.
Creepyware, on the other hand, can be defined as "good software gone bad," or, perhaps more to the point, "powerful software, misused under good intentions."
Our first example comes from a New York Times article from a couple of years ago. This article reported the story of a popular department store that wanted to market to expectant mothers. With the help of a mathematician, they found that their databases contained enough information about people and their purchasing habits to enable them to predict which women were expecting—in some cases even before the women themselves knew they were pregnant! They saw this as a great opportunity to target expectant mothers with special offers and improve the company’s bottom line by attracting new customers at a major pivot point in their lives.
Imagine the surprise of a father who, upon opening the mail one day, found a package of coupons for formula and diapers tailored for a pregnant woman. And it was addressed to his teenage daughter—by name. Awkward, right?
The retailer in question now embeds its formula and diaper coupons with a few other coupons for lawnmowers and wine glasses, so recipients think that they’re just getting the same coupon offers as their neighbors. Is this enough to not seem creepy? Was the retailer actually doing anything unethical or creepy, or is the fact that it appeared to be creepy to the public the real issue?
This kind of creepy behavior isn’t isolated to retail. Not long ago, a small manufacturing company was looking for a way to increase security by installing proactive monitoring of Internet traffic. So, it implemented a couple of off-the-shelf products that would issue alerts based on incoming web content and outgoing search terms.
After the program had been in place for some time, the day they’d all been dreading came to pass. Alert! So the IT guy got the CIO, the CIO got someone from HR, and HR grabbed a pink slip and security. The grand procession then marched to the employee’s office, and the CIO reached for the door handle. The group opened the door and entered to confront the perpetrator of the offending search. (My mental image of this moment involves Inspector Clouseau jumping into the room and yelling "A-ha!" to great comic effect.)
The employee in question, a 30-something mother of three, looked up from the microwave lunch on her desk and calmly said, "Can I help you?" She had taken some time during her lunch break to shop for clothing for her upcoming vacation to Cancun. It turns out that the keywords one might use when shopping for swimwear at mainstream retail sites look an awful lot like keywords others might use for more inappropriate searches. Much awkwardness ensued.
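The monitoring products in this story aren’t named, but the failure mode is easy to sketch. A purely hypothetical keyword-based alert rule, like the one below, has no notion of context or intent, so an innocent shopping search trips exactly the same wire as a genuinely inappropriate one:

```python
# Illustrative sketch only -- not any vendor's actual product.
# A naive keyword filter flags a search if any watched term appears.

BLOCKLIST = {"bikini", "swimsuit", "lingerie"}  # hypothetical watch list

def raises_alert(search_query: str) -> bool:
    """Flag a query if any watched keyword appears in it."""
    words = search_query.lower().split()
    return any(word in BLOCKLIST for word in words)

# A harmless vacation-shopping search and a work search, as the
# filter sees them -- it cannot tell shopping from misconduct:
print(raises_alert("womens bikini top for cancun vacation"))  # True
print(raises_alert("quarterly sales report template"))        # False
```

The point isn’t that keyword filters are useless; it’s that an alert from a rule this blunt is a prompt for a quiet human review, not for a procession with a pink slip.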
Now, I happen to know this employee, and she’s one of the most good-natured people I know. As she related this story to me, we were both laughing hysterically. She just shrugged it off, but think about this: What if the employee hadn’t been so forgiving? A veritable task force arriving in one’s office with a pink slip and a security guard is not something most people would take lightly.
This isn’t as innocuous as a mistaken Amazon or Netflix suggestion, or a banner advertisement creepily following us from website to website. Potentially major events like this can affect one’s career and livelihood.
And this is where we start getting closer to home.
A wise man once said, "With great power comes great responsibility." Now, that man may have been a comic book character talking to his Spandex-clad nephew, but you can be assured that the concept behind the quote is sound. Today’s IT and service desk software gives us more and more power with each version.
In addition, as IT departments and service desks, we have access to even more data than the retailers or manufacturers in the examples above. It’s not just about a person’s preferences, brand loyalties, or search terms. We often know which programs our users have installed, how and when they use their systems or a given piece of software, and where they store their documents. We even have access to location history through GPS, door records, and RFID tags. All of this gives us far more power than marketing departments or large corporations could even dream of (to say nothing of governments).
But we use these powers for good, don’t we? Aren’t we the people who hold our companies together and keep the lights on? Isn’t it good to use analytics to predict usage patterns, anticipate capacity, and route people to the most qualified help as soon as possible? Of course! The technologies that are becoming available to us are exciting to the point where many believe the era of IT truly being seen as a valued business partner is here and achievable. (But that’s a different article altogether.)
We have to remember, however, that there’s not a lot of difference, technologically speaking, between a good use of data, such as:
"Fred, we see you solve the most issues with the printer in Accounting, so we’re going to route future questions about this printer to you."
And a creepy use of data such as:
"Fred, we’ve linked our help desk system to the ID badge database, and we see that you use the third-floor washroom a lot. We’re repainting that bathroom next week. Please click here to vote for a color."
It’s not the software that makes things creepy—it’s us!
We have all of this power, and we definitely have the best intentions, but the kinds of mistakes that have been made by much larger entities are now becoming ours to make as well. Can we use this software in a noncreepy way? Consider the following guidelines as a starting point for any potentially creepy project you undertake.
1. If you collect data on your users or customers, it’s your responsibility to protect it.
Even huge companies that presumably have scores of security experts working on their systems have data breaches. Smaller companies often have data breaches that they don’t even know about, and sometimes those breaches are caused by internal employees. If you store data, even if you think that data is anonymized (see the next guideline), you owe it to the people you serve to do your utmost to protect it. It’s not that collecting the data is necessarily bad; it’s that letting other people access that same data would be.
2. There is no such thing as anonymized data.
No matter how much we want to believe that our data is anonymized, we live in a world where database systems and analytics are powerful enough to find and correlate data in ways we could never accomplish manually. Who would have even dreamed a database could accurately tell that a woman was pregnant before she knew herself? Even if we can’t correlate the data today, it’s likely that a future upgrade or a clever employee will figure out how to do so in the future. Treat all of your collected data as if it were personally identifiable, and protect it to the best of your ability.
3. A piece of data is only as reliable as the human who dreamed up the search criteria.
Whether you’re confronting employees over swimwear searches or sending a vegan an email promotion for a box of steaks, never forget that the metrics, alerts, and reports are set up and maintained by humans. Humans are fallible, and, therefore, at least for the foreseeable future, so are computers. Before you gather up a posse to go fire an employee, or even award a performance bonus for closing the most tickets in a month, make sure you look deeper into the data to validate and verify what the system is saying.
4. When kicking off a new project involving data-mining or analytics, appoint a designated dissenter.
This may sound weird, but it really helps. Always appoint someone on your team to play devil’s advocate and question whether a deliverable is really worth the effort or potential creepiness. Have this person put himself or herself in the shoes of the person whose data is being mined, and ask, "Would I want someone to be looking at data about me and using it in this manner?" It’s amazing the clarity a little empathy can bring to a project.
5. It doesn’t take a lot of high-falutin’ technology to get creepy.
Even something as innocuous as Microsoft Excel can be creepy, depending on how you use it. There are numerous stories about spreadsheets causing trouble in the wrong hands. Many companies even store salary and other HR data in spreadsheets on shared network drives, even emailing them back and forth between employees. The same guidelines above apply when it comes to protecting this data.
In summary, we’re surrounded by powerful technology that can improve our lives and our businesses, and there are plenty of examples of entities choosing to use data in creepy ways. But there’s hope.
Rather than fear the software, we can embrace it in our own businesses by following the simple guidelines above and, most importantly, by being empathetic and thinking about the people behind the data.
You have the power to use your software for good—don’t be the one to turn it into creepyware!
Josh Caid is the director of product management at Cherwell Software. He has been managing best-of-breed software products in the CRM and ITSM spaces for more than fifteen years, and he has a passion for creating software that makes people’s daily lives better by solving real-world problems. Before settling down to focus his creativity in the software realm, Josh was a professional musician. He still loves to crank his amp up to eleven, much to the exasperation of his neighbors.