Date Published November 29, 2019 - Last Updated December 17, 2019
Once I was having coffee with a group of neighbors, one of whom happened to be a retired professor of economics. Since my degree was in political science, I engaged him on which economic philosophies he thought were most effective. Instead of throwing out the usual names like Hayek or Keynes, he responded, “The totality of economics can be boiled down to one question: What are the opportunity costs of doing something else?” At first, I thought he was being somewhat flippant, but as the conversation wore on, it became apparent that he wasn’t. This, for him, was a fundamental issue.
This equation also rears its head in IT, particularly when evaluating new software implementations and proof-of-concept projects. Perhaps because of the speed of change, or perhaps in spite of it, the question gets morphed into a slightly different statement: “If it ain’t broke, don’t fix it.” “If it ain’t broke, don’t fix it” is a fallacy in technology. Let me show you why, how not to fall victim to it, and the best ways to self-correct if you do.
“If it ain’t broke, don’t fix it” only works in a closed system, and IT is not a closed system. Even if your budgets are set, your processes are efficient, and your ROI is acceptable, if you’re not getting better, you’re getting worse. Why? Because, again, IT is not a closed system: the cloud is constantly driving agility and efficiency, costs are routinely being driven down, new and more capable technologies are being developed every day, and, perhaps most importantly, new vulnerabilities are constantly being discovered. Believing you don’t need to switch out that mainframe in the back room running AS/400 because it has always run well is to compound risk. Perhaps that’s acceptable risk, but that is a different question. Regulatory or audit requirements aside, the cost of consistently doing “nothing” eventually produces diminishing returns, after which stagnation leads to negative results.
Now, let me state for the record: I am not saying run to the nearest software vendor and sign that PO. Proof-of-concept projects, even “free” ones, have a cost in time and effort, and the cost of a poorly researched or poorly implemented software purchase can be worse than having done nothing at all. But technologies are not closed systems either; they are (hopefully) constantly receiving updates, bug fixes, and new features, and understanding this invalidates the “If it ain’t broke, don’t fix it” line of logic. The very vendors you’re partnering with know it isn’t true; otherwise, why would they ever push an update? There’s also end-of-life sunsetting to worry about. Adobe Flash is going away in 2020. A number of Microsoft products will lose their support in 2020. Early versions of TLS went out of PCI compliance in 2018. So if you’re thinking that doing nothing is “better” than doing something, remember that this tradeoff is not just a business decision (“We’ve always done it that way, and it works.”) but also a technological one (“You mean to say the cloud may actually be more secure than on premises?” Yes...and this article is two years old).
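To make the sunsetting point concrete, here is a minimal Python sketch, not from the original article, for spot-checking whether one of your servers still accepts a deprecated TLS version. The hostname is a placeholder, and note that a Python build linked against a modern OpenSSL may refuse to negotiate TLS 1.0/1.1 at all, in which case this check simply reports False.

```python
# Minimal sketch: does this host still accept TLS 1.0/1.1? (placeholder hostname)
import socket
import ssl

def accepts_legacy_tls(host: str, port: int = 443) -> bool:
    """Return True if the server completes a handshake capped at TLS 1.1."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE                # only probing protocol support
    context.minimum_version = ssl.TLSVersion.TLSv1     # allow the old versions...
    context.maximum_version = ssl.TLSVersion.TLSv1_1   # ...and nothing newer
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() in ("TLSv1", "TLSv1.1")
    except (ssl.SSLError, OSError):
        return False  # handshake refused, or host unreachable

if __name__ == "__main__":
    host = "legacy-app.example.internal"  # hypothetical internal host
    print(f"{host} accepts legacy TLS: {accepts_legacy_tls(host)}")
```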
To cover your bases, I would argue there are four key areas of evaluation, each with its own requirements and processes, that make up an effective review. To evaluate effectively, you need to know:
- What You Have
- What You Need
- What You Can Get
- How to Evaluate
Your mileage will vary on how often you need to review these, and there will be some cross-pollination between inputs and outputs, but these are good swim lanes to operate from. Let’s look at each in a little more detail.
What You Have
What do you have? No, really. If you re-read that question, does your mind immediately go to technology? If so, why? Why there, versus headcount, or expertise, or budget, or the processes in use? Surely all of these are interconnected. But when you read that sentence, your brain naturally went to one area above all others, so first and foremost it’s important to understand that what you have is a bias, or at least an intuition. You need to be cognizant of this, because it is influencing your evaluation. The basic areas you should be looking at here are:
- Inventory (ITAM). Is this in a flat file? A procurement system? How is it updated? Who is responsible for the source of truth?
- CMDB Reports. As I’ve written before, this is not the same thing as the above.
- Maintenance and Warranty Contract Reviews. These are not the same thing as budgets.
- Budgets. Increasing? Decreasing? Does your purchasing cycle match up with your maintenance cycle?
- Headcount and Expertise. Do you have enough people? Are they trained the right way? How do you know, and can you prove it?
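To make the inventory-versus-CMDB point concrete, here is a minimal sketch, with assumed file names and column headers, that cross-checks an ITAM export against a CMDB export and surfaces assets that appear in one source but not the other. Treat the export files and the serial_number key as placeholders for whatever your tooling actually produces.

```python
# Minimal sketch: compare an ITAM export with a CMDB export (assumed CSV columns).
import csv

def load_ids(path: str, key: str = "serial_number") -> set:
    """Read a CSV export and return the set of normalized asset identifiers."""
    with open(path, newline="") as f:
        return {row[key].strip().lower() for row in csv.DictReader(f) if row.get(key)}

itam = load_ids("itam_export.csv")   # hypothetical ITAM flat-file export
cmdb = load_ids("cmdb_export.csv")   # hypothetical CMDB report export

print("In ITAM but missing from the CMDB:", sorted(itam - cmdb))
print("In the CMDB but missing from ITAM:", sorted(cmdb - itam))
```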
Knowing what you have will quickly identify gaps and let you transition to the next stage, determining what you need.
What You Need
What you need is not what you can get (that comes next). This is the process of identifying the gaps (based on your inventory) and determining which ones need to be filled and which ones are acceptable risk. Questions you should ask here are:
- What are the gaps? Are they technical? Process-based? Budgetary? Do they span different teams? Departments?
- Is the root cause of these gaps internal or external? This is a question I often see conflated or not asked at all. Realizing you have a gap because you no longer have expertise in a dated technology is an internal gap; a product with a new vulnerability that is being rapidly exploited is an external one. Also note that internal versus external is a different question from whether either can be easily mitigated.
- Is this a “hard” need or a “soft” one? A “hard” need is something you must do, or else the cost of not doing it is extremely high. Soft needs are things that might provide more value or eliminate waste but aren’t “must-dos” (yet). A small sketch of how you might record these distinctions follows this list.
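As referenced above, here is a minimal sketch of how you might record gaps so the internal/external and hard/soft distinctions stay explicit rather than implied. The field names and example entries are purely illustrative.

```python
# Minimal sketch: record gaps with their classification (illustrative fields only).
from dataclasses import dataclass

@dataclass
class Gap:
    description: str
    root_cause: str       # "internal" (e.g., lost expertise) or "external" (e.g., new vulnerability)
    severity: str         # "hard" (must address) or "soft" (adds value, not mandatory yet)
    acceptable_risk: bool

gaps = [
    Gap("No one left who can maintain the AS/400 jobs", "internal", "hard", False),
    Gap("Reporting tool lacks scheduled delivery", "internal", "soft", True),
    Gap("Vendor appliance hit by an actively exploited vulnerability", "external", "hard", False),
]

# Hard needs that are not acceptable risks go to the top of the list.
for gap in (g for g in gaps if g.severity == "hard" and not g.acceptable_risk):
    print("Fill first:", gap.description)
```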
Now that you know what you have and what you need, you can move on to evaluating what you can get.
What You Can Get
What you can get is a matter of constraints. Every business will have different constraints, but the most common are the ones we’re already aware of: budget, timing, and resources. You should understand these, but I would also argue that you should spend at least some time actively researching new hardware and/or updates to your existing infrastructure. This research is itself a cost, but how else would you know that cloud databases can now do XYZ, or that the application you previously bought now has a vulnerability? What you can get should be not only a resource and opportunity review but also a conversation. Increase your skill at learning how to ask, and you’ll increase the probability that you’ll receive what you’re asking for.
How to Evaluate
Evaluating software versus processes versus other gaps requires different questions, and you need to understand what those differences are before you start. But once you choose to start, keep the following in mind:
- Understand the limits of RFPs. Two technologies may be able to do the same thing on paper, but how does the tool feel to your users, what level of effort is involved in getting to the same results, and what happens next? These questions can’t be answered by looking at a checkbox (a weighted-scoring sketch follows this list).
- POCs versus guided evaluations. A proof of concept is often where you take something for a test drive and kick the tires on your own. Guided evaluations usually involve more consulting but are a good way to discover more features and use cases. There’s a trade-off with each; it’s just important to understand which one you’re doing.
- Is there a more “modern” way of evaluating? I work in BI consulting. Understanding the differences between “antiquated” BI and “modern” BI is key to making a proper evaluation. This isn’t about which is better; it’s about understanding whether it makes sense to ask the same questions.
- Are you evaluating “everything else”? A piece of technology may check every box. But what does the company’s outlook look like? Does it have resources such as an active community, knowledge base, and support structure? How fast does it push updates, and are those mandatory or discretionary?
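As a companion to the RFP point above, here is a minimal sketch of a weighted evaluation that scores user experience, level of effort, and the “everything else” factors alongside raw feature coverage. The criteria, weights, and scores are entirely illustrative; the point is only that the tradeoffs become visible instead of hiding behind checkboxes.

```python
# Minimal sketch: weighted scoring beyond RFP checkboxes (all numbers illustrative).
WEIGHTS = {
    "feature_coverage": 0.30,
    "user_experience": 0.25,
    "effort_to_value": 0.20,
    "vendor_outlook_and_community": 0.15,
    "update_cadence_and_support": 0.10,
}

candidates = {
    "Tool A": {"feature_coverage": 9, "user_experience": 6, "effort_to_value": 5,
               "vendor_outlook_and_community": 7, "update_cadence_and_support": 8},
    "Tool B": {"feature_coverage": 7, "user_experience": 9, "effort_to_value": 8,
               "vendor_outlook_and_community": 8, "update_cadence_and_support": 7},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion's score multiplied by its weight."""
    return sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())

for name, scores in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```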
IT Is Not a Closed System
Sometimes the status quo works. Processes and standards wouldn’t have a chance to be effective if we were constantly changing them. But just because they’re working doesn’t mean they’re delivering the same value year over year, or that your competitors are keeping to the status quo, too. There is a cost to doing anything, including “nothing.” So take advantage of what you have, but also make sure you have the tools to identify when change is needed and the resources to implement it.
Adam Rauh has been working in IT since 2005. Currently in the business intelligence and analytics space at Tableau, he spent over a decade working in IT operations focusing on ITSM, leadership, and infrastructure support. He is passionate about data analytics, security, and process frameworks and methodologies. He has spoken at, contributed to, or authored articles for a number of conferences, seminars, and user-groups across the US on a variety of subjects related to IT, data analytics, and public policy. He currently lives in Georgia. Connect with Adam on LinkedIn.