I travel a lot with my work. One of the challenges of staying in a hotel for the first time is getting the shower to work. Certainly here in the UK, there is no universal shower fitting – sometimes there can be an assortment of taps, dials and plungers and getting the water to flow out of the shower hose can be a challenge at what I call “06:00 BC” (the “BC” stands for “before caffeine”).
Having got the water flowing, the next challenge is getting it to the right temperature. I’m sure we’ve all had experiences where it takes a few seconds for any adjustments to the temperature dial to affect the temperature of the water coming through the pipes. This often leads to a reinforcing and amplifying feedback loop: The water isn’t hot enough, so you turn the dial towards the “hot” side. Nothing seems to happen, so you turn the dial a little further… then a few seconds later the water is much too hot so you turn it back down. But then a few seconds later the water is ice-cold again, so the process repeats. This leads to an oscillation whereby you’re turning the shower controls backwards and forwards, never quite settling on the “perfect” temperature.
The problem here is that the “system” (in this case the shower and water system) is taking time to respond to the request to change temperature. It takes a few seconds for the hotter (or colder) water to come down the pipes. And the longer the “system” (shower) takes to respond, the more difficult it is to reach an equilibrium. Imagine if temperature adjustments took 3 minutes to come through—it would be virtually impossible to settle on a satisfactory temperature.
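The delayed-feedback loop described above can be sketched as a toy simulation. This is a minimal illustration, not a physical model of any real plumbing: the gain, delay, and temperature values are illustrative assumptions.

```python
# Toy model of the shower: the user nudges the dial in proportion to the
# error they currently feel, but the water they feel reflects the dial
# setting from `delay_steps` ticks ago. All numbers are illustrative.
from collections import deque

def simulate(delay_steps, steps=60, target=38.0, gain=0.2, start=20.0):
    """Return the felt water temperature (deg C) at each tick."""
    dial = start
    pipeline = deque([start] * delay_steps)  # water already in the pipes
    history = []
    for _ in range(steps):
        if delay_steps:
            felt = pipeline.popleft()   # what reaches the shower head now
            pipeline.append(dial)       # what arrives `delay_steps` ticks later
        else:
            felt = dial                 # instant response
        dial += gain * (target - felt)  # react to (possibly stale) feedback
        history.append(felt)
    return history

no_delay = simulate(delay_steps=0)    # settles smoothly, never overshoots
long_delay = simulate(delay_steps=6)  # overshoots the target, then hunts
```

With no delay, the felt temperature climbs steadily towards the target and stops there. With a six-tick delay, the very same adjustment rule blows past the target and then oscillates back and forth around it. Make the delay long enough, or the adjustments aggressive enough, and the loop never settles at all.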
Another way of looking at this scenario is to imagine that data about your “request” is taking time to filter back to you. You make the “request” to change temperature, but you won’t know whether this has happened for a few seconds. So, in the absence of any data (or, in the possession of data that’s now a few seconds out of date) you make a decision to tweak the dial… and end up (quite literally) getting burned.
Think of this in a business context. Imagine if the data you are using to make critical decisions is out of date. Not just a few seconds or minutes out of date, but days or weeks out of date. Maybe you’re having to assimilate data manually from different sources: Perhaps some is on paper, some is on a spreadsheet. Perhaps some data is only updated at quarter-end. You might take corrective action based on old data that worsens the problem rather than alleviating it. You might (metaphorically) slam the temperature dial over to “Hot”, when all it really needs is a subtle adjustment. The more out of date the data, the less likely you are to make an accurate and timely data-driven decision. Even worse, the cause-and-effect link gets broken: when an effect (positive or negative) appears, it may stem from decisions taken long before, not the one you’ve just made.
The key here is velocity of data. It’s important for businesses to define not just what data is relevant for them, but also how quickly it needs to flow through. It’s equally important to know how fresh the data is, and to define “how late is too late”. This kind of analytic capability was perhaps once reserved for large multinational organisations, but it is now completely relevant for mid-size and small firms too.
The moral of the story: Know your data, generate insight and don’t get burned!
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.