Back when automatic elevators first came on the scene and replaced human operators, people didn’t entirely trust them. Sure, the elevator industry knew these devices were safer and more efficient than human operators driving the box and pulling levers, but riders resisted.
To soothe the public, elevator companies installed something very simple that acted as a calming device: a big red “stop” button. There is, of course, very little need to ever push that button, but the appearance of human control puts humans at ease. The thought that we can take control away from automation at any time makes the automation itself more palatable.

Times are changing, though. Tesla recently introduced a marvel of engineering: a car with fully automated, self-adaptive driving capabilities that learn. Ten years ago few would have accepted this as even possible. Now, thousands of people are on the waiting list to buy this cutting-edge automobile. Machine learning will not only make the roads safer from distracted drivers, it will also let us be far more productive in the time currently wasted during our daily commute.
This same transition is happening today in business around the idea of data and machine learning.
Data is just numbers, text, sounds, images and other pieces of digital detritus. We live under the fallacy that data is information and that the more we have, the better. It’s simply untrue. Data contains information, but you have to look for it, or better yet, employ a system that can automatically learn what’s relevant and what’s not.
For data to be useful, we need to bring in its context. We also need to make sure the data:
- Truly represents the process under study;
- Is detailed enough to be useful in a predictive model; and
- Is deep enough to create trusted models.
A core issue, however, is that people want the data to provide insight that will predict the future. They want their own Oracle of Delphi, an oracle who tells them what will happen next. Still, it’s important to realize that predictions are just that: likelihoods or probabilities of the predicted state. Experience has shown us that there will always be a bit of unpredictability, yet people like to make decisions with black-and-white certainty because such decisions are easier to understand and their risks easier to assess. And yet, no matter where we look in our evolutionary history, our brains are programmed to deal with uncertainty.
Although it’s impractical to throw all the data into a model and ‘hope something sticks’, there is another approach. Let machine learning tools adaptively ‘pick and choose’ (optimize) which data features best represent the underlying process of interest. This can be done in an ongoing manner by using evolutionary methods to constantly test alternative hypotheses and actively learn what’s correlated and what’s noise.
The beauty of applying evolutionary optimization is that you can keep the process running in a loop and it will constantly adapt to changes in the environment. Things that are correlated today may not be correlated tomorrow. Data and data features may change, but the evolutionary mechanism within the machine learning bots simply adapts. In fact, this is the core definition of learning: continually adapting to dynamic changes in an environment to solve problems. Once you stop adapting, the learning stops, and you start to fall behind the curve. Yes, data alone won’t save us, but machine learning tools can help us find our way and provide a valuable assist in navigating the world.
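To make that pick-and-choose loop concrete, here is a deliberately tiny sketch of evolutionary feature selection: candidate feature subsets are bit masks, fitness rewards features that correlate with the target and penalizes subset size, and the population adapts over generations so noise features tend to get dropped. Every number, dataset, and parameter here is invented for illustration; this is a toy genetic algorithm, not any particular product’s method.

```python
import random

random.seed(0)

# Synthetic data: 5 features, but the target depends only on features 0 and 2.
N = 200
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(N)]
y = [row[0] * 2.0 - row[2] * 1.5 + random.gauss(0, 0.1) for row in X]

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((z - mb) ** 2 for z in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

# Precompute each feature's absolute correlation with the target.
cors = [abs(corr([row[i] for row in X], y)) for i in range(5)]

def fitness(mask):
    # Reward correlated features; charge a cost per feature so noise is dropped.
    return sum(c for c, bit in zip(cors, mask) if bit) - 0.2 * sum(mask)

def evolve(generations=30, pop_size=20, n_features=5):
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # the informative features (0 and 2) should be selected
```

Because the loop never has to terminate, the same mechanism can keep running as the data drifts, re-testing which features still carry signal.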
The challenge most people have, including luminary Clayton Christensen, is that they don’t trust machine learning to do the work for them, so they find places where humans can insert themselves to control the data. I’ve heard people say things like “data only tells us about the past… it cannot help us see into the future.” This is true, if you only have the data and only read it as data.
Christensen suggests a framework that brings additional information into the system by deploying humans to observe, theorize, test and construct. This framework is analogous to the basic adaptive process of any learning system (observe, model, predict, act on the prediction(s), and update).
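That observe/model/predict/act/update cycle can be sketched in a few lines. The ‘model’ below is a deliberately trivial stand-in of my own choosing (an exponentially weighted running mean, not anything Christensen proposes), tracking a signal that shifts halfway through; because the loop never stops updating, the estimate follows the shift.

```python
import random

random.seed(1)

class RunningMeanModel:
    """Toy 'model': an exponentially weighted running mean."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.estimate = 0.0

    def predict(self):
        return self.estimate

    def update(self, observation):
        # Blend new evidence into the current belief.
        self.estimate += self.alpha * (observation - self.estimate)

model = RunningMeanModel()
errors = []
for step in range(200):
    # The environment drifts: the 'true' value jumps halfway through.
    truth = 10.0 if step < 100 else 20.0
    observation = truth + random.gauss(0, 0.5)  # observe
    prediction = model.predict()                # predict
    errors.append(abs(prediction - truth))      # act on the prediction
    model.update(observation)                   # update

print(round(model.estimate, 1))  # ends near the new truth of 20
```

A model that stopped updating after step 100 would stay anchored near 10 and be wrong forever after; the continuous loop is what makes it learning rather than a one-time fit.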
In other words, he’s resisting the automation that’s already here, just as early elevator users resisted the buttons on the wall.
Since the world is a dynamic place, the adaptive process must be continuous too. To paraphrase Lewis Carroll’s Red Queen (in Through the Looking-Glass), you have to keep moving just to stay in the same place. In the world of marketing, where many of our customers’ focus lies, it is especially important to update models continuously to keep up with the rapid changes in consumer behavior. This is where true machine learning excels. The same principle from biology applies directly to marketing: adapt or die. True machine learning automates this process, and allows businesses to accomplish much more than by relying on human resources alone.
Financial institutions have been using predictive models for several decades, and the better ones actually rely on uncertainty to guide and hedge their portfolios. Modern flight control systems rely almost exclusively on models to predict and navigate through airspace – autopilot systems are ubiquitous because they have proven extremely trustworthy. The technology works – we just have to get used to that fact.
Now it’s time for businesses to start trusting machine learning systems. There just aren’t enough resources to do everything manually and maintain market share in a world that is changing faster each day.
The systems in place already have a big red “stop” button – you can always turn them off. But I’m not sure you’ll ever need to do that.