With so many perspectives on the impact of artificial intelligence (AI) flooding the business press, it’s becoming increasingly rare to find one that’s truly original. So when strategy professor Ajay Agrawal shared his brilliantly simple view on AI, we sat up and took notice. Agrawal, who teaches at the University of Toronto’s Rotman School of Management and works with AI start-ups at the Creative Destruction Lab (which he founded), posits that AI serves a single, but potentially transformative, economic purpose: it significantly lowers the cost of prediction.
In his new book, Prediction Machines: The Simple Economics of Artificial Intelligence, coauthored with professors Joshua Gans and Avi Goldfarb, Agrawal explains how business leaders can use this premise to figure out the most valuable ways to apply AI in their organization. The commentary here, which is adapted from a recent interview with McKinsey’s Rik Kirkland, summarizes Agrawal’s thesis. Consider it a CEO guide to parsing and prioritizing AI opportunities. […]
As the cost of prediction continues to drop, we’ll use more of it for traditional prediction problems such as inventory management because we can predict faster, cheaper, and better. At the same time, we’ll start using prediction to solve problems that we haven’t historically thought of as prediction problems.
For example, we never thought of autonomous driving as a prediction problem. Traditionally, engineers programmed an autonomous vehicle to move around in a controlled environment, such as a factory or warehouse, by telling it what to do in certain situations—if a human walks in front of the vehicle (then stop) or if a shelf is empty (then move to the next shelf). But we could never put those vehicles on a city street because there are too many ifs—if it’s dark, if it’s rainy, if a child runs into the street, if an oncoming vehicle has its blinker on. No matter how many lines of code we wrote, we could never cover all the potential ifs.
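The traditional approach Agrawal describes can be sketched as a hand-coded controller. This is a hypothetical illustration, not code from the book; the function name, situation keys, and actions are all made up for the example. The point is that every situation must be anticipated as an explicit if/then rule, which works in a warehouse but breaks down on a city street.

```python
# Hypothetical hand-coded controller for a warehouse vehicle:
# every situation the engineers can foresee becomes an explicit rule.
def warehouse_controller(situation):
    if situation.get("human_in_path"):
        return "stop"
    if situation.get("shelf_empty"):
        return "move_to_next_shelf"
    # On a city street the list of conditions never ends:
    # if it's dark, if it's rainy, if a child runs out,
    # if an oncoming vehicle has its blinker on ...
    return "continue"

print(warehouse_controller({"human_in_path": True}))   # the rule fires
print(warehouse_controller({"shelf_empty": True}))
print(warehouse_controller({}))                        # no rule matched
```

In a controlled environment the default branch is safe because the set of situations is closed; in an open environment it is exactly where the unhandled ifs pile up.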
Today we can reframe autonomous driving as a prediction problem. Then an AI simply needs to predict the answer to one question: What would a good human driver do? There is a limited set of actions we can take when driving (“thens”). We can turn right or left, brake or accelerate—that’s it. So, to teach an AI to drive, we put a human in a vehicle and tell the human to drive while the AI figuratively sits beside the human, watching. Since the AI doesn’t have eyes and ears like we do, we give it cameras, radar, and light detection and ranging (LIDAR). The AI takes the input data as it comes in through its “eyes,” looks over at the human, and tries to predict, “What will the human do next?”
The AI makes a lot of mistakes at first. But it learns from its mistakes and updates its model every time it incorrectly predicts an action the human will take. Its predictions start getting better and better until it becomes so good at predicting what a human would do that we don’t need the human to do it anymore. The AI can perform the action itself.
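The learning loop Agrawal describes—predict the human’s action, and update the model only when the prediction is wrong—can be sketched with a simple multiclass perceptron. Everything here is a toy assumption for illustration: the two “sensor” features, the `human_driver` stand-in policy, and the four actions are invented, and real self-driving systems use far richer models and data. The sketch only shows the shape of the idea: driving reduced to predicting one of a few “thens.”

```python
import random

# The limited set of "thens" from the text.
ACTIONS = ["left", "right", "brake", "accelerate"]

def human_driver(obs):
    # Hypothetical "good human driver" the AI is trying to predict.
    # obs = (distance to obstacle, offset from lane center).
    obstacle_dist, lane_offset = obs
    if obstacle_dist < 0.3:
        return "brake"
    if lane_offset > 0.5:
        return "left"
    if lane_offset < -0.5:
        return "right"
    return "accelerate"

class PerceptronDriver:
    """Multiclass perceptron: one linear score per action.
    It updates its weights only when its prediction of the
    human's action is wrong, as in the passage above."""

    def __init__(self, n_features):
        # One weight vector per action; index 0 is a bias term.
        self.w = {a: [0.0] * (n_features + 1) for a in ACTIONS}

    def _score(self, action, x):
        w = self.w[action]
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

    def predict(self, x):
        return max(ACTIONS, key=lambda a: self._score(a, x))

    def learn(self, x, true_action):
        guess = self.predict(x)
        if guess != true_action:  # learn only from mistakes
            for i, xi in enumerate([1.0] + list(x)):
                self.w[true_action][i] += xi
                self.w[guess][i] -= xi
        return guess

random.seed(0)
ai = PerceptronDriver(n_features=2)
mistakes_first_1000 = mistakes_last_1000 = 0
for step in range(10_000):
    obs = (random.random(), random.uniform(-1.0, 1.0))
    guess = ai.learn(obs, human_driver(obs))
    wrong = guess != human_driver(obs)
    if step < 1_000:
        mistakes_first_1000 += wrong
    if step >= 9_000:
        mistakes_last_1000 += wrong

print("mistakes, first 1,000 observations:", mistakes_first_1000)
print("mistakes, last 1,000 observations:", mistakes_last_1000)
```

Early on the model guesses badly; as it corrects itself on each wrong prediction, the mistake rate falls—the point at which it reliably predicts what the human would do is the point at which, in Agrawal’s framing, the human is no longer needed for the “then.”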