Most predictions suck. When airplanes were sold to the US Army in 1909, the common sentiment of the time was
“With the perfect development of the airplane, wars will be only an incident of past ages.”1 Most predictions look like this. To be fair to the people who made this prediction, at the time it seemed perfectly reasonable. It was just based on a set of hidden assumptions that changed as soon as you could mount a gun on a plane and build flak cannons. Terrible predictions do not need to be the norm.
Good predictions are non-obvious and create agency
A good prediction enables people to think thoughts they would not have thought before and ideally take action on them. To achieve this a prediction needs to be non-obvious and create agency.
A good prediction is non-obvious. Obvious predictions won’t cause any new thoughts or shift any actions. Nobody cares if I predict the sun will come up tomorrow, or even that it will come up on this day five hundred years from now. Non-obvious predictions will usually be controversial or even heretical.
Predictions create agency in several ways. They can inspire you to take action, either to make the prediction come true or to make sure that it does not happen. They can give you the mental tools to extend the prediction and riff on it, forming your own opinion of the future. While it’s not absolutely necessary, precise predictions can help on both fronts.
Most predictions fail on one or both of these conditions.
Most predictions aren’t falsifiable. “VR is the future of work!” without a timescale or a reason why is a bad prediction: even if the statement is true, you would take very different actions depending on whether everybody will be working in VR in five years or in fifty.
Most predictions present their vision as inevitable. Saying “this is what is going to happen” to you does not suggest any action or give you any agency. Most predictions just confirm priors and extend obvious trend lines, which by definition doesn’t generate any new thoughts.
Most predictions follow the pattern “what is the future of X?” or “how will X change the world?” (Insert your favorite hype word for X: IoT, AI, VR, Blockchain, etc.) This stance assumes that everything else in the world is held constant. It also ends up looking ridiculous. See everything from the future of computers as recipe books for 1960s housewives to Malthusian visions of doom. You could think of these bad predictions as single-variable, first-order Taylor expansions.
These first-order predictions are fragile. They don’t give you much insight into the ‘underlying function’ they’re based on, so it’s hard to play with the ideas to create your own mental models and opinions. If you don’t agree with one assumption, the entire prediction no longer holds. A report on ‘the future of AI’ could conceivably go through all the realms of life and enumerate how AI will affect each of them at some fidelity, leaving no room for someone else to say “what if…?” Self-consistent worlds enable you to draw off the edge of the map, but first-order predictions have no mechanism to enforce self-consistency. They only look at the effect of a single variable on different areas without looking at how those changes would in turn interact. “If AI is both going to make universities irrelevant and replace the repetitive tasks that interns would normally do, how are people going to signal to future employers?”
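The Taylor-expansion analogy can be made literal. A first-order extrapolation matches the underlying function near the point where it was made and diverges badly beyond it; here is a toy sketch, where the logistic ‘underlying function’ is made up purely for illustration:

```python
import math

def trend(t):
    # Hypothetical 'underlying function': growth that eventually saturates.
    return 100 / (1 + math.exp(-0.5 * (t - 10)))

# A first-order prediction: extend the tangent line at t = 0.
t0 = 0
h = 1e-6
slope = (trend(t0 + h) - trend(t0 - h)) / (2 * h)  # numerical derivative

def first_order(t):
    return trend(t0) + slope * (t - t0)

for t in [1, 5, 10, 20]:
    print(f"t={t:2d}  actual={trend(t):6.1f}  first-order={first_order(t):6.1f}")
```

Near t = 0 the straight-line forecast looks fine; by t = 20 it has missed the saturation entirely. The fragility comes from the hidden assumption that the local slope holds forever.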
Some predictions aren’t predictions
A perhaps more accurate (and more charitable) reason why most predictions suck may be that they were never meant to be good predictions in the first place. This explanation is more consistent with the principle that most people are not malicious or stupid, so explanations based on that assumption are probably missing something.
One reason to speculate4 is attention. Echoing or riffing on a common opinion about the future is a great way to yell “hello world!” Putting on a contrarian hat and speculating that the common opinion is wrong can have the same effect. Attention-seeking speculation is closely tied to signaling group affiliation. Different groups hew to different narratives that you can enrich with speculation. As an illustration, imagine if I made predictions about a future where the US government builds recovered alien technology into next-generation military hardware.
Another reason to speculate is to influence people or a process. Unlike in physics, if you can get enough people to believe that something is true, that thing can actually become true. Real predictions can encourage people to fear or anticipate a specific set of outcomes and take action to prevent or encourage them. If you care more about the actions than predictive accuracy, speculation can often play the same role. Speculating to influence the future isn’t necessarily conscious or nefarious. Great ideas often start off as speculations! “What if …” The core of most “visions” is pure speculation. The only reason many companies and technologies exist is because speculation convinced people to give them enough time and money to make the speculation into a retroactively accurate prediction. It’s possible that the only reason railroads and the internet were able to become utterly pervasive is that speculative predictions convinced investors and companies to try to make those speculations a reality. Of course, this is a gambit that can also fail.
Telling stories is also just plain fun and speculating is mostly telling stories about the future. So a lot of speculation is just a way of socializing and telling imagination-capturing stories to each other. What’s more interesting than a possibly true story about our own futures?
The Hanson/Simler-ian analysis would also suggest that many people who are making predictions for non-predictive reasons may think that they are genuinely trying to make good predictions. It’s also important to note that even when people are making genuine attempts to create good predictions, they are probably also shooting for at least a couple of these other outcomes. When I predict something I certainly don’t mind getting attention for it, influencing how other people act, and having fun in the process.
How can we make better predictions?
There are many different approaches to creating better predictions. The Superforecasting approach made famous by Philip Tetlock’s book of the same name progressively narrowed a pool of people who had no special knowledge but great abilities to do Bayesian inference on synthesized news. The superforecasting research was sponsored by IARPA (the US intelligence community’s riff on DARPA), which is constantly trying different approaches to more accurately predict the future. IARPA is worth paying attention to because they are incentivized to get predictions right, unlike pundits and thought leaders who are incentivized to make predictions that are either interesting or bias-confirming. Prediction markets are another way of rewarding people for accurate predictions and thus hopefully getting more accurate predictions overall. Prediction markets and ‘wisdom of crowds’ methods in general work well in situations where the object of prediction is affected by many distributed factors that allow the aggregation effect of markets to function.5 Betting markets are great for teasing out an honest consensus but not great at anticipating paradigm shifts. Peter Turchin’s Cliodynamics is attempting to be a primordial version of Foundation’s Psychohistory - teasing out the ‘physics’ of human activity. It’s not clear how predictive Turchin’s theories are, but they’re built on top of a causal framework that you can engage with beyond “nuh uh” “yuh huh.” You’ll note that none of these approaches are widely used or have stunning successes. Prediction is hard. But the only way that predictions will improve is if they can actually be wrong.
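Tetlock’s forecasters were scored with the Brier score, the mean squared error between probability forecasts and what actually happened, which is exactly the kind of mechanism that lets a prediction “actually be wrong.” A minimal sketch, with made-up forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities (0..1) and
    realized outcomes (0 or 1). Lower is better; 0 is a perfect record."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same five yes/no questions.
outcomes = [1, 0, 1, 1, 0]
hedger = [0.5, 0.5, 0.5, 0.5, 0.5]      # "it could go either way"
forecaster = [0.9, 0.2, 0.8, 0.7, 0.1]  # confident and mostly right

print(brier_score(hedger, outcomes))
print(brier_score(forecaster, outcomes))
```

A pundit who always hedges at 50% scores 0.25 no matter what happens; the calibrated forecaster scores much lower. The point of the metric is that vague, unfalsifiable claims can’t even be scored.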
All the prediction approaches above attempt to enable agency by maximizing accuracy. But what if predictions attempted to maximize agency by highlighting possible non-obvious actions? These predictions usually go by another name: science fiction. Scientists, engineers, and economists have the best tools to write good science fiction that is consistent with the real world. Some of the best, most predictive science fiction was written by people with technical PhDs. Science fiction written by experts could be seriously used to explore complex predictions in nuanced ways. Why don’t we have financial science fiction exploring funding mechanisms and economic policy implications? You could go so far as to analyze science fiction as case studies from a future world. Good science fiction creates a self-consistent world that can enable you to draw off the edge of the map. If the sci-fi world is consistent with the real world, you could conceivably fill in the space between the two and end up with a roadmap for building the future.
Let’s hold our predictions to higher standards.
Thanks to Luke Constable and Martin Permin for advice and feedback on this piece.
I don’t think it’s possible to know what’s going on inside someone else’s head so at the end of the day this is speculation based on internal motivations I’ve had in the past. Perhaps they will ring true for you too. ↩
Michael Crichton touches on some reasons briefly in the amazing Why Speculate, but I think there’s more to dig into, especially through the lens of “Hanson/Simler-ian analysis” and the proliferation of social media. ↩
It’s pushing the word a bit, but “speculation” (predictions without firm evidence) seemed like the best term for predictions made for reasons other than accuracy. Other contenders were pseudo-prediction and unprediction. The choice between making up new words and appropriating old ones puts you between a rock and a hard place. ↩