Marginal stupidity
One of the most important cognitive tradeoffs we make is how to process information, and perhaps more specifically, the deluge of information that bombards us every day. A study out of UCSD in 2009 estimated that Americans read or hear more than 100,000 words a day — an increase of nearly 350% over the prior three decades (and that was before Slack and Substack!). It would seem logical that more information is always better for decision-making, both for individuals and for societies. Yet that’s precisely the tradeoff: humans and civilizations must balance greater and better information against the limits of their rationality.
So what’s the value of learning more information, and can there even be a downside to knowing more?
Such was the emerging theme of two classic books I serendipitously smashed together this week. The first was psychologist Philip E. Tetlock’s original 2005 magnum opus, Expert Political Judgment, a book that preceded his hyper-popular bestseller Superforecasting. The other was The Collapse of Complex Societies, a monograph published in 1988 and written by anthropologist Joseph A. Tainter. These two books, drawing from divergent fields and investigating different problems, ultimately converge around a central thesis: there is a marginal return to information, and it often falls to zero — or even turns negative — far earlier than we might expect.
As with all economic analysis, the marginal return of information is the relationship between a “unit” increase in information and the value created by that new knowledge. If I have never studied biology and I read one textbook, the marginal value of information is high: I am learning a lot of new information and immensely expanding my knowledge of the subject. However, reading 20 biology textbooks doesn’t mean I am 20 times smarter on the subject. There are diminishing returns — the marginal value of each additional textbook I read will get smaller and smaller.
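To make that curve concrete, here’s a quick sketch in Python. It assumes, purely for illustration, that the total value of knowledge grows logarithmically with the number of textbooks read; the specific function is my own toy model, not anything drawn from Tetlock or Tainter.

```python
import math

def knowledge_value(textbooks_read: int) -> float:
    """Toy model: total value of knowledge grows logarithmically with
    the number of textbooks read (an illustrative assumption, not a
    claim from either book)."""
    return math.log(1 + textbooks_read)

# Marginal value: the extra value added by the nth textbook.
for n in range(1, 21):
    marginal = knowledge_value(n) - knowledge_value(n - 1)
    print(f"Textbook {n:2d}: marginal value = {marginal:.3f}")

# Under this assumption, textbook 1 adds ~0.693 "units" of value,
# while textbook 20 adds only ~0.049. Reading 20 textbooks does not
# make you 20 times smarter.
```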
Tetlock’s central thesis, based on extensive laboratory data, is that specialization, education, and depth of knowledge do not lead to better political forecasting. In fact, a generalist who regularly reads a quality set of news sources is often much more likely to correctly predict the future than a specialist. The key predictor, rather, is not knowledge but cognitive style. The best forecasters are “foxes,” from Isaiah Berlin’s metaphor of the fox who knows many things and the hedgehog who knows one thing well. As Tetlock writes:
Several foxes commented that good judges cultivate a capacity “to go with the flow” by improvising dissonant combinations of ideas that capture the “dynamic tensions” propelling political processes. For these self-conscious balancers, the status quo is often in precarious equilibrium and “seeing the truth” is a fleeting achievement for even the most Socratic souls.
The best forecasters are able to draw on their extensive ken and selectively process and synthesize the information they have at hand.
Can flexibility of thinking or “improvising dissonant combinations of ideas” (a phrase I really love) ever have a negative marginal value? Well, actually, yes. In a series of experiments, Tetlock asks participants to weigh probabilities of outcomes across a range of scenarios that get increasingly complicated. What he finds is that “…foxes become more susceptible than hedgehogs to a serious bias: the tendency to assign so much likelihood to so many possibilities that they become entangled in self-contradictions.”
Since foxes use self-doubt to correct their predictions, feeding them more information on other perspectives or contradictory evidence can actually turn a relatively statistically accurate prediction into a much weaker forecast. The marginal return on information doesn’t just tend toward zero — it can actually become negative in certain contexts.
Tetlock approaches this marginal value of information from the perspective of a psychologist, and the core of his book covers his research on various human biases, as well as on the subgroups of hedgehogs and foxes and the conditions under which each performs particularly well or slumps. Tainter, on the other hand, is interested in societies and how agglomerations of people can suddenly lose their productive sophistication. Or, as he puts it, “A society has collapsed when it displays a rapid, significant loss of an established level of sociopolitical complexity.”
The word “complexity” typically has a negative valence in our modern commentary, but complexity is generally good. As societies move from hunter-gatherer models of economic production to more specialization, surpluses of goods like food grow, and that allows for a greater number of roles across society as well as the development of culture.
Tainter’s goal is to find a theory that encompasses dozens of different societal collapses in history, while avoiding many of the logical contradictions in other hypotheses. For instance, if additional complexity adds more productivity to an economy, then shouldn’t it precisely be the most advanced and complex economies that are able to weather exogenous shocks and avoid collapse? What he finds and develops into a thesis is that there are limits to how much benefit complexity offers. In his words:
It is the thesis … that return on investment in complexity varies, and that this variation follows a characteristic curve. More specifically, it is proposed that, in many crucial spheres, continued investment in sociopolitical complexity reaches a point where the benefits for such investment begin to decline, at first gradually, then with accelerated force.
Societies can continue to grow in complexity, but then they start to stretch themselves thin.
Yet a society experiencing declining marginal returns is investing ever more heavily in a strategy that is yielding proportionately less. Excess productive capacity will at some point be used up, and accumulated surpluses allocated to current operating needs. There is, then, little or no surplus with which to counter major adversities.
That major adversity can be a climate catastrophe or a barbarian invasion, but when it comes, there is no slack in the system or any unused fount of knowledge to draw upon. Upkeep of the vast complexity that manages society consumes immense resources, and that hinders people from adapting to their new and more challenging context. Without immediate economic growth or technological innovation to compensate, the system falls apart.
Tainter’s theory is that there are declining marginal returns to investment in complexity, but he also implies that such complexity can turn negative. There is a point at which complexity begets further complexity with no increase in productivity, essentially levying a collective “tax” on everyone to maintain an ever more complicated bureaucracy for no value.
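A rough way to see the shape Tainter describes is a toy cost-benefit model (my own illustrative assumptions, not his data): let the benefits of complexity grow with diminishing returns while the costs of maintaining that complexity grow linearly, and the net return rises, peaks, and eventually turns negative.

```python
import math

def benefit(complexity: float) -> float:
    """Toy assumption: benefits of complexity grow, but with diminishing returns."""
    return 10 * math.sqrt(complexity)

def cost(complexity: float) -> float:
    """Toy assumption: the cost of maintaining complexity grows linearly."""
    return 2 * complexity

# Net return rises, peaks, then turns negative: the shape Tainter
# describes, though these numbers are invented for illustration.
for c in range(0, 41, 5):
    net = benefit(c) - cost(c)
    print(f"complexity={c:2d}  benefit={benefit(c):5.1f}  cost={cost(c):5.1f}  net={net:6.1f}")
```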
Thus, Tetlock and Tainter converge from disparate paths to what might be dubbed a “theory of marginal stupidity.” While education is extremely valuable, and more information and complexity are generally good for decision-making and societal productivity, there is a turning point where further information or complexity can befuddle us and simply raise costs without any concomitant value. Yes, the world is always changing, and much like the Red Queen in Through the Looking-Glass, we always need to be learning just to avoid getting dumber (or, as our scientist-in-residence Sam Arbesman titled his book, there is The Half-Life of Facts). But critically, there’s a limit to how much knowledge consumption is beneficial.
That theory has huge implications for fields like science, where nearly all indicators of productivity have been scorchingly negative for decades. Even Tainter, writing in 1988, notes in an extensive section that “… in any field, as each research question among the stock waiting to be answered is resolved, the costliness of deciphering the remainder increases.”
How do we square the declining marginal return of knowledge with the fact that it seems we need more knowledge than ever to make any impact in many fields? I’ve written before for TechCrunch about the “dual PhD” problem: more and more deep tech startups essentially require founders with two PhDs in intersecting domains like machine learning and biology. Other analysts have noted the same trend, such as Benjamin F. Jones, who wrote a quality economics analysis on the subject titled “The Burden of Knowledge and the ‘Death of the Renaissance Man’.”
The answer is to recognize that additional information is often not helpful, and that instead we should explore frontiers that have never been investigated. Tainter notes the widely observed fact that “It is no coincidence that the most famous practitioners historically in each field tend to be persons who were instrumental in developing the field, and in establishing its basic outline.” The most value comes from mapping uncharted territory, and while more of our world has certainly been mapped, that doesn’t mean there are no areas left to explore. The tradeoff is plunging into the unknown, but potentially finding something new. It certainly beats reading biology textbook number 21.
A tale of two fundings
From our Lux portfolio, two important new rounds were announced:
- Anagenex, a platform that uses a combination of machine learning and biotech to screen for potential small molecule medicines, raised a $30 million Series A led by biotech veterans Catalio. Lux was the founding seed investor in 2020.
- Fruitful, a forthcoming app that offers financial wellness solutions for a fixed subscription fee, raised a total of $33 million in pre-launch capital, including investment from Lux, plus $10 million from debt and proceeds from the sale of a web domain.
Lux Recommends
- Remember when PC games and software came in packaged boxes? Many were boring squares, but Hock Wah Yeo designed a number of imaginative packages for games like Brøderbund’s Prince of Persia and Electronic Arts’s Ultrabots. Phil Salvador compiles these unique software boxes and narrates their history in a fun look at a completely different time in PC gaming.
- Peter Hébert shares this powerful clip from Peter Fortenbaugh, Co-CEO and Chief Community Builder of Boys & Girls Clubs of the Peninsula, as he wrestles with advanced cancer and the meaning of a life of service.
- For those who have read Emily St. John Mandel’s Station Eleven (I have, and loved it), well, her latest novel is out and our scientist-in-residence Sam Arbesman is a huge fan. Sea of Tranquility is a speculative fiction work about time travel, metaphysics, and the connections between all of us and across time.
- Shaq Vayda recommends a tweet thread from Elizabeth Clark Polner Hudson, the lead of the engineering team at Omega Funds, analyzing software business models in biotech.
- Finally, “Securities” reader Andrew Thompson has been publishing a beautiful site, in-depth analyses, and a bounty of comprehensive datasets on subjects ranging from Product Hunt and consumer tech review media to Bandcamp. Lots of fun detail, and more data than anyone should ever have time to play with.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.