Antifragile is the fourth book in Nassim Nicholas Taleb’s Incerto series dealing with uncertainty, randomness, and unpredictability.
To me this is the best book in the series (so far), since it moves beyond merely understanding randomness and unpredictable events to showing how they can actually be beneficial. Meaning: systems, people, or ideas that are not harmed by unpredictability, and don't simply resist it, but actually benefit from it.
The book is full of ideas and has some brilliant observations. But whether or not you will enjoy it depends a bit on how you perceive Taleb’s writing style. He’s blunt, provocative, and often goes on tangents and rants. But if you’ve read and enjoyed his previous works, then you will surely get a lot out of Antifragile as well.
For more details and reviews go to Amazon.
Book Summary & Notes
All text between quotation marks is taken directly from the book.
What is antifragility?
“Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile.”
Antifragility is not the same as robustness or resilience – those two adjectives mean resisting shocks, but the antifragile gets stronger or better with shocks. Antifragile also means loving randomness and uncertainty, and benefiting from errors. These categories are not binary; fragility and antifragility form a spectrum.
A simple test to detect (anti)fragility is to look at asymmetry. If something has more downside than upside from random events or shocks, it is fragile; if it has more upside than downside, it is antifragile to those events or shocks.
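This asymmetry test is essentially Jensen's inequality, and a tiny simulation makes it concrete (an illustrative sketch, not from the book – the payoff functions and shock distribution are assumptions):

```python
import random

def asymmetry(payoff, shocks):
    """Average payoff under random shocks minus payoff at the average shock.
    A positive gap means convexity (antifragile); negative means fragility."""
    mean_shock = sum(shocks) / len(shocks)
    avg_payoff = sum(payoff(s) for s in shocks) / len(shocks)
    return avg_payoff - payoff(mean_shock)

def convex(x):      # curves outward: benefits from dispersion
    return x ** 2

def concave(x):     # curves inward: harmed by dispersion
    return -(x ** 2)

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(100_000)]

print(asymmetry(convex, shocks) > 0)   # True: volatility helps
print(asymmetry(concave, shocks) < 0)  # True: volatility hurts
```

The same symmetric shocks help the convex payoff and hurt the concave one – volatility itself, not its direction, is what matters.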
Depriving systems, or anything really, of random events and volatility equals weakening these systems and fragilizing them. Taleb calls this the tragedy of modernity: “as with neurotically overprotective parents, those trying to help are often hurting the most.”
“Thou shalt not have antifragility at the expense of the fragility of others.” In the past rulers and heroes took risks and had potential downsides for their actions. Nowadays there is a rise in the number of “inverse heroes” – people with power who don’t have any real downside or accountability. (E.g. bureaucrats, academics, bankers, et cetera.)
Fragility and antifragility are harmed by, or benefit from, something related to volatility. What does that include exactly? “(i) uncertainty, (ii) variability, (iii) imperfect, incomplete knowledge, (iv) chance, (v) chaos, (vi) volatility, (vii) disorder, (viii) entropy, (ix) time, (x) the unknown, (xi) randomness, (xii) turmoil, (xiii) stressor, (xiv) error, (xv) dispersion of outcomes, (xvi) unknowledge.”
“It is said that the best horses lose when they compete with slower ones, and win against better rivals. Undercompensation from the absence of a stressor, inverse hormesis, absence of challenge, degrades the best of the best.” Overcompensation can be found in many places; if you need something urgently done, the best advice is often to give it to the busiest person in the office.
“Most humans manage to squander their free time, as free time makes them dysfunctional, lazy, and unmotivated – the busier they get, the more active they are at other tasks.”
Antifragility is also about redundancy: Mother Nature likes to have additional insurance, and adding layers of redundancy is a form of risk management. It’s why we have two kidneys, and spare capacity in many organs. In contrast, most human endeavors are designed without redundancy in mind or even have inverse redundancy (e.g. debt).
Criticism, negative publicity, and smear campaigns can be hidden sources of antifragility for ideas and people. The attack or badmouthing puts you on the map – a hidden upside to a negative event.
Most professions are at risk of reputational harm and are thus extremely fragile. If you work for a corporation and some negative news surfaces, there is a large chance you will be fired, and it will be difficult to find a new job. Writers, artists, and other independent professionals are more antifragile – and jobs that are not based on reputation, such as taxi drivers or construction workers, are also safer.
“Take this easy-to-use heuristic to detect the independence and robustness of someone’s reputation. With few exceptions, those who dress outrageously are robust or even antifragile in reputation; those clean-shaven types who dress in suits and ties are fragile to information about them.”
Chronic vs. acute stressors
Stressors help with antifragility, but frequency is important. Acute stressors followed by a period of recovery are beneficial; chronic stressors that are low-level and mild over a long period are harmful (examples: bosses, mortgages, tax problems, never-ending to-do lists) – these “make you feel trapped in life.”
“An environment with variability (hence randomness) does not expose us to chronic stress injury, unlike human-designed systems. If you walk on uneven, not man-made terrain, no two steps will ever be identical – compare that to the randomness-free gym machine offering the exact opposite: forcing you into endless repetitions of the very same movement. Much of modern life is preventable chronic stress injury.”
What kills me makes others stronger
When a random event happens it is too late to adjust – an organism either withstands it or dies. This is why immortality would require perfect prediction of the future: every potential random event would need to be countered or resisted.
Hence nature “likes diversity between organisms rather than diversity within an immortal organism”. The genetic code survives but the individual organism dies.
More generally speaking, systems “subjected to randomness – and unpredictability – build a mechanism beyond the robust to opportunistically reinvent themselves each generation, with a continuous change of population and species.” Complex systems like empires and financial markets also follow this (as long as nobody intervenes).
Taleb’s definition of a loser is someone who, after making a mistake, doesn’t learn from it or try to exploit it, and instead tries to explain the mistake away.
“He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors – though never the same error more than once – is more reliable than someone who has never made any.”
The antifragility of a system requires that individual components are fragile. Take the economy: for it to be antifragile, all individual businesses need to be fragile – meaning that when something goes wrong, they die and are replaced by others. Aggregate antifragility thus requires individual sacrifice.
Randomness and the turkey problem
“That is the central illusion in life: that randomness is risky, that it is a bad thing – and that eliminating randomness is done by eliminating randomness.”
There are professions that have variability in income (e.g. taxi drivers, artisans, prostitutes) but are robust over a longer period, and then there are professions that seem robust – without any volatility – but are exposed to events that bring their income right back down to zero (most office workers).
Turkey problem: “A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.” The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey. So with the butcher surprising it, the turkey will have a revision of belief – right when its confidence in the statement that the butcher loves turkeys is maximal and “it is very quiet” and soothingly predictable in the life of the turkey.”
A derivative of the turkey problem is mistaking absence of evidence for evidence of absence.
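The turkey's mounting “statistical confidence” can be sketched with Laplace's rule of succession (an illustrative choice of estimator, not Taleb's own formula):

```python
def rule_of_succession(fed_days, total_days):
    """Laplace estimate of P(fed again tomorrow) after an unbroken feeding streak."""
    return (fed_days + 1) / (total_days + 2)

for d in (10, 100, 1000):
    print(d, round(rule_of_succession(d, d), 4))
# 10 0.9167
# 100 0.9902
# 1000 0.999
```

Confidence is at its maximum on day 1,000 – right before Thanksgiving, exactly when the model is most wrong. The past data contained no evidence of the butcher's intentions, only evidence of feeding.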
The benefits of randomness and volatility
A bit of volatility and randomness is beneficial. Take financial markets for example: if there is no volatility the slightest change will make people panic, but having some confusion and volatility helps to stabilize the market overall. Volatility can also act as a purge, as in the example of forest fires. Preventing small fires means that more flammable material will accumulate on the forest floor which makes the occurrence of a massive forest fire more likely over time.
Buridan’s Donkey: “A donkey equally famished and thirsty caught at an equal distance between food and water would unavoidably die of hunger or thirst. But he can be saved thanks to a random nudge one way or the other.”
That is, adding randomness to a system has benefits and antifragile systems need it.
“My definition of modernity is humans’ large-scale domination of the environment, the systematic smoothing of the world’s jaggedness, and the stifling of volatility and stressors.”
Taleb also states that modernity is a Procrustean bed – humans are made to fit whatever appears to be efficient and effective. Some aspects might work, but many don’t.
Iatrogenics = damage or harm caused by the healer. Taleb gives the example of medicine: until the advent of modern medicine, seeing a doctor often increased your chances of death. A prime example of this is bloodletting.
The benefit of procrastination: it can potentially protect you from doing the wrong thing and gives time to allow the problem to go away by itself. Plus procrastination only happens when things are not urgent and when there’s no immediate danger. Procrastination “is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses.”
Iatrogenics is also visible in business and economics. We now have more data than ever, but the more we focus on the data, the more toxic it can get. Meaning: the more often you look at the data, the more noise you probably see – the signal-to-noise ratio gets distorted. If you only look at the data every quarter, or every year, there will be a higher ratio of signal – but looking at daily, or hourly, fluctuations means looking at noise.
“The best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the Internet. It has been very hard for me to explain that the more data you get, the less you know what’s going on, and the more iatrogenics you will cause.”
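The effect of observation frequency on the signal-to-noise ratio can be sketched with a toy simulation (all parameters are hypothetical): a series with a small positive drift looks mostly like noise day-to-day, but the signal dominates at coarser sampling.

```python
import random

random.seed(1)
drift, vol = 0.05, 1.0           # assumed daily drift and volatility
days = 250 * 40                  # roughly 40 years of trading days
steps = [drift + random.gauss(0, vol) for _ in range(days)]

def positive_fraction(steps, window):
    """Fraction of observation windows that show a net gain."""
    chunks = [sum(steps[i:i + window]) for i in range(0, len(steps), window)]
    return sum(c > 0 for c in chunks) / len(chunks)

for window, label in [(1, "daily"), (63, "quarterly"), (250, "yearly")]:
    print(f"{label}: {positive_fraction(steps, window):.0%} of windows positive")
```

At daily resolution barely more than half the observations are up – nearly pure noise – while yearly observations mostly reflect the underlying drift.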
Stoicism & Seneca’s antifragility
“Stoicism […] becomes pure robustness – for the attainment of a state of immunity from one’s external circumstances, good or bad, and an absence of fragility to decisions made by fate, is robustness. Random events won’t affect us either way.”
But Seneca’s stoicism was not just about robustness – he also stood to gain from his wealth and investments. There was an asymmetry: downsides didn’t hurt him, but he had huge potential upside. This is why he said that wealth is the slave of the wise man, but a master of the fool.
“My idea of the modern Stoic sage is someone who transforms fear into prudence, pain into information, mistakes into initiation, and desire into undertaking.”
The barbell strategy
Barbell (or bimodal) strategy: the idea of combining the extremes but avoiding the middle. I.e. playing it safe on the one hand, and taking risks on the other. In finance this could be keeping 90% of your portfolio in (inflation-protected) cash, while putting 10% in very risky assets. This gives asymmetry: low downside, but potentially high upside.
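The arithmetic of the 90/10 barbell can be sketched in a few lines (a simplified illustration: it ignores inflation and assumes the safe portion never loses value):

```python
def barbell(total, safe_frac, risky_return):
    """Barbell portfolio value: the safe part caps the downside,
    the risky part keeps the upside open-ended."""
    safe = total * safe_frac
    risky = total * (1 - safe_frac)
    return safe + risky * (1 + risky_return)

print(barbell(100.0, 0.90, -1.0))  # risky part wiped out entirely -> 90.0
print(barbell(100.0, 0.90, 9.0))   # risky part returns 10x -> 190.0
```

The worst case is a known, bounded 10% loss, while the best case is unbounded – exactly the kind of asymmetry that defines antifragility.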
This strategy also applies to jobs and careers; there are many examples of people having a simple, secure day-job and, for example, writing in their spare time. Or you can do this sequentially: build a career in one area, leave to do something risky, and always have the option to come back to safety.
“Just as Stoicism is the domestication, not the elimination, of emotions, so is the barbell a domestication, not the elimination, of uncertainty.”
Taleb also warns against the teleological fallacy: “[T]he illusion that you know exactly where you are going, and that you knew exactly where you were going in the past, and that others have succeeded in the past by knowing where they were going.”
Options and optionality
Options are nonlinear and asymmetrical. If you are right, you stand to gain a huge upside. But if you’re wrong, you lose only a small amount.
So the average return of options doesn’t matter – only the favorable ones do. And, in fact, in many domains the downside is quite limited, e.g. book sales – you cannot sell a negative number of books.
Taleb’s description of an option: “Option = asymmetry + rationality”. Rational meaning that one keeps what’s good, and ditches what’s bad.
In finance and business, options are often expensive, because they are clearly visible and listed in contracts. The same goes for insurance. But due to the domain dependency of our minds we don’t see optionality in other places – places where options are unpriced or underpriced.
Randomness and the future
“This tells us something about the way we map the future. We humans lack imagination, to the point of not even knowing what tomorrow’s important things look like. We use randomness to spoon-feed us with discoveries – which is why antifragility is necessary.” Taleb gives the example of the wheel among the Mesoamericans – they had wheels, but only on children’s toys; the wheel was never applied to useful work. Another example is the steam engine: the Greeks had a working version of it (the aeolipyle), but it was used purely for amusement, never for practical application.
“Not the same thing” and the green lumber fallacy
Does education lead to wealth on a country level? Most people would assume yes, but that is mistaking the “associative for the causal”, and it seems that wealth leads to education, not the other way around. This is an epiphenomenon: a secondary effect (rising education standards) alongside a primary phenomenon (increase in wealth), but without causation.
Green lumber fallacy: story of a very successful trader of “green lumber” who thought the lumber was actually painted green, rather than freshly cut. But he made a fortune trading it, and didn’t risk narrating and theorizing about the product itself. Not all knowledge is necessary.
“Not the same thing” happens in a lot of cases in life. “There is something (here, perception, ideas, theories) and a function of something (here, a price or reality, or something real). The conflation problem is to mistake one for the other, forgetting that there is a “function” and that such function has different properties. Now, the more asymmetries there are between the something and the function of something, then the more difference there is between the two. They may end up having nothing to do with each other.” I.e. theory and reality (practice).
By using tinkering and trial-and-error there is no dependence on the narrative being true. Take heuristics in traditions or religion, or false narratives parents tell their children in order to avoid trouble for them. Taleb states that this is why wisdom you hear from your grandmother is superior to that in the classroom – it has survived (or, rather, the person who holds the idea has survived).
“Expert problems (in which the expert knows a lot but less than he thinks he does) often bring fragilities, and acceptance of ignorance the reverse. Expert problems put you on the wrong side of asymmetry. […] When you are fragile you need to know a lot more than when you are antifragile. Conversely, when you think you know more than you do, you are fragile (to error).”
History is written by the losers
“Practitioners don’t write; they do. Birds fly and those who lecture them are the ones who write their story. So it is easy to see that history is truly written by losers with time on their hands and a protected academic position.”
Taleb states that the industrial revolution, and specifically the technical knowledge and innovation in it, was not led by academia but by hobbyists and “the English rector”. Both were in barbell positions. Famous examples include Thomas Bayes and Thomas Malthus, but there are many more examples of amateurs and clergymen who made major contributions.
While corporations like to make strategic plans, there is no evidence they actually work. In fact, most if not all theoretical approaches in management have been debunked during empirical testing as pseudoscience.
Evidence of absence vs. absence of evidence
“To repeat, evidence of absence is not absence of evidence, a simple point that has the following implications: for the antifragile, good news tends to be absent from past data, and for the fragile it is the bad news that doesn’t show easily.”
This means that in fragile cases (negative asymmetries), negative impacts will be underestimated based on the average – defects are hidden and qualities are displayed. In contrast, in antifragile cases (positive asymmetries), the positive impact will be underestimated based on the average – qualities are hidden, while defects are displayed.
Rules from this chapter:
- “(i) Look for optionality; in fact, rank things according to optionality”
- “(ii) preferably with open-ended, not closed-ended, payoffs;”
- “(iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more; one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so;”
- “(iv) Make sure you are barbelled, whatever that means in your business.”
Is knowledge good?
“This argument is precisely what Nietzsche vituperated against: knowledge is the panacea; error is evil; hence science is an optimistic enterprise. The mandate of scientific optimism irritated Nietzsche: this use of reasoning and knowledge at the service of utopia. Forget the optimism/pessimism business that is addressed when people discuss Nietzsche, as the so-called Nietzschean pessimism distracts from the point: it is the very goodness of knowledge that he questioned.”
Decisions in real life are made based on fragility, not probability. Taleb gives the example of confidence levels with flights: if you were told a flight was 95% safe, you would not get on the plane – even a seemingly high confidence level is unacceptable when the consequences are fatal. The payoff, and the fragility, matter more in the end than the probability alone.
A large stone vs. many small pebbles
Fragile things are harmed by nonlinear effects. Being hit by one massive stone is not the same as being hit by a thousand pebbles; jumping down from a height of 10 meters is different from jumping from a height of 1 meter 10 times.
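The stone-versus-pebbles point is just convexity of harm; with an assumed quadratic harm function (purely illustrative) the arithmetic is stark:

```python
def harm(mass):
    """Hypothetical convex harm: damage grows with the square of the impact."""
    return mass ** 2

one_big_stone = harm(1000)         # one 1,000-unit stone
thousand_pebbles = 1000 * harm(1)  # a thousand 1-unit pebbles
print(one_big_stone // thousand_pebbles)  # -> 1000: the stone does 1000x the damage
```

Splitting the same total mass into many small shocks reduces the total harm by a factor of a thousand – the defining signature of fragility to large deviations.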
Convex vs. concave: the former curves outward (looks like a smile), the latter inward (looks like a sad face). Convexity is antifragile, concavity is fragile.
Convexity effects (positive or negative) are everywhere: from traffic, where each additional car increases travel time more than the previous one, to central banks printing money – nothing happens at first, but at some point there is a jump in inflation.
Size leads to additional vulnerability, and squeezes become more costly. Taleb gives the example of owning a pet elephant vs. a dog – if there’s a water shortage you will be squeezed and spend a lot of money on the elephant, but the effect is limited for a dog. This is why economies of scale lead to negatives in difficult moments. You could also say that the gains from increasing size (e.g. mergers) are visible, but the risks are hidden (i.e. it brings fragilities).
“Black Swan effects are necessarily increasing, as a result of complexity, interdependence between parts, globalization, and the beastly thing called “efficiency” that makes people now sail too close to the wind. Add to that consultants and business schools. One problem somewhere can halt an entire project – so the projects tend to get as weak as the weakest link in their chain (an acute negative convexity effect). The world is getting less and less predictable, and we rely more and more on technologies that have errors and interactions that are harder to estimate, let alone predict.”
So project failures and delays are not due to “planning fallacies” but rather to the nonlinear payoff of projects themselves. A project cannot be completed in negative time (nor in zero time), so the errors compound on the right side of the timeline.
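This bounded-below, unbounded-above structure can be sketched with an assumed right-skewed (lognormal) task-time distribution – an illustrative choice, not a claim about real project data:

```python
import random

random.seed(2)
planned = 200  # 200 tasks, planned at 1 day each (the median task time)

# Task-time errors are bounded below (no task finishes in negative time)
# but unbounded above, so overruns can be arbitrarily large while
# underruns are capped – the errors compound to the right.
total = sum(random.lognormvariate(0, 0.8) for _ in range(planned))
print(round(total / planned, 2))  # ratio well above 1: overruns dominate
```

Even though half the tasks finish faster than planned, the unbounded right tail drags the total duration past the plan almost surely.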
Most people and organizations focus on doing something – the naive intervention described earlier. Acts of omission, of not doing something, are neglected. But only charlatans give purely positive advice; professionals in an area use negatives as well. You win chess by not losing, you get rich by not going bust, and in many ways you learn life by learning what to avoid.
“So the central tenet of the epistemology I advocate is as follows: we know a lot more what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition – given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily.”
“Since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.”
The ‘less is more’ approach also works in decision making. Taleb states that if you have more than one reason for doing something, don’t do it – it probably means you’re convincing yourself to do something. If a course of action is obvious, it does not need more than a single reason.
Time and fragility
Focusing too much on technology is also an additive process – the fragile is not subtracted, but the future is extrapolated from the current narrative or utopia. Taleb writes that the past is a much better teacher for what will happen in the future than the present. “To understand the future, you do not need technoautistic jargon, obsession with “killer apps,” these sort of things. You just need the following: some respect for the past, some curiosity about the historical record, a hunger for the wisdom of the elders, and a grasp of the notion of “heuristics,” these often unwritten rules of thumb that are so determining of survival. In other words, you will be forced to give weight to things that have been around, things that have survived.”
The Lindy effect: “The old is expected to stay longer than the young in proportion to their age”
Information tends to hide failures. People hear about successes on the stock market and forget about the failures; they read novels that are still in print but forget about wonderful books out of print; with technology, they focus on the pieces that survived, not on the ones that could have had benefits but did not survive.
“Do you have evidence?” fallacy
In some cases the focus should be on the absence of evidence rather than the evidence. The “do you have evidence?” fallacy is demanding proof of harm before acting – which means it can take decades for public opinion to change on harmful things (see smoking). Taleb proposes a simple solution: anything non-natural needs to give evidence of its benefits; the natural does not.
Or put differently: “What Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.”
To live long, but not too long
Via Negativa, focusing on what not to do, can have a lot of benefits. For a start it prevents iatrogenics and the intervention bias. The best health improvement of the last sixty years was simply not smoking. As another example, take happiness: who can tell us how to reach it exactly? But most people probably know how to prevent unhappiness.
“The good is mostly the absence of bad” – Ennius
“I, for my part, resist eating fruits not found in the ancient Eastern Mediterranean. I avoid any fruit that does not have an ancient Greek or Hebrew name, such as mangoes, papayas, even oranges. […] As to liquid, my rule is to drink no liquid that is not at least a thousand years old – so its fitness has been tested. I drink just wine, water, and coffee. No soft drinks. […] From such examples, I derived the rule that what is called “healthy” is generally unhealthy, just as “social” networks are antisocial, and the “knowledge”-based economy is typically ignorant.”
Skin in the game
“Cowardice enhanced by technology is all connected: society is fragilized by spineless politicians, draft dodgers afraid of polls, and journalists building narratives, who create explosive deficits and compound agency problems because they want to look good on the short term.”
A solution to the transfer of fragility between individuals – or to address the asymmetries in rewards and punishments – is to have skin in the game. Every person making a prediction needs to be exposed to the downside. E.g. those advocating for war should have at least one descendant or family member exposed to the fighting.
The absence of penalties also means that people (Taleb specifically states academics) can write different viewpoints in different papers. If there is no harm associated with this they can then cherry-pick from the statements made, and convince others (and themselves) they made a correct prediction. A cure for some of these problems is not to ask people for their opinion but what they have in their portfolio, or what actions they’ve taken.
“The psychologist Gerd Gigerenzer has a simple heuristic. Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.”
In the real world it doesn’t matter how many mistakes you make or how often you’re wrong – provided, of course, you focus on antifragile payoffs. In those cases you can be wrong for years, yet right in the singular moment that actually matters. What is important in the end is the payoff.
“Suckers try to win arguments, nonsuckers try to win.”
When it comes to researchers and academics, Taleb advises a simple heuristic: if the person works on ideas applicable to real life, does he or she apply them to their own daily life as well? If yes, they can be taken seriously. If no, it’s better to ignore them.
Interested in Antifragile? Get the book on Amazon.