
History’s Repeated Decision Dilemmas

This is the one where I interview myself about decisions in history.

In this interview, we explore the fascinating topic of why people and organizations often fail to learn from history, leading to repeated mistakes. We dive into historical examples like the Vietnam War, the Coca-Cola fiasco of the 1980s, and Hugo Chávez's economic policies in Venezuela. We discuss the root causes behind these repeated errors, emphasizing the impact of cognitive biases and personal motivations on decision-making. Practical strategies for overcoming these challenges, including structured decision protocols and balancing emotional and known information, are highlighted. Personal anecdotes and lessons learned from professional experiences further enrich this engaging and insightful conversation.

Interviewer: Thanks for sitting down with me to discuss this intriguing topic! To start us off, what are some historical examples where businesses or societies have repeatedly made the same mistakes? Why do you think these mistakes were repeated despite past failures?

Brody: I love this topic - it's become a bit of an obsession for me recently. First, I'll start by saying that I'm not someone who warns against history repeating itself. It doesn't - at least I don't believe it does. What I do know, though, is that people and organizations are prone to make the same mistakes over and over again, regardless of the information available to them before a decision point.

One of the key things about this topic is that "history" is made up of a continuous series of decisions, and those decisions are made by people, who (we can only assume) are making the decision they feel is best with the information they have available to them. So then why is it that we see similar poor decisions made, even though the information we have keeps getting better and better? 

Before we get into examples, I want to caveat this entire conversation. My interest in this topic is not about the information conveyed - my interest is rooted in the motivations and biases that lead people to interpret information the way they do. Therefore (like I discussed in my last post), it's important to keep a fully objective view of the decisions made by leaders and assume they were doing the best they could based on their interpretation of the information, their personal motivations, and their biases.

Interviewer:  That’s a compelling perspective. Given your focus on motivations and biases, can you share some historical examples where businesses or societies have made the same mistakes repeatedly? Why do you think these mistakes were repeated despite having more information at their disposal?

Brody: Sure! Let's explore a few areas. One is the most impactful and most interesting to me: war. The Vietnam War was an incredible example of a sequence of decisions that were made based on accurate information and still led to one of the worst geopolitical blunders in United States history. Another is business - everyone knows the Kodak story, so I'll spare the repetition and explore the lesser-known Coke fiasco of the 1980s, when executives at Coca-Cola reacted to information without understanding its root. And finally, another instance I find fascinating is Hugo Chávez in Venezuela, and his well-meaning attempt to redistribute wealth and champion the impoverished that instead created one of the worst economic crises in history.

Interviewer: Let’s dive a bit deeper into each one. Starting with the Vietnam War, what specific decisions were made based on accurate information that led to such a significant geopolitical blunder?

Brody: There are so many! In the whirlwind of decisions that defined the Vietnam War, one figure stands out for his profound impact and subsequent regret: Robert S. McNamara. As detailed in his book “In Retrospect,” McNamara, then U.S. Secretary of Defense, navigated a series of pivotal moments that escalated the conflict. Despite possessing accurate information, his decisions, like his response to the infamous Gulf of Tonkin incident in August 1964, were grounded in misinterpretations and flawed assumptions. That event, misreported as an attack on U.S. naval vessels, led to the Gulf of Tonkin Resolution, granting the President extensive military powers and setting the stage for deeper U.S. involvement. Similarly, the escalation after South Vietnamese President Diem’s assassination in November 1963 and the decision to launch Operation Rolling Thunder in February 1965 were driven by overconfidence in quick military solutions and a misunderstanding of the enemy’s resolve.

By the mid-1960s, despite growing doubts about the war’s winnability, McNamara and his colleagues doubled down, influenced by the domino theory and fears of a communist takeover in Southeast Asia. The deployment of large numbers of U.S. ground troops in July 1965, along with continuous bombing campaigns, only deepened the quagmire. These actions were fueled by a mix of anti-communist fervor and a misreading of Vietnam’s nationalist motivations. Even as internal reports signaled the strategy’s failure, the commitment to avoid perceived loss of U.S. credibility kept the war escalating.

There are so many incredible lessons to be learned from this event in history, but it's important to note that regardless of the information delivered, the motivations of the individuals receiving it were a far stronger factor in determining what happened next. McNamara’s later reflections reveal a man haunted by these choices, offering a stark lesson on the importance of critical thinking and the dangers of unchallenged assumptions in policymaking.

Interviewer: Moving on to the business world, you mentioned the Coca-Cola fiasco of the 1980s. Can you elaborate on what happened and how executives at Coca-Cola reacted to information without understanding its root?

Brody: Yes, definitely! This one is fascinating because it happens so frequently in business. The combination (formula) of biases that led to the Coke fiasco can be found time after time in the history books - and there are many more stories yet to be written that will show the same patterns! Here's the breakdown...

In the mid-1980s, Coca-Cola executives faced mounting pressure as Pepsi gained market share and younger consumers began favoring the sweeter taste of Pepsi over the traditional Coca-Cola formula. To counter this trend, Coca-Cola decided to reformulate its flagship product, resulting in the creation of “New Coke.” This decision was backed by extensive market research, including taste tests with nearly 200,000 participants, which indicated a preference for the sweeter new formula over both the original Coke and Pepsi... Makes sense, right?

However, the executives’ decision was influenced by several cognitive biases. One primary bias was confirmation bias: the executives placed more weight on data supporting the move toward a sweeter product, which aligned with their preconception that sweetness was the key to regaining market share. Another less prominent, yet significant, bias was the bandwagon effect, driven by the success of Pepsi’s marketing campaigns and its increasing popularity. Coca-Cola likely felt compelled to follow what appeared to be a market trend toward sweeter beverages. (I've always believed that relying on the competition for insight is the downfall of most businesses.)

Once New Coke was launched in April 1985, the company was astonished by the scale of the consumer backlash. This response highlighted another bias they had underestimated: loss aversion. Loyal Coca-Cola drinkers had a deep emotional attachment to the original formula (much like how many people, Bill Gates included, feel about Diet Coke today), and the loss of the familiar taste triggered significant consumer anger and disappointment. The failure to predict this emotional attachment and the intensity of the response was a miss by Coke's executive team, even though it was well known by the employees "on the ground".

Additionally, the status quo bias played a role in the swift reversal. The negative reaction to New Coke led to a quick reinstatement of the original formula, branded as “Coca-Cola Classic,” just 79 days after the introduction of New Coke. This move was aimed at restoring the status quo and placating the disillusioned customer base. The backlash and the subsequent reintroduction of the original formula underscored the underestimation of consumer sentiment and the over-reliance on quantitative data without sufficient qualitative insight around brand loyalty.

In retrospect, Coca-Cola’s decision to introduce New Coke serves as a valuable lesson in the risks of overreliance on quantitative data, and a reminder to check for the biases we all bring to a decision.

Interviewer: That breakdown of the Coca-Cola fiasco is a great example of how cognitive biases can derail well-intentioned decisions. Let’s turn to the political realm now. You mentioned Hugo Chavez in Venezuela and his attempt to redistribute wealth, which ended in economic crisis. Can you explain what happened there and how his motivations and biases led to such an outcome?

Brody: This one I know a bit less about, so I'll do my best to summarize it quickly. Basically, Hugo Chávez swept into power in Venezuela on a wave of populist fervor, promising to tackle the rampant social inequality and corruption that had plagued the nation for decades. With oil revenues soaring, Chávez launched ambitious social programs, nationalized major industries, and created communal councils to give power back to the people. Sounds pretty good if you ask me.

The early years of his presidency saw significant reductions in poverty and improvements in healthcare and education. However, Chávez’s deep-seated biases towards state control and his mistrust of neoliberal economic principles drove decisions that would ultimately backfire. He believed that a strong, centralized government could best manage the economy, leading to an over-reliance on volatile oil revenues and sweeping price controls that disrupted the market.

Chávez’s ideologically driven policies neglected the need for economic diversification and infrastructure investment. This shortsightedness, coupled with his concentration of power and dismantling of institutional checks and balances, set the stage for a catastrophic economic downturn. When oil prices plummeted, Venezuela’s economy, heavily dependent on oil exports, spiraled into chaos. Hyperinflation soared, shortages of basic goods became widespread, and living standards plummeted. Chávez’s successor, Nicolás Maduro, inherited an economy in freefall and continued to adhere to the same flawed policies, exacerbating the crisis. The story of Venezuela under Chávez is a cautionary tale of how an individual's hyperfocus on doing what they feel is right, without looking at the big picture, can have disastrous outcomes for the very people they were trying to do right by.

Interviewer: It’s clear how Chávez’s good intentions were undermined by his biases and lack of economic foresight. Shifting focus a bit, let’s talk about the root causes. What do you believe are the key factors that prevent people or organizations from learning from history? Is it a lack of information, overconfidence, or something else?

Brody: I don't believe it's a lack of information. Overconfidence is a big driver, but more prevalent is simply the motivation to be the person with the "right answer" instead of being the person who finds and executes the right answer - regardless of who provides it. Maybe that's hubris, maybe that's overconfidence, or maybe that's fear... there are a lot of things that drive decisions. My point here is that the most important thing we can do is recognize the motivations and feelings that drive our decisions, so that we can openly and without bias accept all information and make the best decision possible.

Interviewer: That’s a nuanced take on the root causes. Recognizing and addressing personal motivations seems crucial for unbiased decision-making. What strategies or approaches do you think could help individuals and businesses better learn from history and avoid repeating mistakes? Are there any examples of organizations or leaders who have successfully done this?

Brody: One of my favorite approaches here is maybe a less popular one, which is "more process". In “Noise,” Daniel Kahneman and his co-authors highlight the detrimental impact of variability in human judgment. They suggest that reducing noise, or unwanted variability, in decision-making can be achieved by implementing structured decision protocols, using algorithms where appropriate, and conducting regular decision audits to identify and minimize inconsistencies. In practical terms, this means organizations should strive for more standardized processes and use data-driven methods to enhance consistency and fairness in decisions - the downfall is that organizations capture the data but never put the decision frameworks in place to use it correctly (probably because they want to maintain control over decisions).
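To make the "more process" idea a little more concrete, here's a minimal sketch of what a simple decision audit could look like - the names, scales, and threshold are all hypothetical, just an illustration of the pattern Kahneman describes: have several reviewers judge the same case independently, then flag the cases where their judgments diverge.

```python
from dataclasses import dataclass
from statistics import stdev


@dataclass
class Judgment:
    """One reviewer's independent call on a single decision case."""
    case_id: str
    reviewer: str
    score: float  # e.g. 0-10 rating of how strongly we should proceed


def audit_noise(judgments: list[Judgment], threshold: float = 2.0) -> dict[str, float]:
    """Group judgments by case and flag cases whose spread exceeds a threshold.

    A large spread means reviewers looking at the same information reached
    very different conclusions - the "noise" the book warns about.
    """
    by_case: dict[str, list[float]] = {}
    for j in judgments:
        by_case.setdefault(j.case_id, []).append(j.score)

    noisy = {}
    for case_id, scores in by_case.items():
        spread = stdev(scores) if len(scores) > 1 else 0.0
        if spread > threshold:
            noisy[case_id] = spread
    return noisy


# Example: three reviewers independently score the same two roadmap proposals.
judgments = [
    Judgment("feature-A", "alice", 8.0),
    Judgment("feature-A", "bob", 3.0),
    Judgment("feature-A", "carol", 9.0),
    Judgment("feature-B", "alice", 6.0),
    Judgment("feature-B", "bob", 5.5),
    Judgment("feature-B", "carol", 6.5),
]
print(audit_noise(judgments))  # feature-A shows high spread -> revisit the protocol
```

The point isn't the code itself - it's that the audit only works if you actually run it on a regular cadence, which is exactly the framework most organizations skip.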

Another thing I have been doing recently: when I make decisions, I create a four-quadrant graph where I take the different pieces of information I have about a particular decision, plot them according to "Emotional Information" vs. "Known Information", and assess my confidence level in each. This is important because it isn't about ignoring the emotional information you have around a particular decision - you just need to be careful you're not over- or undervaluing it against the other information available. Here's a rough sketch of what I mean below.
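This is only an illustration of the idea, not a prescribed tool - the 0-10 scales, the example inputs, and the use of matplotlib are my own assumptions for the sketch.

```python
import matplotlib.pyplot as plt

# Each piece of information about the decision gets two subjective ratings
# (0-10): how emotional vs. how verifiable/known it is, plus a confidence
# level (0-1) that sets the marker size.
evidence = [
    # (label, emotional, known, confidence)
    ("Users said they liked the activities", 7, 3, 0.6),
    ("Platform engagement metrics", 2, 8, 0.9),
    ("Competitor usage data", 4, 6, 0.5),
    ("Gut feel about brand loyalty", 9, 2, 0.4),
]

fig, ax = plt.subplots()
for label, emotional, known, confidence in evidence:
    ax.scatter(known, emotional, s=300 * confidence)
    ax.annotate(label, (known, emotional), textcoords="offset points", xytext=(5, 5))

# Quadrant lines split the space at the midpoint of each axis.
ax.axhline(5, linestyle="--")
ax.axvline(5, linestyle="--")
ax.set_xlabel("Known Information")
ax.set_ylabel("Emotional Information")
ax.set_title("Where does each input sit, and how confident am I in it?")
plt.show()
```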

Interviewer: Your quadrant graph sounds like a great tool for balancing emotional and known information. Finally, on a personal note, have you ever experienced or witnessed a situation where failing to learn from history led to repeated mistakes? How did that impact you, and what did you take away from the experience?

Brody: I'm assuming you mean besides my choice of previous relationships? 

Interviewer: Ha, besides that, yes! Though if there's a lesson there too, feel free to share! In a professional or organizational context, have you witnessed repeated mistakes due to a failure to learn from history? How did it impact you, and what did you take away from the experience?

Brody: I'd rather not share, lol. In a professional setting, I have personally overestimated what the data was telling me. While creating a new Sales Performance Platform, I looked a lot at engagement metrics around certain activities within the digital learning platform we were using at the time. I then paired those with another data point: competitor platforms and the usage on them. I used these engagement metrics to inform our roadmap decisions, and decisions around what to build and what not to build. After releasing some of these key activities, we saw much lower engagement numbers than we had seen in the previous platforms, and even lower than competitors. This was a huge red flag for me, and it even became a primary issue at the Board level.

After having a few simple discussions with users, I realized that something I had missed in all of my qual research was the fact that users were being incentivized by their L&D teams to complete these particular activities. Looking back at the research, they all noted that they "really liked these activities," and my biases led me down a path of thinking that because they liked the activities, they would engage willingly.

The lesson? Information and data are the outcome measures of decisions already made - both need to be important inputs when you're making your own.

This interview was inspired by: Jocko Podcast #440 “Why we don’t learn from history”

Other inspiration comes from one of my favorite books Decisive by Chip & Dan Heath.