Each of these actions was small, but we followed a principle of improving by marginal gains. We gained insights that validated or invalidated these hypotheses. Over time, we felt more equipped to place bigger bets and make bigger commitments. In 2016, we joined the first startup cohort of Project Include, a data‐driven DEI initiative anchored at the Kapor Center. 18 We validated that DEI initiatives don't succeed when treated as just HR or marketing programs. We sharpened our perspective, learning that the best results come when the CEO is directly involved, either hands‐on or as an executive sponsor. Cultural change requires the CEO to set, model, and reinforce purpose and accountability.
Finally, there was the well‐documented, cross‐industry data on pay disparities. We surmised that if there is a pay disparity between a Black woman and a White man hired on the same day for the same job, the two could perform equally well and earn the same percentage salary increases, yet because raises compound on different starting salaries, in five years there would be a substantial difference in their compensation. How long do you think that high‐performing Black woman will remain at your company after she finds out that she is being paid less for the same job? We saw that even well‐intentioned companies fell into that trap. If we wanted to attract and retain people from diverse backgrounds at Truss, we believed salary transparency was a concrete initiative to address this complex problem.
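To make that arithmetic concrete, here is a minimal sketch in Python. The starting salaries ($100,000 versus $90,000) and the 4 percent annual raise are illustrative assumptions, not figures from Truss or from the pay‐disparity studies themselves:

```python
# A minimal sketch of how an initial pay gap compounds when two employees
# receive identical percentage raises. Starting salaries and the 4 percent
# raise are illustrative assumptions, not figures from the source.

def project_salary(start: float, annual_raise: float, years: int) -> float:
    """Apply the same percentage raise each year and return the final salary."""
    return start * (1 + annual_raise) ** years

YEARS = 5
RAISE = 0.04  # the same raise for both employees

salary_a = project_salary(100_000, RAISE, YEARS)  # hired at $100,000
salary_b = project_salary(90_000, RAISE, YEARS)   # hired the same day at $90,000

print(f"After {YEARS} years: ${salary_a:,.2f} vs ${salary_b:,.2f}")
print(f"Gap grew from $10,000.00 to ${salary_a - salary_b:,.2f}")
```

Under these assumptions, the $10,000 starting gap grows to about $12,167 after five years – the gap itself compounds at the same 4 percent rate, so equal treatment after an unequal start doesn't close the difference; it widens it.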
But we didn't know the answer, so we started on the journey using what became the framework for Move to the Edge, Declare It Center. I'll return to the specific methods in the Salary Transparency Case Study in Chapter 2. However, it is time to examine the ways we can make poor decisions around complex problems, despite our best intentions.
Defeating the Defaults: Making Decisions Under Uncertainty
When I wrote the July 2016 letter that opened the book, I was struggling with two significant issues: complexity and uncertainty. The issue of racial justice is complex, and I felt deeply uncertain about how to address it with my company. My “I don't know” moment was how my employees and the larger public would respond as I revealed how vulnerable I felt. There were significant risks if I got the message wrong or created unintended consequences. I decided to show up and speak up, and my message resonated – not just with my employees but also with a larger readership when I published it in Forbes. I've also witnessed leaders, including myself, making poor decisions when confronted with complexity and uncertainty, especially during the summer of 2020. I got curious about how complexity and uncertainty affect decision‐making, and it became an important building block for the Move to the Edge, Declare It Center framework.
If you searched “decision‐making under uncertainty” on the internet, you would find thousands of academic citations across the fields of organizational behavior, neuroscience, behavioral economics, and computer science. In addition to the academic work, the topic shows up in the theater (from Hamlet to Hamilton) and even in your decision about which line to get in at the grocery store. Suffice it to say that a full literature review is way beyond the scope of my brain, your attention, and this book.
Over the course of my career, I have experimented with diverse research, tools, and practices in order to solve complex problems under uncertainty. I learned, through good and bad decisions, to cultivate reliable practices that help make better – though not perfect – decisions in uncertain times. There are four key principles that anchor these practices, and they animate the rest of this book.
Our Brains: Wired for Shortcuts
The first three principles have to do with our brains. First, we're less rational than we think. I used to have endless arguments with economists in grad school, where the dominant paradigm was based on the assumption that humans are rational actors. In contrast, my personal experience reflected theories in social psychology that people use simplifying shortcuts, social cues, and responses to their own emotional states in order to make decisions that might be considered irrational. For example, the collected works of Daniel Kahneman and Amos Tversky, detailed in Thinking, Fast and Slow, not only begat the field of behavioral economics and won Kahneman a Nobel Prize but also opened the door to more accurate ways of thinking about decision‐making that assume that humans are full of predictable biases, illusions, and preferences.
Second, our brains are wired to make systematic shortcuts in decision‐making. Kahneman distinguishes two types: System 1 and System 2 thinking. System 1 thinking is fast. 19 It's generally unconscious, automatic, and effortless. The brain in System 1 mode assesses the situation, acts without conscious control, and responds. Roughly 98 percent of our thinking occurs here. System 2, on the other hand, is slow. It is conscious, deliberate, effortful, and rational. The brain in System 2 mode seeks anomalies and missing information, and consciously makes decisions – roughly 2 percent of our thinking occurs here. This division is purposeful – imagine having to think about contracting and lowering our diaphragm in order to inhale every single breath. 20
Unfortunately, System 1 (fast) can interfere with or influence System 2 (slow) through numerous cognitive biases without our conscious knowledge. Confirmation bias, for example, is the tendency to search for, interpret, and recall information that confirms one's prior beliefs or preconceptions. 21 A 2021 study of soccer commentators showed that they are six times more likely to associate attributes of physical power with players of darker skin tone than with players of lighter skin tone. 22 This is an example of how the stereotype that darker‐skinned people have greater physical prowess and lack intellectual prowess manifests in everyday, unconscious ways. Confirmation bias shows up in how we assess job candidates as well as how we evaluate business proposals. After all, when 98 percent of our thinking is System 1 (fast), we are subject to it unless we consciously slow down or create systems to help.
The third influence on decision‐making relevant for this book is Type 1 and Type 2 decision‐making. It's a framework that categorizes different kinds of decisions and can help corral the best of our brainpower to make good ones. Type 1 decisions are irreversible; Type 2 decisions are reversible. It is crucial to distinguish which type you face before committing to an action. For example, if a decision is Type 1 (irreversible) and consequential, rigorously gather facts and perspectives from multiple, diverse sources, and actively mitigate risk. On the other hand, that's way too much effort for a reversible Type 2 decision. Instead, it's better to decide quickly, especially if you can run an experiment.
I learned a useful twist to decision‐making when listening to author Jim Collins talk about his research with Morten Hansen on the impact of time and risk. 23 They learned that the best decision makers could assess when the risk would increase. For Type 1 decisions that were irreversible and consequential, if risk was increasing rapidly, they would use the best available information and make a decision fast. If risk wasn't going to change for a long time, they would rigorously gather data first (see Figure I.3).
I experienced this in my own life. When my father was diagnosed with stomach cancer in 2010, he had multiple treatment options, but he had time to decide which option to pursue before the tumor grew too big and limited his choices. We rigorously sought new data from different doctors, researchers, and other patients. We decided to remove his stomach because we were absolutely clear it was the best choice. It was a Type 1 irreversible decision, but the risk was not increasing rapidly. In contrast, when the Tubbs Fire erupted in 2017 in rural northern California near Calistoga, risk was increasing by the second because of 70 mph wind gusts and dry grasses due to drought. “Evacuate!” was a Type 1 irreversible decision made with no time to gather more data, and it was the right call.