Decision (i): Choose between
A. sure gain of $240
B. 25% chance to gain $1,000 and 75% chance to gain nothing
Decision (ii): Choose between
C. sure loss of $750
D. 75% chance to lose $1,000 and 25% chance to lose nothing
I found the above in Daniel Kahneman’s excellent book Thinking, Fast and Slow. My initial reaction was to pick AD, but as it turns out the rational choice is BC. If you break down these options, you will find that the combinations result in the following probabilities:
AD: 25% chance to win $240 and 75% chance to lose $760
BC: 25% chance to win $250 and 75% chance to lose $750
When you consider things this way, BC is clearly the better choice. Unfortunately, my brain didn’t automatically take it upon itself to break down the choices into combinations of probabilities. Of course that takes work (although not much in this case), and our brains have certain performance optimizations built in that lead us to make suboptimal decisions based on surface-level information and automatic processing (when the conscious brain is not involved).
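The bookkeeping above can be done mechanically. Here’s a short sketch (not from the book; representing a gamble as a list of (probability, amount) pairs is my own choice) that combines each sure option with its paired gamble and compares the results:

```python
# Option A: sure gain of $240; D: 75% chance lose $1,000, 25% lose nothing.
# Option B: 25% chance gain $1,000, 75% nothing; C: sure loss of $750.

def combine(sure, gamble):
    """Add a sure amount to every outcome of a (probability, amount) gamble."""
    return [(p, sure + amount) for p, amount in gamble]

AD = combine(240, [(0.75, -1000), (0.25, 0)])
BC = combine(-750, [(0.25, 1000), (0.75, 0)])

print(AD)  # [(0.75, -760), (0.25, 240)]
print(BC)  # [(0.25, 250), (0.75, -750)]

# Expected values confirm BC beats AD at every step:
ev = lambda gamble: sum(p * amount for p, amount in gamble)
print(ev(AD), ev(BC))  # -510.0 -500.0
```

Once the options are written down this way, the comparison is trivial; the hard part, as the post argues, is getting the brain to do the breakdown at all.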
Another example of confusing the brain comes earlier in his book. Which line is longer in the image below, the top or the bottom?
It’s very likely that the bottom line looks longer, but in truth both lines are the same length. Now that you know this, look at the lines again. Which line appears longer? It’s hard for the part of the brain that automatically processes visual information to recognize that both lines are of equal length, even though you know that to be true. You have to put in conscious effort to override the automatic part of the brain with the right information.
Here’s another example Kahneman uses to show the lazy behavior of our conscious brain:
A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?
You likely have a number in mind, and there’s a good chance it’s not the right answer. I tried this on myself and on a few friends, and we all got it wrong. If you take a minute to do the computations in your head or write them down, you’ll come up with the right answer. But your initial response may have been similar to ours: ten cents. Of course ten cents isn’t right, because if the ball costs ten cents and the bat costs a dollar more, the combined cost would be $1.20. The correct answer is 5 cents.
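If you’d rather let algebra do the work: with the ball costing b dollars, the bat costs b + 1.00, so b + (b + 1.00) = 1.10 and b = 0.05. A few lines of Python perform the same check; working in cents avoids floating-point rounding:

```python
# ball + bat = 110 cents, and bat = ball + 100 cents.
# Substituting: 2 * ball + 100 = 110  =>  ball = 5.
total = 110        # combined cost in cents
difference = 100   # the bat costs this much more than the ball
ball = (total - difference) // 2
bat = ball + difference
print(ball, bat)   # 5 105

# The intuitive answer (10 cents) fails the sanity check:
assert 10 + (10 + 100) != total   # 120 != 110
assert ball + bat == total
```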
None of the above three examples is complicated, but the automatic part of our brain (our “System 1,” as Kahneman calls it) does a very poor job of processing the information to come up with the logical, rational answers. System 1 turns out to be great at very simple tasks: 2 + 2, driving on auto-pilot on an open highway, recognizing colors, identifying words, among many other examples. But it’s another part of our brain, System 2, that does all of the conscious, effortful work of critical thinking and computation.
Kahneman calls this WYSIATI (What You See Is All There Is). You take surface-level information, your System 1 gives an automatic/intuitive response to it, and when that response is plausible enough our System 2 may never be activated to actually check! By default, our brains are lazy (or optimized, depending on how you see it). When we need to focus and think critically we are capable of it, but when there’s no reason to put forth the mental effort our brains will opt out when possible (or at least when things seem plausible).
Relating this to software
Somewhere in all of this I started thinking about how it relates to crafting software (of course that’s always on my mind). If our brains can be irrational with these simple decisions, it only strengthens my belief in clear, readable, and repeatable tests against software. Even though none of the above examples was worded poorly or in a way intended to hoodwink anyone, there was a good chance our brains failed us on the first pass.
When we are in the act of crafting a part of our software, we have an activated System 2. We’re not relying on System 1 so readily. Rather, we’re thinking critically and deeply about the particular problem we are solving. Tests become our tool to root out logical inconsistencies and ensure we’re covering all of the potential concerns that come to mind now. It’s highly unlikely that any potential code reviewer will have the same depth of thought when they merely open the file later and skim it. This may be part of why code audits are hard, and why thorough ones are time-consuming and costly.
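As a toy illustration of what I mean by capturing our thinking in tests while System 2 is engaged (the function and numbers here are entirely hypothetical, just for the sake of the sketch):

```python
def apply_discount(price_cents, percent):
    """Return the price after a percentage discount, rounded down to whole cents."""
    return price_cents - (price_cents * percent) // 100

# Assumptions we verified while writing the code, recorded as checks so a
# later reader (or our future selves on System 1) doesn't have to re-derive them:
assert apply_discount(1000, 10) == 900   # 10% off $10.00 is $9.00
assert apply_discount(1000, 0) == 1000   # a 0% discount changes nothing
assert apply_discount(999, 10) == 900    # fractional cents round down
```

The checks are trivial, but that’s the point: they freeze the conclusions of a fully engaged System 2 into something a skimming reviewer doesn’t have to reconstruct.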
When doing a code audit we often rely on certain metrics, but also on going through the code itself. Certain patterns commonly point to certain types of problems in the system (a lot of code smells can be recognized with relative ease). However, deeply rooted issues can hide in seemingly okay code. These are harder to identify, especially when you look at a system that has NO tests.
It’s not that we’re doing a bad job with the code review or audit. It’s just that our brain is unable to activate the conscious and effortful System 2 to analyze every aspect of every decision made in the code. A good part of that is because we often weren’t the ones who made those decisions (and even when we were the original authors, we can still get ourselves into trouble).
Because of this we may leave gaps not only in code reviews and audits, but in the software itself. We may seem to be picking the best combination of choices, or seeing lines of different lengths, or quickly handling the easy calculation behind a seemingly simple question. But really, the brain residing in our trusty noggin may not be as credible as we often give it credit for. If only we had captured the expected answers while our System 2 was fully engaged. If only we had written automated tests for our assumptions and expectations about the behavior of something we put so much time into creating.
All of the examples used in this post were pulled from Thinking, Fast and Slow.