I think the most common technique is to ask ourselves, "What is the most likely outcome?", and if that outcome is good, then we do it (to the extent that people actually reason through decisions at all). That works well enough for many decisions -- for example, you might believe that the most likely outcome of going to school is that you can get a better job later on, and therefore choose that path. That's the reasoning most people use when going to school, getting a job, buying a house, or making most other "normal" decisions. Since it focuses on the "expected" outcome, people using it often ignore the possible bad outcomes, and when something bad does happen, they may feel bitter or cheated ("I have a degree, now where's my job!?"). For example, most people buying houses a couple of years ago weren't considering the possibility that their new house would lose 20% of its value, and that they would end up owing more than the house was worth.
When advising on startups, I often tell people that they should start with the assumption that the startup will fail and all of their equity will become worthless. Many people have a hard time accepting that assumption, and say that they would be unable to stay motivated if they believed such a thing. It seems unfortunate that these people feel the need to lie to themselves in order to stay motivated, but recently I realized that I'm just using a different method of evaluating risks and opportunities.
Instead of asking, "What's the most likely outcome?", I like to ask "What's the worst that could happen?" and "Could it be awesome?". Essentially, instead of evaluating the median outcome, I look at the 0.01 percentile and the 95th percentile outcomes. In the case of a startup, the worst-case outcome is generally that you lose your entire investment (but learn a lot), and the best case is that you make a large pile of money, create something cool, and learn a lot. (See "Why I'd rather be wrong" for more on this.)
Thinking about the best-case outcomes is easy and people do it a lot, which is part of the reason it's often disrespected ("dreamer" isn't usually a compliment). However, too many people ignore the worst-case scenario because thinking about bad things is uncomfortable. That's a mistake, and it's part of why we see people killing themselves over investment losses: they never planned for the worst case. Thinking about the worst case not only protects us from making dumb mistakes, it also provides an emotional buffer. If I'm comfortable with the worst-case outcome, then I can move without fear and focus my attention on the opportunity.
Considering only the best- and worst-case outcomes is not perfect, of course -- lottery tickets have an acceptable worst case (you lose a dollar) and a great best case (you win millions), yet they are generally a bad deal. Ideally we would also consider the "expected value" of our decisions, but in practice that's impossible for most real decisions because the world is too complicated and the math is hard. If the expected value is available (as it is for lottery tickets), then use it (and don't buy lottery tickets); otherwise we need some heuristics. Here are some of mine:
- Will I learn a lot from the experience? (failure can be very educational)
- Will it make my life more interesting? (a predictable life is a boring life)
- Is it good for the world? (even if I don't benefit, maybe someone else will)
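The lottery-ticket case above is one of the rare decisions where the expected value actually can be computed. A minimal sketch, using made-up odds and payouts purely for illustration (real lotteries vary):

```python
# Hypothetical lottery -- these numbers are invented for illustration,
# not taken from any real game.
ticket_price = 1.00            # cost of one ticket, in dollars
jackpot = 10_000_000.00        # prize if you win, in dollars
odds_of_winning = 1 / 100_000_000

# Expected value of buying one ticket: the payoff weighted by its
# probability, minus the certain cost of the ticket.
expected_value = jackpot * odds_of_winning - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")
```

With these made-up numbers the expected value is -$0.90 per ticket: a fine worst case and a great best case, but still a losing bet on average.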
I've been told that I'm extremely cynical. I've also been told that I'm unreasonably optimistic. Upon reflection, I think I'm ok with being a cynical optimist :)
By the way, here's why I chose the 0.01 percentile outcome when evaluating the worst case: last year there were 37,261 motor vehicle fatalities in the United States. The population of the United States is 304,059,724, so my odds of getting killed in a car accident are very roughly 1/10,000 per year (of course, many of those people were teenagers and alcoholics, so my odds are probably a little better than that, but as a rough estimate it's good). Using this logic, I can largely ignore obscure 1/1,000,000 risks, which are too numerous and difficult to protect against anyway.
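The back-of-the-envelope arithmetic above, as a quick sketch:

```python
# Risk estimate using the figures quoted above.
fatalities = 37_261          # US motor vehicle fatalities in one year
population = 304_059_724     # US population

annual_risk = fatalities / population
# Works out to about 1 in 8,160 per year, which rounds to the
# "very roughly 1/10,000" figure used in the text.
print(f"Annual risk: about 1 in {population // fatalities:,}")
```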
Also see "The other half of the story".