Oh S***! Why imagining the worst is both fun and important

Photo by Lili Popper on Unsplash

What’s the worst that can happen?

It’s the question at the heart of a simple game I like to call “Oh sh*t!” The way it works is straightforward, and based on the principle of encouraging imagination and originality that underlies many creative thinking exercises.

Any time you are considering introducing guidelines or rules on any subject, ask one question: what is the worst outcome that this rule makes more likely? Like many other creativity games, it’s best played in a group, and the results best scored by discussion. Using another principle common to creative thinking, the fewer people who have a similar idea, the more that idea scores. And the worse the consequence, the more that idea scores.
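The scoring rule described above — rarer ideas score more, worse consequences score more — can be sketched in a few lines of code. This is a hypothetical illustration, not a published scoring system; the function name, the 1–5 severity scale, and the rarity formula are all my own assumptions.

```python
# Hypothetical sketch of the "Oh sh*t!" scoring rule: an idea scores
# higher the fewer players independently proposed it, and the worse
# its imagined consequence (severity agreed by group discussion).

def score_idea(num_proposers: int, severity: int, group_size: int) -> int:
    """Score one idea from a round of the game.

    num_proposers: how many players independently had this idea (>= 1)
    severity: agreed badness of the consequence, 1 (mild) to 5 (disastrous)
    group_size: total number of players in the session
    """
    # A unique idea (one proposer) gets the maximum rarity bonus;
    # an idea everyone had gets the minimum.
    rarity = group_size - num_proposers + 1
    return rarity * severity

# In a group of six, a maximally dire idea only one person had
# beats a middling idea that four people shared.
unique_dire = score_idea(num_proposers=1, severity=5, group_size=6)  # 30
common_mild = score_idea(num_proposers=4, severity=3, group_size=6)  # 9
assert unique_dire > common_mild
```

Multiplying rarity by severity is just one reasonable choice; the point is that both dimensions contribute, so a common but catastrophic idea can still outscore a unique but trivial one.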

Imagining disastrous unintended consequences can sometimes feel like writing a bad undergraduate philosophy essay. The kind of examples you come up with often feel ridiculous. We feel like we know, deep down, that the artificial intelligence we programme to find a way to put an end to road deaths won’t actually do it by killing everyone at sea. Just as we feel like we know that people who say three-strikes laws will lead to more egregious offending are just being alarmist.

But what’s behind that confidence?

I would suggest that every reason we have for believing the bad things that could happen won’t actually happen is seriously flawed, and arises from some major cognitive biases:

  1. “It hasn’t happened yet.” The major reason we find it hard to believe really bad things will happen is that they haven’t happened to us yet. But that is survivorship bias — if they had, we wouldn’t be here to disbelieve them. Really bad things have happened throughout history, just not to us. And before they happened, the people they happened to were in exactly the position we are in now.
  2. “Someone will stop it before it’s too late.” It’s not a bad rationale, except when you unpack the next bit of that sentence, “so we don’t have to worry about doing anything.” Much of the time we ARE the “someone”.
  3. The illusion of control. We like to believe we can control processes way past the point we actually can. We believe “there’ll be time to act later, so let’s wait and see for now.” This is a problem because most of the time things in our lives are in our control — or turn out in such a way that it appears they were. We can usually start that exercise regime tomorrow, or ask for an extension to our essay next Monday if we don’t get it done over the weekend. Most of the time, most things in our lives are not too late to change. And that means the things that are too late take us by surprise, when they really shouldn’t.
  4. The exponential growth problem. This goes with the last one. We’re really bad at spotting exponential growth. So we deny it’s happening until it starts to explode, by which time it’s too late.
  5. “They didn’t mean it that way.” Metrics to improve the survival rate of heart surgery weren’t intended to lead to the sickest patients being denied treatment. But that didn’t stop it. Many consequences — and in the case of an algorithm, all consequences — are oblivious to motive. It might not be your motive to cause a particular harm, but unless you build in a specific way of preventing it, it will still happen. And that’s why you need to play “Oh Sh*t!”

When might you play “Oh sh*t!”? New IT security policies would be a great example. If you are introducing a new password policy and you want to make it completely accessible, the worst consequence is that anyone can hack you, including some very bad actors. If you make it really secure, the worst consequence might be that many people will be unable to use it: your organisation could lose access to its most original thinkers, and they could end up losing their jobs, or their access to basic services.

Once you’ve identified the bad consequences, you can evaluate them. Most importantly, you can evaluate the reasons why you want to reject them as ridiculous. You might still reject them, but I’d recommend not introducing any major change without playing the game. Unless you’ve identified a policy’s consequences, you can’t really make a proper choice about whether to adopt it.

I’d also recommend your teams play the game regularly just for fun — and to get used to, and better at, imagining the very worst.

If you would like to talk about how I can help your organization be more imaginative about avoiding unintended consequences, email rogueinterrobang@gmail.com



Dan Holloway

CEO & founder of Rogue Interrobang, University of Oxford spinout using creativity to solve wicked problems. 2016, 17 & 19 Creative Thinking World Champion.