Why do we need resilience anyway?

[Image: Post-flood damage – India (SCAD)]

By Douglas Owen FSI

To understand why we need systems and societies capable of resilient performance, we first have to understand what the problem is. The problem is this: the world is complex. Really, really, mind-bogglingly complex. And only getting more so thanks to us – it’s not called the Anthropocene for nothing. People who know about these things use words like volatile, ambiguous, emergent, coupled, chaotic and so on to describe the characteristics of our cities, institutions, systems, technology and their interactions within the natural environment.

Complexity in itself is actually fine. Great, really. Without it we couldn’t have life, weather, cities or Romanesco broccoli (my favourite edible fractal). The problem is actually ours. It’s just that all this complexity has a nasty habit of tripping us up and doing us harm. Why? Because the world is so complex that it’s beyond the limits of any human being’s intellect to understand it and act judiciously within it. So despite our best efforts to make good choices, our decisions tend to fall a bit short when it comes to foreseeing the full extent of their effects, good and bad.

It’s very important that we get comfortable with this idea: that the complexity of our world is only a problem because we’re just not smart enough, and never will be. At least not in terms of a brute-force capacity to understand every single interaction and outcome of our decisions, actions and environment that may bring benefits or do harm to ourselves and our world.

History is littered with the wreckage of unforeseen and unintended consequences of human endeavour. So we need both to get better at not tripping over our own shoelaces, and to stop being surprised when we are inevitably blindsided by the source or size of disturbances. As Judith Rodin – President of the Rockefeller Foundation – put it: crisis is the new normal.

But it’s not like we don’t try, and with good effect too. The aviation and nuclear industries are proof that we can figure out how to do incredibly dangerous and complicated things with almost unbelievable reliability, almost all of the time. We achieved this by spending a good part of the latter half of the 20th century devising tools that help us foresee the future: risk assessments, event trees, fault trees, dependency diagrams, bow tie analyses, probabilistic safety assessments, scenarios and so forth. More recently, human factors has allowed us to do this for the people in our systems as well. When this isn’t enough, we create computer models with algorithms to do the number crunching for us. Go us! So let’s just do more of that, no?

Not so fast. George Box once wisely said, ‘Essentially, all models are wrong, but some are useful’. All of these tools are based on models that effectively simplify our world into bite-sized chunks for digestion by our puny human brains. They help us make good (or less bad) decisions, or show the powers that be that we’ve done our homework. Their usefulness for foreseeing how to sustain success and dodge disaster diminishes as the complexity of the systems, and of the world they operate in, increases. And as they are developed by humans for use by other humans (with a bit of computer assistance), they are also subject to the vagaries of competence, bias, incentives to get the ‘correct’ answer, and all the other things that affect people doing anything ever.

But it’s not that these deterministic ‘predict and withstand’ approaches have suddenly had their day and are no longer of use. Far from it. They are awesome, and will continue to be so. They have delivered huge benefits in safety and reliability around the globe. They just have a limit. We need to recognise where that limit is, and not be surprised by the bad stuff that happens when we push them beyond it.

Defining that limit is actually really difficult. It’s certainly true that the scope of things that could imaginably affect most systems is greater than the scope of control of their designers, operators and custodians. In response, it has been tempting just to try to foresee harder – to extend the boundaries of the current tools. A number of tools being created under the banner of resilience try to do this by adding more prompts, more categories, and the possibility of drawing more lines representing interconnections between the stuff in our systems. At some point, though, the return on investment of this effort to foresee harder stops paying off.

It’s a bit like trying to decide whether to spend $1, $5 or $1,000 on lottery tickets. When you consider the odds of choosing the right six numbers, it makes no appreciable difference to your chances of winning whether you spend $1 on tickets or $1,000. You’re better off spending $1 on lottery tickets and switching career to something that pays better.
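To put rough numbers on that, here is a minimal sketch, assuming a standard 6-from-49 draw at $1 per ticket (the specific lottery is hypothetical, not one named in the text):

```python
from math import comb

# Assumption: a standard 6-from-49 lottery draw, $1 per ticket.
total_combinations = comb(49, 6)   # 13,983,816 possible tickets

for spend in (1, 5, 1000):         # dollars spent on distinct tickets
    p_win = spend / total_combinations
    print(f"${spend:>4} of tickets -> win probability {p_win:.7%}")

# Rough output:
#   $   1 of tickets -> win probability 0.0000072%
#   $   5 of tickets -> win probability 0.0071511% is only reached at $1000;
#   $1000 of tickets -> win probability 0.0071511%
```

Even a thousand-fold increase in spend leaves the chance of winning vanishingly small – which is the point: past a certain investment, foreseeing harder barely moves the needle.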

So we have a starting point: we must put aside the delusion that we and our finely crafted tools for peering into the fog of the future have an infinite capacity to wrangle the complexity of our world. We, and they, don’t. Once we accept that, it opens a door that allows us, as humans, with all our capabilities and limitations, to look for new ways to enhance the ability of systems to succeed in a fiendishly complex world, amid the disturbances that resonate around it.

Who doesn’t want to get better at safeguarding what matters, minimising harm to individuals, organisations and wider society, and enhancing organisational and societal wellbeing in a complex world?

Thinking about resilient performance is the invitation to do just that, but in a radically new way. It invites us to find new ways of thinking about the systems we build, their capabilities and how they interact, and to respond to disturbances from within and without. We need to recognise the positive contribution of ‘predict and withstand’ approaches and complement them, without getting bogged down in the eternal swamp of documenting infinite possible interactions and outcomes.

We will still be building our systems out of broadly the same stuff we always have – people, technology and processes – but their function will change to deliver new, resilient capabilities. A reformulated and reconfigurable socio-technical system-of-systems, if you will. One capable of preparing, adapting and succeeding within our complex and volatile environment, to minimise harm and enhance well-being within human and planetary limits.

If that all sounds a bit grand, perhaps we can settle on resilient capabilities being a way to make things not as bad as they would otherwise be.