Infrastructure

If anything exciting happens, we’ve done it wrong.

Susan365
4 min read · Nov 12, 2020

As John Oliver said in his brilliant episode about infrastructure in March 2015 (paraphrased): “…if our infrastructure isn’t blowing up, it’s pretty boring”.

He points out that infrastructure is important, but not that interesting. The whole episode is 20 minutes long; the first 17 minutes are a bit of a rant about the state of US infrastructure funding.

Jump ahead to 17:15 where he points out that we’ll only pay attention if someone makes a blockbuster film about it. The fake movie trailer is hilarious, with Edward Norton, Vincent D’Onofrio, and others, effectively pointing out that what makes the “movie” so great is that ‘nothing happens’.

“If anything exciting happens, we’ve done it wrong”.

For many years I’ve been involved in risk management, a field where, particularly when applied to operational safety, the ultimate goal is to have ‘nothing happen’.

In June last year, I attended an Engineers Australia seminar on risk management, put on by the IChemE chapter and presented by Dr Maureen Hassall, an Associate Professor and project leader of UQ R!SK, a leading-edge initiative that crosses the fields of industrial risk and human factors.

The seminar was a very good summary of the history of risk management, and of the challenges ahead for the practice.

We know that engineering is used to solve complex problems that are imperfectly defined and rely on limited information. Because of this, we rely on systems thinking, judgement, and decisions based on trade-offs to optimise operations and to keep them safe.

It is the ultimate in managing uncertainty. It can be argued that uncertainty has changed over time.

The original industrial revolution — mechanical

The first industrial revolution began in the mid-1700s in England and later in other countries, characterised by the replacement of hand tools with power-driven machines, and by the concentration of industry in large establishments.

During this revolution, which ran through to the 1970s and 80s, processes and tools were being mechanised. ‘Process upset events’ (accidents) were broadly caused by equipment failures.

The operator was in the room and could smell/hear/touch the rattles, knobs, noise, and grease. Picture the (newly) automated car assembly lines; or, the onsite support given by operators located at hydrocarbon processing plants.

In this environment, risk management techniques such as FMEA (failure mode and effects analysis), fault trees, HAZOP studies, and bowtie diagrams developed — focussed on occupational safety and, by association, on process containment to keep the workers safe.
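For readers who haven’t met these techniques, a fault tree is perhaps the easiest to sketch: basic equipment failures combine through AND/OR gates to give the probability of a top-level upset. Below is a minimal toy example in Python; the events, structure, and probabilities are invented purely for illustration and aren’t from the seminar or any real plant.

```python
# Toy fault-tree evaluation in plain Python. The events, structure, and
# probabilities are invented for illustration; independence is assumed.

def p_and(*probs):
    """AND gate: the output event occurs only if every input occurs."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):
    """OR gate: the output event occurs if any input occurs."""
    none_occur = 1.0
    for p in probs:
        none_occur *= 1.0 - p
    return 1.0 - none_occur

# Basic events (illustrative annual failure probabilities)
inlet_valve_fails_open = 0.02
level_sensor_fails = 0.03
operator_misses_alarm = 0.05

# Detection fails if the sensor fails OR the operator misses the alarm:
# both links in the chain must work for the upset to be caught.
detection_fails = p_or(level_sensor_fails, operator_misses_alarm)

# Top event: a tank overflow needs the valve to fail open AND detection
# to fail at the same time.
tank_overflow = p_and(inlet_valve_fails_open, detection_fails)

print(f"P(tank overflow) = {tank_overflow:.5f}")  # 0.00157
```

The point of the exercise is the structure, not the numbers: the tree makes explicit which combinations of failures have to line up before anything exciting happens.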

The next revolution — digital

The next revolution, from the 1990s to the present, is digital. Now it can be said that faults are broadly caused by humans rather than by machines, because machines and process tools have been mechanised and then automated.

There is now remote supervisory control, and it’s based on electronic, touch-screen interfaces.

It’s a very different operator in this environment: there is nothing to touch or smell or hear — so even the personality, and the mindset, of today’s operators is different to those from the 70s and 80s.

And humans are being forced to fit the system. The past two decades have seen the rise of risk management tools such as Behaviour-Based Safety (BBS), the Swiss-cheese model, AcciMap, and STAMP (the System-Theoretic Accident Model and Processes).

The observation here is that we now have, and will increasingly have, workers with little experiential learning — workers who have never smelt, heard, or seen the machine that they are managing.

Learning is now at a computer screen instead of ‘in the field’ for the most part.

The future revolution

The next revolution will be even more distant from the action. It will include more remote control and 3D printing — and, of course, machine learning, artificial intelligence, and robots.

But no matter how much of that is adopted into society, keep in mind that a person starts that process, and somebody’s personality will infuse the learning. No program is written without human involvement. At this point, even machine learning is only as good as the code that is written for it.

A new combination of factors is now at play for risk management.

There are very few novel risks. Most failures are associated with failure to implement, or failure to maintain, suitable controls (treatments) that are known and recognised. Before automation and screen-based management, failures were attributable to individual or linear activities. We are now dealing with ever more systematised and complex situations.

Risk management is successful if these are achieved:

  1. Failures of expected events are prevented,
  2. Effective controls are promoted and supported to succeed,
  3. Consequences are minimised should a failure occur, and
  4. Should an incident occur, those responsible can honestly and confidently say they tried everything possible to prevent it or to minimise the consequences.

It is easy to focus — quite deeply, in fact — on the analysis and numerical side, applying numbers and statistics to likelihoods.

But the area that needs more attention, especially now that risks are so complex, is on treating the risks, not just analysing them and assigning numbers.

Risk management processes are very good at establishing the context and assessing the risks. The treatment of those risks, though, is fraught with the realities of skinny budgets, overworked people, and the general difficulty of identifying effective risk controls — and then checking up on them for ongoing effectiveness.
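To make that last point concrete, here is a minimal sketch of the bookkeeping that treatment demands. It is written in Python with entirely made-up risks, controls, and dates; the likelihood-times-consequence score on 1–5 scales is a common shortcut, not anyone’s official method. Note how the scoring is a single line, while the real work is the unglamorous loop that flags controls overdue for an effectiveness check.

```python
from datetime import date

# Entirely made-up example data: risk names, scores, controls, and
# verification dates are illustrative only.
risks = [
    {"risk": "Pipeline corrosion leak", "likelihood": 3, "consequence": 5,
     "control": "Cathodic protection survey", "last_verified": date(2019, 2, 1)},
    {"risk": "SCADA misconfiguration", "likelihood": 2, "consequence": 4,
     "control": "Change-management review", "last_verified": date(2020, 9, 15)},
]

REVIEW_INTERVAL_DAYS = 365        # assumed annual verification cycle
today = date(2020, 11, 12)        # pinned to the date of this post

for r in risks:
    # The "easy" analysis step: likelihood x consequence on 1-5 scales.
    score = r["likelihood"] * r["consequence"]
    # The "hard" treatment step: is the control's effectiveness check overdue?
    overdue = (today - r["last_verified"]).days > REVIEW_INTERVAL_DAYS
    status = "OVERDUE for effectiveness check" if overdue else "verified recently"
    print(f'{r["risk"]}: score {score}; control "{r["control"]}" is {status}')
```

The scoring line is the part we tend to polish; the overdue flag is the part that keeps nothing happening.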

Anyway, as you were. Let’s go back to our roles in making sure that nothing happens.

(originally posted on my blog, pipelinesoz2 at wordpress.)
