Natech Risk Assessment and Management – Reducing the risk of Natural-hazard Impact on Hazardous Installations

Authors: Elisabeth Krausmann; Ana Maria Cruz; Ernesto Salzano
A problem that is currently on the rise is that of technological disasters triggered by natural disasters or natural events. Examples include frost, heat, drought, rainfall, floods, earthquakes (whether or not accompanied by tsunamis), lightning strikes, etc.
In an extensive introduction, the authors give a number of examples of what these natural phenomena can do. And that is quite something: ranging from power cuts and pipe breaks to the destruction of storage tanks and explosions. These in turn can force the evacuation of a company's employees and of the surrounding area, cause casualties, and lead to enormous economic damage and the stagnation of (part of) the economic activity in the affected region. It is therefore no surprise that people want to arm themselves against the even worse domino effects of such events. To this end, these so-called Natech events are studied, to make the world a little safer.
Unfortunately, no two Natech disasters are the same. Although performing risk assessments brings progress in this regard, according to the authors it remains (for the time being) impossible to compare the results of different risk assessments, which makes prioritization difficult. Still, there are some standard works that the authors regularly refer to, among many other references in their detailed literature lists: the so-called Purple, Red, Green and Yellow Books from TNO. Perhaps more important for their discussion, however, are the software packages RAPID-N, PANR, RISKCURVES and ARIPAR-GIS, and the methods of TRAS 310 and TRAS 320. These contain qualitative, semi-quantitative and quantitative risk assessment modules.
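To make the quantitative side concrete, here is a minimal sketch (my own illustration, not taken from the book or from the RAPID-N or ARIPAR-GIS tools) of how a Natech risk figure is typically composed: the frequency of the natural hazard, the conditional probability that the installation is damaged, and the conditional probability that the damage escalates into a major release. All numbers are invented placeholders.

```python
# Toy Natech risk composition (illustrative numbers only, not from the book).
# risk = hazard frequency * P(damage | hazard) * P(major consequence | damage)

def natech_event_frequency(hazard_freq_per_year: float,
                           p_damage_given_hazard: float,
                           p_consequence_given_damage: float) -> float:
    """Expected frequency (per year) of a major Natech consequence."""
    return hazard_freq_per_year * p_damage_given_hazard * p_consequence_given_damage

# Example: a flood with a 1-in-100-year return period, a 20% chance of
# damaging an atmospheric storage tank, and a 30% chance that the damage
# leads to a significant loss of containment.
freq = natech_event_frequency(1 / 100, 0.20, 0.30)
print(f"Estimated major-release frequency: {freq:.1e} per year")  # 6.0e-04
```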
After a number of chapters in which RAPID-N, ARIPAR-GIS and RISKCURVES are illustrated with discussion of the results, two chapters deal respectively with structural (technical) measures and organizational (more administrative) measures.
An innovative framework, which the authors consider worthwhile, was proposed by the IRGC and consists of the following five elements (a small illustrative sketch follows the list):

  1. Risk pre-assessment: early warning and "framing" of the risk to give the problem a structured definition: how it is framed by the various stakeholders and interested parties, and how it can best be dealt with.
  2. Risk assessment: combining a scientific risk assessment (of the hazard and its probability) with a systematic 'concern' assessment (of public concerns and perceptions) to form the knowledge base for subsequent decisions.
  3. Characterization and evaluation: using scientific data and a thorough understanding of the societal values affected by the risk to determine whether the risk is acceptable, tolerable (with or without mitigation as a requirement) or intolerable (unacceptable).
  4. Risk management: all actions and remedies that are necessary to avoid, reduce, share or retain a risk.
  5. Risk communication: how stakeholders, interested parties and society at large understand the risk and participate in the risk governance process.
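To show how the five elements chain together, here is a minimal sketch (my own, purely illustrative, and not part of the IRGC publication) that encodes the cycle as plain Python, with the evaluation step deciding between 'acceptable', 'tolerable' and 'intolerable'; thresholds and scores are invented.

```python
from enum import Enum

class Verdict(Enum):
    ACCEPTABLE = "acceptable"      # no further treatment required
    TOLERABLE = "tolerable"        # only acceptable with risk-reduction measures
    INTOLERABLE = "intolerable"    # activity should not proceed as is

def govern_risk(hazard: str, likelihood: float, concern_level: float) -> Verdict:
    # 1. Pre-assessment: frame the problem (here just a label).
    framing = f"Risk framing for: {hazard}"

    # 2. Risk + concern assessment: combine a technical estimate with public concern.
    combined_score = likelihood * (1.0 + concern_level)

    # 3. Characterization and evaluation: judge tolerability (thresholds invented).
    if combined_score < 0.01:
        verdict = Verdict.ACCEPTABLE
    elif combined_score < 0.1:
        verdict = Verdict.TOLERABLE
    else:
        verdict = Verdict.INTOLERABLE

    # 4. Risk management and 5. risk communication would act on the verdict;
    # here we simply report it.
    print(framing, "->", verdict.value)
    return verdict

govern_risk("flood-triggered tank failure", likelihood=0.02, concern_level=0.8)
```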

The work is, above all, "compulsory" reading material for continuity managers and risk managers of companies with large industrial installations that are important for the economic engine of a region or country. It requires a healthy dose of common sense, but also sufficient knowledge of process engineering to follow the storylines. In addition, an open view of a wide range of sciences, technical and non-technical, and of society is necessary to correctly assess the importance of this work.

Adaptive Business Continuity – A New Approach

Authors: David Lindstedt; Mark Armor
Adaptive Business Continuity wants to blow a fresh wind through the BCM world. To this end, the authors throw, among other things, the BIA and the risk assessment overboard. When I read their arguments for doing so, it is not clear to me why. Take, for example, the authors' argument in Appendix B (the manifesto), section B.5.1 on page 154:
“Adaptive BC discourages a sequential approach. Continuous value, coupled with the core mission of continuous improvements in response and recovery capabilities, leads to the adoption of a non-linear approach that adjusts to ongoing feedback from all participants. … “.
The manifesto at https://www.adaptivebcp.org/manifesto.php also states that things are becoming increasingly complex:
“How long an organization can survive without a particular service almost always depends on an integrated combination of factors too numerous to identify and too complex to quantify. Moreover, the exact timing and actual impact of a disaster on a service will dictate different judgments about applicable recovery strategies, priorities, and time. Definitive changes to a service’s holistic “ecosystem” cannot be foreknown.”

Nobody can deny that problems are becoming increasingly complex. What can be denied, however, is that linear problems no longer occur. That is why I would like to look back at the picture of the full spectrum of possible disasters, which I tried to discuss in a previous blog, at http://www.emannuel.eu/en/artikels/resilience-strikt-genomen-disaster-management-red-ants-gray-rhinos-black-swans-de-verhouding-van-bcm-risico-management-rm-en-crisis-management-cm/ in question 6:

However, when I take the notions of linear issues, difficult or complicated issues and complex issues from the article at http://www.emannuel.eu/en/artikels/herhaalt-de-geschiedenis-zich-of-niet/# then I arrive at the following figure:

I believe that traditional BCM has a linear approach. This is extremely suitable for linear issues and systems with a (well) known impact, and for linearizable systems from the complicated part of the spectrum. The question that remains is whether the authors' arguments are strong enough to set this linear approach aside. I deny this: the objectives of Adaptive BC are not defined clearly enough to throw traditional BCM overboard here. After all, no real arguments are given. For the BIA, the authors simply refer to an article by Rainer Hübert without reproducing its line of thought.
Mutatis mutandis, I think the argument to drop the risk assessment is a non-argument.
After all, they write:
“Administering a proper risk assessment and implementing the resulting action items may necessitate deep knowledge of actuarial tables, information security, insurance and fraud, state and federal regulations, seismological and meteorological data, and the law. Typical continuity practitioners do not possess such deep knowledge; those who do are most likely specifically trained as risk managers. Adaptive BC practitioners as such should eliminate the risk assessment from their scope of responsibility. ”
I do not agree with this: a BC manager does not need to be an expert in all these matters. Rather, he or she must be able to find the right experts within the company, gain their trust, and facilitate and coach them in order to achieve results. Then making a risk assessment is neither hopeless nor useless. I leave it to the readers to judge the many other claims made to the detriment of traditional BCM: it is clear that the authors are trying to declare traditional BCM dead for reasons they articulate only vaguely.
Has Adaptive BC lost its value for me? No, not at all, because it offers an answer to issues from a different part of the disaster spectrum: possibly, provided it is given a solid foundation, a solution can be distilled here for complicated and complex systems, where I think traditional BCM has a harder time providing answers, as follows:

So in my opinion, the time of traditional BCM is not over as long as there are linear issues. In addition, Adaptive BC will have to develop further into a more mature practice that can claim its own part of the disaster spectrum, together with monitoring, scenario building and future scanning. But alongside traditional BCM, not instead of it.
So there is still work to be done.

X-Events. The Collapse of Everything

Author: John Casti

The book is written “For the connoisseurs of unknown unknowns” and is divided into three parts.

The first part – Why ‘normal’ is not normal anymore – talks about complexity theory. According to complexity theory, each issue has two (or more) sides; for example, the service delivery of an organization has an organization side and a customer side, each with a certain degree of complexity. Without going into formal definitions of complexity, and going by gut feeling, we can take the delivery of electricity in the USA as an obvious example. The demand side is very complex: different quantities, different times, different needs that have grown throughout history into a very complex system. The supply side, however, is an outdated infrastructure with a low complexity compared with the current state of technology. Between the two complexity levels there is a gap, which according to complexity theory is a source of vulnerabilities and can trigger an extreme event that corrects the system, for example a blackout. This example is a simple and obvious illustration of the theory. The best solution for the continuity of both the customer side and the supplier side is, in this case, to increase the complexity of the supplier side until it equals that of the customer side; in other words, a technical upgrade.
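The book deliberately avoids formulas, but as a toy illustration (my own, under the assumption that Shannon entropy is an acceptable stand-in for "complexity"), one could compute a complexity score for the demand side and the supply side and treat the difference as the gap that signals vulnerability:

```python
import math
from collections import Counter

def entropy(states: list[str]) -> float:
    """Shannon entropy (in bits) of the observed distribution of states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical observations: many distinct demand patterns, few supply modes.
demand_states = ["peak", "off-peak", "industrial", "household", "EV-charging",
                 "heatwave", "cold-snap", "peak", "industrial", "household"]
supply_states = ["baseload", "baseload", "baseload", "peaker", "baseload",
                 "baseload", "peaker", "baseload", "baseload", "baseload"]

gap = entropy(demand_states) - entropy(supply_states)
print(f"Complexity gap (demand - supply): {gap:.2f} bits")
# A large positive gap would, in Casti's terms, flag a system primed
# for a corrective extreme event such as a blackout.
```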

The first part ends with seven complexity principles:
– Complexity: Main characteristic
– Emergence: The whole is not equal to the sum of the parts
– Red Queen hypothesis: Evolve to survive
– No free lunch (‘only the sun rises for nothing’): Trade-off between efficiency and resilience
– Goldilocks principle: The degrees of freedom are ‘just right’
– Incompleteness: Logic alone is not enough
– Butterfly effect: Small changes can have huge consequences
– The law of requisite variety (arguably the most important one): Only complexity can control complexity

Part two is a collection of 11 chapters, each dealing with a separate case, showing each time where the complexity gap lies and how a disaster can arise from it.

In part three, the author argues that the width of the gap, i.e. the excess of complexity, can be seen as a new way of quantifying the risk of an extreme event, without, however, really going into formulas.

Finally, the author identifies three principles with which the gap can be narrowed or prevented.

– The first principle is that systems and people must be as adaptive as possible. Because the future is unpredictable but increasingly dangerous, it is wise to build infrastructures with many degrees of freedom, so that you can counter, or take advantage of, whatever you encounter.

– The second principle, resilience, is closely related to the first, adaptivity. With it you can not only absorb blows but also take advantage of them.

– The third principle is redundancy, a proven method in the safety sciences to keep a system or infrastructure running when faced with foreseeable and unforeseeable shocks. In essence, it comes down to having extra capacity available when, for example, a defect occurs.
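As a small aside (my own illustration, not from the book), the classical way to see the effect of redundancy is that independent spare capacity multiplies away the failure probability:

```python
def availability_with_redundancy(component_availability: float, n_parallel: int) -> float:
    """Availability of n independent, parallel components (any one is enough)."""
    return 1.0 - (1.0 - component_availability) ** n_parallel

# A single pump that is up 95% of the time, versus two or three in parallel.
for n in (1, 2, 3):
    print(n, "->", round(availability_with_redundancy(0.95, n), 6))
# 1 -> 0.95, 2 -> 0.9975, 3 -> 0.999875
```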

Exponential Organizations

Authors: Salim Ismail; Michael S. Malone; Yuri Van Geest

Humanity has been concerned with productivity since time immemorial. Production provided people with scarce resources that were, and are, worth a lot precisely because of their scarcity. In the last decade the Internet has come to the forefront, and with it the concepts of "creative destruction" and "disruptive technology". Fifteen years ago, big companies usually regarded the Internet as "a passing phenomenon". Nowadays, once exponential organizations are explained to them, they realize that the Internet is a phenomenon that is only the beginning.

But what are they, those “Exponential organizations”?

They are usually small organizations that use the latest technology to come up with new solutions for market demands, even where solutions already exist. With the new application they conquer the market in a very short time, in an exponential way. Examples include smartphones and tablets, which have caused serious problems for the photography and print newspaper industries.

The "nice thing" about this phenomenon is that, because technology has become a commodity, an adolescent in a garage can make an invention that turns the world of a gigantic company with thousands of employees upside down in a very short time.

That is why it is important that all organizations transform themselves into exponential organizations and disrupt themselves. Because if they do not do it themselves, someone else will. Hence disruption as a means of risk management and business continuity.

In the book, which is the result of a study by SU (Singularity University), the authors give a number of points of interest, summarized by the mnemonics MTP (Massive Transformative Purpose), SCALE (the external attributes) and IDEAS (the internal attributes).

Very important is that, in contrast to large monoliths, the small ExOs are organized in a very lean and mean way. The book does not go very deep into this, but large monoliths can also reap these advantages by collaborating with existing ExOs or by creating ExOs at the edges of their own organization.

Good Practice Guidelines – 2018 Edition – The Global guide to good practice in business continuity

Published by The Business Continuity Institute

By its own account, this edition of the GPG differs in numerous ways from the 2013 edition. Some of the differences that stayed with me are:

–    More collaboration between the BCM practitioners and colleagues in other management disciplines.
–    The supply chain has been integrated more into the story.
–    More links are made to ISO standards.
–    Risk assessment has gained importance.

There are other noticeable changes:

–    Throughout the work, the link is regularly made to information security, but without referring to the ISO 27K series.
–    The BIA is still a 4-tuple, but its mandatory character has been relaxed to "use what you need".
–    A distinction has been made between crisis management and incident management.
–    There is a better explanation of strategic, tactical and operational plans in times of crisis, albeit without mentioning that the choice also depends on what one actually needs; this part remains sharply separated in theory.
–    Here and there, there is a fine table with further explanation of what is meant, such as the table with the specific core competences and management skills required of the person responsible for BCM, divided according to the six Professional Practices.

In the book, extensive attention is given to PP6, ‘Validation’. Exercising and validating the operation of the organization's BC programme is very important as the keystone of the cycle, feeding into its next iteration.

In summary, the book is important for beginners in BCM, but also serves advanced practitioners as a reference work.

What I personally regret is the lack of a bibliography per chapter; for further reading, interested readers are somewhat left to their own devices. There is, however, the website of The Business Continuity Institute (www.thebci.org), where more information can be found.