Institution of Mechanical Engineers—Process Safety, London

The UK-based institution hears from process safety specialists on avoiding major disasters like the UK’s Buncefield tank farm fire. Models such as James Reason’s ‘Swiss cheese’ picture of layered defenses, guidelines and best practices are part of the solution, along with constant vigilance and training for all.

The one-day event, ‘Process Safety, are you doing enough to avoid major disasters?’, held last month at the London-based Institution of Mechanical Engineers, kicked off with a keynote from Ian Travers, who heads up the Chemical Industries Strategy Unit of the UK’s Health and Safety Executive. For Travers, process safety should be in your blood. If you lose control of a process, what happens next may be down to sheer luck. As luck would have it, there were no fatalities at Buncefield (www.oilit.com/links/1111_10), but this major incident shaped thinking around process safety. One outcome is the Chemical Industries Association’s best practice guide to process safety leadership (www.oilit.com/links/1111_11). Why leadership? Because the ‘C-suite’ needs to understand risk and ensure that process safety is managed in a systematic way. The Buncefield report is essential reading and explains the management system failures. Companies should also use the UKPIA tools for self-assessment (www.oilit.com/links/1111_12). Process safety is shorthand for how major hazard risks are controlled. Root causes are common across all organizations and the system is only as good as its weakest link. Process safety management determines what hazards are present and their potential impact on a plant. The Center for Chemical Process Safety’s tools should be used (www.oilit.com/links/1111_13).

However, just when everything seems to be OK, things start to go wrong. So you need to constantly monitor and adapt, recognizing that people, not the kit in the plant, are the weakest link. Human error occurs throughout the organization, starting at the top. Senior executives often don’t understand risk. They place absolute trust in the system design and are shocked and upset when things go wrong. Managers are more receptive to messages about ‘success’ and focus too much on outputs, thinking that ‘somebody else’ is in charge of safety. Front line staff suffer from complacency and fail to believe in the consequences even when these are spelled out. They give priority to production and tend to deviate from agreed procedures.

Around 25% of plant in the UK is in an ‘unacceptable’ condition and 50% needs improvement. Are we over-egging it? It is up to the regulator to decide, but we are not where we want to be. Management needs to act on aging plant. Travers also observed that ‘people are fixated on near miss reporting.’ More focus is required on the overall challenges to safety. Any adverse outcome needs to be captured, for instance the repeated unintentional overfilling of a tank.

A common theme in several talks was the work of psychologist James Reason (www.oilit.com/links/1111_14), whose ‘Swiss cheese’ model of accidents visualizes multiple lines of defense comprising alarms, physical barriers, automatic shutdowns, operators and procedures. These protect assets and the environment from hazards, but each has weaknesses, the holes in the cheese. Accidents occur when the holes momentarily line up, opening a trajectory of ‘accident opportunity.’ Safety procedures aim to identify and eliminate the holes and/or to eliminate links in the accident chain.
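As an aside, the model’s arithmetic is easy to sketch under the simplifying, and in practice frequently violated, assumption that barriers fail independently. The barrier names and probabilities in the following Python snippet are hypothetical, chosen only to show why losing a single layer matters:

```python
# Toy reading of the 'Swiss cheese' model: if each layer of defense is
# assumed to fail independently, the chance of a hazard passing every
# layer is the product of the per-layer failure probabilities.
from math import prod

# Hypothetical per-layer probabilities that each 'hole' is open on demand.
barriers = {
    "alarm": 0.05,
    "physical_barrier": 0.02,
    "automatic_shutdown": 0.01,
    "operator_response": 0.10,
    "procedure": 0.08,
}

def breach_probability(layers):
    """Probability that every independent layer fails at once."""
    return prod(layers.values())

print(f"All layers in place:  {breach_probability(barriers):.1e}")

# Removing a single slice of cheese (here, the shutdown is unavailable)
# raises the breach probability by two orders of magnitude.
degraded = dict(barriers, automatic_shutdown=1.0)
print(f"Shutdown unavailable: {breach_probability(degraded):.1e}")
```

The caveat is the point: in real incidents a common cause, such as poor maintenance or a weak safety culture, can open several holes at once, which is exactly the lining-up the model warns against.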

Phil Scott of the Chemical Industries Association (www.oilit.com/links/1111_15) believes that we need to ‘instill a chronic sense of unease in managers.’ While there is no silver bullet for enhancing process safety, there are common causes across industries, hence the cross-industry Process Safety Forum with representation from chemicals, nuclear and oil and gas. Scott says underinvestment in safety is a false economy. Companies need to go beyond compliance and seek out problems: pipework and non-metallic equipment are often overlooked, and tank supports, bridges and bunds need attention. If you have to, shut down and inspect; there are no short cuts. A culture of ‘look, listen and report’ is needed. Check out the CIA guide at www.oilit.com/links/1111_16. Process safety training has been a bit ‘ad hoc’ in the past. Scott also emphasized that management walkabouts are a good thing, but observed that there was one on Macondo just before the accident (www.oilit.com/links/1111_17).

Guy Gratton (head of the UK’s Facility for Airborne Atmospheric Measurements) observed that while aviation approaches safety in a similar way to other industries, there are areas where it leads the field. Aviation benefits from transparency in worldwide accident reporting and cause analysis. Most accidents (64%) are caused by ‘human factors.’ These are not just ‘pilot errors.’ Aviation prefers to look at the interfaces between software, hardware, environment and ‘liveware.’ Air accident investigators make safety recommendations rather than apportion blame. The ‘no blame’ approach encourages participants to share information, but complicates insurance and may be at odds with mainstream legal culture. This was addressed in the 1952 Rome Convention, which holds that the operator carries the can, irrespective of blame. The most interesting concept to have emerged from air safety training is crew resource management (CRM). Several accidents can be attributed to poor communications between crew members. In 1989, a British Midland 737 crashed when the pilot shut down the wrong engine, even though passengers had told the stewardess as much. The problem was that the established pecking order meant that stewardesses were afraid to tell the captain he was doing the wrong thing. The point is that ‘everybody here has valid input’ and the secret is collaboration. CRM training now happens on a regular basis, includes crew, ground staff and management, and teaches a young, smart copilot how to tell a grumpy old captain, ‘you are about to kill us all.’ More on CRM from www.oilit.com/links/1111_18.

Paul Taylor (Network Rail, UK) observed that despite a wealth of safety procedures and equipment, the same accidents happen over and over again. All rail risks are known somewhere in the business, either consciously or unconsciously. One problem with current safety procedures is that they make for proliferating documentation and sometimes for ludicrous control measures. Network Rail has problems with maintenance workers driving while fatigued. But this cannot be countered by entreaties not to drive when tired. You need to stop putting workers in a position where they may be at risk, and avoid measures that make safety managers feel good but will not stop people driving home in the middle of the night when tired.

Phil Graham described how Linde Group’s major hazards review program (MHRP) set out to raise process safety at its 2,000 sites around the world. Senior managers tend to think that a major disaster could not happen. Legislation is all very well, but it is not enough to discharge corporate responsibilities. MHRP is a consistent process of audits and local accountability that ensures on- and off-site risks are managed to acceptable safety levels. A staged process starts with site data collection and moves through hazard and consequence evaluation, site categorization, risk mitigation and compliance to final site certification. Some Linde plants have been shut down because getting risk to an acceptable level would have cost more than the plant was worth. Plants may initially be located away from populated areas, but as towns spread, risk rises. Process safety is now on the Linde board of directors’ agenda. A process safety dashboard is under development to visualize where risks exist and to drill down for more information.
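No implementation details of the dashboard were given. Purely as a hypothetical illustration (site names, stages and risk categories below are invented), the kind of stage tracking and drill-down such a tool might offer could look like this:

```python
# Hypothetical sketch of a process safety dashboard: count sites per
# review stage at the top level, then drill down to individual sites
# in a given risk category. Not a description of Linde's actual tool.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    stage: str           # current MHRP stage (labels mirror those in the text)
    risk_category: str   # e.g. 'low', 'medium', 'high' (invented scale)

def stage_summary(sites):
    """Top-level view: how many sites sit at each stage of the review."""
    return Counter(site.stage for site in sites)

def drill_down(sites, risk_category):
    """Drill down to the individual sites carrying a given risk level."""
    return [site for site in sites if site.risk_category == risk_category]

sites = [
    Site("Plant A", "risk mitigation", "high"),
    Site("Plant B", "certification", "low"),
    Site("Plant C", "site categorization", "medium"),
]
print(stage_summary(sites))
print([site.name for site in drill_down(sites, "high")])
```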

Mark Harrison (SABIC) returned to the theme of aging plant. Plants degrade physically, but equally as knowledge is lost and through ‘creeping change.’ Very small risks at design time can be amplified over time through vibration-induced stresses. Hazard reviews often assume a fit-for-purpose asset as their starting point. Programs should address the removal and modification of components vulnerable to vibration.

Graeme Ellis outlined how ABB has implemented safety performance metrics. These need to go beyond injury rates to embrace both leading and lagging indicators. You need to focus on high-risk areas with a ‘manageable’ number of indicators. Indicators should be ‘SMART,’ i.e. sufficient, measurable, accurate, reliable and targeted. ABB has screened these down to 8-10 indicators.
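As a hypothetical sketch of the screening idea (the indicator names, scoring scheme and cut-off below are invented, not ABB’s), a mixed set of leading and lagging indicators might be whittled down like this:

```python
# Illustrative only: rank candidate indicators by how well they cover
# high-risk areas and keep a 'manageable' number of them.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    leading: bool        # True = leading (predictive), False = lagging (outcome)
    risk_relevance: int  # 1 (low) to 5 (high): coverage of high-risk areas

candidates = [
    Indicator("Overdue safety-critical maintenance", leading=True, risk_relevance=5),
    Indicator("Alarm floods per shift", leading=True, risk_relevance=4),
    Indicator("Loss of primary containment events", leading=False, risk_relevance=5),
    Indicator("Lost-time injury rate", leading=False, risk_relevance=2),
]

def screen(indicators, keep=10):
    """Keep a manageable number of indicators, favoring high-risk areas."""
    ranked = sorted(indicators, key=lambda i: i.risk_relevance, reverse=True)
    return ranked[:keep]

for ind in screen(candidates, keep=3):
    kind = "leading" if ind.leading else "lagging"
    print(f"{ind.name} ({kind})")
```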

John Armstrong (E.ON) described the ‘Abilene paradox’ (www.oilit.com/links/1111_50), a kind of groupthink whereby a consensus position is reached on something that nobody actually wants to do. The consequence is that bad, risk-prone decisions can be taken because nobody involved in the process has a strong opinion. Examples include the RBS/ABN Amro deal, eventually a €72 billion write-down, which had been evaluated at 18 management meetings with nobody questioning it. Morton Thiokol’s role in the Challenger disaster was another case where self-censorship led to the wrong decision (www.oilit.com/links/1111_51). Jenny Clucas’s company, Cogent, is working to rectify executives’ lack of process safety understanding with a dedicated training program (www.oilit.com/links/1111_52). More from IMechE at www.oilit.com/links/1111_53.

© Oil IT Journal - all rights reserved.