The errors in ‘human error’
Typically, when something goes amiss on the rig floor, fingers point straightaway at the crew. On its face, the reasoning holds: Don the Driller failed miserably to comply with the company’s finely tuned procedures, thus human error is to blame and case closed.
Instantly leaping to that conclusion and declaring the incident resolved is a wasted and, conceivably, dangerous exercise, Andrew Dingee, BP’s global wells learning advisor, told an SPE HSE study group last month in Houston. “To build a safety system, you must understand why an incident occurred. The why is not just pilot error. There’s a lot more at play here, and the key is to dig deeper as to why the mistake was made,” said the safety specialist, who transitioned from the aviation sector.
A pilot and retired aviation instructor for the U.S. Marine Corps, Dingee spent most of his post-military career analyzing and developing safety management systems for commercial airlines. He transferred the lessons learned dissecting airline accidents to the oil field in 2010 and, before joining BP, worked with companies to develop and audit federally mandated Safety and Environmental Management Systems (SEMS).
He said many of the safety tools used in the airline industry, such as the pilots’ pre-flight checklist, are equally applicable to the oil field, provided they are part of a flexible management system that changes as conditions warrant. “We’ve done a lot of work lately on checklists, and I’ve seen a real difference, providing it is the right checklist and is completed the right way. For our crews on both offshore and land rigs, it can be a real key to reducing NPT (non-productive time),” he said.
However, Dingee said companies and employees alike need to get away from what he calls “plan bias.” “Why do we, as humans, become slaves to a plan and continue to execute that plan, regardless of the changes around you? We see the same thing in our business, but when you’re on an offshore rig, sticking strictly to the plan can be a negative.”
The human equation. Dingee, chairman of the SPE Human Factors Technical Section, has focused extensively on the impact of human elements in workplace mistakes, and with the publication of “Hangar Talk,” literally wrote the book on the subject. His message is that the human element is the most flexible, adaptable part of a company’s management system, but also the one most vulnerable to influences that hinder performance.
Although up to 80% of accidents are correctly labeled “human error,” effective safety management, he said, requires that the label be accompanied by exactly why the incident occurred and how it can be fixed. Preventing future accidents requires an understanding of all the underlying factors and conditions that affect human performance, be it fatigue, inadequate equipment or training, badly designed procedures, or poorly laid out checklists and manuals. He singled out one commercial airline that, after concluding that even its most experienced pilots made, on average, four errors per flight, put corrective systems in place.
Determining the “why” requires digging deeper and fully understanding the levels at which employees perform—be it a skill-, rule- or knowledge-based scenario, he said. In the first, an employee essentially is performing on automatic and is not fully engaged with the surrounding environment. “In reality, we train our crews to be on automatic, because then they are actually working to standards and making fewer errors. This actually makes a good place to work, but you can’t work on automatic all the time.”
“When you’re working in a non-routine or rule-based scenario, the company makes the decisions for us on how to perform,” he said. “You start flying on automatic, you take an assessment of your environment, you make a decision based on the rules you’ve been trained to, you execute, and then you slide back into automatic. So, you’re going back and forth, and that is when we make our best decisions.”
Dingee said understanding the human element is especially imperative when training inexperienced hands. “When you’re training, you’re learning, and learning, first of all, is hard. You make mistakes. You’re not operating on automatic behavior. You’re operating on conscious behavior, so you must go back and forth.”
In yet another airline analogy, Dingee said an example of knowledge-based performance is Captain Chesley “Sully” Sullenberger’s widely celebrated ditching of US Airways Flight 1549 in New York’s Hudson River in January 2009, when the jetliner was disabled by striking a flock of geese shortly after takeoff. No lives were lost, which Dingee attributed largely to an experienced pilot foreseeing a problem-in-waiting and planning accordingly. Having frequently flown that route, Sullenberger feared the heavy flocks of geese regularly traversing that particular airspace and developed a contingency. “He was not operating from any manual. He developed a solution in a knowledge-based scenario and, once the decision was made, he and the crew were on automatic behavior.”
Dingee said one of the problems in many companies’ safety management systems is that potentially valuable data, such as Stop-the-Work cards and incident reports, often are given only cursory examination before being filed away. “We have become data-rich, but information-poor. It’s very important that we correctly mine the data that we receive. Your next accident very likely is buried somewhere within an incident report,” he said.
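The point about mining filed-away reports can be made concrete with a minimal sketch. This is a hypothetical illustration, not anything Dingee or BP describes: the record fields and the `likely_next_accident` helper are invented for the example, and real analysis would draw on an actual incident database rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical incident records; in practice these would come from
# Stop-the-Work cards and incident-report databases.
incidents = [
    {"category": "dropped object", "site": "Rig A"},
    {"category": "slip/trip", "site": "Rig B"},
    {"category": "dropped object", "site": "Rig C"},
    {"category": "line of fire", "site": "Rig A"},
    {"category": "dropped object", "site": "Rig B"},
]

def likely_next_accident(records):
    """Return the most frequently recurring incident category.

    A crude proxy for 'where will your next accident happen?':
    the pattern that keeps repeating in the reports is the one
    most likely to recur if nothing changes.
    """
    counts = Counter(r["category"] for r in records)
    return counts.most_common(1)[0]  # (category, occurrence count)

print(likely_next_accident(incidents))  # → ('dropped object', 3)
```

Even this trivial tally turns filed-away paperwork into an answer to the question Dingee poses below: it names the recurring pattern, which is where preventive effort should go first.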
“I once asked a group of leaders, ‘Where’s your next accident going to happen?’ They couldn’t answer, because nobody had asked that question before. If you can’t recognize where you think your next accident will occur, how can you put something in place to prevent it?”