
Keynote speech to the Air Line Pilots Association Air Safety Forum 2015

Kathy Fox
Chair, Transportation Safety Board of Canada
Washington, DC, 23 July 2015

Check against delivery.

Good morning. Thank you very much for that kind introduction and for the invitation to speak to you today. It is a real pleasure to be here.

Kathy Fox addressing the Air Line Pilots Association Air Safety Forum 2015 (Source: Air Line Pilots Association)

I want to talk about evolution today—specifically, the way in which our work has evolved, and how we think about accident causation.

It used to be, for instance, that the focus of accident investigation was on mechanical breakdowns. Then, as technology improved, investigators started looking at the contribution of crew behaviour and human-performance limitations. Nonetheless, people still thought things were “safe” so long as everyone followed standard operating procedures. Don't break any rules or regulations, went the thinking; make sure the equipment isn't going to fail; and above all, pay attention to what you're doing and don't make any “stupid” mistakes.

That line of thinking held for quite a while. In fact, even today, in the immediate aftermath of an accident, people on the street and in the media still think accidents begin and end with the person or people in the cockpit. So they ask if an accident was caused by “mechanical failure” or “human error.” Or they jump to false conclusions and say “Oh, this was caused by someone who did not follow the rules,” as if that were the end of it. Case closed.

But it's not that simple. No accident is ever caused by one person, one factor. And no one wakes up in the morning and says, “I think I'll have an accident today”—not ships' masters, not pilots, not locomotive engineers, not company executives.

That's why our thinking had to evolve. And so it has become critical to look deeper into an accident, to understand why people make the decisions they make. Because if those decisions and those actions made sense to the people involved at the time, they could also make sense to others, in future. In other words, if we focus too much on “pilot error,” we miss out on understanding the context in which the pilots were operating.

Allow me to present an example. It's a Canadian example, but it has global implications.

On August 20, 2011, 12 people died and 3 were seriously injured when a Boeing 737, operated by First Air, struck a hill about 1 nautical mile east of the runway while attempting to land in Resolute Bay, Nunavut. After the TSB published its report, many news outlets focused on the fact that the aircraft did not intercept the runway localizer and instead diverged to the right, where it continued parallel to the localizer, until the tragic final moment.

So was it just “pilot error” as the newspapers said? Of course not. The TSB's investigation concluded that a combination of factors—18 in all—contributed to the accident, only one of which was that an unstable approach was continued to a landing.

Naturally, we asked “why.” And the answer is, because at the time, and in those conditions, continuing made sense to the pilot flying. Yes, both members of the flight crew knew they were off course and rapidly approaching the runway. And yes, the heavy cloud cover meant they were unable to see the ground. Very good reasons to conduct a go-around. But our investigation determined that the pilot flying felt a go-around wasn't necessary because he likely expected that their course would soon re-intercept the localizer—and that once that happened, the situation would improve. Nor were the first officer's suggestions for a go-around compelling enough to alter that mindset.

So, could such a situation happen to other pilots?

That's a rhetorical question, obviously, because we all know unstable approaches get continued all the time. And not just in Canada, either. Most of the time, everything works out just fine. But not always.

Here's a second example, from a report the TSB released only a few weeks ago. It, too, involves marginal weather and an unstable approach.

On December 22, 2012, Perimeter Aviation Flight 993 left Winnipeg, Manitoba, bound for Sanikiluaq, Nunavut, a remote community in eastern Hudson Bay. Although the wind favoured a landing on runway 09, there was no instrument approach procedure for that runway. The crew made one NDB approach to runway 27, followed by two attempted circling manoeuvres in limited visibility for runway 09. Knowing that the weather at their alternate had deteriorated, and that they didn't have the fuel to go elsewhere, the crew elected to attempt a downwind landing on runway 27. The tailwind, however, increased their groundspeed, and they came in too high, too steep, and too fast, sighting the runway later than expected. By the time the captain did decide to reject the landing, it was too late, and the aircraft struck the ground.

The consequences proved tragic. The 2 crew and the 6 adult passengers, secured by their seat belts, suffered injuries ranging from minor to serious. But a lap-held infant, not restrained by any device or seatbelt, was torn from his mother's arms and suffered fatal injuries.

Although the TSB investigation turned up multiple causes and contributing factors—11 in all—we again had to answer the question of why. As in “Why did the pilots continue an unstable approach?”

And what we found were answers that applied to flights well beyond this one.

Now at the TSB we are careful not to assign blame or fault. Because pointing fingers and blaming people doesn't do anything to prevent the next accident. Understanding the operating context does. Identifying the underlying safety deficiencies does. Finding out if those deficiencies could apply elsewhere, to other flights, does. And that's also why we choose our language very carefully. For instance, we strive to use the word “failed” only in an engineering context. A part can fail, sure. But we try very hard not to say, “The crew failed to follow this or that procedure.” Because we know that there is always a context for human decisions. No one, as I said earlier, wakes up in the morning and decides to have an accident.

But all of those reasons why the pilots of First Air 6560 and Perimeter Flight 993 chose not to conduct a go-around earlier… well, they're not exactly unique. International industry research shows that between 3 and 4 percent of all approaches are unstable, and that of these, 97 percent are continued to a landing. In other words, this is a worldwide problem. The Flight Safety Foundation has further research underway to better understand the nature of crew decision-making in such circumstances. The TSB has recommended that Transport Canada require scheduled airlines to monitor and reduce the incidence of unstable approaches that are continued to a landing. TC has opted for a voluntary approach, using an operator's Safety Management System, or SMS, to identify and address this issue. The effectiveness of such a voluntary approach remains to be seen.

So, how do you identify when people are in danger of making risky decisions? Or how do you identify if they've been repeatedly making them and just haven't had anything go wrong yet?

The answer, at least in part, is SMS, which has been in place at many major Canadian airlines for a number of years now. I understand that it is also being introduced for many air carriers here in the United States. The TSB believes that a properly implemented SMS can offer significant safety benefits. However, there are examples that show us where we still have room to improve.

Here's one.

On March 13, 2011, a Boeing 737 was departing Toronto's Lester B. Pearson International Airport with 189 passengers and a crew of 7. During the early-morning take-off run, at about 90 knots indicated airspeed, the autothrottle disengaged after take-off thrust was set. As the aircraft approached the critical engine failure recognition speed, the first officer, who was the pilot flying, noticed an AIRSPEED DISAGREE alert and transferred control of the aircraft to the captain, who then continued the take-off. During the initial climb, at about 400 feet above ground, the aircraft received a stall warning (stick shaker), followed by a flight director command to pitch to a 5° nose-down attitude. The take-off was being conducted in visual conditions, allowing the captain to determine that the flight director commands were erroneous. The captain ignored the flight director commands and maintained a climbing attitude. The crew advised the air traffic controller of a technical problem that required a safe return to Toronto.

Some may consider this “no big deal,” just something that occasionally happens, in this case due to a failure in the pitot-static system. Yes, it resulted in inaccurate airspeed indications, stall warnings, and misleading commands being displayed on the aircraft flight instruments. But the pilots handled it effectively, and nothing serious came of it. There was no damage to the aircraft, nor were there any injuries to those onboard. But what if the take-off had been conducted in instrument meteorological conditions, when the captain could not have so easily determined that the airspeed indicator was unreliable?

Now let's look at this from an SMS perspective, one that is supposed to have proactive processes to identify and mitigate hazards, and reactive processes to learn safety lessons from incidents.

In September 2010, Boeing had issued an advisory to Boeing 737NG operators regarding flight crew and airplane system recognition of, and response to, erroneous main display airspeed situations. In this advisory, Boeing indicated that erroneous airspeed events may compromise the safety of flight, describing the issue as follows: “The rate of occurrence for multi-channel unreliable airspeed events combined with probability of flight crew inability to recognize and/or respond appropriately in a timely manner is not sufficient to ensure that loss of continued safe flight and landing is extremely improbable.” Although Boeing had noted that the flight crew training curriculum did not require recurring training for an erroneous airspeed condition and that such events were occurring more frequently than predicted, the operator did not consider the advisory as a statement of a hazard that should be analyzed by its proactive process. Therefore, the document was not circulated to flight crews, nor did the operator consider what, if any, other action should be taken.

Our investigation also found that the operator delayed reporting this incident to the TSB because it did not recognize this event, despite the potentially serious consequences, as a reportable aviation occurrence.

In this occurrence, the operator did not initially recognize any hazards worthy of analysis by its SMS. The crew's effective performance masked underlying risks that were left unmitigated by the lack of guidance, training, and procedures available to them.

When an operator's proactive and reactive safety management system processes do not trigger a risk assessment, there is an increased risk that hazards will not be mitigated. And all of that brings me to the question of oversight. Because, as more and more operators transition to safety management systems, the regulator must recognize that those operators may not always identify and mitigate hazards as they should. The regulator—in this case, Transport Canada—must adjust its oversight activities to be commensurate with the maturity of an operator's SMS. International air carriers and regulators may want to review the lessons learned from this investigation.

Going forward, I think this is going to be one of the challenges facing the transportation industry over the next few years. Don't get me wrong: air travel, particularly scheduled operations, is, and continues to be, very safe. I'm not saying otherwise. But as we strive to constantly improve that already admirable safety record, this is one area where we can do more work.

It's one thing, though, to talk about improving an SMS. It's something else to give a concrete proposal for how. And I'd like to close by doing just that, and bringing up a subject that I know will be of interest to ALPA and other stakeholders. I'm talking specifically about the use of cockpit voice recordings, and our belief that they can be a useful tool in the context of a proactive, non-punitive SMS.

As the TSB has said many times over the years, when an accident occurs, a recording of the communication between the crew is often critical to our understanding of what happened, and—again—why. Currently, this information is available only to TSB investigators, and it is protected under legislation. It is, for all intents and purposes, sacred. We use these recordings to identify safety deficiencies, and only safety deficiencies. There is no determination of fault. There is no blame. There is no criminal or civil liability arising from such use. The recordings are not released or distributed outside the TSB, and definitely not to discipline or prosecute individuals.

But. Having access to this information may also be very useful for the operators involved—not, I need to stress, for punitive purposes, but in order to better understand what happened and why, and especially to use in the context of an effective SMS.

Because really, it's all about understanding the why, and the more information operators have, the better they can do that. Companies can be proactive, working collaboratively with their employees and employee representatives, to identify trends, or take a closer look to see how severe a problem may be, and whether those problems are internal or external. For example, what training, if any, is required for personnel? What changes may be required to SOPs?

We're entering some new territory with all of this. Obviously, in order for such a big change to happen, there would need to be an amendment to our legislation, one that allows the recordings to be shared and prescribes the appropriate safeguards and the exact purpose and manner in which that sharing might happen.

In the rail mode in Canada, this discussion between the TSB, the regulator and the industry is already underway. In fact, earlier this year, we announced that the TSB will be working with Transport Canada to conduct a joint safety study on the use of locomotive voice and video recorders—a study that will, in part, identify and assess related technology issues, and the associated legislative and regulatory considerations. The regulator has announced that the study's results will help inform any regulatory or legislative changes that may be developed.

Again, though, I want to be clear: regardless of the study's result, and regardless of what changes come from it, if the same were to happen in aviation, it would need to be made very clear that these recordings from the flight deck will continue to be protected from punitive use.

Now, I am aware that this will be considered controversial. There are certainly many issues involved. But it's time we at least began the conversation. In the rail mode, we've started. We've also initiated discussions with some of our international safety investigation counterparts. It is my hope—it is my belief—that this could lead to big things. Because as more operators—not just in Canada, but around the world—realize that understanding human factors—understanding the why—is critical if they are to prevent accidents … and if voice recordings become a part of a proactive, non-punitive SMS … then our work will take a big leap—you could say an evolutionary leap forward—in terms of safety. The payoff—safer skies for everyone—is a goal we can all agree on.

That's food for thought.

Thank you.