Is faulty automation really behind the two Boeing 737 Max crashes? And is the MCAS anti-stall system to blame?

Boeing 737 Max: An Artificial Intelligence Event?

JAMES THOMPSON
The UNZ Review

Conventional wisdom is that it is too early to speculate about why, in the past six months, two Boeing 737 Max 8 planes have gone down shortly after takeoff; if all that follows is wrong, you will know it very quickly. Last night I predicted that the first withdrawals of the plane would happen within two days, and this morning China withdrew it. So far, so good. (Indonesia followed a few hours ago.)

Why should I stick my neck out with further predictions? First, because we must speculate the moment something goes wrong. It is natural, right and proper to note errors and try to correct them. (The authorities are always against “wild” speculation, and I would agree with them if they had an a priori definition of wildness.) Second, because putting forward hypotheses may help others test them (if they are not already doing so). Third, because if the hypotheses turn out to be wrong, that will indicate an error in reasoning, and will be an example worth studying in psychology, a discipline so often dourly drawn to human fallibility. Charmingly, an error in my reasoning might even illuminate an error that a pilot might make, if poorly trained, sleep-deprived and inattentive.

I think the problem is that Boeing's anti-stall patch, MCAS, is poorly configured for pilot use: it is not intuitive, and it is opaque in its consequences.

By way of full disclosure, I have held this opinion since the first Lion Air crash in October, and I ran it past a test pilot who, while not responsible for a single word here, did not argue against it. He suggested that the MCAS characteristics should have been set out in a special directive and drawn to the attention of pilots.

I am normally a fan of Boeing. I have flown Boeing more than any other make, and that might make me loyal to the brand. Even more powerfully, I thought they were correct to keep the control yoke, and that Airbus was wrong to drop it, simply because the position of the yoke is visible to both pilot and co-pilot, whereas the Airbus side stick does not show you at a glance how far the nose is being pulled up or pushed down.

http://www.unz.com/jthompson/fear-of-flying-and-safety-of-gruyere/

Pilots are bright people, but they must never be set a badly configured test item with tight time limits and potentially fatal outcomes.

The Air France 447 crash had several ingredients, but one was that the pilots of the Airbus A330-203 took too long to work out that they were in a stall. In fact, that realization hit them only very shortly before they hit the ocean. Whatever the limitations of the crew (a sleep-deprived captain, an uncertain co-pilot), they were blinded by iced-up pitot tubes giving false airspeed readings, and by an inability to set the right angle of attack for their airspeed.
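For readers outside aviation, the relation the crew lost sight of can be stated in one standard equation. This is textbook aerodynamics, not anything specific to the A330:

```latex
% In steady level flight, lift must balance weight W. The lift
% coefficient C_L rises with angle of attack alpha only up to a
% critical value; beyond it the wing stalls. Solving for speed at
% C_L,max gives the minimum (stall) speed:
L \;=\; \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha) \;=\; W
\qquad\Longrightarrow\qquad
v_{\mathrm{stall}} \;=\; \sqrt{\frac{2W}{\rho\, S\, C_{L,\max}}}
```

With the pitot probes iced over, the crew had no trustworthy airspeed v, and so no reliable way to judge whether the angle of attack they were holding would keep the wing flying.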

For the industry, the first step was to fit airspeed probes that were less likely to ice up. However, it was clear that better stall warning and stall protection were also required.

Boeing had a problem fitting larger and heavier engines to its tried and trusted 737 airframe: the engines had to sit higher and a little further forward on the wing. That placement gave the 737 Max a tendency to pitch up at high angles of attack, which in turn led to an anti-stall patch, MCAS, being put into the control systems.

It is said that generals always fight the last war. Safety officials correct the last problem, as they must. However, sometimes a safety system has unintended consequences.

The heart of the matter is that pilots fly ordinary 737s every day, and have internalized a mental model of how that plane operates. Pilots probably do read manuals and safety directives, and they practice for rare events. However, I would bet that what they know best is how the plane actually behaves most of the time. (I am adjusting to a new car, same manufacturer and model as the last one, and the nine years of habit are still often stronger than the manual-led actions required by the new configuration.) When they fly a 737 Max there is a piece of software in the system which detects stall conditions and corrects them automatically. The pilots should know that, should adjust to it, and should know that they must switch the system off if it seems to be getting in the way; but all of that may be a step too far when something so important is so opaque.
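To make the opacity concrete, here is a minimal sketch of the kind of control logic involved, written in Python. It is emphatically not Boeing's code: the sensor names, thresholds and trim increments below are my own assumptions. But the broad shape (automatic nose-down trim whenever the angle-of-attack reading is high, repeating until the crew throws the stabilizer trim cutout switches) matches public descriptions of MCAS.

```python
# Illustrative sketch of MCAS-like logic -- NOT Boeing's implementation.
# All thresholds and increments here are invented for illustration.

AOA_THRESHOLD_DEG = 12.0   # assumed angle-of-attack trigger
TRIM_INCREMENT_DEG = 0.5   # assumed nose-down trim per activation

def mcas_step(aoa_sensor_deg: float,
              flaps_retracted: bool,
              autopilot_engaged: bool,
              stab_trim_cutout: bool,
              current_trim_deg: float) -> float:
    """Return the stabilizer trim after one pass of the control loop."""
    # The crew's sure override is the stabilizer trim cutout switches.
    if stab_trim_cutout:
        return current_trim_deg
    # Public accounts say MCAS is active only with flaps up, autopilot off.
    if not flaps_retracted or autopilot_engaged:
        return current_trim_deg
    # A single high AoA reading -- even from a faulty vane -- commands
    # nose-down trim, and the check repeats on every pass.
    if aoa_sensor_deg > AOA_THRESHOLD_DEG:
        return current_trim_deg - TRIM_INCREMENT_DEG
    return current_trim_deg

# With a stuck sensor reading 20 degrees, the loop trims nose-down on
# every pass, whatever the pilot does with the yoke:
trim = 0.0
for _ in range(5):
    trim = mcas_step(20.0, True, False, False, trim)
print(trim)  # -2.5: steadily accumulating nose-down trim
```

The failure mode is visible in the last few lines: if the angle-of-attack value comes from a single faulty vane, the system keeps trimming the nose down on every pass, and nothing announces why. The pilot pulls back; the plane quietly leans forward again.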

What is interesting is that in emergencies people fall back on their most validated mental models: residents fleeing a burning building tend to head for their usual exits, not the nearest or safest ones. Pilots are used to pulling the nose up and pushing it down, to adding power and easing it back, and when a system quietly takes over some of those decisions, they need to know about it.

After Lion Air I believed that pilots had been warned about the system but had not paid sufficient attention to its admittedly complicated characteristics; now it is claimed that the system was not in the training manual at all. It was deemed a safety system that pilots did not need to know about.

This farrago has an unintended consequence: it may serve as a warning about artificial intelligence. Boeing may have rated the correction factor as too simple to merit human attention, something required mainly to correct a small difference in pitch characteristics, unlikely to be encountered in most commercial flying, which is kept as smooth as possible for passenger comfort.

It would be terrible if an apparently small change in automated safety systems, designed to avoid a stall, turned out to have given us a rogue plane: killing us to make us safe.

___
http://www.unz.com/jthompson/boeing-737-max-an-artificial-intelligence-event/
