Multiple Incidents: B789 deviated from localizer, descended below safe altitude (avherald.com)
110 points by hugh-avherald on Aug 12, 2023 | 80 comments



Reading the news, it seems Boeing is much more affected by critical software bugs than Airbus. I am curious how their teams are structured.

As an aircraft industry nerd, my understanding is that Boeing has a dedicated software team (with a dedicated software engineering VP [1]), whereas Airbus seems to have a more integrated structure, with software jobs embedded in many teams. Moreover, Airbus was a pioneer in digitalization, while Boeing followed.

I have the feeling that a dedicated software department is more error-prone than an integrated approach, and that this could generalize to other industries (banking, automotive, etc.). I haven't found any resources or research papers about it, however.

[1]: https://boeing.mediaroom.com/2020-11-06-Boeing-Appoints-Jinn...


Airbus has invested in applying software verification for a while (https://www.di.ens.fr/~delmas/papers/fm09.pdf). I haven’t seen as much sustained interest in formal verification in industry in the US as I’ve seen in Europe, so I’m guessing this may contribute at least a little to the difference. Fortunately more US companies are starting to take this stuff more seriously.


Which is a bit surprising given Ada, Boeing, and the Department of Defense.


Maybe this has something to do with it:

"Boeing's 737 Max Software Outsourced to $9-an-Hour Engineers"

https://www.industryweek.com/supply-chain/article/22027840/b...


Hear me out:

So you have millions of Software Engineers flying in Boeing planes every year, right? They could be valuable free labor while in the air!

We just have to modify the in-flight WiFi so all you can do is clone the Boeing monorepo, and for every submitted pull request we give out free SkyMiles or something.


You're overthinking it: just provide access directly into the avionics and develop in production, isn't that what all the cool kids are doing these days?

You know the code quality will be good when their lives literally depend on it.


Well, the Tesla plane is going to be something to remember.


Ah yes, the new job description will be "devpilot", now that free time watching the screens at 30kft can be used productively.


If flying == true goto Landing()


Please delete this before Boeing MBAs get their grubby little booger hooks on it.


> "Boeing's 737 Max Software Outsourced to $9-an-Hour Engineers"

It is not like $200-an-hour engineers do not write crap code.

If your SW department lives in a bubble and testing is optional...


I’ve spent a decade in the public sector where you’ll find a little of both. In my experience the dedicated software departments produce better quality software over time. This isn’t to say you can’t do it with integrated teams spread out over your organisation, it’s just that it comes with more of a risk of two separate teams working “against” each other (usually not on purpose) when you have less “control” over your digitalisation organisation. Now, I’m actually a huge fan of doing the spread-out stuff, but not for critical software. The best comparison I have from my work is medical software that has life-and-death impact. I absolutely think your hospital should have dedicated digitalisation teams spread out over all its organisations to better spot and build things that benefit the organisation, but the people writing the surgery automation software need to be in the department that does that sort of thing, in my opinion.

What I have experienced to be more of a factor in the quality of your vital software than anything else is your corporate culture. I’m sure you can succeed with more autonomous teams and with dedicated departments alike, but to do so, the single most important factor is that you let your engineers and their quality standards decide when things are ready. Your entire organisation needs to understand that no matter what level of CEO you are, your opinion doesn’t matter if the engineers tell you something isn’t safe. And you need to foster a culture where your engineers will actually tell you that things aren’t ready, without any sort of incentive to lie or move too fast. Which is incredibly hard to do.

I suspect that Airbus is simply better at this than Boeing. With the number of scary stories to come out of Boeing in recent years, it sort of looks like Boeing has become a place where money matters more than anything else, so much so that quality products aren’t as important to them as they once were. Which can frankly happen to any organisation, and Boeing can also find its way back from there.

But really, if that happens to an organisation where it costs lives it’s the responsibility of the government to shut those companies down with regulations. Which is something the US seems to struggle with, and many Americans will likely even find this statement sort of appalling.


I've spent almost two decades with complex equipment. In my view it's about functionality. The spread-out SW works better because it is there to provide a specific function, requested by people who know what they need and what they are talking about. When you have the almighty software team, they don't understand mechanics, mechatronics, physics, avionics, thermodynamics, electronics, etc. And they end up making more impactful decisions hidden from all those other people. It can work, but the bigger the product the more difficult it is (also you need very humble software engineers, which, hmmm, is not so common, especially at large corporations)


Software engineers function better when they directly interface with the people they are serving. If a software engineer is on a team with the hydraulics people and wants to know how something works, they only have to turn around.

If you’re in a ‘dedicated software team’ you escalate to your Business Architect, who escalates to the business team, who contact the hydraulics department, who ask an overworked engineer to respond to ‘another stupid question’ from the software team.

I kind of see where those things go wrong.


Interesting, so Airbus seems to be structured like Apple [1]

[1] https://daringfireball.net/linked/2016/11/28/yglesias-apple-...


I never heard Airbus claim "you're flying it wrong" when one of their types misbehaves.


Very simple, because they don’t have enough domain knowledge, and in addition, they will assume many things and blame others (the user) and still act superior towards people who actually use the product.

I’ve seen it so many times, so I’m not surprised by this at all


I came to the conclusion on this specific question long ago that, like most things, it's mostly political. There's political/social power to be gained and lost by pushing negative stories about Boeing and suppressing negative stories about Airbus (and the opposite of course; suppressing positive stories about Boeing, pushing positive stories about Airbus). Just your typical information warfare.


> A Virgin Atlantic Airways Boeing 787-9, registration G-VBOW performing flight VS-206 from London Heathrow,EN (UK) to Hong Kong (China) with 235 passengers and 14 crew, was on an ILS approach to runway 25R at about 15:49L (07:49Z) with the autopilot engaged when it veered to the right off the localizer and descended below minimum safe sector altitude. The crew disconnected the autoflight system, assumed manual control of the aircraft, re-established the aircraft on the localizer about 12nm before the runway threshold and landed safely.

Note that “minimum safe altitude” here means being on the glidepath from which you can make the runway with total engine loss - not fifty feet above the ground heading into a building. You can tell because they regained the localizer 12 nautical miles out.

Still quite a bad bug that shouldn’t exist, but it sounds worse than it is.


>> ... descended below minimum safe sector altitude.

> Note that “minimum safe altitude” here means being on the glidepath from which you can make the runway with total engine loss

It's not just the glidepath altitude that goes into calculating landing approach minimums. It's also terrain clearance and separation from other aircraft that would be needed for a go around if landing is to be aborted.

> Still quite a bad bug that shouldn’t exist, but it sounds worse than it is.

Descending below minimums has been a contributing factor in quite a few accidents resulting in loss of life. Had there been other contributing factors, this could have ended up as something really nasty.

Misbehaving aircraft increases workload on the flight crew. In an already increased workload situation (like landing approach, in this instance), such unexpected and untrained-for increases in workload can and have caused crashes into terrain.

Both the MAX 8 crashes fall under this category, BTW, given that both situations would have been recoverable (as had already happened once before the fatal Lion Air crash, on the same aircraft) had the flight crew had enough time to think clearly.

In an industry where rules and regulations are written in blood, there's plenty of ink that warrants investigation into this incident and prevention of repeats.


Certainly - they’re called minimums for a reason. But there’s a difference between “computer deviation handled simply with little danger” and “plane was skimming the rooftops before recovery”.


> But there's a difference between “computer deviation handled simply with little danger” and “plane was skimming the rooftops before recovery”.

That really diminishes the risk here. Calling it "computer deviation" downplays the IRL deviation of the (very real) aircraft. "Little danger" ignores the (very real) environment in which the deviation happened and contributed to. "Handled simply" ignores how close it was to not being able to be handled.

The aircraft, at an altitude below 4500 ft. and descending, was flying directly towards a mountain peak at height 3140 ft. It would have reached the peak in just over a minute. If the pilots hadn't reacted quickly (14 seconds), this could very well have ended as "plane was skimming the rooftop" of Tai Mo Shan. A similar incident saw Air France flight 953 almost hit Mount Cameroon; disaster was narrowly avoided simply because the captain (under severe low-visibility conditions) rolled the dice to climb. Some incidents have had fates unrecoverably sealed in less than 22 seconds of total time.

The margin of safety here is not to be overestimated.
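
For a rough sense of that margin, here's a back-of-envelope estimate in Python; the distance and groundspeed are my own assumed figures (not from the report), picked to be typical for this phase of flight:

    # Time-to-terrain estimate. Both inputs are illustrative assumptions,
    # not figures from the investigation report.
    distance_to_peak_nm = 3.5   # assumed distance from the aircraft to Tai Mo Shan
    groundspeed_kt = 190        # assumed typical approach groundspeed

    time_to_peak_s = distance_to_peak_nm / groundspeed_kt * 3600
    print(f"{time_to_peak_s:.0f} s to the peak")   # ~66 s: "just over a minute"

    # A 14 s reaction time consumes over a fifth of that window.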

Indeed, as evidence, you can see Section 4.2 of the investigation report (Proactive Safety Actions taken by VAA), particularly bullet (2) of subsection 4.2.1, where the flight operator (Virgin) clearly advises its flight crew (how) to stay clear of the high terrain. From asking for a different runway to outright aborting the landing "outside of half-scale LOC deflection". That's a super small margin of error, when compared to how much deviation happened in this incident.

----

BTW, "plane was skimming the rooftops before recovery" is still considered an extremely serious incident in aviation, even if it results in no loss of life or damage to the aircraft. Less than 1000 ft. of vertical separation between the aircraft and anything that's not the landing runway (extended to 3 NM) is considered a safety violation.

In general, aviation safety strongly avoids a glass half-full "Phew! Nothing bad happened." mentality and focuses on keeping a glass half-empty "That could have been really bad!" mentality. Because danger can gang up. Multiple things can go wrong simultaneously. And danger has to succeed only once, while humans need to succeed every time.

Still, the article only states plain facts and does not play up the incident. Nothing about the wording of "deviated from localizer and descended below minimum safe altitude" (or the body of the article) is made up or overplayed. It absolutely does not deserve a label of "sounds worse than it is". There's no way it could have been worded to sound better without leaving out key facts about what actually happened.


Within the last decade, I've noticed that the vast majority of non-aviation, non-safety-critical software has taken a steep nosedive in quality. It's not so surprising that that same decline has now crept into more critical areas too.

Meanwhile, things like quality metrics and fancy tools for generating or enforcing such have become very popular in the industry, and no doubt most if not all released code is deemed to be at least "acceptable" in quality by such metrics, so I think the underlying cause of this decline is a deeper and harsher truth.


Read the Hong Kong report. That explains the problem.

The problem is, when should the system start the turn to align with the runway? There are multiple constraints. Obstacle-free space to descend is only guaranteed within a wedge that goes outward from the runway. Here, the aircraft entered that wedge at a large angle and too close to the airport for a standard turn to bring the aircraft into line with the runway while staying within the safe wedge.

The diagram on page 13 of [1] shows the concept. But that's not to scale. See the blue line drawn on the approach chart in [2] under "Background information". Note how narrow that wedge is. The problem is that the control system acted as if overshooting the wedge and then gradually returning to course was fine. It started the descent outside the wedge, and didn't alarm.

According to the approach chart, the plane was supposed to be roughly aligned with the runway when it passed over waypoint RIVER. But it was about 40 degrees off at RIVER. That shouldn't have been a problem, but it was.

[1] https://www.tlb.gov.hk/aaia/doc/Investigation%20Report%206-2...

[2] https://avherald.com/h?article=4d42d0b5
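
To get a feel for the geometry, here's a back-of-envelope sketch (my own simplified model, not the report's): how far a standard-rate turn overshoots the centreline if it only starts as the beam is crossed, compared with the half-scale deflection Virgin now treats as the abort threshold. The groundspeed and the half-scale angle are assumptions; the 40-degree intercept is the figure reported at RIVER.

    import math

    gs_kt = 190                        # assumed approach groundspeed
    intercept_deg = 40                 # roughly the off-angle reported at RIVER

    v_fps = gs_kt * 6076.0 / 3600.0    # knots -> feet per second
    omega = math.radians(3.0)          # standard-rate turn: 3 degrees per second
    radius_ft = v_fps / omega          # resulting turn radius (~1 NM here)

    # Lateral overshoot past the centreline if the turn starts right on it:
    overshoot_ft = radius_ft * (1.0 - math.cos(math.radians(intercept_deg)))

    # Half-scale localizer deflection (assumed ~1.25 deg) at 12 NM out:
    half_scale_ft = 12 * 6076.0 * math.tan(math.radians(1.25))

    print(f"overshoot  ~{overshoot_ft:.0f} ft")    # about 1400 ft
    print(f"half-scale ~{half_scale_ft:.0f} ft")   # about 1600 ft at 12 NM

Even in this crude model, a 40-degree intercept eats nearly the whole half-scale margin, which is why the turn has to be anticipated well before the beam is crossed.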


> Here, the aircraft entered that wedge at a large angle and too close to the airport for a standard turn to bring the aircraft into line with the runway while staying within the safe wedge.

The entry was okay. The aircraft (and the others) was on a STAR (as with almost all arrivals into HK due to the complex terrain/airspace/traffic environment), and appears to have flown it correctly. The tight final turn is normal, but is supposed to coincide with LOC capture. (The final STAR waypoints for most arrivals to 25L/R [C didn't exist then], LOTUS/RIVER, are virtually coincident with the LOC capture point, which is probably the detail that caused this issue to be noticed in HK and not elsewhere.)

Since the LOC was not captured the aircraft did not complete the turn but instead transitioned to HDG mode and rolled out of the turn at that heading. It then, despite not having captured LOC, captured GS and descended, at which point attentive pilots+ATC noticed the issue and dealt with it, resulting in either a go around or a correction and completion of the approach.
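
A toy sketch of that coupling in Python; the mode names are real, but the capture logic below is my own simplification, not Boeing's:

    # Lateral and vertical channels arm and capture independently (assumed,
    # simplified logic), so a failed LOC capture does not block a GS capture.

    def try_capture(channel, deviation, capture_limit):
        """Promote an armed mode to active once deviation is within limits."""
        if channel["active"] != channel["armed"] and abs(deviation) < capture_limit:
            channel["active"] = channel["armed"]

    lateral  = {"armed": "LOC", "active": "HDG"}   # rolled out of the turn in HDG
    vertical = {"armed": "GS",  "active": "VS"}

    try_capture(lateral,  deviation=3.0, capture_limit=1.0)   # still off: no capture
    try_capture(vertical, deviation=0.2, capture_limit=0.5)   # on glidepath: captures

    if vertical["active"] == "GS" and lateral["active"] != "LOC":
        print("descending on the glideslope while not established on the localizer")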


> It then, despite not having captured LOC, captured GS and descended.

Ouch. Right. Lateral and vertical guidance are somewhat separate.

Part of the problem seems to have come from the Consistent Localizer Capture system. Boeing has a patent on this.[1] This uses other data sources (GPS and/or inertial guidance, per the patent) that yield lateral position to get the aircraft onto the approach path. It doesn't just use the localizer beam. The beam isn't wide enough for that if approached at a large error angle. But the system apparently tells the pilot that it's in LOC mode and has captured the localizer when it's really still in CLC mode and trying to find the localizer beam.

Or worse. Boeing writes: "Boeing has received reports that suggest, depending on the geometry and ground speed of the approach, CLC may activate for such a short time that the three FCMs fail to synchronize the engaged autopilot mode and fail to transition to the localizer capture mode. This may result in the aircraft turning to a localizer intercept angle of approximately 20 degrees and flying through the localizer on this track, rather than properly capturing the localizer. “LOC” will remain on the FMA despite the failed capture and, in some circumstances, the aircraft may begin descent down the glideslope while 20 degrees off of the localizer course."

So the displays are telling the pilot that the localizer beam has been captured, while in fact, not only is the aircraft not following the localizer, the Consistent Localizer Capture system has given up and the aircraft is just holding a heading. The wrong heading. Is that correct? How can that not be an alarm condition?

This is puzzling, because Fig. 8 seems to indicate more normal control overshoot, where the CLC system is still trying to get the aircraft on the proper approach path after an overshoot. That's bad enough; there's no guarantee of terrain clearance off the proper approach path. But it's at least understandable as control-system behavior. Dropping all the way back to heading hold while displaying LOC and not alarming is just broken.
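
As a purely hypothetical illustration of how an annunciation can diverge from the active control law when redundant computers fail to synchronize (the voting scheme below is my invention, not the actual FCM design):

    # Hypothetical mode-voting sketch; the real FCM synchronization logic is
    # certainly more involved. This only illustrates the class of failure.
    fcm_modes = ["LOC", "CLC", "CLC"]   # one FCM latched LOC in the brief CLC window

    # Assumed display rule: annunciate LOC as soon as any channel reports it...
    annunciated = "LOC" if "LOC" in fcm_modes else fcm_modes[0]

    # ...while the control law follows the majority, which never captured:
    controlling = max(set(fcm_modes), key=fcm_modes.count)

    if annunciated != controlling:
        # The crew sees "LOC" while the aircraft just holds its heading.
        print(f"FMA shows {annunciated}, control law is in {controlling}")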

Here's some pilot discussion of this topic.[2] At least one pilot reports seeing this failure. Other aircraft have systems to use GPS/INS data to get lined up for approach, but apparently don't have this problem.

(Not a pilot, used to work for an aerospace company.)

[1] https://patents.google.com/patent/US20130066489A1/en

[2] https://www.airliners.net/forum/viewtopic.php?t=1454933


I was just reading about the Air France flight that was lost over the Atlantic due to pilot panic, and a near-collision caused by confused pilots almost landing on a crowded taxiway, and a near-crash caused by pilot miscommunication about a flap setting just after takeoff that happened even more recently...

In light of this, I'm practically relieved to read about incidents caused by software bugs where the pilots were able to handle it calmly and with ease.

I don't know if that's a bad thing or a good thing.


I suppose for every situation where a pilot gets confused or panics, resulting in a serious incident, there are dozens (hundreds?) of situations that never progressed far enough, because a pilot calmly prevented it.

Personally, I consider autopilots etc. boredom-relieving devices, not an improvement on or even a replacement for pilots in the future. I feel much safer in a plane a human lands by hand than if a computer program could do it.

Also, the whole "software controlled fly by wire" still makes me uneasy. I feel much better when I know there are actual physical cables that let the pilot move control surfaces. In a huge plane I can understand the necessity of using a hydraulic system which can be made to be extremely reliable, but software?

Perhaps it's because I got to see how software is usually developed?

When a bridge or a building is built, you get an architect (depending on the country, also the civil engineer) to sign, under his/her responsibility, that this thing is not going to collapse and kill people. If it does, and negligence is proven, that person goes to jail for a long time. There is no equivalent in software.

When responsibility is distributed over countless people you often get unreliable products. Why is it we don't build buildings like this? Probably because people died in shoddily built buildings, and also everyone can see cracks in the walls, etc. If software runs badly, it's not so obvious.

Sadly, I think more people will die before this 70+ year old discipline comes even close to implementing the safety processes devised by millennia of expertise in fields like civil engineering. So what can a person do to lower the probability of getting hurt? Probably not much. If you have to fly, you usually don't get a choice of plane (and even if you do, it's usually Airbus vs Boeing). But in other areas of life one has much more control. For example, I'll not drive a car that has a computer I can't disable that can slam the brakes at any moment at highway speeds if it detects "something" in front.


> Sadly, I think more people will die before this 70+ year old discipline comes even close to implementing the safety processes devised by millennia of expertise in fields like civil engineering.

With all due respect, the safety culture and processes in aviation are excellent and exemplary.

Software in aviation is a bit more problematic (viz, MCAS in the MAX; though note that in the instances discussed here nobody came to harm, thanks to redundancy and pilots doing their job), but also subject to processes and rules that far exceed one or two people putting a signature to paper.


Yeah sometimes it’s easy to critique the air industry. But imagine if these same safety reviews were applied to the car industry. So many lives could be saved but we have become numb to car accidents.


Well, these problems do publicly rear their ugly head in cars intermittently. Tesla pushes out recall-worthy software updates, Toyota had its infamous stuck accelerator pedal issue in 2009, etc.

The airline industry gets a weird amount of public attention across the board, ranging from passenger safety to massive investigations if a plane goes down. It's not necessarily a bad thing, but you certainly do not see federal regulators stopping people from driving a model of car (recalls yes, but nothing akin to grounding) if there's one unexplained crash.


> Toyota had its infamous stuck accelerator pedal issue in 2009, etc.

AFAIK that was never proven to be a software problem.


Strictly true, but settling for $1.2B certainly points to there being a problem.


Almost all (in the mathematical sense) car accidents are caused by pilot error, and not the subtle kind: driving into oncoming traffic, running red lights and other ways of ignoring right of way, drugs, speeding and tailgating.

You could envisage technical solutions for some classes of these errors, but they would usually require huge changes to infrastructure or only work for new cars. And full self-driving is still a ways off.


I watch and listen to a lot of Strong Towns and NotJustBikes - a lot of the infrastructure change solutions already exist in European cities.

I view the overwhelming majority of driver errors as faulty system design. Instead of red lights, use roundabouts like in Europe.

I just came from a small town in France where they put staggered obstacles on one side of the road, which forced both directions of traffic to merge into one lane. So they forced people to pull into oncoming traffic, thereby causing cars to drive slower. A really low-cost solution for slowing car speeds in the busy pedestrian area and making the roads safer for pedestrians.


In aviation, these kinds of things are solved with training and process. There is a reason pilots need a certain number of hours and multiple/recurrent checkrides to be able to fly people around for hire.

Most people think they are good drivers which makes them immune to actually getting better through training. As for process, good luck implementing it if you can’t train it.


I think the problem you mention has to do with the fact that software does not apply to one specific domain. It is easy to formulate a framework of engineering rigor for a specific domain, but this is not easily generalisable to all domains that software is applied to. Thankfully, this is where regulators play an important role. Building codes have an explicit statement of how they want a building to work and fail; if you want to build engineering rigor in a team that does civil engineering software, that would be your starting point. For aviation, the air authorities do the same. AFAICT the US had outsourced that part of regulation to the regulated enterprises for a while, and this came up in the 737 MAX aftermath. This is one of the few cases where the US seriously lags behind the rest of the world, in my opinion.


I encourage you to research whether flying has gotten safer or more dangerous since 1970, and then consider why that is.


Downside of this is when you get a panicked pilot and a software bug, which is actually sort of inevitable since the one rides on top of the other.

Having worked both in aviation incident investigation and for Big Mama B at various times, every time I see a Boeing story my whole body involuntarily clenches for a moment.

In . . delight! Of course! Delightful clenching.


Full-body kegels


Arguably, the Air France incident was both a pilot and software problem. Although the software worked as designed, it contributed to the pilots' confusion. This was particularly true of the stall warning, which would reject angle of attack readings below about 60 knots forward airspeed but then sound again whenever the pilots did pitch down, the correct response to a high-altitude stall.


I always wondered: wasn't there a simple glass of water in the cockpit? It wouldn't tell them the angle of attack, of course, but they could have used it as a sanity check.


The resulting flight path of this bug is a very common one: going through the localizer and then turning back to re-intercept. And it's in the standard operating procedures for every airline/operator for the crew to monitor LOC capture.

This "oops it didn't capture" also happens when you forget to select the right autopilot mode, which isn't an uncommon mistake for pilots in training.

It's actually so common that at airports with two runways close together, they make aircraft capture the two localizers from different altitudes so there is no risk of collision when they fail to capture. (For example, 18R and 18C in Amsterdam capture at 2000 ft and 3000 ft; this happens nearly every day.)

So I'm going to say: Not really a big deal in terms of safety, easy to handle for the crew. But absolutely worth a safety report since unexpected auto flight behaviour needs an investigation and correction.


Why can’t Boeing do software? What is going on? I feel like these FCS issues simply weren’t happening pre-2010.


All of the strongest SWEs at Boeing got sucked away to FAANG in Seattle for 2-3x TC. Check them out on LinkedIn... there's been a steady brain drain to those companies for many years. Some were even early Amazon employees. It's crazy to realize Boeing had "big data" at the time when Amazon was still a startup.


Why did they shift away from Ada? It seems Boeing’s avionics systems had far fewer incidents before they started adopting other languages.


What are they using now?


They've been using multiple languages for a long time and were, as far as I can find, never exclusively using Ada. At least with the 787 the language choices we (subcontractor) had were Ada, C, and C++.


I’m guessing you picked C++ out of those options?


I didn't, but yes it was the one chosen before I got there.


Node


As in Node.js or something else? I don’t know what the 787 uses but I do know that it is not node.js


Exactly, we'd already be hearing about 787 crashes all over if they were using Node.

I like Node, but they'd be stupid to use it for aviation instrument software.


The bigger issue is that a lot of the microcontrollers in the 787 probably can’t even run JavaScript lol


>issue

Sounds like a critical safety feature to me


There’s critical safety, and then there’s not even being able to do something

Like just because you can do something doesn’t mean you should, but if you are not able to can then you are also not able to should


Probably similar reasons to why IOT companies can’t do security.

Why use best practices from other industries when you can re-invent things?


This raises the question of what kinds of tests Boeing did for this autopilot...

I would want to have simulated landings at every airport in the world and a few million synthetic airports, and to do those tests with various simulated wind speeds, signal degradations, engine failures, broken actuators, damaged wings, etc.

For each test, I would determine if the plane is within the design envelope (for example, crosswind speed under 100 mph for landing), and if so, require the test to succeed.

I would also be running tests outside the design envelope, but be aiming to tweak the design to pass as many as possible - that way you might survive a few crazy 'wing cut off by passing UFO' type incidents.

I don't see any way such a bug could pass such tests... Which suggests they didn't run those tests, which is concerning.
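
A sketch of what such a test matrix might look like; every name here is an illustrative stub (there is no real simulator behind it), just to show the sweep-plus-envelope-gating idea:

    import itertools

    AIRPORTS = ["VHHH", "EGLL", "SYN-000001"]   # real plus synthetic, abbreviated
    WINDS_KT = [0, 15, 30, 45]
    FAILURES = [None, "engine_out", "stuck_actuator", "degraded_loc_signal"]

    MAX_CROSSWIND_KT = 38   # assumed design-envelope crosswind limit

    def simulate_approach(airport, wind_kt, failure):
        """Stub standing in for a full flight-dynamics simulation run."""
        ok = wind_kt <= MAX_CROSSWIND_KT and failure != "stuck_actuator"
        return {"landed_safely": ok}

    for airport, wind, failure in itertools.product(AIRPORTS, WINDS_KT, FAILURES):
        result = simulate_approach(airport, wind, failure)
        in_envelope = wind <= MAX_CROSSWIND_KT and failure in (None, "engine_out")
        if in_envelope:
            # Inside the envelope, a safe landing is a hard requirement.
            assert result["landed_safely"], f"{airport} wind={wind} failure={failure}"
        # Outside it, record the outcome and tweak the design to pass as many
        # as possible, without gating the release on them.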


> wing cut off by passing UFO' type incidents

Just a side note: the loss of a wing is one of the "exceptions" that go deliberately "unhandled". Not even sensors and alerts are implemented -- after all, it's such an unlikely and unrecoverable event that they would simply add weight and complexity just to let the crew know the plane is doomed.

This sadly happened, though, with flight Gol 1907 in 2006. The wing was cut off by a collision with another plane in flight. You can hear on the voice recorder the warnings about the plane falling outside the safety envelope (and no mention of the lost wing, of course), and it's really sad how lost the crew is during the seconds between the loss of the wing and the airplane's destruction.


Interestingly, there are lots of 'unusual' flight modes that can actually stay aloft, especially if you are able to throttle up/down and move actuators very fast (i.e. at 10 Hz). I could totally imagine some existing planes being able to do some of these.

Quadcopter and drone hobbyists demo them sometimes. Eg. https://hackaday.com/2016/05/04/your-quadcopter-has-three-pr...


Interesting indeed. There was an Israeli pilot who managed to land an F-15 with a single wing [0]. That was considered impossible until then.

I couldn't find a link, but I think that, more recently, they were trying to reproduce the feat using the possibilities offered by automation. That would be a desirable feature in military aircraft indeed. However, I'm not sure whether that would be physically possible in airliners. The lift provided by the fuselage is too little.

[0] https://en.m.wikipedia.org/wiki/1983_Negev_mid-air_collision


The beauty of international standards and norms is that you do not have to test every single use case. All you have to do, and that is difficult enough, is test the standardized use cases. And then someone else makes sure everyone is adhering to those standards. That is what regulations are good for.


I remember back when Airbus came out, people were worried it relied too much on automation. How the tables have turned.


I don’t think that’s settled. AF447, an almost perfectly functioning plane, was flown straight into the ocean because the flying pilot completely forgot about basic flight physics: without automation, you can’t just keep increasing the angle of attack and expect to go up. The guy spent 99% of his time flying normal law, under automation protection, and when that shield was undone he resorted to his gut instincts, operating as if the shield were still there while he was panicking.


Those same tables could keep turning, too. Given enough time, anything can become unmaintainable and eventually fails. Airbus might just survive longer than Boeing but Airbus internal processes will likely degrade at some point as well.


There's an interesting comment from a moderator(?) in the comment section:

> However, part of your comment can not be shown as you had used a character opening a HTML tag rendering a whole sentence like an invalid HTML instruction.

It appears this article reveals _two_ bugs.


AVHerald is quite a... special... website.

It strips anything that looks like a URL or HTML tag from the comments in a way that typically removes the rest of the comment (or sometimes only part of it), with frequent false positives and mangled comments.

Until recently it didn't support HTTPS, and it still doesn't redirect HTTP to HTTPS, with the justification being that HTTPS increases server load.

The heading "The Aviation Herald" at the top only links you back to the homepage if you have JS enabled, for reasons that were described to me but were too incomprehensible for me to remember.

Of course, virtually every other website around solves these problems in more effective ways, but AVHerald has its own style. Regardless, if you are into aviation it's one of the best sources around for incident and accident reports.


They should have thought about XSS mitigation...


It would be interesting to know with more specificity how the bug in the 787 AFDS is interacting with the VHHH 25R localizer. One would expect all localizer beams to conform to a single specification and not require the autoflight systems to behave differently. Indeed, one of the commenters on the site mentioned anomalies with the VHHH ILS 25R.

An aircraft I flew regularly some years ago would sometimes fail to capture the localizer at reasonable intercept vectors or swim around the extended centreline but I never experienced this uncoupling of lateral and vertical guidance. That’s insidious.


The best European engineers (including graduates from elite engineering schools) still work for industrial companies like Airbus, etc.

In the US, the same caliber of people now work for big tech or for the financial industry, while industrial companies like Boeing offshore tech work abroad or hire a "diversified" workforce that might not be as qualified as it used to be.


The autopilot not doing what was expected, leading to a too-low approach, sounds a bit like the 777 crash in S.F. There it was not a bug but hidden modes in the automation, leading to a deviation between what the pilots thought the plane would do and what it actually did.


Defect in software. Identified and corrected.

Lucky it wasn't important, like "inadvertent display of advertisement to non optimum user".

You know, something important.


Sounds like the localizer for 25R should be checked...


(2020)

Confusing update of an article, perhaps, but the title should reference in some way that the incidents are years in the past.


Needs a [2020]


Direct link to latest report (dated June 2023, released Aug 11th 2023 by Hong Kong's AAIA)

    Deviation from Intended Flightpath
    Investigation Report
    Boeing 787-9, G-VBOW
    Waypoint RIVER of Hong Kong
    18 October 2019
https://www.tlb.gov.hk/aaia/doc/Investigation%20Report%206-2... (51 pages)


Final report was 11 Aug 2023.


Wow, thanks! I had to search the page for 2023 to even find the text in the middle of things. Maybe it is late at night and I was skimming too quickly.


Incident(s) happened in Sep and Oct 2019, investigation opened in Mar 2020, closed in Aug 2023. Seems excruciatingly slow for a (series of) serious incident(s).


While I don't disagree that it's a long time, the lack of further incidents in 2019 suggests it was well mitigated early on, and the lack of incidents at other airports suggests there wasn't an imminent threat to flight safety. So the primary purpose of the investigation and report is as a learning exercise for improving future safety.

For pilots there isn't a lot to learn here, since the possibility of failing to capture the LOC is already known, trained for and monitored for during flight (which is why the incidents didn't result in flying into a mountain), and the reason for the failure to capture the LOC doesn't really change the actions taken in response.

For software implementors, there is plenty to learn [0], but given the relatively small number of organisations implementing autoflight software for transport category aircraft, it's likely they were already well aware of what went on here. So the final report is then to some extent just ticking a regulatory box (and informing the public).

[0] I'm actually somewhat surprised they don't have automated tests that simulate every single STAR leading to every single approach at every single major airport worldwide. Or maybe they do, but the simulation lacked the accuracy to simulate this aspect.
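
On [0]: a first pass over procedure geometry wouldn't even need a full simulator. Something as crude as the sketch below could flag candidates like RIVER for detailed simulation; the data rows and thresholds are made up for illustration, and a real version would read them from a nav database:

    # First-pass audit of STAR-to-approach transitions. The rows and limits
    # are illustrative assumptions, not real procedure data.
    TRANSITIONS = [
        # (procedure, intercept angle at final waypoint, distance to LOC capture)
        ("VHHH 25R via RIVER", 40, 0.5),
        ("generic long straight-in ILS", 20, 8.0),
    ]

    MAX_INTERCEPT_DEG = 30    # assumed comfortable LOC-capture intercept angle
    MIN_CAPTURE_GAP_NM = 2.0  # assumed margin between final waypoint and capture

    for name, angle_deg, gap_nm in TRANSITIONS:
        if angle_deg > MAX_INTERCEPT_DEG and gap_nm < MIN_CAPTURE_GAP_NM:
            print(f"flag for simulation: {name} "
                  f"({angle_deg} deg intercept, {gap_nm} NM before capture)")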



