
The aviation industry doesn't usually reduce incidents down to a single proximate or ultimate cause like that, but instead notes every decision/event that contributed to the incident occurring. And then they usually spend time on how they can reduce the likelihood of such an event recurring at every single point in that chain. It appears that the coast guard plane lined up on the runway to prepare for takeoff instead of holding short of the runway as instructed, and the JAL pilots did not see that their runway was not actually clear until it was too late to avoid a collision (if they saw the other plane at all).

From the JAL plane's perspective, automation is only as good as the sensors and logic that check whether a runway is actually clear. Regardless of whether extant solutions are better than humans right now or will be better in the future, those solutions will still have the same failure mode as humans: suddenly realizing that a runway which appeared to be clear actually isn't, when it is too late to abort.

From the coast guard plane's perspective and from the controller's perspective, automation and warnings might have been able to alert the crews or prevent the incursion. However, automation can't just be thrown around as a solution without deep knowledge of the system and environment in which it will operate. The main reason is ensuring that the transition between automated control and human control is clearly evident to the humans involved, that it occurs with enough time for the human to actually be able to avert a problem, and that the human is actually ready to take control.

If the automated system silently disengages, a plane with permission to take off from a runway will instead just sit on the runway because the pilot assumes the automation will begin the takeoff as expected, which brings us right back to a plane on the runway when it shouldn't be there and a landing plane not seeing it until it is too late.

It is actually possible to land a plane purely with automation (military drones do it all the time), but that isn't done on commercial aircraft, because the constraints are so tight that if the automation were to fail for any reason there is a large chance the human pilot would be unable to prevent a crash even when perfectly monitoring the system (and perfect monitoring can't be assumed).


> The aviation industry doesn't usually reduce incidents down to a single proximate or ultimate cause like that

Correct. Folks here might be interested in the swiss cheese model: https://en.m.wikipedia.org/wiki/Swiss_cheese_model


There's actually a legal reason for tacking on anyone who is plausibly liable. The basic idea is to sue everyone in a single case and let the court sort out actual liability for each party as part of that single case.

Say the lawsuit is originally against just Holman Fleet Leasing and FedEx is the one legally liable (maybe FedEx is the one doing something naughty, or maybe there's some contractual language around FedEx assuming all legal liabilities for the vehicles sold). You're going to spend a bunch of time in court arguing with Holman about whether they're even the right party to sue, and your case is either going to get thrown out or you're going to lose. Meanwhile, the statute of limitations is still ticking, so if it takes long enough to adjudicate the case against Holman, you won't even be able to refile the same case against the correct respondent. Oops. And even if the statute of limitations miraculously hasn't run out yet, consider that the kind of person who would roll back an odometer would also have a punishingly short document retention policy, so all the documents that still existed when you filed against Holman have long since been shredded and destroyed, and your discovery in the new case against FedEx is going to be a single email saying "yeah, we don't have anything going back that far." Oops again.

Now consider the lawsuit filed initially against both Holman and FedEx. Assuming your list of respondents is complete, the case isn't going to get thrown out because you sued the wrong person. Liability will still be adjudicated (and the case amended to drop respondents as the proper liability holder gets determined), but now you don't need to worry about the statute of limitations running out as you wait for the determination of liability against the first respondent. And the document retention clock starts with that lawsuit and covers the time when you're just determining who holds liability, so now they can't delete those documents even if they otherwise would. Both of them are now going to be legally required to retain all the stuff you list in discovery for the duration of at least their involvement in the case. Sure, they could destroy those records anyway, but when records are destroyed in violation of discovery, courts regularly let the worst possible inferences be drawn against the respondent who destroyed them.


These things never see the inside of a courtroom. It'll end up as a settlement check, with none of the involved parties admitting to anything. The lawyers will then move on to the next low-hanging fruit.

I've learned over time that it doesn't matter how righteous your defense is - all that matters is the money it'll cost to make the issue go away. Turns out, it's almost always cheaper to write a check than to defend yourself.


> Why are acquisitions legal?

It'd take at least a semester of public policy, a semester of economics, a semester of history, and a semester of legal studies to adequately answer that.

The shorter answer is that nonnatural persons and natural persons have the same rights to do what they want with their property, barring very specific exemptions. One of those exemptions is monopolies, but (and add this to the list of shit Ronald fucking Reagan and the University of Chicago screwed up, too) in the 1980s US anti-monopoly enforcement switched from focusing on ensuring a competitive marketplace to focusing on ensuring economic efficiency and consumer welfare, so it became much much much easier to merge with and acquire competitors.


".... I can't. No one can. It's a mathematical impossibility as a general solution for at least 2 separate reasons.

The first issue is that we're taking 64 bits of data and trying to squeeze them into 16 bits. Now, sure, it's not that bad, because we have the sign bit and NaNs and infinities, but even if you toss away the exponent entirely, that's still 53 bits of mantissa to squeeze into 16 bits of int.

The second issue is all the values not directly expressible as an integer, either because they're infinity, NaN, too big, too small, or fractional.

The only way we can overcome these issues is to decide what exactly we mean by "converts", because while we might not _like_ it, casting to an int64 and then masking off our 16 most favorite bits of the 64 available is a stable conversion. That might be silly, but it brings up a valid question. What is our conversion algorithm?

Maybe by "convert" we meant map from the smallest float to the smallest int and then the next smallest float to the next smallest int, and then either wrapping around or just pegging the rest to the int16.max.

Or maybe we meant from the biggest float to the biggest int and so on doing the inverse of the previous. Those are two very different results.

And we haven't even considered whether to throw on NaN or infinity or what to do with -0 in both those cases.

Or maybe we meant translate from the float value to the nearest representable integer? We'd have a lot of stuff mapping to int16.max and int16.min, and we'd still have to decide how to handle infinity, NaN, and -0, but it's still possible.

Basically, until we know the rough conversion function, we can't even know if NaN, infinity, and -0 are special cases, and we can't even know if clipping will be an edge case or not. There's lots of conversions where we can happily wrap around on ourselves and there are no edge cases, there's lots of conversions where we have edge cases but we can clip or wrap, and there's lots of conversions where we have edge cases that clipping/wrapping alone can't resolve."
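
To make the ambiguity concrete, here's a rough sketch (TypeScript, with made-up function names) of two of the possible "conversions" described above, a saturating round-to-nearest and a truncate-and-wrap; both are legitimate answers until the semantics get pinned down:

    const INT16_MIN = -32768;
    const INT16_MAX = 32767;

    // Option A: round to nearest, clamp out-of-range values, pick a policy for NaN.
    function toInt16Saturating(x: number): number {
      if (Number.isNaN(x)) return 0;          // policy decision: NaN -> 0
      if (x >= INT16_MAX) return INT16_MAX;   // +Infinity and big values clamp
      if (x <= INT16_MIN) return INT16_MIN;   // -Infinity and small values clamp
      return Math.round(x);                   // note: -0 comes back as -0, another policy call
    }

    // Option B: truncate toward zero, keep only the low 16 bits, and let it wrap.
    function toInt16Wrapping(x: number): number {
      if (!Number.isFinite(x)) return 0;      // policy decision for NaN/Infinity
      return (Math.trunc(x) << 16) >> 16;     // keep the low 16 bits, sign-extend
    }

    console.log(toInt16Saturating(70000.7));  // 32767
    console.log(toInt16Wrapping(70000.7));    // 4464 (70000 mod 2^16, sign-extended)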


You are hired!!!


> With 16-bit unsigned integers, you can store anything from 0 to 65,535. If you use the first bit to store a sign (positive/negative) and your 16-bit signed integer now covers everything from -32,768 to +32,767 (only 15 bits left for the actual number). Anything bigger than these values and you’ve run out of bits.

That's, oh man, that's not how they're stored or how you should think of it. Don't think of it that way, because thinking "oh, 1 bit for sign" implies the number representation has both a +0 and a -0 (which is the case for IEEE 754 floats) that are bitwise different in at least the sign bit, which isn't the case for signed ints. Plus, if you have that double zero that comes from dedicating a bit to sign, then you can't represent 2^15 or -2^15, because you are instead representing -0 and +0. Except you can represent -2^15, or -32,768, by their own prose. So either there's more than just 15 bits for negative numbers or there's not actually a "sign bit."

Like, ok, sure, you don't want to explain the intricacies of 2's complement for this, but don't say there's a sign bit. Explain signed ints as shifting the range of possible values to include negative and positive values. Something like

> With 16-bit unsigned integers, you can store anything from 0 to 65,535. If you shift that range down so that 0 is in the middle of the range of values instead of the minimum and your 16-bit signed integer now covers everything from -32,768 to +32,767. Anything outside the range of these values and you’ve run out of bits.
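
For anyone who wants to poke at it, here's a small illustrative sketch (TypeScript, using a DataView purely for demonstration) showing that the same 16 bits cover [0, 65535] when read unsigned and [-32768, 32767] when read signed, with only one zero:

    // Write a 16-bit pattern, then read it back both ways.
    const buf = new DataView(new ArrayBuffer(2));

    function reinterpret(u16: number): [number, number] {
      buf.setUint16(0, u16);
      return [buf.getUint16(0), buf.getInt16(0)];
    }

    console.log(reinterpret(0));      // [0, 0]          only one zero
    console.log(reinterpret(32767));  // [32767, 32767]
    console.log(reinterpret(32768));  // [32768, -32768] same bits, different value
    console.log(reinterpret(65535));  // [65535, -1]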


> ...If you shift that range down so that 0 is in the middle of the range of values instead of the minimum...

Not a downvoter, but: your concept of "shifting the range" is also misleading.

In the source domain of 16-bit numbers, [0...65535] can be split into two sets:

    [0...32767]
    [32768...65535]
The first set of numbers maps to [0...32767] in 2's complement.

But the second interval maps to [-32768...-1].

So it's not just a "shift" of [0...65535] onto another range. There's a discontinuous jump going from 32767 to 32768 (or -1 to 0 if converting the other direction).
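
Here's a tiny sketch of that mapping (TypeScript, illustrative only): the unsigned value carries straight over below 32768 and drops by 65536 at and above it, which is where the jump lives:

    // Map a 16-bit unsigned value to its two's-complement interpretation.
    function asInt16(u: number): number {
      return u < 32768 ? u : u - 65536;
    }

    console.log(asInt16(32767)); // 32767
    console.log(asInt16(32768)); // -32768  <- the discontinuous jump
    console.log(asInt16(65535)); // -1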

And actually, we don't know if the processor used 2's complement or 1's complement -- if it was 1's complement, they would have a signed 0!

I think they'd have to say "remapping" the range? On the whole, I think OP did about as well as you're going to do, given the audience.


> And actually, we don't know if the processor used 2's complement or 1's complement -- if it was 1's complement, they would have a signed 0!

We can infer it used two's complement, and absolutely rule out one's complement or any signed-zero system, because the range is [-2^15, 2^15), which contains 2^16 distinct integers, and with a signed zero two bit patterns encode the same value, so 16 bits only give you 2^16 - 1 unique numbers, one too few.


The range is of values, not their representation in bits, which can be mapped in any order. You could specify that the bit representation for 0 was 0x1234 and the bit representation for 1 was 0x1134 and proceed accordingly, and the range of values for those 16 bits could still independently be [-32768, 32767] or [0, 65535] or [65536, 131071] if you wanted.

We know the signed int they're talking about can't be the standard 1's complement because its stated range of values is [-32768, 32767]. If the representation were 1's complement the range would be [-32767, 32767] to accommodate the -0. It could be some modified form of 1's complement where they redefine the -0 to actually be -32768, but that's not 1's complement anymore.


Everything written in those three sentences you've highlighted from the article is correct. You may not like how they've chosen their three sentences, but these three sentences contain no lies.

Every negative number has 1 as its first bit; every positive number (including 0) has 0 as its first bit. Therefore the first bit encodes sign. The other 15 bits encode value. They may not use the normal binary encoding for negative integers as you'd expect from how we encode unsigned integers, but you cannot explain every detail every time.


It's not a sign bit or a range shift. Signed integers are 2-adic numbers. In an n-bit signed integer, the "left" bit b represents the infinite repeated tail sum_{k >= n-1} b*2^k = -b*2^(n-1) of the 2-adic integer value.

https://zjkmxy.github.io/posts/2021/11/twos-complement-2-adi...
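
A quick sketch of that claim (TypeScript, illustrative): weight the top bit of a 16-bit pattern as -2^15 and the remaining bits as usual, and you reproduce the two's-complement value exactly:

    // bits is a 16-bit pattern, 0..65535.
    function valueFromBitWeights(bits: number): number {
      let v = 0;
      for (let k = 0; k < 15; k++) v += ((bits >> k) & 1) * 2 ** k;
      return v + ((bits >> 15) & 1) * -(2 ** 15); // the top bit carries weight -2^15
    }

    console.log(valueFromBitWeights(0xffff)); // -1
    console.log(valueFromBitWeights(0x8000)); // -32768
    console.log(valueFromBitWeights(0x7fff)); // 32767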


Hey, so this is admittedly Monday morning quarterbacking, but in the future, you can definitely consider moving from Google Auth to Twilio's Authy [1]. It lets you move devices and all your secrets come with you (it's also got other cool features, but the one that is killer IMO is the ability to migrate from device to device).

https://authy.com/


I can’t recommend Authy enough. It’s multi device from the start and has cloud backup.

I once broke my phone with Google Authenticator on it and I spent 2 days locked out from my work accounts. Never risking that again.


One important note, though, is that the backup and multidevice support require their cloud servers*, so the threat model is a little different. They've got a blog post on how they do the cloud backup**, but since you need a password, it either needs to be something you can remember or be stored in a password vault that doesn't rely on getting a 2FA code from Authy for access.

* For the paranoid, there's a mode where it doesn't back up to the cloud, which makes it function the same as Google Auth, but that does defeat a lot of Authy's benefits.

** https://authy.com/blog/how-the-authy-two-factor-backups-work...


Sounds like they're finding out why most companies won't fuck around with outbidding competitors for talented employees just so that they can't work for a competitor.


Sometimes they're not entirely wrong though. The E46 design is now decently well-regarded by many car enthusiasts. Granted, I'm probably not the best person to ask about BMW aesthetics since I think the best looking one they've made is the clownshoe and thought the hood bulge was exceptionally lazy.


Part of the draw of deno is that, for better or worse, it's javascript. If you start changing things about the language used in the deno runtime such that it's no longer compliant ecmascript, then you no longer get the benefits of it being ecmascript. The devs' mental model of the language is no longer a drop-in; libraries and modules, including popular ones, are no longer guaranteed to work out of the box; etc.

You might say that all that is worth the benefit, but in that case why not just use another language that already gives you the feature you want, or why stop there? Why not also fix other issues with javascript at the same time, since we're no longer preserving compatibility? Something mental like the automatic type coercions? Or getting rid of var?


Deno has already made similar changes, like https://github.com/denoland/deno/pull/4341. That particular change happens to be allowed by the JS standard. The change discussed here isn't currently allowed, but I suspect TC39 would be open to making it allowed (though obviously it would not be allowed for browsers, in the same way the linked change to __proto__ is not allowed for browsers).

If you change something that most code isn't relying on, most code will still work. This change is plausible because it's very rare for code to be mutating built-ins. That's not true for most other possible "fixes". And most other changes would not have a benefit to consumers of the application (who cares if the library you're using has `var`s?), so they're much less well motivated.
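
To make "mutating built-ins" concrete, here's a hedged sketch (plain TypeScript/JavaScript, not Deno's actual implementation) of the kind of code affected by either change, next to the spec-mandated alternative that keeps working:

    // The __proto__ accessor is Annex B (optional), and it's what the Deno PR removed;
    // Object.getPrototypeOf is mandatory and works everywhere.
    const parent = { greet: () => "hi" };
    const child = Object.create(parent);
    console.log(Object.getPrototypeOf(child) === parent); // true, the portable way
    // console.log(child.__proto__ === parent);            // true in browsers, but the
    //                                                      // accessor may simply not exist

    // The sort of built-in mutation that freezing the intrinsics would stop:
    // (Array.prototype as any).last = function () { return this[this.length - 1]; };
    // With a frozen Array.prototype, that assignment throws in strict mode (and is
    // silently ignored in sloppy mode) instead of quietly extending a global object
    // that every other module also sees.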


I wouldn't consider that a similar change, because it's fully compliant with a newer js spec, which deprecated that feature. It sounds like, what deno has done is removed native support for older js specs and instead makes you transpile to an earlier spec. And by removing support for the earlier specs, they are able to drop support for deprecated features.

"allowed by the js standard" is key. As long as it's allowed by the standard they're still fully compliant with it. The compliance is necessary because no one wants to deal with "mostly compliant". Users want certainty, so "mostly compliant" becomes "fork their spec and make your own, so i can know what guarantees you make". That's why each change to the spec results in a new version. It's a self-fork of the previous spec.

If the change made it into the ts or js spec, I'm sure they'd add in support for freezing prototype chains, even if just as an option that can be toggled, but I doubt they'd ever want to break the spec just because they don't like parts of it. That opens the door to more changes because they don't like the spec, and eventually you have a new language, or worse, the original language changes out from under you to support something similar in a newer spec (eg typescript and namespaces/modules).


> That's why each change to the spec results in a new version. It's a self-fork of the previous spec.

That's not how it works, no. There is just the spec [1], which is updated frequently. I am editor of the specification. (There are annual editions as well, but no one should pay attention to these.)

> I'm sure they'd add in support for freezing prototype chains, even if just as an option that can be toggled, but I doubt they'd ever want to break the spec just because they don't like parts of it.

Well, like I said, if Deno's only concern is breaking with the spec here, I expect the spec could be updated to allow this behavior.

[1] https://tc39.es/ecma262/


>>> It is the fourteenth edition of the ECMAScript Language Specification

Fine, then sed s/version/edition/g

From the point of view of a maintainer that makes sense, but for users, each annual edition (or each feature moving into stage 4) is its own spec version, and unless the feature is absolutely groundbreaking, thinking in terms of annual editions makes it possible for users to grab various tools with confidence that things will work together smoothly.

Look at how Mozilla interacts with the spec [1]. They're not thinking in terms of the nightly version of the spec. They're looking at the annualized editions and making sure they support them as fully as possible. And then they communicate that support to their own users in terms of that annualized edition.

V8 consumes from nightly and uses the up-to-date test suites, and they explain their reasoning here [2], but notably they're still conceptualizing things for their users through the lens of annualized editions even though they're grabbing features when they're only at stage 3. They even tag blog posts about js with tags for the annualized editions that added the feature: [3].

As a user of the users of the spec, the annualized editions are super super helpful. I can only use features that have actually been implemented, and I have to make sure that each tool I use is only getting code that uses features it has implemented.

Can you imagine if every tool had feature-by-feature implementation matrices? "OK, ESlint understands new features A, B and D, but babel only implements B, C, and D, but library X doesn't support feature D yet, so since we want to use that we have to use bluebird instead of the native feature D for now" and on and on. It'd be madness, and we'd end up picking a handful of tools we like enough and then transpiling everything to es5, because our own users don't actually care whether we transpiled down to es5 or shaved enough yaks to realize that our toolchain natively supports feature B while everything else has to be transpiled out or replaced with our own in-code implementation.

Instead, each tool picks an annualized edition, and while slower tool release cycles can be annoying, I can actually turn that guarantee of standardized features into a toolchain with transpilation steps as necessary. That means as a dev I know I can safely use any feature in that annualized edition without worrying that some feature I haven't really used before is going to blow up somewhere in my toolchain because an implementer hasn't gotten around to it yet.

So while I like looking at the draft proposals and consider it important to know how the language is evolving, I have to wait for implementations to permeate enough of my tools before I can use the shiny new feature, which means annualized editions of the spec.

[1]: https://blog.mozilla.org/javascript/2017/02/22/ecmascript-20... [2]: https://v8.dev/blog/modern-javascript [3]: https://v8.dev/features/tags/ecmascript


Mozilla is absolutely thinking in terms of the nightly version of the spec. I agree that public messaging sometimes talks about annual editions, but this is mostly because it's a convenient way to talk about when features were added to the language, not because it reflects any underlying reality.

Anyway, that's not really the relevant thing. What I'm addressing is:

> deno has done is removed native support for older js specs and instead makes you transpile to an earlier spec. And by removing support for the earlier specs, they are able to drop support for deprecated features.

And that's just not a thing. That has no relationship to how the ES specification works. The __proto__ accessor was never mandatory; it was added in browsers a long time before it was specified, and then specified as optional (Annex B) when it was first added to the specification, and has been optional since then. This is true whether or not you think of there being a single specification or annual editions.

So, with that said, to address the specific topic of annual editions:

> "OK, ESlint understands new features A, B and D, but babel only implements B, C, and D, but library X doesn't support feature D yet, so since we want to use that we have to use bluebird instead of the native feature D for now" and on an on.

That's exactly how it works. eslint implements proposals at stage 4. Babel implements proposals as they come out and people contribute, but only adds them to preset-env at stage 3. The output for preset-env is based on what's actually supported in the browsers you're targeting. They both take a variable amount of time to land features once they hit the appropriate stage. Neither of them gates anything on annual editions. Neither do browsers. And browsers will frequently not have implemented features from multiple editions ago; for example, regex lookbehind was added in ES2018 and is still not implemented in Safari.


Not the OP, but the approach sounds roughly similar to SElinux, but for node.

A major problem with a half-secure security solution is that it's not actually secure. You might do everything right, and still get owned. For the threat model the solution needs to be complete (or able to be complete, but turned down to less than complete by the user*).

Part of the way this manifests with SElinux is that to have a fully locked down box with SElinux you have to consider access controls for _everything_ on the box on top of regular unix permissions. And to actually be a full solution, you have to install kernel headers because anything user-space isn't good enough to guarantee full security.

For node to make a similar guarantee locking down everything means turning off a lot of the features that allow javascript to be so dynamic or making major changes to the run-time implementation to support being able to use those features without making the security swiss-cheese.

And then, even though the system is securable, there's the issue of turning it on for all the modules you use. And not just the modules you use, but the modules your modules use, and on and on. You could delegate module-to-module permissioning to the module you import, but that means either granting overly broad permissions to the module and hoping they don't screw something up (and that any module they delegate to does the same), or not delegating anything and personally granting explicit permissions to every module, no matter how deep in your hierarchy of requirements. If that sounds exhausting, it kind of is.

In SElinux's case, what that usually leads to instead is rearchitecting things such that you can use virtualization to sandbox things and limit access across the sandboxes (ohai, it's the deno solution). There are still places where a VM or other sandbox isn't appropriate, though, and if you want to be on linux you have to use SElinux (or a competitor, but I've only touched SElinux), and it's not uncommon to have an entire team whose only job is to configure SElinux and support other teams that interact with it (eg, coaching them on how SElinux interacts with their codebase and what they need to tune, auditing teams' SElinux configs, and keeping SElinux working for the base system as that gets upgraded). And if you screw up or get lazy with auditing permissions, you've just limited the effectiveness of SElinux, possibly rendering it useless.

A large part of the reason SElinux is so hard to use and use right (and that directly translates to js and node) is that it's attempting to bolt-on security to an existing system that wasn't designed with (that kind of) security in mind. That's a monumentally hard thing to do in a way that doesn't require rewriting everything that uses it. And not having to rewrite everything is a hard requirement, because if you're going to rewrite everything that uses it, it's usually cheaper and easier to just make something new from scratch (in SElinux's case a new OS, in js's case a new language).

So, holistic options:

1) Remove all the dynamism that makes javascript javascript. This (potentially) breaks all existing code. Call it rustscript, get that to ship in all the browsers, get all the websites to use that instead of javascript, and then make a serverside environment for rustscript. Now you can do fine-grained module permissioning.

1.5) Remove only the dynamism that breaks this sort of security access control as part of a new ECMAScript spec, and add support for the security access control at the same time. This breaks existing code, but the old code can still run in a runtime for the earlier spec. New code can take advantage of the new spec features, old code can be modified to work with the new features, and broken code can be rewritten to the new spec. This makes rustscript ESNext. It will be up to the various runtimes to support the new spec, so nodeNext would have support for it but it wouldn't get backported. Browsers will require transpilation from ESNext to an earlier ES version as they do now, but eventually even they would drop support for the older js versions.

2) Accept that module permissioning systems are easy enough to get around in JS that anything attempting to implement them is at best security theater. The deno solution isn't security theater, but that's because it makes much less stringent guarantees (ie only runtime granularity and not module granularity).

* why allow the user to make themselves insecure? In some cases, the user will choose to be less secure for some external reason or will be using the security solution as a part of a more holistic security solution, so some other part guarantees the security that is given up.
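
To make the runtime-granularity contrast in option 2 concrete, here's roughly what Deno's model looks like in practice (a minimal sketch, assuming a local config.json): permissions are granted to the whole process on the command line, not to individual modules.

    // save as read_config.ts, then:
    //   deno run --allow-read=./config.json read_config.ts   -> the read succeeds
    //   deno run read_config.ts                               -> the read is denied (or prompted for)
    // Every module in the dependency graph shares whatever the process was granted.
    const text = await Deno.readTextFile("./config.json");
    console.log(`config is ${text.length} bytes`);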

