I tend not to click on links advertising pages that are hacked. You know, not that many zero days on Chrome, but still seems like a risky click, as they say.
They might be looking for client addresses of specific targets. Then again, if they were trying for a "bank shot" attack on some particular target, they almost certainly wouldn't be trashing the front page to let the world know that the whole site is compromised.
Oh, those Turkish hackers. Remember the mid-2000s, when searching a Google dork for a certain public exploit would almost certainly turn up lots of already-defaced websites? Even on some forsaken lice-breeding forum with 2 users, there would always be a mad Photoshop collage with a star and crescent on a dark background, and a message to those few poor visitors, who probably wouldn't even comprehend what was going on.
Anyway, what's up with Turkey and hacking?
Ever seen graffiti of someone's name you can't even decipher on the back of a trash can at a far-off bus stop near the forest? It's kind of like that.
Well, that makes sense. But why announce that you're Turkish? For example, there are lots of Russian hackers and skiddies, but I have never seen a cr3w called RussiaStrongSec.
Sometimes they are spreading a political message (I've seen more Syrian hackers than Turkish on defaced websites recently, incidentally) so they want to spread their identity. Just like hacking groups that are in it for infamy spread their identity.
Of course in some cases they may be false flag operations, always a possibility worth keeping in mind.
You can always run the browser in a virtual machine. Or open the page with a text browser like Lynx or Links. Or use wget to download the file and read it in a text editor.
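For example, a minimal sketch of that last option (Python 3 standard library only, and the URL is obviously a placeholder) that pulls down the raw bytes so nothing ever gets rendered or executed:

```python
# Fetch the page's raw bytes; nothing is rendered and no scripts run.
# The URL is a placeholder - and if you're paranoid, run this inside the throwaway VM anyway.
import urllib.request

url = "http://example.org/suspicious-page.html"
with urllib.request.urlopen(url, timeout=10) as resp:
    raw = resp.read()

# Dump to disk and poke at it in a plain text editor.
with open("page-dump.html", "wb") as f:
    f.write(raw)

print(f"Saved {len(raw)} bytes to page-dump.html")
```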
Obviously, the VM should have no access to audio outputs, the display driver should be rewritten to scramble output so it's only viewable through a Lenslok-like device, and the whole setup should run on an isolated computer sitting in a clean room with a dead man's switch installed that - in case of unforeseen consequences - would quickly power the whole apartment down and call for emergency services.
I think you're neglecting to consider the possibility of seismic communication by doing a lot of client-side computation to make the CPU fan kick on and off.
Yet another example of why it's important to both sign release artifacts AND verify them.
Also, if you're running the public website for a security lib or core FOSS package, expect more attacks by kiddies trying to build rep... so very conservative tech choices (mostly static website served from a read-only fs) and defensive practices are de rigueur.
What's the use in a static website or a read-only FS when you can overwrite what's in RAM, or just attack routing or DNS? Security is a little more complex.
The point is to minimize attack surfaces. If you're serving static content that's one less path for an attacker to potentially exploit. With only static files exploits are limited to those contained in the web server or the OS network code. With a read only filesystem certain classes of privilege escalation are eliminated.
Attacks on routing or DNS are more difficult to deal with, but at least it isn't your server being compromised, and if you're using HTTPS properly then the certificate should show as invalid at least.
So yeah, security is complex, but his advice was spot on. The fact that you seem to call it into question says that you don't know much about security.
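To make the static-content point concrete, here's a rough sketch of the shape of that setup (not anyone's actual config, and in practice you'd use a hardened web server like nginx rather than Python's http.server): a process that can only ever hand back existing files from one directory, with no application code for an attacker to reach.

```python
# Illustration only: a server that serves nothing but existing static files.
# The directory path is made up; mount it read-only for the full effect.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="/srv/www/static")
server = HTTPServer(("0.0.0.0", 8080), handler)
server.serve_forever()  # only GET/HEAD of files on disk; no dynamic code paths
```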
Thanks for also seeing the bigger picture. In the future, I will only submit complete production conf mgmt repos uuencoded in my comments. Readability is overrated.
Are you kidding me?! At the very least suggest Grsec, SELinux, containers! Who gives a shit about "certain classes" of privilege escalation? Are you securing your webserver against 5th graders or actual hackers?
If you want to minimize your attack surface, what he suggested is quite possibly the least effective possible thing anyone could do. I point out just a few of the more important issues to consider first, and you tell ME I don't know about security? I don't know what kind of systems you secure, but mine don't rely on 'mount -o ro,remount /' as a defense strategy.
You're missing the bigger point: enumerating every possible defense is beyond the scope of a comment AND does not exclude any technique by omission. If you'd like to raise technologies in a civil manner, please. Just don't start getting defensive and name calling. [1]
You can't claim that serving static content from a read-only filesystem is "de rigueur" and that securing your DNS registration is "Fort Knox". I think the point was that the OP was presenting static content from a read-only filesystem as the solution, and the reply was pointing out that this is hardly the best or first action to take.
I would personally expect that the openssl group is suffering some embarrassment, but this sort of hack is a risk of the business. Hopefully we get a good writeup.
I said this in a lower thread but I figured it's better up here.
Why is there not a standard for links of this type in browsers? E.g. <a href="url" sigurl="url to sig" sigalgo="algo to calculate signature">OpenSSL</a>
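To sketch what a browser (or a user script) might do with those hypothetical sigurl/sigalgo attributes - assuming a plain SHA-256 digest published at the sig URL, and with the obvious caveat that a digest fetched from the same server proves very little:

```python
# Sketch of the proposed sigurl/sigalgo flow; the attribute names and URLs are
# hypothetical, and a digest hosted next to the artifact adds little trust by itself.
import hashlib
import urllib.request

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

artifact = fetch("https://example.org/downloads/openssl-1.0.1e.tar.gz")   # href
expected = fetch("https://example.org/downloads/openssl-1.0.1e.sha256")   # sigurl
expected = expected.decode().split()[0]

digest = hashlib.sha256(artifact).hexdigest()                             # sigalgo: sha256
if digest != expected:
    raise SystemExit("Checksum mismatch - refusing the download")
print("Checksum OK:", digest)
```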
That's a simple way to go, but I really think it's generally as insecure as reading a signature from a URL that is advertised by the website itself. It's also why I rarely bother.
But if browsers were good about this, then it could be done in a much better way, which is to sign the application with a real, peer-verifiable signing method, such as the SSL cert that covers the site behind the open source project.
Now, this only works for projects that have SSL certs. Another method would be to have a clearing house that can do 1-to-1 verification with GitHub et al. and issue a cert, like an OSS cert organization. A final good way would be to use the beauty of git: take the source checksums and a repeatable build process (which is fricking hard) and come up with a way to give a signature for OSS applications based on a git commit, then check that back against the public git repository.
Really, I think known public keys for OSS projects and branches would be the real answer. And the security gating for newbs would be like Windows and Linux, which check the public signature of an application before running it from the web, making the end user feel safe instead of doing nothing.
Browsers have a good share of the responsibility here as well. Standard domain security should work well here too - better than what we have.
I leave this to more entrepreneurial minds to make it work, and I'd love some real telegraph-style sinkers to point out the flaws. This is just me talking after a belated Xmas dinner, but I think I'm kind of on course.
Yeah the public key used for signing has to be communicated via a different channel, otherwise we're spinning our wheels. I think DNSSEC is headed in the right direction, but hasn't arrived yet.
But that's if we're talking websites in general. For the specific use here, installing "trusted" software packages, far better solutions already exist and already protected the users of OpenSSL.
Point is, I should not be able to access a plaintext version of a website hosting such cryptographically crucial software/information.
It's obvious why it can be a grand target for man-in-the-middle attacks, defacement, and, worst of all, integrity attacks. Apart from preventing many of the latter, implementing HSTS could have really mitigated the problem. Anyone who had already visited the site wouldn't see the defaced page. Furthermore, the site could be added to an STS preloaded list[1], making the attack invisible to anyone using a modern browser.
If you are interested, the Wikipedia page[2] does a fair job at explaining more about why HSTS is needed.
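For anyone curious what it actually looks like on the wire, the header is a one-liner. A toy sketch follows (a bare WSGI app standing in for whatever really serves the site; note the header is only honored when delivered over HTTPS, which wsgiref alone doesn't provide):

```python
# Toy WSGI app showing the Strict-Transport-Security response header.
# "preload" marks the site as eligible for browsers' STS preloaded lists.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/plain"),
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains; preload"),
    ]
    start_response("200 OK", headers)
    return [b"served over HTTPS only\n"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8080, app).serve_forever()
```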
Or maybe they should have plaintext access, because then they can get the software if they don't have an SSL-enabled HTTP client, and they can compare the digital signature of the sources later via 3rd parties.
But this is all crap, really. OpenSSL is a library distributed across tens of thousands of independent providers all over the internet. Nobody needs to get it from the main site, and even if they do, there's a multitude of ways to tell whether it's the real deal or not. If you know what OpenSSL is, you can type in "https" and be totally secure without HSTS. Even if you needed HSTS (which you don't), who's visiting the OpenSSL.org website often enough that HSTS would even be useful?
People make way too big a deal over half-baked countermeasures that don't apply to every case. You find me the person who's downloading vanilla OpenSSL libraries from the main site over multiple visits, is at risk of a client-side MITM, and isn't verifying their sources, and I'll show you someone who's going to get owned even without a MITM.
You seem to have completely missed the fact that HSTS can be combined with an STS preloaded list, to which the previous comment author also gave you a reference to read. HSTS is TOFU (trust-on-first-use) by default, but the TOFU portion can be upgraded to a full PKI-based authentication mechanism just fine using preloaded lists, which doesn't require any previous visits whatsoever.
Furthermore, it's just embarrassing to see such attacks on these high-profile sites. If they can't defend their site (even for users who don't type "https://"), who is to say their library is secure? These kinds of attacks threaten the confidence we have in our fundamental cryptographic building blocks and should be avoided.
1. These attacks don't bear at all on cryptographic building blocks, because any script kiddie can brute force a login or fuzz an input, etc. Has nothing to do with crypto at all. Has nothing to do with their code at all. Completely different systems with completely different attacks with incredibly different requirements... it's worlds apart. There is nothing about defacing a website or even MITM that could ever be compared to breaking a crypto library.
2. HSTS is quite a bit different from the PKI of TLS. That said, what you are basically saying is "HSTS is the same thing as HTTPS when you add the URL to the browser's STS whitelist". (Which could be avoided altogether if you just type "https")
So what your argument really boils down to is: a crypto library is worthless because we really need to use HSTS because OpenSSL.org users are too stupid to type "https".
Not only is this inaccurate and nonsensical, it's just a crappy security model. Now I have to manage HSTS exactly the same way TLS certs are managed and basically reproduce one protocol into a pseudo-protocol and juggle both, keeping things like RFC 6797 in mind. Suddenly the complexity's gone up enormously over time, only to prevent MITM on the client, and we don't even consider whether or not the content was compromised before the connection of the client to the server.
Even if the server is hacked, you still have absolutely no guarantee if the content was modified, because you're not verifying the content once you download it. But sure. Let's freak out about the possibility of a one-sided MITM to the client over HTTP (which every idiot knows is not secure by design), because that's clearly the most important or likely attack to be worried about right now.
I'd never heard of HSTS, but after reading that wiki page I find myself wondering along with Peter how much good it could ever do, in general, that a 301 to https doesn't already do. Sure it's good for a naive user who has already been to the correct uncompromised site. Sure it's good for a naive user visiting a popular Chrome-approved site with Chrome for the first time. I don't see how these two minor wins (especially for a site like that under discussion) make it worth the effort.
Which is to say, 99.99% of people. No one types "https://" or "http://"; 99% of people don't even know what HTTPS means. We developers should know better and protect our users. It's our responsibility to make sure the default is secure.
SSLStrip does not work on valid HTTPS requests. If you request an HTTPS page, it can not be subverted into HTTP. If it could, HTTPS would be pointless. So, yes, HSTS is not required for a valid HTTPS request. This is not some semantic argument, or some sort of side channel attack crap. HSTS is not necessary for HTTPS requests, period.
1) hack high profile website
2) wait for it to be posted on hackernews
3) restore the page to appear normal, but embed a browser exploit
4) ...
5) profit!
I've wondered this for quite a while, but why isn't there a standard for browsers like <a href="bigassfile" checksumhref="checksumhrefforbigassfile" checksumalgo="shashamd19">Download with check</a>? I mean, no one ever checks them anyway, so it's not like they're useful now. The second step would be to provide a reputable repo of software version -> checksum lookups so I didn't have to trust a given server for that. This is me thinking and drinking and I'd love comments.
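The "reputable repo of version -> checksum lookups" half could look something like this rough sketch - the lookup service, its URL, and the JSON shape are entirely made up for illustration:

```python
# Made-up third-party checksum service: ask an independent party for the expected
# digest instead of trusting the download site to vouch for itself.
import hashlib
import json
import urllib.request

def expected_sha256(package: str, version: str) -> str:
    url = f"https://checksums.example.org/v1/{package}/{version}"  # hypothetical service
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["sha256"]

def verify(path: str, package: str, version: str) -> bool:
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == expected_sha256(package, version)

print(verify("openssl-1.0.1e.tar.gz", "openssl", "1.0.1e"))
```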
Content-Security-Policy is doing something vaguely similar with <script> tags, where you add a nonce in the HTTP header and then only <script nonce='foo'> tags with those nonces are executed.
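Roughly, that mechanism looks like this sketch, with the nonce generated fresh per response:

```python
# Sketch of the CSP nonce mechanism: a fresh nonce goes in the response header,
# and only <script> tags carrying that same nonce are allowed to execute.
import secrets

nonce = secrets.token_urlsafe(16)

csp_header = f"Content-Security-Policy: script-src 'nonce-{nonce}'"
html = (
    f'<script nonce="{nonce}">\n'
    '  console.log("this inline script is whitelisted by the nonce");\n'
    "</script>"
)

print(csp_header)
print(html)
```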
You run right back into the same problem: if you don't already trust the signer of the checksum, you can't trust the checksum, either.
The next logical step is some kind of third-party authority, and then you run right into the Certificate Authority problem set, including code signing licenses like Apple and Windows use.
Some F/OSS systems are starting to use similar systems, like the newer Python package distribution systems.
Does anyone have any details about how this was done? Was it a compromised admin account, a local root exploit, social engineering, etc? I'm eagerly awaiting the post-mortem.
If they can replace the front page html, they could probably also replace the source code distribution with a backdoored/trojaned tarball. Or someone else might already have done so, since who knows how long ago, using the same exploit.
That. That's why the authors PGP-sign their sources. Furthermore, some of us maintain GPG trust paths, so replacing it on every other place on the Internet would still be futile.
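For anyone who hasn't done it, the check itself is one gpg invocation; a small sketch below (file names are just examples, and the result only means something if the signer's key is in your keyring via a trust path you believe):

```python
# Verify a PGP-signed tarball by shelling out to gpg. File names are examples;
# the verdict is only meaningful if you trust the signer's key through some path.
import subprocess

result = subprocess.run(
    ["gpg", "--verify", "openssl-1.0.1e.tar.gz.asc", "openssl-1.0.1e.tar.gz"],
    capture_output=True,
    text=True,
)
print(result.stderr)  # gpg writes verification details to stderr
if result.returncode != 0:
    raise SystemExit("BAD or unverifiable signature - do not build this tarball")
print("Signature verified")
```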
As it should be. There was a story on HN a few weeks ago about why open source projects are better off not running on funding. Something about making it obligatory to work on the project and add features just to do something. And of course the people "donating" have some say in what's going on. I'm not saying backdoors per se, but should we want any sort of pressure this way?
Sounds like there is a need to sponsor OSS writers and not the actual projects. Kind of like having tenure but of course it would have to be voluntary, merit-based, etc.
Other pages are still up (although I haven't checked that they're unmodified) - it does appear the attacker didn't bother to bring anything but the front page down.