This is why sites choose to stay vulnerable to Firesheep (google.com)
354 points by ericflo on Nov 7, 2010 | 139 comments



This is a problem we (GitHub) are facing in a big way right now. Google Charts doesn't offer https alternatives, so almost all our users get a big "this site is going to steal all your private information" (mixed content warning). We chose to roll out SSL first, then deal with the hard problem of mixed content warnings (building ridiculous image proxies) later.

I think a lot of developers underestimate how big an impact this warning has on users, especially in browsers like IE that throw up a dialog on every page with mixed content. Developers understand that it's not that big a deal, but to a user it looks like the site is full of viruses and malware and is going to steal all your bank account information.
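
For illustration, here is a minimal sketch of the "ridiculous image proxy" idea mentioned above, assuming Python with Flask and requests; the route, parameter names and signing scheme are all hypothetical, not GitHub's actual implementation. The proxy fetches a plain-HTTP image (such as a Google Chart) server-side and re-serves it from the site's own HTTPS origin, so the page stays free of mixed content:

  # Hypothetical sketch: re-serve plain-HTTP images from our own HTTPS origin.
  # Flask, requests, and every name below are assumptions for illustration.
  import hashlib, hmac

  import requests
  from flask import Flask, Response, abort, request

  app = Flask(__name__)
  SECRET = b"change-me"  # used to sign proxied URLs so this isn't an open proxy

  def sign(url: str) -> str:
      return hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()

  @app.route("/img-proxy")
  def img_proxy():
      url = request.args.get("url", "")
      token = request.args.get("token", "")
      # Only proxy plain-HTTP URLs that we signed ourselves.
      if not url.startswith("http://") or not hmac.compare_digest(token, sign(url)):
          abort(403)
      upstream = requests.get(url, timeout=5)
      return Response(upstream.content,
                      content_type=upstream.headers.get("Content-Type", "image/png"))

Pages would then reference /img-proxy?url=...&token=... instead of the chart URL directly.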


This is a problem we (GitHub) are facing in a big way right now

This also broke Bingo Card Creator something fierce when I rolled out SSL support. Mixed content was the reason I hadn't offered SSL previously; I knew it was going to be a problem going in, I tested for it, and I still managed to hose two pages that were critical to my business for most of a week.

Figure on a 40~50% drop in conversion from a non-technical audience on IE if they get one of those popups, by the way. It is the worst possible place to be: not enough to trigger an automated "Oh cripes!" from the website, but big enough to murder business results.


During the last Velocity conference, one of the last sessions on the last day was a talk from Google guys about how to make SSL faster, because they had recently turned SSL on for all gmail accounts.

I asked how they deal with the unlocked icon and warning dialogs for mixed protocol content on the page and the response was that people are so used to the popups and the lock being unlocked, that they (Google) don't consider it to be a problem. The response was really short and curt and I felt it was kind of a cop-out.


Well, as I recall, several of the questioners at that session were verging on the point of heckling, so many of the responses were short.

The answer is that permitting mixed content was probably a mistake in the first place, but it's one that we have to live with. The ease of mixing content means that many sites get it wrong (including Google sites, to our shame), and the lack of ubiquitous SSL (again, including some Google sites) imposes that on others.

So, I suppose that "we don't consider it a problem" is roughly correct regarding warning dialogs: the answer is not to mix content. The problem is that it's clearly too difficult to do that. (The inability of networks to cache public resources over HTTPS is also an issue and possibly one which we'll address.)

Lack of SSL on the Charts API is a new one, but I'll look into it now that I know that it's a problem.

As for the rest of the problem: fixing stuff is hard. Miraculous answers invariably tend to be so only in the eyes of the conceiver. We'll keep plugging away.


That's good to hear. But considering that only now is SSL considered to be "important", because of FireSheep, it would have been nice to have a major player like Google seriously consider/suggest/lead the dialog on solutions here, even if some of them are "hard" or unworkable. It's nice to have options, or know what the options are. Or even to say "there is no solution, create systems that don't mix protocols".

I mean, when I went back and summarized my experience at Velocity to the rest of my team, the way this question was glossed over led to some audible guffaws. Because we've all been dealing for years with users who don't know how to deal with the UX of this problem.


How's that free SSL-enabled google maps service coming along? Any chance we'll ever see it without paying umpty-thousands of dollars for the privilege?


Agree 100%. I wasted most of a day (in increments over several weeks) worrying about whether I had screwed something up at my end, tweaking my gmail settings, analyzing TCP traffic and so on. A little bit of information from Google's end would have saved me hours of needless security anxiety.

Lack of complaint != contentment. I am pretty annoyed to hear of this indifference to users' peace of mind.


Somebody should tell the Chrome team that. A recent version of Chrome changed the mixed content warning indicator from a relatively innocuous "padlock with a cross" to an alarmist "skull and crossbones". We got a lot of complaints about that (due to not yet having built the "ridiculous image proxies" kneath complains about above).

It seems like they may have thought better of this change, since my current version of Chrome (6.0.472.63) seems to have gone back to the padlock-and-cross.


I think it actually varies depending on the type of warning, some being considered more severe than others.


Maybe they feel like it's not a problem for them because average users trust them without a second thought.


I suppose; unfortunately, the talk was in the context of making the same kinds of changes to your, or any random, site to make it more feasible to use SSL.

Also unfortunately, when there is mixed protocol content, especially with email, you're not asserting trust of the page origin, but of the additional assets loaded. Google has no control over the content referenced in emails. Encouraging people to ignore the warnings doesn't make anyone safer, whether or not they're informed enough to care.

One of the suggestions was to use shorter key lengths to make SSL less expensive to process; this wasn't considered a welcome suggestion by many of the more security-conscious and vocal folks in the room.


The warning isn't spurious, by the way. A man in the middle could inject evil JS into urchin.js (or whatever the equivalent is now) just as easily as he could inject it into your site's JS; the page is not secure.


Indeed, the warning has its merits.

That being said, the second part of your argument is completely wrong. You can just as easily inject evil JS using an https server and never get the mixed content warnings.

The warning serves to indicate to users that some assets (think important-financial-graph.jpg) aren't being served over the same encryption as the rest of the page. But then again, browsers like Safari have no problem with this. Other browsers like Firefox (correctly) cache these assets on disk if Cache-Control: public is set, thereby storing the asset unencrypted.

The warning may not be spurious, but it doesn't tell you whether the page is secure or not.


> You can just as easily inject evil JS using an https server and never get the mixed content warnings.

Only if the user ignores the "invalid certificate" warning.


1. Include https://hot-new-metrics-startup.com/tracker.js

2. hot-new-metrics-startup gets hacked. Sends over malicious js

3. Your page is no longer secure. https certificate remains.

We can argue semantics, but I guess I'm more concerned about the end result than semantics.


The problem is that you asserted it is "just as easy".

It certainly might be possible for the attacker to compromise a specific server that you have chosen to trust - but that's a much higher barrier to an attacker than performing MITM on an open Wifi connection which doesn't require them to compromise any server.


Okay, here's another "just as easy" scenario:

1. You include http://google.com/trusted.js on a https page

2. Someone goes to a cafe, opens up your website with Safari while someone is performing a MiTM attack on that file.

3. No warnings, your user is compromised.


Yeah, then don't do that, that's the point. Whether you include mixed content in your site is up to you, the developer.


The entire concept of the SSL icon is so that a user can trust a third party (web developer) that they don't know. If it's up to you (the web developer) — it's all up in the air again. And the icon/warnings are pointless. Which is where I've been trying to go with this…


Eh? SSL provides security against eavesdroppers, network manipulation, etc. - it doesn't help against a malicious site that you actually intended to communicate with.


Any browser which doesn't warn about that in some way is essentially broken. (Yes, I see you cited Safari as one, but it must be the only one as far as I know - it does remove the padlock, but that seems pretty inadequate ...)

EDIT: I do take your point in that I think IE is the only browser that actually blocks the content. The others warn about it but still load it, by which time, of course, the damage is done.


Our theory is that an SSL site including non-SSL content is no better or worse, in terms of security, than a completely non-SSL site.

What is the purpose of warning more prominently in the scenario described than in the scenario where the user goes to a non-SSL site in the first place, or is redirected from an SSL login form to a non-SSL page?


Saying you can hack an analytics company's servers is cheating. I can just say I can hack GitHub's servers. Or obtain a root SSL cert. Or crack SSL.

If you don't trust a company and their competency at security you probably shouldn't be using their service for anything sensitive. You can't assume that your users aren't on hostile networks vulnerable to MITM attacks, etc.


Absolutely, but the protection SSL provides is that it forces the attacker to actually compromise hot-new-metrics, whereas without SSL they can skip the first part of step 2 and just send malicious JS through a MITM without ever having to compromise any of the services involved.


Unless they're using Safari, which doesn't have a mixed content warning. My point is that it doesn't fix anything at all. It's like plugging 8 of the 10 holes in a leaky bucket: the water is still going to leak out.


It is the webapp developer who ultimately decides whether or not there is mixed content, not the browser. If you don't mix content in your webapp, an attacker who controls the network shouldn't be able to change your content (not even to inject references to new untrusted HTTPS or plain HTTP servers), or that of trusted service providers. The browser needs to implement SSL securely, but even users with a browser with no mixed-content warning benefit from there being no mixed-content.

The mixed content warning helps to warn the developer of the site of the problem, and let users of browsers that support it know that they are not fully protected.


Ask tptacek how hard it is to pull off SSL man-in-the-middle attacks in the wild.

Hint: People sell out-of-the-box solutions to the problem.

It's trivial to get certs that browsers won't choke on. You have to do more than check that the cert isn't "invalid"; you have to actually examine it carefully, knowing which cert sellers are trustworthy and which are not. Your SSL lock icon is useless.


Right. Users never do that!


How is 'mixed content' any more dangerous than unencrypted HTTP? Why is the user not warned every time they go over an unencrypted connection?


Thinking you have a secure connection when you don't is worse. On (generic hostile network), thinking you can check your mail safely is worse than knowing you can't.


I am not going to argue for the correctness of this position, but presumably it's because the "secure" icon (e.g., the little lock) is absent.


For Google Charts, there is a workaround: simply change the hostname to www.google.com. For example:

  http://chart.apis.google.com/chart?chs=200x200&cht=qr&chl=http://www.adperium.com/ (normal)
  https://www.google.com/chart?chs=200x200&cht=qr&chl=http://www.adperium.com/ (HTTPS)
It's probably not what Google prefers, but this works for us.


It does work and Google definitely does not want the public to use it. A Googler's post on the topic from 2007:

http://groups.google.com/group/google-chart-api/msg/85186f74...


In writing a plugin that rewrites URLs as https (http://github.com/nikcub/fidelio) I found that this worked in a lot of places. Facebook does not explicitly support ssl everywhere, but you can rewrite the requests to https servers and it works.


Exactly. Browser makers (including Mozilla/Firefox to a large degree) are responsible for the fact that HTTPS hasn't become the standard protocol, as it should have years ago. It's not only the unproductive mixed content warning but also the insistence of all browsers on only accepting expensively bought certificates, and on throwing a very scary and hard-to-overcome error dialog if a site uses any other kind of cert. While that isn't a problem for big(gish) commercial sites like GitHub, it presents an insurmountable hurdle for private sites and small-time projects for no good reason. For most sites I don't need "secure" origin verification as badly as encryption. The lack of a verifiable server address shouldn't mean that I get bullied into not using an encrypted connection. But even if the verdict is that you absolutely can't have one without the other, browser makers should AT LEAST include trusted root certs of authorities who offer free SSL certificates, too.


While your frustration is understandable, I think you're speaking from the perspective of a tech-savvy person and not the average user. If browsers began accepting all free / self-signed certificates, it would be only a matter of time before something like "Firesheep FX" came along and permitted random strangers to MITM anybody's SSL session. Some of us can notice when that happens, but most people won't have a clue unless the browser presented them with a big scary red warning.

However, I agree with you that we need some good free CAs. The difference between free and $10/year is bigger than most of us think it is. Fortunately, there are registrars such as Gandi which will give you free certificates with every domain.


> If browsers began accepting all free / self-signed certificates [...]

Right now, browsers are accepting any unencrypted old HTTP connection without any warning, while non-verified securely encrypted connections are actively prevented. Tech people can circumvent the block, but normal users cannot. Nor do they have any reason to because the warning they are being shown sounds like the end of the world, while any unsecured connection looks perfectly fine to them. This is something that could be done right now to make everybody more secure, at no cost, but it threatens the business model of companies like Verisign.

Nobody is suggesting that browser makers should display the much-sought-after "lock of absolute protection" icon on any random SSL connection, I'd be fine if they reserve that for paid-for-certs. I'm merely suggesting they show free (or even self-signed) certs the same courtesy as basic HTTP, the most permissive protocol of all time, instead of actively preventing users from using encryption.

I agree with you about the threat of "Firesheep FX" and believe Wifi connections should probably all use WPA2, even at coffee shops where internet access is free. The threat of MITM is real, but the attack can be made more difficult using a number of schemes, and even free certs offer way more protection than any unencrypted link ever could. Yet we are currently encouraging unencrypted connections while actively blocking encrypted ones.

If HTTPS could have the same UI mechanisms as, say, an SSH connection I'm convinced the online world would be a much safer place.


OK, I see what you mean. If you're suggesting that websites protected with untrusted certificates should be treated as if they were plain HTTP sites, then I agree with you. Chrome crosses out the "https" part of the URL if the page contains insecure elements. Something similar might be the right way to treat untrusted certificates.


Wifi connections should probably all use WPA2, even at coffee shops

If you just write the password on the wall, it defeats the purpose: everyone who logs in is on the same network again, just like a public network.


Everyone being on the same network isn't too much of a problem. They still can't read each other's traffic. See http://en.wikipedia.org/wiki/IEEE_802.11i-2004 or http://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA2

Every device negotiates its own keys with the access point.


Very common misconception, but it's still a problem. Any client with the network password can capture another client's initial key negotiation, and then decrypt that client's subsequent traffic. You can enter the network password in Wireshark: http://wiki.wireshark.org/HowToDecrypt802.11 .


If the shared key is known, it's also trivial to install a rogue access point with the same SSID and a transparent, tampering proxy without a realistic chance of anyone noticing.


Thanks.


If the AP and the gateway are separate devices, couldn't I just ARP-spoof the gateway and get all traffic on the wireless network sent over to me, regardless of encryption?


Yes, I should clarify and add that it means you can see the traffic, even if it is switched.


> The difference between free and $10/year is bigger than most of us think it is.

> I agree with you that we need some good free CAs

https://www.startssl.com/ Supported by just about every browser. Entirely free. A fellow Hacker News user linked to it in a similar thread. I was impressed :)


Last time I checked, StartSSL was not recognized by some (slightly old) browsers. I wonder if this issue has been resolved now?


Their root CA was generated in 2006. In theory, any browser shipped before 2006 will not support it unless it was added (through, for example, Windows Update). IE7+ is supported; I haven't tested (and don't care to test) IE6.


About 3 years back I used them, and back then IE6 didn't work out of the box. The Windows root cert update didn't help either back then.


StartSSL (http://www.startssl.com/?app=1) provides free certificates that are browser-recognised.


Thanks for the link, I didn't know them. I just tried it: I generated a certificate for a site of mine, uploaded it, changed the config, and the cert was picked up by Firefox. Sadly, however, the authority of StartSSL was NOT recognized by Firefox. This is what it said in the egregious warning dialog:

  -------.com uses an invalid security certificate.

  The certificate is not trusted because no issuer chain was provided.

  (Error code: sec_error_unknown_issuer)
StartSSL does not work for me, unless I did something wrong, which happens from time to time. I verified that the StartSSL cert I installed was downloaded by FF; it just doesn't recognize StartCom as a trusted cert authority (apparently). Can anyone confirm this?

Edit2: You guys were right, thanks! I did paste the intermediate certificate into the wrong file, my bad! It works!


Off the top of my head, you probably didn't include the intermediate certificate. Read #31 in the FAQ: "Why does Firefox present a warning when connecting to my website?"

http://www.startssl.com/?app=25


If you're using nginx, check out [1] for instructions on how to get it working.

[1]: http://blog.dembowski.net/2010/02/25/startssl-and-nginx/


I've been using StartSSL for quite some time, and only wget has been unwilling to accept it (whereas curl, firefox and chrome have all accepted it):

  ERROR: cannot verify [site]'s certificate, issued by `/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 1 Primary Intermediate Server CA':

  Self-signed certificate encountered.
  To connect to [site] insecurely, use `--no-check-certificate'.


Or you could download their certificate locally...


That's what I don't get: why Mozilla doesn't simply remove the root certificate of any organization which charges more than $10 for a wildcard certificate is beyond me.

The verification is the same; there is no good reason it shouldn't be the same price.


I don't really understand why these Certificate Authorities exist and need to charge money to sign a digital certificate. Couldn't some sort of user-based distributed network be used for authentication? I mean, we trust compilers and code patches (which do see many eyes) that we then load onto our PCs. Why can't we trust some similar user-based mechanism of authority?


Hmmm... I see what you are saying, but (for example) GoDaddy offers basic SSL certs for $10. Not a huge price for anyone.


Yup, I agree with you. This is a pretty big problem with a lot of other Google services as well.

The Google Maps API, for example, will not work over HTTPS. Google has publicly said that this is because they want their maps free and open, not behind some page where the user needs to be logged in. This creates a huge problem for any site that uses Google Maps. They do offer a solution, though: for $10,000 a year they will let you use the Maps API over HTTPS.


For reference, this 2009 post talks about how you need to subscribe to the $10,000 Google Maps API Premier to use Google Maps on a private service: http://47hats.com/2009/07/google-maps-the-10k-gotcha/

http://www.google.com/enterprise/earthmaps/maps.html


Not only that, but in IE it's a modal dialog. You can't do anything (even switch to another tab) until you've acknowledged the scary warning.


That's why I use IE6. No tabs, no problem.


It's good to have a solid reason to keep IE6 around. I'd hate for it to go away!


Couldn't you just cache the charts locally, and then serve them directly to the user?


Absolutely. I have run an SSL-only site (https://domize.com) for a few years now.

Other than Google Analytics I can't think of a single other widget/embed/analytics app that has supported SSL out of the box. It's a real shame, but on the other hand I'd bet good money that the web will be 99% SSL within the next 24 months.


We had to disable support for Google Maps on Catch.com because the cost of their "enterprise" SSL-enabled version was prohibitive.

It's a damn shame because it was a really cool integration.


How about: Give every user a monotonically incrementing value that's initialized at the start of the session using HTTPS. For every request, the client will provide the next value in the expected sequence. Listeners won't have the secret key that was exchanged during the HTTPS authentication, and can't issue requests on the legitimate client's behalf.

Forcing the requests to be serial sucks, but if you only do it for privileged actions (as opposed to public page GETs) it should be manageable.
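
As a rough sketch of that counter idea (assuming Python's standard hmac module; the class names and token format are made up for illustration), the server hands the client a secret once over HTTPS, and every privileged request then carries HMAC(secret, counter):

  # Sketch of the counter scheme: a passive listener sees the tokens go by
  # but, lacking the secret exchanged over HTTPS, cannot produce the next one.
  import hashlib, hmac

  def _digest(secret: bytes, counter: int) -> str:
      return hmac.new(secret, str(counter).encode(), hashlib.sha256).hexdigest()

  class RequestSigner:                      # runs on the client
      def __init__(self, secret: bytes):
          self.secret, self.counter = secret, 0

      def next_token(self) -> str:
          self.counter += 1
          return f"{self.counter}:{_digest(self.secret, self.counter)}"

  class RequestVerifier:                    # runs on the server, per session
      def __init__(self, secret: bytes):
          self.secret, self.last_seen = secret, 0

      def verify(self, token: str) -> bool:
          try:
              counter_str, digest = token.split(":", 1)
              counter = int(counter_str)
          except ValueError:
              return False
          # Reject replays and out-of-order values; privileged requests are serial.
          if counter <= self.last_seen:
              return False
          if not hmac.compare_digest(digest, _digest(self.secret, counter)):
              return False
          self.last_seen = counter
          return True

As the reply below notes, this protects the requests but not the responses, so an active attacker can still tamper with what comes back.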


Still vulnerable to MITM attacks, since an attacker could intercept a legitimate response for safe.js but then send the user a completely different file. You'd need to sign the entire file. Now you're incredibly close to having SSL.


We discovered the exact same issue and rolled back a few weeks ago.

The worst part is that the default choice selected in the modal box is to not load anything.


The IE8 warning is the most confusing sentence I've ever seen. Even I, a veteran of 13 years of web programming, have to read that thing 3 times to know which button to press to make it load the damned stuff.


The image proxy won't work: Google's js APIs are throttled per IP to prevent abuse.


Are there a lot of GitHub users using IE? ;-)


Could you proxy through your own servers?


This is problematic with services that have limits, such as a maximum number of connections per day from a single IP.


To be accurate, this is not the reason many sites choose not to go with SSL for everything. The real reason is that most sites don't need SSL for everything.

I run a travel blogging site, where 99% of all pageviews are from random people off the internet reading people's trip reports and looking at photos. Encrypting all that traffic would do nothing except bog down the site for everybody.

Every once in a great while (in terms of total traffic), somebody will log in and post something. That tiny moment could benefit from SSL, since chances are it's happening from a public internet cafe or wifi hotspot. That's the only time a user is actually vulnerable to this sort of attack, so that's when they need to be protected.

But when you look at the internet as a whole, the fraction of traffic that actually needs protecting looks much the same: small. When you're showing me pictures of cats with funny captions, please don't encrypt them before sending them to me just because you read something about security on Hacker News.


The thing that Firesheep brought to people's attention is that the login is not the only thing that needs to be SSL protected. The cookies you get after signing in are often sent in the clear, and that cookie is just as good as your login for gaining access.


It's not the same, because with someone's password you can completely lock them out of their account instead of just acting as them.


In a lot of systems, you can change the password without knowing the old one as long as you're logged in. Others, you can change the email address, only confirm on the new email address, and then get a password reset.

So even if you really cover all of your bases and require confirmation at every step, the least an attacker can do is access your data and generally impersonate you until you log out of that session (which no one does) or the session times out (which it won't, because the attacker is still actively using it).

It's pretty much the same.


Honestly, how many sites are aware that they are vulnerable?

It seems like you assume that because the security-oriented 0.5% of the web knows about it, the rest of the web should, too.

For most people, just making sure that their site runs at all is quite enough for them to handle, and keeping current on the latest vulnerabilities is way down on the list.

Additionally, fixing a site takes time. How long has Firesheep been out? A week? Two? You should realize that for many sites, even those staffed by very competent tech people, a month is the minimum amount of time for immediate action.


I agree that most of the web is probably ignorant at best of most security vulnerabilities. But keep in mind that Firesheep is not exploiting a new vulnerability, but an old one that has been known about since probably early 2000. Firesheep is new in that it automates the work of other programs (which were admittedly a little less user-friendly).


How many sites (that any of us are legitimately worried about) employ webmasters, developers, system admins or others who DON'T know why SSL/HTTPS is important? You can't honestly be giving Facebook, Twitter, etc. a pass on understanding very basic concepts... (sniffing, HTTP, cookies)?

Firesheep has been around for 2+ weeks now, but come on, we've all known this has been possible for forever. I'm 20, and I knew how to do this (and did) /years/ ago. I think Firesheep is just what everyone needed.

There are really good reasons why this is taking a long time and it is NOT lack of knowing that this problem exists.

That having been said, my laptop is now running a LiveCD of x2go's LTSP client and my desktop computer is running the x2go server. Very near-native performance and total security. (I trust my desktop's endpoint.)


I can understand a possible flaw in a website's attempt to provide secure transfer of data, but just disregarding encrypted communication makes me question their title of web developer.


I read the "Overclocking SSL" post (http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht...) and I've been seeing plenty of follow-up about how SSL is cheap and easy to scale, but I have yet to see one tutorial that actually describes implementing it cheaply.

So, to the HN community: is this whole "SSL is cheap" thing a false meme, or does someone have actual instructions on how to deploy scalable SSL?


There has got to be a sensible way around this. It seems overkill to require every pageview to be over HTTPS, even for otherwise public sites. For example, should these public discussion pages be over HTTPS on hacker news?

On my site I am planning the following: operate the login page over HTTPS and issue two cookies. One is HTTPS-only and the other is for all pages. The public (non-HTTPS) cookie is only used for identification (e.g. welcome messages and personalisation). However, all requests that change the database in any way are handled over HTTPS, and we check to make sure the user has the secret HTTPS cookie as well. Often a form submits to an HTTPS backend and then redirects back to the public page over HTTP. Also, all account information pages (sensitive pages) will be over HTTPS.

This way, the worst that can happen via cookie sniffing is that someone can see pages as though they were someone else. In your case, this is not much of a risk.
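
A hedged sketch of the dual-cookie plan described above, assuming Python/Flask purely for illustration; the cookie names and the token helper are hypothetical:

  # Two cookies: "ident" is readable over plain HTTP and only personalises
  # pages, while "auth" is Secure (never sent over HTTP) and is required for
  # anything that writes to the database. Flask and all names are assumptions.
  import hashlib, hmac

  from flask import Flask, abort, redirect, request

  app = Flask(__name__)
  SECRET = b"change-me"

  def make_token(user_id: str) -> str:
      return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

  @app.route("/login", methods=["POST"])           # served over HTTPS
  def login():
      user_id = request.form["user"]               # assume credentials already checked
      resp = redirect("/")
      resp.set_cookie("ident", user_id)            # public: welcome messages etc.
      resp.set_cookie("auth", make_token(user_id),
                      secure=True, httponly=True)  # secret: HTTPS requests only
      return resp

  @app.route("/comments", methods=["POST"])        # forms post to the HTTPS origin
  def create_comment():
      user_id = request.cookies.get("ident", "")
      token = request.cookies.get("auth", "")
      if not hmac.compare_digest(token, make_token(user_id)):
          abort(403)                               # a sniffed "ident" cookie is useless here
      return redirect("/")                         # a real app would persist the comment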


This is just dangerous. For example, if news.ycombinator.com implemented this dual cookie method, a man in the middle could intercept the page I'm looking at now, where I'm entering this comment in a textarea. They could modify the underlying form to post to the same endpoint as the update form on the profile page, and set a hidden email field. Then when I hit the "reply" button, even though I'm posting to an HTTPS page, I'm not posting to the one I think I am, because the page containing the form itself wasn't served over HTTPS.

I hope I explained that well enough. Mixed content is hard to do right. Forcing every page over SSL prevents anyone from making modifications to any page, and is just inherently safer.


This is all true, but it is somewhat different from the topic: one cannot do this kind of attack easily using a Firesheep-style tool. I think sometimes these discussions get carried too far.


Wait, if you've got the capability to intercept and rewrite arbitrary http forms, couldn't you just rewrite the homepage of Google.com the same way? The "action" attribute on the form gets changed to https -mybank- dot com slash profile, hidden fields are inserted, etc., and when the user clicks "Search" he's actually updating the aforementioned profile page.

If this is possible, then the dual cookie method would seem to make sense.


True, as long as the site isn't doing referer checks, which most of them probably aren't. This is yet another argument for why the entire web should be secured with SSL.


Well, I would say it's not "just as dangerous", as a man-in-the-middle attack is harder to set up.

Good point though. Maybe this could be solved by including a unique access code with the form that is a hashed value of the user's id and the URL that you are submitting to (with salting to make this unguessable). Simply check this value upon submission to make sure it matches the URL seen by the controller. That would prevent anyone from rewriting a form to submit to a new endpoint.
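
A small sketch of that per-form token, assuming Python's hmac module; the function names and secret are illustrative only:

  # The token binds the user's id to the intended submission URL, so a form
  # whose action has been rewritten by a man in the middle no longer verifies.
  import hashlib, hmac

  SECRET = b"server-side-salt"  # hypothetical server-side secret

  def form_token(user_id: str, action_url: str) -> str:
      msg = f"{user_id}|{action_url}".encode()
      return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

  def token_matches(token: str, user_id: str, requested_url: str) -> bool:
      # Recompute from the URL the controller actually received.
      return hmac.compare_digest(token, form_token(user_id, requested_url))

The rendered form would embed form_token(user_id, action_url) as a hidden field, and the controller would call token_matches against the URL it actually handled.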


CSRF protection should be implemented even if your entire site is protected by SSL.

Also, I didn't say "just as dangerous", I said "just dangerous".


Sorry for misquoting you.

However, with proper CSRF protection, your man-in-the-middle argument no longer applies, does it?


Isn't that why we need to implement XSRF protection on every form?


Is all this extra effort really worth it, compared to the alternative of just using HTTPS for everything?

Have you really examined the extra cost of 100% https compared with the scheme you've outlined? Sounds like this idea would require a decent amount of effort to identify where to use https, to ensure that each privileged request is using https, etc.

I can see that for some cases it is advantageous to stick to regular http for unimportant requests and use https for the important stuff, but I have a strong feeling that this is only applicable for the minority of use cases and websites.


We can't serve ads over HTTPS, and we make most of our money from ads, so that would be very expensive!


You're thinking about the right solution, but you're overdoing it, and in the end you're unnecessarily complicating a very simple thing.

The simplest (and IMHO the best) solution is to have everything served over HTTP for unauthenticated users and everything served over HTTPS for authenticated users (this requires an authentication cookie marked as "secure" and a regular "authenticated=true" cookie that redirects an authenticated user to the HTTPS version of the site in case he/she goes to the HTTP site).
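
A minimal sketch of that arrangement, again assuming Python/Flask for illustration; the cookie names follow the description above and everything else is hypothetical:

  # The real session cookie is set with secure=True elsewhere, so it never
  # travels over HTTP; the plain "authenticated" marker only exists to bounce
  # logged-in users onto the HTTPS version of the site.
  from flask import Flask, redirect, request

  app = Flask(__name__)

  @app.before_request
  def force_https_for_authenticated_users():
      if request.cookies.get("authenticated") == "true" and not request.is_secure:
          return redirect(request.url.replace("http://", "https://", 1), code=302)

  @app.route("/")
  def index():
      return "Public pages stay on HTTP; authenticated traffic is redirected to HTTPS."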


But then we can't serve ads to signed-in users, which would be a large revenue cut for us.


Of course you can... unless you need to serve the ads over HTTP; in that case, indeed, you can't really do that.


All the advertising networks we deal with only offer HTTP. Hardly any advertising networks support HTTPS.


...and if no one complains about that, then it won't change.


Browsers should use two kinds of notifications: "encryption on" (green or red) and "certificate is present" (green or red). Websites that do banking or handle sensitive information should be green/green (SSL-on with verified cert), while ordinary websites could be green/red (SSL-on with self-signed cert).


Is the solution to Firesheep to have every logged in page in https? Or is this not necessary?


Any HTTP request that includes the session cookie needs to be secured; otherwise the Firesheep user will be able to grab the session cookie and use it in their own requests.
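
Concretely, "secured" here means the Secure attribute on the cookie, which keeps the browser from ever attaching it to a plain-HTTP request; a short Python sketch with a made-up cookie name and value:

  from http.cookies import SimpleCookie

  cookie = SimpleCookie()
  cookie["session_id"] = "d41d8cd98f00b204e9800998ecf8427e"   # hypothetical value
  cookie["session_id"]["secure"] = True      # only ever sent over HTTPS
  cookie["session_id"]["httponly"] = True    # not readable from page JavaScript
  print(cookie.output())
  # Set-Cookie: session_id=d41d8cd98f00b204e9800998ecf8427e; HttpOnly; Secure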


That is the solution to protect websites from the current iteration of FireSheep. It doesn't fix the underlying problem though. If a version of FireSheep comes out that can do MITM we might have bigger problems.

The solution to the problem is SSL on every page.


I'm not sure I'm parsing your post correctly, but as I understand it you're talking about third party websites accepting responsibility to protect you over an insecure network connection. If that's the case, then I think you're mistaken.

Certainly SSL is not required on every page, and MITM tools have been around for some time (including fairly friendly ones like Cain - http://www.oxid.it/). At the end of the day companies such as Facebook, Twitter et al have a moral (and in some cases legal) obligation to protect the information assets you uploaded to their systems from compromise. Likewise it is not unreasonable that you take certain steps to protect yourself.

The current version of FireSheep is a real known threat. We don't know what might be in future versions. For protecting against Session ID theft, SSL and the secure flag on cookies are the way to go. Certainly for data that doesn't need to be secure (such as static publicly available graphics), there's no need to use SSL for the majority of use cases.

The use of SSL for delivering dynamic client side code (such as HTML or Javascript) is an interesting issue, but ultimately the user has to bear some responsibility for their own actions somewhere along the line. Not every network is insecure, and not every browser has to support a zillion and one insecure means of using Javascript.

Rather than using SSL on every page and expecting the web sites to do the heavy lifting, consider not using insecure bearer networks, or some sort of means of securing insecure Internet links such as a VPN or SSH tunnel.


I do think websites should try to protect their users over an insecure network connection, yes.


That's an interesting thought. Where do you draw the line? Should websites protect users who don't have AV or firewalls, or who share user accounts?


I don't think websites should protect users that don't have AV or firewalls, or who share user accounts, no. I also don't think websites should protect users who cross the road without looking both ways. None of those things are relevant to protecting the communications between the website and the user.

If the user's machine is compromised, that's the user's problem. If the user's machine isn't compromised, yet the website can't be accessed safely over an inherently untrustworthy network like the Internet, then the website has some flaws that it needs to deal with. SSL is a start. DNSSEC is becoming important too, and I will be using it on grepular.com when Verisign signs "com" at the beginning of next year.


You can also SSH tunnel out to a secure server, which is what you should do on any public network that isn't under your control.

It won't protect you from man in the middle attacks on the general internet, or fix the underlying issue with most websites, but it will stop firesheep.


It seems unlikely most users on the Internet could even parse your suggestion, much less execute it. That doesn't make them stupid, just not experts in the subject. Instead they rely on experts, like you, to make the right choices for them in these esoteric matters. While there are any number of ways you might solve this problem, HTTPS is the best available option since clients and servers already support it. As ericflo is pointing out, a small number of sites deploying HTTPS would go a long way to achieving the goal of a more secure Internet for all.


I totally agree. In those cases I usually point people in the direction of Tor and the like. The original poster just asked if HTTPS was the only way to get around Firesheep, and the answer of course is no, it isn't.

I still think HTTPS is the best way forward (I never said anything to dispute that), but right now if you want speed (Tor is slow!) and still want to be safe, an SSH tunnel is the way to go.

Besides, anyone reading Hacker News is probably comfortable with creating a SSH tunnel, or learning how to do so :)


Getting a VPN account from some service like witopia.net is not rocket science. People just need to be educated that they need a VPN account if they want to use a publicly shared network without compromising their privacy.

(Of course, as boyter pointed out, a VPN connection doesn't protect you on the general internet, but it does protect you where you're most exposed.)


While sites wait for services such as AdSense to support SSL, adding a second Secure cookie and requiring it on sensitive pages and for destructive actions can help reduce risk to users. Depending on the site, it may be OK to skip showing ads on a few authenticated pages. Wordpress implemented this in 2008: http://ryan.boren.me/2008/07/14/ssl-and-cookies-in-wordpress...

This won't protect against active attackers, but is definitely a step forward and will make a full transition easier in the future, when possible.


We spent about a week trying this on GitHub. It works pretty well as long as you have no ajax requests. We were basically left with these options:

1) Lose the ajax (and spend significant time redoing bits of the site)

2) Scary iframe hacks.

3) SSL everywhere.

I feel like we made the best choice (I certainly don't mind removing any chance we'll have adsense any time soon :). It cleaned up a lot of logic based around determining which pages were important enough to require SSL (admin functions, private repos, etc).

It's brought on some other issues though. Safari and Chrome don't seem to properly cache HTTPS assets to disk, for one. This is an old problem: http://37signals.com/svn/posts/1431-mixed-content-warning-ho... . I'm not too worried about increased bandwidth bills on our end; I'm worried about having a slower site experience. We're also seeing users complain about having to log in every day. Are browsers not keeping secure cookies around either?


An alternative? : http://www.tcpcrypt.org/


> Tcpcrypt is opportunistic encryption. If the other end speaks Tcpcrypt, then your traffic will be encrypted; otherwise it will be in clear text.

I like that, as opposed to requiring users to have to install some plugin before they can even talk to the server.


You may like it but it really reduces the security. A man in the middle can make it look like you don't speak Tcpcrypt by manipulating the first few packets of a connection. It's the same issue as mixing HTTP/HTTPS - the HTTP parts leave you vulnerable. If encryption is not mandatory then it might as well not be there.


Which requires an active attack, i.e. MITM. And stopping MITM is essentially impossible - the best you can do is use our current CA setup, which assumes the original download of the certificate you got wasn't hijacked.

In preventing passive listeners, like Firesheep, this would work 100% effectively anywhere it can work at all. The only way it reduces security is in making people who don't understand MITM attacks feel they're safer than they are - at absolute worst it's like you don't have it installed.


I hadn't thought about that. However, at the bottom of tcpcrypt.org, it said "You're not using tcpcrypt =(" to me, so perhaps this could be used to show users a warning that they are not using Tcpcrypt, one less obtrusive and less intimidating than what browsers show when they detect the lack of SSL for some content on the page.


Is there an alternative to HTTPS?


It's kind of obvious that a major reason is IPv4. Getting another IP so you can run both SSL and plain HTTP is only going to get harder.


Is there any reason the web server can't set up an SSL reverse proxy to fetch the AdSense ads? (Obviously cookies would have to be passed through...)

It would cost you more bandwidth, but then there's no annoying warning message about mixed content.


What's the deal with that page not having any stylesheets? It looks completely broken. Am I the only one seeing this?


Turn off adblock and reload the page.


It's one of the reasons, probably not the only one.


Facebook and Twitter don't run AdSense; it's mostly run on content sites that don't require you to log into them.


I don't understand your argument here--are you saying we shouldn't mind if the vulnerable sites aren't Facebook or Twitter?


I think he's arguing that since both twitter and facebook (and other unnamed sites) do not use adsense, but are still vulnerable to firesheep, there must be another reason why developers don't update the security for their website.


Facebook has an HTTPS version, but Facebook Chat doesn't work over HTTPS.


Facebook's HTTPS site is utterly useless.

1. Go to https://www.facebook.com/

2. Log in.

3. Immediately you get redirected back to http://www.facebook.com WTF?!

4. Click logout.

5. Go back to https://www.facebook.com/

6. This time you get redirected to https://ssl.facebook.com/ and you're STILL LOGGED IN.

Actually now that I try the same thing with the non-SSL version of the site I have the same problem. WTF is going on? The only way I'm able to log out is by deleting the facebook.com cookies.

I'm on Safari 5.


I use HTTPS Everywhere to force Facebook to use SSL. I can't comment on the effectiveness of their site under normal conditions.


I would have tested the cases for you, but fortunately, I have deleted my FB account. I don't trust them with anything.


Unfortunately, Facebook's XMPP service doesn't utilise SSL either. They hash the password, but everything else is trivially decodable on the wire. Plus there was the hole in Facebook Chat which exposed your conversations to your friends earlier this year. Possibly the worst IM system in existence.


He's saying that most sites that use AdSense do not require a login, and thus do not need HTTPS. He is somewhat correct, but not enough so for Google to just ignore this issue.


Most content sites have login systems that people use to customize their experience, post comments or upload content, etc. Millions and millions and millions of people are logged into content sites and are vulnerable to this attack.

Also, I disagree with the premise that AdSense is mostly used on content sites. It's used on all kinds of websites.


Most notable would be the oodles of forums out there that are ad-supported and require logging in.


There are sites that make use of sessions without forcing you into using an account. These are also vulnerable.


Well, sometimes for strange values of "vulnerable".

Some of my sites use Apache::Session over non-secured HTTP connections, which makes them technically "vulnerable".

The only practical thing an attacker can do with those session ids though, is to mess with some custom visitor tracking in my management backend. So perhaps my information about whether a particular inquiry visited my terms and conditions page or my privacy policy page will be wrong. I can live with that.


Irrelevant contrarian opinion that adds nothing to the debate, but indicates with certainty I am more interested in being pedantic and scoring points than having a useful discussion.


Successfully executed.



