Unbelievable. I've been poking about here for like 6 months now. You are all very smart people. Why is this so hard to understand?
If you do not have a valid certificate signed by a CA, SSL is not providing any security.
Yes, the warning you get when you visit a site with an invalid cert is much scarier than what you see if you visit an unencrypted site. But it's the sites that use encryption that users care about, because those are the sites that get their passwords and credit card numbers.
Perhaps you think the browser should make an exception for self-signed certs. After all, there's nothing "wrong" with their signatures. Nothing's expired. No signature fails to validate. Why not just make the URL bar orange or something? Because anyone can create a self-signed cert and sub it into a Bank of America SSL connection.
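To sketch why "nothing fails to validate" proves nothing: here is a toy model (invented names, nothing like real X.509) where anyone can mint a perfectly self-consistent "certificate" asserting any identity they like.

```python
import hashlib

def toy_sign(issuer_key, data):
    # Stand-in for a real signature: anyone holding issuer_key can produce it
    return hashlib.sha256((issuer_key + data).encode()).hexdigest()

def make_self_signed(subject, my_key):
    # Self-signed means issuer == subject: you vouch for yourself
    body = f"subject={subject};pubkey={my_key}"
    return {"body": body, "issuer_key": my_key, "sig": toy_sign(my_key, body)}

def signature_valid(cert):
    # This is ALL "self-signed" verification can check: internal consistency
    return cert["sig"] == toy_sign(cert["issuer_key"], cert["body"])

# Mallory mints a cert claiming to be the bank. It validates perfectly.
attacker = make_self_signed("bankofamerica.com", my_key="mallory-key")
assert signature_valid(attacker)
```

The check passes because it is circular: the cert vouches for itself. Only a signature chaining to a key the browser already trusts breaks the circle.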
It sure is annoying that you have to pay $20 every year to keep an SSL cert. I totally agree that this is a problem. But right now, without that $20, you have a connection that provides cryptographically zero security. Short of coming up with a way to create a trustworthy CA that runs for less than $20 a year, there is no great solution to this problem.
I don't fully agree with what you are saying. A self-signed certificate DOES provide 100% cryptographic security, in that nobody sniffing on the wire, or whatever open wi-fi I happen to be using, can see my data.
Securing the connection from 'spies' is only one part of a general SSL certificate's function - the other is proving the identity of the site you are connecting to. A self-signed cert provides zero use here.
So a self-signed cert has some uses - that said, perhaps it's the more techy person who would ever care about encrypted connections but not identity, and they can probably work out how to get FF to accept their cert anyway.
Would you like to bet your social security on that statement? I'll win that bet. If I can watch your packets on the wire, I can inject my own packets, or redirect you to the middleman of my choosing. If I can inject my own packets, I can sub in my own certificate. You won't know, because you're using self-signed certificates --- meaning, meaningless certificates.
If I can watch your packets on the wire, I can inject my own packets
You're certainly persistent, but this is not strictly true.
1. Major links that are sniffed, summarized by ASICs, and backhauled. These are necessarily passive, and public key ops are too expensive to do at this scale.
2. Service providers would take a lot of heat from both the public and banks themselves if they started proxying encrypted connections to insert ads or track browsing history.
If I can see your packets, I can sub in my own ARP (local) and DNS (remote) responses. In the absolute worst case, which never happens, I also see your TCP sequence numbers, and can simply inject segments directly. You have no hope of keeping me out of your connections. The "passive-only" attacker is a myth --- hackers don't use solsniff.c anymore.
Yes, I've used ARP spoofing to perform MITM attacks.
You seem to be ignoring that some established institutions are malicious these days. An ISP can happily track their users all day long, and perhaps inject some ads that the average user won't care about.
However, an ISP cannot routinely proxy SSL connections. As I said, such blatant tampering would cause too many complaints (at least at this time). And is a bank supposed to accept risk from the ISP proxy being compromised?
When the gloves are off and ISPs are inserting themselves that far up the protocol, I'll gladly say that crypto without PKI is absolutely useless. Until then, there is a class of passive only attacker. Perhaps you don't see this kind of attack as a problem, but some of us do.
It seems bizarre to me that someone would engineer a protocol that makes connections resilient against nosy ISPs, but not resilient against the lowest common denominator of organized crime.
Which is why I frown when I see somebody who wants to "dabble" in crypto by making their own cipher that probably "wouldn't stand up to serious analysis" (is there any other kind?).
But the protocol has already been engineered. It's a matter of handling a certain use case sanely rather than flipping out. An https connection with an untrusted CA has exactly the same absolute security as a plain http connection. Why doesn't Firefox throw up big warnings for every http page I visit?
Having said that, Mozilla could alleviate the situation by including CAcert in their trusted certs. Domain ownership assurance is really the only thing common SSL certs get you anyway. (And I've got no problem with "high assurance" CAs getting special indications).
I agree. An HTTPS connection with an untrusted certificate has exactly the same security as a plain HTTP connection. So don't use HTTPS without trusted certificates. It's confusing to end-users and, by instilling the click-through reflex, is damaging the security of the Internet.
It is vanishingly unlikely that Mozilla is ever going to include cacert.org among their trusted CAs, because hundreds of thousands of people use Firefox to access their banks, and there are plenty of smart security people that work at and with Mozilla.
The algorithmic security of unsigned HTTPS is the same as HTTP. Luckily, Mozilla already has a method of displaying such a page - leaving out the lock icon and colored background. There should be no "click through" unless you're proposing a user click OK for every http page they visit as well.
PKI is a hard problem, and CAs do it well for brick and mortar identities. However, that is certainly not the end of the story. The reason someone would choose to run SSL without paying for a certificate is that they believe most everything should be encrypted, but do not value the security properties provided by CAs.
You still haven't said why Mozilla should arbitrarily block the use of HTTPS in such a fashion. Should GPG prevent me from verifying a signature if it is not in my web of trust?
The only thing a basic SSL certificate (from one of the 47 (!) default CAs) does is guarantee ownership of a domain. This is exactly what CAcert does. Visiting paypal.com, I see the name of the company and country in green, from a higher assurance certificate where more thorough checks were done. This is what banks should be using.
You're steadfastly avoiding answering a really basic question:
If Mozilla "accepted" self-signed certs (to some definition of "accepted"), what should it do when someone browsing to onlinebanking.bankofamerica.com receives a self-signed cert that an attacker has spliced into the connection? It sounds like you want to make the warning for that condition less severe. Your browser cannot tell the difference between your personal self-signed cert on your own app, and an attacker's self-signed cert appearing on a BofA connection.
If the user is already interacting with onlinebanking.bankofamerica.com and the certificate changes, that's a serious error.
If the browser has a longer-stored certificate for onlinebanking.bankofamerica.com (say because they've bookmarked it), and the level of the certificate changes, that's an error.
If the browser has no idea about onlinebanking.bankofamerica.com (perhaps the user typed it in, probably without the https prefix), then the user must verify the security properties of the site. This is what a user must do now, as there may be no redirect to https, or redirect to an arbitrary https. If the site sends a certificate signed by an unknown CA, the user would not see a lock icon, blue background, green company name, etc.
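The three cases above amount to a simple decision function. Here is a toy sketch of that proposal (names and return values are invented, not any browser's actual logic):

```python
def classify(presented_fp, stored_fp=None, ca_signed=False):
    """Toy model of the proposed certificate handling.

    presented_fp: fingerprint of the cert the server just sent
    stored_fp:    fingerprint remembered from an earlier visit (None if none)
    ca_signed:    True if the cert chains to a trusted CA
    """
    if stored_fp is not None and presented_fp != stored_fp:
        return "HARD_ERROR"   # cert changed mid-relationship: serious error
    if ca_signed:
        return "SHOW_LOCK"    # normal trusted HTTPS: lock icon etc.
    return "PLAIN"            # unknown self-signed: render like plain HTTP

# First visit to an unknown self-signed site: no lock, no warning, no more
assert classify("abc") == "PLAIN"
# Remembered cert suddenly differs: treat as likely man-in-the-middle
assert classify("evil", stored_fp="abc") == "HARD_ERROR"
```

The interesting property is the last branch: an unknown self-signed cert gets neither a warning nor a lock, exactly matching how a plain http page is presented.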
I missed this comment because it's 4 days old, and I waste too much time here so that's like 5 clicks back through my comment history. But here's the answer to that: the first time you connect to a site, your browser has no certificate to "remember". People are unwilling to accept a security model that doesn't protect their first access to B of A, especially when a security model that does is available.
(And who knows if you'll get this. Interesting that HN fails at direct discussions)
The current usage model doesn't protect my initial access to BoA without me verifying that:
1. I've got a https connection
2. I haven't been redirected away to a rogue (SSL) site
You see the (https url)->(page retrieval) process as uniformly trusted (correct me if I'm wrong). I see stratification based on which third parties are doing the verification. Perhaps I'll have to wait for the emergence of a protocol explicitly designed for such things.
To borrow from Eliezer, if you really can't believe something that is happening can happen, then your mental model is wrong.
I originally posted: Every time I have hit this message, it has been mostly irrelevant to me and disrupted what I was doing
[I'm no longer so certain - I can't be sure my router configs haven't been stolen by a MITM attack. I suppose I really ought to find out how to generate and install SSL certificates from a trusted root on them, and post them to someone at the remote sites on an encrypted pen drive.]
I manage a fair amount of networking kit, and I find Google results to mailing lists with mysterious and pointless SSL connections. As someone posted in the "End of the Windows Era" thread: "I don't care what OS you have, as long as you have a reasonable browser". This isn't reasonable behaviour.
SSL does not prove anything useful - at the very most that you are connecting to the site your browser intended to connect to, assuming the site DNS hasn't been hacked.
Anyone can pay $20 and get a valid certificate and that doesn't mean you should trust them with your bank account details. Any site with a valid SSL cert might have been hacked behind the SSL termination. If you're scared of MITM attacks, aren't you just as scared of valid SSL certificates on sites with fake DNS or hacked servers?
The security of the DNS and the security of SSL are unrelated. This is one of those Reddit memes that won't die. You can claim to be bankofamerica.com all you want, but you cannot complete an SSL exchange with a signed certificate that says so.
If Eve can take control of DNS and redirect bankofamerica.com to an IP on her servers, and it goes to a webserver with a certificate signed for "bankofamerica.com" by a widely trusted CA, then the browser will load it without complaint and show it as a padlocked site.
The only guard seems to be whether she can get any certificate company to sign a certificate for bankofamerica.com. Since it's cheap and easy to get basic SSL certificates from many places, this doesn't seem a very difficult obstacle for her to overcome with a bit of forging, social engineering, insider access, bribery, etc.
(I imagine that she could go to the real bankofamerica.com, save the certificate details it presents, and pass them on MITM style - but hope there are replay-prevention techniques involved. This doesn't affect the question above, though).
The premise of your argument is that it is "cheap and easy" to get a certificate signed by a CA trusted by Firefox and IE for a "bankofamerica.com" domain.
It is not "cheap and easy" to get that certificate. As evidence for that argument, I put forth the fact that no criminal has ever managed to do it.
Now you're starting to see why certificates are so important to security of SSL!
That event was so rare that it made national news, hasn't happened since, and has never happened to a financial institution.
If your argument is that Verisign sucks, though, I won't contest it. I'm not saying the CA business model is good; I'm saying that it's silly to say you can run SSL without CAs.
"Short of coming up with a way to create a trustworthy CA that runs for less than $20 a year, there is no great solution to this problem."
Good point. You won't find it - performing proper background checks costs more than that.
That's not to say you can't get cheap certificates - they're the domain-only validated ones where you only have to prove ownership of the domain.
These are bad because they appear the same as properly-validated certs when they shouldn't. Kaminsky's recent work shows the DNS system can't always be trusted, and so certificates validated on that weak system cannot be trusted either.
However, until GoDaddy and Geotrust stop having lots to lose from DV certs being marked-down by browsers, I doubt they'll let MS, Opera, Mozilla and the rest do such a thing.
You might be overestimating the intelligence of your audience. Reddit, one of the ycombinator startups, famously kept all its users passwords in clear text, as 'using hashes was too hard' - until they were hacked.
That's not the bug though. The bug is that the error message a user sees when visiting a self-signed site using HTTPS is much more scary than simply visiting that site on an unencrypted connection, even though by all reasonable standards this is a safer, more private, and more secure action.
If we're not going to warn folks about unencrypted links where every proxy in the way is a man-in-the-middle attack waiting to happen, why are we going through such contortions to warn them about the same attacks in a situation where they are much harder to accomplish?
This is because users are being trained to use sites "with the yellow bar at the top" to do personal things (e.g. logging in, credit card details, etc). Making users jump through a couple of hoops if the certificate is self-signed is a good way to protect users who don't understand the technology.
While I understand you want to be very egalitarian about it most users would value their personal information's safety over the principle of a completely open web.
In the spirit of the open web you are free to a) not use Firefox b) fork the Firefox project c) file a bug with Firefox d) contribute to Firefox and argue for this feature to be removed
Yeah it's a branding issue. The yellow bar or the lock at the bottom should indicate a "secure" site. A self signed cert is no different from a fraudulent cert.
The warning says, "Here is a site that claims to be secure. Maybe you thought it was secure, maybe you didn't. Either way, the site is not secure. DO NOT ENTER YOUR PASSWORDS AND CREDIT CARD NUMBERS HERE."
The warning is severe because the condition it reports can be created by an attacker on any SSL connection. That stupid warning might be among the top five security mechanisms on the Internet.
Why is a self-signed site insecure? Why isn't an HTTP site with a credit card prompt equally insecure (please, don't try to tell me that users look at the yellow bar -- we all know from direct experience that they don't)? Why is a properly certed site known to be secure for password use and credit card transactions?
You're assuming all kinds of facts not in evidence. My point was simply that the Firefox tradition (now enhanced) of warning about secure transfers to unverified sites was dumb: it creates a clear incentive for sites not to use HTTPS, so as not to scare their users.
This clear incentive you speak of is not in evidence on the actual Internet: find a site any of us have ever heard of that takes a credit card over a bare HTTP connection by default.
Self-signed certificates are "insecure", if you want to use that word, because there is no way to verify them. If you're Bob sending your certificate to Alice, Alice has absolutely no way to tell if she's seeing your cert or Mallory's.
Self-signed certs get used in non-HTTP apps, and in internal apps, because an out-of-band mechanism (thumb drives, key continuity, etc) is being used to distribute the certificates. If Alice already has your cert, and all you have to do is prove you hold the privkey for it, you and Alice have no problem.
Of course, if you think about this for 5 more seconds, you quickly realize that nobody on the Internet has your cert already, and without Verisign to break the tie between you and Mallory, you're totally fucked.
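The out-of-band model above can be sketched as fingerprint pinning: Alice already holds a hash of Bob's cert (delivered on a thumb drive, say), so she can reject anything else. A minimal sketch, with invented placeholder cert bytes:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    # SHA-256 over the raw certificate bytes, as out-of-band pinning would use
    return hashlib.sha256(cert_der).hexdigest()

# Alice received this fingerprint out of band (thumb drive, printed slip, ...)
bobs_cert = b"placeholder for Bob's DER-encoded certificate"
pinned = fingerprint(bobs_cert)

def accept(presented: bytes) -> bool:
    # Only the exact pinned certificate is accepted; Mallory's is rejected
    return fingerprint(presented) == pinned

assert accept(bobs_cert)
assert not accept(b"placeholder for Mallory's certificate")
```

This works precisely because the fingerprint traveled over a channel Mallory couldn't touch - which is the part that doesn't scale to strangers on the Internet.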
Imagine your bank's certificate says that they run their web site. If one day the certificate changes to a self-signed one, you have clear evidence of a man in the middle attack.
What should the browser do in this situation? Someone pointed out that an HTTPS connection with an unknown self-signed certificate identifies the site just as well as a plain HTTP connection, suggesting that we should treat both connections as equally insecure: so simply hide the yellow bar.
please, don't try to tell me that users look at the yellow bar
Let's imagine a user who never expects the yellow bar. In the example above, when there is strong evidence of a man in the middle attack, the only warning the user gets is that a UI element which they don't notice is unexpectedly hidden.
You really want to warn the user in this situation, so Firefox gives them a big error message and makes them take positive action (adding the unverified, possibly malicious, certificate to their list of fully trusted root certificates) before they can get past it. That is fair enough.
But then there is the kicker. The browser can only tell that a certificate has changed if it has already recorded the old one. It can't distinguish between a site which has always had a self-signed certificate and one which just happens to have one today, unless it recorded the certificate yesterday. So it treats both as man in the middle attacks, which explains why Firefox will always give you a warning about unverified certificates until you verify them yourself.
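The "remember the old certificate" behavior is trust-on-first-use, as in SSH's known_hosts. A toy store makes the kicker concrete: the first contact is an unavoidable blind spot.

```python
class TofuStore:
    """Trust-on-first-use: pin whatever cert we see first (like SSH known_hosts)."""
    def __init__(self):
        self.known = {}

    def check(self, host, fingerprint):
        if host not in self.known:
            self.known[host] = fingerprint  # blind spot: first contact is trusted
            return "first-use"
        if self.known[host] == fingerprint:
            return "ok"
        return "MITM-warning"               # cert changed since last visit

store = TofuStore()
assert store.check("bank.example", "fp-real") == "first-use"   # nothing to compare
assert store.check("bank.example", "fp-real") == "ok"
assert store.check("bank.example", "fp-evil") == "MITM-warning"
```

If Mallory is already in the path on that first visit, her fingerprint is the one that gets pinned, and every later check passes - which is exactly why a browser without a stored cert has to treat an unverified cert suspiciously.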
Sure, but couldn't the man in the middle just as easily provide a non-SSL server? Or are you assuming that the user is accessing a bookmarked https:// url (in which case the browser should already know the correct certificate)?
Doesn't change the fact that it pops up a scary unintelligible message and makes you jump through hoops whenever someone tries to visit a site with a self-signed certificate. It's very much a "break the web" situation.
As suggested by the original article, the correct behaviour would be to treat it as though there's no security whatsoever. After all, logically, how is being encrypted but unauthenticated worse than being unencrypted and unauthenticated?
First time I upgraded to FF3, I couldn't figure out the right combination of clicks to get to my (self-signed) site, and I just wanted to get some work done. So I gave up and used Safari instead.
It's one thing to force users to confirm something whose security can't be guaranteed. It's another to make it so hard they stop using your app. FF3 has other nice security features I can't benefit from at all if I'm not using it.
And StartSSL may be free, but it's not easier than "click the Safari icon".
It is good because that way you know that nobody in between can see your traffic. And more often than not it is enough to know that you are talking to the same site you talked to the last time (just like with SSH).
The idea of "no lock icon" in case of a self signed/unknown CA certificate is a really good idea IMHO. The traffic is encrypted, but it does not give the user a false sense of security.
If you don't know who you're talking to, then there is no point in encrypting the data, because you're probably talking to the attacker. ("Man In The Middle")
Now, the idea of "I'm talking to the same person I talked to last time" is useful, but if the users are all conditioned to accept random certificates without understanding, then when they go back a second time (and the attacker is waiting), they'll agree to the new cert, just like they did the first time.
If you create an account on happykittens.com, you don't really care if the cert happykittens.com is sending you is signed by a trusted CA. What you care about is that the second time you visit the site, when you log in with your brand-new account, the cert the site sends you is the same one you received when you created the account (i.e., the site is the same one you created the account on). This has nothing to do with whether the cert is signed by a trusted CA, and thus making it difficult for the user to accept a SS cert is not the right solution IMHO.
Key continuity is a fine answer to this problem. Just come up with a way to provide it on every device every user might reasonably want to log in from, for every site on the Internet.
Actually, no. For all practical intents and purposes encryption w/o authentication is as good as no encryption.
Unauthenticated encryption is 'better' than a plaintext in just one thing - it protects against passive snooping. Anyone willing to splice the connection will have full access to all your plaintext data and you won't even know about it. As such it's nothing more than an equivalent of reversible traffic obfuscation.
So if your "better" meant "obfuscated", then, yeah, it's better. But it's no more secure (in a conventional security sense) than a plaintext.
All that's required are scanned documents. And these documents can easily be tampered with or photoshopped. You may think your company details are checked before the cert is issued, but that's crap. We email our docs to a US company, and all the docs are issued by Irish government departments. There's no way in hell that some guy in what amounts to a call centre in the US has access to any Irish database to prove or disprove their validity. SSL certs are a crock of shit - all you need to do to get one you're not entitled to is to be slightly outside the norm, and claiming any small country as your location is good enough for that. Hell, you could make up your own government departments and documents, and that'd be good enough for most of these companies.
Give me a break. This is like saying all online security is a sham because I can always physically break into your office. You know how many times a real CA has fucked up and accidentally issued a Bank of America certificate to organized criminals in Estonia? ZERO.
You can say it's crap all you want, but until you've worked inside a CA and seen what goes on - you know shit. Good luck ordering a cert with your 'shopped docs...
You can't. Seriously, try it.
You can't get a 'proper' cert unless you go through the background checks.
You can't get a domain-only validated one unless you control the domain.
Encryption is nothing without identity assurance.
If you don't know who you're sending the encrypted data to - why bother encrypting in the first place?
Commercial certificates are not a scam. You're paying for a company (the CA) to certify (via the SSL cert signature) that an identity belongs to someone that has provided proof that they're who they say they are.
You can absolutely do what you're requesting by creating your own CA and signing certs for your sites and distributing your CA cert to your users somehow.
If you can figure out how to reliably provide this service for free you could revolutionize crypto on the internet. You might start by looking at what cacert.org has done to see what problems they've hit and why it's not as easy as it seems.
You are absolutely paying a premium based on the market persuasion tactics Verisign and Thawte have employed against the world. It's true that there's no good reason to trust Verisign more than Mozilla, Microsoft, and OpenSSL --- if Mozilla fucks up, you're just as screwed as if Verisign does.
The business model behind certificates may very well be a huge scam. Unfortunately, the technical model behind having a small number of trusted certificates shipped with your browser is not. Until that link breaks, you don't get security without paying Verisign.
It's amazing how many people still think Verisign are the only CA out there. There are a lot now, and if you hunt about, you needn't pay more than $10-$20 for a cert that's trusted in most browsers (granted, they are domain-only validated, but that's a whole different issue that needs to be fixed, I won't bring it up here...)
The reason you 'trust' Verisign/Thawte/Comodo/Geotrust/GoDaddy is because their roots are in the OSs & browsers. You can't get in those root stores without a hell of a lot of hoop-jumping. I know this. The money you pay for a cert does go to covering the costs of the background checks you have to pass through before you get the certificate.
Trust has to start somewhere - why not with large companies who have undergone rigorous procedures that also have been vetted by the companies you're implicitly trusting by installing their software?
If someone on this board wanted to argue that GoDaddy and GeoTrust haven't really seen "vetting" that makes or breaks their security, as a practitioner in this industry, I don't feel I could win that argument.
What you and I are really saying is that a company like Thawte has staked their business on those pubkeys, so that we at least know that if they screw up, they stand a good chance of losing the company.
Webmaster Bob wants to add his key to the server. Bob goes to the server page, hits the add key button, puts in an email address for the webmaster and his public key, and submits. Then the public key server, sometime in the next couple of hours, goes out and checks the website itself. If the two match, it adds the key to the database.
You can have the server require a reverse DNS lookup and also have it use something along the lines of OpenDNS to help secure against fraud. Also, if the server itself uses a CA cert to secure data in transit, that would help secure it. Requiring a revocation key to invalidate self-signed keys in the database would further improve security.
Then user Joe can add "Self-sign Pub key extension" to Firefox to automate the checking of public keys.
This allows for a relatively cheap self-signing check. It does a minimal amount of "is this the real website I'm using?" verification, though still not quite at the level of a paid CA cert that, say, a bank should have.
Here's an interesting solution. CMU just put out a tool called Perspectives that runs public notary servers. The servers probe sites periodically to get a history of keys. This can go a long way toward determining whether there is a man-in-the-middle sending you a fake SSL certificate (because it will not match the history).
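A rough sketch of the Perspectives idea (names and data invented, not the actual protocol): ask several independent notaries what key they have observed for a host, and accept the cert only if enough of them agree with what you're seeing.

```python
from collections import Counter

def notary_check(presented_fp, notary_reports, quorum=2):
    """Accept if at least `quorum` notaries have observed the same fingerprint.

    presented_fp:   fingerprint of the cert the client just received
    notary_reports: most recent fingerprint each notary has seen for this host
    """
    counts = Counter(notary_reports)
    return counts.get(presented_fp, 0) >= quorum

# Three notaries, probing from different network vantage points, all agree
reports = ["fp-real", "fp-real", "fp-real"]
assert notary_check("fp-real", reports)
# A MITM cert injected near the client won't match what distant notaries saw
assert not notary_check("fp-evil", reports)
```

The strength comes from the vantage points: a man in the middle sitting near one client can't also forge what notaries elsewhere on the Internet have historically observed.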
Encryption ensures that only the entity you are sending the message to can read it. If you can't be sure of the entity you are sending the message to, then what's the point of encrypting it in the first place?
Why does the article pick out Mozilla in particular? Are they suggesting that Firefox makes it overly complex to ignore the warning and continue on?
Firefox 2 behaved similarly to other browsers, giving one warning dialog for self-signed certs. People are complaining because Firefox 3 changed that to an annoying 4-step process.
Encryption without authentication is just that: encryption. The point of encryption is to make sure no one else is listening OR modifying the data in transit. Like, say, your cash-starved ISP, or the government.
No, it's not. If a man-in-the-middle attack is possible (certainly the ISP could), then encryption without authentication is as insecure as no encryption at all. And in fact worse if the user has a false sense of security.
The MITM pretends to be the bank's server (or whatever) when talking to you, and pretends to be you when talking to the bank's server. Both channels can be encrypted, but the attacker still sees (and can modify) everything that you think you're sending directly to the bank's server.
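The relay above can be shown with a toy cipher (plain XOR, purely illustrative, NOT real crypto): both legs are "encrypted", yet Mallory reads and rewrites everything, because each side keyed its session to whatever peer it was actually handed.

```python
def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy symmetric cipher standing in for the negotiated SSL session
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

key_alice_mallory = b"K1"  # session Alice negotiated (with Mallory, unknowingly)
key_mallory_bank = b"K2"   # session Mallory negotiated with the real bank

msg = b"PIN=1234"
wire = xor_encrypt(key_alice_mallory, msg)        # Alice -> "bank" (leg 1)
seen = xor_encrypt(key_alice_mallory, wire)       # Mallory decrypts leg 1
assert seen == b"PIN=1234"                        # attacker reads the plaintext
tampered = seen.replace(b"1234", b"9999")         # ...and can modify it
relayed = xor_encrypt(key_mallory_bank, tampered) # re-encrypts toward the bank
assert xor_encrypt(key_mallory_bank, relayed) == b"PIN=9999"
```

Nothing here breaks the encryption on either leg; the attack works entirely because Alice had no way to check whose key she was encrypting to.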
This is the key point that most people seem to be missing here. If browsers didn't warn about self-signed certificate, the entire system would break down because an attacker could just use a self-signed cert in a MITM attack, and the user would have no idea.
Having a trusted third party is a pretty big deal in cryptography. Without it, many of the core assumptions of public-key cryptography are invalid. It's a huge part of making sure the other end is authentic. I'd place a lot more trust in Bank A's public key if it was signed by Verisign, rather than an unverified third party. Having Verisign's public key in my browser eliminates a large class of man-in-the-middle attacks.
If paying $20/year is too inconvenient for you to transfer your data securely, then perhaps the data isn't sensitive enough, and you shouldn't bother.
The problem with not having a valid certificate is this: if both sides can't tie every packet in the SSL handshake back to Verisign or Thawte's pubkey, attackers can inject their own handshake messages and set the session key.
Also, you can see the "add an exception" in the screen shot. You can manually add an SSL certificate to a white list, it's just a little bit harder, with a few more steps, than the previous YES|NO dialogue.
I think this is a good thing. 99% of users probably don't need to, or shouldn't, interact with pages with self-signed certificates. Self-signed certs should really only be on development pages. I'm sure this is a good anti-phishing measure.
In this case I disagree. The web is not all corporate, and there is a confusion between encryption and authentication.
A certificate, signed or no, is a means to establish a secure connection between Alice & Bob. This ensures no one is snooping or modifying the data passing between them. This is a good thing that should be encouraged in an age when your ISP injects ads and the government keeps tabs on what sites you visit.
A signed certificate is a means of authenticating the identity of the presenter of that certificate, to give some reassurance and trust about the other party.
These two things can and should be kept separate. What Mozilla is doing is making it much more difficult to have a secure-by-default Web.
Imagine if your mail program suddenly stopped receiving email unless each sender either paid 100 bucks per year to VeriSign, or faxed a copy of their passport to Microsoft, or you went through a scary, four-step process to "enable" them.
A self-signed certificate does not establish a secure connection between Alice and Bob, because Alice can't verify the certificate. Bob can send his certificate, Mallory can trivially intercept it and replace it with her own, and nobody will be the wiser.
Let's not encourage people to adopt security mechanisms that provide no real security. Let's make the security mechanisms we have today, which are strong enough to stop many governments and all of the largest corporations, cost-effective and easier to deploy. Let's solve the right problems, instead of trying to make ourselves feel better by sugarcoating browser warning messages.
I once bought a broadband router that was marked down ridiculously cheap. It had all the features I wanted, and it was half the price of any of the others. When I got it home, everything was running slowly. After poking around, I discovered that my machines' DNS servers had changed from Time Warner Cable's to IP addresses in China!
If self signed certificates were indistinguishable, I may have been making connections through man in the middle machine located there without any way of knowing.
Since SSL covers two cases of security, both encryption and identity, maybe it's time to invent a new icon - i.e. this web site is secure (a lock) but its identity could not be verified (an id card).
Self-signed certs wouldn't show warnings, but wouldn't show the ID-verified icon. CA certs would show both.
If they're worried about user education, the first time firefox encounters a self-signed site, it could provide a permanently dismissible dialog.
You can't have one without the other. The first icon doesn't mean anything. You might as well add a third icon for "this connection is compressed". Attackers can't read your credit card number out of a compressed stream either.
I'm not entirely sure why Firefox is being specifically targeted here. The pages that Opera and IE8 throw for self-signed certs aren't much less scary.
In my opinion, self-signed SSL certificates shouldn't cause this. If I want to use an SSL connection for my users to sign in with, I shouldn't have to pay tons of money for a wildcard certificate for my domain (they charge a large amount more just to add *. to your certificate). SSL is to SECURE THE TRANSMISSION, but these companies have turned it into a certificate war, where you must have one signed by a "distinguished" CA or the browser will tell you that you're visiting a "bad" site (Mozilla has a stopping guard, IE has attention images).
So don't even show the user that it's SSL. I don't care if my site seems more secure to the end-user, it just should be SECURE without regard to the mindset of the individual operating on it. Heck, even hide the https! Banks and online stores, sure, they should buy SSL certificates so they ease the end-user's mind. That is a relevant operating cost to incur for those individuals.
Umm, I would. Running Wireshark or tcpdump to sniff traffic over the wire is easy, and analysis can be done offline at the attacker's leisure.
Hijacking DNS and phishing for users' login credentials to other sites requires a lot more preparation, and in most cases, prior selection of the desired target sites.
(1)
An attacker in Estonia manages to compromise a single DNS cache serving a residential cable ISP in Tucson, AZ. Without SSL in the way, she now owns several thousand bank account logins and Yahoo Mail passwords.
(2)
An attacker in Estonia manages to compromise a single DNS cache serving a residential cable ISP in Tucson, AZ. With SSL in the way, she now owns several bank account logins and Yahoo Mail passwords.