In the early days of the internet, it flourished and bloomed because it was open to tinkering, hacking and fun!

You could (1) set up a server and (2) just enter its IP address, and you had a website to hack on!
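
To make that concrete, the whole of "step 1" back then was something like this minimal sketch (Go here purely as an example; any language with an HTTP library works the same way):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // One handler, plain HTTP on port 80: reachable by raw IP,
        // with no DNS, no registrar, no certificates.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello, world")
        })
        log.Fatal(http.ListenAndServe(":80", nil))
    }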

Now... you need to do more than that: you need to understand DNS and then pay a registrar to get a domain; you probably need to get hosting, because your router's secure ISP-side admin interface may already be hogging port 443; you need to read up on how SSL/TLS and certificates work in order to correctly request one; then you need to figure out how the bits and pieces from that process fit into the server stack you've decided to run; and then you need to set up all those extra things server-side, which may or may not involve learning quite a bit of Unixy, sysadminy stuff.
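
And even the final, "easy" step of that list assumes all the hard parts are already done. A rough sketch of just the endpoint (assuming cert.pem and key.pem were already issued by a CA for a domain you registered and pointed at this box; file names are illustrative):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello, world")
        })
        // cert.pem and key.pem don't write themselves: a registered
        // domain, DNS records, and a CA-issued certificate all have
        // to exist before this one line works.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }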

Phew

And that was step 1. See the contrast? In the playful internet of past times and glory, your product/experiment would be done by now. In your "secure" internet, you're merely done bootstrapping.

Yes, if you already have all that, or already know all that stuff, it might not stop you from moving along. But for the amateur, who should still be the most welcome person on the internet if it is to keep evolving, it's a complete show-stopper.

He'll most likely just say "fuck this shit" and decide that this is too big a task to even bother starting.

Forcing SSL everywhere on everyone is bad for the internet. I don't need to do everything "securely", and I sure as hell don't intend to set up every single experiment I do securely. Make it too much of a hurdle, and maybe I'll just stop experimenting instead.

And then, if not the internet, its spirit dies.




You are misunderstanding. The requirement for CA-issued certificates, and most of the other things you are ranting about, will still apply only to HTTPS, which will remain optional. HTTP URIs in HTTP/2 will only need self-signed certificates, which can be generated automatically by the server. Once servers get good support for it, it will not be any harder than HTTP/1.1.
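
"Generated automatically by the server" can be taken quite literally. Here's a rough sketch of the idea (an illustration, not what any particular server ships; Go standard library, certificate fields kept to a minimum):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/tls"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "log"
        "math/big"
        "net/http"
        "time"
    )

    // selfSignedCert makes a throwaway certificate in memory at
    // startup. No CA, no files, no manual steps.
    func selfSignedCert() (tls.Certificate, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return tls.Certificate{}, err
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "localhost"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
        }
        // Template signs itself: parent == template.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            return tls.Certificate{}, err
        }
        return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
    }

    func main() {
        cert, err := selfSignedCert()
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello, world")
        })
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
        }
        // Empty file arguments: the in-memory certificate is used.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }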


> HTTP URIs in HTTP/2 will only need self-signed certificates which can be generated automatically by the server.

Where in the spec does it say that HTTP URIs in HTTP/2 require any kind of certificate? Anyhow, I think it's moot, because all of the major browser vendors that have committed to HTTP/2 support have also announced they will support it only for HTTPS URIs, so what HTTP URIs require really only matters for non-browser HTTP-based applications that plan to use HTTP/2.


> they will support it only for HTTPS URIs

No. They may do that now, but the intention is to support HTTP URIs that force TLS but allow self-signed certificates.

See https://wiki.mozilla.org/Networking/http2

"There is a separate, more experimental, build available that supports HTTP/2 draft-12 for http:// URIs using Alternate-Services (-01) and the "h2-12" profile. Sometimes this is known as opportunistic encryption. This allows HTTP/2 over TLS for http:// URIs in some cases without verification of the SSL certificate. It also allows spdy/3 and spdy/3.1 for http:// URIs using the same mechanism. "


I wasn't familiar with that, but that approach for HTTP URIs doesn't appear to be a spec requirement. Is there any indication that other browser vendors are going to follow that approach with HTTP URIs?



