
And one reason it doesn't: https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites...

Secure websites make the web less accessible for those who rely on metered satellite internet (and I'm sure plenty of other cases).

Know who your demographic is and make sure you don't make things more difficult for them. Maybe provide an option for users to access your static site on a separate insecure domain, clearly labeled as such.




Nope.

That's fixable on the client side with an explicit HTTP proxy.
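
For illustration, here is a minimal sketch of what that client-side opt-in can look like, assuming Node 18+ with the undici package and a hypothetical local caching proxy (e.g. Squid) at 192.168.1.10:3128:

  // Sketch: the client explicitly opts in to a local caching proxy,
  // rather than having a middlebox silently intercept its traffic.
  // Assumes Node 18+ and the "undici" package; the proxy address is made up.
  import { fetch, ProxyAgent } from "undici";

  const proxy = new ProxyAgent("http://192.168.1.10:3128");

  async function get(url: string): Promise<void> {
    // Plain HTTP can be cached by the proxy directly; HTTPS is tunnelled
    // via CONNECT, and the proxy can only cache it if the client also
    // chooses to trust the proxy's CA certificate.
    const res = await fetch(url, { dispatcher: proxy });
    console.log(res.status, (await res.arrayBuffer()).byteLength, "bytes");
  }

  get("https://example.com/");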

The whole point of HTTPS is to take away the ability of middleware devices to modify traffic without the authorization of the client or the server.

This is a good thing. He shouldn't be setting up a situation where he intercepts traffic that neither party to the connection has authorized him to intercept, regardless of how righteous his intentions are.


> Secure makes less accessible for those with shitty connections

This demographic seems especially vulnerable to untrusted third-party networks that promise speed or unlimited traffic. People able to make a genuinely informed decision about security trade-offs are probably a more difficult demographic to cater to, and could probably work around the negative trade-offs by themselves, as others have mentioned. So unless you're specifically targeting them, you should probably go with the safer default.


There's nothing wrong with making something "more difficult" if it serves the greater good or has a larger positive impact than negative. For one, using a trivially higher percentage of a metered satellite feed is not "more difficult," just perhaps marginally more expensive. What percentage of folks reading static blogs are on a metered satellite connection?

I think if every site and application that is currently HTTP was HTTPS a year from now it would be a net positive for internet users.


Having very high packet loss means something is badly wrong. A good wire-level protocol (yes, I know, there are no wires; nevertheless) aims for lower packet loss rates by fiddling with other parameters. Example: say you have 40 MHz of assigned frequencies, but when both ends measure, they find 4 MHz of that is full of noisy crap while the rest is fairly quiet. Rather than lose bits in those 4 MHz and toss away many packets, why not keep the 36 MHz with a much lower error rate? If only 6,000 packets per second get through out of 10,000 sent, then an option that sends 9,000 and delivers 8,000 of them is already a win.
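
To put rough numbers on that trade-off (using the figures from the example above, not measurements):

  // Back-of-the-envelope comparison of the two configurations above.
  function describe(label: string, sent: number, delivered: number): void {
    const loss = 100 * (1 - delivered / sent);
    console.log(`${label}: ${delivered} of ${sent} pkts/s delivered, ${loss.toFixed(0)}% loss`);
  }

  describe("40 MHz, noisy band kept", 10_000, 6_000);   // 40% loss, 6 000 pkts/s goodput
  describe("36 MHz, noisy band dropped", 9_000, 8_000); // ~11% loss, 8 000 pkts/s goodput
  // Giving up 10% of the raw send rate buys a third more delivered packets
  // and cuts the loss rate from 40% to about 11%.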

Now, upgrading satellites is trickier and more expensive than upgrading your home cable box; at the extreme, sending a bloke up to "swap out this board for a newer one" is either tremendously difficult or outright impossible, depending on the orbital characteristics. But we shouldn't act as though high packet loss rates at the IP layer are to be expected, they are avoidable. And fixing them will do a lot more than just enable HTTPS to work better.


> But we shouldn't act as though high packet loss rates at the IP layer are to be expected, they are avoidable. And fixing them will do a lot more than just enable HTTPS to work better.

At that distance, physics alone puts a single round trip at roughly half a second, and you literally cannot go below that; anything that needs more than one round trip quickly adds up to a second or more. That much latency will make a lot of protocols simply time out or consider the packet lost.
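
A back-of-the-envelope sketch of where that comes from, using only the geostationary altitude and the speed of light (so ignoring elevation angle, processing and queueing delay):

  // Rough lower bound on latency over a geostationary satellite link.
  const ALTITUDE_KM = 35_786;   // geostationary orbit above the equator
  const C_KM_PER_S = 299_792;   // speed of light in vacuum

  const oneWayS = (2 * ALTITUDE_KM) / C_KM_PER_S; // ground -> sat -> ground, ~0.24 s
  const rttS = 2 * oneWayS;                       // request + response, ~0.48 s

  // Before the first byte of an HTTPS response arrives, you typically spend
  // one RTT on the TCP handshake, two on a full TLS 1.2 handshake and one
  // on the request itself:
  const httpsFirstByteS = 4 * rttS;               // ~1.9 s

  console.log(`minimum RTT: ${rttS.toFixed(2)} s`);
  console.log(`HTTPS first byte: ~${httpsFirstByteS.toFixed(1)} s`);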

At that distance you need some well-engineered ground equipment to handle the signal losses: a dish and a high-powered transmitter that need to stay within a degree of the target. If you're off by a degree you're likely going to see very bad packet loss, and a degree is not much; it could be caused by the ground under the dish stretching and twisting over the day due to temperature changes. Satellite TV doesn't have to deal with sending data up the link at all, except at the massively more powerful and expensive uplink dishes run by the TV networks.

Lastly, from the ground to geostationary orbit you may find that your 40 MHz band is full of crap, not because someone else is transmitting, but because you're sending through a solid belt of radiation and magnetic flux. You'll find that a wide range of bands either suck at penetrating the atmosphere or the magnetic field, or get drowned out by interference from half the universe.

The layers above IP have ways to handle packet loss for a reason (although the original reason was bad copper cables and bad MACs). Also, the MAC layer is another problem: you're not the only one who wants to use the very limited resources of the satellite. One of the most common and effective forms of bandwidth limiting is dropping packets, and that's normal. Packets drop all the time; every TCP connection drops packets. It happens, and almost all protocols deal with it on one level or another.
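
As an application-level illustration (TCP already retransmits lost segments underneath; this sketch just avoids giving up too early on top of that, assuming Node 18+ for global fetch and AbortSignal.timeout):

  // Sketch: coping with a lossy, high-latency link above TCP by using
  // generous timeouts and retries with exponential backoff.
  async function fetchWithRetry(
    url: string,
    attempts = 4,
    timeoutMs = 30_000, // generous: many satellite round trips
  ): Promise<Response> {
    let lastError: unknown;
    for (let i = 0; i < attempts; i++) {
      try {
        return await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
      } catch (err) {
        lastError = err;
        // Back off before retrying: 1 s, 2 s, 4 s, ...
        await new Promise((resolve) => setTimeout(resolve, 1_000 * 2 ** i));
      }
    }
    throw lastError;
  }

  fetchWithRetry("https://example.com/").then((res) => console.log(res.status));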


Couldn’t you set up the local cache as a proxy server instead of a MitM to solve this? Though it’s a less transparent solution (you have to set up the proxy and its CA on every client).


That article specifically mentions that service workers avoid these issues. There's nothing stopping static pages from making service workers available.

HTTPS and a service worker are a far better solution than having an insecure domain.
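
For instance, a minimal cache-first service worker for a static site could look roughly like this (file names and the cache name are hypothetical; the page registers the worker with navigator.serviceWorker.register and must itself be served over HTTPS):

  // sw.ts -- minimal cache-first service worker sketch for a static site.
  const CACHE = "static-v1";

  self.addEventListener("install", (event: any) => {
    // Pre-cache the core static assets on first install.
    event.waitUntil(
      caches.open(CACHE).then((cache) => cache.addAll(["/", "/style.css"]))
    );
  });

  self.addEventListener("fetch", (event: any) => {
    // Serve from the local cache when possible; otherwise fetch once over
    // the metered link and keep a copy for next time.
    event.respondWith(
      caches.match(event.request).then((cached) =>
        cached ??
        fetch(event.request).then((resp) => {
          const copy = resp.clone();
          caches.open(CACHE).then((cache) => cache.put(event.request, copy));
          return resp;
        })
      )
    );
  });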


Yeah, that's pretty much not the case once you set up an HTTPS proxy with a cache. HTTPS merely requires this to be opt-in from the client, unlike HTTP, where you can just do it, to hell with the client.

Don't spread BS.



