
I feel for later generations - learning tech gets harder and harder. Of course, as the sum of knowledge increases this must be true, but as we go HTTPS-only, the ability to type commands to a web server over telnet as a learning experience will be a loss.



> the ability to type commands to a web server over telnet as a learning experience will be a loss.

You can do this with openssl.

    openssl s_client -connect news.ycombinator.com:443
    GET / HTTP/1.1
    Host: news.ycombinator.com
Press enter twice and you'll get HTML.


There's also "socat", which is netcat with ssl support and it carries over its friendly command line.


That's how progress works.

50 years ago you could work on a brand new car with just the tools in your garage; now you'd need specialized knowledge, equipment, and more, to the point that it's not really possible to be a home mechanic in many areas.

This isn't new for new's sake, these improvements bring real benefits. And as things get more complex, people will specialize more and more. In the future, your average "dev" might not know the ins and outs of how the transport layer works, but that's okay because there is someone else who mainly only does that.


> This isn't new for new's sake, these improvements bring real benefits.

Half of the time. In the case of HTTPS, for sure. But it is becoming increasingly hard to tell real improvement apart from change for change's sake. This is especially true in the consumer electronics space, but also in the cloud industry.

I would argue that a major selling point for a lot of things today IS the fact that they are "new", not that they are "better". Just think of all the "smart" devices that use a server in the cloud to connect to your phone 1m away...


Not a problem we would have if we had adopted IPv6 years ago, even if just because it's new. Connecting devices together when you don't own the router between them is hard. It's harder when everything uses NAT and stateful firewalls.

A product that relies on the user being able to change settings in their router is not a mass market product, so you don't see a lot of people building things like that. That's what I would want, that's what you might want, but most people want 'end to end' connectivity without having to mess with port forwarding, static addressing/DDNS updates, or firewalling. Had those interfaces/problems been solved/simplified 10 years ago, we wouldn't be having these conversations. But no one did, and here we are.


We assume that there will always be someone who knows how it works.

With more systems running fully automated or administrated remotely, I think it's entirely possible that for some niches there won't be anyone left with the exact technical details - or the only people who possess them will be sitting an ocean away behind the walls of $megacorp, not allowed to share the knowledge.

In fact, the current crisis in information security shows that even today, a lot of people have serious misconceptions about how the systems they use daily actually work - with practical consequences.


I don't think you have to assume that.

First, the knowledge is there, the tools are there, it just takes time to learn. Almost all of this stuff is in RFCs and built in the public eye in the standards bodies and OSS. You can read the code, you can read the standards, you can get a book about it. It's just not intuitive, because most complicated things grow past that.

Second, there will always be experts at various parts of the stack. That's how 'capitalism' works, right? We all specialize until we only do one thing really well. I don't think we could have scaled the internet to the size it is now if everyone who worked in IT was still keeping the mail server off spam blacklists, and keeping the company web-server humming in the closet.

We get to a point as a group where we have bigger problems to solve, and so someone learns to solve hundreds of people's spam problems, and someone learns to solve hundreds of people's route buffering problems, and someone learns to solve end-to-end encryption for web traffic.

Not everyone can be a functional expert in everything; that's how we moved past the Middle Ages. And that's how we are going to move forward as well.


you can build a brand new 1967 car with just the tools in your garage? Wow


I mean... yeah, actually, you can -- if you had every (new) part from, say, a 1967 Ford F-series truck, you could absolutely, with just a Chilton's manual and a good set of tools, put together every nut, bolt, and screw from a giant crate of parts to build it.


If you're willing to gloss over the expertise needed to build a vehicle (experiences and know-how) then you might as well gloss over the same on modern platforms.

My current-day vehicle requires no additional tools or software other than Python and pyserial plus a USB/OBD2 dongle that can be had for under 10 bucks, and my garage filled with the same wrenches I could have used on many previous generations of cars over the past few decades. With these tools I can touch every subsystem that a factory service mechanic can.
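For the curious, the plumbing really is that thin. A minimal sketch, assuming an ELM327-compatible USB/OBD2 dongle; the port name, baud rate and PID are just example values:

    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", 38400, timeout=1)
    port.write(b"ATZ\r")    # reset the adapter, which replies with its banner
    print(port.read(128))
    port.write(b"0105\r")   # mode 01, PID 05: engine coolant temperature
    print(port.read(128))   # hex response, decoded per the OBD-II PID tables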

One could say "You'd be insane to try to bang out every bit of a communications protocol and interface with the car that way", but I could say "You'd be insane to rely on a novice with zero experience to build a car I'd want to try and drive."

I have no doubt that the novice would get something rolling, but suspension / engine / AFR tuning is a skill set that requires experience and education; the novice's product would be inferior to the craftsman's.

tl;dr: with sufficient gumption there isn't really that much standing in your way of being a backyard mechanic -- sure, the companies try to dissuade you from doing so, but they have been doing that since the Jaguars of the '50s and their 1000-dollar service manuals.

There is just a wider variety of required reading. And to those interested: using your CS skills to 'perform the impossible' on a car is a hoot -- as ECMs and control software become more powerful and take more control of the car, so does the technical wizard!


indeed. in some of today's cars it's really astounding how much more power you can make quite easily, if you're into that sort of thing.

hit a few keys and turn a few knobs and all of a sudden you're making 50 horsepower more, and living life in the no warranty danger zone. exciting!


Hondata is a great example of using a laptop to tune a car. For those interested in buying parts like assembling a big lego, http://www.factoryfive.com/ is a good place to check out.

Not thinking too much about it, I used Let's Encrypt on my business site as soon as I read that Google would be looking at TLS for SEO purposes. Sadly, SEO seemed to matter more than speed when I was weighing whether to use TLS. But if speed is the concern, what does it matter if nobody can find the site in the first place?


That's interesting. I wish I could buy these "kits" today, put one together myself and drive a cheap, cool car. Was that an option then?


Kit cars are still a thing: https://en.wikipedia.org/wiki/Kit_car . That said, they are almost all replicas of old cars, and in some countries getting them licensed for use on the road is hard.


From the Wikipedia article: "A kit car should not be confused with a 'hand built' car or 'special' car, which is typically built from scratch by an individual."


The parent, a few up, was talking about getting every part: to me that implies you're using pre-made body-panels, etc., rather than making them from scratch.


It wouldn't be cheap. The safest, cheapest and most efficient "packing crate" for the parts for a single car is the assembled vehicle. (It's somewhat different when you can stack 50 left front fenders.) A kit car (most use fibreglass bodies) would be a much better option. Expect to spend about 1000 hours (much of that building jigs). The second one you build goes much smoother and faster than the first, even if it's a different kit based on a completely different pair of vehicles (the "source", from which most of the drive train parts will be drawn, and "target").


Telnet/netcat seems simple only because all the layers below are hidden inside the operating system. You don't type in all the MAC/LLC/TCP headers, and you can avoid typing in TLS headers just by hiding them inside the library (LibreSSL, GnuTLS etc.). With the libtls API you can think about cryptography in terms of the "padlock icon" until you want to learn more.


No, but you can build your own stack from scratch fairly easily, and it's easier to debug when you can see the plaintext through the data structures.


And it's easier to debug a video codec if it's ASCII art, but there's huge benefits for using TLS everywhere instead of plaintext, and they more than outweigh "it's harder to debug."


True, though reimplementing those is child's play compared to TLS.


Having done semi-serious implementations of both, I disagree. Implementing IP and TCP and getting it hooked up to your OS so that your packets get spat out the network card and your responses get relayed back to your code is much harder than implementing a minimal TLS.


True. A minimal TCP with a 1 MSS window may be easy, but proper congestion control with fast recovery, F-RTO, tail loss probe, SACK etc. is much harder. Miss one of these aspects and you get a TCP that takes minutes to recover from a lost packet in some obscure case. It took years to debug the Linux TCP stack. Even the BSD stack is already way behind.


Given that the context is "ability to type commands to a web server over telnet as a learning experience", I was of course talking about minimal implementations. TCP with congestion control is hard, just as TLS with proper compatibility is hard.


Yes: I was not including the difficulty of getting efficient congestion control implemented, but rather just the ability to deliver and receive TCP streams at all.


...as is the ability to inspect the traffic in your network to and from the devices you own. I think that is an even scarier situation, considering what others have discovered about "smart" devices precisely because their traffic was not encrypted. E.g. https://news.ycombinator.com/item?id=6759426

Personally, I'm for HTTPS connections to things like government websites (which is what this article seems to be mostly about), but against "HTTPS everything" in the way it's going to be implemented.


If you want to avoid telemetry, you need to use open source devices. Prohibiting HTTPS for everything non-government is not a way to go.


> but against "HTTPS everything"

You being against HTTPS everything is the same as being in support of MITM attacks somewhere. I am curious: when is that allowable?


Forcing the use of encryption puts a constraint on the minimum level of processor power and memory needed for HTTPS capable IoT devices. Doing a TLS handshake with 2048-bit keys can be problematic with current low end processors. Simple devices with no security concerns can be given a web interface with cheaper hardware so long as browsers still work with plain HTTP.
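If you want a rough feel for that cost on a particular board, and an openssl binary is available for it, the built-in benchmark gives a ballpark for the private-key operations in the handshake:

    openssl speed rsa2048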


Even the ESP8266, which you can buy for a couple of bucks in bulk, can do TLS. Is there really a need to support lower-end chips?


I own my computing devices and I should be able to control the traffic they create. Encryption must not prevent me from doing that.


Encryption does not prevent you from doing that, just everyone else.

Of course, with non-free software and walled gardens, that might involve some amount of reverse engineering, injecting a CA certificate into a trust store so you can run a MitM proxy, or doing something to bypass key pins, but that's never really stopped anyone from finding out what an application is sending on the wire.

You acknowledge that there is a certain amount of traffic that ought to be encrypted, so you really need a solution for all applications either way.


Effectively, I feel like it does prevent you from doing that, because of the reverse engineering required. The time multiplier between engineering and reverse engineering is too large.

Who's going to spend the time hacking through {random Chinese smart lightswitch clone #8392727} that's sold in small volume?

There's going to need to be a legal "right to decrypt traffic" on black boxes, if we're serious about this.


And that's where we run into problems. How do we make it so that You can decrypt the traffic from your devices, but random hackers, your ISP, the NSA, etc can't? It's the same arguments against special decryption keys for the Government - a backdoor for one entity can be exploited by other entities.


> How do we make it so that You can decrypt the traffic from your devices, but random hackers, your ISP, the NSA, etc can't?

The suggestion made at https://news.ycombinator.com/item?id=13303650 of terminating TLS at the border addresses this --- traffic on the public Internet is encrypted, but is decrypted in the private local network. In some ways it is similar to a VPN. I run a filtering/adblocking proxy that works in the same way.
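A crude sketch of that idea with socat, terminating TLS at a box on the edge of the LAN and forwarding plaintext to the device (the cert paths and internal address are hypothetical; verify=0 here just skips client-certificate checks):

    socat OPENSSL-LISTEN:443,cert=border.pem,key=border.key,verify=0,reuseaddr,fork TCP:192.168.1.50:80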


Any pointers on what the encapsulation for that would look like? It seems like one good option, but I'd say it's only feasible if it doesn't require work on the part of the manufacturer.

My other thought was just mandating a method of loading CA certs onto all IoT devices using an open standard connector. If the owner so chooses.


In fact, injecting a CA into an embedded light switch is borderline impossible. At least it is much harder than installing a user CA on your phone.


> I own my computing devices and I should be able to control the traffic they create.

And then you are advocating for MITMing them, instead of directly controlling what traffic they create.

If you really want to control them, you should be advocating for open source and the end of DRM.


I do advocate for open-source, but that is often not a practical solution. MITM is more powerful.


Just install your own root certificate on the devices you own and do MITM analysis with it.


And you can still MITM the traffic; you just have to install your MITM proxy's cert on the device you want to MITM.


This only kind of works. Apps can embed their whole certificate chain and ignore the system one. I don't disagree we should have HTTPS everywhere, but for reverse engineering it does make things harder.


Since you own the computing device and the connection, it's theoretically possible to read the session encryption keys from its memory.

This may be really hard in practice though.


You can pretty easily. Just use a proxy that does TLS interception. Not a big deal these days.
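For example with mitmproxy (the port and file name are arbitrary): run it, point the device or app at it as an HTTP proxy, and install the mitmproxy CA certificate on the device:

    mitmdump --listen-port 8080 -w device.flows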


If your device connects to a third party, and you can't change this behavior, do you really own that device?


Idea: HTTPS-terminating home routers with an open standard would be a solution for IoT devices, maybe something like UPnP?


This objection has always seemed pretty weird to me.

Of course you can type HTTP commands character-by-character into a terminal. You just have to use a TLS-aware tool to do it.

Meanwhile, you can't really just type HTTP commands to a server without tooling, because a whole bunch of TCP is happening behind the scenes. Why is "telnet" OK, but OpenSSL "s_client" isn't?


You can't type any lines beginning with Q or R with s_client.


You can use the -quiet or -ign_eof options to disable this.
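For example:

    openssl s_client -quiet -connect news.ycombinator.com:443

-quiet implies -ign_eof, so lines starting with Q or R are sent through instead of being interpreted as commands.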


You can also use socat (the multipurpose relay) to provide a bare TCP socket to which you can connect, even using telnet.


Then it should also be noted that you really shouldn't use telnet to interact with HTTP or SMTP either. Telnet has issues if you send control characters, which wouldn't be used for either protocol, but might exist in embedded data you might want to send or receive. Just use netcat.
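For example, against any plain-HTTP host:

    printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' | nc example.com 80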


I guess, but it's hard to think of an HTTP request you can't reasonably make with telnet, while s_client apparently won't let you use the Referer header.


I got you bro!

    $ openssl s_client -connect localhost:443
    CONNECTED(00000003)
    ...snip...
    lots of cert info
    ...snip...
    GET / HTTP/1.0


As others have pointed out, there are tools for creating a TLS socket, not much harder than nc or telnet. However, HTTP/2 will be a totally different game.


Even with HTTP you usually don't use netcat beyond playing with simple GET requests. Most web app developers and hackers use developer tools available in their browser, libraries for their favourite language (urllib for Python, LWP for Perl) and specialized command-line tools (curl).


fully agree, but I think the point was around the educational value of literally being able to handcraft HTTP requests. I learned a ton back in the 90's doing exactly that for various plaintext protocols (FTP, SMTP, POP3, IMAP, HTTP).

Yes, there are tools that let you inspect them, but there's something about being able to walk around right at the protocol level to understand its nuances (e.g. CR/LF issues with HTTP).

However, all of that being said, I'm sure the real old-school hackers think all this PHP/Python/Perl mumbo jumbo obscures the real C/C++ code which their interpreters actually drive. And those old-old-school hackers think those C/C++ guys are obscuring their assembly code... okay, I kid, but you get the point. We all deal with abstractions at some point. Perhaps in time HTTP/2 tooling will improve, and my concerns will vanish as well.


FWIW, OpenBSD's netcat supports creating a TLS socket. I'm not sure it will work on all Linux distributions since it depends on libtls/LibreSSL and most ship with OpenSSL.
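Assuming your nc is one of those libtls-enabled builds, TLS is just a flag:

    nc -c news.ycombinator.com 443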


Learning a field should get easier the more we collectively know about it - the information should be structured better. This is particularly true of software, where we decide how to structure it. If tech is getting harder, this is our failing, not an inevitability.


Other way round: the more structure we build, the more there is to learn. The blank-slate nature of software is particularly problematic here; people end up proliferating solutions faster than they can be learned.


You could type them over openssl s_client instead.


Indeed, the openssl tool is a veritable swiss army knife for PKI crypto.

It's a netcat replacement:

    openssl s_client -connect www.google.com:443
while also providing information on the TLS handshake that's useful for debugging (like the server's certificate chain or its list of trusted CAs for client certificates).

There's dozens of other subcommands to do useful things like decode certificates (x509), generate keys (genrsa/gendsa) and create certificate signing requests (req), just to name a few.
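A few of the common ones (file names are placeholders):

    openssl x509 -in cert.pem -noout -text          # decode a certificate
    openssl genrsa -out key.pem 2048                # generate an RSA key
    openssl req -new -key key.pem -out request.csr  # create a certificate signing request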


It's not a great netcat replacement; for one, it's not binary transparent (e.g. an SMTP "RCPT TO" causes a rekey due to the pattern "\nR.*\n"). The command line usage is atrocious too.

That said there are many good alternatives (ncat, telnet-ssl, etc), and eventually one will gain the popularity and ubiquity that nc, curl, and similar tools did before them.


openssl s_client is really valuable if you regularly work with minimal container images that don't have curl, wget or even ncat. openssl(1) is almost always there.


The bigger issue is hardware / software that manufacturers won't update. HTTPS best practices often require servers to drop support for old protocols.

So if you pull out a 10-year-old Palm Pilot and try to go to HN, it won't work because of its outdated SSL/TLS support.


Indeed, computer history museums are only going to be able to show the older exhibits actually working - cloud-backed gadgets will just be objects to look at while you imagine what they used to do.


You might have to do some custom work to make some of the "history" accessible.

For example, Stanford's CS144 class uses a patched version of the Linux kernel to enable people to create their own TCP/IP clients[1]. I'm sure that if stuff like this becomes really inaccessible for newbies, similar modifications will be done for other applications to allow simpler concepts to be taught and explored.

[1] http://web.stanford.edu/class/cs144/assignments/ctcp/assignm...


I think that's more a tooling problem. Making tech accessible to someone new is really important.


Job security.



