This hacker might seem shady, but throwing him in jail is bad for everyone (washingtonpost.com)
198 points by Fourplealis on Sept 23, 2013 | 202 comments



If you visit an internet cafe and someone's forgotten to log out of their bank account and you fiddle with it, that's probably a crime, since in nearly all cases they didn't intend to leave it open to you. We can surmise this from the fact that the banking website requires a password to protect the account holder, which is evident from the clearly visible "log out" link, the fact that the site is served over HTTPS, and the normal convention that banking information is private.

Now imagine that you come upon a computer and that you click on one of the favorites. It's a banking website. No password, no HTTPS, no access controls at all. Who is responsible for the security breach? You or the bank?

I would argue that if there are no technological access controls in place, there is no such thing as "unauthorized access." You can't be unauthorized if there is no authorization. The default on the internet is "can access."

They're prosecuting him for the digital equivalent of walking down a street and taking pictures of houses which don't display numbers on their mailbox.


> You can't be unauthorized if there is no authorization.

This is really the main point to me and I'm really confused as to how the law doesn't agree with this. How can you claim unauthorized access to something when there are no systems in place to grant or deny authorization? Comparing this to walking into someone's home who left the door unlocked (as someone in this thread has done) is bogus to me. Private property is private property and social norms (as well as the law) dictate that you don't just stroll into someone's home even if the door is open. The internet does not work that way and never has.


> Private property is private property

Except in many cases the private property is being made accessible. Imagine going to an open house and the owner accidentally left the basement unlocked. You open the door and walk down, then get arrested for breaking and entering.


More applicably, imagine there is no door, not even hinges where a door should be; just an opening to the basement.

But you get arrested for walking down there anyway. Then the police tell you you're under arrest because "The owner didn't intend for you to go there."


If you wander in shouting "Lol guys, we totally shouldn't be allowed in here! Their security is awful! Quick, take pictures of all their documents and we'll post them to a news site" then you've got a more reasonable analogy.


Well, all these analogies are interesting, but hackers don't get there by accident. They don't just spot the door, because these doors are invisible to a regular visitor, right? You have to actively look for "doors", which implies a premeditated intent to find the "secret doors". And you also know very well that the owner didn't want you in there...


Both of your scenarios are inapplicable because physically entering a property is totally unlike communicating with a public machine in the way it was intended to be communicated with.


There is a system in place. It's called HTTP status codes.


I wonder if there's some way to make a useful legal argument along the lines of: since there's a well-defined HTTP status code for "Unauthorized" (401), it's clear that any request responded to with a status code of "200 OK" is, by definition, being declared by the webserver (and its operators) as "authorized".
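(To make that concrete, here's a rough sketch of what a client actually sees, assuming a made-up URL and the third-party Python requests library; the status code is the server's own statement about the request:)

    # Hypothetical sketch: the status code is how the client "hears" the
    # server's decision about a request. The URL is made up for illustration.
    import requests

    resp = requests.get("https://example.com/some/resource")

    if resp.status_code == 200:
        print("200 OK - the server handed the resource over")
    elif resp.status_code == 401:
        print("401 Unauthorized - the server is demanding credentials")
    elif resp.status_code == 403:
        print("403 Forbidden - the server is refusing outright")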


> If you visit an internet cafe and someone's forgotten to log out of their bank account and you fiddle with it, that's probably a crime.

That can be construed as impersonation without unauthorized access, which in some jurisdictions is illegal.

But that is not what happened with AT&T, which is more like an open brothel with conference rooms and private bedrooms. They let anyone who showed up in one type of attire, with some numbered badge, come into one of the reserved conference rooms. In other words, the security person did not ask for an ID or a password to enter. The fault lies with the brothel, not the visitor. For all we know, anyone could have come in wearing that attire, with a matching badge, by coincidence (maybe there was a costume party, who knows; either way, the brothel did not do a good job of securing the reserved conference rooms).

> No password, no HTTPS, no access controls at all. Who is responsible for the security breach? You or the bank?

The bank: they are not complying with the legal statutes, and are more than likely violating their own privacy policy, if any exists.

> I would argue that if there are no technological access controls in place, there is no such thing as "unauthorized access." You can't be unauthorized if there is no authorization. The default on the internet is "can access."

That is correct. In that analogy, it would be an open business, like a store or mall. It follows the rules of private property, with some business statutes on top, but overall, since it is an open-doors business, there are no authorization requirements.

> They're prosecuting him for the digital equivalent of walking down a street and taking pictures of houses which don't display numbers on their mailbox.

No. As in the example above, they are charging him with wearing an attire with a numbered badge, coming into the reserved conference room, and learning the attendees' names or addresses (which should not have been there in the first place, especially with no security protocols). The worst they could charge him with is impersonation. However, what could incriminate him is if the pages he visited clearly displayed or linked a Terms of Service or EULA that details this scenario and he violated it in some way.


That's an actual CFAA crime because you have literal unauthorized access to the laptop. No further explanation necessary.


> I would argue that if there are no technological access controls in place, there is no such thing as "unauthorized access." You can't be unauthorized if there is no authorization. The default on the internet is "can access."

Or is it like walking into someone's private home because they left the door open? Or merely unlocked?

The law likes to operate on analogies, because analogous situations are ones for which we have precedent, and precedent makes the law predictable. The sad thing is, precedent goes back to the pre-computer era, too, and isn't necessarily overturned just because new technology with new social expectations is involved. Maybe in a couple generations.


I don't think it is like walking into a private home because the door is unlocked... this is more like someone walking into a store, looking around, and then getting in trouble for looking at a specific display shelf that was in the back corner. The shelf wasn't labeled as off limits; you were just wandering around where you were supposed to be and happened to see it. The store can't get mad and say "well yeah, but we put it in the back corner where most people don't go... and we put sensitive stuff back there! How dare you look at it!"

Well it was right in the same store you invited me in to! There was no sign or lock or anything saying not to look at the shelf.

This was a PUBLIC website... you are supposed to be able to visit it. If you make a request to a server without providing authentication and it returns data, that is not your fault. That is what you are SUPPOSED to do to servers. If it asks for authentication and tells you you are unauthorized, but you brute force the password or find an exploit, then THAT is a crime. There was no authentication in this case.


> This was a PUBLIC website... you are supposed to be able to visit it. If you make a request to a server without providing authentication and it returns data, that is not your fault. That is what you are SUPPOSED to do to servers. If it asks for authentication and tells you you are unauthorized, but you brute force the password or find an exploit, then THAT is a crime. There was no authentication in this case.

Unfortunately none of these excuses are valid. He knew he was accessing something he shouldn't have been. If he had done it once or twice and then stopped, that would be one thing; intent is a major part of the law, and he intended to exploit something he knew he should not have been accessing. That is why he is being found guilty.


If I find a $50 bill on a sidewalk I can INTEND to steal it as much as I want. But no matter how badly I WANT to steal it, I cannot, because at that point it's not a thing that can be stolen. There is no way to trace it back to its former owner, and as such, the first person to find it is legitimately the new owner.

Weev might have said that he "stole" the information or that he "intended" to perform an unauthorized access, but ultimately that doesn't matter. There was no access control to prevent the internet's default of "everything is visible", so that's precisely what happened. It's not a hack no matter how badly he or the government want it to be. Intent matters not one iota.


Of course intent matters. If I run over someone with my car and kill them and it was deemed just a terrible but unfortunate accident, that is 100% different than if I drove over them because I intended to run them down and kill them.

The same applies to this case. He intended to access something he knew he shouldn't have had access to. Thus why he is guilty.


Yes, but in your example (where someone is killed) there is rather obviously an underlying act that may or may not be criminal depending on the intent. There are infinitely many acts that cannot be considered crimes regardless of how malicious the intent behind them may be.

Furthermore, just because someone feels that they have done something wrong does not make what they have done a crime. The law also must consider that action to have been illegal.

Hopefully, the appeals court will determine that accessing a public unrestricted URL cannot be considered illegal, regardless of the mindset of the person who might choose to access it.


Depending on what you find and where you find it, actually, you may have a legal obligation to attempt to return it to the owner. The law is not quite as simple as finders, keepers.



Ahem, there are no less than three examples in the wikipedia page you're trying to cite that back me up:

and cases where the circumstances were held to show no larceny:

  R. v. Wood (1848) 3 Cox C. C. 277 (banknote found on open land)
  R. v. Dixon (1855) 7 Cox C. C. 35, 25 L. J. M. C. 39 (lost note without mark)
  R. v. Shea (1856) 7 Cox C. C. 147
  R. v. Christopher (1858) Bell C. C. 27, 169 E. R. 1153 (unmarked notes and purse found in public place)

I used a $50 bill (which is implied to be unmarked) purposefully.


If we want to stretch analogies beyond sense, how about this.

You walk into a cake shop that has cupcakes with names written on the icing:

You say "Can I have a cupcake with 'Iain' written on it?" They say "200 OK, here's a cupcake with Iain on it."

You say "Can I have that wedding cake?" They say "401 Unauthorized, Sorry that's someone elses' cake." You don't get a wedding cake.

You say "Can I have a cupcake with 'Alice' written on it?" They say "200 OK, here's a cupcake with "Alice' written on it."

You say "Can I have a birthday cake?" They say "402 Payment required, That'll be $15" You don't get a birthday cake.

You say "Can I have a cupcake with 'Bob' written on it?" They say "404 Not Found, sorry we don't have any cupcakes with 'Bob'."

You say "Can I have a cupcake with 'Carol' written on it?" They say "200 OK, here's a cupcake with 'Carol' in it."

You say "Can I have a cupcake with 'Dave' written on it?" They say "200 OK, here's a cupcake with 'Dave' on it."

You walk out with 4 cupcakes. Then the cake shop owner comes out and says "You stole the three cupcakes! I didn't intend for you to have them!"

Did you do anything wrong? Do you deserve to go to jail for it?


> Did you do anything wrong?

Possibly, it depends on intent. Add in:

    You: Hahaha, guys I can get anybody's cake!
    You: Looool their security is awful!
    You: Hahah, we could short this company's stock!
Then you clearly knew what you were doing and therefore did something wrong.


But an equally valid interpretation of what's going on is:

Cool, free cupcakes! They want you to pay for birthday cakes and pre-order wedding cakes, but they'll give you any cupcake you ask for if they've got one available!


Do the IRC transcripts sound like he thought that this information should have been shared by the server? Your interpretation would have weev thinking that AT&T intended to make this information public, that having it public was fine, and there was no complexity in what he did to get it.


You're right - weev was being a dick, and he knew he was at the time.

BUT…

I personally think AT&T should also be held to account for their part in what happened. They put all that data up on the public internet, with no authentication required to get it. I think they're at least as culpable here as weev is. (And I don't think _either_ of them should get off scot-free - they both played fast and loose with other people's data.)


If knowing that you are doing something immoral makes it a crime why isn't all of Wall Street in prison?


1) Generally they do things that are harder to prove illegal, harder to show they knew were wrong, and they don't send messages in IRC channels 'joking' about shorting stock when releasing bad news. In essence, they are smarter about it.

2) Some are.

3) Not everyone involved in investment is doing something immoral.


I know the US has decided to start prosecuting thoughtcrimes, such as jokes on FB, but that's actually unconstitutional. Accessing a server is not a crime, the user agent is not meant for authorization, and what he did was immoral, not illegal. The only difference between what weev and Aaron Swartz did is the type of content downloaded and the quality of the person downloading.

You're arguing to put this douchebag in prison, but not for an actual crime. Remember that the next time they use the CFAA to crucify someone who doesn't deserve it.


> Or is it like walking into someone's private home because they left the door open? Or merely unlocked?

It's more like if you were to walk into a retail establishment where the employees left the door unlocked after heading home for the day.

You can't buy anything because the cash register is locked, and taking something would clearly be stealing, but if a posted sign says "we're open", can you be faulted for looking around?


Correct, and thank you.


Yeah that's the immediate counter analogy to what I'm suggesting.

I think the way I would go about arguing against it is that people on the street/sidewalk have no expectation of privacy. There are literally no access controls of any kind. Anyone can walk on the street; billionaires and homeless alike. There are no societal conventions that privacy is assured on the street and if you end up in someone else's picture it's your fault, not theirs.

Houses are not the street. They are private property. We do have a reasonable expectation of privacy there (NSA notwithstanding) and a part of privacy is access control. So the right of the owner of a house to control access to his house is fairly well understood and accepted even in the case where a house might be unlocked or a door left open.

The real question is this: Is the internet like the street or a house? The answer, in my opinion, is that "it depends" because websites can act both ways depending on how they are designed and implemented.

HN is basically a street in that it has no access controls to view content. Very nearly every page on HN can be accessed by the public (linked to or not) without being logged in. The URL of your comment is https://news.ycombinator.com/item?id=6434945 for which I didn't have to type in a password. What about comment https://news.ycombinator.com/item?id=6434944 or https://news.ycombinator.com/item?id=6434946? Should they be "protected" by virtue of them not being displayed on the webpage right now?

My credit union's website is a bit of public street and a lot of house. I can view their promotional materials without any authorization but in order to get to the good stuff I have to enter both a username and a password, then pass a captcha. That is an access control.

What is the case with the AT&T website? Did they do anything to secure the content with a technological access control like a username/password? Did they filter the service such that the webservice would only return an email address if it was accessed by the same MAC address of the iPad that was sold to the customer? No, they did none of these things. Their only "access control" was a user-agent string which isn't guaranteed ANYWHERE to be accurate.

EDIT: changed a couple of words


I don't understand your argument. You seem to agree that the reason the unlocked house is not like the street is shared social conventions. That house across the street is definitely private property whether it's signed that way or not, and I'm expected to know that because, duh, it's a house. At least, that's how I understood this:

> So the right of the owner of a house to control access to his house is fairly well understood and accepted even in the case where a house might be unlocked or a door left open.

Then you discuss the technical and interface features of websites that differentiate them as analogs of houses and streets, respectively, like whether they have access control (locks). But we just agreed that the technical and design features of the door aren't what make a house not like the street. The differentiating feature of a house is not the security of its door, or even whether it has one; it's that it's a house and we're expected to know it's private. I don't get how that difference is analogous to access controls on a website. What's the social convention that's appropriate for determining whether a piece of information on the internet can be fairly accessed or not?

To be clear, I'm not saying there aren't good answers here (e.g. a house has walls which imply privacy, so you need some analog for walls on your site [1]). Or you could argue that the analogy is bogus (e.g. houses and streets just aren't like the internet). Or you could even argue that technical safeguards are the analogous social convention to private homes (I don't get it, but it's noncrazy). Or you could argue those conventions simply haven't been established yet, and that we should consider there to be no such thing as unlocked houses on the web. I'm just saying you haven't made any of those arguments.

[1] completely off-the-cuff and, like my other suggestions here, in need of some substance.


Basically I'm trying to draw out the differences and similarities.

In meatspace, private property is default-closed (with certain exceptions), but with some allowance for acting in good faith. For example, I can walk onto your land to go up to your front door and knock. You could then tell me I need to leave or you'll call the police. This is how it's worked for a long time and thus we think it's normal. You have this right even without building a fence around your property. Again, default-closed.

On the 'net the same rules of private property don't apply because the default on the 'net tends to be default-open. What I mean by this is that the simplest configuration for any webserver tends to have no access controls. So it'll serve up whatever it can to whoever asks. Furthermore the default on the internet for a long time was everyone can access everything since it was originally designed for precisely that purpose: sharing knowledge. The internet defaults to a street.

If you want to make your internet site NOT like a street (which is what it defaults to) you have to take steps to make that happen because HTTP doesn't have the mechanisms built in to do so. You have to build your access control on top of HTTP. If you do not, I would argue that we are right to assume that you meant for it to be a street for two reasons. First is that's how HTTP works and we've got some 20 years of history backing this up. Second is that to argue otherwise would place an incredible burden on everyone to have to divine the intent of the person/organization that served up the page.
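As a rough sketch of what "taking steps" looks like in practice (hypothetical credentials and paths, Python standard library only; not a claim about how any particular site does it):

    # A resource is default-open until the operator explicitly closes it off,
    # e.g. by demanding credentials and answering 401 otherwise.
    import base64
    from http.server import BaseHTTPRequestHandler, HTTPServer

    EXPECTED = "Basic " + base64.b64encode(b"alice:secret").decode()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("Authorization") != EXPECTED:
                self.send_response(401)      # explicit "you are not authorized"
                self.send_header("WWW-Authenticate", 'Basic realm="private"')
                self.end_headers()
                return
            self.send_response(200)          # credentials matched: authorized
            self.end_headers()
            self.wfile.write(b"the private page")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()

Until something like that is wired up, every GET simply gets a 200 and the content.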

What I'm getting at is that arguing weev "should have known better" strikes me as really nuts. In meatspace it would be like secretly passing a new law that divvied up all the roads among the landowners that border them, so that I own the street between my lot lines and up to the middle of the road. Nobody knows about this, so everyone keeps driving and nobody's the wiser. Then a real douchebag drives down the road in front of a rich guy's house. He hates it, so he calls the cops, and because he's rich and influential the DA manages to dig up this secret law and prosecute the douchebag with it.

If that law were to become non-secret and enforceable it would turn the world upside down in the US as nobody would be able to drive anywhere, walk anywhere, or generally do anything without the express permission of all the millions of people who now own the streets, sidewalks, etc. Even if you live in a big city and you could take the subway (which perhaps is still public) you wouldn't be able to walk to it unless the entrance happened to be on your land.

I think this would clearly be insanity as it would turn however many hundreds or thousands of years of convention on its head. And to me, this is what the prosecutors are trying to argue. I understand that they probably don't really understand the technical aspects of it, but to me it's really clear and their arguments sound like nonsense. But that's because we're looking at it from completely different viewpoints.


It's like walking into someone's home that had signs up over a bunch of open doors along a wall saying 'come in, all visitors welcome'. After wandering around a bit, you notice another door in the same wall has been left open, but there is no sign. Curious, you look in.

BANG. Jail Time.


No, because AT&T is a open Business, which needs to be in business zones, following business statues, not personal computers connected to ISP servers. It's more like the brothel analogy I just made: https://news.ycombinator.com/item?id=6435769


I think it's more like walking onto your neighbor's private land when they don't have fences or a "keep-out" sign, but also don't have any obvious sign allowing people in either. Still a crime, but not particularly severe or abhorrent; whether it merits serious punishment probably depends on particular details.


Read above. Linked just in case: https://news.ycombinator.com/item?id=6435845


Reading this article http://www.theverge.com/2013/9/12/4693710/the-end-of-kindnes... makes me feel not too terrible that he's being thrown in jail.


Weev's a right shithead, you're absolutely right.

I still bailed him out of jail for the time leading up to and during his trial. Why? Because UNPOPULAR SPEECH SHOULD NEVER BE CRIMINAL, no matter how revolting. Indeed, it is the unpopular and revolting stuff that needs the most defending:

"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all." —H.L. Mencken


By "unpopular speech", do you mean the AT&T bit, or the harassment bit? If the latter, I disagree. A free and fair society can certainly draw a line between "unpopular speech" and "criminal harassment."

If I were to threaten to murder you, you wouldn't expect the police to say "Eh, nothing we can do, he's got a right to free speech. Call us back after he shoots you, you'll have a case then."


> If I were to threaten to murder you, you wouldn't expect the police to say "Eh, nothing we can do, he's got a right to free speech. Call us back after he shoots you, you'll have a case then."

This is the problem with thought experiments regarding crime: they always make the facts 100% certain, when in real life, the facts are never 100% certain. If we were to rephrase your thought experiment, it would be:

"Some guy said some other guy was going to kill him. Let's throw that some other guy in prison for a while, just in case."

Not quite as clear-cut as you think, is it?


"Not quite as clear-cut as you think, is it?"

Weev did literally threaten to murder Kathy Sierra, and he then bragged about doing so on multiple public websites. So, yes, it is quite clear-cut.

(And, please, at least think for a minute before you try to rebut by saying that he didn't mean it, so it shouldn't count.)


Were you at the keyboard when he was typing this message?


Why, you're right. I don't know for sure that the NSA didn't fake dozens of threatening emails from Weev, and several forum posts, and then used mind-control satellites to keep him from posting that it wasn't him or telling any of his meatspace friends that it wasn't him, and then used mind-control satellites again to make him brag in person to that reporter. Oooh, or maybe they used mind-control satellites on the reporter to make him slander Weev's good name, and MCS once more to keep all the people in the article from revealing the truth!

Give it up, man. The guy whose image you're trying to clean prefers it nice and dirty.


If only there were some way to determine what was meant. Barring that, maybe it's best not to go around saying such things if you can't even convince one out of twelve people that you weren't serious.

Regardless of whether you truly intend to carry out the threat, the threat itself is a form of violence. It imposes your will on an unwilling subject.

If you rob a bank with an unloaded gun, you can't claim afterwards, "oh, they were perfectly safe and didn't actually have to give me the money, so therefore it wasn't a crime."


No, you missed my point. My point is that anyone can accuse anyone of a crime, so the law has to reduce the incentive for someone to make something up to get attention or revenge. "He said he would kill me," is nearly impossible to prove, so if you punish it severely, you create a cure that's worse than the disease. If you restrict it to certified letters, though, then you have a better balance.


Don't we already have laws against filing a false report? I think I agree with you, but I'm not sure that the problem you described exists.


There's a huge gap between unpopular speech and harassment, which is illegal, and in many cases criminal. I agree he should not be in jail for the crime he was convicted of, but he almost certainly deserves to be there otherwise.


I have no information about the case at hand, but that could be the reason why the prosecutor went for hacking charges instead of harassment.


sneak, you're one of the few "true Americans" (as in when people talk about upholding freedom above all else), and unless I remember your posts wrong, you've had to leave the country to feel free. It's truly a sad state of affairs.


I think most of society can live with death threats being criminal.


Sir Thomas More: What would you do? Cut a great road through the law to get after the Devil?

William Roper: Yes, I'd cut down every law in England to do that!

Sir Thomas More: Oh? And when the last law was down, and the Devil turned 'round on you, where would you hide, Roper, the laws all being flat?


Woah, I am still against his prosecution but I don't really feel sorry for him now. Some people are just sick, why would he do that to a person for no reason?

FTA: "His rise as a folk hero is a sign of how desensitized to the abuse of women online people have become," Sierra said. "I get so angry at the tech press, the way they try to spin him as a trickster, a prankster. It’s like they feel they have to at least say he’s a jerk. Openly admitting you enjoy ‘ruining lives for lulz’ is way past being a ‘jerk’. And it wasn’t just my life. He included my kids in his work. I think he does belong in prison for crimes he has committed, but what he’s in for now is not one of those crimes. I hate supporting the Free Weev movement, but I do."

She is a much better person than I am.


From what I've seen of her on the internet, Kathy is a very nice and decent person. What happened to her is awful, and I'm glad she's still around, albeit with a reduced online presence. Shame on us all.


Of course he deserves to be in jail, but he should be in jail under harassment and identity theft laws instead of the hacking charges.


Exactly! In the United States, there are protections for that: libel and slander laws.

Which is what he should have been charged under. Harassment laws could also apply, depending on the jurisdiction.


Throwing him in jail is an awful outcome for justice and for the precedent it sets. I'm very much hoping he walks out of court a free man. Once outside the court, he could get hit by a bus as far as I'm concerned.

When we want weev free, we're fighting for law and society, for just principles, not for the individual.


I was with you up until wishing another human dead.


I may not have formulated that as subtly as I meant to, since English is not my primary language. So my apologies for that. I do not wish him dead. I mean that I'll fight for him in this case since it's important for society, but since he's such an awful person I would lose all interest in him after he's out of court.

Read it as a figure of speech, since we technology people always speak about people who might get hit by a bus when thinking about the future of products.


No, your meaning was clear. That guy was either trolling by deliberately misinterpreting you, or needs English lessons himself.


To be fair, there is an important difference between wishing someone got hit by a bus and simply not caring if they did get hit by a bus. I, for one, would not want him dead, as death is rather an overly severe punishment for his actual crimes. I would probably even think it somewhat unfortunate if he actually did get hit by a bus, simply because humans dying in general is unfortunate. But, were it to happen, an honest evaluation of my feelings leads me to predict that I would not weep.


We could also wish that Weev or people like him did not exist, without actually wishing that anyone now alive become dead, or predicting what our feelings might be were such a thing to happen.


What is the greatest harm that someone has caused you? Just curious.


He belongs in prison for a long time, but not for what he's currently being prosecuted for. It sets a terrible precedent, and I'm sure that, given his personality, he'll be prosecutable for something again soon enough.


If memory serves, he was actually busted for drug possession not long after the gawker article went up, but he's not actually being prosecuted for that is he?


You're missing the point. The point is the government is charging him under the CFAA and that will set an extremely dangerous precedent.

If they want to charge him under any other numerous crimes (data theft, attempted extortion, being an asshat) then I wouldn't have a problem with it either because those are things he's guilty/might-be-guilty of.

Hacking and violating the CFAA is not one of his crimes.


He did something equivalent to scanning a public bulletin board for information that AT&T put there through what seems like sheer incompetence. There does not seem to be any hacking involved unless you also classify google as an automated hacking engine. I feel bad that someone might get jail time and a felony conviction for crawling a public forum.


And I wish the Westboro Baptist Church and a variety of other people who do objectionable things could be thrown in jail, but not by abusing laws and setting terrible precedents for further abuse.


The reason to defend weev in this case is to ensure that the specific act for which he is being prosecuted is not treated as a crime in other cases.

If weev harassed this woman in the manner described in the article you reference, he probably should be prosecuted for that. But it's not ok for prosecutors to put him in jail for something that should be perfectly legal, just because they can't (or didn't) put him in jail for something else.


The acts described in the article probably should have put him in jail. The latest is unrelated though.


That link is baffling, the first few paragraphs sound like bad things happening but they don't form any sort of coherent narrative and the link to the New York Times article is a story about someone else entirely.

It has the form of an outrage article without any actual content, as if someone fed Tumblr and Vice magazine into a Markov text generator.


Popular people -- and popular rights -- can be defined as "the ones that don't need defending."


It's worth reading the criminal complaint and indictment (https://www.eff.org/cases/us-v-auernheimer) to get some background. In particular: the discussions of using the email addresses for a phishing scheme, using them for spam, shorting AT&T stock and profiting off the data release, setting up WiFi routers so they can blame it on a third party, discussing how this was a federal crime, and how to spin themselves as a legitimate security organization. These things make it really hard to view weev as a genuine security researcher who was prosecuted for no good reason.


It's not worth reading that, because it's taken completely out of context. As badly as it's taken out of context, you're actually taking it even more out of context in your comment here. Weev actually said that shorting stock would be illegal, and said something to the effect of "if you do it, I don't want to know about it" and discouraged many other "suggestions" from people who didn't appear to have any real part in it, but were cheerleading.

In any case, that is very typical IRC conversation for a large portion of that subculture. They joked about doing these things, but they didn't actually take steps to do them. He considers himself a satirist, so it's not much different than some comedians talking nonsense over beers and having it show up in an indictment.

One of the chatters observing said they should post the list to full-disclosure. Weev replied saying "no, don't do that, its potentially criminal." He then talked about how he gets to spin it in the media and he's won. That says pretty clearly that he was only out to make a scene, which is what he has always done.


The entire IRC conversation comes off as jest - nothing actionable from what I read (besides running the scripts). Even so, since when did it become a crime to talk about doing something [illegal]?

Also, Weev himself says that he is unwilling to short AT&T's stock - I think he understood the ramifications that would have.


So because there was some thoughtcrime regarding actual criminal activity we should accept the prosecution for scraping the website? No. Prosecute him for identity theft if and after he commits it.


Everyone throws out analogies about walking into unlocked houses and such. Those are fairly poor analogies, so let me offer one which I think is far better at conveying what really happens.

Imagine you walked into a public library and struck up a conversation with the librarian:

        You: Can you tell me general information about this library?
  Librarian: Certainly, this library was built in 1990, has a million
             books on its shelves, and...
        You: What are the hours?
  Librarian: Monday to Saturday, 10AM to 8PM. Sunday, 10AM to 5PM.
        You: Frothy bacon generates utilitarian synapses!
  Librarian: I'm sorry, that's not really a proper question I can help
             you with.
        You: Can I borrow book identified by ISBN 4961357406830?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 6498794651315?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 9840546790354?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 3168706780943?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 7893781056145?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 2764894617987?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 9764660911970?
  Librarian: Sure, here you go.
        You: Can I borrow book identified by ISBN 6666666666666?
  Librarian: Sorry, that book doesn't exist.
        You: Can I borrow book identified by ISBN 8669177714641?
  Librarian: Sorry, you've been requesting too many books lately.
        You: Can you let me into the Staff lounge?
  Librarian: Sorry, you'll need to show me your staff credentials when
             asking.
        You: Can you provide me with a list of all employees and their
             salaries?
  Librarian: Sorry, you are not allowed to have that information.
        You: Can I use the general conference room on the third floor?
  Librarian: Actually, that was moved. It's now on the second floor.
As you can no doubt see, these translate directly into HTTP requests:

  GET /
  200 OK - This library was built in 1990, has a million books...
  GET /hours
  200 OK - Monday to Saturday, 10AM to 8PM. Sunday, 10AM to 5PM.
  POST /frothy-bacon-generates-utilitarian-synapses
  400 BAD REQUEST
  GET /books/4961357406830
  200 OK - [contents]
  GET /books/6498794651315
  200 OK - [contents]
  GET /books/9840546790354
  200 OK - [contents]
  GET /books/3168706780943
  200 OK - [contents]
  GET /books/7893781056145
  200 OK - [contents]
  GET /books/2764894617987
  200 OK - [contents]
  GET /books/9764660911970
  200 OK - [contents]
  GET /books/6666666666666
  404 NOT FOUND
  GET /books/8669177714641
  429 TOO MANY REQUESTS
  GET /admin
  401 UNAUTHORIZED
  GET /employees/salaries
  403 FORBIDDEN
  GET /floor/3/conference
  301 MOVED; Location: /floor/2/conference
In both cases, we have a gatekeeper (librarian / web server) which is capable of responding to requests, can authorize various requests, can require credentials for sensitive requests, can limit the rate at which requests come in, can deny requests altogether, and can identify when requests for certain things have moved to new locations.

The librarian is smart enough to not hand out things like access to the staff lounge, a list of employees and their salaries, or even things like an arbitrary library member's borrowing history. The web server has been configured to not hand out things like admin access or other things which are deemed sensitive, but the owners of the web server have taken the position "Well, nobody's going to be guessing ISBN numbers, so we'll let anybody on the internet request the contents of those books."

When is the onus on the web server owner to configure their security properly? When is a "200 OK" response actually not okay? This is the "mind reader" aspect the article mentions.


So if I ask the librarian for a copy of the book with ISBN

    1; DROP TABLE books; --
is that okay because, technically, the server let my request through?


This is currently downmodded because people don't like the implication. And they shouldn't, because it quickly forces someone into either a) agreeing with the law or b) saying that SQL injections must be, ipso facto, legal.

Including ones like:

1 AND ("1" = SUBSTRING(select social_security_number from employees where employee_name = 'Angela Smith', 1, 1))

You can use variations on this to...

a) Ask our librarian for a series of about 50 books and hear whether or not she has them in stock.

b) Read Angela Smith's Social Security number right out of the database.

There apparently exist a lot of people on HN who would prefer to think that, despite my near-magical ability to correctly divine the SSN of any employee (or any other piece of data in the DB) with a SQL injection attack, the fact that I'm just looking at a book listing page in a totally authorized fashion means I must not be doing anything wrong.


  You: Can you provide me with Angela Smith's email address?
  Librarian: Sure, here you go.
Later,

  Librarian's manager: You weren't supposed to give out that information!
  Librarian: Oops. I had the wrong access rules.
  Librarian's manager: Let's call the cops on that guy. 
    It's his fault that you gave him the information he wasn't
    supposed to have.


As is often the case, and is often ignored, the key is intent.

There is a difference between:

    You: Can you give me the email address of user 50?
    Librarian: Sure, here you go

    Librarian: Oh balls, I wasn't supposed to hand that over, that could have been anyone!
And

    You: Can you give me the email address of user 50?
    Librarian: Sure, here you go
    You: Hmm

    You-irc: Hey guise! The librarian is giving out everyones email addresses, this is totally breaking privacy laws right? 
    You-irc: lols, I'm going to get all of them! This could be used for a massive phishing operation
    You-irc: or even make their stock price drop, we could short it

    You: Hey librarian, can you give me the email address of user 51?
    Librarian: Sure, here you go
    You: Hey librarian, can you give me the email address of user 52?
    Librarian: Sure, here you go
    You: Hey librarian, can you give me the email address of user 53?
    Librarian: Sure, here you go
    ...
    You: Hey librarian, can you give me the email address of user 1023821?
    Librarian: Sure, here you go

He didn't grab one or two, then send the information to AT&T to get them to fix it. He deliberately collected a significant amount of data he knew was personal information and gave it to someone else. That alone would be enough. If he just wanted to verify that the attack worked, get the code of someone else who gives you permission, show that they can be easily generated and you're done. You don't need more than a few to prove the point.

The service was clearly not intended to be a directory of email addresses for people to use. It was clearly there to return the email address to the user of the iPad with that ICC-ID (which, unlike my example, isn't obviously guessable).

I'm not going to say anything about the sentence, but I do think he was guilty.


This is the issue I have with all of this. Everybody is defending HOW he did what he did with no thought as to WHAT he actually did - as if it shouldn't matter.

He knew what he was doing was illegal and didn't care; he got caught and tried to justify his actions by blaming AT&T for having a badly configured server.

Not good enough for me and the jury agreed.


How he did it absolutely does matter. He did not know what he was doing was illegal because that is the expected interaction with an HTTP server. He certainly knew it was immoral but we give Wall Street a pass on that.

Suppose I write a scraper with user agent "I am a teapot" and I discover AT&T emits personal data when I access with that user agent. What is the arbitrary cutoff for number of things downloaded before I am a criminal?

There are in fact actual criminal charges that can be brought for identity theft, we don't need the US courts to be more aggressive with the CFAA by considering thoughtcrime in their deliberations.
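(For what it's worth, the hypothetical "I am a teapot" scraper above would be nothing more than this; the User-Agent header is an arbitrary, client-chosen string, so a server treating it as authentication is relying on nothing. The URL and parameter are made up:)

    # Sketch of the hypothetical scraper: the client picks whatever
    # User-Agent string it likes; the server cannot verify it.
    import requests

    headers = {"User-Agent": "I am a teapot"}
    resp = requests.get("https://example.com/lookup", params={"id": "12345"},
                        headers=headers)
    print(resp.status_code, resp.text[:200])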


Later, when I crack your bank password,

    Me: Can you provide me with all of guelo's money?
    Bank: Sure, here you go.
Also, when I approach your house,

    Me: I have these lock picks. Will you let me in?
    Lock: Sure thing, boss!


Well in a private by default world, browsing the internet just became one hell of a lot scarier. Any page you visit could become a felony.


Not so, because as others in this thread have stated, the key is intent.


Judging intent doesn't really work at scale. That's why we invented access controls.


Access controls are in place, but they aren't absolute. Hence why intent is key in this case, and why he was found guilty.


Ah wonderful, so now I have to worry about how my intentions might be perceived by the government when visiting a publicly accessible web page.

But the company leaking consumer information to the public without any proper security at all is not punished.


Yes. If you accidentally stumble upon something you shouldn't have, and you don't exploit it or sell it to someone when you clearly know you shouldn't, you will be fine. It is pretty straightforward.

Everyone here keeps purposefully ignoring intent, but in the context of the law this is impossible. So no matter how much you hate it, this isn't something that can be a binary yes/no illegal/legal question based on some computer response to your query.


He didn't exploit it or sell it and he's fucked. We're ignoring intent because a crime has yet to be committed. Intent doesn't matter without a crime. If intent is the only dividing line, you are in favor of thoughtcrime.


Well...his crime actually was that he intentionally accessed data he knew he should not have been accessing. That is a crime. Thus he was found guilty. I'm not sure why this is so hard to reconcile or is being purposefully ignored just because this crime is one of many that involves a computer.


Please show me the section of US legal code regarding intentional access of data one knows one should not be accessing; without mentioning trespass, which he did not do, and without mentioning causing a computer to act in a manner the owner does not desire, which he also did not do.


Sorry no legal code offhand, but you can surely break the law without trespassing and "causing a computer to act in a manner the owner does not desire". This shouldn't be hard to comprehend.

The law involves intent. They proved he intended to act in bad faith while gathering that data and he was rightfully found guilty.


The law requires intent and a crime. If you cannot tell me which specific crime, your argument is invalid.

Why you are having a hard time understanding that "we put him in jail because we don't like what he did" is wrong I have no idea. You must be a troll disagreeing on purpose.


His crime was obtaining information he knew he shouldn't have been accessing. He didn't get put in jail because someone didn't like what he did, he got put in jail because he broke the law.


That's not a crime, which is why I asked you to point out the appropriate legal code. You are now considered a troll. Have a nice day.


Actually it is. He was found guilty.


Oh, I see the problem now; you have no conception of how the law works and no inclination to learn it.


And we peer into the mind of a third party how?


There's a fairly large body of law that hinges on the intent of the individual.

http://en.wikipedia.org/wiki/Intention_(criminal_law)


> Later, when I crack your bank password,

    Me: Can you provide me with all of guelo's money?
    Bank: Sure, here you go.

For me this is rather a good argument against using home banking (which I indeed don't use for security reasons - and as a computer scientist I'm surely not technologically backward).

UPDATE: if money is lying around on the street you are not allowed to keep it (the same as I should not be allowed to keep the "money lying around on the internet"), but you can claim the legitimate finder's reward.


[deleted]



> UPDATE: if money is lying around on the street you are not allowed to keep it (the same as I should not be allowed to keep the "money lying around on the internet"), but you can claim the legitimate finder's reward.

It's actually a bit more complicated than that. Property (chattels) can be lost, mislaid, or abandoned. The distinction between lost and mislaid is that you mislay something when you intentionally put it somewhere but forget to retrieve it. That doesn't really apply to currency in the street -- it's unlikely that someone would intentionally leave money in the street.

So we are dealing with lost property. At common law the rule was that the finder of a chattel in a public place had a superior title to anyone except the true owner, and that if such a possessor knew or could reasonably ascertain the owner's identity he had a duty to notify the owner. A breach of that duty could result in either tort or criminal liability or both. In the case of fungible currency in the street, I'd say there's a good argument that it is not reasonable to ascertain the true owner.

However this common law rule has been modified in most jurisdictions by statute. Illinois has a typical such statute:

(765 ILCS 1020/27-8)

Sec. 27. If any person or persons find any lost goods, money, bank notes, or other choses in action, of any description whatever, such person or persons shall inform the owner thereof, if known, and shall make restitution of the same, without any compensation whatever, except such compensation as shall be voluntarily given on the part of the owner. If the owner is unknown and if such property found is of the value of $100 or upwards, the finder or finders shall, within 5 days after such finding file in the circuit court of the county, an affidavit of the description thereof, the time and place when and where the same was found, that no alteration has been made in the appearance thereof since the finding of the same, that the owner thereof is unknown to the affiant and that the affiant has not secreted, withheld or disposed of any part thereof. The court shall enter an order stating the value of the property found as near as the court can ascertain. A certified copy of such order and the affidavit of the finder shall, within 10 days after the order was entered, be transmitted to the county clerk to be recorded in his estray book, and filed in the office of the county clerk. ...

Sec. 28. In all cases where such lost goods, money, bank notes or other choses in action shall not exceed the sum of $100 in value and the owner thereof is unknown, the finder shall advertise the same at the court house, and if the owner does not claim such money, goods, bank notes or other choses in action within 6 months from the time of such advertisement, the ownership of such property shall vest in the finder and the court shall enter an order to that effect.

If the value thereof exceeds the sum of $100, the county clerk, within 20 days after receiving the certified copy of the court's order shall cause a notice thereof to be published for 3 weeks successively in some public newspaper printed in this county and if the owner of such goods, money, bank notes, or other choses in action does not claim the same and pay the finder's charges and expenses within one year after the advertisement thereof as aforesaid, the ownership of such property shall vest in the finder and the court shall enter an order to that effect.


My bank won't give you any money with just a password. You also have to claim to be me. That's the illegal bit.

My lock isn't authorised to give permission to anyone. Lock picks are forcing it, not requesting it.


> This is currently downmodded because people don't like the implication. And they shouldn't, because it quickly forces someone into either a) agreeing with the law or b) saying that SQL injections must be, ipso facto, legal.

You're drawing a false dichotomy based on premises that nobody in this thread has actually raised. It's entirely reasonable to a) disagree with the law, b) believe that SQL injections can be illegal based on some other rationale, and c) disagree with others on the appropriate penalty for SQL injections.


It is the server's problem. A misconfigured server is not a client's problem.

Set aside the SQL injection. Suppose there's a bug in Apache's path parsing such that using "\" instead of "/" causes it to interpret it as an escaped string, which somehow (bear with me) causes it to run exec("/bin/rm -r /"). Now some n00b comes along and uses "\" in the path, because he's used to paths on MSDOS; crashing the server. Whose problem is it? The client's, for sending a malformed request? How do you expect the client to know that the "\" will trigger a catastrophe? Or what if the client made a mistake, and while he thought it was "some query string" in his cut buffer, it turned out to be "; drop table *" (or something like that). Now whose problem is it?

If the server willy-nilly takes any input and doesn't check it, it is the server's fault.
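To make that concrete, here is a minimal sketch of the kind of input handling that is the server's job (Python's standard sqlite3 module, hypothetical table names; purely illustrative, not anyone's actual code):

    # The server decides whether client input is treated as data or as SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO books VALUES (1, 'Example Title')")

    user_input = "1; DROP TABLE books; --"

    # Sloppy pattern: string concatenation lets the client write the query.
    #   conn.executescript("SELECT title FROM books WHERE id = " + user_input)
    # Parameterized pattern: the input is bound as a value, never as SQL.
    row = conn.execute("SELECT title FROM books WHERE id = ?", (user_input,)).fetchone()
    print(row)   # None -- the injection string is just a non-matching value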


Whether it's the server's or the client's fault doesn't matter that much from a legal perspective. Intent plays a big role: if you knew that ending a URL with "\" causes `rm -rf /*` to be run, and intentionally run that on a server, you could likely be prosecuted and convicted if it were proven that you did it intentionally. If it were done accidentally by a client, they would (likely, and hopefully) not be convicted.

Weev intentionally exploited an information disclosure flaw. Should he have gone to jail for that? No, I don't think so at all. But the scenario you're presenting has no relation to what happened here.


No, Weev did not "exploit" anything. He _requested_ information from a server. If the server owner had so desired, they could have made the data private by adding a password. They chose not to. In the end, the decision to offer Weev the data was made _by the server_.

And if you're going to bring up the UserAgent spoofing, let me remind you that most browsers have done something like that for > 15 years.


Did Weev think that the email addresses didn't count as personal information, and were perfectly fine for anybody to scrape?

> If the server owner had so desired, they could have made the data private by adding a password.

But the server is still just sending data in response to a request, even with a password. The only reason a password is a line we draw is intent. It's hard to say you didn't realise that guessing at someone's password was wrong.


Then, it seems, a good solution to the problem is to have the server owner declare in advance what is intended use and what is not. Accessing information without providing the correct password is certainly unintended use, as is guessing passwords. And accessing with knowledge of the password is definitely the intended mode of operation.

A logical step is to make that machine readable. Oh, wait, suddenly this is getting into the server software and configuration that the server developer/administrator screwed up.

My question is: why don't we make that logical step and simplify things, instead of relying on some "should be common sense" and "you should've known you weren't supposed to do that" completely gray area?


> Then, it seems, a good solution to solve the problem is to have server owner to declare in advance what are intended use and what's not.

You mean like the Terms of Use for the AT&T website?

http://www.att.com/gen/general?pid=11561#14


Sort of, but in machine-readable form and under well-known location (like /robots.txt) so you could read and comply with them before you access the site.

As for those exact terms, I suspect (IANAL) those exact terms prohibit almost any access to the site, as, for example, they forbid any programmatic access to obtain the information, and I haven't heard of any non-software user-agent implementations.
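For what it's worth, there is already one machine-readable convention in that spirit, even though it only expresses crawling preferences rather than access rules or legal terms: robots.txt at a well-known location, which Python's standard library can read (URLs below are made up):

    # Check a site's stated crawling preferences before scraping it.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()
    print(rp.can_fetch("scrape.py", "https://example.com/account_lookup"))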


You can translate "programmatic" as "automated" as in "someone coded a program/tool to, in a programmatic way, access the website and retrieve the data"

As opposed to a human being in a non-programmatic way, opening his browser and accessing the website.

What's so hard about it?


> someone coded a program/tool to, in a programmatic way, access the website and retrieve the data

Doesn't, for example, Firefox perfectly fit this description? Yes, I do manually enter the base URL to access, but if that's the distinctive feature...

> As opposed to a human being in a non-programmatic way, opening his browser and accessing the website.

... then manually typing in ./scrape.py www.att.com is non-programmatic, too. :)

Or, maybe, I'm not getting the correct meaning of "automated" due to bad English comprehension and false analogies from other languages. But I always thought every request on the Internet is automated and done by some kind of hardware+software combo, so forbidding "programmatic" access is complete nonsense (access control and rate-limiting are the proper solutions).

(And, if that matters, the author of scrape.py does not need to conform to AT&T's TOS if s/he doesn't actually use the script themselves.)
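(And on the "rate-limiting is the proper solution" point above, here is a toy sketch of what a server could do per client before answering, rather than leaning on terms nobody reads; the thresholds and names are made up:)

    # Fixed-window rate limiter: a server-side guard, not a legal theory.
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 100
    hits = defaultdict(list)          # client id -> recent request timestamps

    def allow(client_id: str) -> bool:
        now = time.time()
        hits[client_id] = [t for t in hits[client_id] if now - t < WINDOW_SECONDS]
        if len(hits[client_id]) >= MAX_REQUESTS:
            return False              # caller should answer 429 Too Many Requests
        hits[client_id].append(now)
        return True

    print(allow("198.51.100.7"))      # True until the window fills up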


Wait: so before accessing a website I have to go read its terms of use?

What if I set up a website, put a clause saying "you agree to pay $50/page view" in there, and hid it away. Google crawlers will find my site in no time, and then I can start raking the dollars in, right?


No: such a clause wouldn't be enforceable in that context.


He did not exploit a software flaw or a platform flaw; rather, he exploited an information disclosure / access-separation EXPOSURE. Exploiting just means "taking advantage of something."

He did exploit the fact that AT&T did not make the endpoint in question accessible only if the logged-in user matched the actual user ID (or just made it entirely inaccessible).


dude what part of he gave the info away to a third party before reporting it do you not understand to put ur bullshit out there >


Apparently in the future they've all learned to write from browsing MySpace profiles.


I think SQL injection in many cases[0] demonstrates a clear difference in intent from a GET request for a resource the user legitimately expects to exist. There's no good analog in describing the behavior of a librarian because humans generally know not to follow arbitrary instructions from random people.

The closest analog I can think of would be giving the librarian drugs to modify his behavior before asking him to perform some act or provide some information he normally would not. Giving the librarian a brownie before requesting access to the staff lounge would probably not alter his behavior nor be treated as a crime. Giving him a brownie laced with scopolamine before requesting access to the staff lounge would be, even if scopolamine had no dangerous side-effects.

[0] One might reasonably expect an SQL injection string to return a legitimate resource on a documentation site or general-purpose search engine, for example.


No, it is because lying about your user-agent is not explicitly trying to make an HTTP server perform an action it is not supposed to perform and is therefore not in the same category as SQL injections. HTTP servers are not supposed to use user-agent as authentication.
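To make that concrete, the User-Agent is just an arbitrary string the client chooses to send with the request. A quick illustrative sketch using the requests library (the URL and UA string here are placeholders):

    import requests

    # The User-Agent header is client-supplied text, not a credential;
    # every browser, curl command, and script picks its own value.
    headers = {"User-Agent": "Mozilla/5.0 (iPad; CPU OS 3_2 like Mac OS X)"}
    response = requests.get("http://www.example.com/", headers=headers)
    print(response.status_code, response.headers.get("Content-Type"))

A server that treats that header as proof of anything is relying entirely on the client's honesty.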


It's the equivalent of going to a Chinese restaurant and asking for the "Chinese menu" rather than the "American menu" even if you can't read Chinese.


Heh, I'm reminded of an anecdote about an elderly English relative who was in a Chinese restaurant (in England) and was surprised that the English menu was Chinese food written in English, instead of the steak, potatoes and veg that used to be on the "English menu" in Chinese restaurants decades ago. :P


Except that the Chinese menu wasn't written in Chinese but in English. Moreover, it contained an access card to the staff lounge, where the customer records were lying open on the table.


Again completely wrong because trespassing on the staff lounge is nothing like receiving a response from an HTTP server. It is like asking for the Chinese menu and being given a list of customer records.

EDIT: and then noticing what happened you ask if they have a version in Korean.


I approach this from a different angle. If someone broke into my web app by injecting SQL, I'd be mad that I allowed them to do so. If someone broke into my apartment by smashing the window with a brick, I wouldn't be mad at myself for not using thicker glass.

Therefore, I see SQL injections as sloppy programming, but physical break-ins as sloppy ethics. IMHO YMMV IANAL KTHXBYE.


Where do you draw the line?

What if your site uses Wordpress or some CMS, and it has a SQL injection zero day that is then exploited to gain access? Even if you did due diligence, kept your kernel and all your software up to date, and generally secured the server and the application as best you could, you could still be entirely unaware of flaws lurking within.

It'd be more comparable to the lock on your front door being vulnerable to easy lockpicking with a paperclip and 4 seconds. You're still not "allowing them to break in" by being sloppy (it's not like you left the door unlocked), but the manufacturer of the lock was sloppy and as a result, someone is able to break in without any "brute force".


If you actually cared about protecting your data in this instance, you should probably be running an IDS or similar to notice and stop that form of attack in a blanket fashion (these certainly exist for SQLi attacks; the names escape me at the moment).

When using a proprietary, paid for web service or app you can blame the service provider.

When hosting OSS code on your own server, this is exactly what the NO WARRANTY section of the license is about: it is fully your responsibility to go over the code, or to accept that bugs and security vulnerabilities happen.

Edit:

To all those talking about the skill level of the individual - if you are using a proprietary service, you can easily point the finger at the service provider. In the case of OSS code, the license is there to remind you that you are taking responsibility for being competent enough to use the code yourself.

If your house was broken into because the lock was shoddily installed by a locksmith, you might have some legal recourse (though, IIRC, you may be required to validate & disclaim the install) but if you were to install the lock yourself, you have nobody to blame.


I work in information security, so don't get me wrong, I agree with you for the most part. People writing their own applications and/or setting up their own server/service are often extremely naive in how they go about securing them.

However, in terms of legal (or ethical) culpability it shouldn't really matter. An intruder is an intruder. Sometimes it's due to utter ignorance and foolishness on the part of the owner, sometimes it's due to a latent flaw in something they're using, sometimes it's a compromise of their hosting company, sometimes they get hit by a complete zero-day.

You should have legal recourse no matter the case, unless you are truly grossly negligent (posting your admin password on your index page, for example).


So does that mean if someone picked the lock on your front door and just had a look around your apartment, without doing any damage, you'd be OK with that and just be mad at yourself for not installing a better lock?


I guess if I never knew about it, I wouldn't be upset.


And I imagine the line you draw for where it's "sloppy programming" vs a legitimate break-in coincides exactly with your knowledge and skill level...


If someone broke into my apartment by smashing the window with a brick, I wouldn't be mad at myself for not using thicker glass.

OK, but did they do anything wrong by SQL-injecting / breaking a window? I mean, if someone smashes your window, can you call the cops and/or sue them for damages? In order to call the cops on the window smasher, you have to acknowledge they did something wrong.


And if you were missing window panes altogether, is it open season on robbing your house?


>This is currently downmodded because people don't like the implication. And they shouldn't, because it quickly forces someone into either a) agreeing with the law or b) saying that SQL injections must be, ipso facto, legal.

Not if you make a distinction between using a service and breaking a service. Analogize with entering vs. breaking and entering. In many cases it is valid to punish someone for bypassing security. But if a system has no security by design, then there is nothing to bypass. SQL injection, on the other hand, is always bypassing the design of the code, and loses any presumption of authorization.

P.S. Yes, there will be edge cases. There are always edge cases. But this is not an edge case. The lack of security was definitely a design decision, not a software bug.


Tackling that out of order, because it seems clearer.

> Analogize with entering vs. breaking and entering.

From Free Dictionary [1]:

    breaking and entering v., n. entering a residence or other enclosed property through the slightest amount of force (**even pushing open a door**), without authorization.
Emphasis mine. If pushing a door counts, so does changing the user agent header or auto-incrementing IDs. The key here is "without authorization". As that analogizes with the Weev situation and our librarian, much of the debate I see here on HN seems to hinge on whether that means authorization in some technical sense or in the sense meant by that breaking and entering definition. I submit that it means the latter, in part because that reflects how we see authorization in other contexts (like doors we're not supposed to enter) and in part because I don't think the first view holds up to scrutiny on its own terms.

Let me explain what I mean by that. Consider two situations:

1. A server responds to requests for email addresses without checking whether the user is authorized to receive that information.

2. A server responds to all requests without checking whether they contain injected SQL.

In pure technology terms, these aren't really any different. In either case, the server is simply failing to check that the request has certain properties (came from the right user/does not contain context escapes). But of course case 2 is particularly nefarious, because passing SQL directly to the database is clearly not the intent of that interface, and that it's not the intent is obvious to the SQL injector. So the intent of the software is what matters. But then it's what matters in case 1 too. The lack of technical enforcement of this intent is not the issue.
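To make the symmetry concrete, here is a rough sketch of what "failing to check that the request has certain properties" looks like in each case (hypothetical handlers, nobody's real code):

    import sqlite3

    emails = {334: "alice@example.com", 335: "bob@example.com"}

    # Case 1: hands back any user's email for any id -- the missing check
    # is "does the requester own this id?"
    def get_email(requested_id, logged_in_id):
        return emails[requested_id]  # never compares requested_id to logged_in_id

    # Case 2: splices request text straight into SQL -- the missing check
    # is "is this a plain value rather than SQL of its own?"
    def find_books(isbn):
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE books (isbn TEXT, title TEXT)")
        query = "SELECT title FROM books WHERE isbn = '%s'" % isbn
        return db.execute(query).fetchall()

In both functions the server does exactly what it was told; the only thing separating use from abuse is what the author intended, which is the point above.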

Which leads me to some clarity about this:

> Not if you make a distinction between using a service and breaking a service.

I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service. The service as it would be described functionally, not technically. The service was not meant to be used by Weev and his scraper any more than our librarian's database was meant to be used by SQL injection, or any more than your unencrypted traffic was meant to be intercepted by my packet sniffer (you did nothing to not authorize me to see it!). To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.

In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.

As the law is actually applied here, I am somewhat sympathetic to Weev, on the grounds that I don't think it was a serious crime (imagine if it were bank account information!), but that's a quantitative issue, not a qualitative one. I can also see that there are many cases where it isn't obvious whether something is meant for you to access or not (like a door that seems to lead into a public place but which turns out to be a private space), and I can imagine there being issues there. But this isn't one of them.

[1] http://legal-dictionary.thefreedictionary.com/breaking+and+e...


>Emphasis mine. If pushing a door counts, so does changing the user agent header or auto-incrementing IDs. The key here is "without authorization".

I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?

But I digress. Normally making standard web requests is analogized to looking, without touching. You have explicit authorization to go through the front door, and anything 'bad' you did inside was restricted to what you looked at.

>I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service.

But then you get into the realm of having TOS be legally binding, no matter how inane they are. This seems a far worse alternative.

>To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.

That's why I only said they lose the presumption of authorization. If all you know is someone SQL injected, you have to resort to other means to figure out if it was authorized. For example, if they already have equivalent access through non-code-bug means, and they simply prefer SQL injection, then there is no problem. But if they were doing it to avoid audit logs, there might be a huge problem.

>In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.

When it comes purely to accessing it, when it's non-HIPAA/etc. data, I don't think there needs to be very much justification.

And I don't see 'has no password' as a technical artifact. Details of web servers don't need to be involved here. The design is wrong on a fundamental, user-understandable level.


> I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?

For the same reason that it's OK for me to push in my door, but not to push in yours. Or why it's OK for me to type in my password, but not to type in your password.

> You have explicit authorization to go through the front door, and anything 'bad' you did inside was restricted to what you looked at.

There certainly isn't explicit permission; I think you mean implicit. Assuming you do, we just disagree here. You don't have a "presumption of authorization" when accessing something a reasonable person would know isn't meant for them to see. I think most of the rest of our disagreement flows from this.

I also don't see where you've made the case that get-the-employee's-SSN SQL injection attack is relevantly different from the send-an-id case.


>For the same reason that it's OK for me to push in my door, but not to push in yours. Or why it's OK for me to type in my password, but not to type in your password.

Okay, I'm confused by your analogy. I was thinking of the situation as having a single door at the entrance to the establishment. I don't think it makes any sense at all to treat each page as a separate household on private property.

And I meant explicit. There is explicit permission to contact the web server.

>I also don't see where you've made the case that get-the-employee's-SSN SQL injection attack is relevantly different from the send-an-id case.

If I send a non-secret ID into the system and get info back, the system is working as designed. If I use SQL injection, the system is not working as designed. I think that's important. In the former case, I may be doing something unexpected, but I am not exceeding the authority given to me.


Er, right, the analogy is a bit confusing for me too; let me see if I can clear up what I meant here.

> I don't think it makes any sense at all to treat each page as a separate household on private property.

It's certainly true that some pages on a site might be private to someone else and others not. The page/site distinction is orthogonal to the authorized/unauthorized distinction. It's not clear to me why, in your story, it was OK to ask for ID 334, but even if it were, that doesn't automatically make it OK to grab 335.

The library is itself an analogy and it's a bit broken here, because of course at a library the ISBNs and the books they correspond to aren't meant to be private, unlike the ATT/Weev case. So let's imagine that your iPad automatically looked up your email address using its ICC ID because that was why they built it (I don't actually know if that's the case or not). It doesn't follow that you have permission to look up everyone else's. That's where I was going with the doors analogy. I hope that clears that up.

> There is explicit permission to contact the web server.

I deleted my counter to this because I think it's an irrelevant semantics issue. Let's let that one go.

> In the former case, I may be doing something unexpected, but I am not exceeding the authority given to me.

We keep stumbling on this concept of authority, and this thing about design. The design part I don't get. Didn't the library software's author design the system so that it took query parameters and directly formed SQL strings out of them? So isn't it working exactly as designed? Of course not, because the designer never intended it to be used that way [1]. So their intention is exactly what matters, and the same applies to trying incremental ICC IDs--clearly not the intent of the service. That's why this design/expectation dichotomy isn't there, or at least isn't relevant (it's also why I was distinguishing the "purpose of the software" from the "design of the code" earlier).

As for authority, you continue to see it as something the webserver can provide by virtue of its technical characteristics, and I--and the law--simply don't. Authorization in the technical sense is a technological codification of an authorization policy, which may be implicit. If that policy is obvious (which I think it is here), then it's that policy that matters, not its technical enforcement. I fear we're retreading ground here, though, and this may just boil down to having different axioms.

[1] You could say that there are layers to design and that the flaw in the designs here ("don't do any access control" vs "don't sanitize strings") are at grossly different granularities. That's true, but I'm not sure how you plan to formalize the appropriate level of generality that makes such a design flaw equivalent to permission to entry as opposed to simply not working as designed. I don't think you can without resorting to the designer's intent.


I'm talking about intentional design here. It is on purpose that the system has no authentication. It is on purpose that the system returns records solely in response to an ID request from any client. It is not on purpose that the system can be SQL injected.

Intention of use is entirely different. The high level intention of use / purpose is often opaque and contradictory. Using it as a threshold would be foolish. "it securely stores passwords but also mails you a reminder if you forget" "it sends marketing mails that don't get marked as spam" "it shows people images that they can't save" "people will stay signed up for 15 months and we will profit on the loss leader"


If you ask the librarian to hold a book-burning party, and they do, should you get off scot-free?


Not if I tricked the librarian into setting fire to the library.


What really is the line between tricked and asked? Deceit?

Let's go with deceit.

So is asking for book ISBN '1; DROP TABLE books; --' deceitful? Perhaps, that's not an ISBN after all. Is asking for book ISBN [some valid ISBN that you pulled out of your ass, but happens to exist] deceitful? I don't think so. If you are just asking for randomly chosen ISBNs and getting responses, I don't think there is any trickery involved.

In one case you are counting on the system to correctly understand your (validly constructed) request, in the other case you are counting on the system to misinterpret your request in a dangerous fashion.
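To put those two requests side by side in code (a toy sqlite example; I've swapped the DROP TABLE payload for a tamer one, since sqlite3's execute() refuses to run more than one statement at a time):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE books (isbn TEXT, title TEXT)")
    db.execute("INSERT INTO books VALUES ('9780131103627', 'K&R')")

    valid_isbn = "9780131103627"    # validly constructed, whether guessed or not
    injected   = "x' OR '1'='1"     # counts on being misinterpreted as SQL

    # Asking for an ISBN you pulled out of thin air: the system understands
    # the request exactly as written and answers it.
    print(db.execute("SELECT title FROM books WHERE isbn = ?", (valid_isbn,)).fetchall())

    # Naive interpolation: the "ISBN" becomes part of the query itself and
    # returns every row, which is not what the interface was meant to do.
    print(db.execute("SELECT title FROM books WHERE isbn = '%s'" % injected).fetchall())

    # The same string treated as a plain value matches nothing at all.
    print(db.execute("SELECT title FROM books WHERE isbn = ?", (injected,)).fetchall())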


I'd like to fix up the analogy a bit. The problem isn't that you're asking for random ISBN numbers; it's that you don't have a library card. The library's electronic catalog won't let you log in without the card, so you just start asking the librarian for random ISBNs and accepting the books he gives you. He doesn't check your card because no one trained him to do that, but you know that checking out books is meant for card-carrying library members. I sense deceit there.


Sane people would point out that whoever trained that librarian was an idiot.


The line is in the intent.

If I hand the librarian a piece of paper with that SQL-injecting ISBN my culpability depends on whether I was aware of the likely gravity of my actions or whether e.g. I was told to get that piece of paper by a trusted source (e.g. my supervisor) and didn't even read it.


What really is the line between tricked and asked? Deceit?

What you (the asker/trickster) think the askee thought of it.


Then that is a poor librarian. A good librarian should have just said:

400 BAD REQUEST

Whoever staffed that librarian should interview or train their staff better.


You shouldn't have to train your staff not to burn down the building they are working in...


Obviously. If the administrator had locked up the matches, nothing could be burned. The admin is responsible for leaving matches lying around the library and for allowing the librarian to do as they please.


You totally nailed it. 100% right. Distilled it down to the essentials of how the internet works and the nature of a protocol as a contract. Bravo.


His whole analogy only works because the librarian is a human, and if a human with some apparent authority lets you do something, you can reasonably infer that you have permission to do it. But you can't anthropomorphize a server like that. It's not a gatekeeper, capable of granting permission, just a dumb lock which may be flawed. Only humans can consent.

To repurpose his analogy, if you sneak into the staff room and the librarian doesn't notice and doesn't stop you, you can't use that to say it must have been okay.


Both are gatekeepers. One has been configured with an employee handbook. The other is configured using .htaccess or similar. When making requests of either, how do you know whether you have permission to make the request you're about to make?

If a server cannot consent, does issuing "GET /" to a web server mean you snuck into the homepage and are not authorized to view what the web server was configured to provide to you?


People boil down to dumb locks--presented with the correct context and input, if they are rational they should by definition grant access.

This library analogy was the best way I've seen the issue put, and one that is actually accurate.


I totally didn't understand where you were going with that.


I agree with the library analogy.


The problem is that the people who wrote the code are human. The people who deployed the code are human. The people who paid for the internet connection that lets you connect to their service are human.

There are any number of explicit steps that are taken to put code on an HTTP service on the internet.


If you're going to repurpose the analogy, do it right. The librarian is supposed to let you into room X. Someone somewhere expected you to look at one record before leaving, but you check out some other records. But at no point did you wholesale sneak past the librarian.


What happens in the scenario where the librarian is instead a low IQ worker helping out?

    You: Can I have this book?
    Worker: No, sorry, not allowed.
    You: It's ok, the boss said so.
    Worker: I don't think so.
    You: We're friends, right? You don't say no to your friends, do you?
    Worker: Well, ok, I guess you can have it.
Hey, the worker said it was ok, I guess you were authorized after all!


Sounds like you were authorized; if that happened and you read the book what crime would you be charged with? Seriously. Taking advantage of a worker with intent to learn secrets? Totally immoral, not illegal.


Maybe we should just stop throwing around analogies altogether when it comes to politics. Analogies are useful in teaching since it allows people to relate concepts they already understand. However, it's just an abstraction, and is inevitably imperfect.

In normative arguments, analogies are used to bend reality to make your position seem reasonable regardless of whether or not it actually is. It would be better to judge weev's case on its own merits rather than try to justify a position using increasingly complex analogies.


The problem is that there is no consensus on the correct way to regulate these kinds of interactions yet. When we try to work out what a reasonable way to regulate something new is, we usually do so by analogy to other things that we already know how to regulate. That way we can make a whole bunch of analogies with various current situations and try to work out which one is the best analogy in the relevant aspects so we can come up with a good starting point for regulation.

This doesn't always work (particularly not for truly disruptive technical or social changes), but it's a pretty good way of doing it.


Really, you are going to give people crap for bad analogies, and then try to compare the actions of a conscious human being to an automated computer system?


An automated computer system can only do what it's told. It perfectly carries out the instructions it is given. A human being can really screw everything up using their judgement and coming to the wrong conclusion.

The problem here is that AT&T employed a human being to design an automated system who didn't know enough about the automated system to ensure that it was correct. And then this automated system did exactly what it was told to do and made AT&T look bad.

But the fact that the code running on the webserver didn't reflect the intent of some AT&T exec or their company policy isn't the fault of those accessing the webserver. It's AT&T's fault for doing a really terrible job of QA/QC on their own systems prior to a really big launch.


The bad analogy can be extended to mechanical devices, such as a lock. The lock is also an automated system, and it is a system that will also perfectly do what it is told with even higher fidelity than a computer due to its relative simplicity.

With my lockpick I tell pin 1 to move up so many millimeters, I tell pin 2 to move up so many millimeters, and so on, and suddenly the lock opens. Suddenly I'm in, and it should be legal because the lock wasn't designed as well as it could have been and because all I did was follow a legal protocol with the lock.

Bad analogies like this are so common here when discussing technology issues, and it is a never ending irritating game of come up with the least-worst (but still bad) analogy. People here also often confuse their understanding of technology with legal/ethical sagacity, which is laughable.


Even though conscious human beings aren't able to follow a list of access rules quite as well as a computer system could, I won't hold that against them. There's nothing in my example which requires the librarian to make a judgment call. Barring mistakes caused by fatigue, laziness, corruption, and so on, the librarian should be able to perform nearly at the same level as the computer.


I upvoted immediately after your first two sentences, and wished I could do it again at the end of your post. Well done.

I've been trying to think of a proper analogy to changing your user agent. Wearing a fake mustache and requesting information from someone publicly giving information to only mustachioed persons?

    You: Can I borrow the book identified by ISBN 1234?
    Librarian: We restrict access to that book to only people who will read it with our page turning machine.
    You: Can I borrow the book identified by ISBN 1234? I am using your page turning machine now.
    Librarian: Sure, here you go.


Oh man, an analogy which actually makes sense! This has got to be a first. Please spread it far and wide.


I challenge you to actually try to do this and see whether or not the librarian calls security.


That is actually a strength, not a weakness, of the librarian analogy. If there had been a WAF in place that alarmed when it saw Weev incrementing his way through the customers, we wouldn't be having this conversation.


Biot, Could you post that to the Washington Post story comments? It might be helpful to others reading the story there.


A bit nitpicky about one portion of an otherwise good analogy: if the library is funded by a city, the list of employees and their salaries is public knowledge. I used to work for a small-town library in high school. I could see all of my high school teachers'/staff's information, along with co-workers' and any other town staff salaries, in the town record (publicly displayed in the library itself). Of course, what you may do with the information is another thing.


OH MY GOD ARE YOU SERIOUS.



I thought it was funny, myself. It did make one teacher uncomfortable: when the class was pestering them about their salary, I mentioned to the class that the information was publicly available (not that any of them would have gone to check). It does have its uses. Their salary is paid for by taxes, and people want to know what their taxes are paying for (especially since education in the town was hotly debated, for various reasons).


I have heard that America's bizarre obsession with keeping your salary secret is not shared by the rest of the world. I haven't checked myself, though.

I'm deeply suspicious of the "tradition" either way, having worked at a place where it was actually a policy violation to tell a coworker what your salary was. Because then they would know if they were getting stiffed, and they might ask for a raise.


I have friends who make only a fraction of what I do, and other friends who likely make as much as twice what I do.

The result of having a wide range of salaries in your social circles is that salary becomes taboo. There is little good to come of talking about it with non-coworker friends, somebody is just going to end up feeling bad, or jealous, or self-conscious, or asked for a personal loan... Even more closely guarded than salary is personal worth, for many of the same reasons, and more.

If this taboo doesn't exist outside the US (and I rather doubt claims that it really doesn't), I would nevertheless refuse to participate in non-anonymized discussions about salary in those settings. What good could come of it?

This taboo carries over to the workplace and into interactions between coworkers who arguably could benefit from discussing salary. If it were born just from employee handbook rules, then nobody would respect it. There are plenty of other rules in those things that nobody reads.


I like the librarian comparison. Perhaps a receptionist would also work for this; they can give out some information about people working there (office phone number, etc.) but not salaries.

> When is the onus on the web server owner to configure their security properly? When is a "200 OK" response actually not okay? This is the "mind reader" aspect the article mentions

This is why the laws usually care about the intent. It's a combination of the action and the reason that's important (which is why there's a difference between murder and manslaughter).

There are no hard and fast rules, and there simply cannot be.

Weev clearly knew that AT&T shouldn't be handing over the information. He wasn't there saying "Wait, this isn't a normal service?".

> The librarian is smart enough to not hand out things like access to the staff lounge, a list of employees and their salaries, or even things like an arbitrary library member's borrowing history.

If the librarian didn't know to restrict access to salary information (let's say the managers thought that knowing the SSN was enough of an ID to grant access), and you repeat the example, it becomes a bit clearer how intent is important.

    You: Can I have the salary of IanCal?
    Recep: Sure, it's £X

    You: Hmm, hey Dave, I think there's a security issue here, mind if I know your salary?
    Dave: Sure, it's £Y
    You: Can I have the salary of Dave?
    Recep: Sure, it's £Y

    You: Best go tell the managers.
That would be looked on very differently than:

    You: Can I have the salary of IanCal?
    Recep: Sure, it's £X

    You: Hmm, hey Dave, I think there's a security issue here, can you generate SSN numbers?
    Dave: Yeah I think so
    You: Can I have the salary of SSN#1
    Recep: Sure, it's £Y
    ...
    You: Can I have the salary of SSN#147934
    Recep: Sure, it's £Y
    You: Hahahahaha, let's give all the info to a news site, bet you'd make money shorting the stock!
The core of it is the same: you've requested information about someone else that you shouldn't really have. Even if you remove Dave's consent from the first example, it's still different from the second. And that is what's important.


I like your analogy. Exactly what crime would you be charged with after calling with SSNs? He committed thoughtcrime, sure, but let's wait until he attempts to follow through with it before charging him.


The example is good, but not quite right, as they weren't asking a neutral resource. The librarian (let's switch to a clerk) was told that if someone looking like X came in and made a request, they were allowed to hand over a Y; so each time `You` came in, you had to wear a disguise that convinced the clerk you were actually `X`.

AT&T should be rightly mocked for their poor security, but weev wasn't just stumbling through data; he'd made a real attempt to impersonate the setup the server wanted. He'd put on a pair of glasses and a funny moustache and the server was all right with it, which is definitely ridiculous.


Here is my analogy:

1. You just finished your workout and went to a locker room at your gym (he went to a public website)

2. You opened up your own locker and took your stuff from it (checked his account)

3. You found out that very few people are using locks in the gym locker room (figured out the account ID in the URL)

4. You know that the belongings in other people's lockers are not yours, and that they are unlocked only because people are lazy or don't want to spend money on a lock (he knew that those accounts did not belong to him, and were accidentally left unlocked by AT&T)

5. You decided that if those lockers are not locked, the clothes inside them are public property and you can easily borrow them (tried browsing to other URLs to get private account info)

6. You go ahead and try opening every single locker in the room and put all the belongings you find in the open lockers on eBay to sell for a profit, BEFORE letting the owners or the gym know that those lockers are not locked (sold the private data to somebody)

I think that's not legal behavior: as long as you understand that the property you are taking is not yours, you are committing a crime by taking it (stealing)


Your analogy starts to break down somewhere around point 3 or 4. It's not that few people use a lock on their locker. A closer analogy would be that the gym installed an electronic lock on each locker, but didn't actually make sure they worked.

It also wildly disconnects around point 6. You make it sound like he stole everything that the users had in the accounts. In reality, he just copied their info. He didn't give himself anything from their accounts, like transferring credits to give himself free cable or something like that. Instead of stealing everything and selling it on eBay, it was more like him going through people's lockers, taking a picture of what they have inside, and then selling the pictures.


I reject that analogy pretty hard. Account ID is basically locker number. It's not a password/lock.


It's more like you meticulously wrote down the contents of the lockers without taking anything at all and then sold the information about what types of clothes people at your gym wear to a marketing firm.


How about replacing step three with "You notice that all the lockers have glass tops" and following that with a story about taking photographs?


Why would we want an analogy that more accurately reflects the reality of the situation? We're trying to justify this, not let him out on appeal.


I know this is not a position people over here like to support, but..

  But this technique, known as "scraping," is surprisingly common among
  technologically sophisticated users and has a number of legitimate
  applications.

  To get a list of sex offenders, Poulsen wrote an automated program to search the
  Department of Justice Web site for each zip code
  in the United States and then save the name and
  address of each registered sex offender in that
  zip code to a file.
Really? Really? That's a 'legitimate application'? Never mind that the mere existence of that registry is a slap in the face to people with my understanding of Freedom and Liberty (in caps); scraping _that list_ is why we want to protect scraping? I haven't felt this disconnected from the content on this site in a long time.

  Yet most people would agree that Poulsen's actions
  were a legitimate journalistic project. So we might
  want to be careful about subjecting this kind of
  technique to criminal penalties.
Most people?? In what world?

I'm sorry for the detour, but the whole article is trying to defend weev while linking to atrocious actions of that guy in the past and coming up with the most despicable (Thanks Hollywood, learned a new term) reason for scraping _ever_. Disgusting.


So does anyone know why exactly they weren't able to get Weev on criminal harassment? I wouldn't expect the gummint to fail to bring the charge unless they thought there was no hope of victory, but it seems like such a gimme.


I'm not sure there is any relevant law. At the federal level it appears to require "obscenity" which is very hard to prove for anything short of child pornography.


IIRC, he photoshopped pictures of Kathy Sierra's kids into porn and posted them online, and emailed her graphic threats to rape her with a chainsaw. It doesn't seem like you'd have a hard time convincing a jury of "obscenity".


It is annoying when people throw analogies around to describe this to a highly technical audience. When is Hacker News going to discuss the fact that the User-Agent HTTP header is not a security feature? When is the discussion that a sequential ID is equivalent to no security at all?

No analogy in the world is going to change the fact that User-Agent checking and sequential IDs are not security features. And if courts are allowed to turn them into security features, it is bad news for everyone's security.
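For what it's worth, the things that actually are security features here are pretty mundane: check that the authenticated user owns the record, and don't hand out identifiers that can be counted up from 1. A hedged sketch (the names are invented):

    import secrets

    # A sequential id carries no secret: if yours is 334, your neighbour's
    # is 333 or 335, and enumerating the rest is just counting.
    sequential_id = 335

    # An unguessable per-record token at least defeats enumeration, though
    # it is still no substitute for checking who is asking.
    record_token = secrets.token_urlsafe(32)

    # The real control: refuse the request unless the authenticated user
    # owns the record being asked for.
    def may_read(session_user_id, record_owner_id):
        return session_user_id == record_owner_id

    print(record_token, may_read(334, 335))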


/u/biot's analogy is apt. But I don't understand why it isn't a defence that the HTTP protocol starts with a REQUEST. The server is the one who actually serves up the information.

If I _request_ something from you ("hey, can I borrow your car?"), and you give it to me, then what's the problem here?


Kudos to the WP for ongoing coverage of this case. There are important issues being litigated here that could affect everyone, and I'd argue they are worth discussing without regard to this particular defendant and the sheer stupidity of his actions.

However, I find WP's use of Poulsen's activities as an example of "legitimate" automated HTML retrieval ("scraping") to be an odd one. It seems an awkward comparison for conveying what should be a simple point, in my opinion.

How about something much more common? Googlebot. Imagine if we forbade Google from using automation and from scraping content and placing it in the Google cache. No more web search.

Alas, because of the ad hoc nature of the Web (i.e., there is no unifying organizational scheme for locating content across all websites as there would be in, say, locating content in a library of books), you cannot access Web content until you first discover it. In order to discover content, you generally have to search. In order to create an index and cache of content to search, someone has to scan/crawl/scrape websites. The latter three are activities that are routinely automated. As such, they will violate many website Terms of Service and may get you banned simply for being "automated".

In fact, to use Google as an example (not picking on them per se, it's just that they are a well-known example), crawling Google will "get you banned" from using Google, temporarily.

The irony of this has always intrigued me: Google may crawl your servers, but under Google's policies, you may not crawl Google's servers.

If I create an index of your website, at your expense (by aggressively running automated queries against your http server, as Google does, for example), am I obligated to share it with you?

In any event, attempts to criminalize automation should raise red flags with anyone who is even slightly tech savvy.
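For a sense of how mundane the automation itself is, this is roughly what every well-behaved crawler does: read robots.txt, skip what it disallows, and pause between requests. A sketch only; the site, paths and delay here are placeholders:

    import time
    import urllib.error
    import urllib.request
    import urllib.robotparser

    base = "http://www.example.com"
    agent = "toy-crawler/0.1"

    # Read the site's machine-readable crawling preferences first.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(base + "/robots.txt")
    rp.read()

    for path in ["/", "/about", "/contact"]:
        if not rp.can_fetch(agent, base + path):
            continue  # the site asked automated clients to skip this path
        try:
            req = urllib.request.Request(base + path, headers={"User-Agent": agent})
            with urllib.request.urlopen(req) as resp:
                print(path, resp.status, len(resp.read()))
        except urllib.error.HTTPError as err:
            print(path, err.code)
        time.sleep(rp.crawl_delay(agent) or 1)  # be polite between requests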


>> The irony of this has always intrigued me: Google may crawl your servers, but under Google's policies, you may not crawl Google's servers.

It looks like some of their site can be crawled and some not, that's how robots.txt has worked for a long time:

http://www.google.com/robots.txt


And search results (the data they have obtained via crawling others' sites) are not among the data that can be crawled.

What are you suggesting?


I love the use of analogy to describe the situation to those who may not understand exactly what Weev did. But can we decide law simply on analogy? Which analogy is a more accurate tale of what Weev did? What I like about this article is it explains what Weev did and how incredibly common his techniques were, without too much analogy. Analogies may be much more effective, but a direct explanation feels a lot more genuine.


How about mine? https://news.ycombinator.com/item?id=6435769

You are welcome to critique, not harass⸮


Government always prefers "shoot the messenger" to actual security. There should be literally nothing illegal about what he did in that case. He didn't "hack" anything except HIS computer, to make it pretend to be an iPad. And that is the point of identifying it as a security concern: if he figured it out, surely the Russians and Chinese figured it out too, somewhere between when he did it and when they prosecuted him... Prosecuting him doesn't make the hole go away!!!

What he did is like sticking a GM car key into a Toyota. Generally that doesn't work, and it shouldn't work... but what if it does anyway? Shouldn't the company that makes the cars fix that?


The information was public. He did nothing wrong.

It is similar to accidentally posting all those email addresses on a bulletin board on the street and hoping no one reads them.


He is a hacker? He must be doing computer sorcery; off with his head.



