
  127.0.0.1 facebook.com
  127.0.0.1 www.facebook.com
  127.0.0.1 static.ak.fbcdn.net
  127.0.0.1 www.static.ak.fbcdn.net
  127.0.0.1 login.facebook.com
  127.0.0.1 www.login.facebook.com
  127.0.0.1 fbcdn.net
  127.0.0.1 www.fbcdn.net
  127.0.0.1 fbcdn.com
  127.0.0.1 www.fbcdn.com
  127.0.0.1 static.ak.connect.facebook.com
  127.0.0.1 www.static.ak.connect.facebook.com


0.0.0.0 is a better target than localhost, because it means the connection fails immediately rather than possibly timing out, and it won't accidentally succeed if you're running a local server.

The IPv6 variant (two colons, ::) is shorter still.
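
For example, the same blocklist using 0.0.0.0 (a sketch; on most systems the bare :: works the same way as a hosts-file target):

  0.0.0.0 facebook.com
  0.0.0.0 www.facebook.com
  0.0.0.0 login.facebook.com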


You could use Chrome's Facebook Disconnect extension: https://chrome.google.com/webstore/detail/ejpepffjfmamnambag...


That only works in one browser, on one computer. I have an iPad, and there are two iPhones in this household as well. A few lines in the firewall configuration are a much easier and more effective solution.

I am working up to doing the same for Google. I might transparently proxy google.com/search to Scroogle just so browser search bars continue to work.
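
For the blocking half, a few router rules really are enough. A minimal sketch, assuming a Linux box does the routing; the address range is a documentation placeholder, not Facebook's actual allocation:

  # reject forwarded traffic headed for the blocked range
  iptables -A FORWARD -d 198.51.100.0/24 -j REJECT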


This link was posted here yesterday. Matt Cutts' comment: http://news.ycombinator.com/item?id=2795465


Suffice it to say that many Google employees have looked into this, including me. After digging into the situation, I agree with the action that Google took in this case.

I also went and got Google's official comment: "For the privacy of those involved, we don't discuss motivations behind account suspensions but we are confident in our actions in this case."

I'm sorry that I can't go into more detail.


Your responses remind me of disturbing passages from Kafka's The Trial. There is no need to discuss specifics in public, but why not let the Accused know the violation they've committed? Why is there no appeals process where the Accused can respond to specific allegations? And why not let the Accused download their data?

I'm sure your response seems reasonable to you, since you are familiar with the details, but try to understand what it looks like from the outside. Personally, once the Apple cloud is up, I'm moving all my important data away from Google.


I know that from the outside it probably looks like Google has been unfair or uncaring in this case. I'm sorry that I can't go into the details.



Ah man, child pornography, once again the edge case that screws up reasonable planning. Can't explain, for fear that if the user wasn't really guilty, you're taking a pretty serious step in accusing them of peddling child pornography. And also can't give them their data, for fear that if they were guilty, you're redistributing child pornography.

The only thing that might be nice, besides some personalized review of hard edge cases, is to allow users to export "untainted" data, if it can somehow be isolated. E.g. if one gallery was flagged, still let him export his uncontroversial email archives.


I think it would ameliorate a lot of the fear, uncertainty and doubt around this situation if Google could answer the following question: Does the user in question currently have the ability to extract the data he had in Google services, and if not, will Google provide him with his data?

If there's no way to answer that question without compromising his privacy, can Google at least share its policy on data liberation in situations such as this, and publicize it broadly via the Data Liberation Front?


Right, and this is what has caused millions of people to panic today and realize that Google really can't be trusted with our data. If someone hacks you and does something with your account, or even if you just do something stupid, years of data are just gone, like that, with no explanation. Now to me Google is just as unreliable as keeping all my data on a single hard drive with no backup.


Here's the resolution: http://www.twitlonger.com/show/bvqdos That explains why providing the data back to the user was a delicate issue.


The response this time is strikingly different from a case[1] just a few days ago. Is this standard Google policy, or did she just not talk to PR/the lawyers?

[1] http://www.reddit.com/r/google/comments/it8ah/when_google_de...


Quit using the privacy BS excuse. THE USER IN QUESTION HAS NO CLUE WHY HIS ACCOUNT WAS CLOSED. TELL HIM.


What the fuck Matt, at least tell HIM what he supposedly did! WHAT THE FUUUUCK???


The reason for the suspension is probably the quotes in her name. A Google engineer posted on G+ that you should not have any special characters in your name, to avoid getting suspended.


Then why does G+ allow you to have quotes in your name?

This is absurd.


>Then why does G+ allow you to have quotes in your name?

Because it's still in beta.

>This is absurd.

No it isn't.


Yeah, it kind of is. Google has a lot of resources. I still don't know if their strategy of long-running public betas is marketing or a true inability to start with something much better than a beta version.

If it's marketing, this quotes-in-your-name stuff is not funny.

If it's a true inability to iterate and scale without a public beta, well, they are not a start-up; it is kind of absurd.


They went from a few thousand to 10 million users in about 30 days.

Of course there are going to be little hiccups like this. The problem is already fixed. How is that absurd?


"The reason for suspension is probably the quotes in her name."

I think the profile previously said "Adafruit Industries." You can still see that name in Bing.


Unless the small number is less than 1!


Huge number * 0.999 is probably still a huge number. But they said huge * small = large, which suggests small is < 1 but not tiny; in other words, something like 1% to 10%. (E.g. 100 million * 5% is still 5 million.)


That article is not about Afghanistan but Pakistan.


I thought we were discussing Afghanistan and Pakistan like it says above.


There are very few computers, and fewer with internet connections, while television and newspapers are everywhere. For the past month Assange has been in the news almost daily, but FB rarely gets any press; what you see is occasional coverage. Facebook is part of the life of a particular class of society, but there is another part whose main concern is fighting poverty and related problems. They may never even have heard the name Facebook, but they read newspapers. Even in the US, how many times was Mark mentioned vs. Assange in print media, say the NYT? In the end, Facebook can only affect those who use it, or at most those with internet connections. Wikileaks affects almost everyone.


Honestly, wikileaks barely affects anyone. Facebook has changed the day-to-day lives of hundreds of millions of people around the world. I can't imagine my life being different at all in the absence of wikileaks, and unless wikileaks leaks something more interesting in the future, I doubt it ever will.

The actual nature of wikileaks's leaks has turned out to be so prosaic that it hasn't really changed anything. Apparently diplomats bitch about things. Whoop de fricking doo.

edit: Wait, I forgot about the Kenya thing, of which Assange claims "1,300 people were eventually killed, and 350,000 were displaced. That was a result of our leak."


Another reason for developing a distributed DNS system: http://p2pdns.baywords.com/


I'd love to agree, but I don't see a way out of what seems to be a fundamental problem with any such system:

How does the system decide who gets domain-X in cases of conflicts? And there will be conflicts, and malicious ones at that, so there must be a resolution technique, and it must not be decided in each case by end-users - they have no way of knowing quickly / accurately enough, and it would prevent the average person from being able to use it. Plus, it could simply be spammed with billions+ of claims, shutting down the usefulness of the entire system, especially if it's first-come first-served.

Meanwhile, if there are any higher-priority deciders, they can be manipulated similar to how DNS hosts are in this circumstance (or certificate authorities, in the https world). So it must be distributed... it strikes me as a paradox.

edit: the only way out being that a distributed DNS could be a mirror of official ones... but what happens when domain-X gets seized, and then sold to another, assuming it's a legitimate purchase for non-phishing reasons? And how do you resolve domain ownership transfers - they look the same as seizures, from a data standpoint, except they don't have a big "Your Gov't Wuz Heer" stamp on them.


I wonder if such a system really even needs domains anymore. Would it be possible to scrap domains altogether and use IPs only?

The link structure of the web is almost completely based on domain urls, but I wonder if there's not some way to work around that in a DNS-less/P2P system.


Many common services (HTTP, SMTP, IMAP, POP3, and especially DNS, if you think about it) are provided by daemons that don't really care much about the domain name of the machine they're running on. For example, you can configure your web server to deliver pages for www.example.com, and it will, as long as that domain is in the HOST: header of the request. No DNS is required, you just point the request at the web server's IP address.

The obvious problem is that, to my knowledge, you can't embed a HOST: header in a URL to fetch that resource from an arbitrary IP address (something like http://HOST:www.example.com@192.0.2.144/).
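
You can set the header out-of-band, though. A minimal Python sketch using the same documentation address (nothing real listens at 192.0.2.144, so treat it as illustrative):

  import http.client

  # Connect straight to the IP address; DNS never enters the picture.
  conn = http.client.HTTPConnection("192.0.2.144", 80, timeout=5)
  # Name the virtual host explicitly so the server picks the right site.
  conn.request("GET", "/", headers={"Host": "www.example.com"})
  print(conn.getresponse().status)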

Like HTTP, SMTP servers will gladly accept messages for domains they are configured to handle. But SMTP also depends on DNS to get MX (or A) records to deliver to domains it doesn't handle. It's trivial to support email addresses that use IPs instead of domains (like bob@192.0.2.144), but such addresses are less portable than domain-based ones and also create conflicts, because two users cannot have the same name even if they operate in different realms. Besides, they're butt-ugly and harder to remember than domains.

tl;dr: Using IPs only creates problems and DNS is a HUGE part of the solution. Any replacement will have to solve the same problems.


DNS, URIs, and application-level protocols such as HTTP and SMTP work together, but that doesn't mean they are the same beast. The reason for the existence of URIs is to provide identifiers for resources. DNS makes these human-readable. Applications in turn use these facilities.

When the user types http://www.example.com into the address bar, it's the web browser that figures out what to do next. Which is: realize it needs to do an HTTP request. Where to? Not an IP, so ask DNS. Now connect to the IP address. But an HTTP server can host multiple domains, so include the host name in the request (that's the Host header). The web server then looks in its configuration and sends the right page back. Note that the HTTP headers are specific to the application protocol and are irrelevant at both the DNS and URL levels. It just happens to be the same string :)


Domains make IPs human-readable and human-rememberable, allow multiple servers to identify as the same site (i.e. for human and CA purposes), and allow them to move. Somewhere you need a translation and connection system.

You could most definitely change your links to go to IP addresses. But they're more fragile.


How are they more fragile? Other than load balancing, server IP addresses hardly ever change.


Server IP addresses change every time you switch to a different provider.

Ironically the DNS actually provides a layer of resilience against being forced offline by targeting your provider, but that moves the vulnerability up one level, now you rely on the DNS to do its thing, and it too is under the control of a single institution.

The bar is a bit higher than a C&D to your provider but apparently not nearly high enough.


It depends on what your objectives are. If they are to ensure that no single legal jurisdiction can force a domain that has already been issued to stop working by taking actions in that jurisdiction (except via the domain owner), that is easily achievable without major problems.

E.g. the system could have the following components:

1) A centralised issuer (CI) of time-stamped certificates for a TLD, which certify someone is the owner of a domain. Certificates are only issued for domains that don't exist yet. The public key is in the certificate, and the private key is kept by the owner.

2) A network of TLD nameserver operators (TLDNSO) for the TLD. TLDNSOs have stable IP addresses which are distributed to client software in advance - and there is a centrally agreed on list. TLDNSOs are geographically dispersed, and spread across many different legal jurisdictions.

3) All certificates from the CI are sent to all TLDNSOs. Certificates whose timestamp does not fall within a limited threshold of the time the certificate was received are rejected, as are certificates for domains for which the TLDNSO already holds a certificate.

4) TLDNSOs accept domain resource record (RR) updates - e.g. nameserver records - from anyone, provided they are signed with the private key that only the domain owner has. Likewise for the equivalent of WHOIS details.

5) Domain owners can transfer a domain by signing a transfer certificate, which includes the public key of the new owner, with their private key, and sending it to the new owner. The new owner sends it to all TLDNSOs, which will from then on accept requests signed with the new owner's key rather than the old one.

6) Clients can query TLDNSOs using DNS or DNSSEC, or using a new protocol which lets them inspect the certificates from the CI and any ownership changes. Clients using the new protocol query several TLDNSOs in several jurisdictions - there could be a fairly complex set of conflict resolution rules, but one of the most important would be that if two CI certificates were received, more TLDNSOs get queried, and the most frequent answer is the one that is accepted.
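
A toy sketch of that majority rule in Python (query is a hypothetical callable returning the CI certificate a given TLDNSO holds for the domain; a real protocol would need far more):

  from collections import Counter

  def resolve(domain, tldnsos, query, quorum=5):
      # Poll a few geographically dispersed operators first.
      answers = [query(op, domain) for op in tldnsos[:quorum]]
      cert, votes = Counter(answers).most_common(1)[0]
      if votes <= len(answers) // 2:
          # Conflicting certificates seen: widen the poll and accept
          # the most frequent answer, per rule 6.
          answers = [query(op, domain) for op in tldnsos]
          cert, _ = Counter(answers).most_common(1)[0]
      return cert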

This makes most kinds of attacks on existing domains difficult:

1) The CI can be compelled by authorities in its jurisdiction to issue certificates, possibly backdated, but they won't be accepted by TLDNSOs except the ones that can be compelled to accept them (a minority in the case of unilateral government action), because they already have a certificate for the domain. The new CI certificate will be rejected by clients if only a minority of TLDNSOs present it.

2) TLDNSOs can be compelled to remove CI certificates for individual domains, but if only a minority are in any one jurisdiction, clients will get the record from other TLDNSOs.

3) Only the domain name owner has the private key needed to revoke or transfer a domain name. Obviously, the domain name owner can be compelled to reveal their private key (if they have it in that jurisdiction, anyway), but that is outside the scope of this document. They could encrypt the key with a secure password and refuse to disclose it - that would be legal in some jurisdictions and illegal in others. By this point, authorities would probably focus on taking down the servers hosting the website rather than the DNS.


So, basically, DNS+CA, with mob-rule for conflict resolution for both servers and clients.

I like it - it can be implemented alongside DNS, mirroring DNS entries where the owners agree to create a key. A progressive take-over is possible, layered on top of existing services.

It's not really "p2p" in that it needs hosts that must still be large and unmovable, and thus a target, but it's a definite improvement. I'd still like to see/find/come up with a way to make something as totally host-free as possible, but no doubt it'd be incredibly slow compared to a more centralized solution.


Actually I think that a Bitcoin-style approach would be better.

Basically just use something like Bitcoin's block chain, but store domain registrations instead of only transactions.

So you only get a domain if you can prove that you've spent some of your computer's computational power. In addition, once you've registered a domain and the registration is sufficiently far in the past, it is infeasible for an attacker to manipulate or delete your domain, due to the block chain and the computational power required to attack it.
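
A toy Python sketch of mining such a registration (field names and the difficulty scheme are made up; a real design would also need fees, renewals, and transfer records):

  import hashlib, json, time

  def mine_registration(domain, owner_pubkey, prev_hash, difficulty=4):
      # Grind nonces until the record's hash meets the difficulty
      # target, so every claim provably cost computational work.
      nonce = 0
      while True:
          record = json.dumps({"domain": domain, "owner": owner_pubkey,
                               "prev": prev_hash, "time": int(time.time()),
                               "nonce": nonce}, sort_keys=True)
          digest = hashlib.sha256(record.encode()).hexdigest()
          if digest.startswith("0" * difficulty):
              return record, digest
          nonce += 1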


Yes, I think mirror+specific alternatives...

In fact, the simplest would be "mirror + backtrack".

If a given site had an earlier DNS entry, the alternative DNS would point to that earlier entry as the second alternative. If you think the second alternative is "really it", you can make that permanent for you.

It wouldn't solve everything, but it would make a variety of seizure approaches not work well in the short term.

You'd still have trouble if you lost your IP address(es), but this would mean a seizure would have to hit multiple points to succeed.

Moreover, this would need only a minimum of centralization. A browser plugin to "find hidden/seized sites" might actually be trivial to produce. Name it something catchy. Anyone want to work on this?


But what that means is that the internet goes from being a www.com => globally-identifiable site to everyone having their own version. Links, URIs, Universal Resource Identifiers are no longer universal, and can't be used to reliably direct people around. It could be mitigated by changing the scheme (say: http://version/www.com/), but then every address-reading system on the planet has to be changed to handle it without throwing validation fits.

The average person will not understand this and will simply use whatever comes through automatically. If it doesn't lead them where they expected, they simply turn back. Leaving us back where we started.

Heck, you can already do this: you have a hosts file. Just map www.seized.com to the original IP. As long as the servers are running, it'll still work. The problem comes when traffic drops to zero, ad revenue drops to zero, and the reason for the site's existence is lost. Which is precisely the same problem with running alternatives: the average person, who accounts for most of most sites' traffic, will take whatever is served to them and not manage it on their own.
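
In hosts-file terms that's a single line (the address is a placeholder for wherever the site's servers actually live):

  192.0.2.144 www.seized.com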

Any system like this would eventually bloat to unmanageable levels, as again, ownership transfers look the same as seizures (or the reverse can be made to be true, with the intent to trick people). Eventually, loads of sites would have tons of alternatives. People could nab the servers / IP addresses of the old ones, and run phishing sites that look like the originals, further degrading the use of any alternative addresses...

... so people will use what's already decided for them. Which is what we have now. The fraction of a fraction of 1% of people who will visit the alternatives will not prevent their eventual death. Only the most popular seized pages will have any chance of continuing to exist... at which point their servers are simply seized along with their domains (where possible; government cooperation on this is only increasing, and if ACTA goes through it'll likely become the standard, done automatically, instead of the exception).

---

All of this also relies completely on the internet backbone routers not being manipulated. All it would take is a re-write rule, and any attempts to reach the address are taken out entirely. If a distributed DNS gains traction, do you honestly think this won't become a government's weapon of choice? Those routers exist somewhere.


Hmm,

"But what that means is that the Internet goes from being a www.com => globally-identifiable site to everyone having their own version. Links, URIs, Universal Resource Identifiers are no longer universal, and can't be used to reliably direct people around."

Yeah, it sucks that the state is breaking the Internet. I don't like it.

We should be clear: we shouldn't consider this an improvement. We should consider it a counter-measure to something like an act of war on the Internet, which it is.

"The average person will not understand this and will simply use whatever comes through automatically. If it doesn't lead them where they expected, they simply turn back. Leaving us back where we started."

The average user is getting smarter and smarter. This is something like a war. Normally fat, dumb, and happy humans can often suddenly exhibit more intelligence in this kind of situation.

"Only the most popular seized pages will have any chance of continuing to exist..."

The state cares most about these sites too. The state doesn't like actions which dilute its power. Even when such actions don't really work, they also make it look bad, which should not be underestimated. Basically, this is an electronic form of civil disobedience. And I believe things have come to the point where this might matter.


>The average user is smarter and smarter.

And relying on more and more complicated tools that run more and more automatically. For example: it seems to me that one of the major reasons password managers (and thus better passwords) are gaining ground is that they're better in every way - you don't need to remember anything. They make things easier and more automatic, so people use them.

My basic theory on humanity is that people aren't stupid, we're just lazy. And I mean that in a good way; laziness often leads to efficiency. Especially when taken to a global scale, things change when they become easier, not necessarily better.

I don't think we'll agree here though, so I won't debate that point further.

>We should consider this a counter-measure to something like an act of war on the Internet, which it is.

Sadly, yeah, it does seem to legitimately be under attack. From all sides. Something along these lines might work to make a parallel internet, which could be useful, but I don't see it as a solution, so something still needs to be done, and the sooner the better. Why not now?


Humanity is lazy, not stupid, I agree... (surprisingly enough)

Something needs to be done, I agree...

Aside from my web plugin proposal, what would you imagine happening?

(not to discount your other points...)


I'm not really sure. I need a bit more crypto knowledge & math, and a good chunk of time to brainstorm for it; no solution comes to mind. They're all necessarily bound by the fact that you have to trust something at some point; but ease-of-use is paramount, in my opinion, if you want to actually change things. No matter what, there are gives and takes.

Browser plugins might be the eventual solution's first steps, though they're more and more becoming sandboxed websites (which I like. Fewer security issues, easier programming, etc), so you'd have to go with something lower-level, which means it's harder to do cross-platform. But that's likely to be the case regardless, unless a single platform wins or virtualized, standard OS APIs become the norm.

All that said, I'm not sure there is a best solution, nor one which I'd actually be happy with. Much less something which works efficiently on a global scale. But I'm essentially a communication-anarchist: I generally think it would be best if anyone, anywhere could privately, anonymously communicate with anyone else. And I realize just what a can of worms that would be.

edit: Just for clarification, as I sometimes come off this way: this is meant in no way to be an attack on the idea / goal / you. And if I'm missing something, I'd love to know. Discussions like these often lead to solutions though, so I enjoy them and end up saying a lot :) I think some of it comes from having both of my siblings in debate teams, and having judged at a few debate competitions; I tend to come off more certain / forceful than I intend.


I'm not particularly defensive here, just fishing for people who'd like to help on the idea - which I came up with right on the thread above.

The thing is, the dns-backtracking-browser-plugin sounds like a simpler and more doable approach compared to anything else I've heard of. Any more elaborate approach would have to settle who owns a domain, and that's not an easy thing for the present system.

It would certainly need to be system/browser specific but otherwise doesn't sound hard. Indeed, I could do it in a couple weeks and a really smart person could do it in a day.

Obviously it's a stop-gap. The distributed peer-to-peer client featured here a couple of weeks ago is a far more robust solution (see http://news.ycombinator.com/item?id=1985431). That would include a system fairly similar to what you describe.


I've been meaning to read through the details on that for a while... guess I'll just have to do it.

But yeah, trackerless-torrents are about the epitome of such a system, though I think it'd have to be changed drastically to support a fast query architecture like DNS-like services need.


Here is the list of Gawker passwords, along with MySQL and FTP accounts: http://pastebin.com/9rRmf6W5

Thousands of people still use "password" as their password.


How many people really care about the security of their Gawker account, though? They just want to comment on a blog post; in order to do that, they must remember a password. "password" achieves that admirably. Now, if they are using "password" as the password for their bank account... they could have a real problem.

It would be great for people if tools like 1Password were more prevalent, even built in to browsers. It becomes trivial both to create and maintain an unlimited number of secure passwords.


And if you read that file, you will see that they used DES for hashing. Reminds me of the LM hash. The LM hash was generated with DES from two 7-byte parts of a 14-byte password: basically, each individual 7-byte part is used as a DES key to encrypt a fixed string. Do this once for each part, concatenate the two results, and you get the LM hash.
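
A sketch of the scheme in Python, where des_encrypt is a stand-in for a DES ECB primitive (e.g. built on pyDes) that also expands a 7-byte half into a full 8-byte DES key with parity bits:

  def lm_hash(password, des_encrypt):
      # Uppercase, then truncate/pad the password to exactly 14 bytes.
      pw = password.upper().encode("ascii")[:14].ljust(14, b"\0")
      # Each 7-byte half keys a DES encryption of a fixed string; the
      # two 8-byte results concatenate into the 16-byte LM hash.
      return b"".join(des_encrypt(half, b"KGS!@#$%")
                      for half in (pw[:7], pw[7:]))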


Is this gawker.com only? I have accounts on related sites like Kotaku and Jezebel, but I don't see any of them in that list.


The list on that pastebin is only a sample of what they bothered to crack themselves (easy passwords like "password" and "qwerty"). The torrent posted in another comment contains the entire database.


Their entire database was stolen. So you should change your password to be safe.


App Engine has a billing plan if you want to use more resources than the free quota: http://code.google.com/appengine/docs/billing.html It is a lot cheaper than AWS.


For avoiding the NY Times wall: if you are using Chrome, you can just right-click the link and "open in new incognito window".

