
I don't have an opinion one way or the other on this extension or Facebook's decision, but the premise of your comment --- "we put a service up on the Internet but you can only talk to it on our terms" --- that is actually how things work, and how they should work.



> that is actually how things work, and how they should work

I don't think so. Users simply don't have the power to negotiate these contracts. These "take it or leave it" deals are abusive. Especially since many times these platforms have network effects so strong you need to be part of them in order to not fail at life. Under these conditions, nobody can truly consent to anything. These "terms" should not even apply. Nobody even reads them, it doesn't matter what they say because it won't change the fact they need to be on Facebook because of family, work, school, whatever. They click "agree" not because they agree but because the sign up form won't submit if they don't.

So technology that lets us alter the deal is very much welcome indeed. They don't want us using this stuff but their permission is not necessary. Software is gonna interoperate with their site whether they want it or not. They should not even be able to find out that we're doing anything out of the ordinary. From their perspective, they should simply see a normal user agent issuing normal HTTP requests.

Adversarial interoperability. If they refuse to make the site work like we want it to, we'll do the work for them. This should be considered a form of legitimate self defense against their abuse.


> you need to be part of them in order to not fail at life.

Srsly? "Failing at life" would appear to mean dying.

I don't think a social media account is a matter of life and death. FB is basically a kind of entertainment, so if you don't like the T&C you can always join a sports team or a choir, or whatever.


> "Failing at life" would appear to mean dying.

In my country it's simply not possible to communicate effectively without WhatsApp, Instagram and Facebook. If I refuse to use these things, I might as well simply ostracize myself from society. For some people, it could cost them their jobs, actually putting them at risk of dying.

> FB is basically a kind of entertainment

No. Facebook is communication, jobs, local information, even a market place with local groups where people sell their stuff. It's hard to describe just how thoroughly this company has managed to infiltrate my society and its way of life. People write pop songs about getting blocked on Instagram.


The nuance of how "we put up a service ... talk to it on our terms" is enforced is what is deeply concerning to me. Is that up to Facebook to use technical means to enforce their terms or is the force of law behind them? Where is that line drawn?

If I modify the DOM with an extension to hide content I don't like am I running afoul of the law? How about using Lynx instead of Chrome?

What constitutes "talking to" a service? Is it data I send to that server, or is it how my computer processes the data I receive and how I interact with it?

Different people are going to have wildly different opinions, and some of them are very troubling to me. Committing fraud is one thing, but simply using a service without exceeding your authority in a way the service provider doesn't prefer seems like something the service provider should handle without the force of law behind them.
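For concreteness, the DOM-modification case is worth grounding: "hiding content I don't like" is usually a couple of lines that run entirely on my machine, after the server has already sent its response. A minimal sketch (the selector is hypothetical, not Facebook's actual markup):

    // Hide every element the user has decided they don't want to see.
    // "[data-sponsored]" is a made-up selector, used purely for illustration.
    for (const el of Array.from(document.querySelectorAll<HTMLElement>('[data-sponsored]'))) {
      el.style.display = 'none'; // purely local: no extra request ever leaves the machine
    }

Nothing here touches the wire differently; the dispute is entirely about how my computer renders data it has already received.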


It is up to Facebook to use both technical means and enforceable contract law to draw lines around how their service can be used, the same way it is up to any of us to do the same with services we stand up on the Internet.

There are limits to both tools, and legislatures can enact new restrictions in response to public demand. But none of that is in play in this story.

If the argument upthread was "we should demand laws that prevent Facebook from locking out extensions to their platform", I wouldn't have a rebuttal (I might or might not support those restrictions). But the sarcastic dunk that was actually made, that it was somehow ridiculous that Facebook would have some say over the terms of how their platform was used, was weird and worth commenting on. It's not only not ridiculous, but actually the world as it exists today.


>[they said] it was somehow ridiculous that Facebook would have some say over the terms of how their platform was used

It's less about FB's right to set boundaries, and more about what FB does when they feel the boundaries have been violated. In this case, they've perma-banned the guy and threatened legal action. The extreme demands in that threat are NOT in FB's TOS, and they reflect FB's attitude of entitlement.

One argument against this is that FB is just doing the "standard legal thing" of demanding everything up-front, and then negotiating. That is true, but the fact that every lawyer tries to bully their client's adversaries doesn't mean they should. And in this case FB is Goliath, swinging hard and fast at David.

And you know what? Fuck Goliath.


You don't know the full story here.

It's quite likely that Facebook sent the person an email asking him to stop violating their Terms of Service and he refused.


It's far more likely they started with ban and lawyer.


Consider it by analogy: let's say I have a fax machine at my house, and someone keeps sending me faxes on it even though I don't want them to.

I could set up some technical mechanism to stop it, such as blocking their phone number. But, if it's easy for them to switch phone numbers, then that won't work well. And I may not be able to just block a whole area code, because there may be people I want to let fax me coming from that area code as well.

My other recourse, then, is to threaten to sue them and, if they continue, to actually sue them. And I would argue that I should be able to do that. Sending me faxes costs me financial resources and ties up my fax machine, so it's hardly zero cost to me, and it makes sense to have some third party sort out the dispute and decide where the line should be drawn.

I can imagine other worlds with gentler, more even-handed approaches to sorting out these kinds of issues. Unfortunately, most of those approaches fall under the general category of "regulation", and the country I reside in, the USA, decided a long time ago to eschew that kind of approach in favor of one that relies heavily on lawyering up and lawsuits.


What if there is only one brand of fax machine, they are selling your phone number to advertisers, and they demand you receive the faxes?

Or say you have to listen to robocalls or else you can't use some unrelated monopolistic service or product?

I like the analogy, but the real story is who would use such a tool. If someone feels they need such extreme measures, I wouldn't dare deny them this. Who in their right mind would?


> let's say I have a fax machine at my house

I haven't seen a fax machine in 20 years. I'd be surprised if anyone under 40 knows what they looked like.


Analogies are always risky business ;)

Facebook has a public service and one of the options is to Unfollow; someone wrote a browser extension to do this automatically for all items.

Fundamentally, what is the difference between automating this process and doing it manually?

In your example of a fax machine, arguably fax numbers are private; there is no requirement to publish a fax number, nor is it automatically listed for everyone to see. A malicious spammer would need to either obtain the number from a listing somewhere or brute-force it, and neither is really analogous to what a browser offers.

I think your analogy conflates a few concepts incorrectly, namely that there is some unexpected or undue financial consequence to Facebook from publicly allowing users to Unfollow Groups; if the extension __needlessly__ generated traffic, that would be closer to your analogy. But from what I can see of how the extension works (based on archived copies found on shady sites), it's not undue traffic, it's just expediting the process of manually Unfollowing groups.
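For illustration, here's roughly what such an extension plausibly boils down to: a content script that clicks the same Unfollow controls a person would, with a pause between clicks. The selector, label text, and timings below are hypothetical, not the extension's actual code or Facebook's actual markup:

    const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

    async function unfollowAll(): Promise<void> {
      // Find every control whose visible label reads "Unfollow" (hypothetical markup).
      const buttons = Array.from(document.querySelectorAll<HTMLElement>('[role="button"]'))
        .filter(el => el.textContent?.trim() === 'Unfollow');
      for (const button of buttons) {
        button.click();                           // the same event a manual click produces
        await sleep(2000 + Math.random() * 2000); // pause at roughly human speed
      }
    }

    void unfollowAll();

Each iteration produces exactly the traffic a human click would; the only difference is who moves the mouse.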

Facebook shouldn't have any recourse here as I see it: the automation causes no undue burden on Facebook that isn't possible by manually clicking, and a cursory review of the extension suggests it puts no stress on the servers that differs in any way from the traffic one might generate by manually unfollowing groups. Automating the process might indeed be undesirable for Facebook in some way, but fundamentally the same result is achievable by clicking manually, and I think more substantial evidence of damage is required from Facebook to justify such a threat.

If we take it to a logical comparison, should Facebook have the right to block a mouse + keyboard automation tool that I script to react at human speeds but is pixel-perfect to unfollow groups?

If the answer to this from Facebook is "yes", then the natural question is "what is the similarity between these processes?"; if the answer is "automation", then the natural question is "why is this damaging to Facebook as opposed to me just manually unfollowing?", and I'm not confident Facebook has a reasonable or strong answer to this.

If Facebook is fine with the slower method, then the question becomes "what is the real concern with the faster method?" I will skip the logical follow-ups here as the response is already long.

Facebook should __not__ have the right to sue just because they don't like an activity; no one benefits from this. Quite the opposite: smaller parties are actively harmed by such behavior, as they lack the financial resources or confidence (or both) to respond to such a legal challenge, and this was never the intent of the law. One should not need heavy financing to secure their natural rights. If Facebook wants to argue that the extension somehow violates their terms of service, I think the duty is on them to demonstrate how it's significantly damaging and how it differs from a dedicated person armed with a cup of coffee and an hour of free time; if Facebook cannot make a significant distinction beyond convenience for the person, then I don't see a basis for legal recourse.


I'm not sure why you think the line is this clear-cut, and in the wrong direction at that, but this gets murky really quickly.

You don't get a say in how I'm using my computer. If you're exposing your HTTP server to the world and letting users access it using their web browsers, you don't get to tell me my choice of web browser (that is, HTTP agent) is not to your liking.


The line is crystal clear.

You can do whatever you want with your computer.

But when you use your computer to access a remote service you need to comply with their terms of service.


If their terms of service say “thou shalt not reverse-engineer”, and I want to connect my Facebook to my Friendica, UK law says that I'm allowed to do so, and Facebook is not allowed to have a problem with it – any clause in a contract that says otherwise is to simply be deleted.¹

¹: Technically, I think “ignored” is more accurate; if you're prohibited from reverse-engineering in general, the general prohibition would still apply even though it has a specific exemption. I'm not a lawyer, though.


Similar situation here. I also have certain rights that Facebook tries to deny me through their contract clauses. I consulted a lawyer and was told those could be ignored.

Apparently it's a thing in the US. People can sign their rights away to these companies. Needless to say, those counter-rights clauses have become standard in every contract. Read one of these abusive contracts and you've read them all. "We reserve all possible rights while you promise not to use any of yours" summarizes every terms of service out there.


That depends on the terms - not everything goes. For example, they don't get to say that you must only use Facebook while naked.

And in this case, I would argue that this is a case where they should not have the ability to restrict this kind of interaction. If the law disagrees, then the law needs to be changed (and in the meantime, ignored to the extent possible).


> But when you use your computer to access a remote service you need to comply with their terms of service.

The only moral obligation is to not crack the server and take control of it. We won't make the server's processor execute our code. That's the line. Your computer runs your code, my computer runs my code.

Anything else is fair game. The server responds to my HTTP requests, so obviously anything I can do with HTTP requests is allowed. It doesn't matter what I use as a user agent since it's the company's own code that's handling those requests.

Ironically, taking over control is exactly what big tech is doing with our computers. They take control away from us and give it to the copyright industry, to the advertisers, to everyone who would very much prefer that we users remain mere passive consumers just like in the days of television. Our computers are slowly becoming appliances.


And this is in no way transgressing their terms of service since it's doing the exact same thing any HTTP agent would do. They don't get to choose which agent I use.

In other words, either the action is disallowed completely or it's allowed regardless of my choice of user agent.


It's irrelevant how you violate their Terms of Service, only that you do.

If I attempt XSS or SQL Injection against a website it is still illegal regardless of whether the HTTP request uses the same user-agent or is similar to other requests.


You're missing a crucial point, which is that XSS or SQL injection requests are different requests from those made during regular use. The intent of sending such a request is also different.

In this case, we are dealing with the same requests with the same intent, just made with a different browser. As stated previously, you cannot force my choice of browser.

Now please tell me which (real or imaginary) ToS clause this violates and how it could possibly violate it, even hypothetically.


I never read ToS. Not being a lawyer, I have no way to know which clauses are enforceable and which aren't. For the same reason, I am not capable of putting a correct interpretation on many of the clauses.

So I never read them. I don't consider them to mean anything. If you operate a public website, that's like erecting a billboard. You can attach ToS to your billboard, but nobody's going to read your ToS; they'll just read your billboard.

I'd like a plugin that can auto-click ToS popups. Like, click any button that says "Accept" (or "So sue me, sucker!")
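A naive sketch of such a plugin, assuming consent dialogs expose ordinary buttons (real dialogs vary wildly, so the labels and selectors here are guesses):

    const ACCEPT_LABELS = ['accept', 'i agree', 'agree'];

    function clickAcceptButtons(root: ParentNode): void {
      for (const el of Array.from(root.querySelectorAll<HTMLElement>('button, [role="button"]'))) {
        const label = el.textContent?.trim().toLowerCase() ?? '';
        if (ACCEPT_LABELS.includes(label)) {
          el.click();
        }
      }
    }

    // Catch consent dialogs injected after the page loads.
    new MutationObserver(() => clickAcceptButtons(document)).observe(document.body, {
      childList: true,
      subtree: true,
    });
    clickAcceptButtons(document);

The MutationObserver is needed because most of these popups are injected well after the initial page load.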


Please elaborate.

I understand certain terms (such as saying you can't hit the server more often than a reasonable amount), but much beyond that I push back. If the laws allow them to make such all encompassing demands of how I use their product, well, laws can be changed, and I vote.


You can vote much more forcefully by simply not using Facebook. Many of my friends have made exactly this decision and they seem fine.


I disagree. I don't think that solves the problem, at all.

One: Facebook is currently being accused of damaging democracy via misinformation and their "anger promoting" algorithm. That affects me, and my leaving Facebook doesn't solve that.

Two: there is the monopoly issue (if that is the right word... the issue I am concerned about lies on a spectrum, unlike many people's usage of "monopoly"). Prior to Facebook having dominance, I used to be in the loop of what my friends are doing, because they used phone, email, etc. Now they all use Facebook, and my choice to not use it (which I don't, actually) results in my not being included in a huge number of things. In that sense, I think Facebook has become like a utility, like the phone company of old. I can't just find a social network product that I prefer and use it instead... my friends are not on it, and other social networks are not interoperable with Facebook (as phone providers and email providers are interoperable with one another).

Teens who use Facebook products are known to be harmed. Maybe you think they should just not use these products. That will cause even greater harm to their social lives than it causes to mine, since all their friends are using it and being connected with friends is very important to teens. Again, their simply not using the product doesn't address the problem. (and MY not using it especially doesn't help)

I think your comment is like saying "if you don't like constant robocalls, just cancel your phone plan rather than encourage laws to curtail them." Kind of throwing out the baby with the bathwater.

So yeah, I'll exercise my right to vote by actually voting. Luckily, many representatives are in agreement with my perspective on this.


> Teens who use Facebook products are known to be harmed

There is nothing special about Facebook products that make them harmful.

It's a glorified message board which facilitates the exact same harmful social interaction that is prevalent on other sites e.g. TikTok, Reddit, Snapchat.

This idea that you can ban Facebook and Instagram and suddenly the internet is safe for kids is just ridiculous.


> There is nothing special about Facebook products that make them harmful.

Sure, there is. Addiction. People are addicted to this stuff. They're addicted to likes, reactions, seeing their follower numbers increase. They're addicted to the algorithmic content feeds. Facebook is actively working towards keeping it that way. They probably want to make it even more addictive. They want people using their software at all times in order to collect data and serve ads.

Why else would they C&D an unfollow extension developer? They want people to keep following so they get addicted to the infinite content plus ads feed.

Also, nobody is excusing any of the other sites you mentioned. There are plenty of things wrong with them as well, and we'll condemn them for it. We're just focusing on Facebook right now because it's the subject of this particular thread.


I think this discussion applies to TikTok, Snapchat and Reddit as well. Facebook (along with Instagram) is probably the worst offender at the moment, but any laws that apply to Facebook would apply to others.

> This idea that you can ban Facebook and Instagram and suddenly the internet is safe for kids is just ridiculous.

Who said that? I don't think they should be banned, but I do think they should be regulated or otherwise held accountable. A big part of it might be being more transparent with their algorithm, and giving tools to users to control it more.

I think where we are is akin to back in the days where people realized that, no, we aren't going to ban buildings, but we are going to have building codes. We aren't going to ban food, but we are going to have an FDA. Etc. The free market doesn't address all problems.


That is exactly how it works, even with the extension. By building what amounts to a GUI to a glorified database, you are deciding on the level of interaction with the database.

The fact that the extension helps automate some tasks is a different matter. If it were an industrial-level scraper that scraped anything public... that could be considered malicious and could cause tangible financial losses.

This extension, on the other hand... You can't really justify sending a threat like that. You can come up with excuses, but that is it.


Yeah. Imagine sending a C&D to an extension developer because their software is helping people break free from their social media feed addictions. Can't have that, it's reducing ad impressions!

It's like Facebook wants people to hate them.


A web browser is called a User Agent for a reason. It is not a Corporate Agent or a Facebook Agent. It should grant every right regarding the look and feel of web sites to the user, and none to the website being browsed.

A web site may merely suggest how it is best served.


I agree completely. This also extends to HTTP requests and all kinds of automation. We should be able to make a custom Facebook client if we want to. There's no reason their client must be the only one allowed to talk to their servers. Competition in this space is obviously good for us. User agents should do what's good for us, not what's good for some company. If subverting their business interests is good for us, that's exactly what the software should do. We are its masters.

Really, the user should have all the power. These companies already have what, billions of dollars? That's power enough for them.


> There's no reason their client must be the only one allowed to talk to their servers.

In a lot of cases it's also not their client in any sense of the word. Firefox, Chromium, Safari are not Facebook's.


Yeah. Their overreach in that case is even more offensive. The whole notion of Facebook having any say in the matter is absurd. Who are they to say which extensions or scripts people should or shouldn't be able to use?


At the very least, every browser should include a grabber that mirrors all the information it sees and stores it locally or in the cloud.

Facebook bans you? You still have all your data intact.
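A rough sketch of what that grabber could look like as a content script, assuming the WebExtension storage API (browser.storage); the key scheme and the absence of any retention policy are made up for illustration:

    // Snapshot the rendered HTML of every page into local extension storage,
    // so the copy survives a later ban or account deletion.
    // Assumes the global "browser" object (Firefox) or webextension-polyfill.
    async function snapshotPage(): Promise<void> {
      const key = `${location.href}#${Date.now()}`;    // made-up key scheme
      const html = document.documentElement.outerHTML; // full rendered content, post-JS
      await browser.storage.local.set({ [key]: html });
    }

    window.addEventListener('load', () => { void snapshotPage(); });

Full-page snapshots get large quickly, so a real version would need deduplication and a retention policy, but the principle is simply "keep your own copy of what you were shown".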

I wonder if Facebook is legally still obliged to provide all its information to the user in the EU even after banning the user, and if they comply.


> I wonder if Facebook is legally still obliged to provide all its information to the user in the EU even after banning the user, and if they comply.

If they aren't, they should be. Facebook's contracts aren't above the law which says people have a right to their data. Does the law care that the user was banned? I don't think so. Nor should a banishment somehow invalidate someone's rights.


> mirror all information it sees to store locally/in the cloud.

Browser history?


Yes, but with full content, navigable and searchable.


That would be amazing.



