First, I suppose "scrape" is not the correct word here, if fb is just using the photos people uploaded to their servers.
Second, what was the license agreement between users and fb at the point they uploaded their pictures? I wouldn't be surprised if this is completely legal and covered by fb's TOS...?
Clearly these users opted in when they signed up for Facebook and then uploaded their photos. They had an option not to use Facebook.
I'm dead serious. I do not have a Facebook account for this and other reasons. If you don't want a company using your photos / contacts / other personal details, then simply don't give them this information in the first place.
> Clearly these users opted in when they signed up for Facebook and then uploaded their photos.
Not correct. For instance I, an Australian, had an old Facebook account that I forgot about. I never agreed to any new conditions. In June I received an email from Facebook notifying me that they would be updating their ToS and would begin processing public posts and photos. I have been trying to delete the account since then, but Facebook is refusing to, without giving a reason. The account appears to be locked and unrecoverable.
> They had an option not to use Facebook.
That's nice, and I can appreciate this point to some degree, but people can and do change their minds. It's a very American-style viewpoint that contract law trumps all, e.g. arbitration clauses, which aren't really a thing in the rest of the world. Privacy and consumer laws can and should trump Meta's sneaky invasion of privacy.
> It's a very American-style viewpoint that contract law trumps all, e.g. arbitration clauses, which aren't really a thing in the rest of the world.
If you don't like that viewpoint then maybe you shouldn't entrust your private data to an American company that can alter the deal, and you should just pray that they don't alter it further.
I'm Israeli, inherently distrustful of those who come to harm me with a smile. So I don't use Facebook. But I'm not a fanatic - here I am on HN.
You are correct. I just wanted to add that regardless of how the US company alters their ToS, they are still obliged to follow the law of the land where they offer the service. Which is why they constantly run into legal trouble in the EU. But yeah, once your photos have already been processed, it's a bit late to sue.
> If you don't like that viewpoint then maybe you shouldn't entrust your private data to an American company that can alter the deal, and you should just pray that they don't alter it further.
I don't use Facebook, and I think my government should regulate and fine Big Tech companies like Meta when they breach users' privacy.
Countries can enforce laws on American companies. See: X and Starlink in Brazil. You can pursue companies economically, you can use technical countermeasures preventing them from operating in your jurisdiction, and you can use the legal system if their owners, directors, or employees enter any jurisdiction your reach extends to.
That's all very specific to this scenario, but the tools exist to enforce laws, if the laws exist. Being a fanatic is not a requirement for any idea in this comment.
I didn't opt in when FB got my friends to tag me in photos, thus providing them with image recognition training data. I also didn't opt in to FB slurping up my contact info via their import-your-contacts feature they push.
Even if the users opted in when they signed up by agreeing to some privacy policy they didn’t read, that no one ever reads, that doesn’t make it okay for facebook to do this.
If someone does not read the terms of a service, then uploads their private information to that service, and the service does exactly what it said it would do with that information, how is that the fault of the service?
My parents raised me to be responsible for my actions and understand their consequences, maybe this is an unusual position?
The issues with your position are:
- Basically no one reads the terms they are “agreeing” to. This breaks the legal and logical principle that you can’t agree to something when you don’t know what you’re agreeing to.
- The agreement is basically forced. Either you agree or you can’t use the website. Sure that’s fine if you can use another website. However, for example with Facebook marketplace there is no realistic alternative in many places. Where I live our craigslist equivalent was all but killed by fb marketplace. Same for organisation of events. Also if almost every website has the same invasive policies then you don’t really have a choice but to agree. No choice once again breaks one of the fundamental principles of an agreement.
- The language used in those agreements is difficult for normal people to read and understand. This both discourages them from reading it and, again, you can't agree to something if you don't understand what you're agreeing to.
I’m not saying legally fb is in the wrong. Clearly the legal system as it is today doesn’t care whether you understand what you are agreeing to. I’m saying they are acting unethically, and that their behaviour is a scourge on society.
Do the HN terms of service permit me to exfiltrate and transform your comment to use near verbatim in my upcoming novel about slimy legalistic carpetbaggers grifting regular run of the mill average members of the public?
You can't agree to something that wasn't even conceived at the time. Maybe it is "legal" by the letter of the law, but it is hard to argue that it is in line with the spirit of the law.
According to the article there is no law in Australia regarding this - but EU and US (probably specifically California) laws cause them to offer the option to citizens of those countries.
Banning everything because we don't like change isn't a great plan either.
Modern society is a product of the Industrial Revolution, which had many nasty side effects (it destroyed the traditional structure of society, poisoned the air and the water, led to large swaths of land being damaged by mining, etc.).
Yet if, say, Prussia or France decided to ban industry early on because of the very real harms observed in England, they would have been conquered mercilessly by their foes later.
Technological bans often come at a price of stagnation. Facebook is at least a bit under Western control and can be regulated. Chinese AI labs won't be, and we will only see their results when smart missiles start raining on our heads.
Who said anything about banning everything though? That smells like a strawman.
"Fools rush in" is a warning against diving headfirst into doing whatever you feel like doing just because you can; it's not advocacy for "banning everything".
Look around, we are living in a veritable vetocracy.
There is no need for anyone to advocate banning everything. It is a result of aggregate efforts of a diverse set of activists each trying to ban something.
You dislike AI training. Someone dislikes high-speed rail, or possibly any new rail at all. Someone dislikes Starship. Someone dislikes new housing or rezoning of their city. Someone dislikes new biotech. Someone dislikes new factories. Someone dislikes new wind turbines. Someone dislikes new nuclear power plants. And nowadays, the fear of harm and of doing harmful things is in vogue, so the standard line of defense is to argue potential harms. Given that no activity in the world is completely harmless, the result is stasis and a paper war over everything.
It is as if there was a group of activists who dislike individual letters. Even though Alice only hates H and Bob only hates Q, if enough people do this, the entire alphabet is going to be banned soon.
There is a certain irony in the fact that the Internet, a revolutionary technology in itself, has enabled various reactionary folks to organize themselves to stop their $Nemesis, for whatever value of $Nemesis. Where once were isolated naysayers, a pressure group is now easily put together.
> when they can’t even outline a real harm that the ban is intended to avoid.
Unsurprising when { something } (for various values of something) has never been done before.
The wise course of action is to not rush full bore into, say, handing out sticks of dynamite, dumping hundreds of thousands of gallons of waste, allowing third parties in other countries to digitize citizens, etc. etc. until time has been taken to look into potential harms.
This is very much in the eye of the beholder. In my opinion, we already suffer from an abundance of suffocating safetyism, which may be a consequence of aging of our societies.
"time has been taken to look into pootential harms."
And I am also very skeptical of the idea that a theorizing committee can predict the future reliably, which includes the future harms and rewards of some technology that has barely started to develop.
The experience from the past is that we cannot tell such things in advance.
"They also don't have to be jumped into head first w/out checking for snags and clearance either."
So new technologies should be at least put on hold until someone checks them for snags and clearance to an extent satisfying for ... whom? You? An average person? A committee? The parliament? Some regulatory body?
Again, what you think of as wisdom seems like obsessive safetyism to me. As someone else here argues, you shouldn't have the right to stop or delay random things just in case, unless you can demonstrate some concrete harm.
The world shouldn't be a carefully padded kindergarten for adults. Heck, even current kindergartens for kids are too safetyist by far. Shout out to Lenore Skenazy and her free-range parenting.
Laws and regulations have downsides, too. And if you locally regulate some dual-use tech out of existence, it is quite possible that you may lose a subsequent war.
In this particular instance, we are talking about AI analyzing pictures, and looking at the Russo-Ukrainian war which is developing into a massive drone-fest, you cannot ignore the military ramifications thereof.
It is possible to jam a remotely controlled drone; it would be much harder to neutralize a drone that is capable of detecting targets reliably on its own.
> a drone that is capable of detecting targets reliably on its own
Yes, I guess in the future a lot of people will be killed by autonomous swarms of drones that just don't like the way they look, sometimes malfunction, give plausible deniability for war crimes, etc. ("Who knows what the drones did, there was no communication. Also, the drones cannot possibly kill anyone who isn't a terrorist. Also, surely it was the drones of the other side.")
That's actually a great example of something I'd like to see banned. It would require international cooperation and arms control agreements, which unfortunately don't seem likely to happen at the moment.
I don't know what part of the west you are pointing at, but that's pretty much how it works here in Australia.
That's why our farmers are getting Parkinson's, and cars are too big for the roads, and there is too much sugar in everything, and social media platforms can broadcast whatever they want with no repercussions.
I was just pointing out that's how the west works. sheesh. I'll just go back to cooking my dinner in a Teflon pan, on my artificial stone benchtop, in my asbestos clad kitchen, while smoking cigarettes, and something about lead paint.
How about we empower panels of experts in their fields to make decisions based on rational and informed consensus and then revise those decisions every so often, so that we don't have to personally choose to ban everything or nothing?
Western society today, but it didn't start this way.
The nanny state is really only a couple decades old.
Government has realized it can take significant powers if it can convince a subset of the populace that it is the only way to protect them from some type of harm.
So I live in a nanny state, and I hate the fact that my freedom is restricted. Gun ownership of any kind is all but prohibited, for example.
However I really wish the discourse around this stuff would shift a little bit. I want my government to protect me from huge tech companies and the massive infringement on our privacy, community and culture. Like the government should serve the people, both by not restricting the freedoms of individuals too zealously, but also by protecting people from genuine threats that they can’t or won’t protect themselves from, such as social media companies exploiting network effects that make it basically impossible for users to make alternative choices.
If you think otherwise please explain it to me because I really, genuinely want to understand.
Most new inventions have good outcomes and bad outcomes.
The benefit of letting them play out for a while is that you can see how things ultimately turn out, and cherry pick the aspects that are worth keeping, decide what aspects are worth keeping with changes, and outlaw aspects that are harmful.
It’s a natural experiment where you keep the wins, cancel the losses, then repeat the process, making societies iteratively better.
If you just pre-emptively ban everything before any significant real world harm is apparent, then you lose the ability to be selective and keep the wins.
This is Europe’s big mistake. They’re too eager to ban on gut feel, which means they ban too much, which places their industries at a disadvantage relative to other countries where iterative innovation can continue.
I don’t disagree with what you’re saying in principle. However, the Facebook experiment has been going on for decades, with multiple examples of unethical behaviour and a clear lack of any respect for privacy. The same can be said about basically any large tech company.
I just can't get my head around how anyone can think that posting pictures to social media in the first place reflects a desire for privacy or how using my photos as training data is a worse invasion of privacy than allowing millions of people to see them.
It is possible that the 'nanny state' is both good in some ways and bad in others. You and I benefit in uncountable ways every day from the protections granted by such a system, without realizing it.
Nuance is tough. It is easier to find a simple thing to blame and advocate against it. But nothing is simple -- governments are composed of people and those people have inside them conflicting motivations, each person's different than the other -- and most of them are not malicious. The same with business, and with the public.
Let's look at issues on their own and not try to categorize everything as part of a black and white choice.
Society benefits from allowing new inventions then banning those that turn out to be harmful, once the harm is clear.
The result is that you keep the good inventions and discard the harmful ones.
What people are proposing here is to preemptively ban new inventions even when there’s no evidence of harm. It’s not a good idea because while it will stop the bad inventions, it will also stop the good ones.
There is a concept of 'risk'. Making any choice, yea or nay, comes with risks, and they should be evaluated and acted upon as per the risk tolerance of people at the time. You are saying that we should disregard any rational evaluation of potential risks on one side and only choose one option all the time. Is that a logical position to hold?
No, I’m saying we shouldn’t be so quick to jump to banning things without a clear reason.
This thread is full of people suggesting Facebook be banned from behaving this way based on little more than gut feel. They aren’t even capable of giving a hypothesis for how this could be harmful.
And to think if Facebook just let people opt-in we wouldn't even be having this conversation. Harmful or not, did Facebook, who is bringing in billions in profit every three months, really have to do it this way?
That's dangerous, because mitigations after the fact can be very far-reaching, not only for users but also for companies. So let's imagine for a moment that something like this needs to be mitigated:
Who used any resulting model to create anything at all? In which cases did something too similar to a real person result from that usage? How do we track all of that down and delete it? Then we will need some IT guys coming over to the company in question and going through their stuff to ensure that all remnants are deleted, which can only happen after looking at possibly millions of logs to find out who got those too-similar results in the first place.
Making things right after the fact always has the potential to be orders of magnitude more work than not allowing someone to use some personal data in the first place.
> But public policy shouldn’t be based on what’s easiest, it should be based on what’s best for society, and that requires considering the trade offs.
Surely that is not simply grabbing people's personal data and using it for new purposes without consent.
> Right now there’s no evidence that anything Facebook are doing causes real world harm, so the trade off favours allowing it.
If I had been living under a rock, I might have claimed this. However, FB is complicit in genocide, so naah, not so sure about that claim. I would rather we have a very close eye on them.
Isn't that just what social media is? When you sign up and begin uploading text or files, they become the exclusive property of the platform, along with the rights to exploit them for commercial purposes.
> This means, for example, that if you share a photo on Facebook, you give us permission to store, copy and share it with others (again, consistent with your settings) such as Meta Products or service providers that support those products and services. This licence will end when your content is deleted from our systems.
The first image I get on Yandex is "Professor G.K. Egan" on the "staff-fake.html" page of a University website[0]. Aside from the "fake" part of the URL, it's not "the" Greg Egan we're talking about.
The second image is his profile picture on X/Twitter[1], which one can assume is fake since he says himself that he only puts up fake photos (and that picture seems to be nowhere else on the Internet).
The third image is a speaker on the French-speaking Belgian radio that has one episode of his show about Greg Egan[2].
The fourth image is Knute Berger, a journalist who has a discussion with Tim Egan about a book[3].
Did you read his claim that many photos exist that purport to be him but not one is an actual photo of the West Australian Sci Fi writer that he sees in the mirror daily?
Doesn't work if someone else violates your privacy in the first place by posting it publicly. To take it to the extreme, someone could break into your house (where you presumably do have a reasonable expectation of privacy), take photos of you and upload them publicly.
Well yes, but if you're assuming that, then that other person who violates your privacy could also choose to allow the photos to be used for AI training even if there is an opt-in or -out.
The article does not make clear whether the photos are used to train recommendation systems (e.g. newsfeed, ads) or generative image models. The article makes it appear like the latter, but it is most likely the former.
There was a similar discussion when facebook started running facial recognition on all pictures.
But it is interesting to consider whether the image generators from facebook will be able to generate pictures of non-celebrities just from their name. Although it would be fun to see what the average Joe _actually_ looks like.
The article talks about using images for training. Your scenario would require using both images and names. Is there a reason to believe this is happening?
I don't know what counts as a "reason to believe", but they have the data, users have accepted a TOS that gives Facebook the right to use all data, and all things being equal, the more you know about a picture, the better training you can do with it.
So for me it's more a question of "is there a reason to believe that they are not supplementing photos with other user data?"
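To make that last point concrete, here is a minimal, hypothetical sketch of what "supplementing photos with other user data" could look like on the training-data side: pairing each photo with a caption built from profile metadata such as a name and tags, so a caption-conditioned generator can associate that text with a face. All names, paths, and tags below are made up; this is just an illustration of the idea, not anything Facebook has described.

```python
# Hypothetical sketch: turning a photo plus profile metadata into a
# caption-conditioned training example. Everything here is invented.
from dataclasses import dataclass

@dataclass
class TrainingExample:
    image_path: str   # path to the uploaded photo
    caption: str      # text the generator would be conditioned on

def build_example(image_path: str, name: str, tags: list[str]) -> TrainingExample:
    # The richer the caption (name, tags, location, ...), the stronger the
    # association a model could learn between that text and the image.
    caption = f"photo of {name}, " + ", ".join(tags)
    return TrainingExample(image_path=image_path, caption=caption)

example = build_example("uploads/img_001.jpg", "Joe Average", ["beach", "2019"])
print(example.caption)  # -> "photo of Joe Average, beach, 2019"
```

The point of the sketch is simply that once the metadata is attached, generating "the average Joe" from a name alone becomes a data question rather than a technical one.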