I would encourage everyone to read it. My personal take is that this is a ridiculous amount of homework to do in 45 days, and is essentially the FTC asking for not only these companies' "secret sauce" but also their confidential bookkeeping. I would be astounded if none of them challenged this order.
For example, requirement 12.d.: "Submit all documents related to the Company's strategies or plans, including but not limited to research and development efforts." It is not clear the FTC has this kind of authority to essentially demand "what are your trade secrets and future business plans?"
Of particular interest is that the reason why only 9 companies were targeted is that if they had targeted 10 or more, they would have had a mandatory review under the Paperwork Reduction Act. A review that would almost certainly have said they couldn't ask for most of what they asked for in the way that they asked for it.
On this logic and this logic alone, companies like Apple, Gab, GroupMe, LinkedIn, Parler, Rumble, and Tumblr were not targeted. Think about that.
Revealing all R&D is too much, but I don't think in general that it's too little time to answer. They would of course know the answers to questions like the one you cited.
Otherwise, from the PDF:
1-6 mostly seem fine.
Q7 is secret sauce
Q8 doesn't really seem relevant to how personal data is used
Q12 is insane - ALL business strategy (short and long term), research AND development, marketing AND sales, AND any plans to create OR cancel any existing products, as well as all presentations to any executives!
That is insane! Why doesn't the FTC stop beating around the bush and just ask to be bribed, because there is no valid reason the FTC has to ask for any of that. It's clearly just an attempt to get inside information to do trades.
Google is on the list; they have submitted to a similar agreement to be able to work in China. This reeks more of a "see what you can get on the way out." 45 days from today is 1/28, one week after the inauguration. A period when there will be a lot going on and plenty of confusion, which could give underhanded tactics leeway to work.
As much as I want to find yet another reason to hate the Trump administration, this was done by the independent FTC, by a vote of 4-1, with 2 of 3 Republicans and both Democrats voting in favor of it.
Curious why you're downvoted. So people think the FTC is not independent? I wasn't sure about it myself, but with both Democrats voting in favor, this doesn't seem to be pushed through by Trump.
I wonder what's really the limit in Q10a. For example companies will have huge recommendation models built, likely very automated without manually assigned labels. If they never ask for "“yes,” “no” for whether the natural Person is of Hispanic, Latino, or Spanish origin", but one of the automated recommendation dimensions is close to 1:1 match for that question, would they have to disclose that? (i.e. give statistics per-dimension)
This applies to the rest of Q10 too. 1000 top attribute values? Does "around vector (0,0,0.3,0.9,....)" count? Because barely anyone asks explicitly for the real user attributes anymore.
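The proxy-dimension scenario above can be sketched concretely. The function and data below are hypothetical (no company's actual model is implied): it measures how strongly each learned embedding dimension correlates with a sensitive attribute, flagging dimensions that are a near 1:1 match even though the attribute was never asked for explicitly.

```python
import numpy as np

def flag_proxy_dimensions(embeddings, attribute, threshold=0.9):
    """Flag latent dimensions whose values closely track a sensitive
    attribute, even though that attribute was never asked for directly.

    embeddings: (n_users, n_dims) array of learned user vectors
    attribute:  (n_users,) binary array (1 = member of the group)
    Returns (dimension index, correlation) pairs exceeding the threshold.
    """
    flagged = []
    for d in range(embeddings.shape[1]):
        # Pearson correlation between this dimension and the attribute
        r = np.corrcoef(embeddings[:, d], attribute)[0, 1]
        if abs(r) >= threshold:
            flagged.append((d, r))
    return flagged

# Toy data: dimension 2 is engineered to track the attribute almost 1:1,
# the way an automated recommendation dimension might end up doing
rng = np.random.default_rng(0)
attr = rng.integers(0, 2, size=1000)
emb = rng.normal(size=(1000, 4))
emb[:, 2] = attr + rng.normal(scale=0.05, size=1000)

print(flag_proxy_dimensions(emb, attr))  # only dimension 2 is flagged
```

Whether a regulator would accept per-dimension statistics like these as an answer to Q10a is exactly the open question in the comment above.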
I think both assertions are wrong: Regulators still have no real conception of how any of this technology works, and you can watch the congressional hearings with Facebook for evidence that mostly they didn't really have a clue what to do once they had him in the room, or how exactly Facebook even makes money.
The court system only confirmed your rights to use webcrawlers within the last year, and webcrawlers have been around since the start of Google.
And as for the series of tubes guy, I'll give him the benefit of the doubt. He clearly still didn't have a good grasp on the subject, but I don't think he was being literal.
(for the record, I don’t mean to call out the fellow who actually made the series of tubes statement, it was just clear to me that even as an analogy it showed a lack of understanding of the basics of how the internet actually functioned, and that people in his position had no ongoing mandate to understand it.)
Here’s an example of what I mean by they have more understanding now:
I always thought it was a pretty good metaphor and didn't understand why he got so much grief for it. Breaking up messages into little chunks, sticking each chunk into one of those little capsules like they have at the bank, putting each capsule into a pneumatic tube, sending it down a series of tubes with switching stations along the way, and reconstructing all the chunks (which may have taken different paths through the series of tubes and may have arrived out of order) at the destination -- this is a good metaphor for the internet!
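The pneumatic-tube metaphor maps cleanly onto how packet delivery actually works. Here is a minimal toy sketch (not any real protocol): the message is split into numbered "capsules," delivered in arbitrary order, and reassembled by sequence number at the destination.

```python
import random

def send_series_of_tubes(message, chunk_size=4):
    """Split a message into numbered chunks ('capsules'), deliver them
    out of order, and reassemble by sequence number at the destination."""
    # Each capsule carries a sequence number, so arrival order doesn't matter
    capsules = [(i, message[i:i + chunk_size])
                for i in range(0, len(message), chunk_size)]
    random.shuffle(capsules)    # different paths, different arrival order
    arrived = sorted(capsules)  # destination reorders by sequence number
    return "".join(chunk for _, chunk in arrived)

print(send_series_of_tubes("the internet is a series of tubes"))
```

The shuffle stands in for packets taking different routes; the sort is the reassembly step the metaphor describes.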
> If they never ask for "“yes,” “no” for whether the natural Person is of Hispanic, Latino, or Spanish origin", but one of the automated recommendation dimensions is close to 1:1 match for that question, would they have to disclose that? (i.e. give statistics per-dimension)
I'm going to assume yes, they would need to disclose that. That scenario sounds awfully similar to some of the terms the FTC uses when discussing the Fair Credit Reporting Act (FCRA):
> Under traditional credit scoring models, companies compare known credit characteristics of a consumer—such as past late payments—with historical data that shows how people with the same credit characteristics performed over time in meeting their credit obligations. Similarly, predictive analytics products may compare a known characteristic of a consumer to other consumers with the same characteristic to predict whether that consumer will meet his or her credit obligations. The difference is that, rather than comparing a traditional credit characteristic, such as debt payment history, these products may use non-traditional characteristics—such as a consumer’s zip code, social media usage, or shopping history—to create a report about the creditworthiness of consumers that share those non-traditional characteristics, which a company can then use to make decisions about whether that consumer is a good credit risk. The standards applied to determine the applicability of the FCRA in a Commission enforcement action, however, are the same.

> Only a fact-specific analysis will ultimately determine whether a practice is subject to or violates the FCRA, and as such, companies should be mindful of the law when using big data analytics to make FCRA-covered eligibility determinations.
Around here we've had a recent law mandating that any government-related decisions that use an algorithm must, upon request by a citizen, be explained in plain language, following all the steps in the algorithm.
I can't wait until they have to do that for a black box, neural network program !
It doesn't even require anything esoteric like a neural network. This would already be very difficult for the Federal Reserve, for instance, or the SEC. Or even the Bureau of Land Management. Lots of government agencies crunch financial and other kinds of data to make decisions, which would be very difficult to describe in lay terms.
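For a simple, non-black-box rule, the kind of plain-language trace such a law seems to demand is at least feasible. The eligibility rule below is entirely made up for illustration: it logs each step of the decision in plain language as it goes, which is trivially possible here and essentially impossible for a neural network.

```python
def decide_benefit(income, dependents, threshold=30000):
    """Toy eligibility rule (hypothetical) that logs each step in plain
    language, the kind of trace an explain-the-algorithm law would require."""
    steps = []
    adjusted = threshold + 5000 * dependents
    steps.append(f"Base income threshold is {threshold}.")
    steps.append(f"Add 5000 per dependent ({dependents}): adjusted threshold {adjusted}.")
    eligible = income <= adjusted
    steps.append(f"Declared income {income} is "
                 f"{'at or below' if eligible else 'above'} {adjusted}.")
    steps.append(f"Decision: {'eligible' if eligible else 'not eligible'}.")
    return eligible, steps

eligible, trace = decide_benefit(income=38000, dependents=2)
print("\n".join(trace))
```

With a trained model in place of the arithmetic, there is no equivalent of those four sentences, which is the point being made above.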
I don't think it's more important to protect private profits than to ensure people's rights are respected when their data is being processed.
Moreover, I don't think the answers will be made public so what's the problem with being held accountable to a government authority? That's just normal in this case.
It's more than relevant to get to know a company business strategy when their business model is essentially based on user data.
Being held to government authority is indeed normal. Being held accountable to government micromanagement and/or spying is not. The normal way for government to handle this stuff would be to issue a law saying that it is forbidden to spy on citizens and if you do spy on citizens you go to jail. The government going over all paperwork of a company is such a strange activity.
And how do you research for the formulation of such a law? Not really trying to defend anyone here with not much info, just it seems that asking parties for information about how the soup is made so you know how to regulate the making is pretty common, no?
> It is not clear the FTC has this kind of authority to essentially demand "what are your trade secrets and future business plans?"
They do seem to have the legal authority to make life very difficult for, and potentially restrict the actions of, these companies. I imagine that if you refused to answer, and your right to do so was likely to be upheld, they'd just do what they were going to do before they asked the questions.
Picture this like a chance to talk your way out of trouble. Saying the minimum probably won't be enough.
And just like the real world, never try to talk your way out of trouble.
Never talk to police, remain silent, and get a lawyer. The police aren’t trying to throw you a bone; they are trying to get you to self-incriminate.
I will say the deadline is maybe not bad. Why? Well, my theory is that legal fees and help would be higher (that's right, higher) if a longer time period was given. Seems counterintuitive, but that's my way of looking at it. (In short, maybe a version of 'work expands to fill the time available for its completion,' something like that.)
That said, I don't feel bad for these companies at all. They have plenty of resources and plenty of money. And there is no reason they can't ask for and receive extensions of the due date; there is no 'train leaving' at the time the info needs to board.
The 45-day thing is a minor tickle, I shouldn't have led with it.
My point is that to me it feels like this Order is a huge overreach of authority, and whether you personally do or don't feel sympathy for a certain company does not change what the government should and should not be able to ask of a company in the general case.
Reading the document, the demands are wide and reaching. I would like to know how much of what they demand is actually part of the FTC's authority for what they are allowed to require companies to provide.
It's absolutely not an "overreach." The FTC is actually starting to do their job after decades of letting industries consolidate. Congress should increase their budget so they can do more of it.
Frankly, the order is one step short of "copy your Shared drive to a hard disk and mail it to us, we'll take care of the rest."
I'm genuinely interested in what US Code permits the FTC to ask for such a trove of company information and strategy. My suspicion is that they're reaching. We can talk about whether we feel it's an overreach or not - you clearly feel the opposite of me - but I hope someone can chime in with the actual mandate or code.
>The FTC is issuing the orders under Section 6(b) of the FTC Act, which authorizes the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose.
My read of Section 6(b) (from [0]) is that it requires the FTC to prescribe specific questions for a company to answer:
> the Commission may prescribe annual or special, or both annual and special, reports or answers in writing to specific questions
I do not see how the requirements of part 12 of this Order are "specific questions," in fact they are quite broad to me. But I suppose that's for a lawyer to decide.
Personal opinion (don’t @me, I have zero credibility here):
They are looking to establish a new name for themselves with the public, and right now is the perfect time to do it.
“We’re taking big tech to task for the safety of our children! You’ve seen the headlines about what they’ve let happen under their watch! The FTC is here for you!”
I am not sure what to explain here. Every hedge fund is audited extensively; of course there are cases such as Madoff, but that in no way implies that virtually full transparency doesn't exist. What are you doing as an organization? Every email, every transaction can be audited, and in many cases is. Furthermore, if there are unusual trades, requests will be made for an explanation of why that was the case.
They have billions in cash, but they can't afford to hire enough people to finish this task in 45 days?
I believe Requirement 12.d is not necessarily about "what are your trade secrets and future business plans?". The FTC needs a clear picture of the "WHAT-WHY-WHEN" in order to come to a conclusion.
We use digital social networking services every day, yet we still have nearly zero knowledge of how their platforms actually work.
From the one dissenting statement by a member of the FTC board:
--
> The 6(b) orders are rife with broad (and sometimes vague) specifications that burden analysis and oversight could have helped reduce.
"all Documents Relating to the Company’s or any other Person’s strategies or plans, Including, but not limited to: a) business strategies or plans; b) short-term and long-range strategies and objectives; c) expansion or retrenchment strategies or plans; d) research and development efforts; e) sales and marketing strategies or plans, Including, but not limited to, strategies or plans to expand the Company’s customer base or increase sales and marketing to particular customer segments (e.g., a user demographic); f) strategies or plans to reduce costs, improve products or services (e.g., expanding features or functionality), or otherwise become more competitive;"
> Such a request would be suited to an antitrust investigation. But as part of an inquiry ostensibly aimed at consumer privacy practices, it amounts to invasive government overreach. And that is just one of the order’s 50-plus specifications.
> The biggest problem is that today’s 6(b) orders simply cover too many topics to make them likely to result in the production of comparable, usable information.
> Such a request would be suited to an antitrust investigation. But as part of an inquiry ostensibly aimed at consumer privacy practices, it amounts to invasive government overreach.
Rather ironic to consider this "invasive government overreach" but not to levy the same accusations towards the perpetrators of surveillance capitalism.
I'd love to see the law that allows the FTC to demand broad sets of documents like this without a warrant or even a purported crime.
Unfortunately, I'd imagine it exists, but that it basically says "the FTC can do whatever it wants if it thinks a company is being 'abusive' or 'misleading consumers'" with some vague definition of what that means.
Yet another example of Congress legislating away their ability to legislate and/or the courts' ability to judge.
6(b) orders are essentially warrants under the FTC Act:
"Section 6 of the FTC Act provides another investigative tool. Section 6(b) empowers the Commission to require an entity to file “annual or special . . . reports or answers in writing to specific questions” to provide information about the entity’s “organization, business, conduct, practices, management, and relation to other corporations, partnerships, and individuals.” 15 U.S.C. Sec. 46(b). As with subpoenas and CIDs, the recipient of a 6(b) order may file a petition to limit or quash, and the Commission may seek a court order requiring compliance. If a party fails to comply with a 6(b) order after receiving a notice of default from the Commission, the Commission may commence suit in federal court under Section 10 of the FTC Act, 15 U.S.C. Sec. 50. After expiration of a thirty-day grace period, a defaulting party is liable for a penalty for each day of noncompliance. Id.; Commission Rule 1.98(f), 16 C.F.R. Sec. 1.98(f)."
Key phrases being "broad sets of documents" from the comment you're responding to and "reports or answers in writing to specific questions" in the FTC's blurb.
"Specific" doesn't mean "narrow." "What were the ingredients from every meal you've had in your life?" or "What is the current address of every person with the first name 'John'?" is a specific question.
All it means (in English) is that it's clear which information should be included in the reply, not a limitation on the range of information that can be asked for.
> Unfortunately, I'd imagine it exists, but that it basically says "the FTC can do whatever it wants if it thinks a company is being 'abusive' or 'misleading consumers'" with some vague definition of what that means.
Again, this is rather ironic given the nature of what Facebook et al are doing. Vague clauses in unreadable privacy policies don't really constitute an adequate defence of the mass harvesting of personal data, yet here we are.
From the dissenting opinion by Commissioner Noah Joshua Phillips:
> These are different companies, some of which have strikingly different business models. And the orders omit other companies engaged in business practices similar to recipients, for example, Apple, Gab, GroupMe, LinkedIn, Parler, Rumble, and Tumblr, not to mention other firms the data practices of which have drawn significant government concern, like WeChat. The only plausible benefit to drawing the lines the Commission has is targeting a number of high profile companies and, by limiting the number to nine, avoiding the review process required under the Paperwork Reduction Act, which is not triggered if fewer than ten entities are subject to requests.
> The only plausible benefit to drawing the lines the Commission has is targeting a number of high profile companies
It's almost as though the majority of the Commission recognizes that its resources are limited, and that going on a wild goose chase investigating a bunch of smaller social media companies would simply ensure that nothing would be accomplished.
Apple isn't even a social media company. I have no idea why they'd be included.
Gab, Parler, and Rumble are all smaller social media sites targeting the alt-right. They're so much smaller than any of the services being investigated that I feel like even mentioning them here is a dog whistle.
LinkedIn is explicitly targeted at adult professionals. That puts it a bit outside the scope of the investigation as well; one of its focuses is "how [these sites'] practices affect children and teens".
By some metrics, iMessage is the biggest messaging service in the US. Arguably, from the POV of the FTC as a US regulator, WhatsApp shouldn't be included but Apple SHOULD be.
This is of course ignoring other social features of Apple products such as Game Center, Find My Friends and Facetime.
iMessage is a messaging service. Its features include ways to personalize messages. It's not collecting personal/demographic information, it's not trying to increase user engagement (addictiveness), and it's not serving ads. So it is clearly outside the scope of the information the FTC is trying to gather.
> It's not collecting personal/demographic information
Sure is! On my iCloud account, Apple has my address, my location, the location of all of my devices, my Apple Card is tied directly to purchases, and much more!
Did you know iMessage specifically even used to be linked directly to Facebook's evil social graph?
> it's not trying to increase user engagement (addictiveness)
Citation? I would bet a large amount of money that Apple is trying to increase engagement. Why do you think they make products like memoji or features like pinned threads?
Did you know the App Store has a recommendation system to suggest apps to you? It's based on the other apps that you purchase and possibly other signals that I'm not aware of. Apple, of course, gets its 30% cut of that.
iMessage is a pure messaging service - there are no recommendations, no feed, no following friends, no status updates, no likes, no profiles, no stories, and no connection to Facebook at all. Most importantly, no advertising since they actually make money off the products, which means no incentive for gaming engagement metrics. They want you to have a good time and sell you another device next year, not glue your eyeballs to the screen.
Since there is no “Apple social network” to be spoken of, there would be little reason to include them.
> no likes, no profiles, and no connection to Facebook at all
You can absolutely thumbs up stuff. And accounts do have a profile -- a name, a phone number, a profile pic, email address, probably your apple ID somewhere in the metadata, etc. And this is not connected to Facebook, but to your Apple account. All your activity is sync'd between devices.
> Since there is no “Apple social network” to be spoken of, there would be little reason to include them.
In a sense, iCloud is the social network: https://www.androidauthority.com/green-bubble-phenomenon-102.... Beyond being able to send special effects with your chats, you can also send money via apple pay, video call via FaceTime, and, with Apple One, share your subscriptions.
> They want you to have a good time and sell you another device next year, not glue your eyeballs to the screen.
True, but what do you think the metrics execs are thinking of when they introduce thread replies or Memoji? And with Fitness+, News+, TV+ and Apple Arcade, more and more of Apple is increasingly reliant on customers paying them for the privilege of staring at screens.
Does Fastmail limit any of the social features of its product to communications with other Fastmail users or is Fastmail an interface to the generic, open email protocols?
Well, if they did Apple could respond with a few short pages (targeting: none, ads: none, etc.) of info compared to the terabytes the social companies would need.
Apple's great luck here is that everyone thinks of social media primarily in aesthetic ways that are linked to ideas of social media that they don't like.
> there are no recommendations, no feed, no following friends, no status updates, no likes, no profiles, no stories, and no connection to Facebook at all.
So because Apple doesn't have, in your view, some of the above (extremely limited) set of things, they have no social networking properties in your view. It's revealing that "no connection to Facebook at all" is of such prime importance in the determination of Apple's products not being social media that it is specifically listed here. You're clearly arguing what makes a product social based on a narrow set of specific features that you have ill will toward, not on questions of the actual social nature of the products.
Under your asserted set of features which define social media, Messenger from Facebook only fails on being associated with Facebook and having stories! WhatsApp only fails due to Status and being associated with Facebook.
Apple used to have a product that was formerly known as "Find My Friends". It's wrapped into an app now called Find My, where you can follow the locations of your friends throughout the world. It integrates really nicely with other apps in Apple's portfolio like FaceTime and iMessage. One of its key features is to notify you when a friend of yours enters/leaves a location. It's about as close to "following friends" as it gets. Now for you, I imagine that your argument is that you are following physical locations of your friends with this app, not broadcast digital content so it's different and is therefore not social media. This seems like an extremely narrow view of social media that ignores the "social" aspect. Hard to understand how anyone would assert an app called "Find My Friends" isn't social!
> Most importantly, no advertising since they actually make money off the products, which means no incentive for gaming engagement metrics.
Why is it that engagement can only be useful for advertising? If Apple packaged a whole bunch of terrible apps that no one wanted to use (ie had low engagement) into its OS, do you think the demand would be so high for them? I don't. Neither do you, I think, as you make clear in the next sentence:
> They want you to have a good time and sell you another device next year, not glue your eyeballs to the screen.
What do you think represents having a good time in iMessage? For me, I would probably measure it by repeated usage under some metric like sends. The more someone uses iMessage to communicate with other people on iMessage, the more likely they are to not want to leave the Apple ecosystem. This is a real phenomenon. The shame of being a green bubble is very real in some circles. If the functionality of iMessage didn't vary between in-network and out-of-network messages, it would be completely fair to suggest that iMessage is a "pure messaging app" in the sense that GMail is just an email service. On GMail, my emails go across email services and work the same way in any scenario. In iMessage, if you're a pleb on Android, I can't do dozens of things with you that I can when communicating to my contacts on iMessage.
> Since there is no “Apple social network” to be spoken of, there would be little reason to include them.
This is just false beyond your aesthetic assertions of what social media is! Game Center explicitly has a "friends" concept as does Find My. iMessage doesn't call your contacts friends but if you have an iCloud account associated to a contact what's the difference beyond aesthetic conceptions? You have a closed network where you can socialize with others.
If your definition of social media is "has the features of Facebook Apps or is associated with Facebook" then I get why Apple doesn't have social media properties. If your definition of social media is "closed ecosystem services that enable you to interact socially with chosen individuals" then Apple clearly has social media properties.
It really sounds like you’re problematising this subject. Tell us: which of Apple’s services or platforms have played a significant part in recent sociopolitical events, trends or culture in general?
That is what this inquiry is about, not musings over system boundaries or speculation about intent.
iMessage is E2EE (according to Apple). But Apple can still see who's messaging who. With that you get an intimate social graph. Similar to what the NSA has on US citizens via phone records. Apple also says they don't sell your personal info to advertisers. But they can still use that data however they like, and there's nothing stopping them from selling that data at some point in the future.
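The "intimate social graph from metadata alone" point can be sketched in a few lines. The metadata below is invented for illustration: even with message bodies encrypted end to end, sender/recipient pairs are enough to reconstruct who talks to whom, and how often.

```python
from collections import defaultdict

# Hypothetical message metadata: (sender, recipient, timestamp) only,
# no content at all -- yet enough to reconstruct a social graph
metadata = [
    ("alice", "bob", 1), ("bob", "alice", 2),
    ("alice", "carol", 3), ("dave", "bob", 4),
    ("alice", "bob", 5),
]

graph = defaultdict(int)
for sender, recipient, _ in metadata:
    # Undirected edge weight = number of messages between the pair
    edge = tuple(sorted((sender, recipient)))
    graph[edge] += 1

# The strongest ties surface immediately, without reading a single message
for (a, b), count in sorted(graph.items(), key=lambda kv: -kv[1]):
    print(a, b, count)
```

This is essentially the phone-records analysis mentioned above: content encryption does nothing to hide the shape of the graph from whoever routes the messages.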
Why would they want to sell it? They could license access to it for recurring revenue, like other Big Tech companies.
The most important control that customers should seek is not control over the sale/transfer of data, it is control over how the data may be used, by anyone. For example, if the data comes with terms that say it cannot be used for advertising purposes, then what are the chances anyone can sell it to advertisers.
The "We do not/will not sell your data" line is a deliberate red herring, of no more value than "We take security very seriously".
Edited my post with some more details. While LinkedIn is a social media site of sorts, it's outside what I believe the focus of the investigation to be.
LinkedIn actually has a lot of teens on it. Some are very talented; others are of average talent. But they do have teens. LinkedIn is also about 3 times bigger than Discord. I think you could argue either way, because there may be a larger absolute number of teens on Discord? The numbers are not broken out enough to tell. But given the nature of what we're dealing with, I wouldn't discount the idea of them not wanting anyone to take a closer look at what they are doing.
I myself am always suspicious of the "Think of the children!!!" crowd.
Somehow, if my teenaged kids spent as much time on LinkedIn networking business relationships as they did on Snapchat, I wouldn't quite mind as much.
Regardless, I think you've missed the part where the FTC said what they were gathering information on, e.g. "how [social media] practices affect children and teens."
> I think you've missed the part where the FTC said what they were gathering information on, e.g. "how [social media] practices affect children and teens."
I didn't miss that part.
That's why I said that I don't trust people who say they are "thinking of the children". If you trust their intentions, well, we can peaceably agree to disagree.
And if your friends are the politicians in charge, then maybe that would be a good argument that I should be less skeptical about the "think of the children" claim.
But your friends are likely not the politicians in charge, and even if they were, I wouldn't take the word of an anonymous internet poster that said friends were wholly honest and forthright in this regard. I would remain skeptical of their intentions. The recipe for good democracy includes a healthy amount of skepticism. Skepticism is what motivates investigations and inquiries. Inquiries of the type the FTC studiously avoided by not triggering the 10 entity flag.
I agree that LinkedIn probably has some similar risks to the sites that have been issued response orders. But if the goal is to create legislation around those risks, I'm not sure it is necessary (at this time) to also query LinkedIn (and the many other sites with similar risks). The ones on the list can likely provide enough information for this initial inquiry.
It is not primarily a social media company, but they have a big portfolio which includes social features; for one example, I can add friends in my Apple Watch exercise app.
Parler's front page makes no mention of this. Pls source.
Or, do you just mean that they take the people who get kicked off of twitter? Because then, yeah - if you kick people off one place they'll tend to be found somewhere else.
Commissioner Phillips makes a pretty disingenuous argument. The purpose of FTC Act Section 6(b), as I'm sure Phillips well knows, is to collect information in order to inform the FTC's ability to propose policy and legislation. In fact reports written from 6(b) requests are often called "studies" or "policy research" in the media.
> These are different companies, some of which have strikingly different business models.
That's usually the goal when you're sampling how an industry operates.
And so disingenuous are the other points he makes that I honestly don't believe Phillips believes most of what he's written. But when I got to the sentence where he called the FTC's privacy staff "the most impactful privacy enforcers in the world" and the FTC's regulation of consumer privacy "an effective enforcement program" it's hard to take anything else he says seriously.
I'll admit I did get a kick out of him citing Executive Order 13892[1] which basically says government agencies must be "fair and transparent" when collecting information from regulated entities. How deliciously ironic he's willing to put his foot down in the name of unfair, confusing, and overly broad information gathering.
I see no problem focusing on the largest companies that rely on personal data.
The "10 entities limit" doesn't seem like an inappropriate use of a loophole here. You have draw the line somewhere. Look at the largest and use them to make the general guidelines (or new laws) that will govern all companies. Then look at other companies as needed
Sounds funny, as they just disturbed the hives of probably the best-financed legal departments out there, and they did it using the core issue for the companies that own those legal departments.
Hi jawzz, if you don't mind, someone would like to ask you these questions:
>I have a question for this user, since he/she chose a good section to quote (it means he/she really looked into it).
>I'd be really interested in his/her opinion or thoughts, and or any other kind of feedback:
>Yesterday there was a widely reported leak of 2 million embedded agents of a foreign government (you can see it in dozens of places).
>I'd ask this user, does he/she think that this FTC order could be by one of these embedded agents in order to gather private information about citizens, for the purposes of censorship?
>I noticed that a few days ago [5 days ago], YouTube adopted a controversial censorship policy [it generated 3055 comments when discussed on Hacker News]. Videos discussing political opinion on a certain topic without calling for violence are removed.
>I'd like to just keep it to the facts of whether this FTC action could, in this user's opinion, be part of such a thing. [Gathering information on users for the purposes of censorship]
How about if I see an ad I should be allowed to see who bought the ad?
How about how they chose to target me? Is it because of my race or interests or gender or age?
How about the option to opt out?
How about the option to configure your future ad targets / attributes?
How about no special black-box algorithms? Beyond the addiction issues, how can these exist while guaranteeing no racial or other types of discrimination?
How about no algorithm opt outs?
How about no more quasi-curated trending sections, and the option to see purely organic results instead (Twitter Moments, Twitter comments, Facebook feed/news, Google Search, YouTube trending, etc.)?
Finally... How about anti-consumer reparations in some form or jail time for executives?
I still would like to go back to the days before all my feeds on things like YouTube, Twitter, Facebook, etc. became just an algorithmic feed of what they think I like. It would be nice to go back to that day and age when it was chronological. I'm all for also having a recommended feed on things like YouTube, because I do want to find similar content I may enjoy, so I appreciate that YouTube still has a normal chronological subscription feed I can view too.
How would you know that they haven't done illegal things, without actually looking up if they did? I think gaining insights into that blackbox is what they are after, and see if what happens in that blackbox is illegal.
I get that you are using algorithm as a short-hand here; but can you elaborate and describe what type of algorithms you suggest be opt-outable?
Algorithm is a super general term, but everyone uses it to describe something super specific. I feel like everyone sort of has their own personal definition. I am in favor, at least in principle, of legislation that would curb some of the more abusive uses of data science, but the idea of trying to legislate "algorithms" seems... fraught.
Agree it is hard but important to do to avoid discrimination. It’s less “regulating algorithms” and more “protecting personal data and access”.
Google and other big data co’s should not be allowed to profile unregulated and that’s exactly what they are doing.
I think a reasonable solution is that if a company is using someone’s “personal data” (legal definition pending) to configure/curate/personalize/uniquely-alter their experience in any way, that they allow the consumer or user to change those settings.
Configuration variables will open up transparency and is the first step to having a personal data right.
If an “Algorithm” is too complex to allow a user to configure (black box), they need to provide a way to opt-out to a general organic / main audience one then.
“race/gender/x/y/z is a protective sensitive category prohibited from advertising to directly”
Does anyone actually believe they don’t provide ways for advertisers (including nefarious ones) to do so through inferring it?
Or, that their own internal tools and algorithms to curate personalized content follows these rules? Even if they try and only discriminate a tiny little bit and are continuously working on it how on earth is that allowed or okay?
I agree with a lot of these but the option to opt out would be your choice already by stopping use of the service. Or are you thinking that by opting out you could pay for the service instead?
Had a good laugh at the bottom of the release: "Like the FTC on Facebook(link is external), follow us on Twitter(link is external), get consumer alerts, read our blogs, and subscribe to press releases for the latest FTC news and resources."
This isn't a direct comment on this case in particular, but if you are preaching the death of capitalism (which is where this pasta arises from) while using not only the freedom to do so but a platform delivered to you solely by a market, without any self-reflection, then you bet I'm going to point that out.
Not the parent commenter, but the only product I spend any money on whatsoever anymore is Benzedrex, so beyond that any product being delivered to me is entirely at the expense of whatever company is involved in its dissemination.
Do you... You realize that your entire argument is essentially "Oh you are going to critique capitalism on a website funded through capitalism? Curious! I am very intelligent." Like... do you not have self-reflection?
It might be useful to put TikTok in parentheses for ByteDance, since that's their flagship product and most people (myself included) do/did not know that ByteDance is the company behind TikTok.
Why list Facebook, then list WhatsApp, but not Instagram? Is Instagram not part of it then? If WhatsApp wasn't there, I'd assume they're both under FB but listing WhatsApp separately is strange.
This is heartening news. We are finally approaching the moment where addictive social media is handled with the same seriousness with which big tobacco was treated.
It's funny to think that people might look back at how much time and attention people from 2010 to 2020 spent on their device and think how backwards we were.
Like how "dumb" past generations were for putting lead in gasoline.
Seems more likely to me that all the overreaction to social media will be what is viewed as silly and old-fashioned. History is absolutely full of very similar examples of people freaking out over how new mediums like novels or bicycles or radio or rock music or whatever are a waste of time and/or are destroying the children's minds.
Right, but history is also full of examples where any reaction was non-existent and where this precise argument was used to allay concerns. History is probably also full of examples where it made sense.
There are also examples of when ignoring addictive behaviour -- or just pretending there isn't a problem -- leads to disaster.
The world wide web was the new medium, social media is the attention grabbing (and addictive for some) hook that enables a couple of large corporations to charge more for advertising impressions.
I wonder which is worse for health. I’d say tobacco, but social media probably causes more obesity than Coca-Cola itself. It also causes people to procrastinate instead of engaging in intellectual activities, especially school or work, so the cost to society might be enormous. I wonder whether this is quantifiable, but it is definitely visible.
But if it slows careers, we also have to reckon that it creates careers and side-jobs (« Instagram model », « journalist », ...).
> Also causes people to procrastinate instead of engaging in intellectual activities, especially school or work, so the cost to society might be enormous.
I doubt it. Before social media, people were playing solitaire on their computers. Before that, they were standing around the water cooler gossiping. Slacking off and procrastinating are nothing new.
The distraction aspect is due primarily to having so much information available instantly. That is a result of the internet, but smartphones supercharged it as they add constant availability to the mix.
The issue with social media is that it is specifically designed to distract and grab attention, and the information they use is mostly gossip crowd sourced by their users to sell advertising against.
There needs to be some real education about how to use social media responsibly during schooling, and the companies also need some strong regulation to stop the exploitation.
The harm done is similar to gambling in my view and needs to be regulated in a similar way. Some people have the self restraint to log off and focus on real life, for others it is an addiction.
Did the water cooler company hire people to see if they could keep people at the water cooler longer? Did solitaire copy what’s in your copy/paste pad and secretly send it to Microsoft?
Did solitaire buy and kill FreeCell to prevent its popularity from surpassing its own?
I doubt it; much of this is just going to be finger wagging. Case in point: the FTC sued Match Group for fraud, but I can guarantee you nothing's going to come of it. If you could pay a $5 million fine after increasing your valuation by billions, you'd do that too.
If anything I think there's a massive need for individuals to regulate their own social media. Since I know these things can actively harm me, I don't use social media or online dating. But I also wouldn't pass laws stopping people from using this stuff.
It's like eating junk food: sure, the world would be a better place if no one ate McDonald's, but everyone has a right to eat as many Big Macs as they want.
Something big is brewing. I think there is going to be a big push to rein in social media.
I've thought long about this and can only think of one way to regulate them. I would propose that people are allowed x degrees of separation: as a regular user I can only see content from n+2 connections away. Beyond that, the content is not shown no matter how popular it is. It will kill virality but should rein in stupidity being broadcast. To increase your separation score you must merit it with a reason and stick to it.
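Purely as an illustration (the follower graph, usernames, and hop limit below are my own assumptions, not part of the proposal), a visibility rule like this could be sketched as a bounded breadth-first search over the follow graph:

```python
from collections import deque

def within_degrees(graph, viewer, author, max_hops):
    """Return True if `author` is within `max_hops` hops of `viewer`
    in the follow graph -- i.e. the author's post would be visible."""
    if viewer == author:
        return True
    seen = {viewer}
    frontier = deque([(viewer, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand past the allowed separation
        for followed in graph.get(user, ()):
            if followed == author:
                return True
            if followed not in seen:
                seen.add(followed)
                frontier.append((followed, hops + 1))
    return False

# Toy follow graph: alice -> bob -> carol -> dave
graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}
print(within_degrees(graph, "alice", "carol", 2))  # True: carol is 2 hops away
print(within_degrees(graph, "alice", "dave", 2))   # False: dave is 3 hops away
```

Even this toy version hints at the enforcement cost: the platform would have to answer a reachability query like this for every post shown to every viewer, and hugely followed accounts put almost everyone within n+2 of them anyway.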
Personally I'd like to go back to an age before social media. It seems social media has had limited substantive success for anyone and has really created more problems than it's worth.
Historically I've avoided expressing my views on abolishment but now people are trying to develop fear-mongering strategies around decentralization, so here I am back at abolition. I'd love to find a happy medium but I don't think it's possible. Social media companies, search companies, national news (who is dependent on social media for income), politicians (who are dependent on social media for likes and shares), and law enforcement working in coordination to do an arbitrarily good thing (disrupt the flow of misinformation) was both eye opening and terrifying. These tools can be used for good but they're also the keystone tools of autocracy and are used largely for reasons that require interpretation and narrative following.
My point is that we don't need these kinds of problems. We have space exploration ahead of us, major advancements in technology are still needed, and we need a continuum of people who are willing to work together despite whatever differences they may have, ideologically and otherwise. I don't think limiting scope or killing virality will achieve these ends.
So I like social media, and get a lot of benefit from it. I like that it's free and supported by targeted ads, rather than a paid service.
Since we disagree, and I imagine there are many people with the same opinions as both you and me, why not make it opt-in, so you only encounter social media if you choose to make an account, and then still only see it if you go to it and/or install it intentionally? Wait, that's how it is now...
Social media isn't opt-in though. Facebook has routinely collected data on non-Facebook users, my name/likeness/content can appear on social media without my consent, and there are hardly controls for protecting such things in any meaningful sense.
I deleted Instagram months ago and I have friends that have routinely told me that profile is still up, just in a limited capacity. How is this opt-in?
An even better example of its non-opt-in nature is that I went as far as to ban all social media via my VPN and router. The result was unexpected when looking at American news: it often quotes from and derives sources from social media. Reading news articles that were mostly Twitter statements or reactions made it hard to discern what the articles were even about in the first place.
> I deleted Instagram months ago and I have friends that have routinely told me that profile is still up, just in a limited capacity. How is this opt-in?
Did you temporarily disable your account instead of deleting it? Deletion appears to only be available from web/mobile web, which is not great but deletion is only three steps. [1]
You're talking about 3 very different things here.
1) You're saying social media isn't opt in because they're tracking users off of their social media platform. This doesn't have anything to do with social media, but rather the pages you're going to have chosen to use a service provided by Facebook to track users. They could use another behavior targeting service, but chose Facebook's.
2) You're saying Instagram deletion didn't work. It sounds like you deactivated it instead of deleting it. I haven't heard of this being an issue (it'd be widely reported if it were).
3) You're saying it's not opt-in because people refer to it in the news. In that case, nothing is opt-out because people have the right to refer to anything they want. Making this opt-out would mean censoring other people from saying things online or referring to things online, because of your views.
1. That is still Facebook violating my privacy. How about if I create a virtual profile of you on a page of my website, list it in a search engine, and provide unverified (and likely false) information about you that can also be damaging? Would your feelings change or would you continue to let me run your public profile out of principle as you've described here?
2. I have already answered this. I deleted it. I'm happy to share the email that shows I deleted it, not deactivated. I additionally deleted my data as well, so I have two emails to share if you so desire.
3. There are websites that do this and it's not considered censoring. Reddit regularly has rules against doxxing or naming, as well as rules requiring proper attribution. People have gone as far as to set up bots and alerts to remove information or give proper attribution. This idea is not revolutionary, and your presentation of it is maximalist at best.
I didn't see the other post and I believe you. If you say you deleted it, and they didn't, I think we'd both agree that's a problem they should fix or change.
I think the issues you're describing don't have anything to do with the social media service aspect, but more like bad practices of an online company. So I'd agree we should push for online tech companies to fix many issues, including security issues and offsite tracking and proper attribution, etc.
But I don't believe in your original argument that we should abolish social media, because a lot of people want to use social media and it's not your right to stop them.
Sure, there are probably multiple ways we can improve the ecosystem with the same net effect.
> But I don't believe in your original argument that we should abolish social media, because a lot of people want to use social media and it's not your right to stop them.
I'm happy to give up decentralization or abolishment if someone can find a way to ensure that social media is treated like the extension to free speech that it is, that privacy is ensured strongly, and that virality is a thing of the past.
How do I opt-out my young child, who is part of a generation that if they don't opt-in are subject to intense social stigma? I mean I know how the mechanics work, but we are talking about applying "old curmudgeon who doesn't know modern life" views on a new generation. I don't think you succeed here (I am not weighing in on whether or not we SHOULD do this btw) without going all-in.
Firstly, as others have pointed out, these companies collect information on people who have never opted in.
Secondly, they purposely set up a walled garden such that you can't see some things unless you make an account. Like say, the thousands of businesses with no website but only a Facebook presence. Or reading political discourse directly from politicians on Twitter. Or, as is very often noted, the easiest place to get support from companies seems to be on social media.
The latter isn't really easy to solve. One can say who cares, don't join then. But it can put you at certain disadvantages.
Platforms like Facebook are not realistically opt-in. That's actually why social networks work: it's a self-reinforcing mechanism that makes it punitive to leave.
> It will kill virality but should rein in stupidity being broadcast.
Why do they have a responsibility to 'rein in stupidity' in the first place? Who gets to decide what's stupid and what's not?
I understand removing illegal content. I understand providing guidelines around content that may not be acceptable for minors. I understand removing (or compartmentalizing) content that is "off topic" (such as a platform dedicated to eSports not wanting traditional sports content). But every attempt to 'rein in stupidity' means censorship of one form or another and will inevitably be abused.
> For example, I doubt you and I are two degrees away. I would have never seen this.
Well, then, you'd never have seen this terrible idea, and nor would basically anyone else. So, the terrible idea, paradoxically, works, by keeping people from seeing terrible ideas!
I had a feeling someone would point this out. But I would contend me seeing this was still good. I was able to provide feedback to the proposer, and others were able to comment and think about it as well.
At the end of the day we've all expanded our minds a bit.
Exactly. If you restrict access based on degrees of separation, you're essentially restricting access to information by geography, which is sure to reinforce regional differences and further radicalize people by hiding dissenting opinions.
Pre social media, we lived in a world where information was generally separated by geography. Your comment suggests that by returning to such a state, it would result in increased radicalisation. The implication is therefore that social media has decreased radicalisation. I’m not sure I agree.
Social media has both increased and decreased radicalization at the same time. For those who already had radical ideas, who previously couldn't find others with the same ideas, they are now more radicalized (think conspiracy theories and so on).
For those with open minds, it's allowed them to learn new things from people far away with very different ideas.
> For those with open minds, it's allowed them to learn new things from people far away with very different ideas.
It can do that, but that can also backfire because the current social media platforms aren't trustworthy or transparent.
Advertising, "influencers", fake likes, bots, and all of that let small fringe groups amplify and normalize their ideas, so when open minded people come along, they see thousands of likes and hundreds of people promoting some idea, and they can get the impression it's a much more popular or common idea than it really is.
In other words, social media makes it very easy for a vocal minority to push their views on people, and it's nearly impossible for other users to discover when that's happening.
As someone who considers himself well read and reasonably curious, I’ve found myself falling victim to “fake news” through social media. I’m quick to admit I was duped when fact checked by friends during conversations, but I’m nevertheless still longing for the days when I didn’t need to find at least 3 sources for any vaguely controversial thing I learn.
I don’t think the FTC will (or ever would) rein in how “far” content can spread. The FTC is a consumer protection agency, so they are most definitely focused on advertising and data collection, not user-generated content or misinformation.
> This would have totally suppressed all the BLM content this summer.
How do you know that? Civil rights and protest movements in general have traditionally not relied on social media or even digital communication of any sort. Couldn't the mechanisms once employed to organize activists still work today? The answer is yes, they can. I know for a fact the Baptist church down the street from me participated in organizing many of the protests in my city this summer.
> This is also great for hiding and suppressing organization of social uprising. This would have totally suppressed all the BLM content this summer.
I'm not so sure about that. That kind of policy would definitely put the brakes on uncoordinated viral peer-to-peer spread, but that just might lead to a resurgence of coordinating organizations acting as clearinghouses for such things.
That could be good, at least in places with decent civil liberties, since I believe broadcast speech is a bit like code: it's better when there's some peer review and sanity checking before it's distributed.
Strongly doubt that would have been the net effect, considering the amount of corporate attention and support that was received. Ferguson in 2014 might be a better example case, where there was little to no corporate involvement. Your point is true though, no doubt.
People often retort that no one cares about decentralization, well we're going to find out if that's really true over the rest of the decade.
This seems interesting - clearly the friction in sharing everything with everyone is far lower online than before. Before, you had to consume finite resources (time, energy, postage stamps, etc.) to tell someone else about something. This gave a natural rate limit.
As a thought experiment, what if the degrees of separation were less rigidly enforced, but people's "power of voice" decays the further it goes from those who directly follow them, resulting in a similar result? Then for every user, there's a probability they'll see or not see a given post (significantly reducing as the degree of separation increases). Clearly this isn't ideal and there's compounding factors (if you have 3 friends share it, should that probability be additive?)
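One way to make that decaying "power of voice" concrete (the base probability and decay rate here are arbitrary illustrative numbers, not anything a platform actually uses): each additional hop multiplies the chance of being shown a post by a decay factor, and multiple independent shares combine via the complement rule rather than simple addition:

```python
def view_probability(base_prob, decay, distance):
    """Probability that one share reaches a user `distance` hops from the author."""
    return base_prob * (decay ** distance)

def combined_probability(base_prob, decay, distances):
    """If several friends share independently, the chance of seeing the post
    is the complement of missing every share (not a simple sum)."""
    miss = 1.0
    for d in distances:
        miss *= 1.0 - view_probability(base_prob, decay, d)
    return 1.0 - miss

# One friend two hops out vs. three friends at the same distance:
print(round(view_probability(0.9, 0.5, 2), 3))             # 0.225
print(round(combined_probability(0.9, 0.5, [2, 2, 2]), 3))  # 0.535
```

This answers the "should that probability be additive?" question in the thought experiment: treating shares as independent events keeps the combined probability below 1 while still rewarding multiple shares.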
Share rate limits in WhatsApp led to people imploring others to copy paste fake news to share it to others, so clearly this isn't as simple as limiting normal shares.
Friction is exactly what I believe the end answer is for social media, fake news, viral video, etc.
Prior to the Internet, all communication included substantially larger amounts of friction. Broadcasting your opinion required acquiring TV, Radio, or Newspaper time/space. Sure there were low cost alternatives like standing in the public square and shouting your beliefs, but that is a far cry from tweeting out something hateful from the comfort of your couch while you eat pizza and watch unlimited streaming video content.
The friction was both a drag on productivity AND a safety mechanism to prevent mass spread of inaccurate or low quality content. I feel like the Internet removed all of that friction which is great in many cases, but particularly bad for accuracy of information. It's like the gossip that used to spread through towns by word of mouth, can now become truth within days just by going viral. And once enough people hear the same thing, isn't it now true?
Perception is reality. Just look to the divisions in the world. Each side when talking amongst themselves has one truth while the opposing side has a completely different one. Doesn't matter which truth is accurate when both sides are unified in their belief that it is their side's story that is correct.
Friction is necessary, to at least slow the contagion of inaccurate information. Right now the incentives are misaligned. Viral content that makes the consumer feel good provides the most economic return for companies like Google and Facebook. While the system is setup to reward this, how could you expect a profit seeking business to do anything else but optimize for it?
> The friction was both a drag on productivity AND a safety mechanism to prevent mass spread of inaccurate or low quality content.
I don't think this is correct. I remember a bunch of outlets from the seventies and eighties that had all of: far reach, inaccurate, and low-quality content. It was just restricted to a few influential people with enough money. I think this is the reason social media is fought so hard: these people lose their influence (not that I am fond of social media). The past was not as rosy as we like to paint it.
I agree that it wasn't perfect, no doubt about that. It definitely restricted the ability to mass communicate into fewer hands.
In general, the reduction in communication friction is THE big advance brought about by the Internet. Instant, effortless, and basically free communication for everyone everywhere.
Businesses like Facebook, YouTube, and Twitter are economically incentivized to push as hard as possible to get you to consume and create more so that they can show more ads. That means that when a video catches the attention of 50 people long enough to show an ad, then they can quickly encourage the spread of that video very rapidly to maximize that attention grabbing clip.
Nowhere in that equation is there an evaluation of the benefit or harm of that content to the end user. The evaluation is solely on whether it grabs attention, because the product now being sold is human attention, and it's being bought by advertisers. The humans being influenced have no say and don't actually reap any gains from the transaction.
Car wrecks are attention grabbing and slow down traffic even the other side of the highway as people gawk. But of course they are not great for anyone....UNLESS you somehow have a business model that makes more money when there are more car wrecks. What happens then?
Good points, but all of this holds true for traditional media as well.
Just recently, Der Spiegel (a somewhat reputable weekly journal in Germany) ran a series of completely or partially made-up stories. The author was able to place them because they fit so well into Spiegel's narrative, such that (almost) nobody asked questions.
It's a good point - the past was far from perfect. Long before the current "user generated content" fact-level problem, I remember (some) traditional internet forums offering the same problem as we have today - inane parroting of plausible sounding content, almost as mantras, to anyone who asked a question.
I learned from those days that you had to be careful as what you read on a forum was highly likely (at least in some circles where expertise is limited, the topic is complicated, and questions are plentiful) to be plain nonsense, or pseudo science.
On the other hand though, the increase in friction arguably reduces the scale of the problem you cite - making it financially based is far from ideal, but it at least reduced the breadth of the problem space to some extent.
I think one of the biggest challenges with modern social media is the ability for anyone to immediately engage in "broadcast" communications.
Absent the centralised media of old, broadcast simply wasn't an option in the past - at best you could try to multicast by shouting in the corner of a bar at as many friends as you could muster, or stand in the street and talk to groups of people.
Unicast was scalable and easily accessible - just pick up the phone, or write a letter. But the average person probably couldn't gain access to a significant multicast group without first gaining the "backing" of their group. And they could then attempt to grow from there. Clearly that's far from perfect as echo chambers are an issue today as well! But it does feel to me like the lack of friction, coupled with nearly instant access to a global, un-namespaced platform, and the ability to broadcast, has just removed a little too much friction. I guess the big challenge is working out how to add enough appropriate friction to improve the situation. Or do we need to revert to a more namespaced internet again?
Not disagreeing, but I do wonder how the first amendment sits with regards to no guaranteed audience - you can likely be free to say something, while still having others free not to amplify it indefinitely.
To an extent I think the big digital platforms already do this, just in a non transparent way, to try to curate the most engaging and click-worthy content, rather than the most thought provoking.
If that was a legal rule for the internet it would make it illegal for most people to communicate with each other. I don't think this idea is well thought through. That would actually be the government regulating speech and arguably freedom of assembly.
The first amendment also protects freedom of association. You don't have a right for people to hear you, but the government doesn't have an unlimited power to restrict your ability to actually reach people.
If government prevents people hearing you speak, that's a pretty serious curb on freedom of association.
You have the right to be heard by people that want to hear you.
Nothing says you have to be broadcast or promoted and nothing says you can impose on people who don't want to hear you, but the government telling people they can't talk to certain other people is not the same thing.
Also, in another comment you said "This is also great for hiding and suppressing organization of social uprising. This would have totally suppressed all the BLM content this summer."
Is this sarcasm and you are contradicting yourself or do you actually think it would have been great to suppress BLM organization and protests?
That's basically the same thing. For example, if the government made newspapers illegal, that would technically be "limiting the audience", but it would still be unconstitutional.
> I think the flaw is to compare old medium to today's technology.
Courts apply laws to new technology all the time. And in this situation, it is very obvious that the courts would rule government restrictions on social media, that make it illegal for people to broadcast on social media, would be unconstitutional.
> What can we try then
Well we can't ban people from using social media, or for broadcasting to many people. That would very obviously be unconstitutional.
So all content that is not in the "stupidity" classifier (who decides that?) also loses its chance to be surfaced? I think this needs to be thought through some more.
I've always thought that their algorithms are the biggest problem: someone else gets to choose what you see. (That's also why I think it's fair to go after their common carrier status if they do this.)
It would be better to be able to manage your own feed, things like chronological order, filtering, etc.
I'm not sure that would do much to rein in virality given the number of mega accounts there are. How much of Twitter is within n+2 of Donald Trump, for example?
Amazon.com, Inc. (which operates Twitch), ByteDance Ltd., which operates the short video service TikTok, Discord Inc., Facebook, Inc., Reddit, Inc., Snap Inc., Twitter, Inc., WhatsApp Inc., and YouTube LLC.
The FTC is seeking information specifically related to:
how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
how they determine which ads and other content are shown to consumers;
whether they apply algorithms or data analytics to personal information;
how they measure, promote, and research user engagement; and
how their practices affect children and teens.
>how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
Facebook can definitely figure out your race, and the race of whom you've dated.
I've told this story here before, but back in the early days of Facebook, I had dated a Korean girl for a brief period of time, and the moment we broke up via Facebook Messenger I started getting spammed with ads for dating websites featuring Asian women (maybe the algorithm wasn't advanced enough to figure out my ex was Korean specifically).
I think a lot of this is just posturing, I don't see Joe Biden being as harsh on social media as Trump has been. This might be a final hurrah against social media as conservatives have been very upset with them lately.
Doesn't each administration appoint the chairman? That seems similar to saying the Supreme Court is independent. Maybe on paper, but certainly not statistically or in practice.
The commissioners are appointed for seven year terms, there have to be at least two members from each party, and both Democrats voted yes on this action. So while you're right that it's not entirely independent, in general (and definitely in this specific case) it's not right to see it as an extension of partisan politics either.
I'll admit when I was wrong, let's hope this does lead to some actual change. I do think social media is absolutely cancer but I've sort of accepted it's going to be with us.
> there have to be at least two members from each party
Not true. There must be at most three members of any political party. Nothing requires at least two from any one party, and nothing in the law requires members of either major party. In practice it's always 3-2, with either D or R being 3 and the other 2, but that's practice, not law.
It could be 2 Republican Party, 2 Libertarian Party, 1 Constitution Party, under the law.
Of course, if a President nominated toward that pattern, even if they had a friendly Senate to confirm them, they'd need a majority strong enough to be able to hold together without defections on other legislation, because the breach of convention would be punished mercilessly by even the least ideologically opposed members of the opposite major party.
It is “independent” not of partisan politics, which is impossible for any organ of government, but independent of direct Presidential control despite operating within the executive branch.
Of course, but haven't Trump and especially the more rabid of his followers like Alex Jones and Steve Bannon proven why there is a need for at least some form of gatekeeping?
Someone needs to filter out demagogues from public discourse. Otherwise you'll end up with a population where large parts believe that the election was "stolen" or "fraudulent", with all the consequences (like the potential for open, massive violence) that entails.
I'd like to agree with you since I dislike the current president, problem is the gatekeepers have used 4 years to prove they are almost as bad:
from blatant misquoting of Trump during the whole Russia-collusion saga on one side, to mindless glorification of Trump and misquoting of Biden on the other, they've (almost?) all managed to make fools of themselves.
Quite a lot of Trump associates were jailed for Russia-related activities. The only thing that was never proven was collusion between Trump himself and Russian agents.
"While the Senate report established broad bipartisan agreement about what happened in 2016, Democrats and Republicans could not agree whether those facts added up to collusion between the Trump campaign and the Russian government."
... Obviously the republican side were never going to conclude there was collusion, but the alleged go between Manafort did indeed go to jail.
You call it making a fool out of himself, I call it winning the Republican primary, getting 63 and later 74 million people to vote for him, winning one election, and coming within 100,000 votes of winning a second one, despite being a walking embodiment of anti-democracy. If it weren't for COVID-19, he might well have won.
You shouldn't be laughing at him, and people like him. You should be scared of them.
Yes, that is the country we live in, where someone like Trump can actually get that many people to vote for him. It's mind blowing to me and very shameful.
I agree with a sibling comment written by vkou, but I also want to add that Trump's social media strategy is to make people believe that they are smarter than him. He leverages this strategy to get consistently free advertising on pretty much all of the news agencies of the US. News agencies leverage him as a cash cow, because people love to read about stupid things done/said by public people, hence more ad revenue.
I don't think you need to be Trump or a Trump supporter to find personalized ads offensive and unwelcome. I think there's pretty bipartisan agreement it's getting a little out of hand. For instance, when I was under 21, I often got ads for alcoholic beverages, even though those services knew my exact date of birth. Why was I given those ads? At the time I had no way of learning why.
I'm with you on adblockers at least. I believe it was mobile YouTube ads, which I've seen be highly personalized before. I do remember the video was one that minors were more likely to watch (gaming). My concern is that the alcohol companies were actually targeting an underage demographic, which is actually not illegal, since their industry "self regulates" its advertising.
Interesting that they have Discord there. Its monetization strategy has been quite different from every other entity listed: so far it's been mostly trying to sell stuff to their users.
Pretty much every other company they list displays ads. Even WhatsApp now has plans for ads.
Maybe they think Discord will provide an example of proper corporate behavior, so they'll have answers for the "but we have to make money somehow!" and "everybody's doing it!" canards.
I'm also curious why they decided to include Discord, considering its monetization strategy so far has been to sell stuff to users, instead of selling their users.
I'm also a bit surprised about the inclusion of Discord here. Not only is their monetization different, but their daily userbase is pretty small compared to the rest of the list (I believe Reddit would be closest).
It makes me wonder if there are some concerns about what Discord is collecting/using in particular.
My totally unsubstantiated gut feeling is that they might suspect that Discord harbors more "radicalizing" potential, compared to other social networks. Probably because discord servers are private, self-moderated, and demographically very skewed towards young men (since it's commonly used for gaming chats). I know that some of the more "troublesome" subreddits use discord to discuss sensitive topics without worrying about reddit admin censorship, so communities like those could be causing a bit of FUD around Discord as a whole.
These social media companies are the most effective channels for any organization to reach the widest audience. Which validates why the FTC has social media accounts and why the FTC is investigating them.
The goal of FTC order P205402 is stated as assisting “a study of such policies, practices, and procedures”. With that in mind, the somewhat arbitrary and limited choice of entities[0][1], hard deadlines[1], and scope of requested data[2] to me personally somewhat make sense.
[0] Given it’s a study, it seems overwhelmingly likely that the results will be extrapolated and generalized to the rest of the industry.
[1] The more time passes, the less representative the results would be. If there were more than nine entities chosen, a drawn-out review process would be essentially an advance warning.
[2] Yes, they are asking for a lot, but reporting being unable to provide certain type of data should be useful for the study still as absence of certain type of data is a meaningful result in itself.
I don’t have a strong opinion on other aspects (and it’s definitely no small feat to gather all the data), but at first approach—assuming good faith on the side of FTC—this order looks akin to a quiz of a random selection of school students with the aim not to praise or reprimand individuals but to assess the education system in general. Given enough advance notice, students would be drilled to ace specifically this quiz, negating its point.
As much as I despise social media and the effect it is having on the general population, after reading the (50-page) request and the dissent, this request by the FTC is completely bananas.
This is essentially how it reads:
"For each social media and streaming service, provide an audit of all electrons passing through or near the service. Account for the precise position & momentum of said particles, the method by which they are tracked, and the person or persons responsible for auditing bias of said electrons against members of the current political party. You have five days to complete the report."
I would really hope there is more to this. Social media companies are private entities, and as such can be as deliberately biased (or not) and as full of censorship against other persons or groups as they want. More so, they can openly declare such bias against their best interests without risk of immediate consequences thanks to Section 230. They are protected by law.
I really wish the current administration would stop crying about the big bad world and how unfair it is. This only appeals to really insecure people. If they really wanted to “fix” the problem the appropriate regulatory body could provide new regulation against activity not already protected by legislation.
So pathetic. Despite the vast amount of documentation the FTC demands (which seems wildly overbroad), I can answer their question in pretty complete substance here - for free.
1) how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
Everything is tracked, and they have a very good idea of your demographic information. They track all interactions on the platform in giant data lakes, including clickstream activity showing sequences of actions; they collect all browser features, IP addresses, etc.; they collect all referral information. If their code is used by other websites, they collect all the clickstream and profile data from those services too. They use publicly available datasets, census tract information, and datasets the government provides, such as DMV databases and voter registration files, to enrich the data. They may synthesize what are called "features" from this data.
This is used for social media updates, fraud control and ad targeting among other purposes.
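To make the "feature" synthesis concrete, here's a toy sketch in Python. Every event field and feature name here (`page_category`, `interest_*`, `night_owl`) is invented for illustration and doesn't correspond to any real platform's schema; real pipelines do this over data lakes at enormous scale.

```python
from collections import Counter

def derive_features(events):
    """Toy sketch: synthesize profile "features" from raw clickstream
    events. Field and feature names are hypothetical."""
    pages = Counter(e["page_category"] for e in events)
    total = sum(pages.values()) or 1
    # share of activity per content category becomes an "interest" score
    interests = {f"interest_{cat}": n / total for cat, n in pages.items()}
    # coarse behavioral features derived from timing metadata
    interests |= {
        "night_owl": sum(1 for e in events if e["hour"] >= 23 or e["hour"] < 5) / total,
        "event_count": len(events),
    }
    return interests

events = [
    {"page_category": "gaming", "hour": 23},
    {"page_category": "gaming", "hour": 1},
    {"page_category": "news", "hour": 14},
]
features = derive_features(events)
```

Multiply this by thousands of event types and external enrichment sources (census tracts, voter files) and you get the profiles being described.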
2) How they determine which ads and other content are shown to consumers;
Based on machine learning models that look at the quality and performance of ad copy relative to folks who have been a) targeted by the advertiser by specific criteria; they then b) develop an estimate of which other users would also be interested, and their level of interest, based on all the data points they maintain in the user profile, scored along an economic axis and against user engagement and satisfaction targets (i.e., all ads vs. no ads).
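The "which other users would also be interested" step is the classic lookalike expansion. A minimal sketch, assuming made-up feature names and a plain cosine similarity in place of the real (far more complex) ranking models:

```python
import math

def lookalike_score(seed_profiles, candidate, features):
    """Toy lookalike expansion: average the advertiser's seed audience
    into a centroid vector, then score a candidate user by cosine
    similarity to that centroid. Feature names are hypothetical."""
    centroid = [sum(p[f] for p in seed_profiles) / len(seed_profiles)
                for f in features]
    vec = [candidate[f] for f in features]
    dot = sum(a * b for a, b in zip(centroid, vec))
    norm = (math.sqrt(sum(a * a for a in centroid))
            * math.sqrt(sum(b * b for b in vec)))
    return dot / norm if norm else 0.0

FEATURES = ["interest_gaming", "interest_news", "night_owl"]
# advertiser-supplied seed audience (e.g. existing customers)
seeds = [
    {"interest_gaming": 0.9, "interest_news": 0.1, "night_owl": 0.8},
    {"interest_gaming": 0.7, "interest_news": 0.2, "night_owl": 0.6},
]
gamer = {"interest_gaming": 0.8, "interest_news": 0.1, "night_owl": 0.7}
reader = {"interest_gaming": 0.1, "interest_news": 0.9, "night_owl": 0.2}
# the gamer should score closer to the seed audience than the reader
```

In production this score would then be traded off against bid price and the engagement/satisfaction targets mentioned above.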
3) Whether they apply algorithms or data analytics to personal information;
Yes - they do on a huge scale, and AWS / Google and others are busy designing custom chips to help speed up this type of analysis.
4) How they measure, promote, and research user engagement; and
They make their platforms as addictive as possible, with notifications and stimulation. A popular book at these companies might be "Hooked: How to Build Habit-Forming Products", if the FTC actually cared about being educated at all in this space.
5) How their practices affect children and teens.
They affect them significantly: children are much easier to "hook", be it with computer games, e-cigs, social media, sugar, or lack of sleep. The effects of all of these are very negative. Parents can have a powerful impact on these items.
Does anyone think social media companies do not use personal information and monetization of that to provide their hyperscale "free" services?
TikTok already released details of its recommendation algorithm [1], so it's out there in the public domain and freely accessible. I'm not a lawyer, but some of the information the FTC is demanding consists of these companies' trade secrets, and they are not willing to give them up just like that.
> Commissioner Noah Phillips, who voted against the study, issued a separate dissent.
> The orders are "an undisciplined foray into a wide variety of topics, some only tangentially related to the stated focus of this investigation," Phillips said, arguing that the probe was a waste of FTC resources and would not provide the public with valuable information or address issues of consumer privacy.
I would love to read YouTube's big reveal on how they are targeting children. It does not take an expert to see it: just search up any kids' show and wait for the ads to come in. Should kids be targeted by any ads? I don't know, but I feel they probably shouldn't. Even if they should be shown ads, I highly question the type of ads I have seen while watching shows with my kids. Honestly I can't think of a single example ad off the top of my head, but while watching clearly kids' shows I have seen enough that I have often thought "what the hell, YouTube". A part of me believes they want the ads to be offensive to the parent so they consider the premium account.
I feel bad for the people that will inevitably need to cancel their time off work in one of the worst years on record to be able to meet this 45 day deadline.
>HN should have a submission statement, that’s pinned at the top of the comments. So we can review that without visiting the source site.
No, stop being lazy or paranoid or whatever and just RTFA. The entire premise of this forum is to engage with the content presented and to discuss it, to gratify intellectual curiosity. Pernicious habits like only reading the title or wanting a CliffsNotes version just suck what little soul this place has left out of it.
Why is the FTC cranking up activity in the waning days of the administration? Is there a sense that Biden won't investigate these companies and they want to kick off the investigations before he takes over?
I wonder how Trump reconciles this with his promise to severely reduce regulation?
If it was initiated by Congress, it would not be surprising: the GOP-controlled Senate is not nearly as anti-regulation as Trump. (And certainly the Democratic-controlled House is very pro-regulation.) But the FTC is (typically) under direct control of the administration, at least to the extent that its leadership is appointed by the White House.
And this is happening under Trump, folks. He doesn't seem to be making a fuss of it either way, and he's probably had enough time to install his people in the FTC, so I wonder whether there's a political angle to this.
There is literally no mention of 'FTC' on either CNN or Fox landing pages right now, which is a little disturbing given the day's events.
If someone has material information to add about the real background on this that'd be great.
You'll notice they've also neglected to mention the 250M people on strike in India right now and for the last several weeks. Certain reports do not harmonize with the corporate narrative.
American news outlets generally don't report on issues outside their borders unless it's relevant to them.
But as of late, there's been a lot of noise about the FAANGs, and even some political issues around content; it's just odd that there isn't more widespread coverage.
I'm really curious as to the underlying impetus for this, it's not clear to me.
Zuckerberg at 'hearings' being grilled by GOP Senators about ostensible 'bias' - that is understandable, at least politically.
> American news outlets generally don't report on issues outside their borders unless it's relevant to them.
American news outlets generally don't report on anything that they aren't getting press releases and phone calls about from companies and organizations that they're friendly with. They only do it if so many other outlets are reporting the story that they'd look bad, and then they just copy the story directly from those other outlets.
They don't report negatively on sponsors, other parts of their very large and diverse companies, or any other companies that their owners or important employees are associated with, unless forced. If forced, they copy what has been reported and add nothing but mild apologia and FUD.
American news outlets generally don't report on issues outside their borders unless...
...they think there's a chance to start a war. Compare the coverage of protests in India to those in Hong Kong. In India they want the government to coddle big business and rich people less: zero coverage. In Hong Kong they wanted to embarrass Chinese communists: wall-to-wall coverage. Often we can see this phenomenon in a single nation. When a few relatively wealthy capitalists protested the duly elected government of Venezuela a couple of years ago, we heard all about it. Two days ago Maduro's government won reelection in a landslide, and we haven't heard a peep. The fact that large majorities of Venezuelans support their current government makes it more difficult for us to start the war that John Bolton wants, so that won't be mentioned on CNN.
The election was widely discredited as fraudulent. Remember, Putin has 85% approval ratings, and if China held an election literally tomorrow, Xi would win; after all, nobody is allowed to say anything about him.
But it's moot: the US President is Tweeting something and there's a court saying something about the elections which is 100x more important than anything about Venezuela, at least in America.
It'd have to be the slowest news day in the US for news about any kind of strikes to make the US news. It's not on the Guardian, BBC, CBC either.
The observation is not that certain nations are not covered; that would be pointless. The observation is that those nations that are covered, are only covered when coverage increases the chance of massive USA "Defense" Department spending.
Kind of weird to see WhatsApp on that list but not Netflix. I wonder how much Netflix has managed to stave off scrutiny simply by creating the Social Dilemma documentary.
It's a list of companies which do not charge end-users. Netflix requires a subscription to view anything, complies with regulations about region-specific copyrights, and has zero deals with third parties.
Their business model is simple. They're TV, on the internet. They buy or make tv shows, and display them to you for a fee. They're not a "platform", they're just a cable TV service.
We all pay for Amazon Prime, and yet Amazon is on there.
Netflix is a video streaming service which will "collect, use, track, estimate, or derive personal and demographic information". It will "determine which ads and other content are shown to consumers". It will "apply algorithms or data analytics to personal information". It will "measure, promote, and research user engagement". And "their practices affect children and teens".
Netflix is responsible for an immense amount of screentime, and its algorithms are designed to maximize that screentime to reduce subscriber churn.
From the order: "Identify each Social Media and Video Streaming Service provided or sold by the Company". It doesn't seem to be limited to only social media sites; it also includes any "Video Streaming Service". The broadest interpretation I could come up with from this statement would also include the cloud video streaming/transcoding offerings Amazon provides.
Amazon is on there because they run an affiliate ads system over millions of sites, and are a marketplace for other people to sell their goods. In comparison, Netflix's operations are very simple.
Uh why would it be on there?
It doesn't have ads, and no user generated content. Boundlessly maximizing individual engagement is hardly in their interest, whereas with the ones on the list it's literally the entire business plan.
This comment is baffling... Netflix isn't social media. It knows far less about its customers because your interactions are limited to choosing what to watch.
I don't even know that documentary you mention, and I can assure you it has nothing to do with this decision whatsoever.
(Because it's really a rerun of West Wing that got the FTC into high gear here. "Hell yeah, let's do some governing", someone shouted, and they all started briskly walking)
WhatsApp is a messaging service, which would barely fall under "social media". Netflix is very clearly a "video streaming service"; it's its core business, more so than for Amazon.