> Sen. Mark Warner of Virginia told BuzzFeed News that the revelation of a Facebook employee being bribed to reactivate scammy ads was further evidence of the unaccountability of platforms and the corruption endemic to digital advertising markets.
I'm confused. If anything, this seems to me like evidence that Facebook is holding people accountable for violating regulations on the platform.
This would carry more weight if Facebook had discovered it on their own, but it was an outside investigation that uncovered it. Firing people once they're outed by an external agency is pretty weak accountability for their platform. And note that this was a contractor, who should have had higher scrutiny than a full-time employee, but I know from personal experience that it's often the opposite: the contract company signed a contract saying they'll abide by all of the rules, so no need to monitor them; after all, they signed a contract.
A company spokesperson confirmed that an unnamed worker was fired after inquiries from BuzzFeed News sparked an internal investigation.
> “For over four years, I have raised concerns to the [Federal Trade Commission] that behavioral advertising markets are rife with fraud – not just in the form of clickfraud but, exploiting the scale of large platforms, in scams and criminal schemes that directly exploit American consumers,” he said in a statement. “Because of Section 230, neither the victims of these schemes nor state [attorneys general] can seek to hold the platforms accountable for their continued facilitation of these frauds.”
Which suggests that he does not want to rely solely on Facebook policing itself for accountability.
For a Senator (or any other public figure), the Constitution and First Amendment case law already do that, and in a superset of the circumstances to which Section 230 applies, so Section 230 has negligible impact. And Warner isn't (unlike, say, Donald Trump) particularly litigious about the kinds of things people say about him, whether or not 230 is applicable, so there's no reason to think he'd be seeking every minute fraction of legal advantage on that kind of issue.
I never understood why firing is seen as sufficient punishment for bad actors inside a company. There are so many situations where the payoff is still much higher even if you get fired.
It sounds like commercial bribery, which is illegal in most states. Sometimes the federal mail and wire fraud statutes are also used to prosecute such things.
Yes. For taking money / profiting. They didn't do it by 'mistake', or through incompetence.
How do you think the mafia operates in Italy? They infiltrate their soldiers into every part of the government (especially local government), and even private companies where they have interests.
In this specific case, you might be able to throw in conspiracy to violate the CFAA.
Using new laws:
I would be perfectly happy to have a "corporate espionage/bribery" law that made it criminal to accept money personally to change your behavior as an agent of a company. I would prefer it if that law only applied when you and the company opted into it via a contract to minimize disruption to any existing schemes where that is considered acceptable behavior.
Until, of course, being hired by the competition is seen as "corporate espionage" and you've suddenly made a noncompete something built into every employment. Somehow we've succeeded in making the US even more dystopian.
> when drafting the law basic care needs to be taken
You are right, and this was more or less my point. Drafting a law is hard work, but the kind of handwaving armchair advice you often see, to "just make a law for this", does not recognise how hard it is. It's even harder to do in a way that is right for everyone; and with the political landscape being the way it is, and lobbying working the way it does, it's far from a foregone conclusion that such an effort will yield the desired result.
I want a law that would make Facebook liable for the actions of their staff. The goal of such a law would be to push the company to implement measures that leave individual bad actors very little power to do bad things. In this case, simply requiring that several independent people (maybe in different offices) review a case before unbanning an account wouldn't be completely unreasonable.
> simply having a requirement that several independent people
You're asking for government mandated bureaucracy.
Here's how I've seen this play out in the context of the world's largest bureaucracy, the Department of Defense.
Let's say the law requires someone in a powerful position, like a VP, to approve the un-ban request so you can hold someone "accountable". The following chain of events will occur:
1. The VP will be inundated with un-ban requests and become increasingly annoyed at requests for bans that should obviously remain in effect.
2. The VP will delegate to a subordinate the job of checking that each request is sufficiently nuanced to require the VP's full attention.
3. Repeat steps 1 and 2 until you reach a management level that's too over-burdened to add another review layer. You should have 2-4 layers of review at this point.
To properly route a request, each layer queues the request and runs some cheap processing and sends it up the chain where the process repeats. We can borrow some concepts from networking to model this process:
- buffer bloat: queues are typically unbounded in the real world. Requests are rarely dropped. Instead they accumulate.
- back pressure: the VP goes on vacation. All requests will pile up at the previous layer.
- latency: certain days have exceptionally high latency, notably Saturday, Sunday, and most parts of Friday.
- network partitions: often the transport layer (usually an intern to shuffle paperwork) will become unavailable as the scheduler (boss) repurposes the intern to higher priority tasks.
With a bit of tuning, a well-established bureaucracy can increase latency from minutes to months per request.
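The layered-review dynamic described above can be sketched as a toy model (all numbers hypothetical, purely to illustrate how latency compounds per review layer):

```python
# Toy model of a layered review bureaucracy. Each layer holds a FIFO
# backlog, so a new request waits behind the existing queue at every
# desk before moving up the chain.

def total_latency_days(layers, queue_depth_per_layer, reviews_per_day):
    """Days for one request to clear every review layer."""
    per_layer_wait = queue_depth_per_layer / reviews_per_day
    return layers * per_layer_wait

# One reviewer, modest backlog: requests clear in about a day.
print(total_latency_days(layers=1, queue_depth_per_layer=20, reviews_per_day=20))   # 1.0

# Four layers, each with a 100-item backlog and 10 reviews/day: 40 days.
print(total_latency_days(layers=4, queue_depth_per_layer=100, reviews_per_day=10))  # 40.0
```

Buffer bloat and back pressure make this worse in practice: the backlog per layer grows while the VP is on vacation, and the model above assumes nothing is ever dropped.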
I’d want that extended to include joint responsibility of the highest level employees who were aware of criminal acts or fraud. After all the common refrain justifying huge incomes for high level employees/execs/board members is the “risk” in their jobs.
That risk should include personal responsibility. As long as the only people who suffer meaningfully are normal ICs, customers, and small shareholders there’s no actual reason for the boards to change behavior. Fines are passed on as increased costs (PG&E), reduced wages (every company), etc - I’m sure once individual executives can be sent to jail for allowing criminal acts the behaviour will change. (Until they buy a law to remove their liability)
It seems like every couple of weeks we see a founder or professional locked out of their accounts on some platform they depend on to make a living. Until that problem is solved I don't think we want to make it harder than it is to unban someone.
Likely the bad actors would still find a way around it, too. What's a little criminal collusion amongst coworkers for profit?
I mean there’s a real problem in that many people are arranging their livelihoods around single platforms.
As much as I hate Crowder, for example, he ensured that most of his income comes from advertising his own store and products, so being demonetized for repeatedly violating YT policies didn't actually cost him much.
The corporation itself is avoiding any culpability or liability for the offence, enacted by its staff, on its systems, through its capabilities, in its name. And to its (partial) benefit. By virtue of its own failed or absent oversight and detection mechanisms.
The phrase "corporate veil" is a metaphor describing how someone or something is getting away with something by hiding behind the corporation they are part of. Like, the corporation is acting as a veil, over whoever is responsible.
A business corporation generally is a risk-avoidance structure.
What the "corporate veil" generally protects are investors -- only the actual amount invested is at risk, rather than more, or all, of an investor's assets.
The veil can also be used to shield specific executives, another common complaint.
But a third mode is when a sacrificial scapegoat, often relatively low on the hierarchy, is identified and blamed for problems. That shows up in government as well, as the "one bad apple" excuse, which both fails to address true accountability and justice, and massacres the metaphor, which is "one bad apple spoils the barrel".
A lone actor should not be able to behave in such a manner, and is quite probably not acting alone. The oversight, detection, and cross-checks which should be required to be in place clearly aren't. That would include the individual's business unit and management chain, as well as the company as a whole.
Note that I didn't use the term "corporate veil", and I'm not entirely certain it applies here (see one definition: http://www.businessdictionary.com/definition/corporate-veil....), though in the sense of shielding the larger part of the corporation and individuals within it, the argument could be made.
Understanding business as a sort of "risk shedding engine" may help. The corporate veil is one mechanism for this, but another is the creation (or post-incident assignment of) what's effectively an ablative heat shield -- some component of the corporate structure, often a single individual, up to and including a CEO, though business units, subsidiaries, contractors, or largely-controlled corporate charities and trade organisations are also used -- which can be shed or discarded as needed.
So the "turnaround CEO", the management consultant organisation, the "rogue employee" (anywhere from the front line to the executive suite), the subsidiary, the spin-off, the "charity" or "trade organisation", all fit this bill.
For CEOs, see Albert "Chainsaw Al" John Dunlap (obituary: https://www.nytimes.com/2019/02/05/obituaries/al-dunlap-dead...), or Martin "Pharma Bro" Shkreli. An argument could be made that many major politicians operate in this mode; the argument might be made for a Boris Johnson, Mitch McConnell, or Fritz "The Senator from Disney" Hollings, who serve as the public exposure of their respective interest groups. "Trade organisations", particularly those with an enforcement arm, such as the MPAA, RIAA, and BSA, largely represent firms in the cinema, music recording, and software industries, respectively. Various "think tanks" such as those in the Atlas Network (https://www.atlasnetwork.org/partners) allow specific interests, usually business, industrial, and generally the wealthy, to engage in activities at a slight distance. Many of the Atlas partner organisations are strongly associated with the Kochs, Scaifes, Bradley, Searle, Walton, DeVos, and others. (See: https://www.sourcewatch.org/index.php?title=Atlas_Network, https://www.sourcewatch.org/index.php?title=State_Policy_Net..., and related articles.)
In this case, Facebook are avoiding corporate liability, legal risk, and goodwill erosion by blaming a "rogue employee". That strikes me as an incomplete fault analysis, and one that's overtly and obviously self-serving to Facebook, its management, and shareholders, most of which are synonymous with Mark Zuckerberg.
See related: Fujitsu's involvement in the UK Post Office's highly-flawed "Horizon" accounting system, responsible for destroying multiple lives and careers over two decades:
In an effort to make sense of the phrase, what you are saying seems to boil down to "it's a veil as in the veil is protecting the investors". Who thinks the root cause was a conspiracy among the shareholders of FB?
That's not quite what I'm saying. Or possibly more accurately, I don't think that's what I'm saying.
1. I'm still working through my thoughts on this.
2. The key point is not "corporate veil" but "legal and operational concept of corporate structure as a risk externalisation engine".
Under that second, the "corporate veil" is a part, but not all, of the externalisation mechanism. And would make the short response to your reframing: "No".
More subtly, the externalisation need not be a deliberate strategy (a conspiracy), though that probably is often the case. There are emergent phenomena and behaviours, and given that risk externalisation is, in both the short and medium terms, generally profitable (that is, it decreases costs and increases revenues), there's a natural self-selection among firms, managers, and behaviours toward such structures and behaviours, and toward those who follow them, consciously or not.
If you're looking at specific legal or risk concepts, you'll probably want to examine the notions of moral and morale hazard, attractive nuisance, negligence, malfeasance, and the like. The notions of willful ignorance and motivated reasoning as well.
These issues get less play than they should, and comprise a major weakness to the market-capitalist model. Though of course they're also present in other organisational models of central control. Organisational models which are immune or resistant to centralisation and concentrations of power, ownership, and/or control would probably fare better, though these are difficult to arrive at and sustain.
The reason this challenges the general notions of markets includes both generally understood principles, and possibly some that are novel, or at least less considered:
- The correspondence between wealth and power, best captured in Smith's uncharacteristically succinct quip in Wealth of Nations: "Wealth, as Mr Hobbes says, is power."
- The dual problems of principal-agent conflicts and regulatory capture. Though often viewed independently, I see these as largely the private- and public-sector variants of the same underlying behaviour: individuals acting for personal gain rather than institutional benefit. Corruption, generally.
- Classic informational asymmetries: at a given point in time, two (or more) agents having unequal amounts of information concerning a transaction or state of affairs. Akerlof, "The Market for Lemons".
- Temporal informational asymmetries: the development of fuller understanding, particularly as concerns unforeseen consequences, emergent phenomena, or latent (as opposed to manifest) properties or aspects, over time, available to all agents (though also often with an imbalance between agents). Robert K. Merton, etc.
- The risk-immunity of size. If an organisation has both resources and cost or operational structures to survive negative circumstances, then in a period of contraction, less-capable organisations will fail whilst the larger survive. Size does not always correspond to the capacity to absorb risks, but often does.
- Motivated asset inflation or value assurance. The tendency of those holding some valuable or income-generating property or system, to seek to further appreciate its value in ways that reduce social wealth growth. Bernhard J. Stern's "Resistances to the Adoption of Technological Innovations" (1937), and NIMBYism, are key examples.
- Various blame- and liability-shifting practices, including as described above. NDAs, non-competes, anti-poaching practices, and the like, would be others.
- Practices generally seen as immoral, unethical, or illegal: coercion, product bundling and tying, exclusive dealing, product dumping, and the like.
Sorry that's not a short answer, though I feel it's more accurate.
This isn't anything but the company firing someone for their own reasons.
The issue at hand is a) us knowing that fraud was perpetrated and b) appropriate jurisdictions being able to try that individual for fraudulent actions.
Neither is present today. Facebook polices its own employees and if those employees were important enough you'd bet they would not release that information nor hold those employees accountable.
How about more preventive measures rather than just reactionary ones? Maybe stronger security measures that make it much harder if not physically impossible for one person to do much damage in the first place?
As for prosecution (whether civil or criminal) - I don't know, if there's a law that can enforce it, should that not be on the table? And if there isn't, then perhaps there should be?
It shouldn't be, but it's not Facebook's job to prosecute criminal bribery and embezzlement charges. They should and probably would refer the employee to the authorities, but that's the full extent they can go.
> I never understood why firing is seen as sufficient punishment for bad actors inside a company.
It's seen as sufficient, often, because:
(1) even if a civil cause of action is available, the cost of litigation far exceeds any plausible benefit to the company, and
(2) The company can't direct criminal prosecution, and the high bar of proof and other competing prosecutorial priorities mean that even if the conduct the company believes occurred, which justified the firing, would also be criminal, it often won't meet the criteria a public prosecutor will apply before prosecuting.
> There are so many situations where the payoff is still much higher even if you get fired.
Maybe, but that doesn't mean that there is a correction to that circumstance which doesn't itself have social costs that outweigh its utility.
A very cheap way to appear to be holding yourself accountable, that has been used since the dawn of civilization, is to offer a scapegoat as a sacrifice to the gods, then you can carry on as usual.
What incentive is there for this company to continue to behave, now that you have been pacified with this token gesture? Laws are stronger incentives because even though you still can't see what the company is doing, they are a real deterrent. Because the company knows that a disgruntled employee might one day blow the whistle if they carry on as before...
I think this speaks to how easy it was for people and organizations to directly contact people inside Facebook to do things at their bidding. FB does hold their staff accountable, but the Senator wants more safeguards so that it is very difficult for such an event to be carried out successfully.
Reactive versus proactive, which is what I believe the Senator is speaking to. If Facebook were proactively monitoring for this the way most organizations monitor PII, they would be more accountable.
In this case it appears Facebook held the person accountable only after journalists started asking questions. That suggests Facebook either didn’t know what was happening or (less likely) didn’t care. Any system that relies on reporters to identify policy violations is weak to nonexistent.
In this case accountability would have involved periodically reviewing banned accounts that had been revived and making sure there was a valid reason to revive them.
> “Ya,” Burke replied, punctuating his message with the sack of money emoji.
Heh, amateurs. Let me tell you guys a story from my country, from the '90s. In those years cell phones started to become the norm. You know, those Nokia brick types (which had about a week of battery before requiring charging), but the service worked like this: you would call your contact, and around 4 or 5 rings gave you plenty of time to end the call; after that, a voice mail message would inform you the contact was not answering and invite you to leave a message. Problem was, if you got that far, you'd pay for those seconds once the automated voice mail started, just as if your contact had answered. So far so good. But one CEO of a cell company decided he wanted money. So, randomly, he would make the automated voice mail message kick in after just one ring. Now, individually that was not expensive, a few cents per subscriber, but across the whole network it meant that for every hour this trick was pulled the company would make a million dollars (yes, you read that correctly).
I'm not doing the math, but I do recall that time on those phones was _very_ expensive. Dollars per minute.
And, in those days if you didn't press the 'end' button to terminate the call, regardless of the other party hanging up, it kept an open line. I remember my dad getting several hundred dollar bills related to saying 'bye' and tossing the phone onto the passenger seat.
An Eastern European country, so when we finally broke away from Russia we moved very fast to adopt Western technologies. Coupled with the fact that during the Communist era you waited years for a fixed phone line to be approved, this made cell phone penetration extremely fast. People were starving for communication, so it didn't matter that we barely had food; we still wanted cell phones.
And not tens of millions; just ten is enough. Call your family and a few friends under this trick and oops, you're due an extra dollar at the end of the month.
I said tens of millions since you said it was a few cents for each subscriber, but one million dollars for every hour. (Perhaps I did not read that correctly?)
Yeah, I meant that each time you got that voice mail you'd pay a few cents. If my memory serves me correctly, it was 5 cents for the initial 20 seconds and then 1 cent for each 10 seconds after that.
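Going by that remembered tariff (5 cents for the first 20 seconds, then 1 cent per started 10-second increment; the numbers are the commenter's recollection, not verified), a quick sketch of the billing:

```python
import math

def call_cost_cents(seconds):
    """Cost of a billed call under the remembered tariff:
    5 cents covers the first 20 seconds, then each started
    10-second block after that costs 1 cent."""
    if seconds <= 0:
        return 0
    cost = 5
    if seconds > 20:
        cost += math.ceil((seconds - 20) / 10)
    return cost

print(call_cost_cents(10))  # 5  - a one-ring voicemail pickup still bills the minimum
print(call_cost_cents(60))  # 9  - 5 cents + 4 extra 10-second blocks
```

So even a call that only reaches the voicemail greeting bills the 5-cent minimum, which is exactly what made the one-ring trick profitable at network scale.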
As a side anecdote, cell companies started a war among themselves, with different features to lure customers from one network to another. One of those features, at one time, was that the initial 3 seconds were free, with the normal fees kicking in afterwards. So here I was, a student on a university campus where absolutely every single student had a cell, and you'd call your friends something like: "Hi X, it's me Y" and click, close the phone. Then call "I need this course" and click, close the phone. And so on and so forth until you'd finished your conversation. At the end of the month, when the cell company issued the bill, you'd receive a very thick envelope with something like 50 pages in it, and the majority of rows would read: <destination number> - <begin time> - <duration> - <price>, where <duration> would be something like 1 or 2 seconds and <price> would be "free". It didn't last, because the companies understood they were losing money by printing those pages, so the feature survived only a couple of months, but oh boy, that was fun. Good times.
OK, it's not surprising that rogue insiders – whether a regular ads employee, a manager, or a site-reliability engineer – can attempt this kind of tampering. It is surprising that FB – which, if anything, is excellent at data and tracking – seems to lack basic access controls and detection mechanisms to prevent this. According to BuzzFeed, FB only launched an internal investigation after BF asked questions about this employee:
> A company spokesperson confirmed that an unnamed employee was fired after inquiries from BuzzFeed News sparked an internal investigation. The employee in question was based in the company’s Austin office, according to information obtained by BuzzFeed News.
The article doesn't say exactly [0], but potentially a BF reporter might have noticed that scammy ad accounts that BuzzFeed previously reported on [1] – which FB then banned in response – were inexplicably active again. But why doesn't FB have an internal flag/bit for these egregiously scammy ad accounts (and their unique identifiers) in their systems? A flag that would trigger an automated notice/audit when that account was reactivated for any reason, by any employee?
[0] It's possible the article under discussion was completely sparked by insider leaks to the BF reporter, e.g. "chat messages obtained by BuzzFeed News". I'm just saying that it's possible that the rogue activity potentially seems to have been observable by any outsider, i.e. completely observable and preventable if Facebook had proper access control.
> OK it's not surprising that rogue insiders – whether it's a regular ads employee, manager, or a site-reliability engineer – can attempt this kind of tampering.
In this case, it was a "contractor". Next time, it will be an "intern" etc etc.
Actually this story did make me think about an even older article about Google [0]: in 2010, a site reliability engineer was fired after allegedly spying on teens' Google accounts. At the time, it really underscored to me how much power – and potential to abuse it – insider employees have. Particularly site reliability and security engineers, who inherently need to have this kind of power and access to do their jobs:
> "If you're an SRE, for instance, on Gmail, you will have access to mailboxes because you may have to look into the databases," the former Google SRE—who did not work with Barksdale—explained to us by phone. "You'll need access to the storage mechanisms," he explained, pointing out that in order to determine the cause of a technical issue with Gmail, an SRE might have to access emails stored on Google's servers to see if data is corrupted.
I'm assuming that in the years since, Google's SREs have mechanisms in place to audit and log the use of sudo-level access and powers. Twitter, on the other hand, seems not to have implemented this, at least as of 2015 [0].
I can imagine FB having decent auditing and logging when it comes to internal access of private data of user accounts. But not when it comes to ad accounts, considering the level of scammy activity these ad accounts had when BuzzFeed News reported on them [2]
Also surprised. I've been involved with designing and implementing "customer service portal" functionality for a couple of services that were quite small, and yet this is something we very much considered. For example we added rate limiting such that even though a CSR could override user account state via the portal, they couldn't do it that much without triggering a block on their ability to do it again.
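The rate-limiting idea from that customer-service portal can be sketched as a sliding-window limiter (the window size and threshold here are made-up numbers, not the actual values used):

```python
# Each CSR may perform at most MAX_OVERRIDES account-state overrides
# per sliding window; exceeding it blocks further overrides until a
# human reviews and resets the block.

from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_OVERRIDES = 5

recent = defaultdict(deque)  # csr_id -> timestamps of recent overrides

def allow_override(csr_id, now):
    q = recent[csr_id]
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_OVERRIDES:
        return False  # tripped: escalate to a supervisor
    q.append(now)
    return True
```

A CSR who legitimately handles a burst of tickets gets a supervisor unlock; a bribed one trips the alarm on the sixth override instead of quietly reactivating accounts all day.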
It's common for these tech companies to set different ethical standards for paying customers.
... as another example, Indeed does not action/review companies that have been flagged as having inappropriate postings (e.g. MLM scams) as long as they have actively billed job postings.
This is evidenced by the fact that when such a company removes their automated payment and/or pauses all job listings, all the flags then get reviewed and actioned (speaking from personal experience - not MLM, but commission based job listings).
Note that bicycle helmets are not very useful at improving outcomes, especially for cycling within cities. MTB or something completely different likely has a different risk profile, and helmets may be a clearly good idea there.
I hope that pedantry about cycling safety aside, my point is clear. Feel free to swap out "cycling without a helmet" for any other risky activity you might observe day to day if you pay attention to the world around you.
Why do people break the speed limit on wet roads just to get home a minute sooner, when they could slide off the road into a ditch and never make it home at all? Obviously, because they think that won't happen.
Fair enough. My point however is that risk taking is common in all walks of life. A facebook worker who accepts a $5k bribe doesn't think he's going to get caught and the cyclist thinks he won't hit his head.
If $5k is worth the risk of losing their job, they're not making much money at Facebook, nor do they think there's potential for much growth in the role. Losing that job isn't going to be a big deal for them, and the article doesn't suggest any legal action may be taken against them.
It's another argument, but I think it's on Facebook entirely that they don't pay their employees enough, keep them happy enough, to prevent this sort of thing from happening. When you leave content moderation to your lowest paid employees, you are only encouraging people to figure out how to game the system for more.
I think you're making the mistake of assuming people always act rationally. There are countless anecdotes of well paid, ostensibly happily employed people losing their job over juiced expense reports or similar small (in dollar terms) schemes.
We run ads for small beauty clinics & spas that cannot afford their own staff to do that kind of work. This includes facebook ads which have high return value for the businesses.
I cannot tell you how much frustrating time we had to spend chasing Facebook to stop banning us because of an algorithm that thinks the belly buttons and the ladies in our ads resemble adult ads (it's not in any way NSFW; it just shows more skin than usual, because that gets interested clicks).
Sometimes I wish there was someone to talk to at facebook to make the automated process less painful, but we are in an Asian country with no facebook representative.
Ironically, since I interviewed for a Facebook position a year ago, I can't post my website link anymore (hosted on GitHub Pages - programming/photos, basically) there or on Instagram. It probably happened because I sent my link to too many friends too fast to ask for opinions, and Facebook marked it as spam. I tried to ask for help multiple times using their 'review' feature, with no answer.
This is super common. I knew a guy who knew a guy who could get similar stuff done at another famous social network. A company I know got its short URL because an employee snagged it from some nobody and gave it to the company. I don't feel that this is illegal, as it is a private company that can do what it wants, but a low-level employee doing this unilaterally is sketchy.
I don't use Facebook, but I would think a good transparent way to deal with this is open a page to the public that lists the ads a company is running along with statistics like how many times the company has violated policies, how many ad runs they've accumulated and how many dollars have been spent to run those ads.
So just for scale, a single ad campaign could have thousands of individual ads. Times a few dozen campaigns at a company. Times millions of buyers, many of whom buy through an ad agency or network reseller.
The scale of this problem is too big for transparency. Hypertargeting of advertisements should just be illegal.
Think of an ad as something like a 1¢ payment from me to the website. Suppose there was no such thing as ads, and the website wanted me to make the 1¢ payment directly. What are our options for doing that? Is there any way to do it without paying fees much larger than 1¢?
If you want ad-supported businesses to go away, invent a convenient way for me to pay Google 1¢. (Convenient as in, I don't need to create an account with a password.) It's extremely difficult, both for technical and legal reasons.
I don't know anything about the ad industry, but it seems like you may be off by an order of magnitude. Click-through rates average 3+%, and it costs around $2.70 per click. [1] That seems to mean an impression is worth in the ballpark of 8 to 10 cents. Based on my recent web browsing history tonight, I seem to have visited around one page per minute.
So, the problem is not how to create the infrastructure for micropayments, but rather that explicit charges would be very high - the effective cost of browsing the commercial web is about $6/hr, which is about twice what it cost in the mid 90s to use AOL.
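A quick back-of-the-envelope check of those figures (the CTR and cost-per-click are the commenter's numbers, not independently verified):

```python
# Value of one ad impression, given the quoted CTR and cost-per-click.
ctr = 0.03            # ~3% click-through rate
cpc_dollars = 2.70    # ~$2.70 paid per click

per_impression = ctr * cpc_dollars          # expected revenue per impression
pages_per_hour = 60                         # one page per minute of browsing

print(round(per_impression, 3))                      # 0.081 -> ~8 cents
print(round(per_impression * pages_per_hour, 2))     # 4.86  -> roughly $5-6/hr
```

So at one ad impression per page the implied cost is just under $5/hr; with slightly more than one ad per page it lands at the ~$6/hr figure quoted above.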
I'm not sure of all the implications, but I think this should substantially change a person's views.
Good call... I've been thinking about this same system for a while. Give me a cookie jar I can fill with a small amount from my bank / PayPal etc., and instead of accepting the flipping cookies, I click a button and throw a couple of cents in the website's jar. I like what Brave is doing, but I want more control over it.
By "break the ad industry" I don't think they meant "make it so you can't inform others that you have a product that you are selling".
Now, the following is a rather stupid hypothetical, but imagine if the government made it so that, as part of jury duty, you had to select a couple of categories of product and be presented with a random selection of around 5 descriptions of products in each of those categories (companies could bid to have their product listed, I guess; the descriptions would have to be straightforward descriptions of the product and, optionally, some of the ways it is demonstrably different from the competition),
and at the same time banned banner ads on websites.
This would be stupid and tyrannical, yes, but it seems to me like it would "break the ad industry" while also not really "doing away with capitalism".
To be very very clear : I do not advocate implementing the policy that I just described; I think it would be a very bad policy.
This doesn't break the ad industry at all, it just obfuscates the process. Like, how do you narrow the list of 300 watchmakers down to 5? How does one watchmaker establish a reputation to allow him to be successful that doesn't lead to recreating the ad industry?
The only way to get rid of advertising is to just commoditize everything, which is what communism aims to do.
The selection of the 5 from the 300 is re-randomized each time each person goes to jury duty. (If it is a category that few people select, perhaps prioritize options that haven't been shown to people as often.)
So, I don't see the issue with the "narrowing down the list of 300 watchmakers down to 5".
This ensures that some people become aware of the different new products.
If they find any of the products that they see to be remarkable, they might purchase it, and quality can spread by word of mouth.
Also, there could still be people who do reviews of products, they just legally couldn't receive any compensation from the people selling the products.
When I opened the page it had a banner at the bottom that said "impeachment, something's about to get real :peach emoji:". Pulitzer or not, their editorial style doesn't inspire confidence in me.
I hate this line of reasoning so much. It completely ignores how trust works. As an organisation, you cannot simultaneously build a reputation for worthless anti-journalistic clickbait and for serious, real journalism. It would be as if Facebook launched some pro-privacy NGO and then people like you went around saying everybody should absolutely take it seriously.
And yet this reasoning works for every newspaper with an editorial/opinion section: The opinion and editorial columns are not held to the same standard as the rest of the paper.
Actually, if you look at the source for that claim[0], it says that some were from BuzzFeed News and some from BuzzFeed; they're not the same.
Besides, it's an almost meaningless statistic on its own without some point of comparison, isn't it? How (un)critical were they of other public figures?
Personally, I think Reuters does a pretty good job. They tend to clearly distinguish between news and opinion, and their news reporting is mostly (though not always) just facts with little interpretation.
Too many networks either blur or outright hide the distinction between news and opinion, e.g. you'll get a bit of both in the same article. For example, CNN, NBC and FOX. Although, and I'll probably get downvotes for this, FOX News (online at least) probably does slightly better on this: opinion pieces are generally visibly labelled as such at the top of the page, though they don't always make it clear on the front page in the leading headlines. And on actual news stories, they tend to present the facts as is, without interpretation or opinion (again, not always the case, but I think they try harder).
I used to read CNN and NBC as well as FOX, but was driven away from CNN by their flagrant bias in reporting news. NBC drove me away with their site redesign, when every article had to have a large picture associated with it on the front page; it made it too hard to figure out what the articles were about, and it loaded at a snail's pace. FOX has done some similar things, but their front page loads fairly snappily. Reuters is probably the best, though: the site is responsive and mostly text links, except for the main article in a section.
If they were making millions, then this guy sold himself cheap at $5K. He was literally the decision maker standing between them and millions of dollars, which is why they jumped at the retainer price. He could have hit them for $50K and they would not have blinked.
I was not commenting on the morality of what the individual did, but rather on how shitty he was at sales. I personally would not have done what he did and find it reprehensible, and I am fully aware of how the court system works. The point of my post is how these people always seem to undersell their illicit services, even when those services are a rare commodity and valuable to the purchaser. My comment was specifically about the seller of services not knowing how to price his market. The morality and legality issues have been covered in several other comments, so I figured there was no need to beat that horse.