VC funding allowed them to move quickly enough that they got to a scale where they could afford legal and lobbying protection when challenges eventually happened.
How would that work exactly? And are there things that need to be protected in the online world but not the physical/real world?
When it comes to YouTube censorship, freedom of speech seems applicable, except for the fact that YouTube is a private platform with terms of service you have to agree to for them to host and distribute your content.
At a practical level, once a platform like YouTube becomes The Commons, then certain rights become necessary for us to live in a free society. No idea how you implement that legally though.
Perhaps something like utilities where a company operates at sort of a midpoint between private and public?
One way it could potentially work is for Congress to legislate that large online platforms operate as common carriers and refrain from censoring any legal content. I think this is worth considering. It would create challenges for content moderation but there are ways to deal with that by giving users better tools to filter out objectionable content from their feeds.
> It would create challenges for content moderation
Honestly it sounds like this problem would be gone entirely. The government already defines speech that is so bad as to make it illegal despite the First Amendment. Treating social media as a common carrier means they get to skip moderation entirely and just have to follow what is legally allowed.
Forcing social media to act as a common carrier could very well mean algorithmic feeds aren't allowed, as they impede speech and skirt the line of censorship. Without algorithmic feeds we could be back to seeing posts just from those in your circle, meaning moderation shouldn't be nearly as big of a concern anyway.
I had the same experience with DHL while I was in the Netherlands. Here in the US it's been fine; my best guess is that it has to do with how customs and duties are handled.
Why do you think the party allegiance will make a difference?
I generally consider the Republicans to be more likely to reach for military action, though the Democrats have seemed pretty war-hungry in the last decade or two as well.
>Why do you think the party allegiance will make a difference?
I think I could replace "democratic president" with specifically Biden or Harris and would still believe the chance of military confrontation with them is "unlikely" unless directly attacked, but with Trump it is zero.
I wouldn't be surprised if the response to an invasion of Taiwan looked very similar to the response to Russia's invasion of Ukraine.
The world did nothing for about a week; it seemed as though leaders were willing to sit on their hands to see if it ended quickly. When it didn't, they moved from vague, hand-wavy statements to economic sanctions.
If China tries to invade, we could very well see a weak, hollow political response from world leaders unless China falters and is stopped early on.
Ukraine was (and is) a very small economy literally right up against Russia, one that had long been in Russia's sphere of influence if not under its direct control. Ukraine's fall would have had little meaningful impact on western powers other than losing some face in countering Russian aggression. Specifically to avoid losing that face, western leaders made it very clear from the get-go that they would not step in to defend Ukraine, so that they could conserve their strength in case they needed it against China. The universal assumption was that Russia, which was believed to have one of the most capable armies in the world, would steamroll the Ukrainians and the country would fall in days if not hours. Only when the Russian advance stalled and it became clear that Ukraine could hold out with moderate support did the west start providing that support, and only after Ukraine made some impressive gains demonstrating it could not only hold out but potentially drive the Russians back did the west start sending serious aid.
Conversely, Taiwan is extremely integrated into the global economy and is a key part of America's Pacific power. We have been backing Taiwan for decades. Taiwan is an island, and one with very few suitable landing sites, making an invasion extremely technically challenging for any power, even one with a strong navy. China, despite its recent shipbuilding spree, still lacks naval and amphibious combat experience, and it does not have anywhere near the fleet size necessary to fully leverage its army's main strengths. We are all freshly aware of the lessons from the invasion of Ukraine: that the on-paper strength of countries like Russia and China does not correspond to force-projection capability, that providing substantial aid early on is critical, and that modern military equipment is not so powerful as to collapse an otherwise functional country in hours. The amount of aid Taiwan needs is less, and the willingness to give it is greater. Only a major shift in US behavior would cause it not to support Taiwan.
A bit off topic, but Russian field hospitals and blood supplies were only the last, most obvious indicator.
The build-up of troops could have been written off as sabre rattling; they did the same a year or two earlier. Sending a bunch of naval assets the long way around Europe was a much clearer sign. At least for me, that's when I knew they were actually going to invade (again).
> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
TechCrunch simplified it.
from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
edit: here's the actual text from the act, which makes it clearer that it's about whether the deception is purposefully intended for malicious reasons
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
> In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled practices.
Broadly speaking, I feel that whenever you have to build specific carve-outs into a law for perfectly innocuous behavior that would otherwise be illegal under the law as written, it's not a very well-thought-out law.
Either the behavior in question is actually bad, in which case there shouldn't be exceptions, or there's actually nothing inherently wrong with it, in which case you have misidentified the actual problem and are probably needlessly criminalizing a huge swathe of normal behavior beyond just the one exception you happened to think of.
Funny, I took away pretty much the opposite: that advertising is only "acceptable" because it has been around for so long, but is otherwise equally ban-worthy for all the same (reasonable) reasons.
For at least the last 10 years, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally and subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.
It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?
Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.
> What I don't see here is how the EU is actually defining what is and is not considered AI.
Because instead of reading the source, you're reading a sensationalist article.
> That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
----
We're going to get a repeat of GDPR, aren't we? Where, 8 years in, people arguing about it have never read anything beyond Twitter hot takes and sensationalist articles?
Sure, I get that reading the act is more important than the article.
And in reading the act, I didn't see any clear definitions. They have broad references to what reads much like any ML algorithm, with carve-outs for areas where manipulating or influencing is expected (like advertising).
Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here; I didn't see such a definition, but it's easy to miss things in legal texts.
The briefing on the Act talks about the risk of overly broad definitions. Why don't you just engage in good faith? What's the point of all this performative "oh this is making me so tired"?
Maybe if the GDPR were a simple law instead of 11 chapters and 99 sections, and the only benefit anyone got from it weren't cookie banners, it would be different.
It is a simple law. You can read it in an afternoon. If you still don't understand it 8 years later, it's not the fault of the law.
> instead of 11 chapters and 99 sections
News flash: humans and their affairs are complicated
> all anyone got as a benefit from it is cookie banners
Please show me where GDPR requires cookie banners.
Bonus points: who is responsible for the cookie banners.
Double bonus points: why HN hails Apple for implementing "ask apps not to track", boos Facebook and others for invasive tracking, ... and boos GDPR which literally tells companies not to track users
Consent is never "needed". Consent is one of many legal bases that allow data processing to take place. If no other legal basis applies, the industry can use "consent" as a get-out-of-jail card. Consent as a legal basis was heavily lobbied for by Big Tech.
If even consent does not apply, then the data shall not be processed. That's the end of it.
It's called dark patterns and malicious compliance
The annoying banners in particular were designed by the IAB Tech Lab, which is an industry front for adtech/martech companies.
Oncehub removed tracking cookies from some of their meeting invite pages in the EU and stopped showing a banner, because they thought it looked off-putting.
They got a few support tickets from people who thought they were still tracking and had just removed the banner.
It's (at least in some cases) malice, not stupidity.
By putting cookie banners everywhere and pretending that they are a requirement of the GDPR, the owners of the websites (or of the tracking systems attached to those websites) (1) provide an opportunity for people to say "yes" to tracking they would almost certainly actually prefer not to happen, and (2) inflict an annoyance on people and blame it on the GDPR.
The result: huge numbers of people think that the GDPR is a stupid law whose main effect is to produce unnecessary cookie banners, and argue against any other legislation that looks like it, and resent the organization responsible for it.
Which reduces the likely future amount of legislation that might get in the way of extracting the maximum in profit by spying on people and selling their personal information to advertisers.
Which is ... not a stupid thing to do, if you are in the business of spying on people and selling their personal information to advertisers.
It doesn't matter what it requires; the point is that, as usual, the EU doesn't take into account the unintended consequences of laws it passes when it comes to technology.
That partially explains the state of the tech industry in the EU.
But guess which had a more deleterious effect on Facebook ad revenue and tracking: Apple's ATT or the GDPR?
The EU just prioritises protection for its citizens over tech industry profits. They are also not opposed to ad revenue and tracking; they just require that people consent to being tracked, with no sneaky spying. I'm quite happy for tech to have those restrictions.
The EU right now is telling Meta that it is illegal to give users the choice between ads based on their behavior on the platform or paying a monthly subscription fee.
> The EU right now is telling Meta that it is illegal to give users the choice between ads based on their behavior on the platform or paying a monthly subscription fee.
And? With GDPR the EU decided that private data cannot be used as a form of payment. It can only be voluntarily given. Similarly to using one's body: you can fuck whoever you want and you can give your organs if you so choose, but no business is allowed to be paid in sex or organs.
That's just the problem. Meta was going to give users a choice between paying with "private data" or paying money. The EU won't let people make that choice. Are you saying people in the EU are too dumb to decide for themselves?
But how is your data that you give to Facebook "private" to you? Facebook isn't sharing your data with others. Ad buyers tell Facebook "Put this ad in front of people between 25-30 who look at pages that are similar to $x on Facebook".
You cannot barter with fundamental human rights, which the right to data protection is (as per the Charter of Fundamental Rights of the European Union), the same way you cannot barter yourself into slavery, even if you insist you are willing and consenting. By what precedent? By the precedent of the state being sovereign in enacting law.
WeChat would exit on Android if you didn't give your contact list to them, but this behaviour wasn't allowed on iOS by our Apple overlords, and I'm quite happy about that.
> Well, per GDPR they aren't allowed to do that. Are they giving that option to users outside of EU? Why Not?
Because no other place thinks that their citizens are too dumb to make informed choices.
> What about sex and organs? In your opinion should businesses be allowed to charge you with those?
If consenting adults decide they want to have sex as a financial arrangement, why not? Do you think these 25-year-old "girlfriends" of 70-year-old millionaires are there for the love?
> I didn't give it to them. What is so hard to understand about that?
When you are on Facebook's platform and you tell them your name, interests, relationship status, and check-ins on their site, you're not voluntarily giving them your data?
> Are you saying that your browsing data isn't private to you? Care to share it?
If I am using a service and giving that service information about me, yes I expect that service to have information about me.
Just like right now, HN knows my email address and my comment history and where I access this site from.
There's a fundamental difference, I think, between the European mindset on private data and the American one.
From the European mindset: private data is not "given" to a company; the company is temporarily allowed to use the data while that person engages in a relationship with the company, and the data remains owned by the person (think copyright and licensing of artistic works).
American companies think that they are granted ownership of data just because they collect it. Therefore they cannot understand, or don't want to comply with, things like the GDPR, where they must ask to collect data and even then must only use it according to the whims of the person to whom it belongs.
> Because no other place thinks that their citizens are too dumb to make informed choices.
In the case of Facebook (or tracking generally) you had no chance to make an informed choice. You are just tracked, and your data is sold to hundreds of "partners" with no possibility of saying "no".
> Just like right now, HN knows my email address and my comment history and where I access this site from.
And that is fine. You'd know that if you spent about one afternoon reading through GDPR, a regulation that has been around for 8 years.
A distinction without meaning. Here's your original statement: "no other place thinks that their citizens are too dumb to make informed choices."
Questions:
At which point do you make an informed choice about the data that Facebook collects on you?
At which point do you make an informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
At which point do you make an informed choice to let Facebook use any and all data it has on you to train Facebook's AI?
Bonus questions:
At which point did Facebook actually start giving users at least some information on the data they collect and letting them make an informed choice?
> At which point do you make an informed choice about the data that Facebook collects on you?
You make an “informed choice” when you create a Facebook account, give Facebook your name, date of birth, your relationship status and who you are in a relationship with, your sexual orientation, when you check in to where you have been, when you click on and buy from advertisers, when you join a Facebook group, when you tell it who your friends are…
Should I go on? At each point you made an affirmative choice about giving Facebook your information.
> At which point do you make an informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
> the EU doesn’t take into account the unintended consequences of laws it passes when it comes to technology.
So, the companies that implement these cookie banners are entirely without blame, right?
So what is your solution?
Reminder: GDPR is the General Data Protection Regulation. It doesn't deal with cookies at all. It deals with the tracking, collecting, and keeping of user data. It doesn't matter if it's on the internet, in your phone app, or in an offline business.
Reminder: if your solution is "this should've been built into the browser", then: 1) GDPR doesn't deal with specific tech (because tech changes), 2) when governments mandate specific solutions they are called overreaching, overbearing tyrants, and 3) why hasn't the world's largest advertising company, which incidentally owns the world's most popular browser, implemented a technical solution for tracking and cookie banners in the browser even though it's been 8 years already?
> But guess which had a more deleterious effect on Facebook ad revenue and tracking: Apple's ATT or the GDPR?
In the long run most likely GDPR (and that's why Facebook is fighting EU in courts, and only fights Apple in newspaper ads), because Apple's "ask apps to not track" doesn't work. This was literally top article on HN just yesterday: "Everyone knows your location: tracking myself down through in-app ads" https://timsh.org/tracking-myself-down-through-in-app-ads/
Meta announced in their earnings report that ATT caused a drop in revenue after it went into effect.
They made no such announcement after the GDPR.
What's my solution? There isn't one; because of the way the entire internet works, the server is always going to have your IP address. For instance, neither Overcast nor Apple's podcast app actively tracks you or has a third-party ad SDK [1]. But since they and every other real podcast player GET both the RSS feed and the audio directly from the hosting provider, the hosting provider can do dynamic ad insertion based on your location by correlating it to your IP address.
What I personally do is avoid ad-supported apps because I find them janky. On my computer, at least, I use the ChatGPT plug-in for Chrome and it's now my default search engine. I pay for ChatGPT and the paid version has had built-in search for years.
And yet they make no move against Apple, and they are fighting EU in courts. Hence long term.
> There isn't one; because of the way the entire internet works, the server is always going to have your IP address.
Having my IP address is totally fine under GDPR.
What is not fine under the GDPR is to use this IP address (or other data) for, say, indefinite tracking.
For example, some of these completely innocent companies that were forced to show cookie banners or something, and that only want to show ads, store precise geolocation data for 10+ years.
I guess something something informed consent and server will always have IP address or something.
> What I personally do is avoid ad-supported apps because I find them janky.
So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
> Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
You could point out a specific section or page number instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you claim to have done.
You could have shared, right here, the knowledge that came from that reading. At least a hundred interested people who would have come across that clear definition in your comment will now instead continue ignorantly making decisions you disagree with. Victory?
I went through a round of interviews the second half of last year. Interviewing felt the same as it had over the last 5 or 10 years honestly.
I had a few coding challenges; all were pre-interview and submitted online or shared in a private repo. One company had an online quiz that was actually really interesting to take; the questions were all multiple choice but done really well to tease out someone's experience in a few key areas.
For what it's worth, I don't use LLMs, and the interview loop went about as I'd expect in a tough job market.
I've had the same experience lately, though I think I might be getting lucky with a few of these interviews. The leetcode questions, in particular, have been softball. I do appreciate that...
The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.
This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.
If you know, and even encourage, your employees to use LLMs at work, you should want to see how well candidates present themselves in that same situation.
I'm out of context here as I'm not applying to Anthropic, so I'm not surprised at all if I'm missing details of the full process!
If this is just for a written part of the process or something, maybe I get it? But even then, if you expect employees to use LLMs I'd really want to see how well they interview with the LLMs available.
I still don't know how to quit Vim without googling for instructions :P
As an anecdote from my time at uni, I can share that all our exams were either writing code with pen on paper for 3-4 hours, or a take-home exam that would make up 50% of the final grade. There was never any expectation that students would use pen and paper on their take-home exams. You were free to use your books and to search the web for help, but you were not allowed to copy any code you found without citing it. Also not allowed to collaborate with anyone.
I mostly connect through Signal. I do technically have a phone number that my close friends and family have, but it's a random VoIP number that I usually change every year or so. Surprisingly, no one has really cared; I send out a text that I got a new number and that's that.
How? Most of the services I use, from Walgreens to banks to retirement accounts, require a phone number either for 2FA or just to verify that you’re you when signing up. After changing my phone number this year and having to go through the rigamarole for each service, I decided never again.
I've had limited luck feigning ignorance with a bank recently. "I don't know why I'm not getting a code." "No, I don't have another phone number." "I still can't log in to the web portal." They dropped the phone number requirement in favor of sending the OTP to email in the end, but it took way more effort than is reasonable. I tend to ask the CS person to pass along a request for TOTP/authenticator apps, but given that the request for a phone number is likely intentional, I doubt the feedback is getting very far. In my naive mind, if enough people do the same, maybe they'll get the message.
Yeah, companies are not dumb, and they know when you have VoIP number vs a full account with an "accepted" company.
I can kind of see why they wouldn't allow 2FA to a number that could be easier to lose, but that's a weak argument. Of course they don't want someone from .ru to get a US number with all of the baggage that would entail.
There are flaws to their methodology. For half the companies, to change your number from A to B, you first must verify a NONCE with A, then verify a NONCE with B. This just means you have to possess two phone numbers for a period of time (weeks, or in reality months) while you change the long list of services over to the new phone number.
There is a simpler/better way, and that is to verify you control your email address before allowing you to do a NONCE with B.