Google, Meta, others will have to explain algorithms under new EU legislation (theverge.com)
360 points by niklasmtj on April 23, 2022 | 211 comments



It's going to be interesting to see whether the whole credit scoring industry will have to disclose their algorithms as well.

Looking at you, Schufa.


My understanding regarding credit score is that it's one of the most regulated and explainable algorithms.

It's not transparent to the public, but it is auditable due to anti discrimination regulations. Am I wrong on this?


We're doing credit scoring at the Danish bank I work at. One of our requirements is that the model and architecture have to be able to provide explanations for why your rating is whatever it is. Both to regulators, internal auditors, and customers.

Personally, I think denying people a loan is a pretty impactful decision on people's lives. They deserve a reason.


Fellow Dane here. I guess I shouldn't be surprised, it's just not something I've encountered personally yet (that I know of). Do you know if a bank is obliged to give you your credit score if you ask for it?


I actually don't know if we have to give you the final "credit score". I'm more on the technical infrastructure side. For our project though, it's my impression that we aren't aiming to provide the number to the customer (It's a ranking from 1-10, binned arbitrarily based on internal rules, so practically meaningless to any individual). The credit score can be manually overwritten by individual advisors, and may be adjusted to fit sector wide government mandated risk levels, so its use beyond internal calculations is limited.

We will rather try to provide explanations for specific decisions, so if we deny you a loan we want to be able to tell you what pieces of hard data we are basing that decision on. Try to make it actionable, instead of the opaque numbers that lead to a feeling of "the computer says no." It's still an open question whether we will be able to provide that insight.

Legally I don't know if the credit score, or any of its components (LGD and PD), counts as "personal data". If it does, we are obliged to provide it. Basically no customers make those kinds of requests, so there aren't really any internal discussions around those questions. As far as my bank is concerned, I'm sure we'd just tell you if you asked. We're bad at secrets.


Well, 90% yes and 10% no, in my opinion.

Yes, because of the obvious ramifications, as you've alluded to.

No, because it's a private company you're asking for the loan from. If they don't want to lend to you, then they don't have to lend to you. I believe they should be able to come to this conclusion however they like, except based on the obvious factors like culture, gender (or lack/fluidity of), etc.

But mostly yes. I think the credit economy is an important thing to understand, and people should be able to access credit so they can actualise their lives.


> I believe they should be able to come to this conclusion however they like, except based on the obvious factors like culture, gender (or lack/fluidity of), etc.

Really? How do you figure? I genuinely cannot tell how you are reconciling these two points of view. Why should a private business be able to do business however it wants, but somehow be restricted when it comes to this list of 3-4 factors? Can you elaborate on that?

More specifically, how do you reconcile this opinion with the fact that many algorithmic models do perpetuate existing discriminations against these groups that you listed, despite these things supposedly not being parameters of the algorithms?

(Note: I am not condoning discrimination. In fact, I personally don't think that private business should be able to do whatever they want, and that they should be much more restricted than they are today in many, many different respects.)


> Really? How do you figure? I genuinely cannot tell how you are reconciling these two points of view. Why should a private business be able to do business however it wants, but somehow be restricted when it comes to this list of 3-4 factors? Can you elaborate on that?

I don't get how you're confused by this notion. A business should be able to operate however it likes, within the confines of the laws of society. What don't you get about that?

> More specifically, how do you reconcile this opinion with the fact that many algorithmic models do perpetuate existing discriminations against these groups that you listed, despite these things supposedly not being parameters of the algorithms?

I'm sure there are many examples of laws not being obeyed, ethics being dodged, or morality being misaligned, but that doesn't take away the simple fact that a business should be able to do business however it likes provided it's obeying the law. If a business is not obeying the law, then it's prevented from doing business. If it's not prevented despite the obvious legal breaches, then it's corrupt or the government is corrupt.

Is there something about this you're missing?


What about immediate proxies for those things? I can determine your culture, gender, etc. based on things like your shopping characteristics. Maybe I just deny loans to people who don't ever buy sunscreen? Or people who prefer menthol cigarettes? Or people who donate to the wrong charities?

Saying that private businesses can just do whatever they want is basically incompatible with Western society as we know it. It only works at small scale precisely because little shops don't have access to data about their customers and they are legally barred from using the obvious clues like "wears a turban" or "has short hair and cargo pants"


> Saying that private businesses can just do whatever they want is basically incompatible with Western society

But I never said that, though. I said they can operate however they like within the bounds of society's laws. You've turned something I've said into something I did not say, and I expect an apology on your part for placing me in a negative light.

I believe in socially responsible, healthy business practices that add positive value to the economy they operate inside of, not detract from it. Laws should be obeyed.


There are three basic reasons people don't get a loan:

- They already have too much debt

- They don't earn enough to pay it off

- They have a history of not paying off their debts.

This should not be hard to explain to most people.


> - They have a history of not paying off their debts.

If only. In practice it's more like "They don't have a history of paying off their debts", which crucially means if you don't have a history of being in debt then you have no history of paying it off, and therefore you're considered high risk. Thus you get otherwise-nonsensical behaviours like taking out a loan only for the purpose of paying it off.


Having to have had debt to prove you can pay it off is a weird thing that's going on in America. Here in Europe we don't really do that. If you have avoided loans most of your life that's a good thing.


You'd think so, but banks aren't "all or nothing" businesses. It's not like you can either get a loan for any amount you want, or you can't borrow at all. There are a bunch of intricacies that make it much more interesting.

Say you lend $200 to a guy who also has a house worth $100, and you're the only creditor. In the event of a default, you can take that $100 house. That means you only lose $100, not the $200 you're actually out. This is called the Loss Given Default, or LGD. That number encapsulates your first point, and half of point two.

The other half of point two, and the entirety of point three is covered in what we call the Probability of Default, or PD. The chance that a given debtor is going to default.

I hope you can see how these two numbers interact, especially when you then also take into account the upside of giving the loan. Providing loans might entice a large counterparty to do more business with you, or it might provide you with access to a new network. It might make sense to make a risky loan if the downside is very small, or conversely it might not make sense to make a pretty safe loan if the downside is huge. In practice you can multiply these two numbers together to get an "expected cost" of providing the loan.

Now when someone comes and asks you why you aren't going to lend him $300, you then have to be able to trace all that data back to the source.
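
To make the arithmetic concrete, a toy sketch (made-up numbers; the real models behind PD and LGD are of course far more involved):

    # Toy expected-loss calculation (illustrative only)
    loan_amount = 300.0       # the requested loan
    collateral_value = 100.0  # e.g. the house we could seize

    # LGD: fraction of the exposure lost if the debtor defaults
    lgd = max(loan_amount - collateral_value, 0) / loan_amount  # ~0.67

    # PD: probability of default, here just a made-up number
    pd = 0.05

    expected_loss = pd * lgd * loan_amount  # 0.05 * 0.67 * 300 = $10
    print(f"LGD={lgd:.2f}, PD={pd:.2f}, expected loss=${expected_loss:.2f}")

You then weigh that expected loss against whatever upside the relationship brings, and every number feeding it has to be traceable back to source data.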


At least in the United States, anyone can create a credit score. There is VantageScore (the one you get for free from CreditKarma or your bank/credit card), at least 20 different versions of FICO, JSS Scorelogix, LexisNexis RiskView, Equifax FICO and RISK, ChexSystems Consumer Score, PRBC, and a million others.

You have a right as a consumer to the underlying data from the credit reporting bureaus, but not the proprietary algorithms that determine risk.

I understand the desire for transparency, but at its core credit scoring is fraud prevention. It is like asking Visa to explain what criteria they use to determine if a charge is fraudulent, which predominantly helps the people trying to commit fraud.


For one of them operating in Germany that's not the case. They are - allegedly - factoring the area you live in, marital status, etc. into your score, and have so far refused to disclose their algorithm and methods.


[flagged]


I didn't think I would see the day we'd have a troll on HN. I doubt it's an AI either, because of the coherency of what they're saying.


Trolling whom? Trolling whom?

These guys send letters every month about artificial debt they self-inflicted and demanding payment under threat of theft, and I'm the troll?

International debt rates were under 1% before Ukraine; these dudes were asking 20% a year, like the sixteenth power of what the international market was asking. They can perfectly well get credit at under 1%; they can perfectly well function with lower rates.

They just love usury.

They love money.


"coherency of what they’re saying"

Bollocks - spelling and grammar are both rubbish.


It’s against the rules of this forum to accuse people of being trolls.


If you read the article, they're asking for more than explaining algorithms. Overall they want the tech providers to be responsible.

Explaining algorithms could, in theory, give away a competitive advantage. However, fairness to users seems to be a priority in this decision.


>> "Large online platforms like Facebook will have to make the working of their recommender algorithms (e.g. used for sorting content on the News Feed or suggesting TV shows on Netflix) transparent to users. Users should also be offered a recommender system “not based on profiling.”"

Both of those seem like good ideas and progress. The non-profiled recommender system option especially!

It's also really bothered me that tech companies of sufficient size can discriminate against legally-protected classes because "algorithms are complicated" and government regulators haven't pushed.

I'm not a fan of regulating design or use, but I'm a huge proponent of requiring transparency and detail on demand.

We'll see how willing the EU is to levy fines for breaches.

It's no doubt a consequence of most huge tech companies being American, but it's been refreshing to see the repeated "We have a law; you clearly broke it; here's your fine" follow-through thus far from EU enforcement.


> It's also really bothered me that tech companies of sufficient size can discriminate against legally-protected classes because "algorithms are complicated" and government regulators haven't pushed.

Care to elaborate? Discrimination in terms of what ads are displayed perhaps?



> We'll see how willing the EU is to levy fines for breaches.

It has been very slow with GDPR, I expect it to be even slower here.


Google is rolling out one-click cookie rejection as a result of gigantic fines threatened by the French CNIL. Having already been slapped with €90M and €60M, it seems like there's not much of a need for further fines. They know Europe isn't playing around.


> They know Europe isn't playing around.

Haha, look how long it took. That's billions gathered in the meantime, and I've never heard of anyone asking for the money back from these A-Z companies that make their money on the grey web.



Things like "fairness" aren't defined in the legislation and will be determined in smoke-filled rooms by shadowy moneyed interests.

Ordinary users will get censored. By the courts, by unelected regulators, and by Big Tech AI zealously nuking content to avoid arbitrary fines. It's content ID on steroids.


I agree that it could get out of hand. We'll have to wait and see how it turns out. Since this is an EU law I wonder if it applies to content hosted on EU servers only, or any content that shows up in their users' results.


Platforms are responsible for everything shown to a user inside the EU.

I suspect that Google and Facebook will not offer country specific blocklists like they do for Nazi content in Germany. If Hungary bans LGBTQIA content, it'll disappear in France. Europe can then have an argument about how they "really really not really" believe in free speech.


I am worried about the term "disinformation" since that can be really subjective. On the other hand anti-vax content is harmful, to me, so there's no easy answer.


If they do business in the EU. Otherwise this is without teeth entirely.


I mean, yes? That seems obvious?

EU law applies to companies which operate in the EU.


I would love to first see a technical definition of fairness from the EU that can be used to evaluate algorithms. That is a non-trivial detail often overlooked in these discussions.


This is 2022, you have this information at your fingertips. https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM%3A20...

> Article 29 Recommender systems

> 1. Very large online platforms that use recommender systems shall set out in their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters that they may have made available, including at least one option which is not based on profiling, within the meaning of Article 4 (4) of Regulation (EU) 2016/679.

> 2. Where several options are available pursuant to paragraph 1, very large online platforms shall provide an easily accessible functionality on their online interface allowing the recipient of the service to select and to modify at any time their preferred option for each of the recommender systems that determines the relative order of information presented to them.


Not technical, but fairness is the opposite of "our algorithm is so complicated that we can't prevent it from penalizing you even if you are not at fault. Unless you reach the top of HN, in which case we will manually intervene to fix things."


The Nevada Gaming Control Board requires the source code of all casino games.

Easy to see this concept expanding


This deserves more attention as it does set a good precedent


"Explaining algorithms could, in theory, give away a competitive advantage."

Why should anyone care if they have a competitive advantage?

If anything I want them to have a disadvantage, lose money, and go out of business.


> Explaining algorithms could, in theory, give away a competitive advantage.

Which is good. We could use some more competition on the market.


The new regulation requires a hand-wavy style of explanation, i.e. "build a retrieval/ranking/matching algorithm that learns from customer clicks and considers blah blah."

There will be no explanation of the actual algorithm.


The probability ranking principle (PRP) will never not be the foundation for recommender systems. Any statistical model flowing from PRP will be more or less the same regardless of whether the architecture is neural, boosted trees, etc.
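
As a toy illustration of PRP, with hypothetical numbers (a real model is just a fancier way of producing the probabilities):

    # Rank candidates by predicted probability of relevance: that's PRP.
    # Whether the probabilities come from a neural net or boosted trees
    # doesn't change the principle.
    p_relevance = {
        "video_a": 0.12,  # P(user engages | user, item), made-up numbers
        "video_b": 0.31,
        "video_c": 0.07,
    }

    feed = sorted(p_relevance, key=p_relevance.get, reverse=True)
    print(feed)  # ['video_b', 'video_a', 'video_c']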

However, if regulation required companies to disclose all of the data that goes into those models, how they acquire it (tracking browser/app behavior, purchases from 3rd parties), and so on, that would be the real game changer for consumer privacy and protection.


No explanation - no market. If you don't like it, you can run your business in another jurisdiction. I hope these companies get a taste of their own medicine; we desperately need competition.


The person you're replying to is saying that their explanation will be trivial and thus useless in determining the responses.

I don't know if that's true or not myself though, since I haven't read it myself.


Six percent annual turnover for non-compliance seems too low.

Should be 6% for the first offense, 12% for the second, 25% for the third, etc.

Until the company fixes its compliance or becomes insolvent.


This opinion is terrifying to me. Why has our culture become so authoritarian?


The alternative is that corporations can (or rather, can continue to) factor in selectively breaking laws as a line item in their operating costs, which is unappealing for certain people for a variety of reasons.


Escalating punishment based on repeat offense until we find a level that is convincing enough to stop you from offending is not a particularly new idea.


If an individual continues to defy the law they get sent to jail. Why should corporations get away without similar escalating consequences?


Because companies knowingly violate laws, then get a slap on the wrist when they're caught, and no one goes to jail.


Fines are a fine balance between motivational and liquidating.

I think 6% is quite a lot, even if one has 40% margin. Investors will be highly distraught and seek remedies from the current management. But for instance at 20% they will blame the regulators and push the company to fight in courts.

In any case, the government wants to motivate a change in behavior, not take companies out of business.


> In any case, the government wants to motivate a change in behavior, not take companies out of business.

Taking maliciously noncompliant companies out of business can be a way to motivate others not to try to skirt the law.


Requiring an alternative to algorithmic sorting (chronological) is good even though most sites do it already. "Explaining the algorithms" sounds like an impossible-to-implement, feel-good clause.

Requiring transparency for bans and censorship, though, will probably have a major effect if people start asking nosy questions and exposing corporate and government abuses of power. Many EU governments will regret that users can expose them; that will be fun to watch. It will also make it very hard for companies like Reddit to function: could Reddit be legally liable for the actions of its moderators?

The other clauses are the typical wishful thinking of EU legislators who think that you can legislate the solution to unsolved or unsolvable tech problems.


To add to this, I wonder how "explaining the algorithm" will work with algorithms that are trained with ML. Essentially they are black boxes, right? So as a tech company, would I have to just say what my best guess is on how it works?


As a tech company you know what outcome you want your ML algorithm to produce and you presumably have some way of figuring out whether or not it’s producing that outcome. Presumably you also know what’s being fed to the ML algorithm as training material.


I guess you always have a set of weights that the system tries to work towards.


"gradient descent" should be enough


> “Dark patterns” — confusing or deceptive user interfaces designed to steer users into making certain choices — will be prohibited. The EU says that, as a rule, cancelling subscriptions should be as easy as signing up for them.

This is an excellent addition.


I am very tired of the cookie/tracking popups on many websites that don't have an option to "reject all" but just "accept all" and "customise". Main example being Google Search.

Looking at this, I am hopeful but not too optimistic.


> I am very tired of the cookie/tracking popups on many websites that don't have an option to "reject all" but just "accept all" and "customise". Main example being Google Search.

And The Verge on this very article :)


The people writing the articles are different from the MBAs forcing the financial and technological decisions.

“Integrity” has different meanings for each group. For the latter, the meaning is likely closer to “bring in enough revenue to keep the publication running.” Applying dark patterns does not conflict with this.


Well it's fine with me. I only open Verge links when they're on HN and the title feels interesting. Which is pretty rare.



Wow, best UX change from Google in some time.


Don't let the prose fool you. They're doing this because what they did before was in violation and the walls were closing in.

This reminds me of supermarkets in Germany loudly announcing that they would abandon plastic bags to save the environment ... a few weeks before legislation came into effect banning them from selling plastic bags.

Why wait until you're potentially facing fines if you can move slightly ahead and sell it as a voluntary good thing you do for your users/customers?


That's basically WAI though. Plastic bags were banned because of the environmental impact. Some particularly naive customers might think it's a feel-good initiative from a brand they now love a little more, but at the policy level it's really not important how various implementers want to spin it. What's important is that the outcome is a reduction in plastic bag use.


Oh, I'm not saying any of these are bad. I'm just saying you shouldn't thank Google (or the supermarkets) when they try to spin this as charity. Some people might not want to hear this, but the thing doing this is not corporate volition but regulations.


Exactly. It's rich for the EU to regulate dark patterns when their own laws are the reason they're so rampant on the web today. I guess it's good that they're finally waking up to this, but they're so far behind what's going on in adtech that their actions seem almost intentionally slow.

The solutions to this aren't regulatory, but technical first. Monetary fines to tech giants are mere slaps on the wrist. We, and by that I mean the web developer community, need to make technical solutions that make it impossible for companies to infringe users' rights. I guess we should first start by defining what those should be on the web. Those solutions then need to be presented to lawmakers and companies forced to adopt them. This is not rocket science; there are already solutions to these problems that just aren't adopted (e.g. the {ab,un}used Do Not Track header).

All this "behave this way or else" regulation is just reactive, and usually takes years to even pass into law, by which point tech giants are way ahead of it anyway.


"their own laws is the reason they're so rampant on the web today"

Really? Which dark pattern would never have been created if the EU didn't exist?

"We, and by that I mean the web developer community, need to make technical solutions that make it impossible for companies to infringe users' rights. I guess we should first start by defining what those should be"

So, in this process, most of the population will get told what their rights are?

Since you complain that the regulation is slow, any ETA on when the technofix will be ready?


Your snark is annoying, but I'll play along.

> Which dark pattern

The cookie consent forms that were a direct response to EU laws.

> most of the population will get told what their rights are?

Internet users need to be a) educated about the value of the data they produce (and ideally compensated for it[1]), and b) be provided with tools that safeguard this data and give them absolute control over it. So, yes.

The web should be user friendly, not hostile and scammy at every turn. It should be impossible for companies to abuse user data, and regulations are clearly too slow and ineffective.

> any ETA when the technofix will be ready?

Some already exist, and others can be built. The incentives are just not there, as tech giants rule the web and law makers are both influenced by and playing catch up to their schemes.

[1]: https://www.forbes.com/sites/forbestechcouncil/2020/10/30/sh...


> It's rich for the EU to regulate dark patterns when their own laws are the reason they're so rampant on the web today.

No. The law is not the reason. Companies that knowingly and willingly break it are.

Those annoying popups? The vast majority of them are illegal under GDPR, which parasites like IAB are very well aware of: https://www.iccl.ie/news/gdpr-enforcer-rules-that-iab-europe...

Edit: changed article URL


Without the law they would take your data without telling you.


Cookie consent forms were a response to the "cookie law" passed in 2009[1].

If they're now illegal, that's on the EU for making them vague or not strict enough.

But my point is that fighting this with laws is:

- too slow, since by the time governments realize that something should be done, a lot of harm has already been inflicted upon users. And by the time laws do come to pass, tech companies have grown in power and already have alternatives to keep growing. Governments are constantly playing catch-up, which was a problem even with Big Tobacco/Pharma, but the speed of innovation of Big Tech is unparalleled.

- too ineffective, as breaking these laws is too slow/difficult to prosecute, and even when companies are fined, the fines are mostly too symbolic to matter. I.e. to them it's just the cost of doing business.

[1]: https://gdpr.eu/cookies/


> Cookie consent forms were a response to the "cookie law" passed in 2009[1]

Your link clearly states: "Receive users’ consent before you use any cookies except strictly necessary cookies".

For everything else you need to ask for consent with "No"/"Reject" being clearly labeled and being the default option.

Yes, it's that easy.

> too slow, since by the time governments catch up that something should be done, a lot of harm has already been inflicted upon users.

So, what exactly is your proposal except "law is bad"? How do you propose law should work to minimize harm?

To be clear: I think that the EU is too slow and too lenient when prosecuting things illegal under GDPR, and that they should pick up the pace. However, "omg this law makes the web bad" is in itself a very bad take, because it takes responsibility away from those who are actually responsible for making the web bad. They are now exposed... but have managed to persuade people that it's not their behaviour that is blatantly evil, but that "the law exposing them is bad".


I'm not saying "law is bad". I'm saying that the process of prohibiting a behavior after it happens is too slow and ineffective for Big Tech. As it was for many other industries before, but particularly now for Big Tech.

What needs to happen is for privacy-minded tech people to propose and lobby solutions to governments that make it impossible for companies to violate these rights in the first place, and then for governments to make it law for this technology to be used by all companies. E.g. the DNT header could've been one such solution, but the fact that it was never made part of a law is what led to it, ironically, being abused for tracking, and to it now being abandoned altogether.

We're in this mess because governments fundamentally don't understand technology and how to police it. Either that, or they're willfully complacent with the status quo because it benefits them as much as the corporations.


> I'm saying that the process of prohibiting a behavior after it happens is too slow and ineffective for Big Tech

Do you realize that all laws happen after something happens? Even your proposed solution of tech people coming up with something would also happen after the fact?

> What needs to happen is for privacy-minded tech people to propose and lobby solutions to governments that make it impossible for companies to violate these rights in the first place

Ah yes, the magical technical solution that is impossible to violate.

Good thing that you mentioned DNT. Do you know that DNT ended up being used for browser fingerprinting and hence tracking?

Had DNT been codified into law, you'd be complaining on HN that the law is bad and governments don't understand technology.

> Either that, or they're willfully complacent with the status quo because it benefits them as much as the corporations.

wat. GDPR is literally aimed against the status quo. I wish it was more rigorously enforced, of course.

Also, it doesn't apply just to the web. It asserts right to privacy as a fundamental right.


The « I don't care about cookies » plugin for Firefox is superb at getting rid of that problem.


I wouldn’t count on sites not tracking you until you actually saved your “custom preferences”.


Of course you can't.

My recommendation:

1. Install "I don't care about cookies"

2. Install "Temporary containers"

This requires that you use special containers for things you do wish to have cookies for such as HN for the login. Other than that, you can safely click accept for all websites, since it won't persist anyways.


Extensions aren't a reliable solution to privacy and security. Why would you give access to all websites you visit to a 3rd party? I barely trust browser developers these days, let alone some 3rd party developer.

And, no, I can't be bothered to review their source code if it's available, or to trust that I'm actually running said code, that it won't become malicious eventually or bother with building it myself. Unless it's run on demand and for a single purpose, I suggest avoiding extensions altogether.


Containers is a Firefox native feature. The temporary containers extension simply creates and destroys them on demand.

I understand the criticism though - increased attack surface. But the Web is pretty much a lost cause anyways.

https://drewdevault.com/2020/03/18/Reckless-limitless-scope....


Use a VPN, any incognito browser, stop using Google. Simple.


Remember when Xbox let you sign up for Live online, but you had to do a 3 hour interrogation on the phone to cancel? And calls would cost like 25 cents a minute or something crazy.

Or the auto-renewing subscriptions that either cancel your service immediately the second you turn off auto-renew, even if you paid for the current time allotment, or just prevent or ignore your request not to renew.

I feel like reverse charging didn’t exist back then.

There are also entitled devs who say your email domain or VoIP number isn't good enough when signing up for their service. Apparently there's no reason for anybody to use an email from outside their perfect little whitelist of Gmail or Microsoft domains… And why would anybody ever have a VoIP number unless they were a terrorist?


Or DirecTV recently:

“Hey we couldn’t process your card due to a temporary error so we went ahead and cancelled your $59 for AllTheThings plan you had for the last 10 years as a loyal customer. We’re very much not at all sorry that plan isn’t available any more. Now AllTheThings costs $129, but don’t worry, just click to reactivate, we’ll try your card again.” … “AllTheThings processed successfully for $129, thank you for your custom.”


There's also this classic attempt at cancelling Comcast service. [1] What a nightmare.

[1] https://m.youtube.com/watch?v=yYUvpYE99vg


You know, I’ve heard this a lot and I hate Comcast for their abysmal upload speeds + their pricing, but I’ve cancelled Comcast at five different residences now and the experience took less than 5 minutes each time.

“Why do you want to cancel?” “I’m moving.” “Would you like us to transfer service to your new address?” “No.” “OK”


Amazon's attempts to get users to unwittingly sign up to Prime is one of the most egregious examples I encounter on a regular basis. As a European I cannot wait to see it gone.


If it's easy to accidentally sign up for something, does that mean it has to be easy to accidentally cancel something? Because that would be hilarious.


I'm imagining a scenario where you're about to check out, and decide not to finish the transaction because you wanted to add something else to your cart first.

So you click the cancel button.

Only you find out you’ve cancelled Prime.


After the last iOS update, Apple nagged the shit out of me to set up Apple Pay, for two days. No way to say 'fuck off' - only 'remind me'. No obvious way to stop the nagging. Finally I gave them just the tip, and then pulled out before the money shot, and that seems to have shut them up for now.


I have no idea what you're talking about. Apple asked me about Apple Pay a long time ago, I decided I didn't want it and I've never been bugged again.


Apple Pay is legitimately useful though. You can use it to pay at physical businesses if you forget your wallet (double click and face ID to turn your phone into a "tap to pay" card basically). There are also lots of apps/sites that support it so you don't have to type in your card number or even your shipping info sometimes.


That you think it's useful isn't relevant; he doesn't wanna use it, and yet has continuously been nagged about it. That's not a good experience.

Fwiw I don't use Apple Pay either. There's a lot of things I don't use, for various reasons, and "you should just give in and use it" isn't the right response.


Amazon Prime is also very useful. I would say it's more useful to more people than Apple Pay. But that doesn't excuse the bad behaviour.


How is that even relevant? Your phone was trying to onboard you to a _free to use_ feature. If you can’t see the difference here, then I suspect there probably was a button labelled “fuck off” and you didn’t see that either. Honestly.


Windows regularly tries to nag me into free features I don’t want too.

At no time has the term 'dark pattern' ever depended on getting you to pay money.

Your argument is that I sound stupid, so I must be wrong?

There’s no button.

https://www.cultofmac.com/538999/apple-under-fire-apple-pay-...

https://www.wsj.com/articles/apple-insists-iphone-users-enro...

My other peeve is when streaming apps put a button in the bottom-right of an ad, same size and style as the ‘skip’ button one reflexively clicks. Except it turns out to be an ‘engage even moar’ button.


Apologies for the implication you’re stupid, I didn’t really mean that and it was uncalled for at any rate.

I don't disagree as regards dark patterns; your example just felt a bit irrelevant to the specific topic being discussed (Amazon pushing a paid-for product / cancelling a paid subscription).


I can understand why you would make the distinction. Making distinctions is good, in general. However from my perspective as a frustrated user being antagonized by ‘my’ devices, it’s all the same battle to me.


Paid vs not-paid is not an important distinction.

People who think that money is the only thing that other people want are doomed to be repeatedly exploited by people who understand that there are more forms of exploitation than directly monetary.


How on Earth are you being exploited by putting a copy of your bank card on your phone?


"Free to use", but presumably comes with a user agreement that opens you to some financial liability. There's a (granted small) chance that a bug, security incident, or fraud lands you in a Kafkaesque debt nightmare.

I had a bit of a nightmare where one of the credit reporting agencies was convinced my residential address was inside my bank. Their online system referred me to their phone system or sending them mail. Their phone system referred me to their online system or sending them mail. I sent them mail 3 times and got no reply. An online cheat guide for getting to an actual human through their phone system didn't work, and I eventually just started hitting random keys in their phone system and got to a human who was able to sort it out.

You can't even get a secured credit card (backed by a cash deposit) without a credit check (I looked into it), which is going to fail if your residential address is wrong.

Opening a financial account that might misreport something to a credit agency shouldn't be taken lightly.


There ought to be penalties for negligence causing damage to you.


Apple makes money off the interchange fee. It might be "free" to the end user, but the corporate motivation is the same as Prime's -- money.

And please don't ad hominem attack people you're responding to.


Yeah you're right, there was no need for the last bit. I'm still struggling to see the relevance though; trying to get me to buy things is very different from trying to get me to use a feature you profit from (in my opinion). You also have to bear in mind that HN represents the more technical users; plenty of people probably do need the popups to discover these features. That said, a "no thanks, don't remind me again" button would be a nice inclusion - perhaps with a secondary confirmation.


Feature you profit from?

You feel you profit from facebook tracking as well?

Regardless, these dark patterns are truly disgusting and how some can defend them so mindlessly just because they apparently found a use for a product is quite disturbing.


“You” in that context was the entity pushing the feature. I’m not sure what point you think you’re making about Facebook tracking tbh, but I don’t use Facebook so you’re asking the wrong person; to claim I am mindlessly defending dark patterns is nonsense.


You don't think amazon thinks prime is beneficial to you? Or that facebook doesn't think tracking is beneficial for their users?


You’re clearly determined to twist my words, I don’t particularly care what Amazon or Facebook think and have never claimed otherwise. Hell, I haven’t even defended either of your examples.


And I'm still struggling to see any difference.

> I’m still struggling to see the relevance though, trying to get me to buy things is very different from trying to get me to use a feature you profit from (in my opinion).


I feel like I remember it being pretty easy to cancel Prime, though. Have things changed?


Doesn't excuse tricking people into it.



It’s a great idea but my understanding is that they have not yet defined the term. And that sounds very hard to define.


"I know it when I see it."


I’m just not a big fan of laws where just about everyone could arguably be breaking them in some small way. That’s a lot of faith to put in regulators to always act honorably.

Vaguely worded laws can also lead conservative corporate counsels to make decisions like geoblocking all of the EU


In common law systems, "I know it when I see it" is good enough, but I believe most of the EU is under a Napoleonic system where you should define what you mean.


Nitpick: most of it falls under the Civil law system, some of which is Napoleonic. Wikipedia has a pretty nice map of the breakdown: https://en.wikipedia.org/wiki/Civil_law_(legal_system)#/medi...


The requirement of "subscribe is as easy as unsubscribe" is a metric which you could argue about in court, but would be very hard to game.

i.e. if signup is "email and credit card number" then you're going to be hard pressed to explain why a similar option to cancel does not exist and isn't accessible in as many clicks, with equivalent screen real-estate usage.


> i.e. if signup is "email and credit card number" then you're going to be hard pressed to explain why a similar option to cancel does not exist

So you argue that to cancel a subscription, you should have to provide your credit card number again. If a check on the credit card fails for some obscure reason, you cannot cancel your subscription.

This is what "subscribe is as easy as unsubscribe" also means.


I mean, it is precisely the court's job to interpret the law. Your take is just a deliberately twisted one; it wouldn't stand a chance in a court, the same way a willful offense can't be defined all that precisely, yet generally poses no problem.


Sure: but if it fails, then the card is invalid, and the card can no longer be billed. Again - you wouldn't get away with saying "well, we couldn't verify the number" as standard practice - you'd just get sued and then punitively fined if it was found to be a lie.


Sounds extremely subjective. How do you measure it and where do you draw the line? All marketing is somewhat coercive.

How do you get economic and business growth (things which are good for people - jobs and employment) without marketing and advertising?


It's objective, it's just very widespread. Amazon is probably the greatest offender, but most of the platforms and Big Tech are just dark patterns all the way down.


Windows is fucking the worst at this. The whole system experience is at some point "please log in to the Microsoft product you never signed up for" or whatever new uninteresting feature they have. That there isn't something like a Windows version stripped of that stuff is a shame.

Oh, and the firewall or Defender puts a big !! everywhere so it seems like my system will explode at any time.

Are they aware that people use it for work?


I hope this makes Google Pay app subscription cancellations actually cancel them instead of postponing them for 3 months or so


It will be fun to see Google's algorithm for ranking search results.


"Here's 100PiB of unlabeled neural net weights. Knock yourselves out."


>"Here's 100PiB of unlabeled neural net weights. Knock yourselves out."

You need to give the user an explanation of why you blocked their account, but if Google is kind enough to throw in the secret neural network on top, then some people would be happy to have a look at it and find even more garbage in it.


>Violation of guidelines and community standards.


Which is itself detected by a 80PiB neural network, based on the 60TB output of new rules that another neural network spits out every week based on the temperature outside of the corner office and the taste of Sundar's coffee this morning.


So Google results went to shit because the coffee became terrible?


The coffee roast temperature and grind is decided every day by yet another ML algorithm, as Google has effectively an unlimited army of ML researchers and infinite computing power. A rogue PhD on a cocaine binge unfortunately tuned the parameters too high once, and the results have been getting worse ever since, as a result of Sundar being increasingly disappointed by the coffee, but not being able to do anything about it because "it's the algorithm"


They want to know how the algorithms work, not the data itself.


"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

-- Linus Torvalds


Linus never had to deal with hundreds of petabytes of search data, nor ML black boxes, to be fair.


Google doesn't know what the algorithm is anymore. The whole site is a black box.


Same at FB as far as I could tell while I was there. "The algorithm" is a misnomer, popularized by the press but really kind of silly. There are really thousands of pipelines and models developed by different people running on different slices of the data available. Some are reasonably transparent. Others, based on training, are utterly opaque to humans. Then the weights are all combined to yield what users see. And it all changes every day if not every hour. Even if it could all be explained in a useful way, that explanation would be out of date as soon as it was received.

I'm not saying that to defend anyone BTW. This complexity and opacity (which is transitive in the sense that a combined result including even one opaque part itself becomes opaque) is very much the problem. What I'm saying is that it's likely impossible for the companies to comply without making fundamental changes ... which might well be the intent, but if that's the case it should be more explicit.
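
In cartoon form, the last step looks something like this (hypothetical names and numbers, not FB's actual blend):

    # Each pipeline/model emits a score for a candidate post; the feed
    # ranks by a weighted blend. If even one input model is opaque,
    # the final number is opaque too.
    scores = {"engagement": 0.7, "integrity": -0.4, "freshness": 0.2}
    weights = {"engagement": 1.0, "integrity": 2.0, "freshness": 0.5}

    final_score = sum(weights[k] * v for k, v in scores.items())
    print(final_score)  # 0.0 today; different tomorrow as weights shift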


What needs to be shared is the high-level architecture, not the nuts and bolts.

At a broad level:

What are the input sources (IP address, clicks on other websites, etc.) that you use to feed the model?

What is the overall system optimized for? Some combination of engagement, view time, etc.; just listing them, if possible in order of preference, is good enough.

Alternatively, what does your human management measure and monitor as the business metrics of success?

I want to know what behaviors are used (not necessarily how), and what the feed is trying to optimize for: more engagement, more view time, etc.

This is not adversarial; knowing this helps us modify our behavior to make the model work better.

Users already have some sense of this and work around it blindly. For example, YouTube puts heavy emphasis on recent views and searches. I (and I'm sure others) would use a signed-out session to view content way outside my interest area so my feed isn't polluted with poor recommendations. I may have watched thousands of hours of educational content, but Google would still think some how-to video I watched once means I need to only see that kind of content.

Sure, Google knows it's me even when I'm signed out, but they don't use that to change my feed. That's the important part, and knowing it can help improve my user experience.


> Google doesn't know what the algorithm is anymore

You are an insider?


They haven't talked much detail since Matt Cutts left, but over time they did sort of outline the basics. The core ranking is still some evolution of PageRank, weighting scoring of page attributes/metadata and flowing it down/through inbound links as well. But it's then altered via various waves of ML, like Vince (authority/brand power), Panda (content quality), Penguin (inbound link quality), and many others that targeted other attributes (page layout, ad placement, etc).

Even if some of that is off, the premise of a chain of some ML, and some not ML, processors means they probably can't really tell you exactly why anything ranks where it does.


It's clear the public and lawmakers like the idea of knowing how the algorithm works, but what you posted is about as deep as people can reasonably understand at a high level. I don't think they realize how complex a system built over 20 years that's a trillion-dollar company's raison d'être can be.


Those sound like awesome potential features. Allow users to assign 0-100% weights for each of those scoring adjustments during search, and show them the calcs (if you can).


Supposedly there are thousands of different features that are scored, and those are just the rolled-up categories that needed their own separate ML pipeline step.

Like, maybe, for example, a feature is "this site has a favicon.ico that is unique and not used elsewhere" (page quality). Or "this page has ads, but they are below the fold" (page layout). Or "this site has > X amount of inbound links from a hand curated list of 'legitimate branded sites'" (page/site authority).

Google then picks a starting weight for all these things, and has human reviewers score the quality of the results, order of ranking, etc, based on a Google written how-to-score document. Then tweaks the weights, re-runs the ML pipeline, and has the humans score again, in some iterative loop until they seem good.
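
A cartoon of that loop (purely illustrative; none of these signal names or weights are Google's):

    # Hand-picked starting weights over quality signals
    weights = {
        "unique_favicon": 0.2,         # page quality
        "ads_below_fold": 0.5,         # page layout
        "branded_inbound_links": 1.5,  # authority
    }

    def score(page):
        # page maps signal name -> strength in [0, 1]
        return sum(w * page.get(name, 0.0) for name, w in weights.items())

    # Iterate: rank with these weights, have human raters score the
    # results against the guidelines, nudge the weights, repeat.
    page = {"unique_favicon": 1.0, "ads_below_fold": 1.0, "branded_inbound_links": 0.3}
    print(score(page))  # 1.15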

There's a never-acted-on FTC report[1] that describes how they used this system to rank their competition (comparison shopping sites) lower in the search results.

[1] http://graphics.wsj.com/google-ftc-report/

Edit: Note that a lot of detail is missing here. Like topic relevance, where a site may rank well for some niche category it specializes in. But that it wouldn't necessarily rank well for a completely different topic, even with good content, since it has no established signals it should.


> and those are just the rolled-up categories that needed their own separate ML pipeline step.

AKA ensemble models.


I doubt it, they should know what the various algorithms are, especially the most important ones that drive most of the ranking. But their competitive advantage would be on the line.


Data is already an algorithm


Google manually adjusts its results for censorship reasons. This is probably why google has gotten so much worse, they don't want information to be freely accessible, they only want things they approve of to be seen.


I reckon you're right, but I doubt that it's manual or under Google's control. Google is too important a tool of control to be left in the hands of Silly Valley idealists.

I've always wondered why Sergey Brin and Larry Page retired when they did; it coincides almost exactly with the beginning of the SERP quality decline. I wonder what sort of conversation they had with the intelligence community to quietly walk out the door, cash out, and say nothing about the company since.


What happened was they got what they wanted: full control of running the business. Then they quickly learned that was actually a lot of work and not very much fun, made some fairly unpopular decisions (business, product and policy) with a fair amount of public backlash, put Sundar in charge and backed away.


IANAL, so happy to be corrected, but my understanding is that EU and US law work in quite different ways. EU law sets general rules, and law courts decide what that means with reference to existing legal precedents. US law is very, very specific about what each clause means and how it should be interpreted.

Every time I see these kinds of discussions I wonder if quite a few of the disagreements are due to e.g. US commenters worried by the relative lack of specific details.


Did you flip EU and US in your comment? My understanding is the exact opposite of what you wrote:

- US, common law, https://en.wikipedia.org/wiki/Common_law

- EU, civil law, https://en.wikipedia.org/wiki/Civil_law_(legal_system)

Citing: Civil law is a legal system originating in mainland Europe and adopted in much of the world. The civil law system is intellectualized within the framework of Roman law, and with core principles codified into a referable system, which serves as the primary source of law. The civil law system is often contrasted with the common law system, which originated in medieval England, whose intellectual framework historically came from uncodified judge-made case law, and gives precedential authority to prior court decisions.


The US has federal law that apply to all the states. The EU has binding resolutions (the general rules you mentioned), and then each nation passes its own implementation. It's similar to the US federal RealID act which set standards for licenses that the states could implement however they wanted.


Before people get overly excited about this - it will be very important to see how exactly it's worded in the legislation itself.

Anti-discrimination legislation has already made black-box algorithms illegal if they are deciding on anything that a user might take objection to - so for most use cases this is not a big change.

As for "the recommender systems will have to not be based on profiling" - unless we're talking about removing data-based recommender systems altogether, it will be interesting to see what the legislation considers profiling. If I tie your recommendations to the last viewed piece of content (content-contextual recommendation), is that profiling? It's arguably worse for the user and for society than profiling-based recommendation. If the recommendations are based on your explicitly chosen categories, is that profiling? Yet that's the principle used in news aggregators for the last 30 years.
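
To make the distinction concrete, a toy contrast (hypothetical code, not any vendor's actual system):

    # Content-contextual: rank only by overlap with the last viewed item.
    def contextual(last_viewed_tags, catalog):
        return sorted(catalog, key=lambda item: -len(item["tags"] & last_viewed_tags))

    # Profiling: rank by overlap with the user's accumulated history.
    def profiled(history_tags, catalog):
        return sorted(catalog, key=lambda item: -len(item["tags"] & history_tags))

    catalog = [
        {"id": "a", "tags": {"politics", "eu"}},
        {"id": "b", "tags": {"cooking"}},
    ]
    print(contextual({"politics"}, catalog))            # "a" first, from one click
    print(profiled({"cooking", "gardening"}, catalog))  # "b" first, from a profile

Note that the two functions are identical; only the input differs, which is the whole definitional problem.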

The wording is going to be important here.


> The greater the size, the greater the responsibilities of online platforms

> as a rule, cancelling subscriptions should be as easy as signing up for them

Overall I like these principles, but we'll see in a few years how they're enforced in practice. It's been 4-5 years since we've had GDPR and I still see sites that require tens of clicks to disable all advertising cookies (the most I've seen was 300+ clicks). Even Google only this week announced they'll add a "reject all" button to their cookie banners.

I expect it'll be similar in this case, companies will do bare minimum to try to stay compliant with the regulation, and it will take a few years to see real differences, but I hope it's at least a step in the right direction.


4-5 years is nothing for the law. You have murders from 15 years ago still in process in the courts. But eventually things settle. It just takes time. It's a bit like Ents.


> as a rule, cancelling subscriptions should be as easy as signing up for them

Before I sign up for any service, this is the first thing I check.


What actually constitutes a full explanation of the algorithm? Article doesn't get into this enough, it mentions a high level overview is required but not much else. I can imagine that it's not going to require sharing the codebase or IP, of course.


I see a vast technical writing documentation project in their future.


But they must be short and easy to understand by users. Like this:

"Our algorithms use gradient descent. Data flows through our connected tubes, slowly wiggling their size until the data starts flows back and forth faster."


That's quite awesome. It's like Dr Seuss.


Regarding the rules surrounding fake information, I wonder why the EU hasn't taken a similar stance against Fox News equivalents?


Rules around fake information scare me because they're a limit on speech, and as Russia has recently shown, fake information is anything a dictator doesn't approve of.


> as Russia has recently shown, fake information is anything a dictator doesn't approve of

This isn't an issue of "limits on speech", but rather, another reminder that one shouldn't enable folks to become dictators. Not having some reasonable limits on actual misinformation makes us all less free, however, because we cannot put our trust in some organizations.


This sounds pretty good in principle, but I feel like Marine Le Pen and her voters should not be given this power, and that is enough for me to say no one should have it.


Let people decide what type of news (fake or not) they want to consume; it's not up to the government to decide the content that people watch/read/listen to. If you see something online that you don't like, look the other way.


> Not having some reasonable limits on actual misinformation makes us all less free

This is a step towards "freedom is slavery."


If I had a penny for every time someone referenced George Orwell without understanding his politics or even having read his books, I would have a lot of pennies.

I take it you believe the tolerance paradox also gives off 1984 vibes?


Because we wouldn't have any media whatsoever to consume.


I am curious how they would "explain" AI algorithms where it is impossible to explain how/why the decision has been made.


It's not that the algorithms are impossible to explain, but in some cases the real explanations might require explanations, too. I think companies will probably get away with hand-wavy explanations like "you got this recommendation because you watched this movie," neglecting all the sourcing/ranking/filtering workflows.


In principle it looks good, but there's lots of potential for it to go wrong.

I just hope this doesn't backfire. The cookie law was also a thing the EU created with good intentions, after some politicians decided "omg cookies are bad", and we ended up still using cookies, but with pop-ups on every single website basically forcing you to accept their use.


"They have electrolytes."


If I had to guess, it's probably similar to the explanation the SEC or NYSE requires when you make suspicious trades.


"We run it through our deep learning model. Here's 50 gigabytes of neural net weights."


> Here's 50 gigabytes of neural net weights.

.. from a few months ago. Weights change daily, most likely updated by another NN.

I guess it's nice that lawmakers understand that at some point these companies used algorithms to search or sort stuff, but the industry has already moved to another level. We might be able to explain a specific result of a neural network (Shapley values or something like that), but the actual algorithm (=NN)... no way.
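
On the explanation point, here's a from-scratch toy of Shapley attribution for one specific output (brute force; real systems approximate this because exact enumeration blows up with feature count):

    from itertools import combinations
    from math import factorial

    # Stand-in "model": any black box mapping features to a score
    def model(watch_time, clicks, shares):
        return 2.0 * watch_time + 1.0 * clicks + 0.5 * watch_time * shares

    actual = {"watch_time": 1, "clicks": 1, "shares": 1}
    baseline = {"watch_time": 0, "clicks": 0, "shares": 0}
    names = list(actual)
    n = len(names)

    def value(subset):
        # Model output with `subset` at actual values, rest at baseline
        x = {f: (actual[f] if f in subset else baseline[f]) for f in names}
        return model(**x)

    for f in names:
        others = [g for g in names if g != f]
        phi = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(s) | {f}) - value(set(s)))
        print(f"{f}: {phi:+.2f}")  # the three values sum to model(actual) - model(baseline)

Each number says how much that feature pushed this one score away from the baseline. It explains a single decision; it still says nothing about "the algorithm" as a whole.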


Well then they can explain things like what inputs they are using (with concrete examples for each user), what metrics they are optimizing their NNs for, how their product success is measured, what their internal research is focused on, etc.

I feel a lot of people on HN are looking at this from a technical standpoint, while lawmakers are more interested in how these companies plan and position themselves. "Explain how you maximize profits and shareholder value" would be more accurate, in my opinion.


Unless you can reproduce it, this is not going to cut it. I don't think this is going to help. It only creates more useless software jobs.


"Linear Algebra"


Good. More.


Do you think the EU will enforce the law on non-US and non-EU companies? Will the likes of TikTok have to disclose theirs too? It will be interesting to see if they uphold the law equally for all.


All the big guys roughly know what kind of pipelines everyone has; they hire from each other, etc.

The level of disclosure is not going to break a lot of competitive advantage.

They basically need to say what input sources and feedback they use, and what modular blocks make up the different steps of the pipeline; nobody is asking them to expose the actual weights of the billion-parameter ML models they all probably have.

Even if, hypothetically, they did expose that level of detail, it would be useless for regulators, as they don’t have the resources to run the model, and testing a model for side effects in depth is hard.


A good start. However, let's go further and simply ban personal tracking and personalized algorithmic feeds. This would combat the echo-chamber effect, and social media could become a broad community experience, like TV and newspapers. It would also cripple tech advertising revenues, thus redressing the balance with traditional media.


I’ve seen the difference between what YouTube presents to me when I’m logged in vs. when I’m on a clean computer it can’t associate with me, and I do value the personalisation — when not logged in it shows me a hundred duds for every one thing I care about, and logged in it’s about 50:50.

How much of this improvement is a mysterious machine learning algorithm and how much is it just looking for new things from my subscription list, I’m not sure, and that’s important: being trapped in a torrent of self-reinforcing falsehoods is something I fell for in my teenage-goth-New-Age phase, which Carl Sagan condemned in The Demon-Haunted World, and which people in general have been falling for with every sycophant and propagandist, from soothsayers to tabloids, telling them what they want to be so.


> I’m not sure, and that’s important: being trapped in a torrent of self-reinforcing falsehoods is something I fell for in my teenage-goth-New-Age phase, which Carl Sagan condemned in The Demon-Haunted World.

Genuinely curious here: how can you tell you've escaped one set of self-reinforcing falsehoods while being sure you haven't fallen into another, different set?


You can’t be certain of not falling into a different set of false beliefs, but you can look for inconsistencies in your beliefs and be less and less wrong.

Spotting inconsistencies in my beliefs is what pushed me out of New Age mode, and ironically what pushed me into it in the first place (from Catholicism).


An alternative might be for personalisation to be opt-in rather than opt-out, even when signed in with an account (which shouldn't even be necessary for many services anyway).


One of the things that frustrates me about discussions of censorship here on HN is that there’s a lot of intense focus on censorship via deleting a tweet or Facebook post, but no focus given to the more insidious problem of censorship by algorithm.

I am wholeheartedly in favor of a free marketplace of ideas where (we would hope) good ideas win out over bad, but as it is, once you’re deemed by an algorithm to be susceptible to a certain category of extremist information, that’s all you’re ever going to see again; the competing ideas are never going to have a chance.

Algorithmic distribution of ideas is sorta like distributing ideas via gasoline-powered leaf blower directly to the face. I am free to speak my competing ideas, and so technically I haven’t been censored, but no audience is going to hear me over the leaf blower.


We need to get some level of control over the criteria for ranking and filtering. A third one is the UI - it is the place where all sorts of dark patterns hide.

I'd like to see the browser put in a sandbox and its inputs/outputs sanitised and de-biased before being presented to the user. That could also protect privacy more. We need more browser innovation. A neural net should be in every browser, ready to apply semantic rules.


I don't think people should be forced into the public square by law. If you want to live in an echo chamber, you should be able to. We don't forcibly close convents. If I want to choose the "Smart Feed™", although I can choose not to, that should be my choice to make.

I don't know TikTok, but people seem to like its choices.


I think the distinction is that generally it should be a conscious choice to be in the echo chamber - and not the easy unknowing default choice (if you even have a choice) for smart feeds.


> personalized algorithmic feeds

But there are good uses, like for music. I can’t really think of a downside for music tbh, it’s not like music tends to spread extremism, and on the upside lesser known artists have a better shot at being discovered through the algorithm.


I usually have few reasons for being concerned about the future.

I think wars (even with the ongoing war that Russia started), climate issues (even with the high consumption present today) and poverty (even with many countries still in it) will all trend downward. However, this echo chamber fueled with misinformation is one of the things I do worry about.

I am so happy the EU has the power and will to make good changes that benefit everyone when other parts of the world do not.


I think it's fair to use personal data collected on the same site. Without it, most sites would be rendered useless.


Indeed - that would be a legitimate use of cookies etc. In fact, if that were enforced, we could get rid of the annoying cookie warnings.


How does a ban on personalized algorithmic feeds work if each user is subscribed to a different set of others?


"news from friends" might be ok if it is presented without algorithmic curation - i.e. strictly on time order or similar.


Strict chronological ordering runs into the issue that some people post far more frequently than others and can flood the feed. Perhaps some minimal algorithm to up-rank infrequent posters would be helpful without being engagement-driven, as sketched below.
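
A hedged sketch of what such a minimal, non-engagement-driven algorithm could look like; the function names and the 3-hours-per-extra-post constant are made up:

    from datetime import datetime, timezone

    def feed_order(posts, recent_count):
        # Mostly-chronological feed with a flood penalty: prolific
        # posters get spread out rather than dominating the top.
        # posts: dicts with "author" and "created_at" (aware datetimes)
        # recent_count: author -> number of posts in the last day
        now = datetime.now(timezone.utc)

        def effective_age_hours(post):
            age = (now - post["created_at"]).total_seconds() / 3600
            # Each extra recent post by the same author "ages" this one.
            penalty = 3.0 * max(0, recent_count[post["author"]] - 1)
            return age + penalty

        return sorted(posts, key=effective_age_hours)

Nothing here looks at clicks or dwell time, only timestamps and post counts, which keeps it curation-light.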


So should I be prevented from following only anti-capitalist people on Twitter, or only following right-wing subreddits? Should people also be banned from subscribing to only a single newspaper, versus a mix of newspapers with different political leanings? What about looking only at socialist web pages that link to other such web pages? Should web pages be forced to link to pages with other political leanings?


No, that's not what I meant. More that it should be a conscious decision to knowingly consume content with a particular bias - as you do when you pick up a certain newspaper or turn over to a specific tv channel, as opposed to being algorithmically presented with a stream of content that may veer in a direction without you being aware.


The EU is becoming more like an authoritarian state. They put constraints on companies but allow governments to have full control and surveillance over their citizens. It's so hypocritical.


I don't see how it's hypocritical to give democratically elected governments different possibilities than random private companies.

It's government's job to put constraints on companies, stopping them from becoming the absolute assholes they become if they have no limitations. That does not make them authoritarian.


Some people are sad that they can't set up businesses which exploit the populace as easily in the US I guess?


Sure. On the other hand, you can usually choose which companies to interact with. When it comes to governments, the relationship is not optional. Your government usually has more ways to affect your life than most private companies do (like no-fly lists).

A few years ago the true extent of the Swedish program for tracking left-wing sympathisers became known. It ran from the sixties up until 1998. For example, if your car was seen outside a left-wing publication you could end up on a list somewhere. That caused you to be automatically excluded from 5-10% of all jobs without you ever finding out about it until 20-30 years afterwards. Imagine wanting to become a police officer, a pilot, or an engineer and never understanding that the reason you didn't get an interview was that you had parked in the wrong spot one day years before. Or that your sister briefly dated a left-wing journalist at some point.


I find it funny the right has so much fear that what's happened to the left for generations will begin happening to them.


The tyranny of the majority is a real threat. Power should be "shared" (or under contention). Companies just want to take your cash; governments can take your freedom.


We live in a society where money is inherently connected to the freedoms you have, and companies want to draw you in so that you have to use their product. Basically every job that doesn’t require a bachelor’s degree nowadays requires you to have a cellphone to communicate with them. So if you don’t have a cellphone (realistically meaning being under the control of Apple or Google) and you don’t have the qualifications to get a college degree, you can’t get a job.


As an EU citizen, I fully expect my country and the EU to "put constraints" (i.e. regulate) on companies, so everything seems to work as expected.


You're saying governments shouldn't have the ability to govern its people, or inquire as to how global scale companies impact their citizens? That doesn't make any sense whatsoever.


"... allow government to have full control and surveillance over their citizens."

This is hyperbole.

This suggestion is the logical next step from the part of the GDPR that says citizens should be able to understand how automated decisions are made about them and their data. This is about transparency for citizens, not governments dictating how algorithms should work.


Not trying to defend the EU here, but isn't this true of every government?



