IMF researchers: digital footprint yields better credit assessment (imf.org)
110 points by da_big_ghey on Feb 12, 2021 | 165 comments



> the type of browser and hardware used to access the internet, the history of online searches and purchases ... once powered by artificial intelligence and machine learning, these alternative data sources are often superior than traditional credit assessment methods, and can advance financial inclusion, by, for example, enabling more credit to informal workers and households and firms in rural areas

Here is one of the most sensitive applications of ML. This one, like face recognition, can directly harm people.

These guys only seem to think about the "superiority" of their method in minimizing lending risk. But the other party to this deal also has something to lose.

What's the difference between this and the Chinese social scoring system? They want to use private search logs. "Your privacy for a credit score?"

Maybe the model is biased, how do we know? Is there an independent bias assessment process? Is there a way to sue for unfair rating? What are the values of the people working on the training set? Is there a "speak to human" step in the process where you can plead your case? I don't want an automated social discrimination machine.


Having to trade away your privacy for access to financial services (where this is headed) is not OK just because the spying is unbiased.

This is a common red herring, like the proliferating surveillance cameras and face recognition - somehow it is always implied that the main reason we should worry is the racial bias, while everyone being followed 24/7 by a digital private investigator is a secondary concern.


Well said. I would also add that computing such a score requires the same kind of intensive surveillance that many people here on HN (including me) are morally opposed to.


> Maybe the model is biased, how do we know?

There are multiple mutually incompatible metrics of fairness; being unbiased by one makes it biased by the others. See e.g. https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...
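
Not from the linked slides, just a toy simulation of that tension (every number made up): with different base rates across two groups, one shared score cutoff gives unequal approval rates, while per-group cutoffs tuned to equalise approval rates make the error rates diverge instead.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    group = rng.integers(0, 2, n)
    # Latent creditworthiness differs across groups by construction.
    q = rng.normal(np.where(group == 0, 1.0, 0.0), 1.0)
    repays = rng.normal(q, 1.0) > 0          # outcome, correlated with q
    score = rng.normal(q, 1.0)               # lender's noisy estimate of q

    def report(approve, label):
        for g in (0, 1):
            m = group == g
            print(f"{label} group {g}: approval {approve[m].mean():.2f}, "
                  f"TPR {approve[m & repays].mean():.2f}, "
                  f"FPR {approve[m & ~repays].mean():.2f}")

    # One shared cutoff: identical treatment of scores, unequal approval rates.
    report(score > 0.5, "shared cutoff:")

    # Per-group cutoffs chosen for equal approval rates (demographic parity):
    cuts = {g: np.quantile(score[group == g], 0.5) for g in (0, 1)}
    report(np.where(group == 0, score > cuts[0], score > cuts[1]), "parity cutoffs:")
    # ...and now the per-group TPR/FPR diverge instead.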


The Chinese social scoring system is much more extensive. It includes jaywalking and speaking ill of the government. It affects your internet speed and your ability to board trains.

Sure, the “western countries” are heading in that direction, but they still have a long way to go.


The blog post is itself an advertisement for a longer (32-page) report released in August 2020, available as a downloadable PDF; it is largely an emerging-trends report.

"Financial Intermediation and Technology: What’s Old, What’s New?"

https://www.imf.org/en/Publications/WP/Issues/2020/08/07/Fin...

The prospect of Fintech credit scoring from digital footprints is from "On the Rise of FinTechs – Credit Scoring using Digital Footprints", by Tobias Berg, Valentin Burg, Ana Gombović, Manju Puri, July 2018, 51 pp.

https://www.fdic.gov/analysis/cfr/2018/wp2018/cfr-wp2018-04....

That was discussed at the time on HN (46 comments, June 22, 2018):

https://news.ycombinator.com/item?id=17377967


Hacker News headlines of the future:

Chrome extension which changes your user agent request header to get you a better mortgage

Googling "Fun activities that don't cost money" increases your interest rates, study finds

Samsung partners with Equifax to offer better car loans to users who upgrade to a new phone


Would information about CEO behavior be useful to investors? That's worth considering. Browsing history? Phone call patterns?

It's being worked on at the Harvard Business School.[1]

"We measure the behavior of 1,114 CEOs in six countries parsing granular CEO diary data through an unsupervised machine learning algorithm. The algorithm uncovers two distinct behavioral types: “leaders” and “managers”. Leaders focus on multi-function, high-level meetings,while managers focus on one-to-one meetings with core functions. Firms with leader CEOs are on average more productive, and this difference arises only after the CEO is hired. The data is consistent with horizontal differentiation of CEO behavioral types, and firm-CEO matching frictions. We estimate that 17% of sample CEOs are mismatched, and that mismatches areas associated with significant productivity losses."

[1] https://www.hbs.edu/ris/Publication%20Files/17-083_b62a7d71-...


What will be measured, will be gamed


Goodhart's law: when a measure becomes a target, it ceases to be a good measure


Ha! Exactly


I'm not a fan of the traditional credit reporting system. It is often a black box that makes it difficult to understand how the actual credit scoring algorithm works. But one important aspect of the current system is that all the inputs are known. We don't know exactly to what degree each input matters, but we know the inputs. A new system needs to keep that aspect, or else it is going to cause pain for a lot of people who are financially punished without being aware of why. This kind of transparency is also important for fairness. As other commenters have mentioned, some traits, like race, would be statistical indicators of creditworthiness due to a variety of systemic socioeconomic issues. However, that is generally considered unfair, and it is illegal to take it into consideration.


Important to consider that credit scores weren’t a thing until 1989


That is the content of a meme circulating recently, which is, at its most charitable, imprecise (and IMO pretty close to entirely wrong).

Engineer Bill Fair and mathematician Earl Isaac (the “F” and “I” in FICO) started their company in 1956 and had a bank credit card scoring system available in 1970.

Equifax was founded in 1899 (not a typo). The original name: “Retail Credit Company”.

https://www.opploans.com/blog/a-brief-history-of-credit-scor...

https://www.myfico.com/consumer-division-of-FICO.aspx


This seems ridiculously simple to abuse.

Set up a 'headless' chromium container with your social media cookies, set the user-agent to the very latest $5,000 Mac, and have it google stuff like "most exclusive golf clubs in xxx city" and "best winter tires to optimize insurance rates"

A few days of that and you'll qualify for a McMansion mortgage in the monoculture part of the suburbs where all the dentists live.
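
(A sketch of the satire, assuming Playwright's Python API; the queries and user-agent string are invented, and nobody outside the scorer knows what signals would actually count:)

    from urllib.parse import quote_plus
    from playwright.sync_api import sync_playwright

    QUERIES = [
        "most exclusive golf clubs near me",
        "best winter tires to optimize insurance rates",
        "how to max out 401k employer match",
    ]
    # Pretend to be the shiniest Mac money can buy.
    FANCY_UA = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0 Safari/537.36")

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        ctx = browser.new_context(user_agent=FANCY_UA)  # social media cookies could load here
        page = ctx.new_page()
        for q in QUERIES:
            page.goto("https://www.google.com/search?q=" + quote_plus(q))
            page.wait_for_timeout(3_000)                # linger like a human would
        browser.close()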

Future is bright!


Not so future ... some six years ago I was advised by a recruiter that, to apply to a certain SV company, I had to have a Facebook profile with a minimum of 100 friends. Two days of playing some stupid Flash game got me 400 friends and a position at the company.


> set the user-agent

I remember a uni network that required me to install some anti-malware software. After changing my user agent, voilà: no requirement.


Online-behaviour-profile as a service!


Guess I'll work on a browser plugin that makes lots of responsible-sounding financial queries.



Cue arms race


People ask me why I use cash, why I use bitcoin, why I think Uber and Deliveroo and their ilk are poison - this is why.

Take your data and shove it up your arse. If you work on this stuff, you're a cancer cell. You just haven't realised it yet.

I don't need your "credit". I don't even need you. No-one does.


How does Uber violate privacy? People are carrying their phones at all times, so BIG BROTHER knows where everyone is anyway.


Not everyone carries a phone, and not everyone activates it at all times. And one massive unconscionable data leak doesn't justify others.

But since you ask, there was "God View":

"'God View': Uber Allegedly Stalked Users For Party-Goers' Viewing Pleasure" (Kashmir Hill)

https://www.forbes.com/sites/kashmirhill/2014/10/03/god-view...


Uber is a phone app ... so everyone who calls Uber does carry a phone.


Well then, there's one of the problems.

Again: not everyone carries a phone.

"An one" in my earlier reply should have been "And one"


Editorialised title, despite a clearly buried lede.

"What is Really New in Fintech" is original.

"IMF says credit score should incorporate browsing history" was submitted.

Buried lede:

Fintech resolves the dilemma by tapping various nonfinancial data: the type of browser and hardware used to access the internet, the history of online searches and purchases. Recent research documents that, once powered by artificial intelligence and machine learning, these alternative data sources are often superior than traditional credit assessment methods, and can advance financial inclusion, by, for example, enabling more credit to informal workers and households and firms in rural areas.


Banks tracking your digital footprint made me vomit in my mouth a bit. Can we please not normalise surveillance culture?


If you want to reach out to unbanked people, one way to start would be to start prosecuting the predatory practice of reordering transactions and charging large overdraft fees rather than simply declining transactions. It should be illegal.


[Edit: no longer relevant, see replies] No it doesn't? This is the only part of the post relevant to the headline:

> The second and most complex problem is that certain kinds of people, like new entrepreneurs, innovators and many informal workers might not have enough hard data available. Even a well-paid expatriate moving to the United States can be caught in the conundrum of not getting a credit card for lack of credit record, and not having a credit record for lack of credit cards.

> Fintech resolves the dilemma by tapping various nonfinancial data: the type of browser and hardware used to access the internet, the history of online searches and purchases. Recent research documents that, once powered by artificial intelligence and machine learning, these alternative data sources are often superior than traditional credit assessment methods, and can advance financial inclusion, by, for example, enabling more credit to informal workers and households and firms in rural areas.


Submitted title was "IMF says credit score should incorporate browsing history". That was egregious editorializing. Submitters: please don't do that! Cherry-picking the most sensational detail and making that the title is definitely not ok, and fabricating something the article doesn't say is right out! https://news.ycombinator.com/newsguidelines.html

When a title buries the lede in favor of bland corporate rhetoric, it's fine to replace it, but you should replace it with something accurate and neutral, using representative language from the article itself. I've taken a crack at that above. If anyone can suggest a better title by these criteria, we can change it again.


That wasn't intended to be editorializing or picking a sensational detail. Rather, I found that to be the most relevant point and believed that it would be most salient to HN. I certainly wouldn't call it fabrication; it explicitly states that this data could be used to qualify many people for better credit, doubtless via some credit score system or similar. I think this quote best lays out that position:

> Recent research documents that, once powered by artificial intelligence and machine learning, these alternative data sources are often superior than traditional credit assessment methods, and can advance financial inclusion, by, for example, enabling more credit to informal workers and households and firms in rural areas.

Where "these alternative data sources" includes digital habits such as browsing history.

Edit: I would suggest "IMF researchers find that digital footprint yields better credit assessment" as an alternative. These aren't fintech data sources inherently; they've been used for advertising for years.


Rather, I found that to be the most relevant point

That's what editorializing is. You can point out the thing you found salient in a comment, you can find another piece that better represents the interesting thing, etc. You can't make up the title, though.


Without "making up" a headline, and were I to have used the original headline of "What is Really New in Fintech", I would not have considered this a useful submission due to the complete lack of information conveyed by the headline. The article contained important info but the headline was meaningless, so I did my best to come up with something. I guess my first attempt fell short.


The HN submission guidelines do allow for changes in title, though usually it's to combat undue sensationalism.

Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

https://news.ycombinator.com/newsguidelines.html

In this case, the problem is an excess of bureaucratese, euphemism, jargon, and vagueness.

As Orwell wrote in "Politics and the English Language" on five badly written English passages:

[T]wo qualities are common to all of them. The first is staleness of imagery; the other is lack of precision. The writer either has a meaning and cannot express it, or he inadvertently says something else, or he is almost indifferent as to whether his words mean anything or not. This mixture of vagueness and sheer incompetence is the most marked characteristic of modern English prose, and especially of any kind of political writing. As soon as certain topics are raised, the concrete melts into the abstract and no one seems able to think of turns of speech that are not hackneyed: prose consists less and less of words chosen for the sake of their meaning, and more and more of phrases tacked together like the sections of a prefabricated hen-house.

https://www.orwell.ru/library/essays/politics/english/e_poli...

It's very common to see this tactic used in communications from organisations or institutions about matters of significant import. Take the recent "Email from Jeff Bezos to employees" as a classic example:

https://www.aboutamazon.com/news/company-news/email-from-jef... (https://news.ycombinator.com/item?id=26006656). The title entirely fails to adequately describe the actual subject.

As is the case with the IMF's blog entry.


The title is neither misleading nor linkbait. If you think the article is bad, you can't fix it by making up the title; just find a better article. It's really much more straightforward than you are making it out to be and does not require analysis of Orwell's works. Although it is a somewhat curious choice on your part, given that you seem to be trying to show that plain editorializing (i.e. "I found that to be the most relevant point") is not, in fact, editorializing. It's editorializing.


The original IMF title is a lie by vagueness.

The article is evil in large part because it proposes an absolutely Orwellian regime ... well, pretty much precisely as Orwell suggests such things happen. In vague, political, bureaucratic, anodyne language, with no awareness or consideration of consequences, damage, or alternatives.

As a reference to the evil, the article is invaluable. It is the primary source, straight from the horse's mouth. Much as, say, Mein Kampf was, and the reason a young journalist violated copyright to ensure that it was prominently available in the United States in unexpurgated form:

<https://www.latimes.com/archives/la-xpm-1988-02-14-mn-42699-...>

You're presenting a false dichotomy of "making up the title". The question is how best to find a truth within the mealy-mouthed source. Which has mostly been done (I'm not entirely satisfied with the present HN title "IMF researchers: digital footprint yields better credit assessment", but it certainly improves on the original).


Let's wrap up here but, again, there is a totally trivial remedy for an article being "evil". Just don't post it to HN.


Exposing the evil is the utility here. Letting this proposal float unobserved is the greater evil.

Sunshine ... isn't always a great disinfectant, but it is in this case.


Good lord. The logic twists here are crazy.

Anyway, in developing countries models along these lines are already being used. There are no credit scores, or only limited ones, and identity issues can be big. So people are already getting financing as a result of this evil, usually microfinance.


No, you have to allow the source to do the editorializing, because that's so much better.


Ok, we'll use that latter suggestion above. Thanks!


Tl;Dr.

"The lower strata of society are being forced into precarious employment and debt slavery. (What's new?) Here's how to use a global surveillance apparatus to enable more debt slavery."


dang, as per the email I just sent you, this is a really tough call, and the article itself is largely to blame.

The submitted title was very clearly editorialised. But both the original title and the entire article text are so anodyne as to make an accurate 80-character précis of the submission, as HN requires, all but impossible.

The implied violations of personal privacy, third-party doctrine, intellectual freedom, and more, are staggering. I'd hope (though I also begin to doubt) that search engines, browser providers, ISPs, and even online merchants would rebel at all aspects of this proposal.

At the same time, the article highlights numerous issues that seem to be reaching a breaking point:

- Wage-based or even salary-based compensation, during a time of global pandemic and lockdown, isn't sufficient.

- Credit has (for decades) been used to supplant sufficient living wages for vast portions of the population.

- A necessary foundation of a global p2p or b2c commercial system is sufficient trust on both sides of that transaction that parties will uphold their ends of the bargain, and that neither side has been compromised, internally or externally, in such a way as to commit fraud. In-person, cash-on-the-barrelhead transactions solve many of these problems. Local dealing, similarly. Globally we're trying to supplant intrinsic trust mechanisms with mediated ones, with varying degrees of success and tremendous amounts of unintended consequences.

Rather than put all my article responses in this subthread (which likely will and should get reaped to the bottom of the discussion), I'm hoping to write up my thoughts separately. I just want to make clear that this is as subtle a proposal as, oh, say, "Proposal for large-scale passenger handling logistics using IBM tabulation equipment in the central European region" would have been in 1939. This is absolutely horrific, and an exquisite example of Arendt's banality of evil.

Arnoud Boot, Peter Hoffmann, Luc Laeven, Lev Ratnovski are names that should appear in business and privacy case-history courses for decades.

"Browser, hardware, search and purchase history in Fintech credit assessment" is my best spin. 77 characters. Kinda-sorta falls out of that buried paragraph number 6.


The submission is clearly editorialised, but the lede is also clearly buried, and exceedingly difficult to extract from the content as written (HN offers guidelines on improving or clarifying titles; they just don't work here).

Unfortunately, this also makes metadiscussion of the submission and title dominate the thread, at least until dang cleans up.

I'm pinging dang.


Shouldn’t credit scores be an incentive to become a more thrifty and financially conscientious citizen? I can hardly see how knowledge of my browser and hardware is going to incentivize me to spend more responsibly. I can only see this serving the interest of banks, who can grade me as a marginally safer investment.

This seems to be in need of regulation. Unfortunately, Liz Warren was the biggest advocate for consumer financial protection, and her state’s governor is a Republican, which means she can’t have a Cabinet position, because her governor would appoint a Republican to her seat and upend the current Democratic Senate majority. Seems like we’ll be walking toward a social credit system like China’s after all.


I hope that regulatory agencies start cracking down on these "alternative sources" of creditworthiness data. Researchers have shown that it's very easy to infer things like political party, race, sexuality, etc. (things that are often illegal to take into consideration when determining credit) from other things like where you live, TV shows you like, apps you have on your phone, etc.

And strangely (or not) this blog post from the IMF barely touches on the huge privacy concerns with this stuff.


This. Slippery slope to denying credit based on a consumer's decision to avoid using Facebook or to search using DuckDuckGo.


DDG is more popular in the tech-literate crowd, most of whom are earning large salaries, so we'd expect DDG usage to be positively (not negatively) correlated with creditworthiness.


You are completely missing the point and just making up assumptions to argue, stop


I wasn't arguing with or misunderstanding the point being made. I was just disputing that DDG usage would be positively related to credit unworthiness. I admit that it was pedantic.


I think the denial of credit for DDG is less about creditworthiness and more about "deplatforming" based on personal details. Or that's what it'll become.


That's highly unlikely. As the GGP said, using DDG is associated with being wealthy, so it will have the opposite effect over time.

If you're a rich white guy, this oppression is unlikely to apply to your choices. Instead, your choices will be used as filters to identify you to the exclusion of others.


> if you're a rich white guy

Amazing how skin color is the first thing brought up by those (presumably) fighting racism. Ironic?

And my point is that, aside from the pedantry about DDG's effect specifically, the result will be scoring systems based on more than just financial capabilities. I.e.: "we don't want to lend to those who use DDG because they are rich white guys". Deplatforming based on perceived "facts".


You are missing the point: DuckDuckGo is famous for its privacy stance, therefore one would expect that one's use of DuckDuckGo would not appear as "DuckDuckGo usage", but instead as "no search engine usage". The end result would be similar to what seems to happen in the USA (if one believes the anecdotes found on the Internet): not having a credit card means one is rejected when doing things like buying or even renting a house or a car, even though that has nothing to do with credit cards.


Tech literacy is negatively correlated with the likelihood of falling for scams or predatory practices. Companies who typically make their money out of screwing their customers over (and relying on the fact that they don't know/don't bother to rectify the situation) might very well preemptively raise prices or outright deny service to people who they think wouldn't fall for their bullshit.


And nothing of value was lost...


If these sources of data are actually more accurate predictors of creditworthiness, then shouldn't we be applauding their use? If it so happens that tall people are less likely to repay loans, why is it wrong to charge tall people higher interest? Otherwise aren't we just socializing the cost imposed by artificially fuzzy criteria?


When I have this conversation with myself, here's what I come up with:

Shouldn't we be allowed to take everything into account when determining creditworthiness? Even if it's socially uncomfortable?

We get locally more accurate pricing / risk assessment that way, but we also create feedback loops that we as a society have decided we want to not contribute to. The cost for this is the pricing inaccuracy which we collectively eat.

So, I don't think you're wrong, but maybe consider if what we're socializing the cost of is worth the price?


I think the key is "feedback loops". We're not failing to price in inherent differences, out of some moral ideal - we're trying to fix a larger inefficiency, which is that some groups of people are unable to reach their full potential. It's not a cost - it's an investment. It's regulation to fix a negative externality.

[digression into risky territory, here be dragons]

...at least, that's the idea if you believe that racial differences are 100% socially constructed. It gets super tricky if it turns out there is an inherent component, which is why the very idea is such a taboo. It wouldn't be especially shocking, from a scientific standpoint, if it turned out that different "races" tended to have subtly different inherent personality traits that might be of interest to creditors or insurers. But it would cause major headaches, if you want to convince people to break those feedback loops! Even if the inherent differences are only a seed, and the feedback loops account for most of the differences in outcome, you have a massive "political" problem on your hands.


You mention racial differences and the sensitive point about biological differences. But I think there is another kind of racial difference that is not biological: the situational difference. The situation is different for every person and, on average, for every race.

Depending on the situation, the same people behave differently. So it's not the skin color in itself but the different context that makes races behave differently. Being poor or uneducated, for example, is a better predictor of defaulting on payments, but race, wealth and education are correlated.

The affordances of the context influence choices: how many good options vs bad options you have in your situation. For example, people who would behave perfectly OK in normal life might try to cheat the system to get vaccinated sooner. Because they have only bad options, they might pick a bad behavior. "I got no choice!" Equality of opportunity, and not of outcomes, seems to me the best approach.

The reverse of this is also happening: in the more developed and feminist-leaning cultures, fewer women choose to do scientific research. Why? They have better options, so many great choices to pick from. It's insulting to question people's choices when they are made in an abundance of good options, as if they were children who can't make their own decisions.


Doesn't need to be inherent to "races". Why wouldn't culture / way of living be the elephant in the room?


Well, you could change your culture/way of living, but you can't change your race. If it turned out, with proven confidence and evidence, that race is indeed a component that indicates financial stability or security, then being the wrong race would cause problems for you.

But then on the flip side, if you look at, say, athletic selection for a sport like basketball or athletics, you'd find that race is inherently a criterion, because certain races seem to have an inherent advantage (or, more correctly, certain races seem to have a disadvantage). But nobody ever says anything in those cases.


> if you look at, say, athletic selection for a sport like basketball or athletics, you'd find that race is inherently a criterion, because certain races seem to have an inherent advantage (or, more correctly, certain races seem to have a disadvantage). But nobody ever says anything in those cases.

I think a charitable explanation for why this is, is that sports are pretty much always a voluntary activity, so it's natural not to get as worked up over either implicitly or explicitly favoring a race, compared to, say, doing the same thing in employment or in scoring someone's credit, which will almost certainly impact everyone in society involuntarily.


"races" is a cultural concept designed to denigrate certain people.

Using race captures an obvious group of people by how they've already been discriminated against


That is a good summary. An analogy I might add that could further clarify things would be preexisting-condition protections for health insurance. Preventing insurance companies from turning down people who they know will require expensive medical care is going to lead to worse pricing for the rest of us. However, much of society has decided that is a cost worth enduring in the name of fairness.


Health insurance is weird because it’s become the way you pay for something expected. In every other insurance you will absolutely get denied or charged a huge sum if you’re knowingly going to be a loss.

It’s like applying for home fire insurance and saying you burned down your previous 3 homes and will likely burn down this one.


The problem with this is that it cuts open the feedback loop of being responsible for one's own health. It's the credit assignment problem all the way down.


Are you American? Because the high cost of health insurance does not deter risky behaviors towards health or a glaring lack of responsibility.


Well, I personally haven't consented to socializing the cost of anything. I'm not sure what the ethical grounds are for imposing this on me, and in so doing diluting the available accuracy of data about people.

These feedback loops you're referring to, what order are you proposing for your chicken and your egg? What's causing what in your view?


> Well, I personally haven't consented to socializing the cost of anything.

This is a strange take. If you had to "consent" to laws there would be utter chaos. The places where you can pick and choose which laws apply to you have no real laws. And all of those places are dangerous and destitute. That's why laws are decided by the majority, there will always be dissenters for every law out there. Some harmless, but others that want to pillage, burn things down, and worse.

There will always be laws that you don't like, unless there's no laws at all.


Here we arrive at the concept of negative and positive rights. You don't have to consent to be subject to laws that bar you from interfering with others. You should have to consent before being forced to provide others with goods and services, or the benefit of your labour.


This still doesn't make any sense. Every single government service provides services to people besides you. You're essentially saying there should be no government.

The government plows the streets. Provides police and fire service. Makes sure street lights and sidewalks exist. Handles deeds for property. Handles court cases. If you had to "consent" to all these services providing benefits to other people the entire government would cease to exist.

"I have to consent to anything that doesn't directly benefit me" means a ton of things aren't even going to exist when you actually need them.


I think the deal with feedback loops is that the elements of the loop are mutually causal, which is what makes them feedback loops at all.

Which one of the causes was first in history seems academic and immaterial if we're at t>0 and wanting to devise a strategy to slow it down or stop it.


It's called tyranny of the majority and happens in countries with a weak constitution.


You are consenting by participating in that society.


> So, I don't think you're wrong, but maybe consider if what we're socializing the cost of is worth the price?

But you do not socialize the cost, you just move it onto someone else.

For example, take two people A and B. The risk-assessing algorithm risk1() gives risk1(A) = 3.3 and risk1(B) = 2.9. The risk limit for creditors is 3, so person A would not get credit.

But risk1() uses some forbidden information, so we instead use the less informed risk2(), which gives risk2(A) = 2.9 and risk2(B) = 3.1. Now person A would get credit, but person B would not.


Creditworthiness is directly used on job applications etc., so it feeds back into people’s ability to pay back loans. As such, accuracy isn’t an independent variable; it’s part of a larger system. We don’t want to artificially make people with, for example, long fingers unable to pay back loans. As such, you want to avoid feedback loops even at the cost of some accuracy.


No. We don't actually want the most economically efficient system here. There are other extremely important characteristics - fairness, due process, legibility, and incentives come to mind here.

Like, presumably you could improve your creditworthiness prediction by cross-referencing against bankruptcy law related searches on Google and in library checkouts. We absolutely should not allow this, though - we care a lot more about having a society where people are free to read and speak about what they want than optimal consumer credit allocation.


The requirement for "fairness" and "due process" indicate that you think credit is something a person is entitled to. Is this true? If so, can I ask why you think so?


No.

Creditworthiness should be based on an individual's past actions and not some attribute they may have been born with. Why should a tall individual who's never missed a payment have some invisible penalization applied because some other tall people are worse at repaying loans?

Don't you see the issue here? You're penalizing individuals not based on their own measurable behavioral signals, but simply due to some physical trait they share with others. Not to mention, we wouldn't even understand why there's a correlation in the first place. Maybe 30% of tall people have some unknown gene; why should the other 70% of tall people who don't have the gene be treated the same?


We must disambiguate between political objectives and the practice of credit risk modelling.

Credit risk, f(X), is an unknown population function that needs to be estimated using observed data X.

If including tallness into X improves our estimate of f(X), then we've gotten a better model.

You've asserted that X should only contain an individual's past actions instead of their inherent traits such as tallness. This may satisfy certain political objectives, and that's fine if we're being upfront about the underlying motivation, but from an ML perspective your prescription doesn't make much sense unless you have some prior knowledge about the function f(X) that tells you that tallness both isn't relevant and isn't acting as an instrumental variable for some other missing feature.

Unless you have such domain knowledge, you've little business asserting what are appropriate features to use in order to improve model quality.
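
To make the "better model" claim concrete, here is a minimal sklearn sketch on synthetic data, where the trait carries real signal purely by construction (every number and name is invented):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 50_000
    income = rng.normal(0, 1, n)
    trait = rng.integers(0, 2, n)   # e.g. "tallness", assumed informative here
    # Default probability depends on both features, by construction.
    p_default = 1 / (1 + np.exp(-(-1.0 - 0.8 * income + 0.5 * trait)))
    default = rng.random(n) < p_default

    for X, label in [(np.column_stack([income, trait]), "with trait"),
                     (income.reshape(-1, 1), "without trait")]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{label}: AUC = {auc:.3f}")   # the fuller X scores higher

Whether to allow that extra AUC is, as you say, a separate question from whether it exists.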


Credit risk modelling is a Political Objective. They can't be separated. The only place they are separated is in the figment of someone's imagination. The use and investigation of these models does not happen in a vacuum. The real question is: does better modeling help society? And that is through and through a political question.


I don't agree that this has anything to do with political objectives. It's a question of ethics. My domain knowledge is irrelevant; the ML modelling aspect is irrelevant. The discussion was specifically around whether it's OK to include inherent traits when determining the creditworthiness of an individual.

If you think it is, that's fine. We might as well just take the same approach to crime, and start locking individuals up or not extending job offers, NOT because they've done a single thing wrong, but simply because they're statistically more likely to.


> We might as well just take the same approach to crime, and start locking individuals up or not extending job offers,

The main difference is that people have the right to a trial and to be considered innocent until proven guilty. But there is no 'right to credit'. Credit is fundamentally a two-party contract.

Also, there is a shared limit to risk: forcing creditors to take more risk with some people means they may not take that risk in other cases (not giving credit to someone who would be marked lower risk with a more informed decision), or it forces them to raise credit costs for everybody.


Making the statistical model for E(claims) worse on purpose by excluding relevant features (e.g. inherent traits) is political. The insurers have no choice, since the ethical views of the majority are foisted upon them through politics. The causal path has its roots in an ethical conversation that has played out in public, but this has mediated itself through politics/legislation.

The crime analogy is inappropriate. Insurance is a private voluntary arrangement between two consenting entities. Convictions on the other hand are an involuntary imposition on an unwilling party.

One could make the argument that allowing inherent traits in the pricing of insurance is the less authoritarian and more utilitarian option, since it is less forceful state interference in private business and leads to more accurate claims pricing and less subsidisation of insurance for person A by person B. Your prison sentencing analogy on the other hand implies more force, which is why I don't view it as a valid analogy.

The ethical argument can go either way depending on the axioms we pick a priori. If we pick Libertarian deontological axioms, then the ethical choice is to allow inherent traits into the model. If we pick racial equity deontological axioms, then we get another conclusion.


> It's a question of ethics.

Which is an important thing that isn’t math.


I also think it is possible that the model learned that information from too small a data sample. Building a dataset with a good sample for every such feature, in a relatively balanced manner, is really difficult.

Consider a sample of 10 out of 1 million with a height value of 7m, where somehow 7 of the 10 had poor ability to repay loans. With such a small sample size, would this relevant factor be a good thing to rely on?


Unless you have causal proof, it's irresponsible for a business to use such factors in modeling outcomes.


The point is that these algorithms do study your behaviour and correlate it with the behaviour of others. These predictions don't have to be 100% accurate to be useful. What's wrong with 70% of tall people benefiting from being associated with the other 30%? This only lasts until someone figures out how they are different and breaks the association, right?


> This only lasts until someone figures out how they are different and breaks the association, right?

Would you be OK if banks started charging you 4% more interest because, say, you have Irish heritage? Your behaviour hasn't changed and nobody can explain why it matters that your father was Irish, but the models determined it's statistically significant.

What if at some point 30 years down the line it's determined it wasn't even related to Irish heritage? There was just coincidentally long-running, extreme predatory lending by banks in cities with large Irish populations.

Would you still feel there's nothing wrong with it?


But you're only looking at this from a single perspective, where every single bank starts charging the same 4% simply because of Irish heritage.

What if one bank started doing it because they use this model, but another bank doesn't, because they don't believe the model?

If you were an Irish person, you would just move banks. And if it turns out, indeed, that the first bank was over-charging the 4% (i.e., no Irish person defaults due to their being Irish), then they've lost business for no reason, and the second bank got more business.

However, if it turns out that being Irish _does_ indeed cause you to default more, then the bank charging an extra 4% was correct. The second bank, upon finding proof of this, would also start charging an extra 4% (otherwise, they'd be losing money to the first bank).

So in the end, a bank can only charge the most appropriate interest rate for the market to bear.


The number of loans a bank gives out involving such a specific factor might be low enough that it would take 30 years for the bank to have an actual study. That's 1-2 generations of abuse.

Most banks just copy each other.


> a bank can only charge the most appropriate interest rate for the market to bear.

The market can remain irrational longer than you can stay alive.


Yes, we are just socialising the cost of different races, sexes, ages, etc being different.

Many people like that.


People like many things, not all of them are ethical. Why do you think it's okay to impose this on people who don't consent?


More accurate isn't always good because creditworthiness is all about stereotyping people.

Consider this example: lesbians are statistically more likely to divorce. The ML model decides that lesbians are a higher mortgage risk and raises their interest rates. It does this indirectly, by raising interest rates on married couples with different last names and on owners of Subaru vehicles.

In another, the model sees that people who follow NBA players on Twitter are more likely to default. It then assigns them higher interest rates. Well, it turns out that poor urban black people are far more likely to be interested in the NBA than the average person, probably because it's one of the only sports you can play in a parking lot. The model decides to raise interest rates and cut lending to most black people.
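
That proxy path is mechanical and easy to reproduce. A toy sketch (entirely synthetic data and invented effect sizes; `follows_nba` is a made-up feature): the model never sees the protected attribute, yet prices the proxy anyway because the two are correlated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 100_000
    protected = rng.integers(0, 2, n)   # attribute the law says you can't use
    # A proxy feature correlated with the protected attribute:
    follows_nba = rng.random(n) < np.where(protected == 1, 0.7, 0.1)
    # Defaults are actually driven by income, which (in this toy world)
    # also correlates with the protected attribute:
    income = rng.normal(np.where(protected == 1, -0.5, 0.5), 1.0)
    default = rng.random(n) < 1 / (1 + np.exp(-(-1.0 - 1.0 * income)))

    # The lender's model is only ever shown the proxy:
    model = LogisticRegression().fit(follows_nba.reshape(-1, 1), default)
    print(model.predict_proba([[1], [0]])[:, 1])
    # Proxy holders get a visibly higher predicted default rate,
    # even though the outcome had nothing to do with the proxy itself.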

"Perfect" credit worthiness modelling is only going to be beneficial to rich white and asian people. Essentially favoring the stereotypically richest ethnic groups.

This is extremely problematic for any sort of equality initiatives.

Treating everyone the same regardless of means is definitely socialist. It's the same as removing the pre-existing conditions exclusions for health insurance. It favors the unfortunate at the expense of the fortunate.

I'm not going to go into whether that is good or bad for society but I ask you to consider the extreme, historic level wealth inequality in the US where 1% of the population has over half the money.


Cathy O'Neil's done some excellent work in this area. "Algorithms are opinions embedded in code."

There's a summary of it on the TED Radio Hour here:

https://www.npr.org/2018/01/26/580617998/cathy-oneil-do-algo...

TED Talk https://embed-ssl.ted.com/talks/cathy_o_neil_the_era_of_blin... (video)

Cathy O’Neil Is Unimpressed by Your AI Bias Removal Tool (A RedTail Q&A) https://redtailmedia.org/2018/10/29/redtail-talks-about-flip...

Weapons of Math Destruction: Cathy O'Neil adds up the damage of algorithms https://www.theguardian.com/books/2016/oct/27/cathy-oneil-we...

Human Insights missing from Big Data: https://www.ted.com/talks/tricia_wang_the_human_insights_mis...

Further references:

Weapons of Math Destruction outlines dangers of relying on data analytics: https://www.npr.org/2016/09/12/493654950/weapons-of-math-des...

Can Big Data Really Help Screen Immigrants? https://www.npr.org/2017/12/15/571199955/dhs-wants-to-build-...


A lot to unpack here; let me try. Are machine learning algorithms capable of stereotyping people? How accurate does a stereotype have to be before it's a useful predictor?

If these models lead to good data, then what's wrong with them? If people who follow the NBA do in fact default on their loans more, shouldn't they pay higher interest?

I agree that the healthy should not be compelled to subsidize the costs of healthcare for the unhealthy. Can you tell me why it's ethical to force my compliance?

It seems you primarily see things along racial divides, grouping people into black, white, and Asian. There are more nuanced ways to group people, which is exactly what these algorithms are doing.

Also, what's wrong with wealth inequality? I'd be much more concerned about absolute quality of life, which has increased dramatically for everyone in the last half century.


Algorithms:

- May be based on bad, inaccurate, or nonrepresentative data.

- May be based on factors intrinsic to individuals which they cannot change.

- May mask underlying bias factors (as in the NBA example above.)

- May reflect existing biases in behaviours. E.g., arrest, promotion, recruitment, or admissions data used to encode an AI selector, which merely codify biases reflected in the underlying historical data.

- May reverse causality, implying A causes B, when in fact B results in A.

- Amplify existing inequalities. Starting points in life matter, and an algorithm which simply amplifies the existing inequities of wealth, race, place of birth, religion, health, etc., compounds rather than addresses these issues.

- Fail to consider fairness or equity in decisionmaking. This is the underlying fundamental problem in credit-based resource allocation. People need access to resources regardless of their creditworthiness, though there might well be cases in which allocations are modified or managed given behavioural issues. Reinforcing standing biases does not address the underlying inequities.


I would say that accuracy is a greater concern than "fairness" or "equity" since it's quite hard to agree on the meanings of the latter terms (and as pointed out above, these are political, not statistical concerns, and the distinction is meaningful). I would expect that inaccuracy in such models would be iteratively improved. If the models are already more accurate than existing methods, doesn't that make them better?

Doesn't a more accurate model in fact ensure that fewer people are lumped into groups they don't belong to, and that more deserving people (i.e. people who are likely to repay loans) have access to credit?


All these criteria are value judgements and exquisitely political.

I give you Karl Marx:

As soon as land becomes private property, the landlord demands a share of almost all the produce which the labourer can either raise, or collect from it. His rent makes the first deduction from the produce of the labour which is employed upon land.

...

Civil government, so far as it is instituted for the security of property, is in reality instituted for the defense of the rich against the poor, or of those who have some property against those who have none at all.

...

The interest of [businessmen] is always in some respects different from, and even opposite to, that of the public ... The proposal of any new law or regulation of commerce which comes from this order ... ought never to be adopted, till after having been long and carefully examined ... with the most suspicious attention. It comes from an order of men ... who have generally an interest to deceive and even oppress the public.

...

The masters, being fewer in number, can combine much more easily; and the law, besides, authorizes, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it. In all such disputes the masters can hold out much longer.

...

Masters are always and everywhere in a sort of tacit, but constant and uniform combination, not to raise the wages of labour above their actual rate. To violate this combination is everywhere a most unpopular action, and a sort of reproach to a master among his neighbours and equals. We seldom, indeed, hear of this combination, because it is the usual, and one may say, the natural state of things, which nobody ever hears of. Masters, too, sometimes enter into particular combinations to sink the wages of labour even below this rate.

...

A man must always live by his work, and his wages must at least be sufficient to maintain him. They must even upon most occasions be somewhat more; otherwise it would be impossible for him to bring up a family, and the race of such workmen could not last beyond the first generation.

...

No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable. It is but equity, besides, that they who feed, clothe, and lodge the whole body of the people, should have such a share of the produce of their own labour as to be themselves tolerably well fed, clothed, and lodged.

...

Wealth, as Mr Hobbes says, is power.

...

POLITICAL œconomy, considered as a branch of the science of a statesman or legislator, proposes two distinct objects: first, to provide a plentiful revenue or subsistence for the people, or more properly to enable them to provide such a revenue or subsistence for themselves; and secondly, to supply the state or commonwealth with a revenue sufficient for the public services. It proposes to enrich both the people and the sovereign.

The first object of political economy is to provide subsistence for the people.

Oh, silly me, that's Adam Smith. So hard to tell them apart.

https://en.wikisource.org/wiki/The_Wealth_of_Nations/

http://oll.libertyfund.org/titles/smith-an-inquiry-into-the-...

As well as your politics and economics, your morals and statistical understanding are deeply flawed.


You cite all of this as though it has been handed down by a supreme authority and I implicitly have to agree with what is written. I do not, and Adam Smith had many contemporaries who did not as well.

Liberty itself is in opposition to flourishing and collective good. Is it right to murder a man to save 1000 innocent children? What about 5? What about one? If it is right, please do explain why.

All of this treats "the people" as entities without agency, some formless mass that the enlightened politicians have a duty to care for, whether they like it or not. In the absence of divine authority, there is simply no basis for this sort of sense of superiority. You decide what's right for you, and leave others to do the same. Then we can all get along.


Point being that the mainstream and foundation of economic thought are quite strongly at odds with your minority, fringe, and amoral views. By your own admission.

There are numerous holes in your view. You seem strongly motivated to avoid seeing or admitting any of them.

Cheers.


I'm afraid you've not pointed out any holes. Where does "the mainstream" get its moral authority from? Just from being mainstream? Don't you see how that's massively problematic?

You've not described how my views are amoral either. What's wrong with consent exactly?


The holes have been pointed out to you repeatedly including my own effort here: https://news.ycombinator.com/item?id=26120826

Smith makes his case. He is the father of modern economics, whether you subscribe to his beliefs or not. The question of equity specifically is addressed in the passage:

No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable. It is but equity, besides, that they who feed, clothe, and lodge the whole body of the people, should have such a share of the produce of their own labour as to be themselves tolerably well fed, clothed, and lodged.

That makes measurement unambiguous and shows the failure of your multiple assertions otherwise.

The values you propose of "accuracy", "efficiency", "dignity", and political vs. economic concerns (Smith clearly combines the terms) are equally if not far more arbitrary and poorly supported by empirical evidence.

I'm not going to spoon-feed him to you, though the curation of passages posted above would be an excellent starting point for one actually capable of benefiting from exposure.


But this already presumes that the universal goal is to be flourishing and happy, rather than to be free. These goals are often opposed. What I'm honestly asking is from where authority is derived for the "flourishing" camp to force their opinions onto the "freedom" camp?

The main value I'm proposing is "consent". It's rather less arbitrary than "flourishing" or "happy" or "fair". The basic premise is that all human interactions, relationships and associations ought to be as consensual as we have realized romantic ones ought to be. You're no more entitled to a surgery or a loan than a partner. If you disagree, then all I'm asking is from where do you derive the entitlement?


> I agree that the healthy should not be compelled to subsidize the costs of healthcare for the unhealthy. Can you tell me why it's ethical to force my compliance?

Because it makes health insurance impossible to get when you need it most. It makes health insurance a scam because any serious illness and your company will fire you and you won't be able to get any other insurance. Insurance becomes cheap because it's useless.

Wealth inequality is a problem because money = power. Extreme wealth concentration is not compatible with democracy because eventually these people and corporations are so wealthy they can just buy votes. Spiraling wealth inequality gets you a Russia situation, an Oligarchy in everything but name.

Your argument is not a far stretch from "why don't we just let poor and sick people die". It's pretty much equivalent to "why don't we allow anyone to discriminate however they want", which got us great things like slavery.


My argument is that all human interactions should be consensual. Slavery is clearly and obviously out of the question. Discrimination is the act of making choices; you do it every time you order something to eat. I do believe people should be able to make whatever choices in personal and professional association they please, without anyone trying to peek into their heads and guess at their intentions.

I strongly believe in caring for the poor and the sick. That is one of the oldest definitions of the virtue of charity. I am wholly against imposing my own personal virtues on other people, though. Why are you so comfortable doing so?


> I am wholly against imposing my own personal virtues on other people though. Why are you so comfortable doing so?

I'm only explaining the world as I see it, just as you did in the comments above. You shouldn't be so dismissive of dissenting opinions.

In my opinion, your viewpoint that you should be able to choose everything you want to pay for will never work. A sizable portion of the population will decide to pay nothing while still receiving benefits, leading the rest to stop paying.

It's not like you can deposit a nickel for every street light you pass, or every crime prevented by the police. Some services must be paid by everyone if they are to exist at all because their value is not directly tangible.

Education is a good example. You may not go to a school and say "well, I don't go so I shouldn't have to pay for it". Yet you still reap most of the benefits of the rest of the population being able to read and write. If most of the population couldn't read or write, the effect on GDP and job availability would be terrible and would definitely affect you directly. The value is provided to some extent to everyone in society. The only way to avoid any personal benefit is to live isolated from the rest of society in the middle of nowhere.

Another clear example is government provided healthcare for infectious diseases. The government does a lot of vaccinations and it's impossible to avoid benefitting from that without cutting off all interaction with other humans. You directly benefit from others being vaccinated by having a lower personal risk of getting infected.

I think your view is too narrow and focuses on transactional exchanges, like buying gas or paying a toll. But it doesn't capture the larger benefit to everyone, including yourself, of things like having fewer criminals, illiterate people, and junkies running around. Which is the societal benefit of a lot of government programs and why everyone is expected to pay for them.

Your idea of not paying for services you don't use falls apart when considering that many of these services directly benefit you even if you don't directly consume them. And the only way to avoid the benefits is to avoid society completely.


Have you considered the logical endpoint of your line of reasoning?

Basically, loans should only be given to those who are 99% likely to pay them back. Anyone who is at risk of default, say because they lost their job should be denied, making a permanent descent into poverty and misery even more likely.

Health insurance should only be given to those who are already healthy. Anyone who is sick enough to significantly benefit from insurance should be rejected. Even if they have an easily treatable condition that costs $50 in medicine a month, they should be rejected, guaranteeing their health gets worse.

In a world that worked so hard over centuries to create modern healthcare, we should only use it for people who barely need it. We may have literally figured out how to transplant organs and save lives, but nah, it's too expensive, so why bother?

In your world, the slightest shock in someone's life would mean all safety nets get pulled, guaranteeing they will never recover. Then that data gets fed back to an ML model, confirming that yes indeed, people who need money or healthcare never do well anyways, so why bother with a lost cause?

Anyways, your worldview is fucked. Hope you think much more deeply about it. Or just grow old in the world you wish for, maybe one day you will need surgery, insurance will reject you, banks will not loan you the $200k it costs, and you will die for the sake of the algorithms and profit maximization.


The purpose of risk assessment isn't to only give loans/insurance to 0 risk people. It's to quantify the level of risk.

Once you know the level of risk you can then give loans, adjusting the interest rate to make up for the risk. By hiding information from lenders, you actually make them less likely to give a loan, rather than more likely. An unknown level of risk is 100x worse than a known high level of risk.
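
To make that concrete, here's a minimal sketch of risk-based pricing (all numbers are made up): the lender sets the rate so that expected repayment matches a risk-free benchmark, given a probability of default (PD) and a loss given default (LGD).

    # Toy risk-based pricing; all figures are hypothetical.
    # Solve (1 - pd) * (1 + r) + pd * (1 - lgd) = 1 + base_rate for the rate r,
    # so the lender's expected return equals the risk-free base rate.
    def risk_adjusted_rate(base_rate: float, pd: float, lgd: float) -> float:
        return (1 + base_rate - pd * (1 - lgd)) / (1 - pd) - 1

    print(risk_adjusted_rate(0.03, 0.02, 0.60))  # low-risk borrower:  ~4.3%
    print(risk_adjusted_rate(0.03, 0.20, 0.60))  # high-risk borrower: ~18.8%

Better information moves borrowers from "declined" into "priced"; uncertainty is what gets you declined outright.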


Maybe credit-based loans are the problem.

What are the alternatives?


> Have you considered the logical endpoint of your line of reasoning?

In fact I have. In my estimation, if we adopted a model of interaction that emphasizes consent, the incapable and unfortunate among us would rely on charity to have their needs met, as they have for most of history. This seems much more ethical than to rely on force through taxation. It can't be compassionate to impose your idea of compassion onto other unwilling people.

It is also worth noting that as long as we consume goods made by people in conditions we don't tolerate domestically, we're being wildly hypocritical. All we've done is offshore the suffering, and in the process massively harm the environment shipping things to and from more lenient jurisdictions.

> In a world that worked so hard over centuries to create modern healthcare, we should only use it for people who barely need it. We may have literally figured out how to transplant organs and save lives, but nah its too expensive so why bother?

It should be reserved for those who can afford to pay for it. If you want a modern operating theatre, filled with expensive equipment and highly educated people, someone has to pay for it, regardless of whether it's for yourself or for your horse. I'm only saying that the person who pays should do so consensually.

Likewise, if all you can afford is a street dentist with rusty tools, then I don't see from where one derives entitlement to anything else. As mentioned earlier, we already buy things made by people who live in such conditions, so the point is moot.

> Anyways, your worldview is fucked. Hope you think much more deeply about it. Or just grow old in the world you wish for, maybe one day you will need surgery, insurance will reject you, banks will not loan you the $200k it costs, and you will die for the sake of the algorithms and profit maximization.

This all may be, but I will be dying with my dignity intact, not greedily reaching into the pocket of a stranger, deluded into thinking I'm entitled to what's inside.


You seem to think prices and wages are set in stone by the gods or ideal free markets. They're not. Healthcare is an inelastic good, people have to pay whatever it costs.

Lots of research into the cost vs. outcomes of healthcare in the US vs. other countries shows that it's very wasteful. Heck, there was an interview with an ex-health-insurance exec who admitted they created the narrative of healthcare being shitty in Canada to sway public support for privatized healthcare. I personally buy drugs that cost $1-2 in other countries but $100+ without insurance in the US. In most cases, state-owned healthcare ends up being much cheaper - so perhaps a surgery that costs $200K to one person would cost only $100K when funded by the state, because the state has MUCH more negotiating power. So in a sense, public healthcare makes the entire system more efficient.

If we took on your Darwinian model, then it's a very simple negotiation - life-saving medicines or surgeries are worth infinity, because otherwise you die. So if you ever need either of those, the doctors will simply ask for every single dollar and asset you have in perpetuity, easy.


> You seem to think prices and wages are set in stone by the gods or ideal free markets.

I don't. I think prices and wages ought to be privately negotiated between consumers and producers, and employees and employers respectively. I also think that people are no more entitled to goods, employment or services from other people than they are to romantic relationships, which clearly need to be consensual.

Healthcare in the US is extremely heavily regulated; that's why it's so expensive. The minimum quality of care is absurdly high. There's no budget option. If you want any medical care, you have to see a physician with a ridiculously expensive education, the supply of which has been artificially limited by regulatory capture.

This is all highly unethical. If you need a simple procedure, why can't someone with the equivalent of technician training do it for much cheaper? Again, this would be a consensual interaction between two parties, which is nobody's business but their own.

> If we took on your Darwinian model, then its a very simple negotiation - life-saving medicines or surgeries are worth infinity, because otherwise you die. So if you ever need either of those, the doctors will simply ask for every single dollar and asset you have in perpetuity, easy.

This just totally ignores the idea of competition. If some doctors are charging infinity, then others would realize they could capture market share by charging less than that. The veterinary comparison is very apt: why should human healthcare work any differently?


Guess we found something we agree on :D I totally agree with healthcare being over-regulated to death. It's ridiculous that we have a constant (and ever-growing) shortage of doctors, but you still can't pick up basic medicines without a prescription, order your own tests, etc. It's incredibly frustrating, because you can't take control of your own destiny even if you want to, because of all these paternalistic laws. "There's no budget option" about sums it up. I do see some signs of it changing: with wearables, continuous glucose monitors, DNA testing kits, etc., people are taking control of their own health. But the medical institution will take a long, long time to kill.


Indeed =) Then you should also agree with me that adding more regulation to the system is counterproductive. We should be looking to deregulate as much as possible, allowing people to take control of their own destiny, as you aptly put it. This is exactly what a consent-focused interaction model looks like: enforce nonviolent interaction and contract law, but otherwise just leave people alone as much as possible.

Healthcare for humans is just another service; it doesn't need to be a special category. There doesn't need to be an association whose membership is granted a monopoly on this service. In the absence of regulation, private industry would self-organize into tiers based on demand, from the very lowest tier of service to the highest. This is what we see in countries with less regulatory burden and more market freedom, such as India or China.

More fundamentally, there is no moral justification for any of it. Just because one person or group of people believes in empathy, or a particular model of social order, doesn't give them the right to impose that on others who disagree.


Yes, I'm in favor of doing away with a lot of the existing regulation and privatized health insurance altogether.

But that's not the question we started with. To get back to the original question -

I still believe using better modeling to make health insurance or loans available only to those who barely need it is highly unethical. If we are staying stuck in the current model of insanely high sticker prices that are only affordable with insurance, then I don't believe it's ethical to deprive people who need insurance - because in this current model, there is no easy free market alternative, only illness and death.


The language you use suggests you believe that people are entitled to health care as a service. Where do you derive this entitlement from? Honestly, what's wrong with illness and death? It's not something any of us can avoid indefinitely.


In general, the defining goal of modern capitalism is to create surpluses and progress along various axes such that we can afford to be "entitled" to more.

I (and most other people) believe people should be "entitled" to not be slaves to tyrants, to freedom of speech, to food/warmth/shelter, to basic utilities like running water and electricity. If we have a surplus of resources such that we COULD either give everyone healthcare or give a few people more money, why should society as a whole pick the latter?

Except I don't call that entitlement; it's called progress. By your logic, we should not take for granted anything that wasn't present at the birth of civilization, such as not being mauled by a bear or freezing to death. Why should our standard for humanity as a whole be stagnation instead of progress? Does that benefit anyone? If your country was taken over by a dictator and you were forced at gunpoint to become a slave, would you prefer other countries to try to restore democracy, or would you prefer that they say "why should you be entitled to freedom?" and do nothing?

I think you are basically taking libertarianism and "not reaching into the pocket of strangers" to a crazy extreme. Yes, it is a virtue to be self sufficient. Charity and empathy are also virtues.


> By your logic, we should not take for granted anything that wasn't present at the birth of civilization, such as not being mauled by a bear or freezing to death.

Yes I agree wholeheartedly. We should not take anything for granted, and perceive any improvements to our quality of life as things to be grateful for, not entitled to. We are all born naked and hungry. Everything we receive afterwards we get from other people, people who are not obligated to give us anything.

All of your examples of being forced into chains at gunpoint oppose the fundamental principle I'm advocating for: consent. Just because someone has something (say indoor heating), it doesn't entitle you to have the same thing. Just because others live comfortably, it doesn't mean you should as well. Just because you define "progress" a certain way, it doesn't give you the right to impose that definition on me, does it?

> Yes, it is a virtue to be self sufficient. Charity and empathy are also virtues.

Yes! Where we disagree is whether these virtues should be voluntary or compulsory. For all of history until recently, they were voluntary. We have NGOs and other organizations which operate on this model of voluntary charity. This is the only model that's ethical. What gives you the right to impose your virtues on others?


Again, this is all maybe sorta possible in a completely theoretical new world, but in this world the debate is different - either we continue to let health insurance companies deny coverage to most, making it so healthcare goes only to those who barely need it, or we socialize it so that it's accessible to all. Even if I agreed on the ideal end state, that doesn't change the fact that there are only 2 realistic next steps, and one is better IMO even if it's further away from the "ideal".

I understand you are a libertarian and believe only in consent; I don't need that explained to me again. I'm talking about actual concrete next steps in a concrete situation (healthcare), not "imagine you could rebuild the entire world in an ideal fashion". That's the question I really want you to answer.

More along the lines of the philosophical (but less important, perhaps pointless) debate: consent is not as clear-cut as you think it is. Why should I consent to your single-minded focus on consent as the only valid principle of organizing society, overriding all other concerns of practicality? What if I view that as an imposition of your virtues on mine?


I would say any next step should be focused on moving towards the ideal, not away from it. As we've identified earlier, deregulating the actual practice of medicine, and adopting a model that more closely resembles veterinary care, would take us quite far in the right direction. Basically, instead of socializing the ridiculous costs of the current system, we should focus on creating a wider spectrum of options for people. There should be a budget option.

I strongly disagree that it's pointless to have an underlying philosophy that decisions are based on, quite the opposite in fact. I would say that having a consistent philosophy or set of principles is the most important thing when making any sort of decisions, especially in the administration of complex systems such as society.

To answer your second (very valid) question, the underlying assumption is that we are all more or less equal, and therefore neither of us has a right to impose anything on the other. The question I've asked that you've not answered is what is the source of your authority to decide what virtues I should live by? I'm not saying you have to live by mine, just that you lack the moral authority to make me live by yours.

The fact that I'm asking for a source of authority is just a fancy way of saying "why do you think you can impose on me and not expect me to resist with violence and subversion?"


Two wrongs don't make a right.

Maybe we can only fix one evil at a time.

The approach that rejects the rusty-tools dentist is also more likely to reject the oppressive offshored (or, quite frequently, domestic) labour, rather than binning all empathy when looking at either, at other similar situations, or worse, and shrugging it off with a "meh, not my problem".

The worldview you're expressing has absolutely no dignity.


Well, this all implies that we agree that empathy should be a driving force in decision making, overriding all others, to the point that it's ethical to impose it onto wider society. Where do you derive the authority to extract compliance, by force, with your virtues and worldview at the expense of others?

Also I don't think we agree on the definition of dignity. In my eyes it's perfectly possible to die of disease or starvation with dignity. Otherwise you're implying that nature itself is bereft of it.


[T]he healthy should not be compelled to subsidize the costs of healthcare for the unhealthy. Can you tell me why it's ethical to force my compliance?

TDR-TB (totally drug-resistant tuberculosis).


This is a great idea if you'd like to continue down our current path of creating a large group of disenfranchised people who think violent revolution is the only way to get out of their current situation.


I agree with the premise, but the problem is that it gets ethically, uh, 'interesting' if you replace 'tall' with, say, 'black', in your argument.


I think it gets to the point.

If black people are for some reason (and I am not saying they are; I do not know) less likely to pay back loans, then that is the problem that needs addressing. It might be a little painful to admit the huge racial divide, but it is real, and it is a problem we need to solve. Taking the easy way out and lying to ourselves only harms society in the long run, and destroys the lives of people who cannot pay back the loans and end up in a huge debt cycle that is very difficult to get out of.


What if these predictions and others like them are self-fulfilling prophesies?

Maybe someone is objectively less likely to pay back their loans, less likely to succeed in school, and more likely to be arrested for a crime. But when those predictions end up forcing them to live in a worse neighborhood because they can't qualify for a loan in the better one, go to a worse school because they aren't expected to succeed in the better one, and live with a higher police presence because they're more likely to be arrested, then those predictions aren't just predicting worse outcomes but are actually reinforcing them.
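
A toy simulation makes the loop visible (every number here is invented purely for illustration): even a perfectly accurate cutoff, plus a small real-world penalty for being denied, steadily widens the gap between the two groups.

    # Toy feedback-loop simulation; all parameters are invented for illustration.
    # Each person has a latent "financial health" score. A model approves anyone
    # above a cutoff; denial slightly degrades future health (worse neighborhood,
    # costlier credit elsewhere), while approval slightly improves it.
    import random

    random.seed(0)
    people = [random.gauss(0.5, 0.1) for _ in range(1000)]
    CUTOFF, BOOST, PENALTY = 0.5, 0.01, 0.02

    for year in range(10):
        people = [h + BOOST if h >= CUTOFF else h - PENALTY for h in people]

    approved = [h for h in people if h >= CUTOFF]
    denied = [h for h in people if h < CUTOFF]
    print(f"mean 'health' after 10 rounds: approved {sum(approved)/len(approved):.2f},"
          f" denied {sum(denied)/len(denied):.2f}")

The model never makes a single wrong call, yet the denied group ends up measurably worse off than it started.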


So should we use the algorithms to charge different races different interest rates? Or should we not use them, and address the problems you mention instead? Because in the previous comment it sounded like they wanted to use them, with applause.


Using the algorithm is addressing the problems by not hiding them.


Should we legalize drunk driving, and address the problem by trying to solve alcoholism?


I think this would be more akin to banning drinking and drunkenness altogether. I don't think that went so well the last time we tried it.


And how do you propose we solve it?

If there's a systematic financial disadvantage experienced by black people, possibly as a result of racism, then continuing to financially disenfranchise them will hardly help, will it? Access to liquid capital is pretty critical if you want to make money, isn't it?

Judgement is notoriously self-fulfilling. The best way to keep someone poor is to treat them like a poor person. The best way to drive someone to crime is to treat them like a criminal. The best way to keep someone ignorant is to treat them like an idiot.


And how did 'give everyone a home loan, never mind if they can't afford it' work out in 2008?

Sometimes I do wonder if people who present the argument you just presented realize that that is exactly how we've been doing things, and it just keeps making the problem bigger.

Maybe I'm too much of a romantic, but I still harbor hope that at some point they will realize it and begin to contemplate trying something different.


We're not talking about giving loans to people who can't afford it. We're talking about giving loans to black people.

Nobody is arguing against fairly assessing an individual person's ability to pay off a loan - the issue at question is whether it's okay to use race as a statistical proxy for that.


Of course we should address those problems as we identify them. But solutions may take years, even decades, to take root. What should lenders do in the meantime?


> What should lenders do in the meantime?

They could start by not making the problem worse by giving loans to people who can't pay them back.


Lobby politicians to outlaw discriminatory risk assessment just to maintain balance of power in the market?


I don't think it gets ethically interesting, only politically, which isn't very interesting.

Let's take a similar case: kill switches on cars. As you may know, not too long ago the industry introduced a feature that allows remote disabling of a parked vehicle. This feature was used by automotive lenders as an assurance of repossession and payment: they just stop the car from working on a late payment. The result: lots of people who were previously unable to access automotive financing now could. This was a very good thing for everyone!


The politics do get ugly around skin color, but ethically it comes down to whether it is 'fair' to judge someone for something they have no control over[0] (genetic skin color, or genetic tallness.)

Do we want a society where it is okay to charge red heads more for car insurance because they are red heads? Why should people have to bear the sins of others who happen to look like them?

[0]And isn't intrinsic, e.g. people with big feet might pay more for shoes, but not for mortgages.


Well, to talk about that we first have to agree on a definition of "fair". This doesn't seem practical in a society with such a diversity of opinions on the subject as ours. The more important question is to what degree is it ethical to impose your definition of "fair" on others? I would say that there is no ethical basis for the sort of extreme imposition that's currently the status quo.

If some group wants to charge redheads more, it's their choice. If someone else sees that as a market inefficiency, they can start their own shop where they don't do that. Who cares if you or I think it's "fair"?

Edit: My original point was: the more nuanced we can be about whom we decide to give credit to (which these sorts of techniques allow), the less often someone will find themselves lumped into a group in which they don't belong, and the more deserving people will have access to credit.


Let me illuminate the problem:

s/tall/black/

(inb4 downvotes)


I actually wasn't thinking that at all. I chose tall because there is actually a major disparity in income between tall and short people. This disparity seems to be the ultimate counter-argument to any misguided attempt at "levelling the playing field".


I was once peripherally involved in a conversation about ML algorithms being used to determine credit worthiness. One of the bankers explained that race as a factor is illegal, but the algo partially reduced to a handful of heuristics -- e.g. that "last names ending in 'z' were less credit worthy" -- and that while these all roughly correlated to race, they were considered perfectly permissible since the algo was a black box and no one ever ran the correlations (conveniently). TLDR: sickening.
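
For what it's worth, the correlation check nobody ran is nearly a one-liner. A hypothetical audit sketch (the data and column names here are invented):

    # Hypothetical proxy-variable audit; data and column names are invented.
    # If a "neutral" feature correlates strongly with a protected attribute,
    # the model can discriminate without ever seeing race directly.
    import pandas as pd

    df = pd.DataFrame({
        "surname_ends_z":  [1, 0, 1, 0, 1, 0, 0, 1],
        "protected_group": [1, 0, 1, 0, 1, 0, 1, 0],
        "model_score":     [0.2, 0.8, 0.3, 0.7, 0.1, 0.9, 0.6, 0.2],
    })

    print(df["surname_ends_z"].corr(df["protected_group"]))  # feature vs. group
    print(df["model_score"].corr(df["protected_group"]))     # output vs. group

"No one ever ran the correlations" is a choice, not a technical limitation.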


Here's a case where the Google AI ethics researchers should have unleashed their hounds, not on Yann LeCun and large language models (stochastic parrots? lol). Design evaluation datasets and checklists, do research into "bias washing", call them out on biased models in deployment, etc.


Agreed. I can't remember the details but I was listening to a podcast lately (Sam Harris maybe?) where the guest was talking about an example of an ML-driven system that was being used to determine whether or not to approve mortgage applications, and upon closer inspection it turned out to be latching onto some data that gave clues as to the applicant's race and using that in its decision.

You can guess the outcome: the software re-invented redlining.


We should ban the credit reporting agencies. Lenders should simply consult court judgements regarding unpaid loans, and bank statements.

People say having the courts settle legal disputes is too expensive. Yet we're paying credit card companies fees worth multiple percentage points of every sale just to establish a credit rating. Surely that money would be better invested in a public, impartial court system.


Yikes. It won't be long before there's a serious economic incentive to browse the web exclusively by running curl over Tor.
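
For what it's worth, the plumbing already exists. A minimal sketch, assuming a local Tor daemon on its default SOCKS port (9050) and the requests + PySocks packages:

    # Fetch a page through a local Tor SOCKS proxy; assumes tor is running
    # on the default port 9050 and `pip install requests[socks]` was run.
    import requests

    proxies = {
        "http":  "socks5h://127.0.0.1:9050",  # socks5h: resolve DNS via Tor too
        "https": "socks5h://127.0.0.1:9050",
    }
    r = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
    print(r.status_code)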


Unless it means you won’t get credit.

Edit: Like how apparently not having any social media accounts can weigh against you, if asked to produce them, when attempting to enter certain countries.


Honestly, credit is completely overrated. You can rent apartments, for example, even if you've never owned a credit card or taken out loans, provided you have a job and can prove it. People just want to know you have the ability to pay them, and behavior is a really stupid way to judge that.


Past behavior is not a guarantee of future behavior, but it’s a pretty good heuristic I think.

If you’re contemplating lending money with no collateral to 2 people, one with a 10 year track record of other borrowing and consistently paying back loans as agreed and no information about their employment and a second person where all you know is they have a job, which one would you rather lend to?

The lender wants to know that you will pay them not that you could afford to pay them.


Create a better one and own the world.


I'm trying and failing to come up with a concise expression of just how utterly evil and wrong this proposal is, and of the assumptions and presumptions it makes.

I think the first question needs to be "mu". Unask the question of how, or even whether, individuals' entire personal and informational histories should be deposited in the perpetual datastores of banks, credit lenders, collections agents, slumlords, and repo agents across the world.

Why the heck are we basing all transactions on credit in the first place?

Why aren't we paying people enough to live?

Why aren't we ensuring, at a time when shutting the world down to the maximum extent possible for a relatively brief period is the best way to return to a full and normal state of commerce, that people can actually afford to do so?

Why don't we have mechanisms for distributing trust such that the risks of transactions aren't extreme? E.g., do I really need to be able to transact a purchase by connecting to a website continents and oceans away, only to have my payor suddenly get paranoid, deny the deal, and call me back hours later to ask if I'd in fact attempted the purchase? Meantime, none of the parties involved (I, the vendor, the payor, our various software, systems, and communications vendors, and others involved in the current state of transactions) trusts the others at all.

Do advances in communications technology inherently degrade interpersonal social trust? I'm inclined to think they do: https://old.reddit.com/r/dredmorbius/comments/6jqakv/communi...

Do people need accounts that can drain their life savings globally accessible to anyone (or any algorithm), any where, at any time?

What would a system in which these are not implicitly assumed or quietly-accepted aspects look like? How do we simply engineer / design around the unstated problem at the heart of this proposal?

Why evil? The dangers of data and surveillance have long been pointed out by key participants in the development of information technology itself. I've compiled a list of pre-1980 warnings from Paul Baran (co-inventor of packet-based switching) and Willis Ware, both of RAND, Shoshana Zuboff, of Harvard University, Arthur R. Miller, and Richard Boeth: https://toot.cat/@dredmorbius/105074933053020193

Thanks heartily to da_big_ghey for submitting this and working around the inherent banal evil of the IMF proposal by Arnoud Boot, Peter Hoffmann, Luc Laeven, Lev Ratnovski, signatories to the death of all privacy and the opening of universal surveillance in all commercial and financial dealings. Remember their names.

I'd certainly missed it entirely.


It looks as if The Register covered this back in December. Someone's awake. (In this case, Simon Sharwood, kudos.)

"Search history can calculate better credit ratings than pay slips, says International Monetary Fund"

https://www.theregister.com/2020/12/18/web_search_history_cr...


On authors who were publishing information technology panopticon concerns in the 1980s, or earlier

Paul Baran / RAND

- "On the Engineer's Responsibility in Protecting Privacy"

- "On the Future Computer Era: Modification of the American Character and the Role of the Engineer, or, A Little Caution in the Haste to Number"

- "The Coming Computer Utility -- Laissez-Faire, Licensing, or Regulation?"

- "Remarks on the Question of Privacy Raised by the Automation of Mental Health Records"

- "Some Caveats on the Contribution of Technology to Law Enforcement"

Largely written/published 1967--1969.

https://www.rand.org/pubs/authors/b/baran_paul.html

Willis Ware / RAND

Too numerous to list fully, 1960s--1990s. Highlights:

- "Security and Privacy in Computer Systems" (1967)

- "Computers in Society's Future" (1971)

- "Records, Computers and the Rights of Citizens" (1973

- "Privacy and Security Issues in Information Systems" (1976)

- "Information Systems, Security, and Privacy" (1983)

- "The new faces of privacy" (1993)

https://www.rand.org/pubs/authors/w/ware_willis_h.html

Misc

Shoshana Zuboff, In the Age of the Smart Machine: The Future of Work and Power (1988). Notably reviewed in the Whole Earth Catalog's Signal: Communication Tools for the Information Age (1988).

https://www.worldcat.org/title/in-the-age-of-the-smart-machi...

https://archive.org/details/inageofsmartmach00zubo/page/n7/m...

"Danger to Civil Rights?", 80 Microcomputing (1982)

https://archive.org/stream/80_Microcomputing_Issue_26_1982-0... (https://news.ycombinator.com/item?id=14329877)

"Computer-Based National Information Systems: Technology and Public Policy", NTIS (September 1981)

http://govinfo.library.unt.edu/ota/Ota_5/DATA/1981/8109.PDF

"23 to Study Computer ‘Threat’" (1970)

https://www.nytimes.com/1970/03/12/archives/23-to-study-comp...

The Stanford Encyclopedia of Philosophy

"Privacy and Information Technology" bibliography is largely 1990--present, but contains some earlier references.

https://plato.stanford.edu/entries/it-privacy/#Bib

Similarly "Privacy"

https://plato.stanford.edu/entries/privacy/

Credit Reporting / Legislation

US Privacy Act of 1974

https://www.justice.gov/opcl/privacy-act-1974

Invasion of Privacy Act 1971 - Queensland Government, Australia

https://www.legislation.qld.gov.au/view/pdf/inforce/current/...

Arthur R. Miller, The assault on privacy: computers, data banks, and dossiers

https://archive.org/details/assaultonprivacy00mill/page/n7/m...

"The Computer, the Consumer and Privacy" (1984)

https://www.nytimes.com/1984/03/04/weekinreview/the-computer...

Richard Boeth / Newsweek

The specific item I'd had in mind:

Richard Boeth, "Is Privacy Dead", Newsweek, July 27, 1970

http://www.thedailybeast.com/articles/2013/06/11/is-privacy-...

Direct PDF: https://assets.documentcloud.org/documents/712228/1970-newsw...

Based on an HN comment: https://news.ycombinator.com/item?id=2


Truly a dystopian world we are entering. Now more than ever is the time for funding and developing alternative tools that utilize decentralization.

What's coming cannot be stopped but it can be broken.


It is already in motion, cryptocurrency will accept you no matter what your credit score or browsing history is.

“The computer can be used as a tool to liberate and protect people, rather than to control them.”

-Hal Finney


How are you planning on allocating those bitcoin distributions?

Do you have the context for the Finney quote? Apparently from a set of released emails:

https://news.bitcoin.com/researcher-publishes-never-before-s...

The larger context, from a 1992 Cypherpunks email:

"Here we are faced with the problems of loss of privacy, creeping computerization, massive databases, more centralization - and [David] Chaum offers a completely different direction to go in, one which puts power into the hands of individuals rather than governments and corporations. The computer can be used as a tool to liberate and protect people, rather than to control them."

https://web.archive.org/web/20140326104029/http://www.forbes...

Argument by unsupported assertion is ... relatively weak.


The full quote supports my assertion even more. It’s a beautiful quote and extremely relevant today, even more than when he said it.


I don't think I can agree. Though I might have once.

There's a fundamental misunderstanding of power, its dynamics, and technology's relationship to it.


Now how do I explain to my mom that the conspiracies she believes in do occur, that the influential wealthy groups do exist and lobby governments, but that they have nothing to do with the independent data mining and data brokering companies which anyone can start right now?


Nothing decentralized will ever remain so if humans are involved.



