loeber's comments | Hacker News

Author here. I think you misunderstood the distinction I was drawing. How you train ML models is distinct from how you apply ML models. Regulating applications is so wide-ranging and full of edge cases that it's severely impractical. How you train them is a much narrower scope where you could actually set some regulatory constraints. (I do not endorse regulating AI at this point in time, this is just for the sake of argument.)


i strongly disagree (or i am not getting your point)

first of all, protection of individuals is the only thing that matters.

yes, the current rules are a patchwork, but i don't see any alternative.

how is setting constraints on the language model going to help protect me from abuse by that model? for example how would such a regulation prevent facial recognition? a more limited model only limits the capacity of a facial recognition system, potentially leading to more false positives which would make things worse.

on the other hand, a rule banning facial recognition provides full protection, as does a ban on using machine algorithms to make decisions that affect a person's life.

AI use is either safe or low risk, or it is dangerous. those dangers need to be averted. as i see it, the EU does not regulate AI at all. it regulates the harmful effects of technology on people. you can build whatever AI tool you want, as long as you use it in a manner that does not hurt people. or is my understanding of the current regulations wrong?


The critical nuance is that most of these machine learning applications have existed for a long time. We've had facial recognition for well over a decade. We've had the ability to generate text. We have statistical models for credit scoring, and so forth.

The difference -- why all of this stuff is being regulated now and not 20 years ago -- is that under current techniques, these models are just much more powerful and accurate today. The impetus for regulation is not that a given machine learning application exists, but the fact that it works really well.

The power and sophistication of machine learning models correspond extremely strongly to the scale of data they are trained on. If you are pro-regulation, then what you really want to regulate is not the mere existence of a machine learning application, but the scale of data with which it is created.

--

For another way of making the point: consider the phrase you used, "a ban on using machine algorithms to make decisions that affect a person's life". Examine it like an adversarial lawyer: what's the threshold for "affect"? Everything affects a person's life. Does Google Search work under this standard? It uses a machine algorithm to decide what to show, which can affect the user's life. Does Netflix's film recommendation work? Does Spotify's recommendation work? Okay, you want those things to work, but you don't want [insert other purpose]. You're going to find that the lines are blurry everywhere you look, and that makes for really difficult regulation.


machine learning applications have existed for a long time. [...] The difference is that under current techniques, these models are just much more powerful and accurate today.

and now you are suggesting that if these are to be regulated, they should effectively stop becoming better? what would be the point of that?

Everything affects a person's life

well, yes, so there must be a way to force a company to reverse a change that affects me.

kneecapping models doesn't prevent a company from disabling an electronic lock or closing my account. these are problems that already exist regardless of what caused those changes. google or facebook should not be allowed to terminate users unless they can prove fraud. how they arrived at the decision is quite irrelevant. insurance companies should not be allowed to deny coverage without a human verifying the decision, and also not without a human who is able to reverse a decision. weaker models are not going to enforce that, unless the models are so weak that they become useless. again, what would be the point of that?

until recently those models were not good enough. i read that as: they were useless for serious applications. they were research under development. we have been working on this for decades, and only now are we approaching the point where these tools actually become useful.

but the impetus for regulation is that these models are being used and yet still do not work well enough. they do make mistakes, and those mistakes need to be supervised and fixed when necessary. if they worked perfectly, to the point that an affected person could get them to reverse decisions, this would be less of an issue.

i agree with you that the current regulations are difficult, but i do not see the benefit in regulating how those models are built instead. the damage happens at the interface between human and machine, and to prevent humans from getting hurt that interface is what needs to be regulated.

what you are suggesting sounds to me like proposing that knives made from steel are too dangerous, because a steel blade doesn't become dull fast enough, so we should instead only make knives from wood to make them weaker. but a wooden blade can still kill. so really what needs to be regulated is how the knives are used, not how they are made.


Author here. Glad it resonated with you :)


You are misusing the term "begging the question", which is a formal logical fallacy and not synonymous with "raising the question".

https://en.wikipedia.org/wiki/Begging_the_question


I think one important reason people misuse this so much is because "begging the question" does not straightforwardly convey the meaning assigned to it by whichever logicians coined the modern English term. It's a linguistically crappy term. I'd personally prefer that "beg the question" be repurposed in the way GP used it, and that another term be used to describe the logician's idea.


This one’s lost. Carry on using it correctly among those who get it, avoiding the expression among those who probably don’t, and quietly accepting this usage from those who employ it (while, perhaps, marking them off your mental list of potential proof-readers).

I don’t even feel too bad about this one, because it’s so easily misunderstood. May as well let it go.


I don't think it's lost at all. One mindful correction may lead the recipient to a lifetime of correct usage (and perhaps correcting others). The viral coefficient seems favorable.


It is not incorrect to use this expression; it can universally be understood through context to mean what the author meant, whereas the usage you consider correct is exotic.

https://www.merriam-webster.com/grammar/beg-the-question


You're getting downvoted, but thanks for posting this link. The blog post by Stan Carey (excerpted below) mentioned in it was quite interesting.

> Beg the question first appeared in English in a 1581 text of Aristotle’s Prior Analytics, and this translation has had semantic ripples down the centuries. The phrase is opaque because its use of beg is really not a good fit – it’s no wonder people have interpreted it ‘wrongly’. Had the original English translation been assume the conclusion or take the conclusion for granted instead of beg the question, there would be far less uncertainty and vexation.

> 269 out of 300 examples of begs the question used it to mean raises the question, more or less. That’s 90%. This figure shows its huge predominance in contemporary discourse. Outside of formal debates and philosophical or semi-philosophical contexts, the traditional meaning of beg the question is hardly ever used. The evade the question use is rarer still.

> This is why insisting on the original use, as prescriptivists do, risks confusing many readers. It’s not a practical or constructive stance. Correctness changes with sufficient usage, yet sticklers still refuse to accept there can be more than one way to use this phrase. By adopting the tenets of one phrase → one meaning and original meaning = true meaning, they have painted themselves into a corner.

> The expression is ‘skunked’, to use Garner’s term. Grammarphobia agrees that it’s ‘virtually useless’, and Mark Liberman recommends avoiding it altogether. In formal use I advise caution for this reason, but in everyday use you’ll encounter little or no difficulty or criticism with the raise the question usage.

Additionally, the LangLog link shows that "begging the question" is the result of badly translating a less-than-perfect translation: Greek to Latin, and Latin to English.

> Some medieval translator (does anyone know who?) decided to translate Aristotle's "assuming the conclusion" into petitio principii. In classical Latin, petitio meant "an attack, a blow; a requesting, beseeching; a request, petition". But in post-classical Latin petitio was also used to mean "a postulate"

> Why begging the question? Well, petitio (from peto) in this context means "assuming" or "postulating", but it has other (and older) meanings, from which the notion of logical postulate or assumption arose: "requesting, beseeching". So rather than use some fancy Latinate term like postulate or assume, people decided to use the plain English word beg[ging] as a sort of calque for the "requesting" sense of petitio. But even in the 16th century, I think, it was a bit odd to warn people against presupposing the end-point of their argument by telling them not to beg their conclusion.


Language is defined by it's usage.

If enough people make one particular "mistake", eventually that mistake becomes the "correct" usage. See "literally" as an example:

https://www.merriam-webster.com/grammar/misuse-of-literally


or your unnecessary apostrophe in it's


I was using it in this sense

> The phrase "begs the question" is also commonly used in an entirely unrelated way to mean "prompts a question" or "raises a question", although such usage is sometimes disputed.[4]

but will try to refrain from doing so in future...


everyone misuses the term, which raises questions about prescriptivism vs. descriptivism.


You're begging the question with this dichotomy. It likely originally arose as a descriptive term among those who had the requisite knowledge base prescribed to them.

:P


Good read, thanks for sharing.


Sorry, are you saying that Ilya Sutskever doesn't have an ML background? That Adam D'Angelo can't code? Come on.


Yeah I was ignoring Sutskever since he recanted his support pretty quickly. But that's fair, fixed.


This is correct.

Best case: unlimited/flexible PTO policy simply reflects a company taking the attitude of "you are a responsible adult and we trust you," and skipping the need for a cumbersome tracking system.

Worst case: constant pressure + an unclear PTO policy induces workers to take less vacation than the norm.


There is another sort of worst case (from the company's perspective): an employee decides they need to rebuild their house, and to do it all by hand, hence will be on leave for the next six months. Or, say, wants to backpack across Europe and hence needs a break for the next two months.

With unlimited PTO, the biggest challenge is to define (in both directions) what qualifies as a good reason to go on leave.

I have enjoyed unlimited PTO wherever I have had it. But I tried to hold myself to a benchmark of about four weeks a year. Of course, there have been times when I needed more, and it was fine. There have been times when I didn't need four weeks either, and I was okay with that too!


No, that's not correct, read the top rated post above yours. It's not about responsibility, it's about accounting rules.


Oh, that's a good point. I had missed that.


I'm the author of the piece -- this comment is correct.


Hi, I’m the author. Thanks for the compliment!


Yeah, I just wrote about this as well on my substack. There were two significant conflicts of interest on the OpenAI board. Adam D'Angelo should've resigned once he started Poe. The other conflict was that both Tasha McCauley and Helen Toner were associated with another AI governance organization.


Thanks — the history of board participation you sleuthed is interesting for sure:

https://loeber.substack.com/p/a-timeline-of-the-openai-board


Thank you!


Yep, this is exactly right. Websites benefit from design idioms that have been carefully crafted over many years. Every website that disregards these idioms makes itself harder to use because it doesn’t speak the same language as what the user is (perhaps without noticing) used to. I wrote about the loss of design idioms at length: https://loeber.substack.com/p/4-bring-back-idiomatic-design

