
She's a professor of computational linguistics; it's literally her field that's being discussed.

The list of signatories includes people with far less relevant qualifications, and significantly greater profit motive.

She's an informed party who doesn't stand to profit; we should listen to her a lot more readily than others.




Her field has also taken the largest hit from the success of LLMs, and her research topics and her department are probably no longer prioritized by research grants. Given how many articles she has written criticizing LLMs, it's not surprising that she has incentives.


LLMs are in her field; they are one of her research topics and they're definitely getting funding.

We absolutely should not be ignoring research that doesn't support popular narratives; dismissing her work because it is critical of LLMs is not reasonable.


It is not her criticism of LLMs that is the issue.

Instead, it is that she has strong ideological motivations to make certain arguments: namely, that LLMs have rendered her research worthless.

I don't believe the alignment doomsayers either, but for different reasons than hers.


Being in her field doesn't mean that's what she researches. LLMs are loosely in her field, but the methods are completely different: computational linguistics != deep learning. Deep learning does not directly use concepts from linguistics, semantics, grammars, or grammar engineering, which is what Emily has been researching for the past decades.

It's the same as saying a number theorist and a set theorist are in the same field because they both work in math.


They are what she researches though. She has published research on them.

LLMs don't directly use concepts from linguistics but they do produce and model language/grammar; it's entirely valid to use techniques from those fields to evaluate them, which is what she does. In the same vein, though a self-driving car doesn't work the same way as a human driver does, we can measure their performance on similar tasks.


Hmm, I looked into it: using Google Scholar's advanced search, I looked at papers/PDFs with her as an author that mentioned LLMs or GPT in the past 3 years. Every single one was a criticism arguing that they can't actually understand anything (e.g. "they're only trained on form" and "at best they can only understand things in a limited, well-scoped fashion") and that linguistic fundamentals are more important for NLP.

Good to know my hunch was correct.


How are fame, speaking engagements, and book deals not a form of profit?

She's intelligent and worth listening to, but she has just as much personal bias and motivation as anyone else.


The (very small) amount of fame she's collected has come through her work in the field, and it's a field she's been in for a while; she's hardly chasing glory.


People don’t have to be chasing fame to be warped by it. She has cultivated a following of like-minded people who provide ever more positive feedback for her ever more ideological writing.

I mean she is literally dismissing people who disagree with her based on their skin color. Can we stop for a minute to wonder about the incentives that encourage that?

(and I generally like her writing and think she has interesting things to say… but I do see a reward cycle going on)



