I believe my answer is accurate. I don't know on what basis you claim otherwise. This is my work, and I'm well placed to make the assessment. My response is based on reading a great many papers and doing many experiments over the last few years, and I am confident of it.

By way of background: I studied the philosophy of ethics at university, I co-authored a book chapter on AI/ML ethics, my wife and I quit our jobs and worked for free for years entirely focused on trying to help society adapt to and benefit from AI, I wrote the actual article we're commenting on, and the article is about LLM fine-tuning -- a field I, to some extent, created by developing the ULMFiT algorithm.

The person in question is, IIRC, at Governance AI, a group whose work I spent months studying in depth -- work which I believe is more likely to cause harm to society than to benefit it, as I explained here:

https://www.fast.ai/posts/2023-11-07-dislightenment.html




Beyond your scientific and technical contributions to humanity, thank you for being a source of light in the face of these shadow-loving, dark minds.


I think the power concentration problem, the successor species problem, and the harmful content problem are not particularly aligned in how they would be solved. Am I correct in guessing you believe the power concentration problem is important and the others are much less so?


[flagged]


Implicit in every reply you've given is the assumption that OP is treating the criticism from this researcher differently because she's a woman. Do you have any basis on which you're making this assumption? OP explained that they have substantive issues with the organization of which this researcher is a member.


[flagged]


> I can see why it would appear that I’m saying that, but that was not my intention.

You are talking out of both sides of your mouth. In another comment on this same thread, you say this:

> [...]women in the field are more readily dismissed, and I think they shouldn’t be. It’s a moment to check our internalized biases and make sure we’re operating in good faith.

In your original comment you explicitly accuse the OP of operating in bad faith, presumably as a result of "internalized biases" as described above. How does this not add up to an assumption that OP treated the researcher differently because she's a woman? It is exactly what you are implying.


And you would have called a simple "no" dismissive, too…


Then why is it relevant to keep mentioning that she's a woman?


I think it's easier to dismiss the risk of this project given that it democratises access to AI models and furthers research in the field. The ability to generate low-quality content has been available since long before LLM technology; additionally, these 70B-parameter models barely fit into $10,000 worth of hardware (not accounting for M-series chips).
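A rough back-of-the-envelope sketch of that hardware claim (the assumptions are mine, not from the comment: fp16/bf16 weights at 2 bytes per parameter and 80 GB data-centre GPUs):

  # Rough memory estimate for holding a 70B-parameter model in GPU memory.
  # Assumptions (not from the thread): fp16/bf16 weights at 2 bytes each,
  # 80 GB per high-end data-centre GPU.

  PARAMS = 70e9          # 70 billion parameters
  BYTES_PER_PARAM = 2    # fp16 / bf16
  GPU_MEMORY_GB = 80     # one high-end data-centre GPU

  weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
  gpus_needed = -(-weights_gb // GPU_MEMORY_GB)  # ceiling division

  print(f"Weights alone: ~{weights_gb:.0f} GB")
  print(f"Minimum GPUs just to hold the weights: {gpus_needed:.0f}")
  # ~140 GB of weights -> at least two 80 GB GPUs before activations or
  # KV cache, which is roughly consistent with "barely fits" on ~$10,000
  # of hardware once quantization (e.g. 4-bit) or consumer cards enter
  # the picture.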

The scaling issue with potential runaway AI can be excluded. The potential for virus writing / security exploitation is perhaps a concern, but such risks are already present with existing models, so this point can be excluded too. I'm not sure there's any inherent risk here compared with what's already easily available at considerably lower resource requirements. The write-up here seems more concerned with enabling independent and democratised research, which is a greater benefit than concentrated efforts.


"I'm an expert and the other person isn't as knowledgeable as me" doesn't make your point very well. And mentioning that you worked for free for years seems irrelevant.



