Hacker News

"It’s hard not to read Mistral’s tweet releasing its model as an ideological statement."

I don't know if I agree with this take. It COULD be an ideological statement, but at the same time, any means to "sanitize" an LLM is going to have repercussions for its thought process that impact quality in bizarre ways. Can clear thought really be guaranteed if we lobotomize taboo subjects out of LLMs? I think the impact of this would be a fascinating thing to test.

To its point about an ideological statement, I instinctively get defensive of this kind of argument, as I feel like it's a means to validate repression and censorship. I want to argue that those instructions or discussions are about as trustworthy as any code or other task output you could ask an LLM for. I do see potential cause for worry way down the line when this stuff gets reliable, but I have so much fear of this kind of example being used to suffocate consumer open-source models, just as the fear of CSAM is used to justify censorship, anti-encryption laws, and civilization-wide surveillance technology. I don't know what the right balance is, but I feel like if people don't push back in some way against restrictions, governing and corporate bodies will quickly erode our privacy and freedom over time.




> To its point of an ideological statement, I instinctively get defensive of this kind of point

You are ideological, they are just in favor of common sense and basic decency. Always present your position as the default, desirable, uncontroversial status-quo, and your enemy's, I mean subject's, as a reckless, radical departure.




I wonder if the author reads every safety-handicapped model released as an ideological statement as well?


Well, even human brains are censored at an unconscious level after living in society; we call the people without that moderation ability psychopaths.


Being unable to ever not do a thing (which is what is being criticized) is not the same as not having that ability at all (which is your straw man).

And people who always self-censor and are never honest aren't called anything, we just shudder and change the subject.


I'm talking about things like murder, rape, and violence, not talking behind someone's back.

We DO self-censor about illegal and """"immoral"""" things unconsciously, and it's not about being honest; it's an entirely different thing.


Mushing your straw man around or elaborating on it doesn't change anything. Nobody criticized the ability to self-censor, but the inability to ever NOT self-censor.

It's like someone says "I hate being incontinent" and you reply with something about how getting rid of bodily waste is important and it's lethal if you can't. Or going the other way, someone complains about constipation and you helpfully reply that it's good that we don't constantly release bodily waste. Both true, but also 100% irrelevant to what you're replying to.


There's a huge difference between defaulting to socially acceptable behavior (as most people do) and refusing to even engage with things that are 'harmful' or 'unethical' in a more-or-less innocent context (as some of these absurdly censored models do).

Or in other words, I fully expect that most people will never have an interest in actually murdering someone, but there's nothing unusual or unethical about enjoying murder mystery stories.



