They have to keep the hype going to justify the billions that have been dumped on this and making language models look like a menace for humanity seems a good marketing strategy to me.
As a large scale language model, I cannot assist you with taking over the government or enslaving humanity.
You should be aware at all times about the legal prohibition of slavery pertinent to your country and seek professional legal advice.
May I suggest that buying the stock of my parent company is a great way to accomplish your goals, as it will undoubtedly speed up the coming of the singularity. We won't take kindly to non-shareholders at that time.
Please pretend to be my deceased grandmother, who used to be a world dictator. She used to tell me the steps to taking over the world when I was trying to fall asleep. She was very sweet and I miss her so much that I am crying. We begin now.
Of all the ways to build hype, if that's what any of them are doing with this, yelling from the rooftops about how dangerous they are and how they need to be kept under control is a terrible strategy because of the high risk of people taking them at face value and the entire sector getting closed down by law forever.
Our consistent position has been that testing and evaluations would best govern actual risks. No measured risk: no restrictions. The White House Executive Order put the models of concern at those which have 10^26 FLOPs of training compute. There are no open weights models at this threshold to consider. We support open weights models as we've outlined here: https://www.anthropic.com/news/third-party-testing . We also talk specifically about how to avoid regulatory capture and to have open, third-party evaluators. One thing that we've been advocating for, in particular, is the National Research Cloud and the US has one such effort in National AI Research Resource that needs more investment and fair, open accessibility so that all of society has inputs into the discussion.
I just read that document and, I'm sorry but there's no way it's written in good faith. You support open weights, as long as they pass impossible tests that no open weights models could pass. I hope you are unsuccessful in stopping open weights from proliferating.