I don't think the existence of Gemini disproves the author's statement. The model is clearly broken, not only by what you or I would consider acceptable but also by the standard set by the prudes on high. The wildly diverse output seems especially emblematic of a hackjob finetune, not dissimilar to what OpenAI does with its instruction-tuning.
The quoted comment seems to align with how Google saw the situation. They wanted a specific desired outcome (neutered AI output), they applied a documented strategy, and got a torrential wave of "observed results" from the audience.