from the article "A pattern we see in some interpretability and interpretability-adjacent ML papers is defining some metric which is claimed to correspond to some property of interest, and then very rigorously measuring this metric. We see this as a kind of Cargo-Cult Science."
In Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It, Erica Thompson explores how mathematical models are used in contexts that affect our everyday lives – from finance to climate change to health policy – and what can happen when they are malformed or misinterpreted. Rather than abandoning these models, Thompson presents a compelling case for why we should revise how we understand and work with them, writes Connor Chung.
This is becoming so true. I have read so many documents in the last year that are obviously from a GPT, especially when the topic is something new to the group.
But in the end, I would rather get a half-baked GPT doc than a quarter-baked junior analyst doc. I just worry that GPTs are going to kick the bottom rungs out of the knowledge-work ladder. Being bad but junior used to be a learning environment without too many repercussions.
But how do you compete with peers using AI? You use it too. But now you have robbed yourself of a learning opportunity. Yes, you can learn somewhat by doing it that way, but it's like doing homework by looking at the answers. Sure, it can help you double-check, but if you don't put the effort into constructing your own answer, then you have only cheated yourself.
I think the AI alignment issues are probably overblown in the short term, but what about the long term, when the average person has regressed so far as to be unable to function without AI? They will just do whatever they are told.
I'm interested in whether this has a potential trickle-down effect on other startups, where staff threaten to mutiny if they don't like a change in leadership. Flipping crazy, and precedent-setting for corporate governance.
Yes, the fallout from all this is going to be much, much wider than just OpenAI. And if it ends up in front of a judge, I wouldn't be surprised if it eventually results in adjustments to the law.