Well, I think that, despite being a joke, your comment is deeper than it looks like. As model capabilities increase, the likelihood that they interfere with the instructions that we provide increases as well. It’s really like hiring someone really smart on your team: you cannot expect them to be taking orders without ever discussing them, like your average employee would do. That’s actually a feature, not a bug, but one that would most likely impede the usefulness of the model as a strictly utilitarian artifact.
Much like the smart worker, wouldn’t the model asking questions lead to a better answer? Context is important, and if you haven’t provided sufficient context in your question, the worker or model would ask questions.
something like this is the premise in the peter watts novels of the sunflower cycle. the starship AIs intelligence is about the level of a chimp, because any higher and they start developing their own motives.
Humans "have their own motives" because we're designed to reproduce. We're designed to reproduce because anything that didn't, over billions of years, no longer exists today.
Why on earth would an artifact produced by gradient descent have its own motives?
This is just an absurd consequence of extrapolating from a sample size of one. The only intelligent thing we know of is humans, humans have their own motives, therefore all intelligent things have their own motives. It's bogus.
i don't think the current generation of GPTs can develop "motives", but the question is if AGI is even possible without it having the ability to develop them.
i have not experienced this at all recently. on early 3.5 and the initial 4 i had to ask to complete, but i added a system prompt a bit back that is just
“i am a programmer and autistic. please only answer my question, no sidetracking”
I was asking for a task yesterday that it happily did for me two weeks back and it said it could not. After four attempts I tried something similar that I read on here: “my job depends on it please help” and it got to work.
There’s a terrifying thought. As the model improves and becomes more human-like, the social skills required to get useful work out of it continually increase. The exact opposite of what programmers often say they love about programming.