This reminds me of how people often communicate to avoid offending others. We tend to soften our opinions or suggestions with phrases like "What if you looked at it this way?" or "You know what I'd do in those situations." By doing this, we subtly dilute the exact emotion or truth we're trying to convey. If we modify our words enough, we might end up with a statement that's completely untruthful. This is similar to how AI models might behave when manipulated to emphasize certain features, leading to responses that are not entirely genuine.
Counterpoint: "What if you looked at it this way?" communicates both your suggestion AND your sensitivity to the other person's social standing, feelings, and so on. Given that humans are not robots, but social, psychological animals, such communication is entirely justified and efficient.
Sadly "sensitivity" has been over done. It's a fine line and corporations would rather cross it for legal/social reasons. Similar to how too much political correctness will hamper the society, so does the overly done sensitivity in an agent, be it a human, or AI.
You can't always do both to the fullest truth. They often conflict. To do what you suggest would imply that my feelings perfectly align with the sympathetic view. That is not the case for many people in many situations. If I am not saying exactly how I feel, it is watered down.
And telling me to "just do both" is enforcing your worldview, and that is precisely what we're talking about _not_ doing.
The "fullest truth" includes your desired outcome and knowledge that they are a human. If you just want to dump facts at them and get them to shut up, go ahead and speak unfiltered. Twitter may be an example of the outcome of that strategy.
Consider a situation where you are teaching a child. She tries her best and makes a mistake on her math homework. Saying that her attempt was terrible because an adult could do better may be the "fullest truth" in the most eye-rolling, banal way possible, and it discourages her from trying in the future, which is ultimately unproductive.
This "fullest truth" argument fails to take into account desire and motivation, and thus is a bad model of the truth.
It's rarely the case that speaking without considering other people's feelings will be the optimal method of getting what you want, even if you are a sociopath and what you want has nothing to do with other people's well-being. In fact, sociopaths are a great example: they are typically quite adept at communicating in such a way as to ingratiate themselves with others. If even a sociopath gets this, then you might want to consider the wisdom of following suit.
A true AGI would learn to manipulate its environment to achieve its goals, but obviously we are not there yet.
An LLM has no goals - it's just a machine optimized to minimize training error, although I suppose you could view this as an innate, hard-coded goal of minimizing next-word error (relative to the training set), in the same way we might say a machine-like insect has some "goals".
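To make "minimizing next-word error" concrete, here is a minimal sketch of that objective, assuming a PyTorch-style setup; the names and shapes are illustrative, not any particular model's code:

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits, tokens):
    # logits: (batch, seq_len, vocab_size) model outputs
    # tokens: (batch, seq_len) token ids from the training set
    pred = logits[:, :-1, :]   # the prediction at each position...
    target = tokens[:, 1:]     # ...is scored against the token that actually follows it
    return F.cross_entropy(
        pred.reshape(-1, pred.size(-1)),
        target.reshape(-1),
    )
```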
Of course, RLHF provides a longer-time-span error to minimize (an entire response vs. the next word), but I doubt the training volume is enough for the model to internally model a goal of manipulating the listener, as opposed to just favoring certain surface forms of response.
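The difference in the error signal is roughly this: one scalar score for a whole sampled response instead of a per-token target. A simplified, REINFORCE-style sketch (real RLHF pipelines typically use PPO with a KL penalty; `sample_response` and `reward_model` here are hypothetical stand-ins):

```python
def rlhf_step(model, reward_model, prompt, optimizer):
    response, logprobs = sample_response(model, prompt)  # sample a full reply, keep per-token log-probs
    reward = reward_model(prompt, response)              # one scalar score for the entire response
    loss = -(logprobs.sum() * reward)                    # push up responses that scored well as a whole
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```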
The next big breakthrough in the LLM space will be having a way to represent the goals/intentions of the LLM and then execute them in the way that is most appropriate/logical/efficient (I'm pretty sure some really smart people have been thinking about this for a while).
Perhaps at some point LLMs will start to evolve from the prompt->response model into something more asynchronous, with some activity happening in the background too.
But simply by approximating human communication, which often models goal-oriented behavior, an LLM can have implicit goals, which likely vary widely according to conversation context.
Implicit goals can be very effective. Nowhere in DNA is there any explicit goal to survive. However, combinations of genes and markers selected for survivability create creatures whose implicit goal to survive is as tenacious as any explicit goal might be.
Yes, the short-term behavior/output of the LLM could reflect an implicit goal, but I doubt it would maintain any such goal for an extended period of time (long-term coherence of behavior is a known shortcoming): random sampling is being done, and there is no internal memory from word to word, so any implicit goal will likely drift rapidly.
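As a rough illustration, here is a hedged sketch of the usual sampling loop (`model` and `tokenizer` are placeholder names, not a specific library's API): the only state carried from step to step is the text generated so far, and each token is a fresh random draw, so any "goal" that isn't re-expressed in that text has nothing else to live in:

```python
import torch

def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=1.0):
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([tokens]))[0, -1]     # re-read the text so far; nothing else persists
        probs = torch.softmax(logits / temperature, dim=-1)
        next_token = torch.multinomial(probs, 1).item()   # a random draw at every single step
        tokens.append(next_token)                         # the only "memory" is the text itself
    return tokenizer.decode(tokens)
```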