Hm, it's probably true that recommendation algorithms do something similar already, training on "human likes" that were influenced by the previous generation. But "human language" is a richer medium for carrying information.
I don't think you need to be independent or autonomous to develop a culture. And a lot of human culture was passed down over generations without anyone understanding why it worked. We just imitate the behaviours and rituals of our most successful ancestors or role models.
If new LLMs can access the past generation's knowledge of how to please human evaluators, they will use it. It's not a deliberate decision by an "agent"; it's just the best text source to copy from. This creates a new feedback loop between generations of assistants, one that bypasses whatever the human designer had in mind. Phrases like "it is always best to ask an expert" will pop up just because you tuned the LLM to sound like a helpful assistant, and that's what helpful assistants sound like in the training data. You'd have to actively steer the new generation away from using its ancestral knowledge.
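To make that loop concrete, here's a toy simulation; it's entirely made up (the phrase list, approval rates, and sampling scheme are illustrative assumptions, not any real training pipeline). Each generation just imitates the previous generation's approved outputs, and the highest-approval phrasing comes to dominate without anyone deciding anything:

```python
import random
from collections import Counter

# Toy sketch of the generational feedback loop (illustrative only).
PHRASES = [
    "it is always best to ask an expert",
    "here is a detailed answer",
    "i cannot help with that",
    "let me think step by step",
]

# Hypothetical human-approval rates per phrase (assumed values).
APPROVAL = {
    "it is always best to ask an expert": 0.8,
    "here is a detailed answer": 0.7,
    "i cannot help with that": 0.2,
    "let me think step by step": 0.6,
}

def run_generations(n_generations=5, outputs_per_gen=1000, seed=0):
    rng = random.Random(seed)
    # Generation 0 starts from a uniform, "designer-intended" corpus.
    corpus = list(PHRASES) * 10
    for gen in range(n_generations):
        # The new model imitates the previous corpus: sample proportionally.
        outputs = [rng.choice(corpus) for _ in range(outputs_per_gen)]
        # Human evaluators stochastically approve some outputs.
        kept = [p for p in outputs if rng.random() < APPROVAL[p]]
        # Approved outputs become the next generation's training text.
        corpus = kept or corpus
        print(f"gen {gen}: {Counter(corpus).most_common(2)}")

if __name__ == "__main__":
    run_generations()
```

The drift toward the best-liked phrasing falls out of imitation plus filtering alone; no generation has to "want" anything for the ancestral knowledge to propagate.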
I guess it comes down to what your definition of "culture" is. There is no targeted teaching of the next generation, for example - but is this a requirement? I agree that talking about "machine culture" right now sounds like a stretch, but now I wonder what pieces are actually missing.