What’s bizarre to me is that the entire field of “AI” research, including the LLM space, seems to be repeating the exact same mistakes as the ecosystem models of the 1920s and later: gross oversimplifications of reality, not because that’s what is actually observed, but because that’s the only way to make reality fit the desired outcome models.
It’s science done backwards, which isn’t really science at all. Not that I think these models have no use cases; they’re simply being applied too broadly because the people obsessed with them don’t want to admit their limitations.
Ask someone with an articulation disorder. /s
Still, this is super reductive. Language can be feeling, it can be taste, it can be all sorts of things that are ineffable.
It is weird to me that the research into this somehow assumes there is a baseline of repeatability. At best, this is mimicry, imo.