It might be a black box to you, but not in the same way the human brain is to researchers. We essentially understand how LLMs work. No, we can't reason about individual weights. But in broad terms the model assigns probabilities to possible next tokens based on their occurrences in the training set, and then picks one: sometimes the most likely (greedy decoding), sometimes a weighted-random choice (sampling), and often one shaped by additional training from human feedback (e.g. instruction tuning). It's not using its neurons to do fundamental logic, as the earlier posts in the thread point out.
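To make the "sometimes the most likely, sometimes a random one" part concrete, here's a toy sketch of just the decoding step. The vocabulary and scores are invented for illustration, not taken from any actual model:

```python
import math
import random

# A model has already produced one score (logit) per candidate next token;
# decoding turns those scores into a choice. These values are made up.
logits = {"mat": 2.1, "dog": 1.3, "moon": 0.2, "the": -1.0}

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)

# Greedy decoding: always take the single most likely token.
greedy = max(probs, key=probs.get)

# Sampling: draw a weighted-random token; a higher temperature flattens
# the distribution and makes less likely tokens come up more often.
sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs, greedy, sampled)
```

The "human feedback" part changes which tokens get high scores in the first place; it doesn't change the fact that the output of the network is a probability distribution that gets sampled.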
"But at least as of now we don’t have a way to 'give a narrative description' of what the network is doing. And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t 'figured out the science', and identified the 'natural laws' that allow us to summarize what’s going on."
Anyway, I don't see why you think that the brain is more logical than statistical. Most people fail basic logic questions, as in the famous Linda problem.[1]
The words "based on" are doing a lot of work here. No, we don't know what it learns from its training data, nor do we know what sorts of reasoning it does, and the link you sent doesn't disagree.
We know that the relative locations of tokens in the training data influence the relative locations of the predicted tokens. Yes, the specifics for any given set of related tokens are a black box, because nobody is going to analyze billions of weights for every token of interest. But it's a statistical model, not a logic model.
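As a toy sketch of what "statistical, not logical" means at the level of token locations, here is a bigram counter. A real LLM's learned weights generalize far beyond raw counts, but the objective is the same: predict the next token from how tokens appeared relative to each other in the training text. The corpus here is invented for the example:

```python
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which token followed which in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def next_token_distribution(token):
    """Predict the next token purely from relative frequency of co-occurrence."""
    counts = follows[token]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# "the" was followed by "cat" twice, "mat" once, and "fish" once, so "cat"
# gets the highest probability -- no logic involved, just counts.
print(next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```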
Stephen Wolfram explains this in simple terms.[0]
0: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...