>to what degree my own internal models of my friends and family are actually mini-versions of them
I'd say there's still a lot of inference in your models about what goes on inside them. They're extrapolated from outward behavior, and there are things you don't know about people because they never show them.
In 'Revelation Space' by Alastair Reynolds, there's the idea of a "beta simulation" of a person: an AI agent built from external observations of the person and their interactions. These could seem very similar to the simulated person but couldn't actually generate any new responses, only reply with appropriately chosen canned ones. There were also "alpha simulations", which were whole-brain uploads and basically the actual person (at least until they all went mad...)
It's funny that in most SF, the brain is treated as the only thing you need to copy to be a person.
But given that we have neurons in the heart and gut, that the chemicals in our body and our digestive activity affect our behavior, and that we discover every day that a lot of "us" is actually made up of alien micro-organisms playing a role in almost every part of our daily lives, including hormone management (a crucial piece of our reaction puzzle), I think it's far from realistic.
I can't wait to see the first people uploading their brains into a machine, only to discover that:
- A, it's just a copy, not them;
- B, the copy is not even close to the original;
- C, they feel they're missing something but can't express what, because the missing thing is what helped them define it.
I swear, we geeks love to solve perfect problems with perfect solutions.
Like the joke about the physicist who can cure a chicken, but only if the chicken is a perfect sphere in a frictionless vacuum.