What does it change when you add another model? I don't see how this lets us extract extra information.
What distinguishes two conjoined models from one model with a narrowing across the middle?
If the idea is to have two similar minds building a theory of each other, then I guess this could be informative, but first we'd have to establish that the models are "minds" in the first place. It's not clear to me what that requires.
Here's where I'm coming from: there have been a number of experiments aimed at teaching language to other species, but there is always the problem of figuring out to what extent the animals 'get' language. Take the case of the chimpanzee Washoe signing "water" and "bird" on first seeing a swan: was it, as some people contended, inventing a new phrase for picking out swans (or even aquatic birds in general), or was it merely making the signs for two different things in the scene before it? [1]
One thing that has not been observed (as far as I know) is two or more of these animal subjects routinely having meaningful conversations among themselves. That would be a much richer source of data, and I do not think it would leave much doubt that they 'got' language to a very significant degree.