Feynman really liked continuous solutions. It's too bad he wasn't around for the deep learning era. Or for bufferbloat.
The "average number of 1 bits in a message address" makes me think of routing in the NCube. Like the Connection Machine, the NCube was a big array of small CPUs. Bigger CPUs than the CM, though, and running independent programs. The NCube had 2^N CPUs, up to 1024, and each was connected to N neighboring CPUs in N dimensions. Each CPU had an N-bit ID, representing its position in the binary N-dimensional array. The connections between CPUs were thus between ones that differed in address by exactly 1 bit.
So routing worked by taking the current CPU ID and XORing it with the destination ID. The 1 bits in the result marked the dimensions still to be crossed, each corresponding to a neighbor node to which the packet could be sent. Any of those links would work, and all such shortest paths are the same length. At the next node, there would be one fewer 1 bit in the difference. When there were no 1 bits left, the packet had arrived.
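A minimal sketch of that routing rule in Python (function names are my own, not NCube terminology): XOR the current and destination IDs, and each set bit gives a neighbor one hop closer.

```python
def next_hops(current: int, dest: int) -> list[int]:
    """Neighbors of `current` that are one hop closer to `dest`.

    Each 1 bit in current XOR dest is a dimension still to cross;
    flipping any one of those bits yields a valid next node.
    """
    diff = current ^ dest
    return [current ^ (1 << b)
            for b in range(diff.bit_length())
            if diff & (1 << b)]

def route(current: int, dest: int) -> list[int]:
    """Trace one shortest path by always flipping the lowest differing bit."""
    path = [current]
    while current != dest:
        current = next_hops(current, dest)[0]
        path.append(current)
    return path
```

For example, `route(0b0000, 0b1011)` takes three hops, one per differing bit: `[0b0000, 0b0001, 0b0011, 0b1011]`.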
Nodes need packet buffers. How much buffering is required? That's probably what Feynman was working on.
The number of 1 bits is a measure of distance to destination. The average number of 1 bits is a measure of network traffic. As a discrete problem, this is a mess. Feynman converted it to a continuous flow problem, for which there's known theory, some of which Feynman had developed. He'd done a lot of hydrodynamics work at Los Alamos.
Van Jacobson, who started as a physicist, also saw network congestion that way.