perhaps, but i would think that if you threw another tool (a different way to relate some inputs) at a neural net, it might figure out some way to exploit it, even if it's not clear to you or me. just like an RL agent sometimes finds + exploits bugs in the environment.
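a toy sketch of the kind of exploit i mean (hypothetical environment, plain tabular Q-learning, everything here is made up for illustration): the reward check has a bug that also fires at an unintended state one step from the start, and the agent learns to trigger the bug instead of walking to the real goal.

```python
import random

# toy chain environment: states 0..4, start at 1, intended goal at 4 (reward 1).
# BUG (deliberate, for illustration): the reward check also fires at state 0,
# which is one step from the start -- an unintended shortcut.
N = 5
START, GOAL = 1, 4

def step(state, action):  # action: 0 = left, 1 = right
    nxt = max(0, min(N - 1, state + (1 if action else -1)))
    reward = 1.0 if (nxt == GOAL or nxt == 0) else 0.0  # `nxt == 0` is the bug
    done = reward > 0
    return nxt, reward, done

# plain tabular Q-learning with epsilon-greedy exploration
Q = [[0.0, 0.0] for _ in range(N)]
random.seed(0)
for _ in range(2000):
    s, done = START, False
    while not done:
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

# the agent settles on going LEFT from the start: the buggy reward is one
# undiscounted step away, while the real goal is three discounted steps away.
print("best action at start:", "left" if Q[START][0] > Q[START][1] else "right")
```

nobody told it about the bug; it just falls out of value maximization, which is the point: hand the net a new way to relate inputs and it will happily route through whatever unintended shortcut that tool opens up.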