The actual values in an NN depend on both the learning algorithm and the data it was fed during the training phase. You cannot tell where such a network is strategically weak unless you have seen everything it has seen.
Not necessarily. Unlike the human brain, NNs are rigidly organized: they can change their weights, but they cannot easily change their fundamental structure. That fixed organization can make them bad at certain tasks.
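To make that distinction concrete, here is a minimal sketch in PyTorch (an assumption on my part; the argument above doesn't name a framework). Training only updates the numbers inside the weight tensors; the layer types, sizes, and connectivity are frozen the moment the model is built.

```python
import torch
import torch.nn as nn

# The structure -- layer types, sizes, connectivity -- is fixed at construction.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on dummy data: only the weight values change.
x = torch.randn(8, 3, 32, 32)   # a fake batch of 32x32 RGB images
y = torch.randint(0, 10, (8,))  # fake labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()                # weights updated; structure untouched
```

No amount of training steps will give this model a new layer or a different wiring; that would require a human to rewrite the definition.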
For example, a CNN does well at recognizing objects in photos, but does poorly at recognizing a pencil sketch of an object it has never seen sketched, or at recognizing a scene at night when it has only seen daytime photos of it. Humans do those tasks very well because the trained human brain combines cultural understanding, physics intuition, and the visual cortex all at once, and a person can use that to easily beat, say, a massively trained CNN image-recognition program that lacks the cultural understanding and physics intuition.
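You can probe this yourself with a photo-trained classifier. A hedged sketch using torchvision's pretrained ResNet-18 (my choice of model; the file names are hypothetical placeholders for a photo and a sketch of the same object):

```python
import torch
from torchvision import models
from PIL import Image

# An ImageNet-trained classifier: its features reflect the photos it saw.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def top_label(path):
    """Return the model's best guess and confidence for one image file."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    idx = int(probs.argmax())
    return weights.meta["categories"][idx], float(probs[0, idx])

# Hypothetical inputs: a photo versus a pencil sketch of the same object.
print(top_label("pencil_photo.jpg"))
print(top_label("pencil_sketch.jpg"))
```

Typically the sketch or night-scene input lands far from the training distribution, and the model's guesses degrade accordingly, whereas a human recognizes both without effort.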
Some other, more complex NN structure may be able to tackle these kinds of tasks, but as long as its neural structure is rigid, it will have yet other deficiencies. The human brain can still structurally adapt in ways that NNs cannot, yet.