Based on the Quanta article, they're only measuring complexity as the size of the neural net needed to faithfully reproduce a neuron's input/output behavior (presumably when trained by SGD). That's probably only a weak upper bound on the neuron's true complexity, though: there may be more parsimonious ways to simulate the neuron that are expensive to represent in a neural net, or that are hard for SGD to find.
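To make the "more parsimonious simulation" point concrete, here's a toy sketch (my own illustration, not from the article): a leaky integrate-and-fire neuron, a deliberately oversimplified stand-in for the biological neurons the article discusses, can be simulated directly from its governing equation in a dozen lines, even though a feedforward net trained by SGD to match the same input/output map might need many units. The net's size upper-bounds the description length; it doesn't measure it.

```python
def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Return spike times for an input-current sequence (arbitrary units).

    A minimal Euler-integrated leaky integrate-and-fire model:
        dv/dt = (-(v - v_rest) + i_in) / tau,  spike + reset at v_thresh.
    """
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        # One Euler step of the membrane equation.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

# Constant suprathreshold drive produces regular spiking.
spikes = simulate_lif([1.5] * 200)
print(len(spikes))
```

The direct simulation is a short program; the complexity the article's method measures is instead how big a net SGD needs to imitate it, which can be much larger than the shortest description.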