I work in theoretical machine learning, and what I don't like about this piece is how casually the author assumes that what got us to the present state (hacky approaches that converge to something "good enough"; theoretical guarantees exist only when the model is very simple, e.g. a feedforward net, and even there open questions remain) will also get us to "AGI".
I believe that is unlikely. Here's a metaphor I often use when talking to engineers doing empirical ML work: in ancient times people built many impressive structures, such as pyramids and cathedrals. Through trial and error many rules of thumb were devised, and they more or less worked. But it's safe to say that things like earthquake-resistant skyscrapers and modern bridges cannot be built without deep theoretical insights into structural mechanics. These are highly optimized, intricate structures. The same is probably necessary to build the highly optimized, intricate models that would deliver what we now consider "AGI" - but then again, the world of ML is full of surprises.