
> The innovation in AI really seems like it is being made on a thin line of engineering and compute.

This perfectly echoes my own thoughts. The advances being trumpeted in AI are functions of hardware advances that allow us to have massively overparameterised models, models which essentially 'make the map the size of the territory'[0], which is why they only succeed at a narrow class of interpolation problems. And even then nothing useful. That's why we're still being sold the "computer wins at board game" trope of the 90s, and yet somehow also being told that we're right on the verge of AGI.

(OK, it's not only that. There's also a healthy amount of p-hacking, and a 'clever Hans effect' where the developer, likely unconsciously, intervenes to ensure the right answer via all the shadowy 'configuration' knobs ('oversampling', 'regularisation', 'feedforward', etc.). I always say: if you develop a real AI, come show me a demo where it answers a hard question whose answer you - all of us - don't already know.)

[0] Or far larger, actually. Google the 'lottery ticket hypothesis'.




Eh, if you boil all research in AI/ML down to the binary of "AGI or bust," then sure, everything is a failure.

But, if you look at your smartphone, virtually every popular application the average person uses--Gmail, Uber, Instagram, TikTok, Siri/Google Assistant, Netflix, your camera, and more--all owe huge pieces of their functionality to ML that's only become feasible in the last decade because of the research you're referencing.


Sorry, I should have been clearer. I concede that stuff like applying kNN over ginormous datasets to find TV shows people like, or doing some matrix decomposition to correlate ('recognise') objects in photographs, is obviously useful in the trivial sense. It has uses; it wouldn't exist otherwise. I was thinking more on a higher level, about whether it has led to any truly epochal technological advances, which it hasn't.
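
(To make that 'trivially useful' point concrete, here is a minimal sketch of user-based kNN recommendation of the kind being described, in Python with NumPy. The ratings matrix and all names are made up for illustration; a real system would run this over enormous datasets, which is exactly the hardware point below.)

    # Minimal sketch of kNN-style recommendation; data and names are illustrative only.
    import numpy as np

    # rows = users, columns = shows; 0 means "not watched"
    ratings = np.array([
        [5, 4, 0, 1, 0],
        [4, 5, 1, 0, 0],
        [0, 1, 5, 4, 4],
        [1, 0, 4, 5, 5],
    ], dtype=float)

    def cosine_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def recommend(user_idx, k=2):
        # find the k most similar users and average their ratings
        others = [j for j in range(len(ratings)) if j != user_idx]
        sims = np.array([cosine_sim(ratings[user_idx], ratings[j]) for j in others])
        nearest = [others[i] for i in np.argsort(sims)[-k:]]
        scores = ratings[nearest].mean(axis=0)
        # only suggest shows the user hasn't watched yet
        scores[ratings[user_idx] > 0] = -np.inf
        return int(np.argmax(scores))

    print(recommend(0))  # index of the show to suggest to user 0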

Machine learning / neural nets also (like I said) get to claim credit for a hell of a lot of things which are really just products of colossal advances in hardware – simply of its having become possible to run statistical methods over very, very large '1:1 scale' sample sets – and not of any specific statistical technique (NN), which is not remotely new and has been heavily researched for 40-50 years now.


These are engineering marvels! This is engineering at its finest, applied math at its finest. So it's not a failure.

The way people hype AGI/AI/ML undervalues the actual effort behind these remarkable feats. There is so much effort being made to make this work. Deep learning works when it is engineered properly. So it is just another tool in the toolbox!

Look at how the graphics community is approaching deep learning. They already had sampling methods, but with MLPs (NeRFs) they are using the network as a glorified database. So it's engineering!
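
To illustrate that 'glorified database' point, here is a toy sketch of a coordinate MLP in the NeRF spirit: it simply memorises a fixed signal and is then queried like a lookup table. This is not an actual NeRF; the architecture, signal, and hyperparameters are arbitrary choices for illustration (PyTorch assumed).

    # Toy "coordinate MLP as database": memorise a mapping from coordinates to values,
    # then query it. All sizes and numbers here are arbitrary illustrative choices.
    import torch
    import torch.nn as nn

    # "Dataset": a fixed 1D signal we want the MLP to memorise
    x = torch.linspace(0, 1, 256).unsqueeze(1)      # coordinates
    y = torch.sin(20 * x) * torch.exp(-3 * x)       # values to store

    def posenc(coords, n_freqs=6):
        # Fourier-feature positional encoding, as used by NeRF-style models
        freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
        return torch.cat([torch.sin(coords * freqs), torch.cos(coords * freqs)], dim=-1)

    mlp = nn.Sequential(
        nn.Linear(12, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

    for step in range(2000):
        loss = ((mlp(posenc(x)) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # "Query the database": evaluate the MLP at an arbitrary coordinate
    print(mlp(posenc(torch.tensor([[0.37]]))))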

I want to underscore that AI/ML/DL research requires groundbreaking innovation not only in algorithms but also in hardware and software engineering.


I disagree; there are plenty of amazing advancements from the last 2 years that you can't write off like that (especially InstructGPT and DALL-E 2). For example, I worked on an ML project in document information extraction for 4 years, and recently tried GPT-3 - it solved the task zero-shot.
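
As an illustration of the kind of zero-shot usage being described, here is a rough sketch of prompting GPT-3 for document information extraction. It assumes the older openai Python client's Completion.create interface and a davinci-era model; the document text, field names, and prompt wording are invented for illustration and are not the commenter's actual setup.

    # Sketch of zero-shot information extraction with GPT-3.
    # Assumes the (older) openai Python client's Completion.create API;
    # the document and field names are invented for illustration.
    import openai

    openai.api_key = "sk-..."  # your API key

    document = """
    Invoice #10423
    Acme GmbH, Berlin
    Total due: EUR 1,250.00 by 2022-08-15
    """

    prompt = f"""Extract the following fields from the document as JSON:
    invoice_number, vendor, total_amount, currency, due_date.

    Document:
    {document}

    JSON:"""

    resp = openai.Completion.create(
        model="text-davinci-002",  # an instruction-tuned model of that era
        prompt=prompt,
        max_tokens=200,
        temperature=0,             # keep the extraction output deterministic-ish
    )
    print(resp.choices[0].text)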


> show me a demo where it answers a hard question whose answer you - all of us - don't already know.

For that, we need artificial comprehension, which we do not have. Artificial comprehension is the ability to reduce systems to their base components and then virtually operate on those base concepts: to define what is possible, to virtually recreate physical working systems, to virtually improve them, and to have those improvements be physically realizable. That is probably what will finally create AGI. We need a calculus for pure ideas, not just numbers.


I'm not really sure what you mean. This seems to be another instance of the weirdly persistent belief that "only humans can understand, and computers are just moving gears around to mechanically simulate knowledge-informed action". I may not believe in the current NN-focussed AI hype cycle, but that's definitely not a cogent argument against the possibility of AI. You're confusing comprehension with the subjective (human) experience of comprehending something.


> 'make the map the size of the territory'[0], which is why they only succeed at a narrow class of interpolation problems

I take it you have not seen the recent Dall-E 2 results? Clearly that model is not just working on a narrow space.

See https://openai.com/dall-e-2/ and the many awe-inspiring examples on Twitter



