
My bet would be that it will never happen in a reasonable time frame. And by that logic, writing that "documentation/spec" would just mean learning a new programming language the AI engine can parse, making the engine about as useful as a compiler. Anyone who has been writing and designing software for a while knows the cycle is far more complex than "take some input, write code".

Let me know when the AI engine can do complex refactoring, add features while keeping backwards compatibility, find a bug in a giant codebase by debugging a test case, or write code that's performant but also maintainable.




I agree, from a totally different angle. Let's take something I know better as an example: structural engineering. Structural engineering should be a "solved problem". Ostensibly, it's relatively simple compared to a more open-ended activity like "programming". (For "technical reasons", it ends up being more similar than you might think.) Still, you are ultimately dealing with the same materials, the same physics, and very similar configurations.

And yet, despite the fact that we have programs to help calculate all the things, test code-required load combinations, even run simulations and size individual components... it turns out it doesn't actually save that much work, and you still need an engineer to do most of it. And not just because of regulatory requirements. It's just that that's not the hard part. The hard part is assembling the components and specifications, specifying the correct loads based on location-specific circumstances, coming up with coherent and sensible design ideas, chasing down every possible creative nook and cranny of the code to make something that was originally a mistake actually work, and knowing when the model is just wrong for some reason and the computer isn't simulating load paths accurately.

Specifying the inputs and interpreting the results is still about as much work as it was before you started with all the fancy tools. Those tools still have advantages, mind you, and they do make one more efficient; substantially so in some cases, but most of the time it comes out as a slight assist rather than a major automation.


As a former structural engineer, I completely agree with this sentiment. For every engineering project I was involved in, the automatable parts amounted to at most 2 to 5% of the total work.


I hear that.

Machine Learning also has a long way to go before it can take a long, rambling mess of a meeting and somehow generate a halfway usable spec from it. I mean, the customer says they want X, but X is silly in this context, so we'll give them Y and tell them it's "X-like, but faster". For example, SQL is "Blockchain-like, but faster" for a lot of buzzword use cases of blockchain.


You ever notice how the "let me know when" part of this keeps changing? Let me know when computers can ... play Go/understand a sentence/compose music/write a program/ ...

But surely they'll never be able to do this new reference class you have just now come up with, right?


Not really? I mean, I would never say "let me know when a computer can do X" when X is something that doesn't require much creativity or imagination. A computer composing music doesn't impress me too much, because music itself has structure. A computer creating music that would wow a professional composer? That would be impressive. Same with this topic. A computer solves some short programming challenges (and fails several others), and OP says it will kill programming entirely? Not even close. Pretty cool, though.


It keeps changing because our intuitions about which tasks require intelligence are weak. We think that when a computer can do X it can also do Y. But then someone builds a computer that can do X but can't do Y, and we say "oh, so that doesn't require intelligence; let me know when it can do Z and we can talk again." That doesn't mean Z proves the computer is intelligent, just that Z is a point where we can look at it and discuss again whether we have made any progress. What we really want is a computer that can do Y, but we set small mini-tasks that are easier to test against.

The Turing test is a great example of this. Turing thought a computer would need to be intelligent to pass it. But it was passed by hard-coding a lot of responses and by a better understanding of human psychology: of what kind of conversation seems plausible when most things are hardcoded. That solution obviously isn't AI (I bet you don't think so either), but it still passed the Turing test.
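
To make that concrete, here's a toy ELIZA-style sketch in Python (purely illustrative, not any actual contest entry): a handful of hardcoded patterns plus plausible-sounding deflections already carries a short conversation surprisingly far.

    # Toy ELIZA-style responder: hardcoded patterns, zero understanding.
    # Illustrative only; real contest chatbots are far more elaborate,
    # but the principle (canned responses + plausible deflection) is the same.
    import random
    import re

    RULES = [
        (r"\bi feel (\w+)", "Why do you feel {0}?"),
        (r"\bmy (mother|father|family)\b", "Tell me more about your {0}."),
        (r"\byou are (\w+)", "What makes you think I am {0}?"),
    ]
    DEFLECTIONS = ["I see.", "Go on.", "Interesting. Why do you say that?"]

    def reply(text):
        for pattern, template in RULES:
            m = re.search(pattern, text.lower())
            if m:
                return template.format(*m.groups())
        return random.choice(DEFLECTIONS)  # plausible filler when no rule matches

    print(reply("I feel tired"))      # -> Why do you feel tired?
    print(reply("What is 17 * 23?"))  # -> a deflection, hiding that it can't do math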


At what point do we give up and realize that there is no one thing called intelligence, just a bunch of hacks that work pretty well for different things sometimes? I think that's probably where people keep failing here. The reason we keep failing to find the special thing in every new field that AI conquers is that there's nothing special to find? We could keep moving the goalposts (a sort of "intelligence of the gaps" argument), but that doesn't seem productive.


Possibly interesting trivium: automated debugging was first described in 1982, in Ehud Shapiro's PhD thesis "Algorithmic Program Debugging" (it's what it sounds like, and it can also generate programs by "correcting" an empty program):

https://en.wikipedia.org/wiki/Algorithmic_program_debugging

Of course, all this targeted only Prolog programs, so it's not well known at all.
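
For the curious, here's a minimal reconstruction of the "divide-and-query" idea in Python (Shapiro's system worked on Prolog clauses and queried a human oracle interactively; this sketch substitutes a reference implementation for the oracle):

    # Sketch of algorithmic debugging's divide-and-query idea, in Python
    # rather than Prolog. Record the (input, output) of every recursive call
    # of a buggy function, then query an oracle about each result: a call
    # that is wrong while its sub-call is right localizes the bug to that
    # call's own step.

    def buggy_sum(xs, calls):
        # Intended: sum of xs. Bug: '-' where '+' should be.
        result = 0 if not xs else xs[0] - buggy_sum(xs[1:], calls)
        calls.append((tuple(xs), result))
        return result

    def oracle(xs):
        return sum(xs)  # stands in for the human answering "is this result right?"

    def diagnose(calls):
        wrong = {xs: out != oracle(xs) for xs, out in calls}
        for xs, out in calls:
            if wrong[xs] and not wrong.get(xs[1:], False):
                return xs, out  # first wrong call whose sub-call was right

    calls = []
    buggy_sum([1, 2, 3], calls)
    print(diagnose(calls))  # -> ((2, 3), -1): the faulty step is in this call

Roughly speaking, the generation-by-correction trick is the same loop pointed the other way: start from a program that fails every oracle query and repair it until the oracle is satisfied.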


It's also the starting point for Inductive Logic Programming (as in Shapiro's "Model Inference System"), as I'm sure you know ;)


Let's say I'm aware of it :)



