I don't think people understand the monumental changes coming to software (twitter.com/amasad)
9 points by kbuchanan on Dec 6, 2022 | 30 comments



>The last major productivity boost in software was OSS. Each of those steps was 10-100x boost but then it stopped...

I don't think this author understands what a productivity boost is. OSS is a development model; it didn't deliver any special "productivity boost" beyond what the general technology level (including mainly proprietary technology) offered.

>Programmers will command armies of software agents to build increasingly complex software in insane record times. Non-programmers will also be able to use these agents to get software tasks done. Everyone in the world will be at least John Carmack-level software capable.

/rolls eyes

>At Replit, we're building an AI pair programmer that uses the IDE like a human does and has full access to all the tooling, open-source software, and the internet.

Ah, OK, this is building up commercial hype. Makes sense now.


OSS allows really hard pieces of software to be built; most FOSS could not have been built if it weren't FOSS. What is the biggest and/or hardest proprietary project? Windows 7+? A modern chipset, including the CPU? The Apple jail? And what would happen to that technology if all the source code somehow leaked? Don't you believe it would become 10x-100x better by some measures?


>OSS allows really hard pieces of software to be built

Like what, that one couldn't build with proprietary software? The main benefit is that with OSS you're doing it with free (as in beer) software (whereas before you might not have been able to afford a proprietary solution). But that is not a productivity increase, it's a budget decrease.

>And what will happen with this technology if all the source code leaks somehow, don't you believe it will become 10x-100x better by some measures?

No.


  The main benefit is with OSS you're doing it with free (as in beer)
That is a benefit for the user, but we were talking about benefits for someone who codes.

  Like what that one couldn't build with proprietary software?
An OS which deserves to be trusted. Cryptocurrency. A web browser which is not IE6. A Lisp interpreter.


I thought he just meant access to thousands of OSS packages that are easy to leverage: npm, NuGet, etc.


Per Kernighan, debugging is twice as hard as writing the code in the first place. If the AI jockeys don't understand what they're being given, man, it's going to be humorous watching them put out fires. And how can statistically fit models exceed their training set without going random? How is AI going to string together equations to do physics or engineering? They're bloody squiggly symbols and letters.

And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.


Imagine if the AI can provide both a program and a machine-verifiable proof of correctness though. Then all you have to manually verify is that the proof proves the right thing.
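For a toy picture of what "program plus machine-checkable proof" means, here's a minimal Lean 4 sketch (the function and theorem names are invented for the example):

  -- A trivial "program"...
  def double (n : Nat) : Nat := n + n

  -- ...and a machine-checked proof that it meets its specification.
  theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
    unfold double
    omega

The manual step remains exactly what's described above: reading the theorem statement and convincing yourself it actually says what you wanted.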


So the only thing a developer has to deliver is an unambiguous specification of what the program is supposed to do!


That sounds hard to do in a natural language. Maybe they could deliver the unambiguous specification in a formal language designed specifically for the purpose of specifying what programs are supposed to do.


If I could get unambiguous specifications, then my job might actually be in trouble. It would cut my workload down considerably.


I was half joking, because software development is largely the process of developing an unambiguous specification - just in the form of executable code.


That puts a premium on the ability to express requirements clearly and logically, on the fly rather than as part of a cycle of paper-pushing.


How will the AI overcome the halting problem?


You can't in general determine if an arbitrary program will halt. I believe it is, however, possible to construct a program that provably halts. I have no idea if you can write most useful programs that way, though.


Almost all programs that people write are written so that they provably halt (or provably don't halt - as continuing forever is sometimes the desired behavior).

How do we do that? By using known-halting parts and composing them in known-halting ways, as the sketch below illustrates. How did we find those parts in the first place? Well, computer science. But boiling that down to a codified process is where it gets murkier. I think we don't really know exactly how human ingenuity discovers new useful halting parts in the face of the halting problem.
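A tiny Kotlin illustration of "known-halting parts composed in known-halting ways" (function names invented for the example):

  // Iterating over a finite collection always terminates.
  fun sum(xs: List<Int>): Int {
      var total = 0
      for (x in xs) total += x
      return total
  }

  // Recursion on a strictly decreasing non-negative argument also
  // terminates, and composing such parts preserves termination.
  tailrec fun countdown(n: Int): Int =
      if (n <= 0) 0 else countdown(n - 1)

Neither construct can express an arbitrary while(true) loop, which is exactly why termination is provable for them.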


Easy - it will come up with a solution too complicated for mere humans to understand, at which point everyone will assume it is correct.


The same way that humans do. By redefining "doesn't halt" as "didn't halt before my patience ran out".


Well humans haven't overcome it, and we muddle through. Same with AI, it seems.


"Oooh, shiny" that doesn't work has a very short run in the marketplace.


> Everyone in the world will be at least John Carmack-level software capable.

lol

I'm sure that for simple tasks, AI-based pair programming will offer some level of acceleration, but until it can understand the semantics of the code it's generating, and how it fits into the broader _system_, it will not be able to be trusted. I do not look forward to a world where I have to spend my time debugging AI-generated code.


I'm not saying it will reach Carmack levels of proficiency any time soon, but have you tried pasting a non-trivial method into ChatGPT and asking the AI what the method does and how the code can be improved? I was very impressed by the way it was able to explain the code and suggest improvements.


Not myself, but I've seen a bunch of examples folks have posted on Twitter, and to be fair, they were all surprisingly accurate.

As far as I understand it, though, these explanations are based on similar code contained in the training dataset, rather than any kind of ability to reason about what's actually going on, no?


They are indeed based on similar code in the training dataset, but it also takes into account the context of the code in the conversation thread. The initial code it generates is already impressive and pretty accurate, but to me the real power comes out when iterating on the code through follow-up conversation.

For example, I asked it to write an implementation for a piece of Kotlin code I provided describing an interface for a CRUD TaskRepository with typical `addTask`, `removeTask`, `getTask(id: UUID): Task?`, `getTasks(parentId: UUID): List<Task>` methods. It generated a TaskRepositoryImpl class backed by a `Map<UUID, Task>`, and all method implementations were correct, including checking a projectId property in the getTasks method. Note that I never gave it the code for the Task class.
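Reconstructed from memory, it was roughly this shape (the Task fields here are my assumption, since I never showed it that class):

  import java.util.UUID

  data class Task(val id: UUID, val parentId: UUID, val title: String)

  interface TaskRepository {
      fun addTask(task: Task)
      fun removeTask(id: UUID)
      fun getTask(id: UUID): Task?
      fun getTasks(parentId: UUID): List<Task>
  }

  class TaskRepositoryImpl : TaskRepository {
      private val tasks = mutableMapOf<UUID, Task>()

      override fun addTask(task: Task) { tasks[task.id] = task }
      override fun removeTask(id: UUID) { tasks.remove(id) }
      override fun getTask(id: UUID): Task? = tasks[id]
      override fun getTasks(parentId: UUID): List<Task> =
          tasks.values.filter { it.parentId == parentId }
  }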

Then I iterated on it by asking "Add save and load methods that store state in a JSON file" and it did just that: introducing the Gson library, serializing the backing map to a JSON string before writing it to the file, then loading the file again and deserializing it back into the map.
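The save/load code looked something like this (again my reconstruction, not its verbatim output):

  import com.google.gson.Gson
  import com.google.gson.reflect.TypeToken
  import java.io.File

  // Added inside TaskRepositoryImpl: persist the backing map as JSON.
  fun save(file: File) {
      file.writeText(Gson().toJson(tasks))
  }

  fun load(file: File) {
      val type = object : TypeToken<Map<UUID, Task>>() {}.type
      val restored: Map<UUID, Task> = Gson().fromJson(file.readText(), type)
      tasks.clear()
      tasks.putAll(restored)
  }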

After that I asked "Add a SharedFlow<TaskEvent> that publishes events when creating, removing and updating tasks" and it was able to infer where to emit events, providing sensible names like `TaskEvent.Created`.
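The event plumbing came out roughly like this (reconstruction; the buffer size is my own choice):

  import kotlinx.coroutines.flow.MutableSharedFlow
  import kotlinx.coroutines.flow.SharedFlow

  sealed class TaskEvent {
      data class Created(val task: Task) : TaskEvent()
      data class Updated(val task: Task) : TaskEvent()
      data class Removed(val id: UUID) : TaskEvent()
  }

  // Inside TaskRepositoryImpl: a private mutable flow, exposed read-only.
  private val _events = MutableSharedFlow<TaskEvent>(extraBufferCapacity = 64)
  val events: SharedFlow<TaskEvent> = _events

  // e.g. at the end of addTask:
  //     _events.tryEmit(TaskEvent.Created(task))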

In another test case I asked it to write an Android layout XML for a login screen with username, password, a login button, and a login spinner, which it did. I iterated once, asking it to use components from the Material library and to make sure the spinner was overlaid over the form. Then I followed up with "Write the corresponding activity code in Kotlin". It generated a full LoginActivity class that loads the layout XML, does findViewById for the fields and buttons, validates input with error handling, shows/hides the login spinner while logging in, and starts the MainActivity when login succeeds; it even mocked the login method with an artificial delay of 2 seconds. Its explanation of the code noted that the delay was there to simulate a login, and that in real code you would replace it with an HTTP call or a call to a loginService.
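Compressed, the activity it produced was along these lines (my reconstruction; the resource ids and MainActivity are assumptions):

  import android.content.Intent
  import android.os.Bundle
  import android.view.View
  import android.widget.Button
  import android.widget.EditText
  import android.widget.ProgressBar
  import androidx.appcompat.app.AppCompatActivity

  class LoginActivity : AppCompatActivity() {
      override fun onCreate(savedInstanceState: Bundle?) {
          super.onCreate(savedInstanceState)
          setContentView(R.layout.activity_login)

          val username = findViewById<EditText>(R.id.username)
          val password = findViewById<EditText>(R.id.password)
          val loginButton = findViewById<Button>(R.id.login_button)
          val spinner = findViewById<ProgressBar>(R.id.login_spinner)

          loginButton.setOnClickListener {
              if (username.text.isBlank()) { username.error = "Username required"; return@setOnClickListener }
              if (password.text.isBlank()) { password.error = "Password required"; return@setOnClickListener }

              spinner.visibility = View.VISIBLE
              // Mocked login: a 2-second delay standing in for a real
              // HTTP call or loginService, per its own explanation.
              loginButton.postDelayed({
                  spinner.visibility = View.GONE
                  startActivity(Intent(this, MainActivity::class.java))
                  finish()
              }, 2000)
          }
      }
  }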

And have a look at these snippets of conversation where I asked it to improve a dueLabelString method I provided. https://imgur.com/a/ZtViC3d

Is this "reasoning about the code"? I don't know and it's hard to say.


Damn. That is impressive.


I was skeptical about AIs writing code, but after playing with ChatGPT for a bit I have to adjust my views.

I think tools like this can be great for generating skeletons and draft implementations for simple CRUD-like things. For example I asked it "write an Android layout XML for a login screen with username, password and a login spinner using components from the material library" and it did exactly that. I followed up with "write the corresponding activity in Kotlin" and it did. It generated a correct implementation, including a few paragraphs explaining how it worked and that it mocked the login method with an artificial delay for demo purposes.

Another thread that convinced me was when I gave it a Kotlin interface for a CRUD TaskRepository and asked it to write the implementation. It wrote a correct implementation backed by a Map. With some follow-up prompts it was able to write save/load methods to store state in a JSON file, and to emit events to a Flow whenever a task was created, updated or deleted.

Another one: I asked it how I could debug why a GStreamer pipeline had a refcount of 2 after the pipeline stopped running, and it pointed me to a number of debug tools and environment variables I could set to trace refs in the pipeline.


It's common for professionals in a field to be skeptical about massive changes coming their way, even when those changes later prove to be significant.

However, if there are courses, videos, or detailed documentation about this new way of doing software development, I'd be interested in looking at them.


Yes, AI will improve things. They said that 30 years ago. Even MIT had an AI lab.

At the same time, testing has not really improved in the last 30 years.


> AI is the next 100x productivity boost.

I do not agree with this statement. There has been no real progress in AI since the crypto winter. It's just that there are too many people with always-online smartphones, so governments consider this field too big to be left outside their control. And that has led to a 100x increase in no-brain programming jobs where all that's needed from that kind of programmer is to fight against users.

The author is right that big changes are coming, just not the changes he is writing about.


AI will definitely, eventually, allow one person to perform work that would take 100 people today, and more. It's pretty easy to see the path there by extrapolating from what's happening in image/video diffusion models right now, and the language models have shown that they can generalize across tasks well into basic problem solving, to the extent that a problem is similar to something that has been solved many times before. Simple tools built using today's models could easily double the productivity of an artist or writer, so we're somewhere between 1 and 2 orders of magnitude off, which seems very achievable to me given the progress of the last 50 years.


Stable Diffusion doesn't do that work: it cannot design a logo for my grocery shop without a designer, and it cannot even give me a PSD or SVG file.

> Simple tools built using the models of today could easily double the productivity of an artist or writer,

This message would be more useful if you gave some searchable names of such pieces of AI software.



