
Not surprised. It's frighteningly good, and a perfect match for programming.

I often ask GPT4 to write code for something and test whether it works, but I seldom copy and paste the code it writes - I rewrite it myself to fit the context of the codebase. Still, it saves me a lot of time when I'm unsure how to do something.

Other times I don't like the suggestion at all, but that's useful as well, as it often clarifies the problem space in my head.




I used ChatGPT yesterday for code for the first time.

I gave it a nontrivial task I couldn’t google a solution for, and wasn’t sure it was even possible:

Given a Python object, give me a list of functions that received this object as an argument. I cannot modify the existing code, only how the object is structured.

It gave me a few ideas that didn't quite work (e.g. modifying the functions or wrapping them in decorators, or looking at the current stack trace to find such functions), and after some back and forth it came up with hijacking the Python tracer to achieve this. And it actually worked.
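
For the curious, here is a minimal sketch of what that tracer-hijacking approach can look like, using Python's sys.settrace hook. This is my own reconstruction, not the model's actual output, and find_receivers is a name I made up for illustration:

    import sys

    def find_receivers(target, entry_point):
        """Run entry_point() under a tracer and record the name of
        every function that receives `target` as an argument."""
        receivers = []

        def tracer(frame, event, arg):
            if event == "call":
                code = frame.f_code
                # At the 'call' event, f_locals holds the bound arguments.
                # Only inspect declared parameters (ignoring *args/**kwargs
                # for brevity).
                nparams = code.co_argcount + code.co_kwonlyargcount
                params = code.co_varnames[:nparams]
                if any(frame.f_locals.get(name) is target for name in params):
                    receivers.append(code.co_name)
            # The global tracer fires on every new frame's 'call' event;
            # returning None skips per-line tracing inside the frame.
            return None

        previous = sys.gettrace()
        sys.settrace(tracer)
        try:
            entry_point()
        finally:
            sys.settrace(previous)  # restore whatever tracer was there
        return receivers

    def helper(x):
        pass

    def worker(obj):
        helper(obj)

    sentinel = object()
    print(find_receivers(sentinel, lambda: worker(sentinel)))
    # ['worker', 'helper']

Since pdb relies on the same sys.settrace hook, installing your own tracer displaces the debugger - exactly the shortcoming mentioned below.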

The crazy thing is that I don't believe it encountered anything like this in its training set; it was able to put the pieces together, which is near human level. When asked, it easily explained the shortcomings of this solution (e.g. interfering with the debugger).


> The crazy thing is that I don't believe it encountered anything like this in its training set; it was able to put the pieces together, which is near human level. When asked, it easily explained the shortcomings of this solution (e.g. interfering with the debugger).

I have seen similar things. So, no, it's not just regurgitating its training dataset. The NN has some capacity for reasoning. That capacity is necessarily limited, given that it's feed-forward only and computing is still expensive. But it doesn't take much imagination to see where things are going.

I'm an atheist, but I have the feeling we will need to start believing in "And [man] shall rule over the fish of the sea and over the fowl of the heaven and over the animals and over all the earth and over all the creeping things that creep upon the earth" [1] more than we believe in merit as the measuring stick of social justice, if we are to apply that stick to non-human things.

[1] Genesis 1:26-27, Torah


The published article is not at all about programming tasks but about generating text as a "strategy consultant".

Some examples, found on page 10 of the original article:

   - Propose at least 10 ideas for a new shoe targeting an underserved market or sport.
   - Segment the footwear industry market based on users.
   - Draft a press release marketing copy for your product.
   - Pen an inspirational memo to employees detailing why your product would outshine competitors.
Nothing of real value imho.


> Nothing of real value imho.

Without the right target market, business model, and effective methods to reach customers, the most brilliant pair of shoes or piece of code can be useless (unless someone works to repurpose them as art or a teaching tool).


Each question is too generic, and there was apparently no specific input data to act upon. How can a valid business model be expected under those conditions?


I've also found the act of describing my problem to GPT4 is sometimes just as helpful as the answer itself. It's almost like enhanced rubber duck debugging.


So true. I've written entire prompts with several lines' worth of explanation, only to realize what my issue was and never hit the "send" button. Guess I should do that more often in life, in general.


We need an inverse GPT4-style LLM that doesn't provide answers but instead asks relevant questions.


GPT4 can do that too. Just show it something (code or text) and ask it to ask coaching questions about it.


I have tried adding prompts like this and it works really well. "Rather than giving me the answer, guide me using questions in the Socratic method".


This is one step removed from the "try different things until it works" style of programming.

Not to say you're one of those programmers, but it certainly enables those sorts of programmers.


And what's the harm in that? That's how I first started out programming decades ago.


Absolutely bonkers algorithms that no one can make sense of unless they dedicate time to studying and debugging them.

Also, it will have to be scrapped when anyone wants to tweak it a little. Sure, monkeys randomly typing on a typewriter will eventually write the greatest novel in existence... but most of it will be shit.

May $entity have mercy on your soul if the business starts bleeding tons of money due to an issue with the code, because the codebase won't


Perhaps the difference is that that attitude is fine when building hobby or SaaS apps adding no real value to the world. However, it's not the type of behavior we'd expect from engineers responsible for critical systems dealing with finance, health, etc.


It's a hell of an articulate rubber duck!

https://en.wikipedia.org/wiki/Rubber_duck_debugging


Beware of that practice. If for some reason you get too used to it, one day you may not have it available, and you won't know where to start when writing a function yourself.

It's similar to what happens to people who know a language (not a programming language), stop using it or fall back on a translator, and then find themselves unable to use it when they need to.



