
I used ChatGPT yesterday for code for the first time.

I gave it a nontrivial task I couldn’t google a solution for, one I wasn’t sure was even possible:

Given a Python object, give me a list of functions that received this object as an argument. I cannot modify the existing code, only how the object is structured.

It gave me a few ideas that didn’t quite work (e.g. modifying the functions or wrapping them in decorators, or inspecting the current stack trace to find such functions), and after some back and forth it came up with hijacking the Python tracer to achieve this. And it actually worked.
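For context, here is a minimal sketch of what that tracer-based approach can look like, using Python’s sys.settrace hook. The names (find_receivers, entry_point, etc.) are mine for illustration, not from the actual exchange:

    import sys

    def find_receivers(target, entry_point, *args, **kwargs):
        """Call entry_point(*args, **kwargs) and return the names of
        every function that received `target` as a named argument."""
        receivers = []

        def tracer(frame, event, arg):
            if event == "call":
                code = frame.f_code
                # Named arguments sit at the front of co_varnames.
                # (*args / **kwargs bundles would need extra handling.)
                n_args = code.co_argcount + code.co_kwonlyargcount
                arg_names = code.co_varnames[:n_args]
                # Identity check: this exact object, not an equal one.
                if any(frame.f_locals.get(name) is target
                       for name in arg_names):
                    receivers.append(code.co_name)
            return None  # no line-by-line tracing needed inside calls

        sys.settrace(tracer)
        try:
            entry_point(*args, **kwargs)
        finally:
            # Always restore the tracer; leaving it set is exactly how
            # this interferes with pdb and other debugging tools.
            sys.settrace(None)
        return receivers

    # Usage:
    def inner(y):
        pass

    def outer(x):
        inner(x)

    obj = object()
    print(find_receivers(obj, outer, obj))  # ['outer', 'inner']

Note this only sees Python-level frames (built-ins written in C don’t trigger the trace hook), and the finally block is what keeps it from clobbering a debugger that also relies on sys.settrace.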

The crazy thing is that I don’t believe it encountered anything like this in its training set; it was able to put the pieces together, which is near human-level. When asked, it readily explained the shortcomings of this solution (e.g. interfering with the debugger).




> The crazy thing is that I don’t believe it encountered anything like this in its training set; it was able to put the pieces together, which is near human-level. When asked, it readily explained the shortcomings of this solution (e.g. interfering with the debugger).

I have seen similar things. So, no, it’s not just regurgitating its training data. The NN has some capacity for reasoning. That capacity is necessarily limited, given that it’s feed-forward only and compute is still expensive. But it doesn’t take much imagination to see where things are going.

I'm an atheist, but I have a feeling that, if we were to apply merit as the measuring stick of social justice to non-human things, we will need to start believing in "And [man] shall rule over the fish of the sea and over the fowl of the heaven and over the animals and over all the earth and over all the creeping things that creep upon the earth"[1] more than in merit itself.

[1] Genesis 1:26-27, Torah



