
It works pretty well. You define a few "functions" and write a description of what each one does; when the user prompts, the model interprets the prompt and tells you which "function" it most likely needs, which is just the function name. I feel like this is a new way to program, a sort of fuzzy-logic style of programming.
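Roughly what that flow looks like, as a minimal sketch against the chat completions endpoint (same shape as OpenAI's curl example). The model name, the weather/stock functions, and their schemas here are just illustrative assumptions, not anything from the thread:

    import json, os, requests

    # Illustrative function specs: names, descriptions, and JSON Schemas you define yourself.
    functions = [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
        {
            "name": "get_stock_price",
            "description": "Look up the latest price for a ticker symbol",
            "parameters": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
            },
        },
    ]

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo-0613",
            "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
            "functions": functions,
            "function_call": "auto",
        },
    ).json()

    # If the model decided a function fits, it returns the function name plus JSON arguments.
    call = resp["choices"][0]["message"].get("function_call")
    if call:
        print(call["name"], json.loads(call["arguments"]))  # e.g. get_weather {'city': 'Tokyo'}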



> fuzzy logic

Yes and no. While the choice of which function to call depends on an LLM, ultimately you control the function itself, and its output is deterministic.

Even today, given an API, people can choose to call it or not based on some factor. We don’t call this fuzzy logic. E.g., people can decide to buy or sell stock through an API based on some internal calculations; that doesn’t make the system “fuzzy”.


If you feed that result into another I/O box, you may or may not know whether it’s the correct answer, so you may need some sort of error detection. I think this is going to be the majority of the use cases.
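A rough sketch of that kind of error detection, assuming a function_call object shaped like what the chat completions API returns; the safe_parse_arguments helper, the get_weather stub, and the example values are hypothetical. The idea is to treat the model's arguments as untrusted, validate them before executing, and only then feed the result into the next step:

    import json

    def safe_parse_arguments(function_call, required_keys):
        """Return parsed arguments, or None if the model's JSON is malformed or incomplete."""
        try:
            args = json.loads(function_call.get("arguments", ""))
        except json.JSONDecodeError:
            return None
        if not isinstance(args, dict) or not all(k in args for k in required_keys):
            return None
        return args

    def get_weather(city):                      # stand-in for your own deterministic function
        return {"city": city, "temp_c": 21}

    messages = []                               # the running conversation
    function_call = {"name": "get_weather",     # what the model proposed (example shape)
                     "arguments": '{"city": "Tokyo"}'}

    args = safe_parse_arguments(function_call, required_keys=["city"])
    if args is None:
        pass                                    # e.g. retry, or ask the model to fix its arguments
    else:
        result = get_weather(**args)
        messages.append({"role": "function", "name": "get_weather",
                         "content": json.dumps(result)})  # hand the output to the next step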


Hm, I see what you mean. Afaict, only the decision to call or not call a function is up to the model (fuzzy). Once it decides to call the function, it generates mostly correct JSON based on your schema and returns that to you as is (not very fuzzy).

It’ll be interesting to test APIs which accept user inputs. Depending on how ChatGPT populates the JSON, the API could be required to understand/interpret/respond to lots of variability in inputs.
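One way a receiving API might absorb that variability is to normalize model-populated fields before acting on them. A toy sketch, where the city field, the alias table, and the normalize_city helper are all made up for illustration:

    # The model might fill the same schema field several ways: "SF", "san francisco, CA", ...
    CITY_ALIASES = {"sf": "San Francisco", "san francisco, ca": "San Francisco",
                    "nyc": "New York"}

    def normalize_city(value: str) -> str:
        key = value.strip().lower()
        return CITY_ALIASES.get(key, value.strip().title())

    # Different spellings collapse to the same canonical value before the API acts on them.
    assert normalize_city("SF") == normalize_city("san francisco, CA") == "San Francisco"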


Yeah, I’ve tested it. You should use the curl example they gave, since you can test it instantly by pasting it into your terminal. The function descriptions are essentially prompt engineering on top of the original system prompt; I still need to test how the two interact, it’s so new.





