
This is my favorite part about LLMs and image generators: they are the ultimate rapid prototyping tool. I made a habit of using them heavily for coding and everything else, learned what works and what doesn't, and in the process bootstrapped a whole set of tools for myself and my team.



In your opinion, what are some examples of things that work and don't work? And can you talk about what the tools do?


It’s hard to tell what will work well and what won’t.

I asked it to create a Python library for the GT1151 touch screen controller and it came up with working code on the first try. Then I asked it to add support for that chip's gesture mode, and it came up with completely reasonable-looking but nonfunctional code, because it didn't understand the gesture mode implementation on the chip.
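
To give a sense of what I mean, here's roughly the shape of such a driver. This is only a minimal sketch from memory, not the code it produced: the I2C address, the register offsets (0x814E status, 0x814F point data), and the bus number are assumptions and should be checked against the Goodix datasheet and your wiring.

    # Minimal GT1151 touch read sketch over I2C using smbus2.
    # Register addresses and I2C address are assumptions; verify
    # against the Goodix GT1151 datasheet before relying on them.
    from smbus2 import SMBus, i2c_msg

    GT1151_ADDR = 0x14        # 0x5D is also common on these parts
    REG_STATUS  = 0x814E      # data-ready flag + number of touch points
    REG_POINT1  = 0x814F      # first 8-byte touch point record

    def _read(bus, reg, length):
        # Goodix chips use 16-bit register addresses: write the address,
        # then read back `length` bytes in one combined transaction.
        write = i2c_msg.write(GT1151_ADDR, [reg >> 8, reg & 0xFF])
        read = i2c_msg.read(GT1151_ADDR, length)
        bus.i2c_rdwr(write, read)
        return list(read)

    def _write(bus, reg, data):
        bus.i2c_rdwr(i2c_msg.write(GT1151_ADDR, [reg >> 8, reg & 0xFF] + data))

    def read_touches(bus):
        """Return a list of (track_id, x, y) tuples for active touches."""
        status = _read(bus, REG_STATUS, 1)[0]
        if not status & 0x80:          # data-ready flag not set yet
            return []
        count = status & 0x0F
        touches = []
        if count:
            raw = _read(bus, REG_POINT1, 8 * count)
            for i in range(count):
                rec = raw[8 * i: 8 * i + 8]
                x = rec[1] | (rec[2] << 8)
                y = rec[3] | (rec[4] << 8)
                touches.append((rec[0], x, y))
        _write(bus, REG_STATUS, [0x00])  # clear the status flag for the next frame
        return touches

    if __name__ == "__main__":
        with SMBus(1) as bus:          # bus 1 assumed (e.g. a Raspberry Pi)
            print(read_touches(bus))

The plain coordinate path is simple enough that the model got it right; the gesture mode needs chip-specific configuration it had no way of knowing.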


If you give it documentation on the chip, does it generate good code?


You used to be able to, but they took away browsing capability. However, you can still copy and paste documentation, and that works. But it was a lot more convenient when it could just look up the documentation :(


You probably can't. Unless it fits in 2-3 pages, it'll blow out the context window.


I received a message from ChatGPT on Friday telling me about Enterprise access, where text entry is unrestricted. No price quoted, sadly.


Arbitrary context length? Sounds improbable; did they change the entire way the generator works? Or is this just marketing, and they actually mean "really long context"? Because technical documentation is frequently longer than 100k tokens...
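
For scale, here's a rough way to check whether a plain-text dump of a datasheet would fit in a given window, using the tiktoken tokenizer. The cl100k_base encoding and the file name are assumptions; the right encoding depends on the model.

    # Rough sketch: count tokens in an extracted datasheet to see
    # whether it would fit in a given context window.
    import tiktoken

    def fits_in_context(path, context_window=100_000):
        enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
        with open(path, encoding="utf-8", errors="ignore") as f:
            n_tokens = len(enc.encode(f.read()))
        print(f"{path}: {n_tokens} tokens")
        return n_tokens <= context_window

    fits_in_context("gt1151_datasheet.txt")  # hypothetical file name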



