Hacker News

  "This knowledge is perfectly fine to disseminate via traditional means, but God forbid an LLM share it!"
Barrier to entry is much lower.



How is typing a query in a chat window “much lower” vs typing the query in Google?


How is reading a Wikipedia page or a chemistry textbook any harder than getting step by step instructions? Makes you wonder why people use LLMs at all when the info is just sitting there.


A Google search requires

* Google to allow particular results to be displayed

* A source website to be online with the results

AI long-term will require one download, once, to have reasonable access to a large portion of human knowledge.


You can easily ask an LLM to return JSON results, and soon working code, for your exact query and plug them into another system for automation.
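A minimal sketch of that idea, using only the stdlib; `ask_llm` is a placeholder for whatever model API you'd actually call, with a canned reply so the example is self-contained:

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned reply here."""
    return '{"city": "Paris", "population": 2102650}'

def query_as_json(question: str) -> dict:
    # Ask the model to answer strictly as JSON, then parse it
    # so a downstream system can consume the result.
    raw = ask_llm(f"Answer in JSON only: {question}")
    return json.loads(raw)

result = query_as_json("What is the population of Paris?")
```

The point is just that the model's reply drops straight into `json.loads` and from there into any automation; the schema-drift problem discussed below is a separate issue.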


If you just ask for JSON, it'll make up a different schema for each new answer, and models get a lot less smart when you force them to follow a schema, so it's not quite that easy.


Chain of prompts can be used to deal with that in many cases.
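A rough sketch of the chaining idea: let the model answer free-form first (where it's at its smartest), then give it the narrow job of reformatting its own answer into the schema. `call_model` is a stub standing in for a real API, with canned replies:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; canned replies keyed on the prompt."""
    if "Extract" in prompt:
        return "boiling point of water, 100 C"
    return '{"quantity": "boiling point of water", "value": 100, "unit": "C"}'

def chained_query(question: str) -> str:
    # Step 1: free-form reasoning, unconstrained by any schema.
    draft = call_model(f"Extract the key fact: {question}")
    # Step 2: a mechanical reformatting pass, where the schema constraint
    # costs little because no reasoning is left to do.
    return call_model(f"Reformat as JSON with keys quantity/value/unit: {draft}")

out = chained_query("At what temperature does water boil?")
```

Splitting the work this way is the whole trick: the schema only constrains the second, dumber step.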

Also, the intelligence of these models will likely continue to increase for some time, based on expert testimony to Congress, which aligns with the evidence so far.


OpenAI recently launched Structured Outputs, so yes, schema following is not hard anymore.


Didn't they release a structured output mode recently to finally solve this?


It doesn't solve the second problem. Though I can't say how much of an issue it is, and CoT would help.

JSON also isn't an ideal format for a transformer model because it's recursive and transformers aren't, so they waste attention on balancing closing brackets. YAML and other indentation-based formats are better for this, IIRC. I also don't know how much this matters in practice.
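A quick way to see the difference: the same nested record in both formats. Every closing bracket in the JSON is a token the model has to emit in the right place, while the YAML-style version carries the same structure through indentation alone (a rough illustration, not a benchmark):

```python
import json

record = {"user": {"name": "Ada", "roles": ["admin", "dev"]}}

as_json = json.dumps(record)
# JSON closes every scope explicitly, so } and ] all have to balance.
closers = sum(as_json.count(c) for c in "}]")

# A hand-written YAML rendering of the same record has no closing delimiters:
as_yaml = "user:\n  name: Ada\n  roles:\n    - admin\n    - dev"
```

Here `closers` counts three closing brackets in the JSON and zero in the YAML; for deeply nested data the gap grows with nesting depth.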



