jarofghosts's comments

Alternative Intelligence


Hallucinating is roughly how they work; we just label it as such when the output is obviously weird


This is something I'm not sure people understand.

LLMs only make a "best guess" for each next token. That's it. When that guess is wrong we call it a "hallucination", but really the entire output was a "hallucination" to begin with.
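
A minimal sketch of what that guessing loop looks like, using the Hugging Face transformers API (the "gpt2" model and the prompt here are just stand-ins, not anything the comment specifies):

    # Greedy next-token decoding: at every step the model just
    # emits its single most probable next token.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits        # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()  # the "best guess" for the next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))
    # Nothing in this loop distinguishes a "true" token from a
    # "hallucinated" one; it's the same guess either way.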

This is also analogous to humans, who likewise "hallucinate" incorrect answers, and who tend to "hallucinate" less when told to "think through this step by step before giving your answer", etc. A rough illustration of that nudge follows below.
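
As a sketch of what that "step by step" nudge looks like in practice (the question and the exact wording are just placeholder assumptions):

    # Two ways to ask the same question; the second tends to elicit
    # intermediate reasoning tokens before the final answer.
    question = ("A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    direct_prompt = question
    cot_prompt = ("Think through this step by step before giving "
                  "your answer.\n" + question)
    # Conditioning on its own intermediate tokens often (not always)
    # steers the model's next-token guesses toward the right answer
    # ($0.05 here, where the knee-jerk guess is $0.10).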


And certainly there is no price to be paid for this accelerated garbage-production loop!


I'd recommend giving tempeh a try. I like that it has a more substantial texture than tofu and it soaks up flavor really nicely.


The problem is that it's killing people who aren't in cars, see: https://www.nytimes.com/2019/10/22/us/pedestrian-cyclist-dea...


New York expanded its bike lanes a few years ago; I would bet that has had a much bigger impact.


That rate only applies to fatalities resulting from motor vehicle collisions, and it is likely going down because we keep making vehicles more and more tank-like. See: https://www.nytimes.com/2019/10/22/us/pedestrian-cyclist-dea...


I was fortunate enough to attend a talk given by the person primarily responsible for writing it, and he (as well as some people already using it) said they see no discernible performance difference.

