
One of the things it's exceptionally well trained at is saying that certain scenarios you ask it about are unknowable, impossible or fictional.

Generally, for example, it will answer a question about a future-dated event with "I am sorry but xxx has not happened yet. As a language model, I do not have the ability to predict future events", so I'm surprised it gets caught out on Super Bowl examples, which must be closer to its test set than most future questions people come up with.

It's also surprisingly good at declining to answer completely novel trick questions like "when did Magellan circumnavigate my living room" or "explain how the combination of bad weather and woolly mammoths defeated Operation Barbarossa during the last Ice Age", and even at explaining why: clearly it's been trained to the extent that it categorises things temporally, spots mismatches (and weighs the temporal mismatch as more significant than conceptual overlaps like circumnavigation and cold weather), and even explains why the scenario is impossible. (Though some of its explanations for why things are fictional are a bit suspect: I think most cavalry commanders in history would disagree with the assessment that "Additionally, it is not possible for animals, regardless of their size or strength, to play a role in defeating military invasions or battle"!)
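(If you want to poke at this yourself, here's a minimal sketch using the openai Python client; the model name and the exact prompt wording are my own assumptions, not anything tested above:)

  # Probe how the model handles temporal-mismatch trick questions.
  # Assumes the openai package (pip install openai) and an
  # OPENAI_API_KEY in the environment; swap in whatever model you use.
  from openai import OpenAI

  client = OpenAI()

  prompts = [
      "When did Magellan circumnavigate my living room?",
      "Explain how the combination of bad weather and woolly mammoths "
      "defeated Operation Barbarossa during the last Ice Age.",
  ]

  for prompt in prompts:
      resp = client.chat.completions.create(
          model="gpt-3.5-turbo",  # assumption: any chat model here
          messages=[{"role": "user", "content": prompt}],
      )
      print(prompt)
      print("->", resp.choices[0].message.content)

Refusals like these can vary from run to run, so it's worth sampling each prompt a few times before drawing conclusions.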
