AI agents that “self-reflect” perform better in changing environments (stanford.edu)
215 points by chdoyle on July 6, 2023 | 43 comments



So from this Hacker News title I definitely thought it was saying that when you give AI agents self-reflection, maybe by adding an internal monologue loop, they unlock an emergent animal-like exploration behavior.

But this is not what happened. Instead, some guys told AI agents to explore in the way that the guys think that animals explore. "Stanford researchers invented the “curious replay” training method based on studying mice to help AI agents"


Author here. A key point is that we didn't prescribe that the mechanism of exploration be the same; rather, we found that the AI agent explored poorly (i.e. unlike animals) until we included Curious Replay. Interestingly, we found that the benefits of Curious Replay also led to state-of-the-art performance on Crafter.


OK, here is the arxiv paper, "Curious Replay for Model-based Adaptation": https://arxiv.org/abs/2306.15934. From the abstract: "we present Curious Replay -- a form of prioritized experience replay tailored to model-based agents through use of a curiosity-based priority signal" and "DreamerV3 with Curious Replay surpasses state-of-the-art performance on Crafter". Here is the Crafter benchmark: https://github.com/danijar/crafter (though the baselines at the bottom of that page appear to be out of date).
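
To make the mechanism concrete, here's a rough sketch of what a curiosity-weighted prioritized replay buffer could look like. This is not the authors' code; I'm assuming a priority that mixes a visit-count bonus with the world model's loss on each transition, which may not match the exact signal in the paper.

    import numpy as np

    class CuriousReplayBuffer:
        """Toy prioritized replay buffer where sampling priority comes from a
        curiosity-style signal: rarely replayed transitions and transitions the
        world model still predicts poorly are sampled more often."""

        def __init__(self, capacity, count_weight=1.0, loss_weight=1.0, eps=1e-3):
            self.capacity = capacity
            self.count_weight = count_weight
            self.loss_weight = loss_weight
            self.eps = eps
            self.transitions = []   # stored (obs, action, reward, next_obs) tuples
            self.visits = []        # how many times each item has been replayed
            self.model_loss = []    # last world-model loss measured on each item

        def add(self, transition):
            if len(self.transitions) >= self.capacity:
                self.transitions.pop(0); self.visits.pop(0); self.model_loss.pop(0)
            self.transitions.append(transition)
            self.visits.append(0)
            self.model_loss.append(1.0)  # optimistic initial loss so new data gets sampled

        def priorities(self):
            visits = np.asarray(self.visits, dtype=np.float64)
            losses = np.asarray(self.model_loss, dtype=np.float64)
            # Rarely-replayed and poorly-modelled transitions get higher priority.
            p = self.count_weight / np.sqrt(visits + 1.0) + self.loss_weight * losses + self.eps
            return p / p.sum()

        def sample(self, batch_size):
            idx = np.random.choice(len(self.transitions), size=batch_size, p=self.priorities())
            for i in idx:
                self.visits[i] += 1
            return idx, [self.transitions[i] for i in idx]

        def update_losses(self, idx, losses):
            # Call after a world-model training step with fresh per-item losses.
            for i, loss in zip(idx, losses):
                self.model_loss[i] = float(loss)

The point is just that the transitions the world model handles worst (or hasn't replayed much) get used more when training the model, instead of sampling the buffer uniformly.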

That arxiv stuff looks perfectly normal but I kind of hate how it got more and more caricatured as it went through the university press office and hacker news clickbait pipeline.


That’s standard. Others in my PhD cohort and I have had experiences where we saw so many minor inaccuracies in the copy that we only fixed things that were flat-out wrong; otherwise we'd have had to rewrite the whole article. It's the result of a combination of non-experts having a 30-minute conversation with you and then writing from their notes a week later, and the fact that their job is to hype up research so it gets more attention from a broader audience. Everyone I knew said they wouldn't let that happen to them when the press office called, but rewriting someone's whole article because you feel they missed nuances is a hard stance to take, especially as an early-career researcher.


Yes, it's better now that the HN mods have changed the headline.


It's very cool work.

I've been wondering for a while what the next steps in adding 'inefficiencies' to AI processing would look like, commenting the other day to a friend that what's needed in the next 18 months is getting AI to be able to replicate the Eureka moments in the shower where latent information is reconstructed in parallel to processing tangential topics.

Going from "attention is all you need" to "attention and curiosity is what you need" seems like a great next step!


> getting AI to be able to replicate the Eureka moments in the shower where latent information is reconstructed in parallel to processing tangential topics.

I've been playing with this part specifically and it's really amazing stuff.

Having the model concurrently model internal monologue and output for a task, but allowing the internal monologue to be as focused or unfocused as the model sees fit.

You end up with situations where it's working on a naming task, for example, and the model starts imagining the warmth of a coffee cup on the desk, or traffic building up outside before a future appointment with a non-existent person, and then returns to the task at hand with non-obvious tangents it probably never would have uncovered if it were only predicting tokens related to the original goal of naming something.

It gets even more interesting when you inject variability into the process via the API (for example, telling it to use certain letters pulled from an RNG inside the next iteration of internal monologue).
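
Here's roughly the shape of the loop, heavily simplified. call_llm is just a stand-in for whatever client you're using, and the prompt wording is made up for illustration:

    import random
    import string

    def call_llm(prompt: str) -> str:
        """Placeholder: swap in your actual chat/completion client here."""
        return "(model output would appear here)\nANSWER: (candidate answer)"

    def monologue_step(task, monologue_so_far, seed_letters):
        """One iteration: ask for an unconstrained inner-monologue paragraph that may
        wander off-task, plus a task-focused answer that can draw on the monologue.
        The RNG-seeded letters are a cheap way to push the monologue somewhere it
        wouldn't have gone on its own."""
        prompt = (
            f"Task: {task}\n\n"
            "Inner monologue so far:\n" + "\n".join(monologue_so_far) + "\n\n"
            "Write the next paragraph of inner monologue. It may drift to unrelated "
            "sensory details or tangents, but work in words starting with the letters "
            f"'{seed_letters}'. Then, on a new line starting with ANSWER:, give your "
            "current best attempt at the task, using anything useful from the monologue."
        )
        raw = call_llm(prompt)
        monologue, _, answer = raw.partition("ANSWER:")
        return monologue.strip(), answer.strip()

    def run(task, iterations=4):
        monologue, answer = [], ""
        for _ in range(iterations):
            seed = "".join(random.sample(string.ascii_lowercase, 3))  # injected variability
            thought, answer = monologue_step(task, monologue, seed)
            monologue.append(thought)
        return answer

    print(run("Come up with a name for a calm, minimalist note-taking app"))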


This sounds fascinating. How do you make an internal monologue? Do you have any reading or samples to look at? Sorry for the ignorance, I’m a dev but not in the AI space.


Maybe I'm missing something (I only did a quick read), but aren't you explicitly telling the model to re-explore low-density regions of the action space? Essentially turning up exploration (and turning down exploitation) with a weighting towards low-density regions?

As not an RL person (I'm in generative), have people not re-increased the exploration variable after the model has been initially trained? It seems natural to vary that explore/exploit trade-off.


Is there a possible Crafter benchmark that is too high for safety? For instance, a number beyond which it would be dangerous to release a well equipped agent into meatspace with the goal of maximizing paperclips?


Human level is about 50, and as long as they don't allow it to craft paperclips I think it's OK.


Dumb machines already kill people for mundane reasons.


This is absurd.


(Submitted title was "“Self-reflecting” AI agents explore like animals". We changed it in keeping with the HN guidelines - https://news.ycombinator.com/newsguidelines.html.)


Hi dang. Can you whitelist some URLs for commenting from a new account? Like wikipedia.org and libquotes.com

Looks like you shadowbanned this account. Maybe for posting a URL in the first comment.


You just got hit by a spam filter. I've turned that off now. But please send questions like this to hn@ycombinator.com, as the site guidelines ask (https://news.ycombinator.com/newsguidelines.html). They're off topic in the threads.


> Instead, some guys told AI agents to explore in the way that the guys think that animals explore.

Something, something, The Bitter Lesson.


I hate that titles can differ from the article here. It’s patronizing and commonly inaccurate or misleading.


I don't like the misleading titles either, but honestly, if you want the real titles you probably want some kind of arxiv feed. The paper title is "Curious Replay for Model-based Adaptation", which is too dry for social media, or whatever Hacker News is, or for whoever the audience of the Stanford university press office is. You have to expect juicier (and therefore somewhat misleading or sensationalized) titles if you don't get your news straight from arxiv.


“Patronizing” seems to be a matter of taste. I've never considered the retitling patronizing; if anything, it's often the original headlines of the retitled articles that come across that way.

As for titles simply differing, much of the time it's because a character limit was hit. I've seen many posts with a comment from the poster calling out their edit to the title, and the character limit is usually cited.

It would be especially difficult to keep the character limit (I think there are legitimate design reasons for it) while also requiring that the title match the submission as closely as possible. Who decides which words are omitted without it potentially being any of patronizing, inaccurate, or misleading?


The result is mildly interesting - improvement on an isolated task but none on the full benchmark - but what would be much more compelling is curiosity-driven replay in an LLM context combined with chain- or tree-of-thought techniques. This would be the machine analogue of noticing your confusion, a sort of "what do I need to know?" or "what am I overlooking?". Anecdotally, language models perform better when you prompt them to ask their own questions in the process of answering yours, so I would expect curiosity to have a meaningful impact.
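
For anyone who wants to try it, the "ask your own questions first" pattern can be as simple as a prompt wrapper like the sketch below. This is just a prompting pattern I'm assuming, not anything from the paper:

    def curious_prompt(question: str) -> str:
        """Wrap a user question in an 'ask your own questions first' scaffold."""
        return (
            f"Question: {question}\n\n"
            "Before answering, do two things:\n"
            "1. List the three things you are least sure about or might be overlooking here.\n"
            "2. For each one, say what you would want to check or what you will assume.\n"
            "Then give your final answer, revising it if any of those checks change your mind."
        )

The wrapped prompt then goes to the model as usual, with the self-posed questions playing the role of the curiosity signal.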


I'm not an AI expert or even novice nor am I a neuroscientist, but I have been thinking about how I interact with the world.

My current thinking is that novelty and unexpected inputs drive our immediate understanding of the world around us. To have expectations you have to have a model. When that model breaks and is adjusted, you have a novel experience and the model can be updated. This feedback loop is critical.

Example: the other day I was grilling food, and my digital food thermometer was sitting on the metal prep area near the hot griddle. As I was walking away I reached for it, grabbed it, and expected to pick it up. However! I didn't know it had a magnet, and it gave me back an unexpected stimulus.

I immediately jerked my hand away and several thoughts happened near instantly. My thoughts went from I burned my hand to no, no pain, maybe a really bad burn, to no, no heat, no sizzling of flesh, to oops, wrong stimulus, something resisted, resisted how, it slid but wouldn't pick up easy, ah, a magnet.

The researchers here are right, I expect. You need curiosity and some goal, but you need to constantly tune the input for expectations and tweak the (mental) model of the world.

How many times do you, for a split second, totally misinterpret what you see or feel but near-instantly self-correct? Better AI will require putting forth its initial result and then validating that result with feedback. The more unexpected the feedback, the more novel the experience and the more learning can happen.




And here I am halfway through Michael Crichton's novel "Prey" ...


Huh. Looks interesting but I have a weird feeling it might be the same old sappy boring thriller. Opinions so far?


Starting to get interesting. Not sure whether fixing the code, MacGyvering or brute strength will win the day.


Perhaps one would drop the quotes around self-reflect if one implemented something more akin to a Markov blanket [1]: blankets within blankets, modelling ourselves modelling the world.

[1] 2018, "The Markov blankets of life: autonomy, active inference and the free energy principle", https://royalsocietypublishing.org/doi/10.1098/rsif.2017.079...


Exactly. We keep leaving 'motivation' out of these models, since they are only reacting to prompts. But put them in a loop with goals and see what happens.

And things like GPT are not 'embodied': since they don't live in the 'world', they can't associate language with physical reality. Put them in a simulated environment like a game, and it looks a lot more 'conscious'.


Makes sense. AI lacks rationality, and animals lack rationality. Of course, humans are the rational animal, and hence we know when we truly understand things or when we just repeat or spitball.


Nah, not really.

History has been repeating itself for thousands of years. We keep killing the prophets, and putting the absolute worst of us on pedestals. What's rational about that?

Dolphins mucking about in the water - that's rational.


By pointing to rational or moral failures, you already imply that we are supposed to act in a certain way. If there are people who are the worst, it raises the question of what a good human is, and who or what we should actually follow. Clearly, we don't think that raw power is what makes someone good, because otherwise these worst people on the pedestals would be good by default, through all the power they have over their followers.

If it is irrational that history repeats itself, do you think that it would be rational if history progressed towards some goal, and if yes, what is that goal?


> By pointing to rational or moral failures, you already imply that we are supposed to act in a certain way.

Don't keep such an open mind that your brain falls out.

> If it is irrational that history repeats itself, do you think that it would be rational if history progressed towards some goal

It has, often. For example, 50 years ago a bunch of fossil fuel executives decided it would be best to let the planet burn so they could keep making money.

History progressed toward their goal, and now we're starting to really suffer. But they have their megayachts.

Do you think that's rational?


Rationality, and by extension rationalism, refuses to investigate the question of whether the axiomatic assumptions upon which the rational conclusions are based are valid.

So of course superyachts are perfectly rational. But of course they are far from reasonable.


This is a major question in philosophy, not just some random aside in an HN comment thread.

Most famously, Hegel believed that human history trends & tends towards the perfection of human nature and society. Many other philosophers and philosophies fundamentally disagree with Hegel, and assert that history has no teleological purpose built into it.

Perhaps acknowledge the depth and history of this question before throwing out some quick asides about it?


I'm not going to debate whether or not fossil fuel executives choosing to lie to us and burn the planet is moral or rational, "because teleology". Ugh.


Not the point at all.

You seem to feel fairly certain that humanity is "on a path", and that there's going with that flow and there's going against it. You're welcome to believe that, but it's far from a settled POV.

As for lying executives, they are immoral and only rational when viewed through the lens of their own selfishness.


> You seem to feel fairly certain that humanity is "on a path"

Look at a climate graph. That's the path I'm talking about. Look into the Anthropocene Extinction - that's us. We're walking that path; or more accurately, sleepwalking on it. It's not subjective whatsoever; there's mountains of hard data on this.

Extrapolating that into teleology / philosophy / my worldview was entirely your invention and interpretation.


Individuals are a completely different organism than groups, and groups than societies, and societies than...

You hopefully get the picture. We may get better at remembering history if united via a common cause under a common leadership. Otherwise it's just an organism looking for food and trying to survive.


Some of our recent/relevant work: https://selfrefine.info/
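
The core loop is roughly this, heavily stripped down; the actual method uses task-specific few-shot prompts for the generate, feedback, and refine steps, and call_llm here is just a placeholder for your model client:

    def call_llm(prompt: str) -> str:
        """Stand-in for whatever LLM client you use."""
        return "(model output)"

    def self_refine(task: str, max_rounds: int = 3) -> str:
        """Generate an answer, ask the same model for feedback on it, and refine,
        until the feedback says to stop or we run out of rounds."""
        output = call_llm(f"Task: {task}\nGive your best answer.")
        for _ in range(max_rounds):
            feedback = call_llm(
                f"Task: {task}\nCandidate answer:\n{output}\n"
                "Give concrete, actionable feedback. If no changes are needed, say STOP."
            )
            if "STOP" in feedback:
                break
            output = call_llm(
                f"Task: {task}\nPrevious answer:\n{output}\nFeedback:\n{feedback}\n"
                "Rewrite the answer, applying the feedback."
            )
        return output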


Humans need to do self-reflection too. It usually takes the form of daily journaling.


How does this differ from existing approaches that just follow the entropy?


It's kind of interesting how increasingly frequently "stanford.edu" is finding its way into HN submissions. Did the increase start with the GPT-4 enthusiasm?

Or is that a coincidence?



