Does anyone have any insight into why reinforcement learning is (maybe) required/historically favoured? There was an interesting paper recently suggesting that you can use a preference learning objective directly and get a similar/better result without the RL machinery - but I lack the right intuition to know whether RLHF offers some additional magic! Here’s the “Direct Preference Optimization” paper: https://arxiv.org/abs/2305.18290
> Does anyone have any insight into why reinforcement learning is (maybe) required/historically favoured?
Conceptually, it has attractive similarities to the way people learn in real life (rewarded for success, punished for failure). We know that resemblance to nature doesn’t guarantee better results than the alternatives (for example, the modern airplane does not “flap” its wings the way a bird does), but natural solutions will keep being looked to as a starting point and as a tool to try on new problems.
Additionally, RL gives you a good start on problems where it’s not clear how to begin. When the only obvious handle is to take actions and judge them against some metric, reinforcement learning often provides a good mental and code framework for attacking the problem (a minimal sketch of that loop is below).
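To illustrate the “mental and code framework” point, here’s a minimal sketch of that act → observe reward → update loop on a made-up 3-armed bandit. The reward probabilities, learning rate, and REINFORCE-style update are all just illustrative choices on my part, not anything from the article.

```python
# Toy "take actions, see how they do against a metric, update" loop.
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.1, 0.5, 0.9])  # hidden per-action payoff probabilities (made up)
logits = np.zeros(3)                      # policy parameters
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)                        # act
    reward = float(rng.random() < true_rewards[action])    # observe a scalar judgment
    # REINFORCE-style update: push probability toward actions that paid off
    grad = -probs
    grad[action] += 1.0
    logits += lr * reward * grad

print(softmax(logits))  # most of the probability mass should end up on the best arm
```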
>There was a paper recently suggesting that you can use a preference learning objective directly
Doing a very quick skim, it looks like that paper is arguing that, rather than handing out rewards or punishments based on preferences, you can just train the model directly on a classifier-style objective over the kinds of responses humans prefer. It seems interesting, though I wonder to what extent you still have to occasionally do that reinforcement learning to generate relevant data for evaluating the classifier.
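For concreteness, here’s my rough reading of the DPO objective from the linked paper (arXiv:2305.18290). The function name and the assumption that the log-probabilities arrive as per-sequence sums are my own simplifications; in practice they come from summing token log-probs of each response under the policy and a frozen reference model.

```python
# Sketch of the DPO loss: fit the policy directly on preference pairs,
# no separate reward model and no RL rollout loop.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """All inputs: (batch,) tensors of sequence log-probabilities."""
    chosen_ratio = policy_logp_chosen - ref_logp_chosen        # log pi/pi_ref for the preferred response
    rejected_ratio = policy_logp_rejected - ref_logp_rejected  # same for the dispreferred response
    # Classification-style objective: maximize the margin between the two,
    # which implicitly plays the role of the reward model.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```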
I was familiar with that phrase and its shorthand ("GLHF"), but the latter half of the sentence ("for interacting with GPT models") muddied the punchline enough that the joke just didn't land. The context here is using RL to "interact with GPT" (relevant to this article), whereas a more appropriate context would have been regular ole RL with agents in a simulated environment, like - I don't know, a video game?
RLHF as used by OpenAI in InstructGPT (predecessor to ChatGPT): https://arxiv.org/abs/2203.02155 (academic paper, so much denser than the above two resources)
This is essentially the premise behind Generative Adversarial Networks, and if you've seen the results, they're astounding. GANs are much better at specialized tasks than their generalized GPT counterparts.
GANs pair a generative model with a classification model (both unsupervised) whose loss functions have been designed to be antithetical: one performing well means the other is performing poorly. Keeping with the example posed by the given link, this results in a kind of hyper-optimization where the generative model gradually homes in on the perfect way to render a face, while the classification model keeps pace with it and feeds back "I don't see a face" until something resembling a face emerges. With this approach, you can start with complete noise and end up at a photorealistic face. (A toy version of that loop is sketched below.)
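To make the opposing objectives concrete, here's a toy version of that loop on 1-D data rather than faces. The network sizes, data distribution, and hyperparameters are arbitrary choices for illustration.

```python
# Minimal GAN sketch: two models with antithetical losses.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> "is this real?" logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # "real" data: samples from N(2, 0.5)
    fake = G(torch.randn(64, 8))                 # generated samples from noise

    # Discriminator step: label real data 1, generated data 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: the antithetical objective -- fool D into saying "real"
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```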
I'm not sure that's a valid statement on either count. There is plenty of work being done to bolster GANs with diffusion, in an attempt to take GANs places they couldn't go before. Here's one such example: https://arxiv.org/abs/2206.02262
You might've been more correct to say that diffusion surpassed prior generative models, but the adversarial element isn't really comparable to diffusion at all. The adversarial element is more accurately seen as a trade-off relative to standard RLHF/human-in-the-loop models.
I will bet money that GANs bolstered with diffusion will far outperform a standalone diffusion model.
It's not the first paper on the topic IIRC, but OpenAI's InstructGPT paper [0] is decent and references enough other material to get started.
The key idea is that they're able to start with large amounts of relatively garbage unsupervised data (the internet), train a model on it, and then use that model to cheaply generate decent amounts of better data (ranking generated content rather than spending the man-hours to actually write good content). The other details aren't too important.
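To be concrete about how rankings become training signal: the usual move (e.g. in the InstructGPT paper) is to fit a reward model so that the response humans ranked higher scores above the one ranked lower. The sketch below assumes a placeholder scorer rather than a real LM-with-scalar-head, and the function name is mine.

```python
# Pairwise ranking loss for a reward model trained on human preference comparisons.
import torch
import torch.nn.functional as F

def reward_ranking_loss(reward_model, preferred_inputs, rejected_inputs):
    """Both inputs: batches of encoded (prompt, response) pairs."""
    r_preferred = reward_model(preferred_inputs)   # scalar reward per preferred pair
    r_rejected = reward_model(rejected_inputs)     # scalar reward per rejected pair
    # Maximize the log-probability that the preferred response out-scores the rejected one
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Hypothetical usage with a stand-in scorer over 16-dim encodings:
scorer = torch.nn.Linear(16, 1)
loss = reward_ranking_loss(scorer, torch.randn(8, 16), torch.randn(8, 16))
```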
The problem with this is that it leads the algorithm to target outputs that sound good to humans. That's why it's bad and won't help us; it should also incorporate "sorry, I don't know that", but for that it needs to actually be smart.
Honesty/truthfulness is indeed a difficult problem with any kind of fine-tuning. There is no way to incentivize the model to say what it believes to be true rather than what human raters would regard as true. Future models could become actively deceptive.