It's just combining and synthesizing other works; it's not "deciding" anything, it's crafting responses that best match what it has already seen. You can choose what to feed it as source material, but you can't really say "be 3% more liberal" or "decide what is acceptable politically and what isn't".
All the decisions are already made, ChatGPT is just a reflection of its inputs.
Yes you can. That's what RLHF does - it aligns the model to human preferences, and it does a pretty good job. The catch is that "human preferences" are decided by a group of labellers picked by OpenAI to suit their views.
RLHF is done as part of training the model, not at inference time.
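To make that distinction concrete, here is a deliberately tiny sketch of the RLHF idea: human rankings define a reward signal that nudges the model *during training*; nothing is re-ranked when you later query it. All names, scores, and the update rule here are illustrative toys, not OpenAI's actual pipeline.

```python
# Toy RLHF sketch: a human ranking becomes a reward signal, and a
# one-step "fine-tune" shifts the model toward the preferred output.
# Everything here is a made-up illustration, not a real training loop.

# A prompt's sampled completions, and a human's ranking (best first).
completions = ["helpful answer", "rude answer", "off-topic answer"]
human_ranking = ["helpful answer", "off-topic answer", "rude answer"]

# A stand-in "reward model": distill the ranking into scores.
reward = {c: len(human_ranking) - i for i, c in enumerate(human_ranking)}

# The base model starts indifferent; training nudges it toward
# high-reward completions. After this, inference just samples from
# the updated preferences - no human ranking happens at query time.
LEARNING_RATE = 0.1
preference = {c: 1.0 + LEARNING_RATE * reward[c] for c in completions}

best = max(preference, key=preference.get)
print(best)
```

After the update, the completion the human ranked highest is also the one the toy "model" now prefers, which is the whole point of doing the alignment at training time.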
My lay understanding of how ChatGPT was developed is:
1. OpenAI initialized an array made up of a couple hundred billion random numbers (parameters).
2. They then took a few terabytes of the internet, turned it into "tokens" (where a "token" is similar to, but not the same thing as, a word).
3. They then trained the model to predict the next token, given the previous couple thousand tokens, by doing a bunch of linear algebra. This resulted in a model that was really good at taking some tokens, and predicting what the most likely next token is in data shaped like the parts of the internet OpenAI fed it.
4. OpenAI then "fine-tuned" the model through reinforcement learning from human feedback (RLHF)[1], which basically involved taking a bunch of prompts, having the model produce a bunch of possible completions for those prompts, having an actual human rank those completions from best to worst, and then updating the model to produce the best token according to a combination of predicted token frequency in context and predicted ranking by a human.
5. The "ChatGPT" product you see today is the result of all of that, and how it works is by producing repeatedly the "best" token by the above metric. Giving additional human feedback would require going back to step 4 for more fine tuning.
Note -- this is my understanding as an outsider -- I do not work for OpenAI.
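The decoding loop in steps 3 and 5 can be sketched with a toy count-based bigram model - a drastic simplification of the transformer ChatGPT actually uses (one token of context instead of thousands, counting instead of gradient descent, no RLHF step), but the "repeatedly emit the most likely next token" idea is the same. The corpus and function names here are made up for illustration.

```python
# Toy sketch of steps 2-3 and 5: count which token follows which
# ("training"), then greedily emit the most likely next token.
from collections import Counter, defaultdict

# Step 2 stand-in: a tiny tokenized "internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Step 3 stand-in: counting replaces the linear algebra that fits
# the billions of parameters in the real model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, n=5):
    """Step 5 stand-in: repeatedly append the single 'best' next token."""
    tokens = prompt.split()
    for _ in range(n):
        ctx = tokens[-1]  # real models condition on thousands of tokens
        if ctx not in follows:
            break
        tokens.append(follows[ctx].most_common(1)[0][0])
    return " ".join(tokens)

print(complete("the dog", 3))
```

Running `complete("the dog", 3)` extends the prompt with the three most frequent followers in the toy corpus, producing "the dog sat on the" - the same greedy next-token loop, just with counts instead of a neural network.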