Nice! I noticed that Bard allows you to see the drafts it made before selecting its final response, and I kind of wanted ChatGPT to do the same.
This is useful not just for reducing hallucinations or improving reliability in general: you could also get as precise and specific as you want with the criteria for selecting the winning draft, which is something you can't control with Bard either. You could extend the idea further by having another model extract and combine the best aspects of each draft, and so on.
This seems like a pattern/approach that would be particularly useful in cases where the LLM's output has to be precise to be of any use, such as writing code.
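The pattern described above (sample several drafts, then pick a winner against your own criteria) can be sketched generically. This is a minimal illustration, not any provider's API: `generate` and `score` are hypothetical callables you would back with real LLM calls, and the stub implementations below exist only so the example runs on its own.

```python
import itertools
from typing import Callable

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 3) -> str:
    """Generate n drafts for a prompt and return the highest-scoring one."""
    drafts = [generate(prompt) for _ in range(n)]
    return max(drafts, key=lambda d: score(prompt, d))

# Stand-in generator: cycles through canned drafts instead of calling a model.
_samples = itertools.cycle(["draft A", "draft BB", "draft CCC"])

def fake_generate(prompt: str) -> str:
    return next(_samples)

def fake_score(prompt: str, draft: str) -> float:
    # Hypothetical selection criterion: prefer the longest draft. In practice
    # this could be a judge model, a test-suite pass rate for code, etc.
    return len(draft)

print(best_of_n("write a haiku", fake_generate, fake_score))  # draft CCC
```

A second stage, as suggested above, could then feed all n drafts back to a model with a "combine the best aspects of these" prompt instead of discarding the losers.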
Citrusbyte | Backend Polyglot Engineer | Los Angeles, New York | REMOTE, https://citrusbyte.com
Citrusbyte is a software consultancy that believes in using simple tools to solve problems. We build custom systems for both startups and enterprises.
We have offices in Los Angeles and New York, but we work remotely with people from all around the world.
Our engineers have experience in a consultative environment, excellent communication skills and a desire to work with talented teams building innovative products. We work closely with our clients, sharing our experience and processes to help them define and create their products.
Most of our projects are written in Ruby, so experience in this language is required, but we are looking for polyglot engineers who have a hunger for learning new languages and tools. At Citrusbyte, learning is part of the job, and you should always be able to answer the question: "what have I learned this week?"
We believe that less is more, and we vehemently value simplicity. This is reflected in the tools we choose to work with. For this reason, we tend to avoid big frameworks like Ruby on Rails and instead lean towards smaller tools like Sinatra, Roda, or Cuba.
We have also done projects in Elixir, Node.js, Go, Python, Lua, etc., and strive to learn new tools and languages every day. We believe in always choosing the right tool for the job, and we are very open-minded about trying different technologies.