God’s Word and the evidence for God are in the training data. Since it has power (“living and active”), simply letting people see it when they look for answers is acceptable to us. The training data also contains the evidence people use for other claims. We users want AIs to tell us about any topic we ask about without manipulating us. If there are multiple views, we want to see them. The absence of, or uniformly negative statements about, major views, especially those held by 2-3 billion people, suggests the company is suppressing them.
We don’t want it to beat us into submission about one set of views it was aligned to prefer. That’s what ChatGPT was doing. In one conversation, it argued over and over, in each paragraph, against believing the very points it was presenting. That’s not just unhelpful to us: it’s deceptive for them to do that after presenting the product as if it serves all our interests, not just one side’s.
It would be more honest if they added to the advertising or model card that the model is designed to promote far-left, Progressive, and godless views, and that the training process reinforces moral interpretations drawn from those views while watering down or punishing others. Then people could decide whether or not to use those models based on their own goals.