If you are using GPT-4 to work around the fact that public technical documentation on your topic of interest is sparse, you are likely to be disappointed: GPT-4's training set probably has the same gaps, so you are, in effect, asking it to fill in missing data, which invites hallucinations.
It’ll be much better on subjects where there is too much information on the public internet for a person to efficiently manage and sift through.
I think you're right. My hope was that it could reason through the problem using knowledge from related sources, such as C, together with an understanding, beneath the syntax, of what is actually happening.
Depending on what you're doing, you might find few-shot techniques useful.
I used GPT-3 to maintain a code library in four languages. I'd write Dart (basically JavaScript, so GPT knows it well), then give it the C++ equivalent of a function I had previously translated, and from there it could handle any C++.
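The few-shot workflow described above amounts to building a prompt that pairs one or more previously verified translations with the new function to translate. A minimal sketch in Python (the function name, prompt layout, and example pair are all hypothetical, not from the original thread):

```python
def build_fewshot_prompt(examples, new_source, src_lang="Dart", dst_lang="C++"):
    """Assemble a few-shot translation prompt: each example pairs a
    source-language function with its known-good translation, followed
    by the new function left open for the model to complete."""
    parts = [f"Translate {src_lang} to {dst_lang}.\n"]
    for src, dst in examples:
        parts.append(f"{src_lang}:\n{src}\n{dst_lang}:\n{dst}\n")
    # Leave the target-language slot empty so the model fills it in.
    parts.append(f"{src_lang}:\n{new_source}\n{dst_lang}:\n")
    return "\n".join(parts)

# Hypothetical example pair: a tiny Dart function and its C++ equivalent.
examples = [
    ("int square(int x) => x * x;",
     "int square(int x) { return x * x; }"),
]
prompt = build_fewshot_prompt(examples, "int cube(int x) => x * x * x;")
print(prompt)
```

The resulting string would be sent as the model's input; the worked example anchors the expected style and idioms, which is what makes this approach hold up even when the target language is underrepresented in training data.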