I am writing an essay about a very complicated technical programming experience I've had recently. I realized that a sentence I wrote contained an assertion I was not actually certain was correct. So, I asked an AI (Bard).
I was mainly correct, but the nuances made me want to learn more, so I spent a happy hour learning a ton about a complicated topic. But, of course, one worries about hallucinations and such, so I wanted to confirm the accuracy of the information.
Often, if the information is specific and I want to confirm it, I google some specifics. In this case, that was going to be too much work, so I grabbed the transcript of the conversation and asked GPT-4 to evaluate it.
It came up with a half-dozen disagreements. No outright falsehoods, but places where Bard had been misleading. I took those back to Bard and asked it to comment on the points. It agreed and expanded on them.
Then I asked Bard whether it could find any other errors. It could not.
All the AIs are pretty good at finding their own errors if you ask them. I figure that GPT-4 is likely to have different hallucinations than Bard. Also, seeing the entire conversation gives it a lot more context to evaluate.
Now I feel confident that I have gotten good information.