Hallucination Control: Quality Assurance in AI Chat

In today's episode of the 'TQ is obsessed with AI' show: I have been worrying about hallucinations and accuracy in my peregrinations through the intellectual landscape with AI as my guide.

To that end, I have spent some time engineering a quality assurance prompt. Then I went back to some of my favorite conversations and applied it, with terrific results that confirmed that careful conversation with AI produces excellent experiences. It might be my insightful prompting, but I have encountered very little hallucination.

Here's what to do: after you have elicited all the information your AI wants to give you, issue the following prompt. It will not only help you solidify your understanding and retention of the new material, it will tell you which parts are reliable and, where your interest continues, point you to non-AI avenues for further exploration.

PROMPT:

In a step-by-step process, first write a bullet-point summary of the top-level point or points made in this text. Then make a list of every fact and conclusion presented in it. For each, add an assertion about whether it is actual fact or an error, AND whether the fact or conclusion properly supports that point. Where possible, include a citation and/or URL. Then discuss any possible errors or misleading conclusions in the text. Finally, conclude whether the text is accurate and should be relied upon. End with a judgement: Accurate or Misleading.
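
If you would rather run the check outside the chat window, the same idea works over an API. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name, the `quality_check` helper, and the sample text are my own illustrative placeholders, and any chat provider's API would serve equally well.

```python
# Minimal sketch: run the QA prompt against a saved piece of AI output.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; "gpt-4o" is a placeholder model name.
from openai import OpenAI

QA_PROMPT = (
    "In a step-by-step process, first write a bullet-point summary of the "
    "top-level point or points made in this text. Then make a list of every "
    "fact and conclusion presented in it. For each, add an assertion about "
    "whether it is actual fact or an error, AND whether the fact or "
    "conclusion properly supports that point. Where possible, include a "
    "citation and/or URL. Then discuss any possible errors or misleading "
    "conclusions in the text. Finally, conclude whether the text is accurate "
    "and should be relied upon. End with a judgement: Accurate or Misleading."
)

def quality_check(source_text: str) -> str:
    """Send saved AI output back through the QA prompt and return the audit."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute your preferred model
        messages=[
            {"role": "user", "content": f"{QA_PROMPT}\n\nTEXT:\n{source_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical example: paste the conversation you want audited here.
    print(quality_check("The Great Wall of China is visible from the Moon."))
```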


You can see an example of the result at the end of this conversation (that means scroll to the bottom!).