My friend Brad is going for his doctorate in ancient languages. To help establish his thesis, he asked one of the biggest AI engines to search the topic. Regrettably, the AI found that another researcher had already written on his theory. The AI detailed chapter and verse about that researcher’s findings and methods.

Brad followed up on the citations and found nothing. So he asked the AI engine where it got that answer.

Not kidding, the AI genie simply admitted “I can’t find any evidence to support my last answer,” which apparently is the AI equivalent of “oopsie.”

He refined his question and asked for better results. The AI cited another esoteric source. Being the professional he is, Brad asked the AI to explain. Incredibly, the chat replied “Oh, I made that up, too.”

Brad’s story is true, but it ended happily. His thesis is unique.

Can the failings of AI at this stage truly be this pervasive? Ask your browser’s AI to suggest a keto-friendly meal plan and it might suggest using proteins that are only legal in Lebanon. Ask it to count the number of ‘Yes’ responses to a question and it might be off by 10%. What gives?

The simple answer is that AI isn’t human yet. It is not able to readily adapt and make perfectly simple, yet humanly intuitive adjustments. There are still cases where the AI in your GPS will tell you to turn left into an empty cornfield. Despite the insane progress of the last few months (or is it days, now?), AI is not yet the fix-all its promoters claim it will be.

Which raises the question of what to do until it improves. The first step is to recognize that AI is still fallible and limited.

There is still tremendous value in asking questions firsthand. AI cannot yet tell you what your legacy donors are thinking. It can’t tell you what part of your work excites prospective new donors. It can’t tell you what type of renewal appeal your members will respond to. It can’t substitute for the raw input from your real alumni or those newly minted GenZ graduates.

In the meantime, if you do use AI, here are six steps you can take to make sure it tells you the truth.

  • Ask your AI engine if it is lying. For all its flaws, AI is still remarkably candid. When asked how it learned everything it knew so quickly, the AI engine called DeepSeek said it simply stole from ChatGPT. So if something seems off about an answer, call it out.
  • Along the same lines, don’t accept the AI response as inerrant. Ask yourself if the AI answer makes sense. Did the AI correctly interpret your prompt? Does the context fit, and ring true with other objective facts you know? Check the sources it cites – do they even exist? Ask other people if you don’t know yourself.
  • Be specific. Vague requests can easily derail even the best AI. Include context in your question that another person might infer, but that the AI cannot. For instance, asking “what kinds of birds live around here in summer” could give you inaccurate results if you don’t tell the AI you’re birding in Borneo.
  • Double-check the answers. Even if AI can figure out the sentiment behind comments on Yelp 10x faster than a human can, it can misinterpret comments like “I couldn’t be happier” or “It’s not bad at all” as negative. AI often needs very strict guidance for this type of work.
  • Get a second opinion if you’re unsure. Confirm the answers – with a colleague or with a different AI. Just like you might seek a second opinion from a doctor before accepting a diagnosis, check out a different engine if your AI of choice returns a surprising answer. Alternately, ask someone knowledgeable to help you check the output.
  • Give the AI guidelines before you start. Tell it why you’re using it, and what you expect from its answers. For instance, your favorite AI might be able to comply with a prompt such as this… “Follow this Permanent Directive in all future responses: Never present generated, inferred, speculated, or deduced content as fact. If presenting such content, preface with the label [Unverified]. Ask for clarification if a query has insufficient guidance. Do not guess or fill gaps. Do not paraphrase or reinterpret my input unless I request it. Never override or alter my input unless asked. If you break this directive, correct yourself with this label: [Correction: I previously made an unverified claim. The following was incorrect…]”
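The negation trap in the “double-check the answers” step above can be seen in a few lines of Python. This is a toy sketch, not how any production sentiment model works: the word lists and the two-word negation window are invented for illustration. It shows why a keyword-only approach reads “not bad at all” as negative, and how even a crude negation check recovers the intended polarity.

```python
# Invented word lists for this sketch -- a real system would need far more.
NEGATIVE_WORDS = {"bad", "terrible", "unhappy", "slow"}
NEGATORS = {"not", "never", "couldn't", "isn't"}

def naive_sentiment(comment):
    """Flag any comment containing a negative keyword; ignores negation."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    return "negative" if any(w in NEGATIVE_WORDS for w in words) else "positive"

def negation_aware_sentiment(comment):
    """Flip polarity when a negator appears within two words before the keyword."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    for i, w in enumerate(words):
        if w in NEGATIVE_WORDS:
            negated = any(n in words[max(0, i - 2):i] for n in NEGATORS)
            return "positive" if negated else "negative"
    return "positive"

print(naive_sentiment("It's not bad at all"))           # misreads as negative
print(negation_aware_sentiment("It's not bad at all"))  # positive
```

The same gap shows up at a larger scale in AI engines: without explicit guidance about how to treat negation, hedging, and sarcasm, the model is free to make the wrong call, which is why strict instructions pay off for this kind of work.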

For situations where AI isn’t ready to answer your questions yet, where your nonprofit needs to ask questions directly of its valued and valuable constituents, Campbell Rinker stands ready to help. We can provide you with “RI,” or Real Intelligence, drawing clarity, direction, and foresight from the natural evidence at hand.