Kelvin Smith Library
To interact effectively with AI, it is important to provide prompts that clearly convey the results you are seeking. It is also important to evaluate the information produced by generative AI tools, because these tools are often not transparent about the sources on which they were trained. Just as you would evaluate any piece of information before incorporating it into your research, you should also evaluate the output of GenAI and the tools you are using. Remember the saying "Garbage in, garbage out" (GIGO), and be wary of any information presented as fact by a generative AI tool unless you are certain of the credibility of its training sources or can corroborate the information through other information sources.
The quality of the responses you receive from a generative AI tool often depends on the prompts you use to elicit those responses. For this reason, the art of prompt engineering has developed as a way to guide a generative AI tool toward the clearest responses with the most helpful features, e.g., cited sources.
This University of Calgary guide offers information on prompt generation, the act of forming an effective request for an LLM.
MIT offers three key suggestions for creating prompts:
OpenAI Tips on Prompt Engineering.
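As an illustration of the principles in these guides, a structured prompt can be assembled from a few common components: a role for the model, a clear task, optional context, and a desired output format. The sketch below is purely illustrative; the function name and components are our own, not taken from the MIT or OpenAI materials above.

```python
def build_prompt(role, task, context=None, output_format=None):
    """Assemble a structured prompt from components commonly
    recommended in prompt-engineering guides: a role for the model,
    a clear task, optional context, and a desired output format."""
    parts = [f"You are {role}.", f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format your response as: {output_format}")
    return "\n".join(parts)

# A vague request leaves the model to guess what you want:
vague = "Tell me about Exit West."

# A structured request spells out role, task, and format:
structured = build_prompt(
    role="a literature tutor who cites scholarly sources",
    task="summarize the major themes of Mohsin Hamid's 'Exit West'",
    output_format="a bulleted list, each theme with one supporting scene",
)
print(structured)
```

The structured version gives the tool explicit cues (audience, scope, and format) that the vague one leaves implicit, which is the core idea behind most prompt-engineering advice.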
Response: Certainly! 📚 Here are the major themes in Mohsin Hamid’s novel “Exit West”:
For further interpretation and analysis, you can explore the following secondary sources:
Note: The sources the chatbot provides for further analysis are not, in fact, scholarly at all. This is especially true of the blog site “booksthatslay.com”. Notice the phrase “AI-generated content may be incorrect”. Take this into consideration when drawing material from a chatbot session.
It's important to think about the differences between generative AI tools and search engines, such as Google. Because the sources on which an LLM was trained are not always openly shared, and because generative AI tools can frequently produce hallucinations, it's important to keep a critical mindset when attempting to glean factual information from such a tool. We recommend fact-checking apparent claims made by a generative AI tool using the strategy of lateral reading — that is, having a separate browser tab open where you can check claims against Wikipedia, popular fact-checking sites, open web sources, and, of course, library resources.
In addition to evaluating AI's output, you should also critically evaluate the AI tools you choose to use. There are several evaluation frameworks to help guide your evaluation, including the SIFT Method (Stop, Investigate the source, Find better coverage, Trace claims to the original context).
One to start with is the ROBOT Test, a tool you can use when reading about AI applications to help you consider the legitimacy and usefulness of the technology. ROBOT stands for Reliability, Objective, Bias, Ownership, and Type.
It's important to note that no single evaluation framework or checklist provides a foolproof test, but regularly asking yourself these types of questions when encountering new AI applications (or new information in any format) will help you build your research and AI literacy skills.
The ROBOT Test is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.