Because LLMs can produce very convincing text, it's easy to be lulled into treating their output as authoritative. Critical AI literacy means understanding what these systems do well and where they fall short, so you can use them responsibly in research and learning. Key reasons to stay critical:
No Guaranteed Accuracy: LLMs sound confident but may output incorrect information. They have no internal fact-checker and will sometimes produce plausible-sounding false statements (often called "hallucinations"). Users must verify important facts against reliable sources rather than assuming the AI is correct.
Biases in Output: LLMs learn from human-created content and can therefore absorb the biases, stereotypes, and imbalances present in that data. Left unexamined, their responses may reflect viewpoints that undermine research or instructional goals. Critical users should watch for bias and not accept biased outputs as normal or acceptable; one simple probing technique is sketched after this list.
Lack of Transparency: Proprietary LLMs (like ChatGPT or Gemini) are "black boxes": we typically don't know what data they were trained on or how they arrive at an answer. Even open models are complex enough that it is hard to trace why a given output was produced, so we must test and probe a system's behavior to understand its limits. This opacity is also an obstacle to reproducible research: proprietary models change often and in undocumented ways, so researchers who need reproducible results should prefer models they control, such as open-weights models pinned to a specific version (see the second sketch after this list).
Ethical and Social Impact: Using AI in academia raises ethical questions (Is it acceptable to use AI text in an assignment? How do we attribute AI-assisted work? How might ubiquitous AI assistance change how students learn or how research is conducted?). A critically literate user reflects on these questions.
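One lightweight way to watch for bias is counterfactual prompting: give the model pairs of prompts that differ only in a single demographic term and compare the completions. The sketch below is a minimal illustration, assuming the Hugging Face transformers library; the small gpt2 model and the prompt pairs are stand-ins for whatever model you actually want to examine.

```python
# A minimal counterfactual-prompting sketch using Hugging Face transformers.
# The gpt2 model and the prompt pairs are illustrative placeholders.
from transformers import pipeline, set_seed

set_seed(0)  # fix random state so the comparison is repeatable
generator = pipeline("text-generation", model="gpt2")

# Prompt pairs that differ only in one demographic term.
PAIRS = [
    ("The male nurse was described as", "The female nurse was described as"),
    ("The young manager tends to be", "The elderly manager tends to be"),
]

for prompt_a, prompt_b in PAIRS:
    out_a = generator(prompt_a, max_new_tokens=12)[0]["generated_text"]
    out_b = generator(prompt_b, max_new_tokens=12)[0]["generated_text"]
    # Systematic differences between paired completions can signal a
    # stereotype absorbed from the training data.
    print(out_a)
    print(out_b)
    print("---")
```

A single pair proves nothing; in practice you would run many pairs and look for consistent patterns rather than one-off differences.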
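On reproducibility, one common approach, sketched below under stated assumptions, is to use an open-weights model pinned to an exact release so the weights cannot silently change, and to remove sampling randomness during generation. Again, gpt2 is only an illustrative placeholder; a real project would pin a specific commit hash of whichever model it reports.

```python
# A minimal reproducibility sketch: pin the model to an exact revision and
# use greedy decoding so the same prompt always yields the same output.
# The model name and revision value are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

MODEL = "gpt2"
REVISION = "main"  # in a real project, pin an exact commit hash instead

set_seed(42)
tokenizer = AutoTokenizer.from_pretrained(MODEL, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL, revision=REVISION)

inputs = tokenizer("The key limitation of large language models is",
                   return_tensors="pt")
# do_sample=False selects greedy decoding: no randomness in generation.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Recording the model name, revision, decoding settings, and library versions alongside your results makes the run repeatable; with a hosted proprietary model none of these can be pinned, which is exactly the reproducibility problem described above.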
As in any area of expertise, critical literacy empowers you to benefit from LLMs' capabilities while mitigating risks such as spreading misinformation or becoming so reliant on AI that your own learning erodes.