Generative AI’s rapid rise has raised significant ethical, environmental, and social concerns that are especially relevant in an academic context. Key issues, from copyright and labor practices to impacts on education, require us to foster awareness and critical thinking about the technology’s broader context.
LLMs are trained on large swaths of internet text, which include copyrighted books, articles, and other media, generally gathered without permission from rights holders. The legal situation is currently in flux: there have been multiple lawsuits by authors and content creators against AI companies, alleging that models like GPT or Meta’s LLaMA were essentially built by scraping their copyrighted works without compensation.
Initial court rulings in the US have been mixed. In some cases, judges have found that using copyrighted text to train an AI can fall under fair use if the training is transformative and doesn’t substitute for the original works. For example, a 2025 ruling held that Anthropic’s use of copyrighted books to train an LLM was fair use because the AI produced transformative outputs, not simply copies; however, the same ruling noted that retaining full copies of books in a database was copyright infringement. Another judge, in a case against Meta, voiced sympathy for authors’ concerns, noting that “it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions… while enabling… competing works that could significantly harm the market.” If AI-generated content ends up displacing human-authored content, the ethical and legal justification for using those authors’ works becomes shaky.
Regardless of legal decisions, from an ethical standpoint, using someone’s writing to build a profitable AI system without credit or compensation is problematic. Academic researchers and librarians should be aware that the datasets behind LLMs likely contain copyrighted materials (possibly including papers, books, and other scholarly works), and that this use has occurred without traditional academic permissions. Until clearer guidelines or licensing frameworks emerge, this remains a grey area. We should also consider the long-term effects: if AI systems flood the market with cheap content, that could reduce demand for original works by human experts. Critical users can advocate for fair practices, for example by supporting only those AI tools that respect content creators’ rights, or at least those that are transparent about their sources.
Another often-overlooked aspect of generative AI is the human labor required to make these systems function safely. It’s tempting to think of AI as entirely automated, but in truth an army of human workers has been involved in cleaning the data and filtering out harmful content. For instance, OpenAI’s ChatGPT became famous for its ability to refuse toxic or inappropriate prompts. To achieve that, OpenAI hired outsourcing firms in places like Kenya, paying workers less than $2 per hour to label toxic content so the AI could learn to detect it. A Time magazine investigation revealed that these Kenyan workers had to read and tag extremely disturbing texts (descriptions of sexual abuse, violence, hate speech) to help build the AI’s safety system. The work took a mental toll on many of them, with one worker describing it as “torture”, and the outsourcing firm eventually canceled the project early because of the trauma it was causing its staff.
This is not an isolated case. Large-scale data labeling for AI is often outsourced to lower-income countries, where workers perform repetitive tasks like annotating images or verifying chatbot outputs, usually for very low wages. The AI industry’s drive for ever-larger training datasets and fine-tuning processes means AI often relies on hidden human labor in the Global South that can be damaging and exploitative. These humans effectively serve as the gig workers behind the AI magic, but their contributions are invisible and undervalued.
Why does this matter for critical literacy? As users and researchers, we should be aware that AI is not created in a vacuum by software engineers alone. It is also built on the labor of underpaid workers who endure the drudgery and even horror of handling the worst of the internet so that our interactions with these tools are safe and marketable. This raises questions of fairness and ethics: Are we comfortable with this human cost for our AI tools? In academia, where ethics in research and information practices are paramount, recognizing these hidden labor issues is important. It doesn’t mean we must reject AI entirely, but it does mean we should advocate for better practices (e.g., fair wages and mental health support for data labelers, transparency from AI companies about their labor practices, and support for open efforts that rely on volunteer contributions instead of exploited labor).
In summary, when you use an LLM, remember the behind-the-scenes workforce that helped scrub its data and align its behavior. The AI’s fluency and apparent good manners are partly thanks to these human workers. A critical approach includes questioning AI companies on how they treat these workers and pushing the industry toward more ethical labor standards.
Given that climate change poses a critical threat to the livability of our planet, a significant ethical dimension of LLMs is their environmental footprint. Training and running large AI models consume enormous amounts of electricity and computing power. For example, training a single big model like GPT-3 (175 billion parameters) was estimated to use 1,287 MWh of electricity and emit over 550 metric tons of CO₂, roughly equivalent to the carbon output of dozens of average Americans in a year. More recent models (GPT-4 and beyond) likely used even more energy. Moreover, deploying these models at scale adds ongoing energy usage.
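To make the “dozens of average Americans” comparison concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the training estimate quoted above and assumes a US per-capita figure of roughly 15 metric tons of CO₂ per year; both numbers are approximations used only for illustration.

```python
# Rough illustration: express the estimated GPT-3 training emissions
# in "person-years" of average US per-capita CO2 emissions.
# Both inputs are approximate figures, used here only for illustration.

TRAINING_EMISSIONS_TONNES = 550   # from the text: "over 550 metric tons" of CO2 (estimate)
US_PER_CAPITA_TONNES = 15         # assumed average annual US per-capita CO2 emissions (approximate)

person_years = TRAINING_EMISSIONS_TONNES / US_PER_CAPITA_TONNES
print(f"~{person_years:.0f} person-years of average US emissions")
# Output: ~37 person-years, i.e. the annual footprint of a few dozen people
```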
A 2025 MIT study noted that the computational power required by cutting-edge generative AI (like GPT-4) puts significant pressure on electricity grids and leads to increased carbon emissions. Data centers running AI workloads also guzzle water for cooling and drive demand for manufacturing more hardware (GPUs), which has its own supply chain environmental costs.
Why should academia care? First, many universities have climate action commitments; using AI heavily could conflict with those unless managed responsibly (e.g., running models on renewable energy). Second, awareness of AI’s energy appetite encourages us to use it judiciously. For instance, do we really need ten AI-generated rewrites of an essay, or to run a large model for trivial tasks? Being conscious of the carbon footprint might shape guidelines for AI use on campus. There’s also an equity issue: the power consumption and expensive hardware needed for frontier AI research mean that only big tech companies and a few well-funded labs can train the biggest models. This concentrates AI capabilities in a few hands. In response, some researchers are exploring more efficient training methods and smaller models to reduce energy use.
LLMs are not “virtual” in their environmental impact; they rely on very real energy resources. A critical user should recognize that an AI query isn’t free from climate considerations. By using such tools thoughtfully and supporting research into green AI (and perhaps boycotting companies that are not transparent about AI energy use), academia can play a role in mitigating this impact.
The introduction of AI text generators into education has been disruptive. On one hand, these tools could aid learning. On the other hand, educators worry that easy access to AI writing may degrade learning if students over-rely on AI instead of developing their own skills. There is concern that students may use AI to write essays, solve assignments, or do coding homework, bypassing the learning process and undermining academic integrity.
This has led some school districts and universities to issue bans or strict guidelines on AI-generated content in coursework. Plagiarism detection tools now include AI-output detectors, though these have proven unreliable, producing both false positives and false negatives. From a critical perspective, students and faculty should reflect: if AI can do an assignment, what is the assignment really testing? Some educators are redesigning assessments to be more authentic or in-class to ensure learning is demonstrated without AI assistance. Others integrate AI by asking students to critique AI outputs, or to use AI as a starting point with proper attribution and then build on it with their own analysis.
Another risk is the “crutch” effect: if students lean on AI for every small task (grammar, summarizing readings, generating code), they might not practice those skills themselves. For example, why learn to write a literature review if an AI can draft it? But without doing it, a student misses out on deep engagement with the material. If we offload too much thinking to AI, we risk a decline in critical thinking and writing abilities over time. A 2025 MIT study reported significantly lower brain connectivity in participants who wrote an essay with ChatGPT than in those who wrote without any GenAI assistance.
However, it’s not all doom – many educators see opportunity in generative AI. It can personalize learning (e.g., explain a concept at a simpler reading level for a student, or generate practice questions). It can assist non-native English speakers in composing texts. Some believe that, just as calculators didn’t ruin math education but changed it, AI writers might shift writing education toward higher-order concerns (argumentation, sources, ethics) rather than low-level drafting.
Generative AI is already affecting classrooms and study habits. Students, faculty, and librarians should engage in ongoing dialogue about when AI use is appropriate in coursework or research, and how to disclose it. Students should also be made acutely aware of the long-term learning consequences of taking shortcuts with AI.