ClimateUni Statement on AI
Climate Justice Universities Union Statement on Artificial Intelligence (AI), developed at the Union's scholar-activism meeting in Belfast (Queen's University, 10th-11th October 2025) and its autumn meeting in Carlow (South East Technological University, 13th November 2025), and approved by the Union's Coordinating Team in November 2025.
See our resource page on Resisting AI in Higher Ed.
This statement denounces the uncritical promotion of AI in higher education institutions. Our collective concern focuses on "displacement AI," defined by Guest (2025) and Guest and van Rooij (2025, p. 3) as a class of technology products with the following properties:
- Are sophisticated statistical models, so large that they impact humans and the environment through their energy, land, and water use (e.g., Alexander, 2025; Barratt et al., 2025; O'Donnell & Crownhart, 2025; Suárez et al., 2025; Valdivia, 2025);
- Depend on vast swathes of data, most of it stolen or otherwise unethically obtained or refined (e.g., Bansal, 2025; Bender et al., 2021; Birhane et al., 2024; Kalluri et al., 2023; Lantyer, 2025; Startari, 2025; Vercellone & Di Stasio, 2023);
- Can represent various statistical distributions and so can be discriminative, generative, or neither (e.g., Guest et al., 2025);
- Exist in a displacement relationship to humans, i.e. this type of AI product harms people, contributes to deskilling, and obfuscates cognitive labour (e.g., Bender, 2024; Birhane, 2025; Glickman & Sharot, 2025; Gourabathina et al., 2025; Guest et al., 2025; Kelly et al., 2025; Kidd & Birhane, 2023; Kosmyna et al., 2025; Yeung et al., 2025).
These characteristics describe, inter alia, commercial LLMs, including ChatGPT, Gemini, and Claude, and similar technologies embedded in products such as Grammarly and Microsoft software.
We view displacement AI as antithetical to the values that we, as educators, and our institutions claim to uphold, including academic and research integrity; critical and independent thinking; sustainability and a liveable future for all; trust, care and respect; fairness, equality and justice; independence and autonomy; and responsibility and accountability.
We view the conflict between displacement AI and these values as inherent to the technology products. This conflict, therefore, cannot be resolved through transparency of use; "responsible use" of these technologies as currently structured and produced is impossible. We agree that critical reflection on the harms of AI without appropriate action can be considered critical washing (Suárez et al., 2025).
We call attention to the contradiction between our institutions' commitment to these values and their widespread adoption of AI. We also call attention to the contradiction between commitments to the wellbeing of our university communities and the uncritical embrace of displacement AI. We acknowledge that many colleagues and students feel unable to refuse or resist displacement AI, and we recognise this as a deep injustice and moral injury.
Because it displaces learning, we view displacement AI as a direct threat to thinking. Use of displacement AI reduces learning to a product rather than a process. By shortcutting the learning process and outsourcing cognition, displacement AI deskills us of the cognitive capacities that make us both human and intelligent, and that are fundamental to the wellbeing of a diverse, inclusive, human and humane democracy (Birhane, 2025; Guest & van Rooij, 2025; Jollimore, 2025; Kelly et al., 2025).
We view displacement AI as a direct threat to knowledge. Under the guise of democratisation, the embrace of displacement AI cedes control of truth itself to the tech industry (Guest et al., 2025; Hardcastle, 2025). It trades independent thought and autonomy for corporate influence over the creation and validation of knowledge.
We view the uncritical embrace of displacement AI within higher education as an accelerant of the sector's toxic trajectory towards commercialisation, commodification, and co-option by extractive and exploitative companies. Displacement AI further erodes the potential for higher education institutions to support independent scholarship, science, and education for the public good that respects the safe and just boundaries of both people and planet (Stephens, 2024; Urai & Kelly, 2023).
Displacement AI is not inevitable; it can be resisted, rejected, or radically reformed. The precautionary principle should be robustly applied, and we must collectively resist the narrative that AI has to be embraced as a matter of urgency, productivity, or competitive advantage.
We recognise possibilities for ethical, emancipatory, independent, and sustainable AI (e.g., https://papareo.nz/), and support the development of such technologies. However, the potential for ethical AI does not resolve the unethical, ecologically destructive and extractive-exploitative nature of current displacement AI technologies. This potential does not render use of current displacement AI technologies ethical or responsible.
We view the failure to take a strong, defensive stance against displacement AI in our institutions as a threat to higher education itself. If we do not safeguard against displacement AI, the degrees awarded by our institutions will be undermined because nobody will be able to guarantee that academic work was conducted by people, rather than machines.
The Climate Justice Universities Union reaffirms our commitment to solidarity and a culture of care. We will support our members, regardless of their current level of engagement with displacement AI technologies, to counter the narrative of inevitability. We will provide solidarity, resources, and pathways towards reducing, resisting, refusing, and dismantling displacement AI, and we will advocate for institutional divestment and disassociation from displacement AI.
References
Alexander, A. (2025, November 10). The Ecological Cost of AI Is Much Higher Than You Think. Truthdig. https://www.truthdig.com/articles/the-ecological-cost-of-ai-is-much-higher-than-you-think/
Bansal, V. (2025, September 11). How thousands of 'overworked, underpaid' humans train Google's AI to seem smart. The Guardian. https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans
Barratt, L., Witherspoon, A., Uteuova, A., & Gambarini, C. (2025, April 9). Revealed: Big tech's new datacentres will take water from the world's driest areas. The Guardian. https://www.theguardian.com/environment/2025/apr/09/big-tech-datacentres-water
Bender, E. M. (2024). Resisting Dehumanization in the Age of "AI". Current Directions in Psychological Science, 33(2), 114–120. https://doi.org/10.1177/09637214231217286
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Birhane, A. (2025). The incomputable classroom: The limits and dangers of AI in education. In AI and the future of education: Disruptions, dilemmas and directions (pp. 46–53). UNESCO. https://doi.org/10.54675/KECK1261
Birhane, A., Dehdashtian, S., Prabhu, V., & Boddeti, V. (2024). The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1229–1244. https://doi.org/10.1145/3630106.3658968
Glickman, M., & Sharot, T. (2025). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9(2), 345–359. https://doi.org/10.1038/s41562-024-02077-2
Gourabathina, A., Gerych, W., Pan, E., & Ghassemi, M. (2025). The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 1805–1828. https://doi.org/10.1145/3715275.3732121
Guest, O. (2025). What Does 'Human-Centred AI' Mean? (No. arXiv:2507.19960). arXiv. https://doi.org/10.48550/arXiv.2507.19960
Guest, O., Suárez, M., Müller, B. C. N., van, E., Elizondo, A. R., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of 'AI' technologies in academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
Guest, O., & van Rooij, I. (2025). Critical Artificial Intelligence Literacy for Psychologists. PsyArXiv. https://doi.org/10.31234/osf.io/dkrgj_v1
Hardcastle, K. (2025, September 29). How generative AI is really changing education – by outsourcing the production of knowledge to big tech. The Conversation. https://doi.org/10.64628/AB.haumcgvjp
Jollimore, T. (2025, March 5). I Used to Teach Students. Now I Catch ChatGPT Cheats. The Walrus. https://thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats/
Kalluri, P. R., Agnew, W., Cheng, M., Owens, K., Soldaini, L., & Birhane, A. (2023). The Surveillance AI Pipeline (No. arXiv:2309.15084). arXiv. https://doi.org/10.48550/arXiv.2309.15084
Kelly, C., Bruisch, K., & Leahy, C. (2025, September 4). Opinion: We are lecturers in Trinity College Dublin. It is our responsibility to resist AI. The Irish Times. https://www.irishtimes.com/opinion/2025/09/04/opinion-we-are-lecturers-in-trinity-college-we-see-it-as-our-responsibility-to-resist-ai/
Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222--1223. https://doi.org/10.1126/science.adi0248
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (No. arXiv:2506.08872). arXiv. https://doi.org/10.48550/arXiv.2506.08872
Lantyer, V. (2025). The Bartz v. Anthropic Settlement: From Fair Use to Piracy in AI Training Data (SSRN Scholarly Paper No. 5455514). Social Science Research Network. https://doi.org/10.2139/ssrn.5455514
O'Donnell, J., & Crownhart, C. (2025, May 20). We did the math on AI's energy footprint. Here's the story you haven't heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Startari, A. V. (2025). Borrowed Voices, Shared Debt: Plagiarism, Idea Recombination, and the Knowledge Commons in Large Language Models (SSRN Scholarly Paper No. 5494528). Social Science Research Network. https://doi.org/10.2139/ssrn.5494528
Stephens, J. C. (2024). Climate Justice and the University. Johns Hopkins University Press. https://doi.org/10.56021/9781421450056
Suárez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability. Zenodo. https://doi.org/10.5281/zenodo.15677840
Urai, A. E., & Kelly, C. (2023). Rethinking academia in a time of climate crisis. eLife, 12, e84991. https://doi.org/10.7554/eLife.84991
Valdivia, A. (2025). The supply chain capitalism of AI: A call to (re)think algorithmic harms and resistance through environmental lens. Information, Communication & Society. https://doi.org/10.1080/1369118X.2024.2420021
Vercellone, C., & Di Stasio, A. (2023). Free Digital Labor as a New Form of Exploitation: A Critical Analysis. Science & Society, 87(3), 334–358. https://doi.org/10.1521/siso.2023.87.3.334
Yeung, J. A., Dalmasso, J., Foschini, L., Dobson, R. J., & Kraljevic, Z. (2025). The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models (No. arXiv:2509.10970). arXiv. https://doi.org/10.48550/arXiv.2509.10970