Many conversations I’ve had with my peers about AI use have ended in the same question: Would forgoing AI make any real difference if ‘everyone else’ is using it anyway? While, unfortunately, they are correct that a staggering 64% of students use AI, they are misguided in their assumption that one fewer student using AI would not make a positive impact.
College students report using AI for myriad reasons, ranging from summarizing lectures to editing their own writing. Students use AI services like ChatGPT and Claude as crutches, leaning on them to complete tasks that, prior to the enormous surge in AI use in late 2022, they would have successfully completed independently. The number of conversations I’ve overheard or even participated in about casual AI use is shocking, with students offhandedly referencing their willingness to use ChatGPT for something as simple as writing an email, a task that any college-aged individual should be able to complete without technical assistance. A popular argument for using AI in such scenarios is that it alleviates the stress of completing tasks students dismiss as busywork: Why do it yourself when it can be done for you?
While writing an email may carry no obvious academic or personal benefit on its own, the fact of the matter is that emails and other purportedly simple tasks, like writing papers or doing research, are part of a larger communication skill set that is essential to simply being human. It’s true that ChatGPT could complete these tasks faster than either you or I could, but there is something innately human about them that we should all value. What makes human beings so unique is our ability to communicate with one another, and as casual AI use becomes extensively normalized, especially on college campuses, we’re losing our ability to develop communication skills that would ultimately translate into our face-to-face interactions with others.
Casual AI use is adversely affecting more than just our communication skills. The surge in AI use has prompted a frantic response from the United States to strengthen AI infrastructure and sharpen its competitiveness in the global AI market, spurring the construction of new AI data centers. Early data indicate that marginalized communities, already burdened by internet data centers sited through discriminatory urban planning, will bear the consequences of this buildout as well. Current analysis suggests that these data centers will compound existing environmental inequalities: A review of roughly 700 data centers found that nearly half were placed in regions already experiencing high levels of air and water pollution.
The National Association for the Advancement of Colored People has sued xAI for illegally operating 27 gas turbines in Southaven, Miss., without an air permit, in effect running an unpermitted power plant to fuel the company’s chatbot. The turbines were located near predominantly Black neighborhoods and emitted pollution and known carcinogens, sacrificing the health and safety of Black communities in favor of AI infrastructure.
By suing xAI, the NAACP is seeking to protect the residents of North Mississippi and Memphis. However, the normalization of casual AI use will ultimately drive the establishment of more AI data centers, which, if this trend continues, will likely be located in or near already marginalized communities. The normalization of AI use is blinding students to its large-scale impacts. I urge Tufts students to limit their use of artificial intelligence to protect both their academic integrity and the livelihood of marginalized communities already suffering from the imposition of AI data centers.