
Why I don’t use AI

The cultural proliferation of artificial intelligence over the past year has glossed over its morally repugnant reality.



One of the more dystopian aspects of the beginning of this semester has been learning professors’ policies on artificial intelligence. The development of these new policies follows that of AI itself, as generative models like DALL-E 2 and ChatGPT have exploded into the public consciousness. These AIs have gained extreme popularity over the past year due to their unprecedented text and image generation abilities. They’ve been hailed by many in the media as revolutionary. However, the reality of AI has turned out to be a dark, cautionary tale of greed and disregard for humanity.

Years of science fiction novels and movies have produced a very specific image of AI: conscious machines capable of human levels of logic and creative thinking. At first glance, our current AI technology seems to fit that bill. ChatGPT can even pass the Turing test, meaning a blind interviewer would not be able to differentiate between a human and an AI when engaging both in conversation. But importantly, AI as we know it today is not yet the AI of our fictions, as it is fundamentally incapable of producing original ideas. Rather, it can only recombine existing ideas scraped from every corner of the internet, notably without any citation of where that material came from. The marketing of products such as ChatGPT as AI helps to disguise what is really going on: plagiarism.

This cry of plagiarism arises not only from syllabi but from many organizations and content creators across the internet, including The New York Times, which is currently suing OpenAI for the unpaid use of its work. Journalism is not the only profession at risk of suffering at the hands of AI. Teachers, already in dire shortage, now have to contend with the use of AI in their classrooms. Artists across many mediums, including graphic design, music and writing, have felt the effects of AI: their work is used without proper compensation, and they are shut out of opportunities to profit from it.

The issues with AI go beyond plagiarism. AI has been used to design chemical weapons, produce evidence that led to the wrongful arrest of a pregnant woman for carjacking, write ransomware code and even clone a daughter's voice to convince her mother that she had been kidnapped.

All of this highlights how irresponsible it was to release such a poorly regulated product to the general public. It raises the question: How did we even get here? We can get a good understanding of AI's recent history from OpenAI, the organization behind DALL-E 2 and ChatGPT.

OpenAI was founded as a nonprofit on the principal tenets of transparency and benefit to humanity. In reality, however, it has been frighteningly secretive with its information and motivated by extreme greed. OpenAI first released ChatGPT to the public as a sort of global testing program, taking the world's inputs to improve its product. Essentially, it is willing to risk the horrible consequences of an unpredictable machine for the benefit of its product. Furthermore, while nominally a nonprofit, OpenAI operates a capped-profit entity, which has allowed it to receive enormous amounts of outside funding. This capped-profit arm developed ChatGPT and has, unsurprisingly, instituted a $20-per-month subscription service for it. If that's not a clear enough interest in profit, consider the company's use of Kenyan laborers paid less than $2 an hour to build ChatGPT's safety system.

OpenAI has also backpedaled on its promise of transparency. Its unique business structure has allowed it to obscure its revenue: OpenAI reported revenue of less than $50,000 in 2022, and considering that its revenue rose to an estimated $1 billion just one year later, it is unlikely that the reported figure is accurate. The company has also quietly reversed its policy of sharing internal documents and financial statements with the public upon request. When confronted about this reversal, the company responded that its new practice aligned it with industry standards, the very standards OpenAI was purportedly rejecting in order to create a more moral AI.

In the face of this morally repugnant phenomenon, it can feel difficult to have any impact. How do you fight the seemingly endless progress of technology? This is one case where boycotts can have a large impact. Not only would a drop in users help pop the AI investment bubble, it would also rob AI companies of one of their most precious resources: the public's interactions with their products. While it's easy to succumb to the convenience of AI, it's important to remember that by refusing to engage, you have the power to cripple an institution defined by greed and an utter disregard for humanity.