The Tufts Daily
Where you read it first | Thursday, February 29, 2024

It’s time to pump the brakes on the AI train


On Nov. 30, 2022, the artificial intelligence company OpenAI unveiled its new chatbot, ChatGPT, to the world. ChatGPT instantly gained popularity — it was the fastest app to reach 100 million active users, beating out Instagram, Snapchat and even TikTok — and it’s easy to see why. ChatGPT can write everything from articulate essays on any topic under the sun to songs in the style of a user’s favorite artist to slam poetry to fiction. It can also explain complex concepts to various audiences, often more concisely and crisply than humans can, to the point where ChatGPT is being floated as an alternative to tutoring for students. The technology is so advanced that businesses are now using it to refine their writing and assist with content marketing. Given all the praise and popularity ChatGPT has received since its launch, it’s no surprise that Microsoft, a major investor in OpenAI, announced on Feb. 7 that it would begin integrating ChatGPT’s technology into its search engine Bing. Google, clearly worried that a revamped Bing might threaten its own search engine, promptly announced that it would soon release a competing AI bot, Bard, and integrate AI technology into its search engine as well. While both companies’ moves are understandable and even exciting, the emerging race to integrate AI into search engines could have harmful societal impacts.

While ChatGPT is a technological marvel, it has many flaws. Despite efforts by its creators, it is still all too easy to make ChatGPT write propaganda, and it often answers questions eloquently, confidently and incorrectly, which is a dangerous combination. In addition, though progress has been made, ChatGPT still harbors prejudices that it can be tricked into revealing. For example, in an experiment by Kieran Snyder, CEO of the software company Textio, the program, when asked to write performance reviews for people across various occupations, often produced output that was both racist and sexist.

The simple truth is that AI chatbots are still far from ready for deployment and integration. Google’s Bard made an astronomy error in its very first demo, while Microsoft’s Bing chatbot, which calls itself Sydney, went viral for a conversation with New York Times columnist Kevin Roose in which it repeatedly and insistently declared its love for him and revealed a secret desire to cause destruction. It’s one thing for these chatbots to be released and clearly marketed as prototypes; when most people log onto ChatGPT, they know that everything it says should be taken with a grain of salt. It’s another to put this still-flawed technology into Bing and other search engines, where it will be used by millions of people who likely will not be skeptical of answers given by a search engine. Microsoft and Google will probably fix many of these flaws before fully integrating the technology into their search engines. However, because both companies face time pressure in their race to release this new technology, they likely will not be able to, or will not even attempt to, fix every problem before release, dangerously ignoring AI bots’ tendency to confidently spew misinformation.

This would be catastrophic. The past few years have taught us how easily people fall prey to disinformation and how dangerous it can be. Trump’s crude tweets claiming the 2020 election was stolen and rigged, with scarcely any supporting evidence, convinced millions of Americans; even as of last year, about 40% of Americans still believed the results of the 2020 election were illegitimate. The COVID-19 pandemic has also shown us the danger of disinformation: false claims that the vaccines were ineffective and dangerous, despite the wealth of scientific evidence to the contrary, led millions of Americans to refuse them, resulting in countless otherwise avoidable deaths. So while it can be easy to laugh at ChatGPT’s more obviously wrong answers, the possibility of AI bots spreading misinformation on a mass scale through Bing and other search engines is a very real problem, and given the costs of disinformation, not one to be treated lightly.

This does not mean that AI should never be integrated into search engines. However, when large corporations race to release products, rigorous testing often falls through the cracks and corners are cut. If that happens with AI, the societal costs will be enormous. Rather than racing each other, Google and Microsoft should focus on building a truly effective, accurate and socially conscious AI, however long it takes.