ChatGPT: Helpful or Harmful?

ChatGPT, an AI chatbot, has taken the world by storm since its November 2022 release due to its remarkable ability to generate advanced, human-like writing. Created by OpenAI, the bot can produce an original piece about nearly any subject. The accuracy and versatility of the program have raised concerns in the areas of education, misinformation, privacy, and job security.

The bot is terrifyingly human-like. On OpenAI’s website (www.openai.com), the company boasts, “[ChatGPT] interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” While some pieces it generates have a robotic quality to them, many sound exactly as if an intelligent human being had composed them.

One primary concern about the bot is that it will make it easy for students to cheat on writing assignments. Because each piece it generates is unique, this is not typical plagiarism, which teachers and professors already have many systems in place to catch. In an effort to combat this, OpenAI has released a text classifier, a program that analyzes a piece and determines whether it was generated by a bot. However, it is unclear whether simply tweaking a word here and there would prevent the classifier from flagging an otherwise bot-generated piece. The classifier also does not give a concrete answer; rather, it judges how likely a piece is to be bot-generated on a scale from “very unlikely” to “very likely.” This seems like a recipe for cheating and ambiguity. The fact that OpenAI’s own classifier cannot definitively recognize material from its own bot suggests that nobody is equipped to handle the consequences of ChatGPT.

Another major issue with increasingly capable AI is that it will cause actual human beings to lose their jobs. If companies can have a bot handle their communication, they will no longer need to hire as many people. The replacement of human writers with ChatGPT has already begun, and it is especially concerning when it comes to writing about sensitive topics. According to the Vanderbilt Hustler, Vanderbilt University’s student-run newspaper, the university was caught using the bot to write a message of condolence to Michigan State after an on-campus shooting in February. The message was emailed to students and staff, who quickly noticed the phrase at the end: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.” Even in the wake of a mass shooting, human thought and emotion were removed from the condolences in favor of ChatGPT’s output. If this service did not exist, surely a human would have written a personal statement. After this was discovered, the university apologized and said it would “learn from our mistakes.”

If the content generated by ChatGPT is biased or inaccurate, it could spread detrimental misinformation. The program boasts accuracy, which could lead impressionable people to believe anything it generates. If ChatGPT’s creators cannot tell bot-generated and human-generated content apart, then it seems the bot also would not be able to tell fact from fiction. In addition, users would not know whether certain organizations had a hand in the content. A political party or corporation could work with OpenAI to produce biased content. Misinformation is already a problem with human-generated content, and adding AI “facts” to the mix could make things even worse.

Schools and even entire nations around the globe are trying to be proactive about ChatGPT. In Lynbrook, the service is blocked on students’ tablets. Various countries have moved to regulate or even outright ban the AI. According to CNBC, Italy became the first Western nation to completely ban ChatGPT on April 4. The nation feared that the amount of data ChatGPT must harvest in order to sound so human amounts to a severe breach of privacy for Italian citizens, and it released a statement: “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.” The European Union (EU) also recently proposed the European AI Act, part of which classifies ChatGPT as a “high risk” application over fears that it could affect users’ fundamental rights and safety.

There should be no replacement for genuine human language. With technology advancing far faster than laws can keep up, it is vital that everyone use it wisely. Using ChatGPT may seem harmless and easy in the short term, but it may have terrible consequences in the future.