CHAPEL HILL, N.C. – If you’ve been on the Internet recently, you’ve probably heard of ChatGPT.  

ChatGPT, short for Chat Generative Pre-trained Transformer, is a program developed by the artificial intelligence research laboratory OpenAI. Its main function is to generate human conversation, but it can also write essays, take tests and even compose poetry and song lyrics.

But how does ChatGPT work? Shashank Srivastava, an assistant professor of computer science at UNC, compared it to the predictive text on a phone. 

“In principle, GPT does the same thing,” he said. “It’s just a much more complex, much bigger model, trained on a lot more text, and not just emails and text messages but all sorts of text describing all sorts of knowledge about the world.”
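One rough way to picture that "predictive text, scaled up" idea is a toy model that simply counts which word tends to follow which in a small corpus and guesses the most common follower. This is only a sketch for illustration; GPT itself uses a large neural network over tokens rather than word counts, but the underlying task of predicting the next word from earlier text is the same.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the huge body of text a real model trains on.
    corpus = "the cat sat on the mat the cat chased the dog".split()

    # Count how often each word follows each other word (a simple bigram model).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat', the most frequent follower of 'the'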

Users enter a specific request, ChatGPT generates text to fulfill that request, and users can then provide feedback on the response. The program then uses that feedback to improve future responses.
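The cycle can be sketched schematically as below. The function names here are hypothetical placeholders, not OpenAI's actual API, and the real system folds feedback back in through reinforcement learning from human feedback rather than a simple log; this only illustrates the request, response and feedback steps described above.

    feedback_log = []

    def generate_text(request: str) -> str:
        # Placeholder for the model's text-generation step.
        return f"Generated response to: {request}"

    def store_feedback(request: str, response: str, rating: str) -> None:
        # Feedback collected here would later inform further training.
        feedback_log.append({"request": request, "response": response, "rating": rating})

    prompt = "Write a short poem about spring."
    response = generate_text(prompt)
    store_feedback(prompt, response, rating="thumbs_up")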

ChatGPT generates its responses from data sets that include everything from scholarly articles to Reddit posts. But even with this vast pool of text, Hussman professor Heesoo Jang said that ChatGPT still doesn’t take everything into account.

“The problem is the data sets mostly come from the internet, if not all, and we know that the internet is full of information that is one-sided,” she said. “Sometimes, the voices of marginalized people and communities are not heard.” 

Jang said that there is a disproportionate amount of content on the Internet presented in English compared to other languages. 

She said that while bias on the Internet is not a new issue, ChatGPT reflects existing biases because it is trained on data from the Internet. 

“We don’t want an AI model that is racist, that is sexist and that is not inclusive at all,” she said. 

But the lack of inclusivity isn’t the only thing alarming academia. Because ChatGPT can learn and improve, and can generate full essays on demand, many educators are worried about its effect on academic work.

However, Carolina Digital Humanities Director Daniel Anderson said that he’s not as worried about cheating. 

“I think everyone’s latching on to cheating or plagiarism, and I’m not trying to diminish those concerns, but I don’t know,” he said. “I just feel like it’s not time to press the panic button, or that’s not the first move in terms of what this is doing for our intellect and our culture.” 

Anderson said he is able to tell whether text was written by a person or generated with ChatGPT. He said that people have already developed programs that can detect if ChatGPT was used.  

He also said that writing is always evolving and that society has always been worried about change.

“The word processor, the invention of the eraser for the pencil, there’s always something that’s changing the technology of writing,” he said. “This one’s a little different, but you know, there’s a long history of it.” 
