KU News: Digital tool spots academic text spawned by ChatGPT with 99% accuracy

Today's News from the University of Kansas

From the Office of Public Affairs | http://www.news.ku.edu

Headlines

Digital tool spots academic text spawned by ChatGPT with 99% accuracy
Heather Desaire, a chemist who uses machine learning in biomedical research at the University of Kansas, has unveiled a new tool that detects with 99% accuracy scientific text generated by ChatGPT, the artificial intelligence text generator. Desaire says accurate AI-detection tools are urgently required to defend scientific integrity.

Professors call for further study of potential uses of AI in special education, avoiding bans
A group of educators that includes a University of Kansas researcher has just published a position paper reviewing AI’s potential in special education, calling for patience and consideration of its potential uses before such technology is banned. Most importantly, AI should be considered a tool that can potentially benefit students with disabilities, according to James Basham, KU professor of special education, and co-authors.

Full stories below.

————————————————————————

Contact: Brendan M. Lynch, 785-864-8855, [email protected]
Digital tool spots academic text spawned by ChatGPT with 99% accuracy
LAWRENCE — Heather Desaire, a chemist who uses machine learning in biomedical research at the University of Kansas, has unveiled a new tool that detects with 99% accuracy scientific text generated by ChatGPT, the artificial intelligence text generator.

The peer-reviewed journal Cell Reports Physical Science published research showing the efficacy of her AI-detection method, along with sufficient source code for others to replicate the tool.

Desaire, the Keith D. Wilner Chair in Chemistry at KU, said accurate AI-detection tools are urgently required to defend scientific integrity.

“ChatGPT and all other AI text generators like it make up facts,” she said. “In academic science publishing — writings about new discoveries and the edge of human knowledge — we really can’t afford to pollute the literature with believable-sounding falsehoods. They’d unavoidably make their way into publications if AI text generators are commonly used. As far as I’m aware, there’s no foolproof way to, in an automated fashion, find those ‘hallucinations’ as they’re called. Once you start populating real scientific facts with made-up AI nonsense that sounds perfectly believable, those publications are going to become less trustable, less valuable.”

She said the success of her detection method depends on narrowing the scope of writing under scrutiny to scientific writing of the kind found commonly in peer-reviewed journals. This improves accuracy over existing AI-detection tools, like the RoBERTa detector, which aim to detect AI in more general writing.

“You can easily build a method to distinguish human from ChatGPT writing that is highly accurate, given the trade-off that you’re restricting yourself to considering a particular group of humans who write in a particular way,” Desaire said. “Existing AI detectors are typically designed as general tools to be leveraged on any kind of writing. They are useful for their intended purpose, but on any specific kind of writing, they’re not going to be as accurate as a tool built for that specific and narrow purpose.”

Desaire said university instructors, grant-giving entities and publishers all require a precise way to detect AI output presented as work from a human mind.

“When you start to think about ‘AI plagiarism,’ 90% accurate isn’t good enough,” Desaire said. “You can’t go around accusing people of surreptitiously using AI and be frequently wrong in those accusations — accuracy is critical. But to get accuracy, the trade-off is most often generalizability.”

Desaire’s co-authors were all from her KU research group: Romana Jarosova, research assistant professor of chemistry at KU; David Hua, information systems analyst; and graduate students Aleesa E. Chua and Madeline Isom.

Desaire and her team’s success at detecting AI text may stem from the high level of human insight (versus machine-learning pattern detection) that went into devising the code.

“We used a much smaller dataset and much more human intervention to identify the key differences for our detector to focus on,” Desaire said. “To be exact, we built our strategy using just 64 human-written documents and 128 AI documents as our training data. This is maybe 100,000 times smaller than the size of data sets used to train other detectors. People often gloss over numbers. But 100,000 times — that’s the difference between the cost of a cup of coffee and a house. So, we had this small data set, which could be processed super quickly, and all the documents could actually be read by people. We used our human brains to find useful differences in the document sets; we didn’t rely on the strategies to differentiate humans and AI that had been developed previously.”
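For readers curious about what that kind of approach can look like in practice, here is a minimal sketch: a handful of hand-chosen paragraph features fed to an off-the-shelf classifier. The specific features, the logistic-regression model and the function names below are illustrative assumptions made for this example; they are not the team’s published feature set or code, which is available with the paper.

# Illustrative sketch only: a few hand-chosen paragraph features plus a
# small, off-the-shelf classifier, in the spirit of the approach described
# above. The features and model here are assumptions for demonstration,
# not the published feature set or code.
import re

import numpy as np
from sklearn.linear_model import LogisticRegression


def featurize(paragraph):
    """Turn one paragraph into a short vector of human-chosen features."""
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    words = paragraph.split()
    sent_lens = [len(s.split()) for s in sentences] or [0]
    return [
        len(sentences),                      # sentences per paragraph
        float(np.mean(sent_lens)),           # mean sentence length (words)
        float(np.std(sent_lens)),            # variability of sentence length
        paragraph.count("("),                # parentheses, common in science prose
        paragraph.count(";"),                # semicolons
        sum(w[0].isdigit() for w in words),  # tokens that start with a digit
    ]


def train_detector(train_texts, train_labels):
    """Fit a classifier on a small, hand-readable training set.
    train_labels: 1 = human-written, 0 = AI-generated."""
    X = np.array([featurize(t) for t in train_texts])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, np.array(train_labels))
    return model


def predict_human(model, texts):
    """Return 1 for paragraphs the model judges human-written, 0 for AI."""
    X = np.array([featurize(t) for t in texts])
    return model.predict(X)

The point of the sketch is the design choice described above: when each feature encodes something a person has noticed about how the two kinds of writing differ, a very small, fully readable training set can be enough.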

Indeed, the KU researcher said the group built its approach without borrowing from past AI-detection strategies, and the resulting technique includes elements unique in the field of AI text detection.

“I’m a little embarrassed to admit this, but we didn’t even consult the literature on AI text detection until after we had a working tool of our own in hand,” Desaire said. “We were doing this not based on how computer scientists think about text detection, but instead using our intuition about what would work.”

In another important respect, Desaire and her group flipped the script on the approach taken by previous teams building AI detectors.

“We didn’t make the AI text the focus when developing the key features,” she said. “We made the human text the focus. Most researchers building their AI detectors seem to ask themselves, ‘What does AI-generated text look like?’ We asked, ‘What does this unique group of human writing look like, and how is it different from AI texts?’ Ultimately, AI writing is human writing since the AI generators are built with large repositories of human writing that they piece together. But AI writing, from ChatGPT at least, is generalized human writing drawn from a variety of sources.

“Scientists’ writing is not generalized human writing. It’s scientists’ writing. And we scientists are a very special group.”

Desaire has made her team’s AI-detecting code fully accessible to researchers interested in building off it. She hopes others will realize that AI and AI detection are within reach of people who might not consider themselves computer programmers now.

“ChatGPT is really such a radical advance, and it has been adopted so quickly by so many people, this seems like an inflection point in our reliance on AI,” she said. “But the reality is, with some guidance and effort, a high school student could do what we did.

“There are huge opportunities for people to get involved in AI, even if they don’t have a computer-science degree. None of the authors on our manuscript have degrees in computer science. One outcome I would like to see from this work is that people who are interested in AI will know the barriers to developing real and useful products, like ours, aren’t that high. With a little knowledge and some creativity, a lot of people can contribute to this field.”

-30-

————————————————————————
The official university Twitter account has changed to @UnivOfKansas.
Refollow @KUNews for KU News Service stories, discoveries and experts.


————————————————————————

Contact: Mike Krings, 785-864-8860, [email protected]
Professors call for further study of potential uses of AI in special education, avoiding bans
LAWRENCE — Artificial intelligence is making headlines about its potentially disruptive influence in many spaces, including the classroom. A group of educators that includes a University of Kansas researcher has just published a position paper reviewing AI’s potential in special education, calling for patience and consideration of its potential uses before such technology is banned.

Most importantly, AI should be considered a tool that can potentially benefit students with disabilities, according to James Basham, KU professor of special education, and co-authors. Tools such as ChatGPT can quickly turn out writing. And naturally, some students have used that to avoid schoolwork.

But banning it is not the answer.

“It’s really been over the last decade or so that we’ve seen AI and machine learning move from just what you might call geek culture to the bigger world,” Basham said. “We’ve been studying it, but ChatGPT made it a little more real by making it available to the public. While we think the writing process is complex, AI can do it, quickly and fairly well.

“When you think about people with disabilities in education, you often think about writing. We get referrals all the time for students who can’t or struggle to express themselves in writing. And AI can help with that. So we need to think about what questions we need to ask or issues to think about.”

In the paper, the authors provided a brief history of artificial intelligence and how it developed to its current state. They then considered ethical questions regarding its use in education and special education and how policy should address the technology’s use. Foremost, schools should not reflexively ban the technology, the authors wrote. Meanwhile, educators, researchers and others need to think about what they want students to learn and how the technology can aid that process. Additionally, teacher educators who are producing future generations of educators need to work with their students to consider how they can effectively address the topic.

Among the main ethical considerations is information literacy, the authors wrote. Students need to learn how and where to find valid information as well as how to discern true information from false, think critically and assess topics to avoid misinformation. Educators should also avoid the trap of evaluating skills like writing too narrowly.

“If we’re only having students do things in one certain way, the AI can probably do that,” Basham said. “But if we’re bringing in multiple concepts and modalities, then it’s a much different conversation. We need to think about who we are as a society and what we teach, especially when we think about students with disabilities, because they are often judged on just one aspect.”

The article, published in the Journal of Special Education Technology, was co-written with Matthew Marino, Eleazar Vasquez and Lisa Dieker, all of the University of Central Florida, and Jose Blackorby of WestEd.

The authors also urged those in education to consider whether AI is a “cognitive prosthesis” or something more. Just as a student with physical impairments might use speech-to-text to translate their thoughts more efficiently into writing, or a student with a hearing impairment might use an app on a phone to reduce ambient noise in the classroom, a student with cognitive disabilities could potentially use AI to improve their writing.

But while technology can help students improve writing and other skills, educators need to consider consent, the authors wrote. All students should be taught about what information any AI collects, how it is stored and how it is shared. Parents have a role to play in that regard as well, in considering whether a school that uses AI is right for their child, if it complies with an Individualized Education Plan and if it can be personalized while being respectful of diverse student backgrounds and values, the authors wrote.

The authors also noted that AI already exists in schools: Students use laptops, tablets, smartphones and other technologies unavailable to previous generations. Yet those tools are not banned from classrooms outright. Similarly, while technologies such as ChatGPT could be used to cheat or reduce student workload, they could also potentially be an effective resource for students with disabilities. Before any such judgments are made, researchers and policymakers should continue to ask questions and ensure people who represent students with disabilities are at the table, the authors wrote.

“Technology is a societal experiment,” Basham said. “We can use it effectively or ineffectively. But the education system needs to get in front of it and figure out how to use this particular technology to further human betterment. What we need is not to be afraid of change but to focus on critical thinking and problem-solving, so we are teaching students to do that whether with AI or without it. We need to reflect not just on how it will change our lives today, but on what it means for the future.”

-30-

————————————————————————

KU News Service
1450 Jayhawk Blvd.
Lawrence KS 66045
Phone: 785-864-3256
Fax: 785-864-3339
[email protected]
http://www.news.ku.edu

Erinn Barcomb-Peterson, director of news and media relations, [email protected]

Today’s News is a free service from the Office of Public Affairs
