SEOUL, November 11 (AJP) - South Korean universities and companies are increasingly adopting detection tools to catch AI-assisted cheating in admissions and evaluations, especially after a large-scale AI-enabled test-cheating scandal recently disgraced one of the country's top universities. Yet many find the services largely lacking.
The tools are applied haphazardly and their results trusted blindly, while most universities have yet to reach a consensus on AI use in class materials and evaluations.
For job seekers, the consequences can be personal and severe. Kim Ye-ji, 28, a job applicant in Seoul, said she feared being unfairly screened out after spending nights perfecting her self-introduction essay.
“I can’t imagine how frustrating it would be if the AI detector labeled my own writing as machine-written,” she said. “It’s already hard enough to get a few big-company interview chances — losing one that way would feel devastating.”
According to a survey by Incruit, 27.5 percent of companies already screen for AI-assisted writing in self-introduction letters, though few verify detection results before disqualifying applicants. Similar tools are now used across college assignments and corporate reports.
At the same time, AI-driven cheating is spreading faster than detection technology can keep up.
The number of remote and large-scale lectures in major universities has surged since the pandemic, creating blind spots in monitoring. At Yonsei University, the number of large classes with more than 200 students rose from 75 in 2020 to 104 last year, while online courses jumped from 34 in 2023 to 321 this semester, according to public education data.
Professors say the rapid rise in online lectures and corporate remote interviews has created an environment ripe for AI misuse — but current detection tools remain inconsistent and poorly integrated into learning systems.
“AI detectors can often identify which model — whether GPT, Gemini or Claude — generated the text, since each has distinct patterns,” said Billy Choi, professor at Korea University’s AI Research Institute. “But once a human edits the content, those patterns collapse, and that’s when detection errors occur.”
He added that improving such tools is “technically simple but financially demanding,” requiring additional training data and costly fine-tuning by developers.
At universities, the debate is shifting from “whether to use AI” to “how to use it meaningfully.”
“AI tools are practically impossible to ban completely,” said Jin Lee, professor of cultural content studies at Hanyang University. “Instead of policing students for using AI, professors need to rethink how assignments are designed — focusing on how meaning is created through AI use, not just whether it was used.”
Lee added that the ethical burden of detecting AI use can no longer fall solely on students. “We can’t 100 percent verify if something was written by AI, and questioning students based on suspicion alone is meaningless,” she said. “The conversation should move toward helping students use AI responsibly while developing their own voices.”
Universities such as Yonsei and Korea University have issued internal guidelines urging faculty not to grade students solely based on detector results. Others are exploring hybrid evaluation systems — combining oral defenses, peer review, and in-class writing — to balance technology with fairness.
Still, pressure to detect AI-generated content continues to mount as academic-integrity concerns rise worldwide. In the United States and Europe, several institutions have already limited use of detection tools such as ZeroGPT and Turnitin due to accuracy concerns.
Copyright ⓒ Aju Press All rights reserved.