S. Korea's election governing body warns of AI-based 'Deepfake' content surge

By Park Sae-jin Posted : February 19, 2024, 15:46 Updated : February 19, 2024, 15:46
[Gettyimages Bank]

SEOUL -- South Korea's state-run election governing body has issued a strong warning against deepfakes, artificial intelligence-generated content that uses the facial and physical likenesses of real people.

A deepfake is AI-generated content created with a technique that produces fake videos or images that look strikingly real. It works by superimposing a real person's face onto another person's body in a video or picture, making it seem as if a famous person said or did something they never actually did.

Misuse of deepfake technology has been reported worldwide, with victims ranging from high-profile celebrities to ordinary people. The technique is often used to spread misinformation or to defame individuals by depicting them in images or video clips showing socially unacceptable scenarios.

According to the National Election Commission, a total of 129 cases of illegal deepfake content targeting voters were detected and removed from the web between January 29 and February 16. South Korea's general election is scheduled for April 10.

"Most of Deepfake content had been deleted or is currently being processed," the election commission said in a statement, adding that all fabricated content was detected through the election governing body's monitoring system.

In 2022, confusion spread among voters in a southern county after a ruling party politician used a fabricated video of President Yoon Suk-yeol to promote his local election campaign. The fake clip of Yoon became the center of a controversy over whether deepfake content should be accepted in election campaigns.

Prompted by the fabricated content case involving the president, the National Assembly revised the election law in December 2023 to ban the use of deepfake content in political campaigns and elections. Violators face a fine of up to 50 million won ($37,419) or up to seven years in prison.

"It is basically almost impossible to prevent the production and distribution of Deepfake content. The best way to reduce AI-fabricated content is to counteract quickly. If there is a special team formed for the rapid detection and deletion of fake content, then we can reduce the chaos and damage done to society," Korea AI Education Association's Chairman Dr. Moon Hyung-nam told Aju Korea Daily on February 19.