Korean AI firm showcases multilingual dubbing technology at global conference

By Han Young-hoon | Posted: November 17, 2025, 15:42 | Updated: November 17, 2025, 15:58
Eastsoft researchers at EMNLP 2025 [Photo=Eastsoft]


SEOUL, November 17 (AJP) - Eastsoft, a South Korean AI services company, said Monday that its research on multilingual automatic dubbing has been accepted for presentation at EMNLP 2025, the Conference on Empirical Methods in Natural Language Processing and one of the field's leading venues.

The company’s study introduces an end-to-end dubbing framework that uses large language models to synchronize translated audio with the timing and rhythm of the original video — a longstanding challenge for automated dubbing systems, which often produce speech that finishes too early or too late relative to on-screen mouth movements.

The research paper outlines a three-step pipeline combining speech-to-text, machine translation and text-to-speech modules. Eastsoft says it has enhanced the translation process with two LLM-driven techniques: “speech length adjustment” and “speech pause integration.”

The first predicts how long translated speech should be in order to match the original speaker’s timing, while the second incorporates natural pauses — including breaths and short breaks — to create more humanlike rhythm in dubbed audio.

Together, the additions aim to move beyond literal translation to generate dialogue that feels more natural across languages.
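For readers curious how the two techniques fit into a pipeline, the sketch below illustrates the idea in simplified form. All function names and the fixed words-per-second heuristic are assumptions for demonstration only; Eastsoft's actual system uses LLM-based prediction, not these rules.

```python
# Illustrative sketch of the dubbing pipeline's two timing techniques.
# Assumption: a crude constant speaking rate stands in for the paper's
# LLM-driven duration prediction; pause markers stand in for its
# learned pause integration.

def estimate_duration(text: str, words_per_second: float = 2.5) -> float:
    """Rough estimate of how long synthesized speech would take."""
    return len(text.split()) / words_per_second

def adjust_speech_length(translation: str, target_seconds: float) -> float:
    """Speech length adjustment: return the speaking-rate factor needed
    so the dubbed audio matches the original clip's duration.
    A factor above 1 means speak faster; below 1, slower."""
    return estimate_duration(translation) / target_seconds

def integrate_pauses(translation: str, marker: str = "<pause>") -> str:
    """Speech pause integration: naively insert pause markers after
    clause-ending punctuation to mimic humanlike breathing rhythm."""
    out = translation
    for punct in (",", "."):
        out = out.replace(punct, punct + " " + marker)
    return out

# Example: dub an 8-word translation into a 4-second source clip.
translation = "Hello everyone, welcome to today's presentation about dubbing."
rate = adjust_speech_length(translation, target_seconds=4.0)
paused = integrate_pauses(translation)
```

In this toy setup, the 8-word line would naturally run 3.2 seconds, so the rate factor of 0.8 tells the text-to-speech module to slow down slightly to fill the 4-second clip, while the pause markers give the synthesizer natural break points.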

According to Eastsoft, the system improved synchronization accuracy by 24 percent and multilingual user satisfaction by 12 percent compared with existing commercial AI dubbing tools, including the company's current products.

“We are moving closer to producing naturally dubbed videos in multiple languages that preserve the speaker’s original pace and rhythm,” the company said in a press release.

* This article, published by Aju Business Daily, was translated by AI and edited by AJP.
