South Korea is escalating its response to AI-powered misinformation, deploying a rarely used telecoms statute to detain suspects and seek tougher penalties, moves that are drawing interest across Asia, including in Japan. According to police data reported by NEWSIS via KOREA WAVE and AFPBB News, authorities are increasingly applying Article 47(2) of the Telecommunications Basic Act to online falsehood cases, particularly where profit motives or demonstrable harm are alleged.
What changed in South Korea, and why now
Police say the provision, which carries up to three years in prison or a fine of up to 30 million won (about ¥3.3 million), had been used sparingly because prosecutors must prove that the perpetrator intended to benefit themselves or others, or to cause damage. That calculation is shifting: investigators now trace ad revenue flows, donations, and sponsorship accounts on platforms such as YouTube to establish “profit intent,” allowing them to bring more cases under Article 47(2). The national police launched a dedicated “false information” task force on 14 October 2025, led by a cyber-investigation official, and began a parallel sweep on 2 January 2026 targeting coordinated or automated distribution methods, including macros. To date, police say they have apprehended 110 people, are investigating 199 additional cases, and have asked relevant agencies to delete or block 1,074 pieces of false or harmful content. In January 2026 alone, 10 cases were formally filed and one suspect was detained; by comparison, annual detentions under the statute between 2020 and 2025 ranged from zero to three.
High-profile cases: AI bodycam fake and disaster hoaxes
Recent examples illustrate how AI tools, monetization, and virality are converging. On 28 January 2026, the Gyeonggi Northern Police Agency detained a YouTuber in his 30s and referred him to prosecutors for allegedly fabricating a police callout video with AI. The clip, which amassed roughly 34 million views, mimicked a real body-worn camera feed, complete with an on-screen “REC” indicator, timestamp overlays, and AI-generated breathing and radio chatter; investigators say they confirmed that the channel was monetized through advertising. In a separate case, a man in his 60s was detained and referred to prosecutors for posting 362 false messages targeting bereaved families of the Itaewon crowd-crush tragedy while also publicizing a donation account; he denied the charges at his first hearing. Authorities also referred a YouTuber with roughly 950,000 subscribers to prosecutors for spreading a fabricated claim that dozens of corpses missing their lower halves had been found in South Korea. For authorities, such cases typify the nexus of scale, monetization, and potential harm that justifies invoking Article 47(2).
Legal backdrop: A narrow path after a 2010 court ruling
In 2010, South Korea’s Constitutional Court struck down Article 47(1) of the Telecommunications Basic Act, which broadly penalized the dissemination of false information, finding its terms unconstitutionally vague. That decision left a constrained toolkit: Article 47(2) requires proving intent to profit or to inflict damage, a higher bar that limited its use for years. Even now, investigators face hurdles. Major platforms may decline to release subscriber data on the basis of a “false information” allegation alone, compelling police to pursue parallel charges such as defamation to unmask suspects. Policymakers also acknowledge a legislative gap: there is still no statute that directly and narrowly criminalizes harmful falsehoods while safeguarding free expression.
Why this matters in Japan
Japan is closely tracking global approaches to AI governance and online integrity while maintaining a strong rule-of-law, pro-innovation posture. Tokyo led the G7 Hiroshima AI Process in 2023, promoting risk-based, human-centric principles for generative AI, and has favored multi-stakeholder cooperation with platforms over sweeping criminalization of speech. Domestically, Japan relies on a legal mix that includes the Penal Code’s defamation and insult provisions (with tougher penalties for insult introduced in 2022) and streamlined procedures under the Provider Liability Limitation Act for compelling disclosure of the identities of anonymous posters behind unlawful content, mechanisms that help victims seek redress without chilling legitimate speech. For Japan’s regulators, South Korea’s test case offers concrete evidence that revenue tracing and transparency requirements can be significant levers against malicious deepfakes. At the same time, Japan’s institutions, from the Digital Agency to the Personal Information Protection Commission, have emphasized proportionate, evidence-based interventions and collaboration with industry to label AI-generated media, improve provenance, and strengthen media literacy. That balance, protecting individuals and public safety while preserving open discourse, remains central to Japan’s appeal as a trusted, stable digital economy in Asia.
Regional implications for creators, platforms, and expats
For content creators across Asia (including many who publish in Japanese or for Japanese audiences), monetization records, donation links, and ad partnerships can now form part of the evidentiary trail in criminal probes abroad. Cross-border viewership means a video produced in one country can trigger legal exposure in another, making clear labeling, accurate sourcing, and prompt corrections all the more important. Platforms operating in Japan and South Korea face intensifying expectations to act against coordinated manipulation, AI-driven hoaxes, and harassment of disaster victims. While South Korea’s approach highlights the deterrent power of targeted criminal enforcement, Japan’s steady, consensus-driven model underscores that democratic safeguards and innovation can coexist. Both directions will shape Asia’s next chapter in governing AI and online speech. (Source: NEWSIS/KOREA WAVE/AFPBB News)