CES 2026: Lego no longer just child's play as it goes high-tech with AI LAS VEGAS, January 05 (AJP) - Lego bricks don’t just stack anymore — they listen, light up and talk back. At CES 2026, the Lego Group unveiled AI-embedded bricks designed to bring sound, motion and real-time feedback into physical play, blurring the line between classic building toys and interactive games. At a media showcase on Monday at the world’s largest technology trade show, the Danish toymaker introduced what it calls “Smart Play,” a tech-hybrid Lego platform centered on a standard 2×4 brick embedded with sensors, LED lights and a tiny speaker. The move marks Lego’s boldest step yet into connected play as it seeks to compete on equal footing with digital-first entertainment. The Smart Brick, identical in size to a traditional Lego piece, houses a 4.1-millimeter ASIC chip — smaller than a Lego stud — that runs what the company calls the “Play Engine.” The chip allows the brick to sense motion, orientation, magnetic fields and the proximity of other Smart Bricks, enabling pieces to interact with one another in real time. “We wanted to leverage technical innovation and bring it into physical play,” said Julia Goldin, Lego Group’s chief product and marketing officer. “Kids have unlimited creativity. They have unlimited imagination.” Beyond the core brick, the Smart Play system includes Smart Minifigures and Smart Tags, each equipped with digital IDs readable through near-field magnetic communication. The bricks also feature an accelerometer, integrated copper coils and a miniature speaker that produces sounds triggered by live actions rather than pre-recorded clips. During a live demonstration, two Smart Bricks changed colors depending on how far apart they were and whether they were facing each other. A toy car revved its engine when pushed forward and screeched when sharply tilted. A Lego duck quacked while “swimming” and croaked softly when laid down to sleep. 
A plane roared through aerial maneuvers, responding to every twist and turn. Minifigures further altered the play experience. A civilian figure screamed when repeatedly rammed by a toy car, while a pilot figurine let out disgruntled sounds if a plane was flipped upside down. Lego said the system operates on a proprietary wireless layer called BrickNet, built on Bluetooth technology using what it calls “Neighbor Position Measurement.” Crucially, the bricks communicate directly with one another without the need for apps, internet connections or external controllers. “The way to think of this is like a tiny distributed console, but for physical play,” said Tom Donaldson, senior vice president at the Lego Group, who led the technical demonstration. “One Smart Brick can be reused to unlock a huge range of different play experiences across potentially thousands of models.” Battery performance was another focus. Lego said the Smart Bricks are designed to function even after years of inactivity, with multiple pieces capable of being charged wirelessly on a shared charging pad. In a surprise appearance, executives from Lucasfilm joined Lego on stage alongside R2-D2 and C-3PO to preview how the Star Wars franchise will anchor the first Smart Play lineup. Lego’s partnership with Lucasfilm began in 1998, when its minifigure catalog comprised just 27 characters. Today, the lineup spans more than 1,500. “The Lego brand is all about building creativity and inventing your own adventure, and this new Lego Smart innovation takes that to the next level,” said Dave Filoni, Lucasfilm’s chief creative officer. “We hope it’ll inspire a whole new generation of storytellers.” Lego plans to launch the Smart Play system with three Star Wars sets in March, with starting prices around $90. 
The company, which celebrated its 70th anniversary last year, said Smart Play reflects its effort to bring advanced technology into physical play while preserving the open-ended creativity and simplicity that have long defined the Lego brand. 2026-01-06 11:25:21 -
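Lego has not published how BrickNet's "Neighbor Position Measurement" works; as a toy illustration only, the distance-and-orientation color behavior shown in the demo could be modeled like this, with every threshold, distance and color being a hypothetical assumption rather than the actual protocol:

```python
# Illustrative sketch of the demo behavior: two Smart Bricks changing color
# based on how far apart they are and whether they face each other.
# All thresholds and color names are hypothetical assumptions.

def brick_color(distance_cm: float, facing: bool) -> str:
    """Pick an LED color from the measured distance to a neighbor brick
    and whether the two bricks are oriented toward each other."""
    if facing and distance_cm < 10:
        return "green"   # close and aligned
    if distance_cm < 30:
        return "yellow"  # nearby but not aligned
    return "red"         # too far apart to interact

print(brick_color(5, True))    # close, facing each other
print(brick_color(20, False))  # nearby
print(brick_color(50, False))  # far apart
```

The actual bricks presumably derive distance and orientation from their magnetic-field and accelerometer sensing rather than from an explicit distance value, but the decision logic would take a similar shape.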
[[CES 2026]] NVIDIA CEO unveils Vera Rubin, full-stack AI platform LAS VEGAS, January 05 (AJP) - Vera Rubin - the next bar-raising AI chip platform from NVIDIA, set for release in the second half of 2026 - is designed not as a single product, but as a full-stack platform spanning AI training, inference and deployment across large-scale systems, the company said Monday (local time). Unveiling the architecture, Jensen Huang, NVIDIA’s founder and chief executive, said Vera Rubin represents a shift toward platform-level computing aimed at powering end-to-end AI workloads, from model development to real-world deployment at scale. Clad in his signature leather jacket, Huang introduced the newest product at a press conference held at the BlueLive Theater inside the Fontainebleau Hotel in Las Vegas, ahead of the opening of CES 2026. Introducing the system as "One Platform for Every AI," Huang positioned Vera Rubin as a unified foundation designed to support AI training, inference and deployment at scale, rather than as a single chip or product line. Opening his keynote, Huang placed the launch within a broader cycle of technological change. "Every 10 to 15 years, the computer industry resets," he said, pointing to past shifts from PCs to the internet, and from cloud computing to mobile platforms. This time, he argued, the transition is more disruptive. "Two platform shifts are happening at once. Applications are being built on AI, and how we build software has fundamentally changed." Huang stressed that AI is no longer simply another layer in computing. Instead, it is reshaping the entire stack. "You no longer program software. You train it," he said. "You do not run it on CPUs. You run it on GPUs." Unlike traditional applications that rely on precompiled logic, AI systems now generate outputs dynamically, producing tokens and pixels in real time based on context and reasoning. That shift, Huang said, is driving an unprecedented surge in demand for computing power. 
As models grow larger and reasoning becomes central to performance, both training and inference workloads have expanded sharply. "Models are growing by an order of magnitude every year," he said, noting that test-time scaling has turned inference into a thinking process rather than a single response. "All of this is a computing problem. The faster you compute, the faster you reach the next frontier." Vera Rubin is NVIDIA’s response to that challenge. Named after astronomer Vera Florence Cooper Rubin, the platform is built through what NVIDIA calls extreme codesign, integrating six newly developed chips — including the Vera CPU and Rubin GPU — into a single system. According to NVIDIA, this approach enables large mixture-of-experts models to be trained with roughly 4x fewer GPUs and reduces inference token costs by up to 10x compared with the Blackwell platform. Huang highlighted the platform’s rack-scale architecture as a key departure from earlier designs. The Vera Rubin NVL72 system connects 72 GPUs using sixth-generation NVLink technology, creating internal bandwidth at a scale rarely seen in data centers. "The rack provides more bandwidth than the entire internet," Huang said, underscoring how data movement has become a limiting factor for modern AI systems. Memory was another issue Huang addressed directly. As AI models move toward multi-step reasoning and longer interactions, the amount of context they must retain has outgrown on-chip memory. Vera Rubin introduces a new inference context memory platform powered by BlueField-4 processors, allowing large volumes of key-value cache data to be stored and shared locally. Huang said the goal is to reduce network congestion while maintaining consistent inference performance at scale. Energy efficiency and system reliability were also emphasized. 
Despite significantly higher compute density, Huang said Vera Rubin systems are cooled using hot water at about 45 degrees Celsius, eliminating the need for energy-intensive chillers. The platform also introduces rack-scale confidential computing, encrypting data across CPUs, GPUs and interconnects to protect proprietary models during training and inference. Throughout the keynote, Huang repeatedly framed NVIDIA’s role as extending beyond chip design. "AI is a full stack," he said. "We are reinventing everything, from chips to infrastructure to models to applications." He added that this approach is intended to support what he described as the next phase of AI deployment, built around large-scale AI factories operated by cloud providers, enterprises and AI labs. NVIDIA said Rubin-based products will be rolled out through partners beginning in the second half of 2026, with cloud providers, AI labs and enterprise customers expected to be among the early adopters. 2026-01-06 11:23:39 -
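The article's point that multi-step reasoning outgrows on-chip memory follows from how transformer inference accumulates a key-value (KV) cache: every generated token adds an entry for every layer and attention head. A back-of-the-envelope sketch of the standard KV-cache size formula, using hypothetical model dimensions that are not Rubin's actual specifications:

```python
def kv_cache_bytes(seq_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Per-sequence KV-cache size: keys and values (factor of 2) are
    stored for every layer, KV head and token position."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative large model: 80 layers, 8 KV heads, head dim 128, FP16 values,
# holding a 128,000-token context.
gib = kv_cache_bytes(seq_len=128_000, n_layers=80, n_kv_heads=8,
                     head_dim=128) / 2**30
print(f"{gib:.1f} GiB per 128k-token sequence")  # prints "39.1 GiB per 128k-token sequence"
```

Tens of gigabytes per long-running conversation quickly exceeds what fits beside the model weights on a GPU, which is the problem the BlueField-4-based context memory tier described above is meant to absorb.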
[[CES 2026]] NVIDIA CEO Jensen Huang delivers keynote speech LAS VEGAS, January 05 (AJP) - NVIDIA CEO Jensen Huang delivered the keynote address during an NVIDIA press conference at the Fontainebleau Hotel's BlueLive Theater in Las Vegas on Monday (local time). 2026-01-06 11:09:34 -
Excitement builds ahead of BTS' full-group comeback SEOUL, January 6 (AJP) - Excitement is building for K-pop juggernaut BTS' full-group comeback in late March, with pre-events and other promotional activities already underway in Seoul. Promotional displays for the septet's fifth album, set for release on March 20, covered the staircases to the main hall of the Sejong Center for the Performing Arts near Gwanghwamun in central Seoul on Monday. In a press release, their management agency Big Hit Music said, "BTS began in Seoul and have since expanded their presence globally. As they reunite as a full group after a long time, we arranged some offline events in the heart of Seoul, where their cultural roots are." The agency also plans to set up similar outdoor promotional installations and activities in major cities around the world including London, New York, and Tokyo. After completing their mandatory military service, the seven members of BTS will return with a new album after nearly four years of hiatus and embark on a large-scale world tour to promote the album, which contains 14 tracks. 2026-01-06 10:50:22 -
Samsung breaks ground on $475 million low-carbon ammonia plant in US SEOUL, January 06 (AJP) - Samsung's engineering unit Samsung E&A has begun construction of a low-carbon ammonia plant in the United States under the Wabash project. The firm said on Tuesday that it held a groundbreaking ceremony the previous day for the U.S. Wabash Low-Carbon Ammonia Project at the Hay-Adams hotel in Washington. The company signed an engineering, procurement and fabrication contract with Wabash Valley Resources in October valued at about 680 billion won ($475 million), and is targeting completion of the plant in 2029. Around 70 people attended the event, including South Korea’s Minister of Land, Infrastructure and Transport Kim Yun-deok, Samsung E&A President Namgung Hong, U.S. Deputy Secretary of Energy James P. Danly, and Simon Greenshields, chairman of Wabash Valley Resources. The facility will be built in the West Terre Haute area of Indiana and is designed to produce 500,000 tons of ammonia annually while capturing about 1.67 million tons of carbon dioxide each year. Samsung E&A described the project as a national-level initiative supported by a fund involving the U.S. Department of Energy and South Korea’s Ministry of Land, Infrastructure and Transport, as well as the Ministry of Climate, Energy and Environment. Samsung E&A said it plans to apply its ammonia-plant experience and advanced technologies, including digital transformation, artificial intelligence, automation and modular construction. It will also work closely with the project owner and technology licensor Honeywell UOP. 2026-01-06 10:46:41 -
[[CES 2026]] South Korea's Doosan Bobcat showcases voice-controlled machinery LAS VEGAS, January 06 (AJP) - South Korean construction equipment maker Doosan Bobcat unveiled next-generation equipment technologies at CES 2026 on Tuesday, highlighting artificial intelligence-based solutions aimed at improving productivity on jobsites. Vice Chairman Scott Park and Joel Honeyman, executive vice president for global innovation, presented AI-driven systems designed to simplify machine operation, reduce downtime and help equipment adapt to complex work environments. Among the new technologies is what the company described as the compact equipment industry’s first AI-based voice-control system, the “Bobcat Jobsite Companion.” The system allows operators to use voice commands to control more than 50 functions, including equipment settings, engine speed, lighting and radio controls. Doosan Bobcat said the system can also recommend optimal settings based on the task and attachments in use. Built on the company’s proprietary large language model, Jobsite Companion provides real-time responses and runs as an onboard AI model, allowing it to operate even when network connectivity is limited. “Jobsite Companion lowers the barrier for new operators while helping experienced operators work faster and more accurately,” Honeyman said. “It’s not just smart technology — it’s a smart experience that provides expert guidance from the driver’s seat.” The company also introduced “Service.AI,” an integrated support platform for dealers and technicians. The system combines repair manuals, warranty data, diagnostic guides and a database of previous service cases to help shorten repair times and reduce equipment downtime. It can be accessed through text or voice commands. In addition, Doosan Bobcat showcased a modular, fast-charging standard battery pack known as “BSUP,” a concept machine dubbed RogueX3, a collision warning and avoidance system, and next-generation operator displays. 
Doosan Bobcat said the technologies are designed as part of a single, integrated solution ecosystem, with many developed for future commercialization. The company plans to display them at the Doosan Group booth at CES. 2026-01-06 10:15:22 -
OPINION: Massive data breach at Coupang exposes lax security and lack of accountability SEOUL, January 6 (AJP) - Coupang, South Korea's leading e-commerce giant, has offered just 50,000 Korean won (about US$35) in compensation to customers affected by its massive data breach detected late last year. It is a meager amount, considering that sensitive personal information, including home addresses and phone numbers, was exposed. As the breach occurred on a platform widely used to purchase daily necessities such as bottled water, and followed data leaks at telecom companies, many consumers now fear that their information could be stolen again. Similar incidents in the financial and banking sectors have further eroded public trust. Following a large-scale hacking incident at Lotte Card last summer, Shinhan Card also belatedly detected a massive data breach affecting 190,000 users. Making matters worse, Shinhan Card failed for more than three years to detect that one of its employees was involved in the breach, and then waited nearly 20 days to inform affected customers after becoming aware of the leak. This has raised questions about whether companies have strengthened their internal security by learning from previous data breaches at other firms. Upbit, South Korea's largest cryptocurrency exchange, also suffered a hacking incident involving hundreds of billions of won but faced no penalties, since virtual assets are not subject to regulation due to a lack of relevant laws. The industry says hacking methods have become more sophisticated, and it can take up to five years to identify hackers. Experts say these incidents reflect both failed internal security and lax supervision by financial authorities, arguing that government watchdogs such as the Financial Services Commission and the Financial Supervisory Service should also bear responsibility. 
Authorities say individual misconduct is difficult to detect in advance, making thorough preventive measures the only viable safeguard against future breaches. As financial services become more complex, protecting consumer data matters more than ever. Financial firms, as private companies driven by short-term profits, often treat security as an afterthought. For this reason, experts argue that regulators should hold them accountable with significant financial penalties. * This article, published by Aju Business Daily, was translated by AI and edited by AJP. 2026-01-06 10:07:11 -
South Korea to strengthen protection of K-brands in China SEOUL, January 06 (AJP) - South Korea’s intellectual property office said on Tuesday it signed a memorandum of understanding with its Chinese counterpart to deepen cooperation on intellectual property, on the sidelines of a South Korea–China summit held at the Great Hall of the People in Beijing. The agreement, signed between the Korean Intellectual Property Office (KIPO) and China’s National Intellectual Property Administration (CNIPA), updates and expands a similar pact reached in 2021. Under the revised agreement, the two sides will broaden cooperation in areas including the protection of intellectual property rights, the prevention of counterfeit goods, the use of new technologies such as artificial intelligence and big data in patent examinations and analysis, and the promotion of intellectual property transactions, commercialization and finance. Ahead of the signing, KIPO Commissioner Kim Yong-seon met with CNIPA head Shen Changyu for talks on IP policy trends, existing cooperation and priority areas for future collaboration, the KIPO said. The two offices also agreed to jointly respond to bad-faith trademark applications, including cases in which applicants seek to preemptively register trademarks already in use in order to extract economic gains. “This MOU and stronger cooperation to prevent malicious trademark preemption will help protect K-brands more effectively in China,” Kim said. 2026-01-06 10:04:45 -
[[CES 2026]] LG Innotek showcases autonomous driving, EV solutions LAS VEGAS, January 06 (AJP) - South Korea's LG Innotek unveiled future mobility solutions combining autonomous-driving and electric-vehicle technologies at CES 2026, the world’s largest technology trade show, on Tuesday. The components maker gave a preview to South Korean reporters one day before the show opened, setting up a roughly 330-square-meter booth near the entrance of the West Hall at the Las Vegas Convention Center. LG Innotek centered its exhibition on mobility, highlighting next-generation convergence solutions for the “AI-defined vehicle” era. The company displayed a concept mock-up for autonomous driving, moving beyond a parts-only showcase to present integrated solutions combining hardware and software. The mock-up features 16 advanced driver-assistance system components, including cameras, LiDAR and radar. A key highlight was a fusion sensing solution that integrates multiple sensing functions into a single camera module. LG Innotek demonstrated a camera combining heating and active-cleaning functions, with a cleaning module capable of removing moisture or dust from the lens in about one second. The company said the system, combined with its in-house software, improves sensing performance for autonomous driving. The company also showcased an ultra-compact LiDAR developed in collaboration with U.S. sensor firm Aeva. The LiDAR can detect objects at distances of up to 200 meters, addressing limitations of camera-based sensing. Visitors were able to experience the sensing performance from a test seat designed to simulate a vehicle ride. In-cabin technologies were also on display, including a next-generation under-display camera module, which LG Innotek said was shown for the first time at CES. 
Installed behind an instrument panel, the module uses AI-based image restoration to enable facial recognition without obstructing the display, while also supporting dual cabin recording during driving. The company also presented a separate electric-vehicle-focused mock-up, displaying 15 electronic components including an 800-volt wireless battery management system and its “B-Link” solution, which integrates a battery and junction box to reduce weight and improve system integration. “This CES is a place to seek new business opportunities in autonomous driving and EVs,” Chief Executive Officer Moon Hyuk-soo said. 2026-01-06 08:55:55 -
Korean AI firm Upstage releases AI model as open source SEOUL, January 06 (AJP) - South Korean artificial intelligence company Upstage said on Tuesday it has released its in-house large language model, Solar Open 100B, as open-source software. Solar Open is the first output of the Ministry of Science and ICT’s “Independent AI Foundation Model” project, in which Upstage is participating as the lead company. Upstage said it developed the model entirely in-house, overseeing the full process from data construction to training. The company released the model on the global open-source platform Hugging Face and published a technical report detailing its development. Upstage said the 102-billion-parameter model delivers performance comparable to global frontier models. It said Solar Open is about 15 percent the size of China’s DeepSeek R1 but outperformed it in key benchmark evaluations across three languages: Korean, English and Japanese. According to the company, Solar Open scored 110 percent of DeepSeek R1's benchmark results in Korean, 103 percent in English and 106 percent in Japanese. The company said it plans to open part of the dataset through the National Information Society Agency’s AI Hub, describing it as a public resource aimed at strengthening South Korea’s artificial intelligence research ecosystem. “Solar Open is a model Upstage trained independently from the beginning, and it is the most Korean yet also global AI, with a deep understanding of Korea’s emotions and linguistic context,” Chief Executive Kim Sung-hoon said. 2026-01-06 08:44:14
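The "about 15 percent the size" comparison checks out arithmetically if one assumes DeepSeek R1's widely reported 671-billion total parameter count, a figure not stated in the article:

```python
# Quick arithmetic check of the size comparison. DeepSeek R1's 671-billion
# total parameter count is an outside assumption, not from the article.
solar_open_params = 102e9   # Solar Open 100B, per Upstage
deepseek_r1_params = 671e9  # assumed published total for DeepSeek R1

ratio = solar_open_params / deepseek_r1_params
print(f"{ratio:.0%}")  # ~15%
```

Note that R1 is a mixture-of-experts model, so its active parameter count per token is far smaller than its total, which makes raw size ratios only a rough basis for comparison.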
