Photo courtesy of Yonhap News
[Alpha Biz= Paul Lee] Investor sentiment toward South Korea’s memory chip leaders—Samsung Electronics and SK hynix—took a hit as concerns emerged over a new memory optimization technology unveiled by Google.
According to the Korea Exchange on March 26, Samsung Electronics and SK hynix fell 4.71% and 6.23%, respectively, closing at KRW 180,100 and KRW 933,000. Both stocks have declined sharply this month, down 16.81% and 12.06%, respectively. The sell-off was driven by heavy net selling from foreign and institutional investors.
The trigger was Google Research’s newly introduced “TurboQuant” technology, disclosed on March 25 (U.S. time). The algorithm significantly reduces memory usage—particularly the key-value (KV) cache used by large language models (LLMs) during inference—by at least sixfold without compromising accuracy, while potentially boosting inference performance on GPUs such as NVIDIA’s H100 by up to eight times.
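TurboQuant’s internals have not been detailed publicly, so the sketch below is only a back-of-envelope illustration of why compressing the KV cache matters: the cache grows linearly with context length, and quantizing its 16-bit values to a lower precision shrinks it proportionally. All model dimensions here (a LLaMA-2-70B-style configuration with grouped-query attention) are assumptions for illustration, not figures from Google’s disclosure.

```python
# Illustrative KV-cache sizing; TurboQuant's actual method is not public,
# and the model dimensions below are assumed for the example.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem):
    """Total memory for the key and value tensors across all layers."""
    # Factor of 2 covers both the key cache and the value cache.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Assumed 70B-class dims: 80 layers, 8 KV heads (GQA), head_dim 128, 32K context.
fp16 = kv_cache_bytes(80, 8, 128, seq_len=32_768, batch=1, bytes_per_elem=2)

# A ~6x compression (as reported) is roughly equivalent to storing
# each 16-bit value in about 2.7 bits on average.
compressed = fp16 / 6

print(f"fp16 KV cache:   {fp16 / 2**30:.1f} GiB")
print(f"~6x compressed:  {compressed / 2**30:.1f} GiB")
```

Under these assumed dimensions, a single 32K-token context drops from roughly 10 GiB of cache to under 2 GiB, which is the kind of saving that lets more concurrent requests fit on the same GPU memory.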
The development raised concerns that future AI systems may require less memory, potentially weakening demand for memory chips. Matthew Prince, CEO of Cloudflare, described TurboQuant as “Google’s version of DeepSeek,” referring to the Chinese AI startup known for achieving similar performance at lower costs.
Following the news, U.S. memory chipmaker Micron Technology fell 3.4% in New York trading, while Korean chipmakers also came under pressure. The decline was exacerbated by profit-taking after a strong rally driven by booming AI-driven memory demand.
However, analysts largely view the market reaction as overdone. Kiwoom Securities analyst Han Ji-young noted that the TurboQuant issue appears to have served as a catalyst for profit-taking amid lingering fatigue from the recent memory stock rally.
Market participants are also invoking the Jevons paradox—the observation that greater efficiency in using a resource lowers its effective cost and ultimately increases total consumption of that resource. Analysts argue that more efficient AI models tend to expand overall computing demand rather than reduce it.
Samsung Securities analyst Lee Jong-wook explained that ongoing efforts to optimize semiconductor usage—accelerated since developments like DeepSeek—are lowering AI costs and driving broader adoption. “Optimized models are not reducing semiconductor demand, but enabling higher-performance AI services using the same resources,” he said.
Similarly, KB Securities analyst Kim Il-hyuk noted that earlier fears—such as claims that DeepSeek R1 reduced LLM training costs to one-twentieth—quickly subsided as AI market growth accelerated. “As inference costs decline, the emergence of killer applications in AI agents will further drive market expansion,” he added.
AlphaBIZ Paul Lee (hoondork1977@alphabiz.co.kr)