{"id":2865,"date":"2026-03-19T15:13:36","date_gmt":"2026-03-19T06:13:36","guid":{"rendered":"https:\/\/news.sktelecom.com\/en\/?p=2865"},"modified":"2026-03-19T15:13:36","modified_gmt":"2026-03-19T06:13:36","slug":"inside-t-omnimodal-foundation-model-team-on-advancing-to-phase-2-confidence-built-on-519-billion-parameters","status":"publish","type":"post","link":"https:\/\/news.sktelecom.com\/en\/2865","title":{"rendered":"[Inside T] Omnimodal Foundation Model Team on Advancing to Phase 2: Confidence Built on 519 Billion Parameters"},"content":{"rendered":"<p>519 billion parameters. The number alone may be hard to grasp, but within it lies South Korea\u2019s endeavor towards AI sovereignty. In early 2026, SK Telecom passed the first phase evaluation of the nation\u2019s flagship Sovereign AI Foundation Model Project* and advanced to Phase 2.<\/p>\n<p class=\"img-cap\">* Sovereign AI Foundation Model Project: A national AI infrastructure initiative led by the Ministry of Science and ICT to develop a globally competitive, domestically built AI foundation model<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2866\" src=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-GPUaaS.png\" alt=\"\" width=\"1060\" height=\"522\" srcset=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-GPUaaS.png 1060w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-GPUaaS-368x181.png 368w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-GPUaaS-586x289.png 586w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-GPUaaS-768x378.png 768w\" sizes=\"auto, (max-width: 1060px) 100vw, 1060px\" \/><\/p>\n<p class=\"img-cap\">SKT Consortium Participants<\/p>\n<p class=\"img-cap\">* SKT Consortium: A consortium comprising eight organizations including Krafton, 42dot, Rebellions, Liner, SelectStar, Seoul National University, and 
KAIST that developed the \u201cA.X K1\u201d model. The consortium tied for first place on the NIA benchmark in Phase 1 and advanced to Phase 2.<\/p>\n<p>At the center of this effort are Lee Yu-jin, Cheon Sung-jun, and Yang Hyun-ho of SKT\u2019s Omnimodal Foundation Model (OFM) Team, who shared what it means to build a \u201cnational flagship AI\u201d and the journey behind it.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2871\" src=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_01.png\" alt=\"(From left) Lee Yu-jin, Yang Hyun-ho, Cheon Sung-jun of the Omnimodal Foundation Model Team\" width=\"1060\" height=\"685\" srcset=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_01.png 1060w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_01-368x238.png 368w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_01-586x379.png 586w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_01-768x496.png 768w\" sizes=\"auto, (max-width: 1060px) 100vw, 1060px\" \/><\/p>\n<p class=\"img-cap\">(From left) Lee Yu-jin, Yang Hyun-ho, Cheon Sung-jun of the Omnimodal Foundation Model Team<\/p>\n<h2 class=\"cnt-tit\">Massive Scale, a Bold Vision for AI as \u201cSocial Infrastructure\u201d<\/h2>\n<p><strong>Q. What led you to develop a 519-billion-parameter large-scale model in-house? It must have required a significant investment. <\/strong><br \/>\n<strong>A. Cheon Sung-jun:<\/strong> We wanted this large-scale model to be more than a short-term revenue driver. We saw it as a form of social overhead capital (SOC) that contributes to Korea\u2019s entire AI ecosystem. 
Just as building roads enables countless vehicles to travel and create new value, we wanted our AI model to become a solid foundation on which companies and developers can innovate. It was a very important decision in SKT\u2019s transition toward becoming an AI company.<\/p>\n<p><em><i>\u201cThe larger the model, the better it can learn rare and complex knowledge, and the less it suffers from hallucinations. This hyperscale model is meaningful in and of itself, but it will also serve as \u2018social overhead capital (SOC)\u2019 that enables us to efficiently develop smaller specialized models in the future.\u201d \u2014 Cheon Sung-jun<\/em><\/i><\/p>\n<p><strong>Q. Recently, there\u2019s a trend toward smaller but more efficient models. Why did you choose such a large scale of 519 billion parameters?<\/strong><br \/>\n<strong>A. Yang Hyun-ho:<\/strong> We based our decision on the scaling law*. Within the limits of a set budget, time, and computing resources comparable to those of our competitors, we calculated the most efficient configuration and concluded that a structure with 519B total parameters and 33B active parameters was optimal. We also believed that a strong general-purpose model would deliver superior performance when later specialized for specific domains. Building the most broadly capable model \u2013 that is our differentiator.<\/p>\n<p class=\"img-cap\">* Scaling Law: An empirical principle that AI model performance improves predictably based on the relationship between the number of parameters in a model, training data, and computing resources.<\/p>\n<h2 class=\"cnt-tit\">MoE and Think Fusion: Achieving Both Efficiency and Performance<\/h2>\n<p><strong>Q. Could you explain the MoE (Mixture of Experts) architecture and Think Fusion technology?<\/strong><br \/>\n<strong>A. Yang Hyun-ho:<\/strong> MoE is a \u201cmixture of experts\u201d model that does not use the entire large-scale model at once. 
Instead, it selectively activates the expert networks most relevant to a given query. This allows us to maximize cost efficiency by maintaining computational costs comparable to that of a 33-billion-parameter model despite having 519 billion parameters overall. Although the engineering is complex, we are able to obtain high efficiency by only using the active parameters while retaining knowledge at the 519B scale.<\/p>\n<p><strong>A. Cheon Sung-jun:<\/strong> Think Fusion is not an architecture but rather a model that can be toggled to provide answers with or without a Thinking trace \u2013 that is, the reasoning process \u2013 depending on the option selected. MoE models are most efficient when they handle multiple requests simultaneously. We made it so that we can run a single model and choose the functionality per request instead of running two separate models. Otherwise, maintaining both a \u201cthinking model\u201d and a \u201cgeneral model\u201d would require running two large-scale models or constantly switching models, which would be inefficient.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2869\" src=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_04.png\" alt=\"(From left) Yang Hyun-ho, Cheon Sung-jun of the Omnimodal Foundation Model Team\" width=\"1060\" height=\"685\" srcset=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_04.png 1060w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_04-368x238.png 368w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_04-586x379.png 586w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_04-768x496.png 768w\" sizes=\"auto, (max-width: 1060px) 100vw, 1060px\" \/><\/p>\n<p 
class=\"img-cap\">(From left) Yang Hyun-ho, Cheon Sung-jun of the Omnimodal Foundation Model Team<\/p>\n<h2 class=\"cnt-tit\">A Commitment to Optimizing Korean Language Modeling<\/h2>\n<p><strong>Q. What differentiates this model in terms of its Korean-language capabilities?<\/strong><br \/>\n<strong>A. Yang Hyun-ho:<\/strong> We used approximately 30% Korean data, a far higher proportion than in global models, and built a separate processor to select Korean sentences requiring rich context. Training on context-dense sentences improves the model\u2019s ability to understand nuance.<\/p>\n<p><strong>A. Cheon Sung-jun:<\/strong> Our tokenizer* is highly efficient for Korean. The same sentence can be represented with fewer tokens, which gives us advantages in speed and power efficiency compared to global models. Internally, we believe the model shows strong performance in Korean mathematics and coding tasks.<\/p>\n<p class=\"img-cap\">* Tokenizer: A preprocessing tool that splits text into tokens, the smallest units an AI model can process. The more efficient the tokenizer, the fewer tokens are needed to express the same sentence.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2868\" src=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_05.png\" alt=\"Demonstration of A. note and A. 
phone services\" width=\"1060\" height=\"685\" srcset=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_05.png 1060w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_05-368x238.png 368w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_05-586x379.png 586w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_05-768x496.png 768w\" sizes=\"auto, (max-width: 1060px) 100vw, 1060px\" \/><\/p>\n<p class=\"img-cap\">Demonstration of A. note and A. phone services<\/p>\n<p><em><i>\u201cOur superior Korean language capability was possible because we have been developing LLMs since 2018 and operating services like A. (pronounced \u201cA dot\u201d). This allowed us to accumulate high-quality Korean data and expertise. The key is not just quantity, but quality and experience.\u201d \u2014 Lee Yu-jin<\/em><\/i><\/p>\n<h2 class=\"cnt-tit\">Fairness and Safety: Non-Negotiable Principles<\/h2>\n<p><strong>Q. What principles did you insist on maintaining throughout the project?<\/strong><br \/>\n<strong>A. Lee Yu-jin:<\/strong> Our principle was to avoid benchmark contamination*. To objectively evaluate the performance of the AI model, it must not be exposed to benchmark data during training. Including benchmark answers would raise scores easily but would undermine true generalization* capability, so we strictly avoided it.<\/p>\n<p class=\"img-cap\">* Benchmark Contamination: When the benchmark data used for evaluation leaks into training data, allowing models to score high through memorization rather than genuine capability.<\/p>\n<p class=\"img-cap\">* Generalization: The ability of a model to perform accurately on new data that was not used in the training process.<\/p>\n<p><strong>A. 
Cheon Sung-jun:<\/strong> Benchmarks are like college entrance exam questions. Since benchmark datasets are publicly available, they may find their way into training data, whether intentionally or not. That can boost evaluation scores, but it also makes it impossible to measure a model\u2019s true capability. We adhered to the principle of competing fairly without looking at the answer sheet.<\/p>\n<p>We also put significant effort into ensuring safety by minimizing harmful or biased outputs, given the impact AI can have on society. We included safety data in the training so the model refuses dangerous requests, such as instructions for making bombs. We wanted to prove our technical excellence through honest and principled methods, and we take even greater pride in the fact that we held to those principles.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2867\" src=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_06.png\" alt=\"(From left) Cheon Sung-jun, Lee Yu-jin of the Omnimodal Foundation Model Team\" width=\"1060\" height=\"685\" srcset=\"https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_06.png 1060w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_06-368x238.png 368w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_06-586x379.png 586w, https:\/\/news-static.sktelecom.com\/wp-content\/uploads\/2026\/03\/Inside-T-Omnimodal-Foundation-Model-Team_06-768x496.png 768w\" sizes=\"auto, (max-width: 1060px) 100vw, 1060px\" \/><\/p>\n<p class=\"img-cap\">(From left) Cheon Sung-jun, Lee Yu-jin of the Omnimodal Foundation Model Team<\/p>\n<p><strong>A. Yang Hyun-ho:<\/strong> We designed the model with real-world use in mind. Since it will eventually be deployed in services like A. 
and across SK Group affiliates, we aimed to build an AI that is helpful in everyday life and responds kindly to people, not just one that excels at solving math problems.<\/p>\n<h2 class=\"cnt-tit\">The Journey from First to Best<\/h2>\n<p><strong>Q. What was the most significant achievement or lesson from this project?<\/strong><br \/>\n<strong>A. Cheon Sung-jun:<\/strong> It felt like a maturation process for Korea\u2019s AI ecosystem. We realized the importance of transparent disclosure to prove technological capability. We saw that the Korean government and elite teams are working to advance the ecosystem, and we also experienced growth through that process.<\/p>\n<p><strong>A. Lee Yu-jin:<\/strong> The biggest achievement was that SK Telecom was recognized not just as a telecommunications operator but as a leading AI company. The title \u201cTop 3 National Flagship AI\u201d says a lot. It was highly motivating for all team members.<\/p>\n<p><em><i>\u201cWe proved the barrier is not insurmountable. We gained confidence that we can catch up with models from China and the U.S. through sustained effort. In some benchmarks, we even achieved better results than DeepSeek.\u201d \u2014 Yang Hyun-ho<\/em><\/i><\/p>\n<p><strong>Q. Now that the team\u2019s advancement to Phase 2 has been confirmed, what are your plans?<\/strong><br \/>\n<strong>A. Cheon Sung-jun:<\/strong> In Phase 1, our model supported only Korean and English, but in Phase 2 we will support more languages. We believe this will facilitate broader adoption and allow for more diverse evaluations.<\/p>\n<p><strong>A. Lee Yu-jin:<\/strong> We will also strengthen agent capabilities. Multimodality will be introduced sequentially, starting with images and later expanding to voice and video processing in the second half of the year.<\/p>\n<p><strong>A. Yang Hyun-ho:<\/strong> The trial-and-error we experienced during Phase 1 preparation will not be repeated. 
Our internal technical pipeline has been significantly improved, so this phase should be less hectic.<\/p>\n<p>\u00a0<\/p>\n<p>Korea\u2019s AI journey continues. After successfully completing the first phase evaluation, the OFM Team is now full steam ahead in preparations for the next phase. The next chapter of the OFM Team\u2019s journey is eagerly anticipated, as all eyes are on what kind of game changer the national flagship AI becomes as it enters the global stage.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>519 billion parameters. The number alone may be hard to grasp, but within it lies South Korea\u2019s endeavor towards AI sovereignty. In early 2026, SK Telecom passed the first phase evaluation of the nation\u2019s flagship Sovereign AI Foundation Model Project*<\/p>\n","protected":false},"author":3,"featured_media":2875,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[24,526,528,497,525,527],"class_list":["post-2865","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-insight","tag-ai","tag-national-flagship-ai","tag-omnimodal-foundation-model","tag-skt-consortium","tag-skt-elite-team","tag-sovereign-ai-foundation"],"acf":[],"_links":{"self":[{"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/posts\/2865","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/comments?post=2865"}],"version-history":[{"count":4,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/posts\/2865\/revisions"}],"predecessor-version":[{"id":2878,"href":"https:\/\/news.sktelecom.com\/en\/wp-js
on\/wp\/v2\/posts\/2865\/revisions\/2878"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/media\/2875"}],"wp:attachment":[{"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/media?parent=2865"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/categories?post=2865"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/news.sktelecom.com\/en\/wp-json\/wp\/v2\/tags?post=2865"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}