SK Telecom's Haein GPUaaS Hits Six-Month Mark, Advances Carrier-Grade AI Infrastructure
- SK Telecom’s Haein GPUaaS has completed six months of operation, shifting the carrier’s AI infrastructure from pilot to stable production.
- Its Petasus stack offers end-to-end provisioning, reducing deployment time, operational overhead, and total cost for AI workloads.
- Haein virtualizes xPU resources for secure multi-tenancy, fast logical isolation, and predictable concurrent large-model training.
Haein cluster marks six-month turning point for SK Telecom’s AI infrastructure
SK Telecom Co. is marking six months of operation for Haein, its GPU cluster deployed as a GPU-as-a-Service (GPUaaS) platform to support large-scale AI workloads and advanced model development. Launched on Aug. 1, 2025, the cluster reached the milestone on Feb. 4, 2026. Haein is credited with moving the carrier’s AI infrastructure from pilot deployments into stable production, enabling faster experimentation and more efficient model training across internal teams and external partners.
The platform’s integrated software stack — Petasus AI Cloud, AI Cloud Manager and GPUaaS Service Orchestrator — provides an end-to-end environment that lets customers transition from infrastructure provisioning to model development and production without separate toolchains. SK Telecom reports that the stack streamlines configuration and validation, reduces deployment lead times and lowers operational overhead, which in turn helps shrink total cost of ownership for customers running compute-heavy AI workloads.
Operationally, Haein emphasizes secure, high-performance multi-tenancy. Petasus AI Cloud virtualizes heterogeneous xPU environments and uses Dynamic Allocation to abstract transport protocols such as NVLink, InfiniBand and RoCEv2, allowing large GPU pools to be flexibly partitioned and assigned. SK Telecom highlights that this virtualization enables full logical isolation of both GPUs and the network fabric at the tenant level in under an hour, versus the days to weeks required to physically isolate equivalent resources.
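To make the idea of tenant-level logical partitioning concrete, the following is a minimal sketch of how a shared GPU pool can be carved into disjoint per-tenant partitions in software rather than by recabling hardware. All class and field names here (`GpuPool`, `Tenant`, `allocate`) are hypothetical illustrations, not SK Telecom's Petasus API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of tenant-level logical GPU partitioning.
# Names are illustrative; this is not Petasus AI Cloud's actual interface.

@dataclass
class Tenant:
    name: str
    fabric: str                                  # e.g. "NVLink", "InfiniBand", "RoCEv2"
    gpu_ids: list = field(default_factory=list)  # GPUs owned by this tenant

class GpuPool:
    """A shared pool of GPUs partitioned logically per tenant."""

    def __init__(self, total_gpus: int):
        self.free = list(range(total_gpus))  # unassigned GPU indices
        self.tenants = {}

    def allocate(self, name: str, count: int, fabric: str) -> Tenant:
        """Carve out `count` GPUs as a logically isolated partition."""
        if count > len(self.free):
            raise RuntimeError("insufficient free GPUs")
        tenant = Tenant(name, fabric, [self.free.pop() for _ in range(count)])
        self.tenants[name] = tenant
        return tenant

    def release(self, name: str) -> None:
        """Return a tenant's GPUs to the shared pool."""
        tenant = self.tenants.pop(name)
        self.free.extend(tenant.gpu_ids)

pool = GpuPool(total_gpus=16)
a = pool.allocate("team-a", 8, fabric="NVLink")
b = pool.allocate("team-b", 4, fabric="RoCEv2")
# The two tenants see disjoint GPU sets; reassignment is a bookkeeping
# operation, which is why logical isolation can complete in minutes
# while physical isolation takes days to weeks.
assert set(a.gpu_ids).isdisjoint(set(b.gpu_ids))
```

In a real system the bookkeeping would also program the network fabric (SR-IOV functions, RoCEv2 VLANs, InfiniBand partitions) so that isolation covers the interconnect, not just the devices.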
Accelerating model development across teams
AI Cloud Manager focuses on intelligent job scheduling and multi-tenant user environments so multiple teams can train massive models concurrently on shared clusters with predictable performance. The orchestration layer provides real-time visibility into GPU, network and storage resources, producing actionable insights that stabilize job execution and improve utilization.
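One common way to give concurrent teams predictable performance on a shared cluster is fair-share admission: queued jobs from the tenant currently holding the fewest GPUs are admitted first. The sketch below shows that policy in its simplest form; the function name and job structure are hypothetical and are not a description of AI Cloud Manager's actual scheduler.

```python
# Hypothetical fair-share admission sketch; illustrative only,
# not AI Cloud Manager's actual scheduling policy.

def schedule(jobs, free_gpus):
    """Admit queued jobs, preferring the tenant that currently holds
    the fewest GPUs, so no single team monopolizes the cluster."""
    usage = {job["tenant"]: 0 for job in jobs}  # GPUs held per tenant
    pending = list(jobs)
    admitted = []
    while pending:
        # Re-rank the queue by each tenant's current GPU usage.
        pending.sort(key=lambda job: usage[job["tenant"]])
        job = pending[0]
        if job["gpus"] > free_gpus:
            break  # head-of-queue job cannot fit; wait for releases
        pending.pop(0)
        free_gpus -= job["gpus"]
        usage[job["tenant"]] += job["gpus"]
        admitted.append(job["name"])
    return admitted, free_gpus

jobs = [{"tenant": "a", "name": "a1", "gpus": 8},
        {"tenant": "a", "name": "a2", "gpus": 8},
        {"tenant": "b", "name": "b1", "gpus": 4}]
admitted, free = schedule(jobs, free_gpus=16)
# "b1" is admitted ahead of "a2" because tenant "b" holds fewer GPUs.
```

Production schedulers layer preemption, gang scheduling for multi-GPU training jobs, and topology awareness on top of a core like this, informed by exactly the kind of real-time GPU, network and storage telemetry the orchestration layer exposes.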
Ecosystem and industry implications
The Haein deployment positions SK Telecom as a regional operator of scalable GPUaaS, addressing demand from enterprises and research groups for accessible, production-grade GPU infrastructure. By combining virtualization and orchestration, SK Telecom is aiming to turn carrier-grade datacenter practices into a foundation for collaborative AI development across industry partners.