In a bid to address the memory bottleneck in massively parallel AI systems, Samsung, the world leader in advanced memory technology, and NAVER Corporation, a global internet company with top-notch AI technology, today announced a wide-reaching collaboration to develop semiconductor solutions tailored for hyper-scale artificial intelligence (AI) models.
The two tech giants plan to improve the speed and power efficiency of large-scale AI models by combining Samsung's expertise in semiconductor design and manufacturing with NAVER's proven AI capabilities.
Developing Hyperscale AI-Optimized Semiconductor Solutions
Building on Samsung's Compute Express Link (CXL), Processing-in-Memory (PIM), Processing-near-Memory (PNM), and computational storage technologies, the companies aim to combine their hardware and software resources to significantly accelerate the handling of large-scale AI workloads.
The amount of data to be processed has grown exponentially as a result of recent hyperscale AI advancements. Because current computing systems face performance and efficiency limits that make these computational demands difficult to meet, the need for new AI-optimized semiconductor technologies continues to grow.
Such solutions require a deep fusion of the semiconductor and AI disciplines. To develop solutions that significantly improve the performance and power efficiency of large-scale AI, Samsung is combining its expertise in semiconductor design and manufacturing with NAVER's experience in developing and verifying AI algorithms and AI-driven services.
To support high-speed data processing in AI applications, Samsung has long introduced memory and storage products, ranging from computational storage (SmartSSD) to PIM-enabled high bandwidth memory (HBM-PIM) to next-generation memory that supports the Compute Express Link (CXL) interface. To advance large-scale AI systems, Samsung will now work with NAVER to optimize these memory technologies.
NAVER will continue to improve its compression algorithms and HyperCLOVA, a hyper-scale language model with over 200 billion parameters, to produce a more streamlined model with significantly higher computation efficiency.
Creating Cutting-Edge Semiconductor Technologies
Speaking on the partnership, Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics, pointed out that they will work with NAVER to create cutting-edge semiconductor technologies to address the memory bottleneck in massively parallel AI systems.
“With tailored solutions that reflect the most pressing needs of AI service providers and users, we are committed to broadening our market-leading memory lineup, including computational storage, PIM and more, to fully accommodate the ever-increasing scale of data,” Jinman Han added.
For his part, Suk Geun Chung, Head of NAVER CLOVA CIC, said that through this strategic alliance, NAVER anticipates expanding its AI capabilities and strengthening its competitive edge in the AI industry. In his words:
“Combining our acquired knowledge and know-how from HyperCLOVA with Samsung’s semiconductor manufacturing prowess, we believe we can create an entirely new class of solutions that can better tackle the challenges of today’s AI technologies.”
“We look forward to broadening our AI capabilities and bolstering our edge in AI competitiveness through this strategic partnership.”