Presentation

How To Leverage CXL Memory Pooling and Sharing for AI and HPC Workloads
Description

As AI and HPC workloads scale, traditional architectures face critical memory-capacity and bandwidth limitations. Compute Express Link® (CXL®) offers a transformative solution, enabling low-latency, coherent communication across CPUs, GPUs, and memory devices. This session will explore how CXL 2.0 and 3.x support memory disaggregation and composable infrastructure to unlock scalable, flexible deployment of large models and simulations. Attendees will learn how memory pooling and sharing reduce overprovisioning, improve utilization, and lower costs. We invite system architects, researchers, hardware developers, and operators to discuss real-world CXL adoption, implementation challenges, and opportunities to shape next-generation AI and HPC systems.