Presentation

Mojo: Python-Like MLIR-Based GPU Portable Science Kernels
Description

This work investigates Mojo, a new MLIR-based language that combines Python-like syntax with portable, low-level GPU programming capabilities. We compare the performance of portable Mojo GPU kernels against vendor-specific C++ implementations in NVIDIA CUDA and AMD HIP on four representative scientific workloads: (1) BabelStream (memory-bound); (2) a seven-point stencil (memory-bound); (3) miniBUDE (compute-bound); and (4) Hartree-Fock (compute-bound with atomic operations), evaluated on NVIDIA H100 and AMD MI300A GPUs. Results show that Mojo can match CUDA and HIP performance for memory-bound kernels, though gaps remain for atomic operations and certain compute-bound cases. This poster will present a general overview of the language, our benchmarking methodology, comparative results, the use of vendor profiling tools, and observations on Mojo’s potential to close the gap between high performance and developer productivity in scientific GPU programming. Our contribution is the first systematic evaluation of Mojo for HPC workloads, highlighting both its promise and current limitations.