Presentation

ChatHPC: Building the Foundations for a Productive and Trustworthy AI-Assisted HPC Ecosystem
Description
ChatHPC democratizes LLMs for HPC by providing the ecosystem and the state of the practice for the HPC community to rapidly create AI-based capabilities for critical HPC components on modest computational resources. Our divide-and-conquer approach creates a collection of reliable, optimized AI assistants, which can be merged, and is based on cost-effective, fast, expert-supervised fine-tuning of Code Llama. We target major components of the HPC software stack, including programming models, runtimes, I/O, tooling, and math libraries. ChatHPC provides a productive HPC ecosystem by using AI assistance to boost tasks related to portability, parallelization, optimization, scalability, and instrumentation. With small data sets, ChatX assistants create capabilities absent from the 7B-parameter CodeLlama-Base model and produce high-quality software with trustworthiness up to 90% higher than the 1.8T-parameter ChatGPT-4o model on critical programming tasks in the HPC software stack.