Presentation
Leveraging and Evaluating LLMs for Scientific Computing
Description
Large language models (LLMs) are progressing at an impressive pace: they are becoming capable of solving complex problems, creating the opportunity to apply their capabilities to scientific computing. Despite this progress, even the most sophisticated models can struggle with simple reasoning tasks and make mistakes, so their outputs require careful verification. This tutorial focuses on two corresponding aspects: (1) leveraging LLMs to assist and advance scientific computing code translation, and (2) best practices for evaluating and comparing LLMs in a scientific computing context. Designed for students, researchers, and engineers at beginner and intermediate levels, this half-day tutorial combines presentations and demos. Attendees learn the fundamentals of LLM design, development, and use cases for scientific computing. The tutorial then deep-dives into one key topic: code translation (Fortran to C++) with the CodeScribe tool. Attendees also learn complementary methods for testing and evaluating LLM responses rigorously. By the end of the tutorial, attendees will have solid foundations, knowledge, and practical experience for leveraging and evaluating LLMs in scientific computing, and for turning theoretical insights into actionable solutions.
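The verification theme above can be illustrated with a minimal sketch: one common way to check an LLM-produced code translation is to run both the original routine and the translated routine on the same test inputs and compare their outputs within a numerical tolerance. All function names below are hypothetical stand-ins, not part of the CodeScribe tool.

```python
import math

# Hypothetical example: the "reference" stands in for the legacy routine,
# the "translated" version stands in for an LLM-generated port.

def reference_norm(values):
    """Stand-in for the original (e.g., legacy Fortran) routine."""
    return math.sqrt(sum(v * v for v in values))

def translated_norm(values):
    """Stand-in for the LLM-translated routine under test."""
    total = 0.0
    for v in values:
        total += v * v
    return total ** 0.5

def outputs_agree(ref, new, cases, rel_tol=1e-12):
    """Compare both routines on every test case within a relative tolerance."""
    return all(math.isclose(ref(c), new(c), rel_tol=rel_tol) for c in cases)

cases = [[3.0, 4.0], [1.0, 1.0, 1.0], [0.0], [float(i) for i in range(1, 100)]]
print(outputs_agree(reference_norm, translated_norm, cases))  # True
```

In practice the comparison would run compiled Fortran and C++ binaries on shared input files rather than Python functions, but the structure, shared test cases plus a tolerance-based output check, is the same.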
Event Type
Tutorial
Time
Monday, 17 November 2025, 8:30am - 12:00pm CST
Location
125
Livestreamed
Recorded