Presentation
Enhancing ChatPORT with CUDA-to-SYCL Kernel Translation Capability
Description
This work enhances the capabilities of code LLMs in CUDA-to-SYCL kernel translation through parameter-efficient fine-tuning. The resulting fine-tuned LLM, called ChatPORT, is an effort to provide high-fidelity translations from one programming model to another. We describe the preparation of datasets from heterogeneous computing benchmarks for model fine-tuning and testing, the parameter-efficient fine-tuning of 19 open-source code models ranging in size from 0.5 to 34 billion parameters, and the evaluation of the correctness rates of the SYCL kernels produced by the fine-tuned models. The experimental results show that most code models fail to translate CUDA codes to SYCL correctly. However, fine-tuning these models on a small set of CUDA and SYCL kernels can enhance their kernel translation capabilities. Depending on model size, the correctness rate ranges from 19.9% to 81.7% on a test dataset of 62 CUDA kernels.
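To illustrate the kind of translation this task involves, here is a minimal sketch (not drawn from the paper's datasets): a CUDA vector-addition kernel and one possible hand-written SYCL 2020 equivalent. The SYCL version assumes the pointers refer to USM (unified shared memory) allocations; the CUDA thread index maps to the SYCL work-item id, and the launch configuration becomes a `range`.

```cuda
// CUDA source: a simple element-wise vector-addition kernel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = a[i] + b[i];
}

// A possible SYCL translation (SYCL 2020, USM pointers assumed).
#include <sycl/sycl.hpp>
void vecAdd(sycl::queue &q, const float *a, const float *b, float *c, int n) {
  // parallel_for over a 1-D range replaces the <<<grid, block>>> launch;
  // the work-item id replaces blockIdx/blockDim/threadIdx arithmetic.
  q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
    c[i] = a[i] + b[i];
  }).wait();
}
```

Even in this trivial case, a correct translation must restructure the launch configuration and indexing, which is the kind of mapping the fine-tuned models are evaluated on.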
Event Type
Workshop
Time
Sunday, 16 November 2025, 12:00pm - 12:20pm CST
Location
264