Presentation
Bridging the Quantum Coding Gap: Instruction-Tuned LLMs for Qiskit
Description
Large language models (LLMs) have advanced code generation across many domains, but they often struggle with quantum code due to limited domain-specific data and the inherent complexity of the domain. To address this, we focus on the Qiskit framework and fine-tune pretrained LLMs on quantum code from GitHub together with datasets including OASST1 and COMMITPACKFT. More importantly, we construct instruction-style prompt/completion pairs from real-world Qiskit code to improve alignment during fine-tuning. Experiments show that our fine-tuned models significantly improve quantum code generation, validating the effectiveness of our approach.
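The abstract describes building instruction-style prompt/completion pairs from real Qiskit code. A minimal sketch of what such a record might look like is below; the helper name `build_pair` and the instruction template are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: wrapping a real Qiskit snippet into an
# instruction-style prompt/completion record for fine-tuning.
# build_pair and the "### Instruction:" template are assumptions,
# not the authors' actual data-construction code.

def build_pair(instruction: str, code: str) -> dict:
    """Pair a natural-language instruction with its Qiskit completion."""
    return {
        "prompt": f"### Instruction:\n{instruction}\n\n### Response:\n",
        "completion": code,
    }

# A real-world-style Qiskit snippet (Bell-state circuit).
qiskit_snippet = (
    "from qiskit import QuantumCircuit\n"
    "qc = QuantumCircuit(2)\n"
    "qc.h(0)\n"
    "qc.cx(0, 1)\n"
)

pair = build_pair(
    "Create a 2-qubit Bell-state circuit in Qiskit.",
    qiskit_snippet,
)
print(pair["prompt"] + pair["completion"])
```

Records in this shape can then be concatenated (prompt followed by completion) as supervised fine-tuning examples.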

Event Type
Research and ACM SRC Posters
Time
Tuesday, 18 November 2025, 8:00am - 5:00pm CST
Location
Second Floor Atrium
