Presentation
LLMTailor: A Layer-wise Tailoring Tool for Efficient Checkpointing of Large Language Models
Description
Checkpointing is essential for fault tolerance when training large language models (LLMs). However, existing methods, regardless of their I/O strategies, periodically store the entire model and optimizer state, incurring substantial storage overhead and contention. Recent studies reveal that updates across LLM layers are highly non-uniform: during training, some layers undergo significant changes while others remain stable or even unchanged. This suggests that selectively checkpointing only the layers with significant updates could reduce overhead without harming training. Implementing such strategies requires fine-grained control over both weights and optimizer states, which no current tool provides. To address this gap, we propose LLMTailor, a checkpoint-merging framework that filters and assembles layers from different checkpoints to form a composite checkpoint. Our evaluation indicates that LLMTailor can work with different checkpointing strategies and effectively reduce checkpoint size (e.g., 4.3 times smaller for Llama3.1-8B) and checkpointing time (e.g., 2.8 times faster for Qwen2.5-7B) while maintaining model quality.
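
The following is a minimal, hypothetical sketch of the layer-wise selection and merging idea described above, written against plain PyTorch state_dicts. The function names, the relative-change criterion, and the threshold are illustrative assumptions, not LLMTailor's actual API; the same filtering would have to be applied to optimizer states as well.

# Hypothetical sketch of layer-wise selective checkpointing and merging.
import torch

def select_changed_layers(curr_state, prev_state, threshold=0.01):
    # Keep only tensors whose relative change since the last full
    # checkpoint exceeds the (assumed) threshold.
    selected = {}
    for name, curr in curr_state.items():
        prev = prev_state.get(name)
        if prev is None:
            selected[name] = curr          # new tensor: always keep
            continue
        rel_change = (curr - prev).norm().item() / (prev.norm().item() + 1e-12)
        if rel_change > threshold:
            selected[name] = curr
    return selected

def merge_checkpoints(base_state, partial_state):
    # Assemble a composite checkpoint: updated layers come from the partial
    # checkpoint, everything else falls back to the older base checkpoint.
    merged = dict(base_state)
    merged.update(partial_state)
    return merged

# Usage sketch with a stand-in model.
model = torch.nn.Linear(4, 4)
prev_state = {k: v.clone() for k, v in model.state_dict().items()}
with torch.no_grad():
    model.weight.add_(0.5)                 # simulate a training update
partial = select_changed_layers(model.state_dict(), prev_state)
torch.save(partial, "partial_ckpt.pt")     # only the changed layers are stored
full_state = merge_checkpoints(prev_state, torch.load("partial_ckpt.pt"))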
Event Type: Workshop
Time: Monday, 17 November 2025, 10:30am - 11:00am CST
Location: 230
Data Analytics
High Performance I/O, Storage, Archive, & File Systems
Storage
Livestreamed
Recorded




