Presentation

TT-LoRA MoE: Using Parameter-Efficient Fine-Tuning and Sparse Mixture-of-Experts
Description

We propose Tensor-Trained Low-Rank Adaptation Mixture-of-Experts (TT-LoRA MoE), a novel computational framework integrating parameter-efficient fine-tuning (PEFT) with sparse MoE routing to address scalability challenges in large model deployments. Unlike traditional MoE approaches, which incur substantial computational overhead as expert counts grow, TT-LoRA MoE decomposes training into two distinct, optimized stages. First, we independently train lightweight, tensorized low-rank adapters (TT-LoRA experts), each specialized for a specific task. These expert adapters are then kept frozen, eliminating inter-task interference and catastrophic forgetting in multi-task settings. A sparse MoE router, trained separately, dynamically leverages base-model representations to select exactly one specialized adapter per input at inference time, automating expert selection without explicit task specification. Comprehensive experiments show that our architecture retains the memory efficiency of low-rank adapters, scales seamlessly to large expert pools, and achieves robust task-level optimization. This structured decoupling significantly enhances computational efficiency and flexibility, enabling practical and scalable multi-task inference deployments.
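The inference path described above — frozen tensorized adapters plus a separately trained router that picks exactly one expert per input — can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the core shapes, the two-core tensor-train factorization, and the names `make_tt_cores`, `tt_to_dense`, and `moe_forward` are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 16, 4  # toy sizes; d_model factors as 4 * 4 below

def make_tt_cores(dims=(4, 4), rank=2):
    # Each expert's weight update is stored as a chain of small tensor-train
    # cores G_k of shape (r_{k-1}, dim_in_k, dim_out_k, r_k), with r_0 = r_K = 1,
    # instead of a dense d_model x d_model matrix (illustrative factorization).
    ranks = (1, rank, 1)
    return [rng.normal(scale=0.1, size=(ranks[k], dims[k], dims[k], ranks[k + 1]))
            for k in range(len(dims))]

def tt_to_dense(cores):
    # Contract the TT cores back into a dense d_model x d_model update.
    out = cores[0]
    for core in cores[1:]:
        # Contract the trailing rank of `out` with the leading rank of `core`.
        out = np.einsum('aijb,bklc->aikjlc', out, core)
        a, i, k, j, l, c = out.shape
        out = out.reshape(a, i * k, j * l, c)
    return out.reshape(out.shape[1], out.shape[2])

# Stage 1: experts are trained independently per task, then frozen.
experts = [make_tt_cores() for _ in range(n_experts)]

# Stage 2: a sparse router is trained separately on frozen base-model
# features (random weights here stand in for the trained router).
W_router = rng.normal(size=(d_model, n_experts))

def moe_forward(h):
    """h: base-model hidden state, shape (d_model,)."""
    logits = h @ W_router
    k = int(np.argmax(logits))         # top-1: exactly one expert per input
    delta_W = tt_to_dense(experts[k])  # reconstruct only the chosen adapter
    return h + delta_W @ h, k

h = rng.normal(size=d_model)
y, chosen = moe_forward(h)
```

Because the router is top-1, only one adapter is contracted per input, so the per-token cost stays flat as the expert pool grows; the frozen experts also mean adding a new task never perturbs previously trained adapters.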