Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition (CVPR 2025)

Zheda Mai, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Li Zhang, Wei-Lun Chao
The Ohio State University, Google

mai.145@osu.edu

Abstract

Parameter-efficient fine-tuning (PEFT) has attracted significant attention due to the growth of pre-trained model sizes and the need to fine-tune (FT) them for superior downstream performance. Despite a surge of new PEFT methods, a systematic study to understand their performance and suitable application scenarios is lacking, leaving questions like "when to apply PEFT" and "which method to use" largely unanswered, especially in visual recognition. In this paper, we conduct a unifying empirical study of representative PEFT methods with Vision Transformers. We systematically tune their hyper-parameters to fairly compare their accuracy on downstream tasks. Our study offers a practical user guide and unveils several new insights. First, if tuned carefully, different PEFT methods achieve similar accuracy on the low-shot benchmark VTAB-1K, including simple approaches such as fine-tuning only the bias terms, which had previously been reported to be inferior. Second, despite similar accuracy, we find that PEFT methods make different mistakes and high-confidence predictions, likely due to their different inductive biases. This inconsistency (or complementarity) opens up opportunities for ensemble methods, and we make preliminary attempts in this direction. Third, going beyond the commonly studied low-shot tasks, we find that PEFT is also useful in many-shot regimes, achieving comparable or better accuracy than full FT while tuning significantly fewer parameters. Lastly, we investigate PEFT's ability to preserve the robustness of a pre-trained model (e.g., CLIP) to distribution shifts. Perhaps not surprisingly, PEFT approaches outperform full FT alone. However, with weight-space ensembles, full FT can better balance performance on the target distribution and under distribution shifts, suggesting a future research direction for robust PEFT.
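
To make the bias-term finding concrete, below is a minimal sketch of bias-only fine-tuning (in the spirit of BitFit) with a pre-trained Vision Transformer in PyTorch. The timm checkpoint, head handling, and hyper-parameters are illustrative assumptions, not the paper's exact recipe.

import timm
import torch

# Load a pre-trained ViT; the specific checkpoint and class count are
# illustrative choices, not the paper's setup.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=10)

# Freeze everything except the bias terms; the newly initialized
# classification head is commonly kept trainable as well.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias") or name.startswith("head.")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # hypothetical hyper-parameters
print(f"trainable: {sum(p.numel() for p in trainable):,} of "
      f"{sum(p.numel() for p in model.parameters()):,} parameters")

Only a small fraction of the parameters is updated this way, which is what makes its competitive VTAB-1K accuracy notable.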

Contributions

  • Fair Benchmarking of PEFT Methods: We provide a systematic framework with a comprehensive code base implementing 16 PEFT methods, which serves as a valuable resource for consistent and reproducible evaluation.
  • Practical Recommendations in Various Scenarios: We provide empirical recommendations on when and how to use different PEFT methods across scenarios, including low-shot and many-shot regimes, varying domain gaps between pre-training and downstream data, and the trade-off between in-distribution accuracy and out-of-distribution (OOD) robustness.
  • Inspiring Future Research: Our findings suggest several directions for future research, including leveraging prediction differences among PEFT methods in other learning paradigms such as semi-supervised learning (a minimal ensemble sketch follows this list), robust fine-tuning with PEFT, and empirical evidence toward understanding how PEFT works.
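
As a concrete illustration of the prediction-difference and robustness points above, below is a minimal sketch of the two ensemble ideas mentioned in the abstract: output-space ensembling across PEFT-tuned models and WiSE-FT-style weight-space interpolation. It assumes PyTorch classifiers that share one architecture and have floating-point state dicts; the function and variable names are hypothetical, not the paper's code.

import copy
import torch

@torch.no_grad()
def prediction_ensemble(models, x):
    # Output-space ensemble: average the softmax probabilities of several
    # PEFT-tuned models, whose mistakes tend to differ.
    probs = [m(x).softmax(dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)

def weight_space_ensemble(zeroshot, finetuned, alpha=0.5):
    # WiSE-FT-style interpolation between pre-trained (zero-shot) and fully
    # fine-tuned weights; alpha trades target-distribution accuracy against
    # robustness under distribution shift.
    sd0, sd1 = zeroshot.state_dict(), finetuned.state_dict()
    merged = {k: (1 - alpha) * sd0[k] + alpha * sd1[k] for k in sd0}
    wse = copy.deepcopy(zeroshot)
    wse.load_state_dict(merged)
    return wse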

BibTeX


@article{mai2024lessons,
  title={Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition},
  author={Mai, Zheda and Zhang, Ping and Tu, Cheng-Hao and Chen, Hong-You and Zhang, Li and Chao, Wei-Lun},
  journal={arXiv preprint arXiv:2409.16434},
  year={2024}
}