A collection of methods that have been implemented in the 🤗 PEFT library. A minimal usage sketch for each method follows the list.
- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
  Paper • 2303.10512
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
  Paper • 2205.05638
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
  Paper • 2303.16199
- FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
  Paper • 2108.06098
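
The first paper introduces AdaLoRA, which parameterizes weight updates in an SVD-like form and adaptively prunes singular values during training so the rank budget concentrates on the most important weight matrices. Below is a minimal sketch of configuring it through PEFT's `AdaLoraConfig`; the base checkpoint and all hyperparameter values are illustrative assumptions, not values from the paper.

```python
# A minimal AdaLoRA sketch; model name and hyperparameters are assumptions.
from transformers import AutoModelForSequenceClassification
from peft import AdaLoraConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed base model
)

config = AdaLoraConfig(
    task_type=TaskType.SEQ_CLS,
    init_r=12,        # initial rank of each SVD-parameterized update
    target_r=4,       # average rank budget after adaptive pruning
    tinit=200,        # warmup steps before budget pruning starts
    tfinal=1000,      # final fine-tuning steps after the budget schedule ends
    total_step=2000,  # total planned training steps
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # assumed attention projections
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
# Note: AdaLoRA expects the training loop to advance its budget schedule,
# e.g. via model.base_model.update_and_allocate(global_step) each step.
```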
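The second paper (T-Few) proposes (IA)³: instead of adding low-rank updates, it learns small vectors that rescale keys, values, and intermediate feed-forward activations. A sketch using PEFT's `IA3Config` follows; the T5 checkpoint and module names are assumptions chosen for illustration.

```python
# A minimal (IA)^3 sketch; checkpoint and module names are assumptions.
from transformers import AutoModelForSeq2SeqLM
from peft import IA3Config, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # assumed base model

config = IA3Config(
    task_type=TaskType.SEQ_2_SEQ_LM,
    target_modules=["k", "v", "wo"],  # keys, values, FFN output projection
    feedforward_modules=["wo"],       # modules treated as feed-forward layers
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the tiny rescaling vectors train
```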
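The third paper, LLaMA-Adapter, prepends learnable adaption prompts to the top transformer layers and gates them with zero-initialized attention, so training starts from the frozen model's behavior. PEFT exposes this as `AdaptionPromptConfig`; the checkpoint and the prompt/layer counts below are illustrative assumptions.

```python
# A minimal LLaMA-Adapter sketch; checkpoint and sizes are assumptions.
from transformers import AutoModelForCausalLM
from peft import AdaptionPromptConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed

config = AdaptionPromptConfig(
    task_type=TaskType.CAUSAL_LM,
    adapter_len=10,     # learnable prompt tokens per adapted layer
    adapter_layers=30,  # adapt the topmost layers of the network
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```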
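The fourth paper, FedPara, re-parameterizes a weight update as the Hadamard (elementwise) product of two low-rank factorizations, which yields a higher effective rank per trainable parameter; PEFT implements this technique under the name LoHa. A sketch with `LoHaConfig` follows, again with an assumed checkpoint and assumed hyperparameters.

```python
# A minimal LoHa (FedPara-style) sketch; model and settings are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoHaConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # assumed

config = LoHaConfig(
    r=8,        # rank of each low-rank factor pair
    alpha=16,   # scaling factor, analogous to LoRA's alpha
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)

# The update applied to each target weight is (B1 @ A1) * (B2 @ A2),
# an elementwise product of two rank-r factorizations.
model = get_peft_model(model, config)
model.print_trainable_parameters()
```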