
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
This paper falls into the category of parameter-efficient fine-tuning, where the goal is to update as few parameters as possible while achieving nearly the same accuracy as fine-tuning the entire model.
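BitFit's core idea is to freeze all of the pretrained weights and train only the bias terms of the transformer. Below is a minimal PyTorch-style sketch of how such a setup could be applied to any nn.Module; the helper name apply_bitfit and the toy encoder layer are illustrative assumptions, not code from the paper.

```python
import torch
from torch import nn

def apply_bitfit(model: nn.Module) -> None:
    """Freeze every parameter except bias terms (BitFit-style)."""
    for name, param in model.named_parameters():
        # Only parameters whose name contains "bias" stay trainable.
        param.requires_grad = "bias" in name

# Toy example: a single transformer encoder layer stands in for a full model.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
apply_bitfit(layer)

trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

Running this shows that only a tiny fraction of the parameters remain trainable, which is the source of the method's parameter efficiency.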