Position: Machine learning with limited (labelled) data
In recent years, popular transformer models have come to require not only vast computational resources for training but also enormous amounts of training data. As a result, large generative models are highly power- and cost-inefficient. Parameter-efficient fine-tuning (PEFT) methods have emerged to address these problems. In my work, I analyze these methods and evaluate their usefulness in limited-data settings.
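To illustrate the efficiency argument, here is a minimal sketch (plain Python, with a hypothetical hidden size and rank chosen for illustration) comparing the trainable-parameter count of full fine-tuning against a LoRA-style low-rank update, one common parameter-efficient fine-tuning technique:

```python
# LoRA-style update: W' = W + B @ A, where A is (r x d_in) and B is (d_out x r).
# Instead of training all d_out * d_in entries of W, only the two small
# low-rank factors A and B are trained.

def full_ft_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when fine-tuning the full weight matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA update (factors A and B)."""
    return r * (d_in + d_out)

if __name__ == "__main__":
    d_in = d_out = 4096   # hypothetical transformer hidden size
    r = 8                 # illustrative small LoRA rank
    full = full_ft_params(d_in, d_out)
    lora = lora_params(d_in, d_out, r)
    print(f"full fine-tuning: {full:,} trainable params")
    print(f"LoRA (r={r}):      {lora:,} trainable params")
    print(f"reduction:         {full // lora}x")
```

For a single 4096x4096 weight matrix, the rank-8 update trains roughly 256 times fewer parameters, which is also what makes such methods attractive when labelled data is scarce: fewer trainable parameters mean less risk of overfitting a small dataset.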