A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness

Pecher, B., Srba, I., Bielikova, M.

Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-learning or few-shot learning, aims to effectively train a model using only a small number of labelled samples. However, these approaches have been observed to be excessively sensitive to the effects of uncontrolled randomness caused by non-determinism in the training process. This randomness negatively affects the stability of the models, leading to large variances in results across training runs. When such sensitivity is disregarded, it can unintentionally, but unfortunately also intentionally, create an illusory perception of research progress. Recently, this area has started to attract research attention, and the number of relevant studies is continuously growing. In this survey, we provide a comprehensive overview of 415 papers addressing the effects of randomness on the stability of learning with limited labelled data. We distinguish between four main tasks addressed in the papers (investigate/evaluate; determine; mitigate; benchmark/compare/report randomness effects), providing findings for each one. Furthermore, we identify and discuss seven challenges and open problems, together with possible directions to facilitate further research. The ultimate goal of this survey is to emphasise the importance of this growing research area, which so far has not received an appropriate level of attention, and to reveal impactful directions for future research.
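The variance across training runs mentioned above is typically quantified by repeating the same experiment under different random seeds and reporting aggregate statistics rather than a single score. The following is a minimal, hypothetical sketch of that practice; `train_and_evaluate` is a placeholder standing in for a real limited-label training run, not a method from the survey.

```python
import random
import statistics

def train_and_evaluate(seed: int) -> float:
    """Placeholder for one full training run (e.g. few-shot fine-tuning).

    A real implementation would fix the seed of the framework's RNGs,
    train on the small labelled set, and return test accuracy. Here we
    simulate a seed-dependent outcome to illustrate the reporting step.
    """
    rng = random.Random(seed)
    return 0.70 + rng.gauss(0, 0.05)  # simulated accuracy around 0.70

# Repeat the run over multiple seeds and report the spread, not one number.
seeds = range(10)
scores = [train_and_evaluate(s) for s in seeds]
print(f"mean={statistics.mean(scores):.3f} "
      f"std={statistics.stdev(scores):.3f} "
      f"min={min(scores):.3f} max={max(scores):.3f}")
```

Reporting mean and standard deviation (or min/max) over seeds makes the instability visible, which a single-run score would hide.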

Cite: Branislav Pecher, Ivan Srba, and Maria Bielikova. 2024. A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness. ACM Comput. Surv. Just Accepted (September 2024). https://doi.org/10.1145/3691339


Authors

Branislav Pecher
PhD Student
Ivan Srba
Researcher
Maria Bielikova
Lead and Researcher