I completed my master’s degree in theoretical computer science at Comenius University, writing my thesis on knowledge distillation in the context of language models. Before that, my bachelor’s thesis was on perfect matchings in graph theory.
PhD topic: Solving empirical risk minimization problems using efficient optimization methods
Supervising team: Peter Richtárik (King Abdullah University of Science and Technology)
The prevalent paradigm for training modern supervised machine learning models is to cast training as an empirical risk minimization problem and solve it with specialized optimization methods. These methods need to be reliable, scalable, flexible, and practically useful, and they often involve distributed computing. While much progress has been made in the field over the last decade, many open problems remain. These concern issues such as communication efficiency, variance reduction, random selection and reshuffling of training data, client drift reduction, local training in federated learning, differential privacy, generalization, lack of convexity, and more.
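To make the paradigm concrete, here is a minimal sketch of empirical risk minimization solved by stochastic gradient descent. It assumes a simple least-squares loss on synthetic data; all names, sizes, and the step size are illustrative choices, not part of any specific method discussed above.

```python
import numpy as np

# Empirical risk: f(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)^2,
# minimized by sampling one training example per step (SGD).
rng = np.random.default_rng(0)
n, d = 100, 5                          # illustrative problem size
A = rng.normal(size=(n, d))            # synthetic features a_1, ..., a_n
x_true = rng.normal(size=d)
b = A @ x_true                         # noiseless labels, so x_true minimizes f

x = np.zeros(d)                        # initial iterate
lr = 0.01                              # constant step size (an assumption)
for step in range(2000):
    i = rng.integers(n)                # random selection of a training example
    grad = (A[i] @ x - b[i]) * A[i]    # stochastic gradient of f_i at x
    x -= lr * grad                     # SGD update

print(np.linalg.norm(x - x_true))      # distance to the minimizer
```

Variants of this loop (mini-batching, variance reduction, local steps on distributed clients) are exactly where the open problems listed above arise.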