Knowledge-sharing seminar in Theory and Practice of Optimisation in Machine Learning

Peter Richtárik, Professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), gave a lecture titled "From the Ball-proximal (Broximal) Point Method to Efficient Training of LLMs".

Lecture abstract

I will present selected results from two recent related papers [1, 2]. The abstracts of both are included below:

Non-smooth and non-convex global optimisation poses significant challenges across various applications, where standard gradient-based methods often struggle. We propose the Ball-Proximal Point Method, Broximal Point Method, or Ball Point Method (BPM) for short – a novel algorithmic framework inspired by the classical Proximal Point Method (PPM) (Rockafellar, 1976), which, as we show, sheds new light on several foundational optimisation paradigms and phenomena, including non-convex and non-smooth optimisation, acceleration, smoothing, adaptive stepsize selection, and trust-region methods. At the core of BPM lies the ball-proximal (“broximal”) operator, which arises from the classical proximal operator by replacing the quadratic distance penalty with a ball constraint. Surprisingly, and in sharp contrast with the sublinear rate of PPM in the nonsmooth convex regime, we prove that BPM converges linearly and in a finite number of steps in the same regime. Furthermore, by introducing the concept of ball-convexity, we prove that BPM retains the same global convergence guarantees under weaker assumptions, making it a powerful tool for a broader class of potentially non-convex optimisation problems. Just like PPM plays the role of a conceptual method inspiring the development of practically efficient algorithms and algorithmic elements, e.g., gradient descent, adaptive step sizes, acceleration (Ahn & Sra, 2020), and “W” in AdamW (Zhuang et al., 2022), we believe that BPM should be understood in the same manner: as a blueprint and inspiration for further development.
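The broximal operator replaces PPM's quadratic distance penalty with a hard ball constraint: each BPM step minimises the objective over a ball of radius r around the current iterate. A minimal one-dimensional sketch (brute-force grid search over the ball; the objective, radius, and iteration count are illustrative, not from the papers) shows the finite-step behaviour on a non-smooth convex function:

```python
import numpy as np

def broximal_step(f, x, r, num=10001):
    # Broximal ("ball-proximal") step in 1D, solved by brute force:
    # minimise f over the ball [x - r, x + r] around the current iterate.
    grid = np.linspace(x - r, x + r, num)
    return grid[np.argmin(f(grid))]

def bpm(f, x0, r, iters):
    # Ball-Proximal Point Method: repeatedly apply the broximal step.
    x = x0
    for _ in range(iters):
        x = broximal_step(f, x, r)
    return x

# Non-smooth convex objective with minimiser at 3.0.
f = lambda x: np.abs(x - 3.0)

# Starting at 0 with radius 1, each step travels the full radius toward
# the minimiser, which is reached exactly after a finite number of steps.
x_star = bpm(f, x0=0.0, r=1.0, iters=5)
```

This toy run illustrates the finite-termination phenomenon the abstract highlights: unlike a gradient method on |x - 3|, which oscillates near the kink, the ball-constrained step lands on the minimiser exactly once it is within reach.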

Recent developments in deep learning optimisation have brought about radically new algorithms based on the Linear Minimisation Oracle (LMO) framework, such as Muon [3] and Scion [4]. After over a decade of Adam’s [5] dominance, these LMO-based methods are emerging as viable replacements, offering several practical advantages such as improved memory efficiency, better hyperparameter transferability, and most importantly, superior empirical performance on large-scale tasks, including LLM training. However, a significant gap remains between their practical use and our current theoretical understanding: prior analyses (1) overlook the layer-wise LMO application of these optimisers in practice, and (2) rely on an unrealistic smoothness assumption, leading to impractically small stepsizes. To address both, we propose a new LMO-based method called Gluon, capturing prior theoretically analysed methods as special cases, and introduce a new refined generalised smoothness model that captures the layer-wise geometry of neural networks, matches the layer-wise practical implementation of Muon and Scion, and leads to convergence guarantees with strong practical predictive power. Unlike prior results, our theoretical stepsizes closely match the fine-tuned values reported by Pethick et al. (2025). Our experiments with NanoGPT and CNN confirm that our assumption holds along the optimisation trajectory, ultimately closing the gap between theory and practice.
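For the spectral-norm ball used by Muon-style methods, the LMO has a closed form: if the (matrix) gradient has SVD G = U S Vᵀ, then argmin over {‖X‖₂ ≤ ρ} of ⟨G, X⟩ is −ρ U Vᵀ, i.e., the orthogonalised gradient direction. A minimal sketch of one layer's update (toy quadratic loss, illustrative radius and step size; this is not the authors' implementation of Muon, Scion, or Gluon):

```python
import numpy as np

def lmo_spectral(G, radius=1.0):
    # Linear Minimisation Oracle over the spectral-norm ball:
    # argmin_{||X||_2 <= radius} <G, X> = -radius * U @ V^T,
    # where G = U S V^T is the SVD of the gradient matrix.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return -radius * U @ Vt

# Toy per-layer objective: 0.5 * ||W||_F^2, whose gradient is simply W.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # one "layer" weight matrix
lr = 0.1                          # illustrative stepsize

for _ in range(100):
    G = W                         # gradient of the toy quadratic loss
    W = W + lr * lmo_spectral(G)  # LMO-based (conditional-gradient-style) step
```

In practice these methods apply the LMO layer-wise, one matrix per layer, which is exactly the aspect the abstract says prior analyses overlooked; here the SVD-based direction normalises all singular values of the gradient to one.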

[1] Kaja Gruntkowska, Hanmin Li, Aadi Rane, and Peter Richtárik. The ball-proximal (=”broximal”) point method: a new algorithm, convergence theory, and applications. arXiv preprint arXiv:2502.02002, 2025.

[2] Artem Riabinin, Kaja Gruntkowska, Egor Shulgin, and Peter Richtárik. Gluon: Making Muon & Scion great again! (Bridging theory and practice of LMO-based optimizers for LLMs). arXiv preprint arXiv:2505.13416, 2025.

[3] Keller Jordan, Yuchen Jin, Vlado Boza, Jiacheng You, Franz Cesista, Laker Newhouse, and Jeremy Bernstein. Muon: An optimizer for hidden layers in neural networks, 2024. URL https://kellerjordan.github.io/posts/muon/

[4] Thomas Pethick, Wanyun Xie, Kimon Antonakopoulos, Zhenyu Zhu, Antonio Silveti-Falls, and Volkan Cevher. Training deep learning models with norm-constrained LMOs. arXiv preprint arXiv:2502.07529, 2025.

[5] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Photos from the seminar