Volume 60, pp. 618-635, 2024.
LSEMINK: a modified Newton–Krylov method for Log-Sum-Exp minimization
Kelvin Kan, James G. Nagy, and Lars Ruthotto
Abstract
This paper introduces LSEMINK, an effective modified Newton–Krylov algorithm geared toward minimizing the log-sum-exp function for a linear model. Problems of this kind arise commonly, for example, in geometric programming and multinomial logistic regression. Although the log-sum-exp function is smooth and convex, standard line-search Newton-type methods can become inefficient because the quadratic approximation of the objective function can be unbounded from below. To circumvent this, LSEMINK modifies the Hessian by adding a shift in the row space of the linear model. We show that the shift renders the quadratic approximation bounded from below and that the overall scheme converges to a global minimizer under mild assumptions. Our convergence proof also shows that all iterates lie in the row space of the linear model, which can be attractive when the model parameters do not have an intuitive meaning, as is common in machine learning. Since LSEMINK uses a Krylov subspace method to compute the search direction, it only requires matrix-vector products with the linear model, which is critical for large-scale problems. Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and improves scalability compared to standard geometric programming solvers and natural gradient descent approaches. It has significantly faster initial convergence than standard Newton–Krylov methods, which is particularly attractive in applications like machine learning. In addition, LSEMINK is more robust to the ill-conditioning arising from the nonsmoothness of the problem. We share our MATLAB implementation in a GitHub repository (https://github.com/KelvinKan/LSEMINK).
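To make the idea above concrete, here is a minimal Python sketch of one LSEMINK-style step for the model problem f(x) = log(sum_i exp((Ax)_i)). The fixed shift weight mu, the absence of a bias term, and the helper name lsemink_step are illustrative assumptions; only the overall structure (a Hessian modified by a term in the row space of A, and a Krylov solve that needs nothing beyond matrix-vector products with A and its transpose) comes from the abstract. The authors' actual MATLAB implementation, including the line search and the choice of shift, is in the repository linked above.

```python
# Minimal sketch of one LSEMINK-style step (illustrative, not the paper's code).
# Model problem: minimize f(x) = log(sum_i exp((A x)_i)) for a linear model A.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def lsemink_step(A, x, mu=1.0):  # mu is an assumed, fixed shift weight
    m, n = A.shape
    p = softmax(A @ x)           # p_i > 0, sum(p) = 1
    g = A.T @ p                  # gradient of f at x
    # The Hessian of f is A^T (diag(p) - p p^T) A, which can be singular;
    # adding mu * A^T A (a shift in the row space of A) bounds the
    # quadratic model from below.
    def modified_hessian_matvec(v):
        w = A @ v                                    # only matvecs with A
        return A.T @ (p * w - p * (p @ w) + mu * w)  # ... and with A^T
    H = LinearOperator((n, n), matvec=modified_hessian_matvec, dtype=A.dtype)
    d, _ = cg(H, -g)             # Krylov solve for the search direction
    return x + d                 # the paper adds a line search here
```

Because CG starts from the zero vector, every Krylov iterate, and hence the step d, lies in the row space of A, matching the iterate property stated in the abstract.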
Key words
log-sum-exp minimization, Newton–Krylov method, modified Newton method, machine learning, geometric programming
AMS subject classifications
65K10