
CSG: A stochastic gradient method for a wide class of optimization problems appearing in a machine learning or data-driven context

Submission Status: under review
  • In a recent article, the so-called continuous stochastic gradient method (CSG) for the efficient solution of a class of stochastic optimization problems was introduced. While the applicability of known stochastic gradient-type methods is typically limited to so-called expected risk functions, no such limitation exists for CSG. The key to this lies in the computation of design-dependent integration weights, which allows for an optimal usage of available information and leads to stronger convergence properties. However, due to the nature of the formula for these integration weights, the practical applicability was essentially limited to problems in which stochasticity enters via a low-dimensional and sufficiently simple probability distribution. In this paper, the scope of the CSG method is significantly extended by presenting new ways of calculating the integration weights. A full convergence analysis for this new variant of the CSG method is presented, and its efficiency is demonstrated in comparison to more classical stochastic gradient methods by means of a number of problem classes relevant to stochastic optimization and machine learning.
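  To make the weighting idea in the abstract concrete, the following Python snippet is a rough, non-authoritative sketch of a CSG-style update, not the authors' actual algorithm: gradient samples from all previous iterations are reused and combined with empirically estimated, design-dependent integration weights obtained here from a simple nearest-neighbour rule. The toy objective E[0.5*(x - w)^2] and the names `j_grad` and `csg_style_step` are hypothetical choices for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    def j_grad(x, w):
        # Gradient (in x) of the toy integrand j(x, w) = 0.5 * (x - w)**2.
        return x - w

    def csg_style_step(x, hist_w, hist_g, step, n_ref=256):
        """One CSG-style update (sketch): sample a new w, store its gradient,
        then combine all stored gradients with approximate integration weights."""
        w = rng.uniform(-1.0, 1.0)      # new random sample
        hist_w.append(w)
        hist_g.append(j_grad(x, w))     # gradient evaluated at the current design

        # Weight of past sample i ~ fraction of fresh reference draws whose
        # nearest previously sampled w is w_i (a crude stand-in for the exact
        # CSG integration weights described in the paper).
        ref = rng.uniform(-1.0, 1.0, n_ref)
        idx = np.abs(ref[:, None] - np.asarray(hist_w)[None, :]).argmin(axis=1)
        weights = np.bincount(idx, minlength=len(hist_w)) / n_ref

        grad_est = float(np.dot(weights, np.asarray(hist_g)))
        return x - step * grad_est

    x, hist_w, hist_g = 3.0, [], []
    for _ in range(200):
        x = csg_style_step(x, hist_w, hist_g, step=0.5)
    print(f"final design x = {x:.3f} (minimiser of E[0.5*(x - w)^2], w ~ U(-1,1), is 0)")

  In contrast to a plain stochastic gradient step, which uses only the most recent sample, the weighted sum over the full sample history is what gives CSG-type methods their stronger convergence behaviour; the exact weight computation in the paper replaces the Monte Carlo nearest-neighbour estimate used above.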
Metadata
Authors: Lukas Pflug, Max Grieshammer, Andrian Uihlein, Michael Stingl
Document Type: Preprint
Language: English
Date of Publication (online): 2021/11/03
Release Date: 2021/11/03
Tags: chance constraints; machine learning; nonlinear stochastic optimization; stochastic gradient method
Page Number: 22
Institutes: Friedrich-Alexander-Universität Erlangen-Nürnberg
Subprojects: B06
Licence: Creative Commons CC BY (Attribution) 4.0 International