A Stochastic Operator Framework for Optimization and Learning With Sub-Weibull Errors

Carli R.;
2024

Abstract

This paper proposes a framework to study the convergence of stochastic optimization and learning algorithms. The framework is built around the specific challenges that these algorithms pose, namely (i) the presence of random additive errors (e.g., due to stochastic gradients) and (ii) random coordinate updates (e.g., due to asynchrony in distributed set-ups). The paper covers both convex and strongly convex problems, and it also analyzes online scenarios in which the data and costs change over time. The analysis rests on interpreting stochastic algorithms as the iterated application of stochastic operators, which allows the powerful tools of operator theory to be brought to bear. In particular, we consider operators characterized by additive errors with sub-Weibull distributions (which parameterize a broad class of errors by their tail probability) and by random updates. In this framework we derive convergence results in mean and in high probability, by providing bounds ...
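To make the set-up concrete: a random variable e is sub-Weibull with tail parameter theta when P(|e| >= t) <= 2 exp(-(t/nu)^(1/theta)) for some scale nu > 0, so theta = 1/2 recovers sub-Gaussian tails and theta = 1 sub-exponential tails. The following minimal Python sketch (illustrative only, not taken from the paper) shows the kind of iteration the abstract describes: a contractive operator T applied with additive sub-Weibull errors and random coordinate updates. The quadratic problem, step size, noise scale, and update probability p are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: strongly convex quadratic f(x) = 0.5*x^T A x - b^T x,
# so the gradient step T(x) = x - gamma*(A x - b) is a contractive operator
# whose fixed point is the minimizer x* = A^{-1} b.
n = 5
A = np.diag(np.linspace(1.0, 10.0, n))   # eigenvalues in [mu, L] = [1, 10]
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)           # fixed point of T
gamma = 2.0 / (1.0 + 10.0)               # step size 2/(mu + L)

def T(x):
    """Contractive gradient-descent operator."""
    return x - gamma * (A @ x - b)

def sub_weibull_noise(size, theta=1.0, scale=0.01):
    """Additive error with sub-Weibull tails: a random sign times a
    Weibull(1/theta)-distributed magnitude, so that
    P(|e| >= t) <= 2 exp(-(t/scale)^(1/theta))."""
    magnitude = scale * rng.weibull(1.0 / theta, size)
    sign = rng.choice([-1.0, 1.0], size)
    return sign * magnitude

p = 0.7                                   # probability each coordinate updates
x = np.zeros(n)
for k in range(200):
    update = rng.random(n) < p            # random (asynchronous) coordinate updates
    e = sub_weibull_noise(n, theta=1.0)   # inexact, e.g. stochastic-gradient, error
    x = np.where(update, T(x) + e, x)     # apply the stochastic operator

print("distance to fixed point:", np.linalg.norm(x - x_star))

Under these assumptions the iterates contract toward x_star up to a noise-dependent neighborhood, which is the behavior the paper's bounds in mean and in high probability quantify.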
File available for this record:
File: A_Stochastic_Operator_Framework_for_Optimization_and_Learning_With_Sub-Weibull_Errors.pdf
Access: open access
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 875 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3533607
Citations
  • PubMed Central: n/a
  • Scopus: 3
  • Web of Science: 1
  • OpenAlex: 2