Learning for Control: $\mathcal{L}_1$-error Bounds for Kernel-based Regression

Pillonetto G.
2024

Abstract

We consider functional regression models with noisy outputs resulting from linear transformations. In the setting of regularization theory in Reproducing Kernel Hilbert Spaces (RKHSs), much work has been devoted to building uncertainty bounds around kernel-based estimates, hence characterizing their convergence rates. Such results are typically formulated using either the average squared prediction loss or the RKHS norm. However, in signal processing and in emerging areas like learning for control, measuring the estimation error through the $\mathcal{L}_1$ norm is often more advantageous. For example, it can provide insights on the convergence rate in the Laplace/Fourier domain, whose role is crucial in the analysis of dynamical systems. For this reason, we consider all the RKHSs $\mathcal{H}$ associated with Lebesgue measurable positive-definite kernels that induce subspaces of $\mathcal{L}_1$, also known as stable RKHSs in the literature. The inclusion $\mathcal{H} \subset \mathcal{L}_1$ is then characterized. This makes it possible to convert all the error bounds that depend on the RKHS norm into bounds in the $\mathcal{L}_1$ norm. We also show that our result is optimal: no better reformulation of the bounds in $\mathcal{L}_1$ exists than the one presented here.
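To sketch the conversion mechanism the abstract refers to (in our own notation; the constant $c_K$ below is illustrative and not taken from the paper): whenever the inclusion $\mathcal{H} \subset \mathcal{L}_1$ holds, a standard closed-graph argument suggests it is continuous, since convergence in $\mathcal{H}$ implies pointwise convergence, convergence in $\mathcal{L}_1$ implies almost-everywhere convergence along a subsequence, and both spaces are complete. Hence there exists a finite constant $c_K$, depending only on the kernel, such that

$$\|g\|_{\mathcal{L}_1} \le c_K \, \|g\|_{\mathcal{H}} \qquad \forall g \in \mathcal{H},$$

so any error bound of the form $\|\hat{g} - g\|_{\mathcal{H}} \le \epsilon$ on a kernel-based estimate $\hat{g}$ immediately yields $\|\hat{g} - g\|_{\mathcal{L}_1} \le c_K \, \epsilon$, with the convergence rate in the sample size preserved. The optimality claim can then be read as identifying the best (smallest) such constant, so that no sharper $\mathcal{L}_1$ reformulation of the RKHS-norm bounds is possible.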

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3526284