Reinforcement Learning applied to Network Synchronization Systems
Giorgi, Giada
2022
Abstract
The design of a suitable clock servo is a well-known problem in the context of network-based synchronization systems. Several approaches can be found in the current literature, typically based on PI controllers or Kalman filtering. These methods require thorough knowledge of the environment, i.e. the clock model, stability parameters, temperature variations, network traffic load, traffic profile, and so on. This a-priori knowledge is needed to optimize the servo parameters, such as the PI constants or the transition matrices of a Kalman filter. In this paper we instead propose a clock servo based on reinforcement learning. A self-learning algorithm based on a deep-Q network learns how to synchronize a local clock purely from experience, by exploiting a limited set of predefined actions. The encouraging preliminary results reported in this paper are a first step toward exploring the potential of reinforcement learning in synchronization systems, which are typically characterized by an initial lack of knowledge or by great environmental variability.
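The paper itself is not reproduced here, but the idea the abstract describes can be loosely illustrated with a sketch. The following is NOT the paper's method: it replaces the deep-Q network with plain tabular Q-learning, and every quantity (the three frequency-correction actions, the offset discretization, the drift and noise ranges, the reward) is an assumption made for illustration only. It shows how an agent can learn, purely from experience, to steer a simulated local clock toward zero offset using a limited set of predefined actions.

```python
import random

# Illustrative sketch only -- not the paper's deep-Q network.
# A tabular Q-learning agent learns to steer a simulated clock
# by choosing among a small set of predefined frequency corrections.

ACTIONS = [-1.0, 0.0, 1.0]   # assumed frequency corrections, in ppm
N_BINS, SPAN = 21, 100.0     # assumed offset discretization (microseconds)

def discretize(offset):
    """Map a time offset in microseconds to a Q-table state index."""
    clipped = max(-SPAN, min(SPAN, offset))
    return int(round((clipped + SPAN) / (2 * SPAN) * (N_BINS - 1)))

def step(offset, drift, action):
    """One sync interval of 1 s: offset moves by (drift + correction) us."""
    new_offset = offset + drift + ACTIONS[action]
    return new_offset, -abs(new_offset)   # reward: stay close to zero

def train(episodes=1000, horizon=50, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Learn a Q-table from experience alone (no clock model given)."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]
    for _ in range(episodes):
        offset = rng.uniform(-50, 50)    # unknown initial time error (us)
        drift = rng.uniform(-0.5, 0.5)   # unknown oscillator drift (ppm)
        for _ in range(horizon):
            s = discretize(offset)
            if rng.random() < eps:       # explore a random action
                a = rng.randrange(len(ACTIONS))
            else:                        # exploit the best known action
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            offset, r = step(offset, drift, a)
            s2 = discretize(offset)
            # Standard Q-learning temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q

def run_servo(q, offset, drift, steps=100):
    """Apply the learned greedy policy; return the final offset."""
    for _ in range(steps):
        s = discretize(offset)
        a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        offset, _ = step(offset, drift, a)
    return offset
```

For example, after `q = train()`, calling `run_servo(q, offset=30.0, drift=0.3)` drives the clock offset from 30 us toward zero even though the drift was never told to the agent. A deep-Q network, as used in the paper, would replace the Q-table with a neural network so that the state need not be coarsely discretized.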