Fine-tuning adaptive stochastic optimizers: determining the optimal hyperparameter ϵ via gradient magnitude histogram analysis

Gustavo Silva, Paul Rodriguez

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Stochastic optimizers play a crucial role in the successful training of deep neural network models. To achieve optimal model performance, designers must carefully select both model and optimizer hyperparameters. However, this process is frequently demanding in terms of computational resources and processing time. While it is a well-established practice to tune the entire set of optimizer hyperparameters for peak performance, there is still a lack of clarity regarding the individual influence of hyperparameters mislabeled as “low priority”, such as the safeguard factor ϵ and the decay rate β, in leading adaptive stochastic optimizers such as Adam. In this manuscript, we introduce a new framework based on the empirical probability density function of the loss gradient’s magnitude, termed the “gradient magnitude histogram”, for a thorough analysis of adaptive stochastic optimizers and the safeguard hyperparameter ϵ. This framework reveals and justifies valuable relationships and dependencies among hyperparameters in connection with optimal performance across diverse tasks, such as classification, language modeling and machine translation. Furthermore, we propose a novel algorithm that uses gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard hyperparameter ϵ, surpassing the conventional trial-and-error methodology by establishing a worst-case search space that is two times narrower.
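As a concrete illustration of the framework's central object, the sketch below shows one way such a gradient magnitude histogram could be collected during training. This is a minimal sketch under stated assumptions (PyTorch, element-wise magnitudes pooled over all parameters, log-spaced bins), not the authors' implementation; the function name and bin count are illustrative.

import numpy as np
import torch

def gradient_magnitude_histogram(model, num_bins=100):
    """Histogram of element-wise |g| over all parameters, with log-spaced bins.

    Illustrative sketch only: pooling strategy and binning are assumptions,
    not the paper's exact procedure.
    """
    # Pool the magnitudes of every available gradient entry.
    mags = torch.cat([
        p.grad.detach().abs().flatten()
        for p in model.parameters()
        if p.grad is not None
    ])
    nonzero = mags[mags > 0]
    if nonzero.numel() == 0:
        raise ValueError("all gradient entries are zero; nothing to histogram")
    # Log-spaced bin edges, since gradient magnitudes typically span
    # many orders of magnitude.
    edges = np.logspace(np.log10(nonzero.min().item()),
                        np.log10(nonzero.max().item()),
                        num_bins + 1)
    counts, _ = np.histogram(mags.cpu().numpy(), bins=edges)
    return counts, edges

Called after loss.backward() at selected training steps, the resulting histogram can be compared against candidate values of ϵ: since Adam's update divides by √v̂ₜ + ϵ, a candidate lying above most of the histogram mass dominates the denominator rather than acting as a mere safeguard, which is the kind of relationship the proposed search-space estimation builds on.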

Original language: English
Pages (from-to): 22223-22243
Number of pages: 21
Journal: Neural Computing and Applications
Volume: 36
Issue: 35
DOI
Status: Published - Dec. 2024
