Assessment of a Two-step Integration Method as an Optimizer for Deep Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


It is known that accelerated (non-stochastic) optimization methods can be understood as multi-step integration methods: e.g., Polyak's heavy-ball and Nesterov's accelerations can be derived as particular instances of a two-step integration method applied to the gradient flow. In the stochastic context, however, to the best of our knowledge, multi-step integration methods have not been exploited as such, only through particular instances, i.e., SGD (stochastic gradient descent) with momentum or with the Nesterov acceleration. In this paper we propose to use a two-step (TS) integration method directly in the stochastic context. Furthermore, we assess the computational effectiveness of selecting the TS method's weights by considering its lattice representation. Our experiments include several well-known multiclass classification architectures (AlexNet, VGG16 and EfficientNetV2) as well as several established stochastic optimizers, e.g., SGD with momentum/Nesterov acceleration and ADAM. The TS-based method attains better test accuracy than the first two, and is competitive with a well-tuned (ε/learning rate) ADAM.
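To make the idea concrete, a generic two-step update applies the new iterate as a weighted combination of the two previous iterates and gradients. The following minimal sketch uses illustrative weights (a1, b0, b1) chosen by hand; the paper's lattice-based weight selection is not reproduced here. With b1 = 0 the update reduces to SGD with heavy-ball momentum a1.

```python
import numpy as np

def ts_sgd(grad, x0, lr=0.1, a1=0.5, b0=1.0, b1=0.0, steps=100):
    """Generic two-step (linear multistep) stochastic update:
        x_{k+1} = x_k + a1*(x_k - x_{k-1}) - lr*(b0*g_k + b1*g_{k-1})
    Weights a1, b0, b1 are illustrative placeholders, not the
    paper's lattice-selected values."""
    x_prev = x0.copy()          # x_{k-1}
    x = x0.copy()               # x_k
    g_prev = grad(x0)           # g_{k-1}
    for _ in range(steps):
        g = grad(x)
        x_new = x + a1 * (x - x_prev) - lr * (b0 * g + b1 * g_prev)
        x_prev, x, g_prev = x, x_new, g
    return x

# Example: minimize f(x) = ||x||^2 / 2, whose gradient is g(x) = x.
x_star = ts_sgd(lambda x: x, np.array([5.0, -3.0]))
```

In practice `grad` would be a stochastic mini-batch gradient of the network loss; here a deterministic quadratic keeps the sketch self-contained.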

Original language: English
Title of host publication: 31st European Signal Processing Conference, EUSIPCO 2023 - Proceedings
Publisher: European Signal Processing Conference, EUSIPCO
Number of pages: 5
ISBN (Electronic): 9789464593600
State: Published - 2023
Event: 31st European Signal Processing Conference, EUSIPCO 2023 - Helsinki, Finland
Duration: 4 Sep 2023 - 8 Sep 2023

Publication series

Name: European Signal Processing Conference
ISSN (Print): 2219-5491


Conference: 31st European Signal Processing Conference, EUSIPCO 2023


Keywords:
  • gradient flow
  • stochastic gradient descent


