TY - GEN
T1 - Improved Solution to the ℓ0 Regularized Optimization Problem via Dictionary-Reduced Initial Guess
AU - Rodriguez, Paul
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/27
Y1 - 2018/8/27
N2 - The ℓ0 regularized optimization (ℓ0-RO) problem is a nonconvex problem that is central to several applications such as sparse coding, dictionary learning, and compressed sensing. Iterative algorithms for the ℓ0-RO problem are only known to have local or subsequence convergence properties, i.e. the solution is trapped in a saddle point or in an inferior local solution. Inspired by techniques used to improve the alternating optimization (AO) of nonconvex functions, we propose a simple yet effective two-step iterative method to improve the solution to the ℓ0-RO problem. Given an initial solution, we first find the vanilla solution to ℓ0-RO via a descent method (in particular, Nesterov's accelerated gradient descent), and then estimate a new initial solution by using a scaled version of the dictionary involved in the ℓ0-RO problem, considering only a reduced number of its atoms. Our proposed algorithm is empirically demonstrated to have the best tradeoff between accuracy and computation time when compared to state-of-the-art algorithms. Furthermore, due to its structure, our proposed algorithm can be directly applied to the convolutional formulation of ℓ0-RO.
AB - The ℓ0 regularized optimization (ℓ0-RO) problem is a nonconvex problem that is central to several applications such as sparse coding, dictionary learning, and compressed sensing. Iterative algorithms for the ℓ0-RO problem are only known to have local or subsequence convergence properties, i.e. the solution is trapped in a saddle point or in an inferior local solution. Inspired by techniques used to improve the alternating optimization (AO) of nonconvex functions, we propose a simple yet effective two-step iterative method to improve the solution to the ℓ0-RO problem. Given an initial solution, we first find the vanilla solution to ℓ0-RO via a descent method (in particular, Nesterov's accelerated gradient descent), and then estimate a new initial solution by using a scaled version of the dictionary involved in the ℓ0-RO problem, considering only a reduced number of its atoms. Our proposed algorithm is empirically demonstrated to have the best tradeoff between accuracy and computation time when compared to state-of-the-art algorithms. Furthermore, due to its structure, our proposed algorithm can be directly applied to the convolutional formulation of ℓ0-RO.
KW - Nonsmooth/Nonconvex optimization
KW - escape procedure
KW - Nesterov's AGD
KW - ℓ0 regularized optimization
UR - http://www.scopus.com/inward/record.url?scp=85053895347&partnerID=8YFLogxK
U2 - 10.1109/IVMSPW.2018.8448807
DO - 10.1109/IVMSPW.2018.8448807
M3 - Conference contribution
AN - SCOPUS:85053895347
SN - 9781538609514
T3 - 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop, IVMSP 2018 - Proceedings
BT - 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop, IVMSP 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th IEEE Image, Video, and Multidimensional Signal Processing Workshop, IVMSP 2018
Y2 - 10 June 2018 through 12 June 2018
ER -