TY - GEN
T1 - Fast Convolutional Sparse Coding with ℓ0 Penalty
AU - Rodriguez, Paul
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/11/6
Y1 - 2018/11/6
N2 - Given a set of dictionary filters, the most widely used formulation of the convolutional sparse coding (CSC) problem is Convolutional BPDN (CBPDN), in which an image is represented as a sum over a set of convolutions of coefficient maps; usually, the coefficient maps are ℓ1-norm penalized in order to enforce a sparse solution. Recent theoretical results have provided meaningful guarantees for the success of popular ℓ1-norm penalized CSC algorithms in the noiseless case. However, experimental results for the ℓ0-norm penalized CSC case have not been addressed. In this paper we propose a two-step ℓ0-norm penalized CSC (ℓ0-CSC) algorithm that outperforms known solutions to the ℓ0-CSC problem in convergence rate, reconstruction performance, and sparsity. Furthermore, our proposed algorithm, a convolutional extension of our previous work [1], originally developed for the ℓ0 regularized optimization problem, includes an escape strategy to avoid being trapped at saddle points or in inferior local solutions, which are common in nonconvex optimization problems such as those that use the ℓ0-norm as the penalty function.
AB - Given a set of dictionary filters, the most widely used formulation of the convolutional sparse coding (CSC) problem is Convolutional BPDN (CBPDN), in which an image is represented as a sum over a set of convolutions of coefficient maps; usually, the coefficient maps are ℓ1-norm penalized in order to enforce a sparse solution. Recent theoretical results have provided meaningful guarantees for the success of popular ℓ1-norm penalized CSC algorithms in the noiseless case. However, experimental results for the ℓ0-norm penalized CSC case have not been addressed. In this paper we propose a two-step ℓ0-norm penalized CSC (ℓ0-CSC) algorithm that outperforms known solutions to the ℓ0-CSC problem in convergence rate, reconstruction performance, and sparsity. Furthermore, our proposed algorithm, a convolutional extension of our previous work [1], originally developed for the ℓ0 regularized optimization problem, includes an escape strategy to avoid being trapped at saddle points or in inferior local solutions, which are common in nonconvex optimization problems such as those that use the ℓ0-norm as the penalty function.
KW - Convolutional Sparse Coding
KW - Nonconvex optimization
KW - ℓ0 regularized optimization
KW - Escape procedure
KW - Nesterov's accelerated gradient descent
UR - http://www.scopus.com/inward/record.url?scp=85053878442&partnerID=8YFLogxK
U2 - 10.1109/INTERCON.2018.8526377
DO - 10.1109/INTERCON.2018.8526377
M3 - Conference contribution
AN - SCOPUS:85053878442
T3 - Proceedings of the 2018 IEEE 25th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2018
BT - Proceedings of the 2018 IEEE 25th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th IEEE International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2018
Y2 - 8 August 2018 through 10 August 2018
ER -