TY - GEN
T1 - RANDOM GENERATED DICTIONARIES FOR CONVOLUTIONAL SPARSE CODING
T2 - 29th IEEE International Conference on Image Processing, ICIP 2022
AU - Rodriguez, Paul
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The most basic ELM (extreme learning machine) architecture consists of a single-hidden-layer feedforward neural network with random input weights, plus a densely connected output layer whose weights must be learned. Among other interpretations, it can be understood as using an untrained dictionary (with random entries) along with a non-linear activation function to obtain a representation. Compared to Neural Networks (NN) or Convolutional NN (CNN), ELM is very fast to train. Inspired by the ELM methodology, in this paper we explore the usefulness of a randomly generated filterbank (FB) as the convolutional dictionary in convolutional sparse coding (CSC) representations and assess its performance, compared to learned FBs, for simple applications such as denoising and super-resolution. Our main conclusions are that a randomly generated FB (i) has a competitive (restoration) performance when compared to a learned FB, (ii) exhibits performance that depends on the actual distribution of its values (e.g. Gaussian, uniform, lognormal, etc.) and on the problem at hand, and (iii) may ease or potentially eliminate the need for the CDL (convolutional dictionary learning) step in CSR (convolutional sparse representation) applications.
AB - The most basic ELM (extreme learning machine) architecture consists of a single-hidden-layer feedforward neural network with random input weights, plus a densely connected output layer whose weights must be learned. Among other interpretations, it can be understood as using an untrained dictionary (with random entries) along with a non-linear activation function to obtain a representation. Compared to Neural Networks (NN) or Convolutional NN (CNN), ELM is very fast to train. Inspired by the ELM methodology, in this paper we explore the usefulness of a randomly generated filterbank (FB) as the convolutional dictionary in convolutional sparse coding (CSC) representations and assess its performance, compared to learned FBs, for simple applications such as denoising and super-resolution. Our main conclusions are that a randomly generated FB (i) has a competitive (restoration) performance when compared to a learned FB, (ii) exhibits performance that depends on the actual distribution of its values (e.g. Gaussian, uniform, lognormal, etc.) and on the problem at hand, and (iii) may ease or potentially eliminate the need for the CDL (convolutional dictionary learning) step in CSR (convolutional sparse representation) applications.
KW - convolutional sparse coding
KW - extreme learning machines
UR - http://www.scopus.com/inward/record.url?scp=85146700110&partnerID=8YFLogxK
U2 - 10.1109/ICIP46576.2022.9898050
DO - 10.1109/ICIP46576.2022.9898050
M3 - Conference contribution
AN - SCOPUS:85146700110
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 126
EP - 130
BT - 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings
PB - IEEE Computer Society
Y2 - 16 October 2022 through 19 October 2022
ER -