Definition of the problem
The theory of Compressed Sensing (CS) shows that many signals of interest can be reconstructed from a few linear, and typically random, observations. Interestingly, this reconstruction is made possible if the number of observations (or measurements) is adjusted to the intrinsic complexity of the signal, e.g., its sparsity for vectors or its low-rankness for matrices. Thus, this principle is a generalization of the Shannon-Nyquist sampling theorem, where the sampling rate is set by the bandwidth of the signal.
However, a significant aspect of CS systems is the effect of "quantization" on the acquired observations, in particular for the purpose of compression and transmission. Quantization is a non-linear transformation that both distorts the CS observations and increases, especially at low bit rates, the error of CS reconstruction procedures.
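As a concrete illustration of this distortion, the following minimal sketch quantizes the compressive observations of a sparse signal with a uniform midrise scalar quantizer of bin width $\delta$. All dimensions and the bin width are illustrative choices, not values used in the project.

```python
import numpy as np

# Minimal sketch: scalar quantization of compressive observations.
# Dimensions and bin width below are illustrative, not project values.
rng = np.random.default_rng(0)
N, K, M = 256, 8, 64          # ambient dimension, sparsity, measurements

x = np.zeros(N)               # K-sparse signal
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian sensing
y = Phi @ x                   # unquantized compressive observations

delta = 0.1                   # quantization bin width
q = delta * (np.floor(y / delta) + 0.5)  # uniform midrise scalar quantizer

# Each quantized value stays within delta/2 of the true observation:
print(np.max(np.abs(y - q)) <= delta / 2)  # True
```

The per-component error is bounded by $\delta/2$, but it is a deterministic, signal-dependent distortion rather than additive Gaussian noise, which is precisely what consistency-based reconstruction exploits.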
Our contributions
This project focuses on minimizing the impact of (scalar) quantization during the reconstruction of a signal from its quantized compressive observations. While more accurate and efficient quantization procedures exist in the literature, e.g., $\Sigma\Delta$, universal, binned, or vector quantization, scalar quantization remains appealing for its simplicity of implementation in most electronic devices and for its robustness against measurement loss.
In contrast to former approaches, which model the quantization distortion as additive Gaussian measurement noise and promote a Euclidean ($\ell_2$) fidelity with the signal observations, as in the Basis Pursuit DeNoise (BPDN) program, better signal reconstructions are achieved by enforcing consistency between the re-observed signal estimate and the quantized observations, i.e., by requiring that the estimate, once re-sensed and re-quantized, reproduces the same quantized values.
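To make the consistency idea concrete, here is a hedged sketch of a consistency-driven reconstruction: an iterative hard-thresholding loop whose data term penalizes only the measurements that fall outside their quantization bin $[q_i - \delta/2, q_i + \delta/2]$. This is an illustration of the consistency principle under assumed dimensions and step size, not the CoBP convex program of the cited references.

```python
import numpy as np

# Hedged sketch of consistency-driven reconstruction (NOT the CoBP convex
# program of the cited references): iterative hard thresholding whose
# residual only pushes on measurements outside their quantization bin
# [q_i - delta/2, q_i + delta/2]. All parameters are illustrative.
rng = np.random.default_rng(1)
N, K, M, delta = 256, 8, 128, 0.05

x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
q = delta * (np.floor(Phi @ x / delta) + 0.5)   # quantized observations

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

x_hat = np.zeros(N)
mu = 0.5                       # assumed step size
for _ in range(500):
    r = Phi @ x_hat - q
    # Subgradient of the consistency cost: measurements already inside
    # their bin contribute nothing, so a consistent estimate is a fixed point.
    r_out = np.sign(r) * np.maximum(np.abs(r) - delta / 2, 0.0)
    x_hat = hard_threshold(x_hat - mu * Phi.T @ r_out, K)

print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```

The key design choice is the shrunken residual `r_out`: unlike the $\ell_2$ fidelity of BPDN, which keeps pulling the estimate toward the quantized values themselves, the consistency term vanishes as soon as every re-observed measurement lands back in its own quantization bin.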
The simulation results, shown below, clearly highlight the advantage of consistent signal reconstruction when the ratio $M/K$ is large. Moreover, CoBP (Consistent Basis Pursuit) approaches an error decay of $O(M^{-1})$, similar to the distance decay of consistent $K$-sparse vectors.
Related References: (Moshtaghpour et al. 2016) (Jacques 2017)