Compressed Sensing and High Resolution Quantization

Measurement quantization is a critical step in the design and dissemination of new technologies implementing the Compressed Sensing (CS) paradigm. Quantization is indeed mandatory for transmitting, storing and even processing any data sensed by a CS device.

In its most popular version, CS provides uniform theoretical guarantees for stably recovering any sparse (or compressible) signal at a sensing rate proportional to the signal's intrinsic dimension (i.e., its sparsity level). However, the distortion introduced by any quantization step is often still crudely modeled as noise with bounded $\ell_2$-norm.
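For the record, in the unquantized setting and for suitable random sensing matrices, on the order of $M \gtrsim K \log(N/K)$ measurements suffice to stably recover a $K$-sparse signal in $\mathbb{R}^N$, i.e., $M$ grows essentially linearly in the sparsity $K$ and only logarithmically in the ambient dimension $N$.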

The present research topic addresses the problem of recovering sparse or compressible signals in a given uniform or non-uniform Quantized Compressed Sensing (QCS) scenario. In particular, we assume that the signal measurements have undergone either a uniform or an optimal non-uniform scalar quantization, i.e., one optimized a priori, from a minimal-distortion standpoint, for a source with known probability density function (pdf).
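For concreteness, and with notation introduced here purely for illustration, the sensing model can be sketched as

$$ q \;=\; \mathcal{Q}\big[\Phi x\big] \in \mathbb{R}^M, \qquad \mathcal{Q}[\lambda] = \omega_k \ \text{ if } \ \lambda \in \mathcal{R}_k = [t_k, t_{k+1}), $$

where $x \in \mathbb{R}^N$ is the sparse (or compressible) signal, $\Phi \in \mathbb{R}^{M \times N}$ the sensing matrix, and the scalar quantizer $\mathcal{Q}$, applied componentwise, maps each measurement to the level $\omega_k$ of the bin $\mathcal{R}_k$ it falls in; the levels and thresholds are either uniformly spaced or optimized for the known measurement pdf (e.g., in the Lloyd-Max sense).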

We show that solving a Generalized Basis Pursuit DeNoising program (GBPDN), an $\ell_1$-minimization problem whose fidelity constraint is a (weighted) $\ell_p$-norm with an appropriately estimated radius, stably recovers strictly sparse or compressible signals. The rationale is that this $\ell_p$-norm bounds the distortion introduced by the quantization procedure more faithfully than a traditional $\ell_2$ bound.
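In the notation introduced above, GBPDN can be sketched as

$$ \hat{x} \;=\; \arg\min_{u \in \mathbb{R}^N} \|u\|_1 \quad \text{s.t.} \quad \big\| q - \Phi u \big\|_{p,w} \;\le\; \epsilon_p, $$

where $\|v\|_{p,w} := \big(\sum_i |w_i v_i|^p\big)^{1/p}$ is a weighted $\ell_p$-norm and the radius $\epsilon_p$ is an estimate of the (weighted) $\ell_p$-distortion of the quantizer; the weights and the exact value of $\epsilon_p$ depend on the quantizer at hand. Taking $p = 2$ with unit weights recovers the usual Basis Pursuit DeNoising program.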

In the context of non-uniform quantization, we also prove that the well-known theory of “companders” provides an elegant framework for estimating this distortion. In short, when the bit budget of the quantizer is high and the quantization bins are narrow, compander theory describes the action of the quantizer as the sequential application of a “compressor”, a uniform quantizer, and an “expander”.
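Schematically, writing $\mathcal{G}$ for the compressor (an increasing map, determined by the source pdf, onto a bounded interval), $\mathcal{Q}_\alpha$ for a uniform quantizer of small bin width $\alpha$, and $\mathcal{G}^{-1}$ for the expander, the high-resolution regime allows the approximation

$$ \mathcal{Q}[\lambda] \;\simeq\; \mathcal{G}^{-1}\Big( \mathcal{Q}_\alpha\big[\, \mathcal{G}(\lambda) \,\big] \Big), $$

with an approximation error that vanishes as the bit budget grows (the symbols $\mathcal{G}$ and $\mathcal{Q}_\alpha$ are introduced here only to fix ideas).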

Our results lead to a set of tradeoffs for selecting the optimal $\ell_p$-norm. On the one hand, given a budget of $B$ bits per measurement and a large number of measurements $M$, the quantization error decays as $O(2^{-B}/\sqrt{p+1})$ when $p$ increases, a favorable situation since GBPDN then reduces the reconstruction error (being ultimately consistent with the quantized CS measurements).
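To fix ideas with the bound above: at a fixed bit budget $B$, moving from the classical choice $p = 2$ to, say, $p = 10$ reduces the quantization term by a factor $\sqrt{11/3} \approx 1.9$, i.e., a gain comparable to almost one extra bit per measurement.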

On the other hand, the larger $p$, the greater the number of measurements required to ensure that the problem is “well conditioned” in the sense of CS theory, i.e., that the sensing matrix satisfies a generalized restricted isometry property (RIP). Put differently, for a given number of measurements, the range of theoretically admissible values of $p$ is upper bounded, an expected effect since the error due to quantization cannot be totally eliminated from the reconstruction.
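One common way to write such an embedding (constants and normalization indicative only) is: $\Phi$ satisfies a generalized RIP of order $K$ for the $\ell_p/\ell_2$ pair if there exist $\mu > 0$ and $\delta \in (0,1)$ such that

$$ \mu\,(1-\delta)\,\|u\|_2 \;\le\; \|\Phi u\|_p \;\le\; \mu\,(1+\delta)\,\|u\|_2 \quad \text{for all $K$-sparse } u \in \mathbb{R}^N. $$

For Gaussian sensing matrices such a property typically holds with high probability once $M$ grows on the order of $(K \log(N/K))^{p/2}$, which explains why large values of $p$ become increasingly demanding in terms of measurements.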

Collaborators

  • David K. Hammond (Oregon Institute of Technology - Wilsonville, OR, USA)
  • Jalal Fadili (GREYC, Caen, France)
