ISP Seminars

Previous Seminars

Blind Deconvolution of PET Images using Anatomical Priors
Adriana González and Stéphanie Guérit
28 April 2016

Positron emission tomography (PET) imaging provides clinicians with relevant information about the metabolic activity of a patient. This imaging modality suffers, however, from a low spatial resolution due to both physical and instrumental factors. These factors can be modeled by a blurring function, often estimated experimentally by imaging a radioactive linear source. This estimate is useful to restore the PET image, either during the reconstruction process or in a post-processing phase. Nevertheless, in some situations (e.g., in multicentric studies where images come from different clinical centers) we have access neither to the raw data nor to the scanner, and the blurring function therefore cannot be estimated directly. Is it still possible to enhance the PET image when the blurring function is unknown? Blind deconvolution methods estimate the blurring function and restore the image simultaneously. By using the available prior information, we can regularize the inverse problem and thereby reduce the set of potential solutions to those that are meaningful. In this talk, we will present the blind deconvolution method we developed, which uses prior information on the image based on the usual distribution of the radiotracer in the body and on high-resolution anatomical images from CT imaging.
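
For illustration, here is a minimal sketch (in Python/NumPy) of the kind of alternating scheme such a blind deconvolution method can follow: the image and the blur are re-estimated in turn with a regularized Wiener-type update, and a non-negativity prior is enforced on the image. The quadratic regularization and non-negativity used here stand in for the anatomical and radiotracer priors of the talk; this is not the authors' actual algorithm.

    import numpy as np

    def wiener_update(Y, K, reg):
        """Regularized (Tikhonov/Wiener-type) division in the Fourier domain."""
        return np.conj(K) * Y / (np.abs(K) ** 2 + reg)

    def blind_deconvolution(y, n_iter=20, reg_x=1e-2, reg_h=1e-1):
        """Alternately re-estimate the image x and the blur h from the blurry image y."""
        Y = np.fft.fft2(y)
        H = np.ones_like(Y)                              # flat initial guess for the blur spectrum
        for _ in range(n_iter):
            X = wiener_update(Y, H, reg_x)               # image update, blur fixed
            x = np.clip(np.fft.ifft2(X).real, 0, None)   # non-negativity: activity cannot be negative
            X = np.fft.fft2(x)
            H = wiener_update(Y, X, reg_h)               # blur update, image fixed
        h = np.fft.ifft2(H).real
        return x, h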

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 28 April 2016 at 09:30 (45 min.)

On the decomposition of blurring operators in wavelet bases
Prof. Pierre Weiss, ITAV, U. Toulouse, France (invited talk)
11 March 2016

Image deblurring is a fundamental image processing problem usually solved with computationally intensive procedures. After significant progress due to advances in convex optimization (e.g., accelerated gradient schemes à la Nesterov), work on this problem has seemingly stalled over the last five years. In this talk, I will first present a few properties of blurring operators (convolutions or, more generally, regularizing linear integral operators) when expressed in wavelet bases. These properties were one of the initial motivations for Y. Meyer to develop wavelet theory. Surprisingly, they have been used only rarely in image processing, despite the success of wavelets in that domain. We will then show that these theoretical results can have an important impact on the practical resolution of inverse problems, and of deblurring in particular. As an example, we will show that they yield speed-ups of one to two orders of magnitude in the resolution of convex l1-l2 deblurring problems.
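
As a point of reference, the sketch below spells out a standard l1-l2 deblurring iteration (ISTA) with sparsity enforced on wavelet coefficients, written in one dimension for brevity. It assumes NumPy and PyWavelets and only illustrates the class of problems whose resolution the talk accelerates, not the wavelet-domain decomposition of the operator itself.

    import numpy as np
    import pywt  # PyWavelets

    def soft(c, t):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def ista_deblur(y, h, lam=1e-2, n_iter=100, wave="db4"):
        """Minimize 0.5*||h * x - y||^2 + lam*||W x||_1 by ISTA (circular convolution, 1D)."""
        H = np.fft.fft(h, n=len(y))
        L = np.max(np.abs(H)) ** 2                                # Lipschitz constant of the data term
        x = y.copy()
        for _ in range(n_iter):
            r = np.fft.ifft(H * np.fft.fft(x)).real - y           # residual h * x - y
            grad = np.fft.ifft(np.conj(H) * np.fft.fft(r)).real   # gradient of the data term
            z = x - grad / L
            coeffs = pywt.wavedec(z, wave)                        # wavelet analysis
            coeffs = [soft(c, lam / L) for c in coeffs]           # shrink the coefficients
            x = pywt.waverec(coeffs, wave)[:len(y)]               # wavelet synthesis
        return x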

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Friday, 11 March 2016 at 10:45 (45 min.)

FBMC/OQAM transceivers for 5G mobile communication systems
François Rottenberg (DIGICOM, ICTEAM, UCL)
2 March 2016

One of the main challenges investigated for 5G wireless communication systems is the ability to handle a rapidly growing number of devices establishing connections with a wide variety of requirements: bit rate, flexibility, energy efficiency, etc. In this respect, offset-QAM-based filterbank multicarrier (FBMC-OQAM) has been shown to be a promising alternative to cyclic-prefix orthogonal frequency-division multiplexing (CP-OFDM), the modulation used in most wireless communication systems today, such as Wi-Fi, DSL, LTE and DVB.

Nyquist Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Wednesday, 2 March 2016 at 16:15 (45 min.)

(invited talk) "Applications of PCA and low-rank plus sparse decompositions in high-contrast Exoplanet imaging
Carlos Gomez, VORTEX Project, Department of Astrophysics, Geophysics & Oceanography, ULg
4 February 2016

Only a small fraction of the confirmed exoplanet candidates known to date have been discovered through direct imaging. Indeed, the task of observing planets directly is very challenging due to the huge contrast between the host star and its potential companions, the small angular separation, and the image degradation caused by Earth’s turbulent atmosphere. Post-processing algorithms play a critical role in direct imaging of exoplanets by boosting the detectability of real companions in a noisy background. Among these data processing techniques, the most recently proposed is Principal Component Analysis (PCA), a ubiquitous statistical technique already used in background subtraction problems. Inspired by recent advances in machine learning algorithms such as robust PCA, we propose a local three-term decomposition (LLSG) that surpasses current PCA-based post-processing algorithms in terms of detectability of companions at near real-time speed. We test the performance of the new algorithm on a training dataset and show that the LLSG decomposition reaches a higher signal-to-noise ratio and has overall better performance in the Receiver Operating Characteristic (ROC) space.
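
As a reference point, the sketch below shows the classical-PCA baseline mentioned above applied to a cube of frames: the stellar halo is modelled by the leading principal components and subtracted, leaving faint companions in the residuals. The cube shape, parameter names and number of components are illustrative; this is not the LLSG algorithm itself.

    import numpy as np

    def pca_subtract(cube, n_pc=5):
        """Subtract a low-rank (PCA) model of the stellar halo from each frame.

        cube : array of shape (n_frames, ny, nx) holding the coronagraphic frames.
        """
        n, ny, nx = cube.shape
        X = cube.reshape(n, ny * nx)
        X = X - X.mean(axis=0)                       # remove the mean frame
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        pcs = Vt[:n_pc]                              # leading principal components
        model = (X @ pcs.T) @ pcs                    # projection onto the PC subspace
        residuals = X - model                        # companions live in the residuals
        return residuals.reshape(n, ny, nx)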

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 4 February 2016 at 11:00 (45 min.)

Sensing-based Resource Allocation in Cognitive Radio Networks
Dr Nafiseh Janatian (DIGICOM, ICTEAM, UCL)
19 November 2015

One of the main requirements of cognitive radio (CR) systems is the ability to reliably detect the presence of licensed primary users (PUs). After determining the availability of the licensed spectrum bands, CR users must select appropriate transmission parameters to make better use of these bands while avoiding interference with the licensed users. In this presentation, the problem of joint sensing and resource allocation with the goal of minimizing energy consumption in code-division multiple access (CDMA)-based CR networks is addressed. A network consisting of multiple secondary users (SUs) and a secondary base station (BS) implementing a two-phase protocol is considered. In the first phase, censor-based cooperative spectrum sensing is carried out to detect the PU’s presence. When the channel is estimated to be free, the SUs transmit data in the uplink to the BS in the second phase using CDMA. The sensing parameters and the transmit powers of the SUs are optimized jointly to minimize total energy consumption, subject to constraints on the SUs’ quality of service (QoS) and on the detection probability of the PU. Furthermore, the separate optimization problem, in which the sensing parameters of the SUs are set regardless of the allocated resources, is studied. The energy saving of joint versus separate optimization is investigated through numerical experiments, which show that joint optimization can reduce energy consumption significantly compared with separate optimization at low sensing SNRs.
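
Schematically, and with generic notation that is not necessarily the authors', the joint problem described above can be written as minimizing the total (sensing plus transmission) energy subject to QoS and detection constraints:

    \min_{\tau,\,\varepsilon,\,\{p_i\}} \; E_{\mathrm{sense}}(\tau,\varepsilon) + E_{\mathrm{tx}}(\{p_i\})
    \quad \text{s.t.} \quad \mathrm{SINR}_i(\{p_i\}) \ge \gamma_i \;\; \forall i,
    \qquad P_d(\tau,\varepsilon) \ge \bar{P}_d,

where \tau and \varepsilon denote the sensing duration and detection threshold, p_i the transmit power of SU i, \gamma_i its QoS target and \bar{P}_d the required detection probability; the separate scheme fixes (\tau, \varepsilon) first and optimizes the powers afterwards.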

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 19 November 2015 at 14:00 (45 min.)

Pilots allocation for sparse channel estimation in multicarrier systems
François Rottenberg (DIGICOM, ICTEAM, UCL)
5 November 2015

Wireless channels experience multipath fading, which can be modeled by a sparse discrete multi-tap impulse response. Estimating this channel is crucial to allow the receiver to properly recover the transmitted signal. This presentation investigates the problem of allocating pilots for sparse channel estimation in multicarrier systems. When the number of pilots is larger than or equal to the maximal channel length, the problem is well understood and the optimal allocation is equispaced. However, for long channels this would require a very large number of pilots, decreasing the throughput of the system. Therefore, compressed sensing (CS) techniques are considered to estimate the sparse channel from a limited number of pilots. In that case, the placement of the pilots remains an open issue. This talk proposes a two-step hybrid allocation of the pilots that takes the maximal channel length into account to restrict the frequency candidates. The performance of this allocation is demonstrated through simulations and comparisons with other classical allocations.
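
To make the compressed sensing step concrete, the sketch below recovers a sparse channel impulse response from a handful of pilot subcarriers with orthogonal matching pursuit. The randomly drawn pilot positions and the dimensions are purely illustrative and do not correspond to the proposed hybrid allocation.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: recover a k-sparse x from y ≈ A x."""
        r, support = y.copy(), []
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(k):
            j = int(np.argmax(np.abs(A.conj().T @ r)))            # most correlated column
            support.append(j)
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ sol                           # update the residual
        x[support] = sol
        return x

    # Illustrative setup: N subcarriers, channel of at most L taps, P pilots, k-sparse channel.
    N, L, P, k = 256, 64, 32, 4
    rng = np.random.default_rng(0)
    pilots = np.sort(rng.choice(N, P, replace=False))             # pilot subcarrier indices
    F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)  # partial DFT matrix
    h = np.zeros(L, dtype=complex)
    h[rng.choice(L, k, replace=False)] = rng.standard_normal(k)   # sparse impulse response
    y = F[pilots] @ h                                             # noiseless pilot observations
    h_hat = omp(F[pilots], y, k)                                  # recovered channel taps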

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 5 November 2015 at 14:00 (45 min.)

Blind Interference Alignment for Cellular Networks
Máximo Morales (DIGICOM, ICTEAM, UCL)
15 October 2015

As denser and more heterogeneous cellular networks are required to satisfy user demand for mobile communications, interference becomes the principal limiting factor. Several schemes, such as Linear Zero-Forcing Beamforming (LZFB) or Interference Alignment (IA), have been proposed as means of achieving enormous data rates. These transmission schemes rely on exploiting Channel State Information at the Transmitter (CSIT) to achieve the optimal Degrees of Freedom (DoF), also known as the multiplexing gain. It is worth remarking that satisfying this requirement in cellular environments means spending resources on channel feedback and on backhaul links among the base stations (BSs). As a consequence, the rate increase achieved by these schemes can be rendered futile by the cost of providing CSIT.

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 15 October 2015 at 14:00 (45 min.)

Compressive Classification: A Guided Tour
Valerio Cambareri
24 September 2015

The mature concept of compressed sensing (CS) is being transferred to the application level as a means of saving the physical resources spent in the analog-to-digital interface of challenging signal and image acquisition tasks, i.e., when the underlying sensing process is critically expensive in time, power, sensor area or cost. For most structured signals, this method amounts to applying a dimensionality-reducing random matrix, followed by an accurate yet computationally expensive recovery algorithm capable of producing full-resolution reconstructions from such low-dimensional measurements.
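
A toy sketch of the alternative the talk builds towards is given below: after compressive acquisition with a random Gaussian matrix, a signal is classified directly in the measurement domain, without the expensive full-resolution recovery. The dimensions, the class templates and the nearest-template rule are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 1024, 64                                 # ambient vs. compressed dimension

    # Two class "templates" and a noisy example of class 0 in the ambient domain.
    t0, t1 = rng.standard_normal(n), rng.standard_normal(n)
    x = t0 + 0.1 * rng.standard_normal(n)

    # Compressive acquisition: a single random projection replaces full-resolution sampling.
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y, y0, y1 = A @ x, A @ t0, A @ t1

    # Classify directly in the measurement domain: random projections roughly preserve
    # distances, so no signal recovery is needed for this decision.
    label = int(np.linalg.norm(y - y1) < np.linalg.norm(y - y0))
    print("predicted class:", label)                # expected: 0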

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 24 September 2015 at 14:00 (45 min.)

Fusion-based techniques in computer vision
Cosmin Ancuti
11 September 2015

Image fusion is a well-studied process that aims to blend the input information seamlessly, preserving in the composite output image only the relevant features of each source. In general, image fusion combines complementary imagery from multiple sources in order to enhance the information apparent in the respective source images, as well as to increase the reliability of interpretation and classification.
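
As a generic illustration of the scheme (not of any specific method from the talk), the sketch below blends several aligned input images with per-pixel weights derived from a local-contrast measure, here a discrete Laplacian.

    import numpy as np

    def laplacian(img):
        """4-neighbour discrete Laplacian, used as a simple per-pixel contrast measure."""
        out = -4.0 * img
        out += np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
        out += np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
        return out

    def fuse(images, eps=1e-6):
        """Blend the input images with weights proportional to their local contrast."""
        weights = np.stack([np.abs(laplacian(im)) + eps for im in images])
        weights /= weights.sum(axis=0)               # normalize the weights at each pixel
        return (weights * np.stack(images)).sum(axis=0)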

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Friday, 11 September 2015 at 11:00 (45 min.)

1930s Analysis for 2010s Signal Processing: Recent Progress on the Superresolution Question
Prof. Laurent Demanet, Imaging and Computing Group, MIT, USA
30 July 2015

The ability to access signal features below the diffraction limit of an imaging system is a delicate nonlinear phenomenon called superresolution. The main theoretical question in this area is still mostly open: it concerns the precise balance of noise, bandwidth, and signal structure that enables super-resolved recovery. When structure is understood as sparsity on a grid, we show that there is a precise scaling law that extends Shannon-Nyquist theory, and which governs the asymptotic performance of a class of simple "subspace-based" algorithms. This law is universal in the minimax sense that no statistical estimator can outperform it significantly. By contrast, compressed sensing is in many cases suboptimal for the same task. Joint work with Nam Nguyen.
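
For concreteness, the sketch below implements one of the simple "subspace-based" estimators alluded to above: a MUSIC-style pseudospectrum built from a Hankel matrix of the samples, whose peaks locate the spike frequencies. The sizes, the frequency grid and the noiseless model are illustrative.

    import numpy as np

    def music_pseudospectrum(y, k, grid):
        """MUSIC-style pseudospectrum for k spikes from uniform samples y (a sum of complex
        exponentials); peaks over `grid` (frequencies in [0, 1)) locate the spikes."""
        m = len(y) // 2
        # Hankel matrix of the samples: its row space is spanned by the k exponentials.
        H = np.array([y[i:i + m] for i in range(len(y) - m + 1)])
        _, _, Vt = np.linalg.svd(H)
        noise = Vt[k:]                                              # rows spanning the noise subspace
        atoms = np.exp(2j * np.pi * np.outer(grid, np.arange(m)))   # candidate exponentials
        proj = noise @ atoms.T                                      # projections onto the noise subspace
        return 1.0 / (np.abs(proj) ** 2).sum(axis=0)                # small projection -> large peak

    # Example (noiseless): two spikes 0.01 apart, i.e. below the 1/len(y) Rayleigh spacing.
    # t = np.arange(64)
    # y = np.exp(2j * np.pi * 0.30 * t) + 0.5 * np.exp(2j * np.pi * 0.31 * t)
    # spec = music_pseudospectrum(y, 2, np.linspace(0, 1, 4000, endpoint=False))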

Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 30 July 2015 at 14:00 (45 min.)