Nowadays, when a viewer watches video content, his/her viewpoint is fixed to one of the cameras that recorded the scene. To increase the viewer's sense of immersion, the next generation of video content will allow him/her to define his/her viewpoint interactively. This domain is known as free-viewpoint rendering, and consists of interpolating views from images captured by a set of real cameras. However, state-of-the-art solutions require the real cameras to share very similar viewpoints, meaning that a nearby scene can be rendered only with a dense camera network, and a distant scene only with very high-resolution cameras. These requirements make free-viewpoint rendering an expensive technology, slowing its entry into the market.