Multiview Plus Depth Video Coding with Temporal Prediction View Synthesis
Cagnazzo M.
2016
Abstract
Multiview video (MVV) plus depth formats use view synthesis to build intermediate views from existing adjacent views at the receiver side. Traditional view synthesis exploits the disparity information to interpolate an intermediate view by considering inter-view correlations. However, temporal correlation between different frames of the intermediate view can also be used to improve the synthesis. We propose a new coding scheme for 3-D High Efficiency Video Coding (HEVC) that allows us to take full advantage of temporal correlations in the intermediate view and improve the existing synthesis from adjacent views. We use optical flow techniques to derive dense motion vector fields (MVFs) from the adjacent views and then warp them to the intermediate view. This allows us to construct multiple temporal predictions of the synthesized frame. A second contribution is an adaptive fusion method that judiciously selects between temporal and inter-view prediction to eliminate artifacts associated with each prediction type. The proposed system is compared against the state-of-the-art view synthesis reference software 1-D Fast technique used in 3-D HEVC standardization. Three intermediate views are synthesized. Gains of up to 1.21 dB Bjontegaard Delta peak SNR are shown when evaluated on several standard MVV test sequences.
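The two ideas in the abstract, warping a dense motion vector field to produce a temporal prediction of the intermediate view and then fusing it per pixel with the inter-view synthesis, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: it assumes a grayscale frame, uses nearest-neighbour backward warping, and stands in a simple disocclusion-mask rule for the paper's adaptive fusion criterion. The function names `warp_temporal` and `fuse` and the mask-based selection are assumptions for illustration.

```python
import numpy as np

def warp_temporal(prev_frame, mvf):
    """Backward-warp the previous frame of the intermediate view with a
    dense motion vector field (h x w x 2, x/y displacements per pixel).
    Toy nearest-neighbour warping; the paper derives the MVF by optical
    flow on the adjacent views and warps it to the intermediate view."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Round displaced coordinates and clamp them to the frame borders.
    src_y = np.clip(np.round(ys + mvf[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + mvf[..., 0]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

def fuse(temporal_pred, interview_pred, hole_mask):
    """Per-pixel fusion of the two predictions: use the temporal
    prediction inside disocclusion holes of the inter-view synthesis,
    and the inter-view prediction elsewhere. A crude stand-in for the
    paper's adaptive fusion, which selects predictions to suppress the
    artifacts typical of each prediction type."""
    return np.where(hole_mask, temporal_pred, interview_pred)
```

With a zero motion field, `warp_temporal` returns the previous frame unchanged; a nonzero `hole_mask` then swaps in temporal pixels only where the inter-view synthesis is unreliable.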
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.