
Model-Based MPEG Compression of Synthetic Video Sequences

QUAGLIA, Davide;
2004

Abstract

The paper addresses the problem of improving MPEG compression of synthetic video sequences by exploiting knowledge of the original 3D model. Two techniques are proposed for the specific case of a virtual walkthrough in which the point of view is the only moving element in the scene. Technique 1 uses only P-frames when the position and direction of the point of view do not change: in this case each frame is identical to the previous one, so P-frames can simply be repeated without any encoding effort, reducing computational complexity. Technique 2 increases the quantization parameter while the direction of the point of view is changing, since the resulting increase in distortion is not clearly perceived for fast-moving content because of the temporal masking effect. Experimental results compared with model-unaware encoding show that Technique 1 reduces the bitstream size by about 9% without any appreciable decrease in perceptual quality, while CPU encoding time is reduced by about 18%. The combination of both techniques reduces the bitrate by about 13% with a slight increase in quantization noise, which is partially hidden by the temporal masking effect. Video samples are available at http://media.polito.it/mpeg3d/.
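The two techniques amount to a simple per-frame control policy driven by the camera state. A minimal sketch of that policy, with hypothetical names and a hypothetical `qp_boost` value (the paper does not give a specific QP increment):

```python
def choose_frame_params(camera_moved, camera_rotating, base_qp, qp_boost=4):
    """Decide frame handling and quantization parameter from camera state.

    Technique 1: if the viewpoint is static, the frame is identical to the
    previous one, so the previous P-frame can be repeated with no motion
    estimation or residual coding at all.
    Technique 2: while the view direction is changing, raise the quantization
    parameter; temporal masking hides the extra distortion.

    Returns (frame_mode, qp), where frame_mode is "P-repeat" or "P".
    """
    if not camera_moved and not camera_rotating:
        return ("P-repeat", base_qp)   # skip encoding work entirely
    qp = base_qp + qp_boost if camera_rotating else base_qp
    return ("P", qp)
```

For example, a static viewpoint yields `("P-repeat", base_qp)`, a pure translation keeps the base quantizer, and a rotation returns the boosted quantizer. A real encoder would additionally insert periodic I-frames for random access, which this sketch omits.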
ISBN: 0780385543
Keywords: Synthetic Video, Video Coding, MPEG


Use this identifier to cite or link to this document: http://hdl.handle.net/11562/27327
Citations
  • Scopus: 2
  • Web of Science (ISI): 2