CORDIS - EU research results

Optimized Dynamic Point Cloud Compression

Periodic Reporting for period 1 - OPT-PCC (Optimized Dynamic Point Cloud Compression)

Reporting period: 2020-11-23 to 2021-11-22

Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds can be used in real-time 3D immersive telepresence, automotive and robotic navigation, as well as medical imaging. Compared to traditional video technology, point cloud systems allow free viewpoint rendering, as well as mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements, as point clouds are typically represented by the geometry and colour (texture) of millions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes.

To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC).

In V-PCC, the input point cloud is first decomposed into a set of patches, which are independently mapped to a two-dimensional grid of uniform blocks. This mapping is then used to store the geometry and colour information as one geometry video and one colour video. Next, the generated geometry video and colour video are compressed with a video coder, e.g. H.265/HEVC. Finally, the geometry and colour videos, together with metadata (occupancy map for the two-dimensional grid, auxiliary patch, and block information), are multiplexed to generate the bit stream. In the video coding step, compression is achieved with quantization, which is determined by a quantization step or, equivalently, a quantization parameter (QP).
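As background for the last point, in H.265/HEVC the quantization step size grows exponentially with the QP, roughly doubling every six QP values. A minimal sketch of this standard relationship (generic HEVC behaviour, not project-specific code):

```python
# Approximate relationship between the HEVC quantization parameter (QP)
# and the quantization step size: Qstep roughly doubles every 6 QP values.
def qstep(qp: int) -> float:
    """Approximate HEVC quantization step for a given QP (0..51)."""
    return 2 ** ((qp - 4) / 6)

# Increasing QP by 6 doubles the step: coarser quantization, lower bitrate,
# higher distortion.
ratio = qstep(28) / qstep(22)
```

This is why choosing the geometry and colour QPs directly trades bitrate against reconstruction quality, which is the optimization problem the project addresses.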
The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance of V-PCC, i.e. algorithms that minimize the reconstruction error (distortion) for a given bit budget, or, equivalently, minimize the bitrate for the same reconstruction error.
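Stated formally (with $Q_g$ and $Q_c$ denoting the geometry and colour quantization steps and $R_T$ the bit budget; this notation is introduced here for illustration and may differ from that used in [1]):

```latex
\min_{Q_g,\,Q_c} \; D(Q_g, Q_c)
\quad \text{subject to} \quad
R(Q_g, Q_c) \le R_T
```

where $D$ is the reconstruction distortion and $R$ the bitrate of the compressed point cloud.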

The scientific and training objectives of the project are as follows.

1. O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bitrate and distortion;
2. O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information;
3. O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bitrate by at least 20% for the same reconstruction quality;
4. O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills.
The project has fully achieved its objectives and milestones.

1. O1: Analytical models that give the distortion and bitrate of V-PCC as a function of the geometry and colour quantization steps were built. Experimental results on several point clouds show that the bitrates and distortions computed by our analytical models have a high squared correlation coefficient (0.9969 for the bitrate and 0.9989 for the distortion) with the actual values computed by encoding and decoding the point clouds. This shows that the models are accurate.
2. O2: A fast search algorithm based on a variant of differential evolution (DE) was developed. The algorithm exploits the analytical models to optimize the allocation of the available bit budget between the geometry information and colour information. Another algorithm based on DE was also developed. This algorithm minimizes the actual distortion instead of its analytical model. It is slower than the first algorithm but produces higher-quality solutions.
3. O3: The algorithms developed in O2 were combined with V-PCC Test Model v12. The first algorithm did not outperform the state-of-the-art in terms of rate-distortion performance. However, the second one reduced the bitrate by 31% on average compared with the state-of-the-art, for the same point cloud quality.
4. O4: The researcher was trained in algorithm design and metaheuristic optimisation by the two DMU supervisors and one supervisor from the University of Nottingham. He was trained in computer graphics and media production as part of his secondment at Fraunhofer HHI in Berlin. DMU also provided extensive training in leadership and management skills.

The researcher presented the results of the project in [1] and [2].

Data files from the research, including source code, were made available on Zenodo. The accepted manuscripts of the papers were made available on DORA (https://www.dora.dmu.ac.uk).


The researcher presented the OPT-PCC project to the COL Lab at the University of Nottingham.

One workshop on point cloud compression was organized at ICIG 2021.

A patent on the novel ideas of OPT-PCC was filed.

A proposal to include the OPT-PCC rate-distortion optimisation technique in TMC2 was submitted to MPEG.

A project website (https://www.dmu.ac.uk/research/centres-institutes/ioes/optimised-dynamic-point-cloud-compression-project.aspx/) was created.


[1] H. Yuan, R. Hamzaoui, F. Neri, S. Yang, Model-based rate-distortion optimized video-based point cloud compression with differential evolution, in: Proc. 11th International Conference on Image and Graphics (ICIG 2021), Haikou, China, August 2021.
[2] H. Yuan, R. Hamzaoui, F. Neri, S. Yang, T. Wang, Global rate-distortion optimization of video-based point cloud compression with differential evolution, to appear in: Proc. 23rd International Workshop on Multimedia Signal Processing (IEEE MMSP 2021), Tampere, Oct. 2021.
Data fitting was used to build analytical models for the rate and distortion. The models were published in [1].
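The fitting procedure and the actual model forms are given in [1]. As an illustration of the general approach, a hedged sketch follows, assuming, purely for illustration, a one-parameter power-law model R(Q) = a * Q**b, which becomes linear after taking logarithms and can then be fitted by ordinary least squares:

```python
import math

def fit_power_law(qs, rates):
    """Closed-form least-squares fit of log R = log a + b * log Q.

    Illustrative only: the project's actual rate and distortion models
    depend on both the geometry and colour quantization steps (see [1]).
    """
    xs = [math.log(q) for q in qs]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope and intercept of the simple linear regression in log-log space.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check: data generated from a known model is recovered exactly.
qs = [4, 8, 16, 32, 64]
rates = [100.0 * q ** -1.2 for q in qs]
a, b = fit_power_law(qs, rates)
```

Once such closed-form models are fitted, evaluating a candidate (geometry, colour) quantization pair is almost free, which is what makes the model-based search fast.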

To solve the rate-distortion optimization problem, a DE variant was applied to the analytical models [1]. Starting from a population of randomly selected solutions, DE generates for each solution an offspring by perturbing another solution from the population with a scaled difference of two randomly selected solutions from the population. If the offspring is a better solution than the parent, the parent is replaced by the offspring. This procedure is repeated for a given number of iterations.
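The loop described above can be sketched as follows. This is a generic DE skeleton applied to a toy two-variable objective; the project's actual DE variant, objective (the analytical distortion model under a bit-budget constraint), and parameter settings are described in [1], and all names and values here are illustrative:

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.5, CR=0.9, iters=200, seed=0):
    """Minimal differential evolution sketch (illustrative parameters)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i, parent in enumerate(pop):
            # Perturb one solution with a scaled difference of two others,
            # all drawn from the population (excluding the parent).
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Binomial crossover with the parent, then clamp to the bounds.
            trial = [mutant[d] if rng.random() < CR else parent[d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi)
                     for t, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: the offspring replaces the parent if better.
            if f(trial) < f(parent):
                pop[i] = trial
    return min(pop, key=f)

# Toy objective with its minimum at (3, -2).
best = de_minimize(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2,
                   bounds=[(-10, 10), (-10, 10)])
```

In the model-based algorithm of [1], the objective evaluated inside this loop is the analytical distortion model, so each evaluation is cheap; the second algorithm of [2] replaces it with the actual encoder, trading speed for solution quality.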

An alternative to the model-based optimization is to apply the DE variant to the actual rate and distortion functions [2].

The rate-distortion performance of the first DE algorithm was slightly lower than that of the state-of-the-art method [3]. However, the bit allocation error (BE) was lower. The average BE for the method in [3] was 11.94%, while that of the proposed algorithm was only 4.65% [1].

The second algorithm outperformed the state-of-the-art method [3] in terms of rate-distortion performance and bit allocation accuracy. The average decrease in bitrate for the same distortion was 31%. The average BE was 0.45%, compared with 10.75% for the method in [3] (see [2]).

[3] Q. Liu, H. Yuan, J. Hou, R. Hamzaoui, H. Su, Model-based joint bit allocation between geometry and color for video-based 3D point cloud compression, IEEE Transactions on Multimedia, doi: 10.1109/TMM.2020.3023294.
Figure: OPT-PCC vs. state-of-the-art video-based point cloud compression