2019, Kivi, P., Master of Science thesis, Tampere University.
Abstract: State-of-the-art scanning and capturing devices are able to produce surface point cloud models of a wide range of real-world objects. Visualizing and rendering enormous point clouds with millions or billions of points is demanding. VR and AR applications can utilize embedded real-world objects to generate visually pleasing and immersive virtual worlds. In order to achieve convincing real-life equivalents in VR, rendering techniques that can replicate realistic material and lighting effects are needed. This can be achieved by utilizing ray tracing methods to render the virtual world onto a monitor or a head-mounted display.
Virtual reality applications need real-time stereoscopic rendering at high frame rates and resolutions to produce a realistic and comfortable experience. This places high demands on a point cloud ray tracing pipeline, which needs efficient intersection testing between rays and point cloud models. An easily intersectable global surface can be reconstructed from the point cloud model with, e.g., triangle mesh reconstruction. However, this can be computationally demanding and even wasteful if parts of the model are out of view or occluded. Direct point cloud ray tracing methods instead consider local features of the point cloud to generate intersectable surfaces only when needed.
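To make the idea concrete, one common local primitive in direct point cloud rendering is an oriented disk ("splat") fitted to a point sample and its normal; a ray is tested against the splat's plane and accepted only if the hit lies within the splat's radius. The sketch below is illustrative only and not taken from the thesis; the function name, parameters, and the splat formulation are assumptions.

```python
# Illustrative sketch: ray-splat intersection, one possible local primitive
# for direct point cloud ray tracing. Names and formulation are assumptions.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_ray_splat(origin, direction, center, normal, radius):
    """Intersect a ray with an oriented disk (splat) around a point sample.

    origin, direction: ray origin and (normalized) direction, as 3-vectors.
    center, normal:    the point sample's position and surface normal.
    radius:            the splat's extent around the sample.
    Returns the ray parameter t at the hit, or None on a miss.
    """
    denom = dot(direction, normal)
    if abs(denom) < 1e-9:                        # ray parallel to splat plane
        return None
    t = dot([c - o for c, o in zip(center, origin)], normal) / denom
    if t < 0:                                    # plane behind the ray origin
        return None
    hit = [o + t * d for o, d in zip(origin, direction)]
    if math.dist(hit, center) > radius:          # outside the disk's extent
        return None
    return t
```

In a full direct ray tracer, such a test would typically run only on the point samples returned by an acceleration structure query along the ray, so no global surface is ever built.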
In this thesis, we survey and compare different methods for directly ray tracing point cloud models without global surface reconstruction. The methods are compared using asymptotic complexity analysis, and we conclude that direct ray tracing of point clouds can be computationally more efficient than global surface reconstruction.