Robust Point Cloud Processing
through Positional Embedding

Jianqiao Zheng1 Xueqian Li1 Sameera Ramasinghe2 Simon Lucey1
1 The University of Adelaide 2 Amazon

Illustration of different point cloud processing architectures. SG: sampling and grouping. PE: positional encoding. In the per-point embedding (PPE) stages, light blue PPEs are trained end-to-end while purple ones are randomly initialized. Both PointNet-based and PCT-based methods first process points with a per-point encoder and then apply a pooling operation (usually max-pooling) to obtain a feature vector for the point cloud. We find that all of the pre-processing stages in PCT can be replaced by a simple PE (our PE-AT model) while maintaining good robustness to OOD noise. Furthermore, we can use a PE as a non-learned PPE (our model) and achieve comparable performance. A pooling operation then produces the global feature vector, and all of these models can be used in various downstream tasks, such as classification.

Abstract

End-to-end trained per-point embeddings are an essential ingredient of state-of-the-art 3D point cloud processing pipelines for tasks such as detection or alignment. Methods like PointNet, or the more recent point cloud transformer and its variants, all employ learned per-point embeddings. Despite impressive performance, such approaches are sensitive to out-of-distribution (OOD) noise and outliers. In this paper, we explore the role of an analytical per-point embedding based on the criterion of bandwidth. The concept of bandwidth enables us to draw connections with an alternate per-point embedding: positional embedding, particularly random Fourier features. We present compelling robustness results across downstream tasks such as point cloud classification and registration under several categories of OOD noise.
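
To make the idea concrete, below is a minimal NumPy sketch of a random Fourier feature positional embedding used as a non-learned per-point embedding, followed by max-pooling into a global feature vector. The bandwidth parameter sigma, the embedding dimension, and the toy point cloud are illustrative assumptions, not the paper's tuned settings or exact implementation.

import numpy as np

def rff_embedding(points, B):
    """Map each 3D point to a random Fourier feature vector.

    points: (N, 3) array of xyz coordinates.
    B:      (3, D/2) Gaussian random matrix; its scale controls the bandwidth.
    """
    proj = 2.0 * np.pi * points @ B                                   # (N, D/2)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)      # (N, D)

# Illustrative choices only: bandwidth sigma and embedding size are assumptions.
rng = np.random.default_rng(0)
sigma, dim = 1.0, 256
B = rng.normal(0.0, sigma, size=(3, dim // 2))

points = rng.uniform(-1.0, 1.0, size=(1024, 3))   # toy point cloud
per_point = rff_embedding(points, B)               # non-learned per-point embedding
global_feat = per_point.max(axis=0)                # max-pool to a cloud-level feature

A larger sigma yields a higher-bandwidth embedding that distinguishes nearby points more sharply, while a smaller sigma gives a smoother, lower-bandwidth embedding; this trade-off is the lens the paper uses to connect positional embeddings with learned per-point embeddings.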



Citation

@misc{zheng2023robust,
    title={Robust Point Cloud Processing through Positional Embedding},
    author={Jianqiao Zheng and Xueqian Li and Sameera Ramasinghe and Simon Lucey},
    year={2023},
    eprint={2309.00339},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}


This template was inspired by project pages from Chen-Hsuan Lin and Richard Zhang.