Papers Read on AI

October 24, 2022  

Time Will Tell: New Outlooks and A Baseline for Temporal Multi-View 3D Object Detection

While recent camera-only 3D detection methods leverage multiple timesteps, the limited history they use significantly hampers the extent to which temporal fusion can improve object perception. Observing that existing works' fusion of multi-frame images is an instance of temporal stereo matching, we find that performance is hindered by the interplay between 1) the low granularity of matching resolution and 2) the sub-optimal multi-view setup produced by limited history usage. Our theoretical and empirical analysis demonstrates that the optimal temporal difference between views varies significantly for different pixels and depths, making it necessary to fuse many timesteps over long-term history. Building on our investigation, we propose to generate a cost volume from a long history of image observations, compensating for the coarse but efficient matching resolution with a more optimal multi-view matching setup. Further, we augment the per-frame monocular depth predictions used for long-term, coarse matching with short-term, fine-grained matching and find that long- and short-term temporal fusion are highly complementary. While maintaining high efficiency, our framework sets a new state-of-the-art on nuScenes, achieving first place on the test set and outperforming the previous best method by 5.2% mAP and 3.7% NDS on the validation set. Code will be released here: https://github.com/Divadi/SOLOFusion.
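The core idea of matching a current frame against many past frames can be illustrated with a generic plane-sweep-style cost volume. The sketch below is not the paper's implementation: it assumes history features have already been warped onto D depth-hypothesis planes (the warping itself, which uses camera poses and intrinsics, is omitted), and it scores each hypothesis by dot-product similarity averaged over timesteps.

```python
import numpy as np

def build_cost_volume(cur_feat, warped_hist):
    """Toy multi-timestep matching cost volume.

    cur_feat:    (C, H, W) current-frame features.
    warped_hist: list of (D, C, H, W) arrays, one per past timestep,
                 each pre-warped onto D depth-hypothesis planes
                 (warping step omitted in this sketch).
    Returns a (D, H, W) cost volume; higher = better photometric match.
    """
    D = warped_hist[0].shape[0]
    cost = np.zeros((D,) + cur_feat.shape[1:])
    for hist in warped_hist:
        # Dot-product similarity between current features and each
        # depth-plane-warped history feature map.
        cost += np.einsum('chw,dchw->dhw', cur_feat, hist)
    return cost / len(warped_hist)

# Toy usage with random features: 3 past frames, 16 depth hypotheses.
rng = np.random.default_rng(0)
cur = rng.standard_normal((8, 4, 4))
hist = [rng.standard_normal((16, 8, 4, 4)) for _ in range(3)]
vol = build_cost_volume(cur, hist)
depth_idx = vol.argmax(axis=0)  # per-pixel best-matching depth hypothesis
```

Fusing more timesteps (a longer `warped_hist`) averages matching evidence over a wider range of view baselines, which is the intuition behind the long-history fusion described in the abstract.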

2022: Jinhyung D. Park, Chenfeng Xu, Shijia Yang, K. Keutzer, Kris Kitani, M. Tomizuka, W. Zhan

https://arxiv.org/pdf/2210.02443v1.pdf