by Geng, H., Chien, H.-J. and Klette, R.
Abstract:
This paper presents an approach for incrementally adding missing information to a point cloud generated for 3D roadside reconstruction. We use a series of video sequences recorded while driving repeatedly along the road to be reconstructed; the sequences may also be recorded while driving in opposite directions. We call this a multi-run scenario. The only input data other than stereo images are the readings of a GPS sensor, which serve as guidance for merging the point clouds of different sequences into one. The quality of the 3D roadside reconstruction depends directly on the accuracy of the applied egomotion estimation method. The core of our motion analysis method is visual odometry following the traditional workflow in this area: first, establish correspondences of tracked features between two consecutive frames; second, use a stereo-matching algorithm to calculate depth information for the tracked features; third, compute the motion between every two frames using a perspective-n-point solver. Additionally, we propose a technique that uses Kalman-filter fusion to track the selected feature points and to filter outliers. Furthermore, we use the GPS data to bound the overall propagation of positioning errors. We report experiments on trajectory estimation and 3D scene reconstruction, and evaluate our approach by measuring how much previously missing information is recovered when analysing data recorded in a subsequent run.
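To make the visual-odometry workflow described in the abstract concrete, the following is a minimal Python/OpenCV sketch of a single frame-to-frame step, assuming rectified grayscale stereo pairs, a known camera matrix K, and a known stereo baseline; all function and variable names are our own illustration, not the authors' implementation.

import cv2
import numpy as np

def vo_step(left_prev, right_prev, left_curr, K, baseline):
    """One visual-odometry step: track features, get stereo depth, solve PnP."""
    # 1. Establish correspondences of tracked features between two consecutive frames.
    pts_prev = cv2.goodFeaturesToTrack(left_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(left_prev, left_curr, pts_prev, None)
    ok = status.ravel() == 1
    pts_prev = pts_prev[ok].reshape(-1, 2)
    pts_curr = pts_curr[ok].reshape(-1, 2)

    # 2. Stereo matching yields disparities; back-project the tracked features to 3D.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
    disparity = sgbm.compute(left_prev, right_prev).astype(np.float32) / 16.0
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for (u0, v0), (u1, v1) in zip(pts_prev, pts_curr):
        d = disparity[int(v0), int(u0)]
        if d <= 1.0:                      # skip invalid or near-zero disparities
            continue
        Z = fx * baseline / d             # depth from disparity
        pts3d.append([(u0 - cx) * Z / fx, (v0 - cy) * Z / fy, Z])
        pts2d.append([u1, v1])
    if len(pts3d) < 4:
        raise RuntimeError("too few valid depths for a PnP solution")

    # 3. A perspective-n-point solver (with RANSAC) recovers the inter-frame motion.
    _, rvec, tvec, _ = cv2.solvePnPRansac(np.array(pts3d, np.float32),
                                          np.array(pts2d, np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec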
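The Kalman-filter fusion for feature tracking and outlier filtering can be sketched similarly: each selected feature point gets a small constant-velocity filter, and a measurement that lands far from the filter's prediction is rejected as an outlier. The noise covariances and the gating threshold below are illustrative assumptions, not values from the paper.

import cv2
import numpy as np

def make_point_filter(x, y):
    """Constant-velocity Kalman filter for one image point; state = (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[x], [y], [0], [0]], np.float32)
    return kf

def filtered_update(kf, measured_xy, gate_px=5.0):
    """Accept a tracked position only if it is close to the filter's prediction."""
    predicted = kf.predict()[:2].ravel()
    if np.linalg.norm(predicted - measured_xy) > gate_px:
        return predicted, False           # outlier: keep the prediction, drop the match
    kf.correct(measured_xy.reshape(2, 1).astype(np.float32))
    return kf.statePost[:2].ravel(), True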
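Finally, bounding the propagation of positioning errors with GPS can be illustrated by a simple correction that pulls the estimated position back toward the GPS reading once the accumulated drift exceeds the sensor's noise level. This assumes GPS coordinates already projected into the local metric frame; the noise bound and blend weight are assumptions for illustration.

import numpy as np

def gps_bound(position_vo, position_gps, gps_sigma=3.0, alpha=0.2):
    """Blend the visual-odometry position toward GPS once drift exceeds gps_sigma metres."""
    drift = np.linalg.norm(position_vo - position_gps)
    if drift <= gps_sigma:
        return position_vo                # within GPS noise: trust visual odometry
    return (1.0 - alpha) * position_vo + alpha * position_gps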
Reference:
Multi-run: An approach for filling in missing information of 3D roadside reconstruction (Geng, H., Chien, H.-J. and Klette, R.), In Image and Video Technology – PSIVT 2015 Workshops (Huang, F. and Sugimoto, A., eds.), Springer Verlag, volume 9555, pages 192–205, 2016.
BibTeX Entry:
@inproceedings{geng2016multirun,
  author     = "Geng, H. and Chien, H.-J. and Klette, R.",
  title      = "Multi-run: An approach for filling in missing information of 3D roadside reconstruction",
  booktitle  = "Image and Video Technology – PSIVT 2015 Workshops",
  editor     = "Huang, F. and Sugimoto, A.",
  publisher  = "Springer Verlag",
  volume     = "9555",
  pages      = "192--205",
  year       = "2016",
  doi        = "10.1007/978-3-319-30285-0_16",
  isbn       = "9783319302843",
  issn       = "0302-9743",
  eissn      = "1611-3349",
  language   = "eng",
  conference = "PSIVT 2015 Workshops",
  keywords   = "3D reconstruction, GPS data, Kalman filter, motion analysis, multi-run scenario, multi-sensory integration, visual odometry",
  abstract   = "This paper presents an approach for incrementally adding missing information to a point cloud generated for 3D roadside reconstruction. We use a series of video sequences recorded while driving repeatedly along the road to be reconstructed; the sequences may also be recorded while driving in opposite directions. We call this a multi-run scenario. The only input data other than stereo images are the readings of a GPS sensor, which serve as guidance for merging the point clouds of different sequences into one. The quality of the 3D roadside reconstruction depends directly on the accuracy of the applied egomotion estimation method. The core of our motion analysis method is visual odometry following the traditional workflow in this area: first, establish correspondences of tracked features between two consecutive frames; second, use a stereo-matching algorithm to calculate depth information for the tracked features; third, compute the motion between every two frames using a perspective-n-point solver. Additionally, we propose a technique that uses Kalman-filter fusion to track the selected feature points and to filter outliers. Furthermore, we use the GPS data to bound the overall propagation of positioning errors. We report experiments on trajectory estimation and 3D scene reconstruction, and evaluate our approach by measuring how much previously missing information is recovered when analysing data recorded in a subsequent run."
}