SLAM in dynamic environments is currently a hot research topic within the SLAM field.
1. DynaSLAM (IROS 2018)
2. DS-SLAM (IROS 2018, Tsinghua University)
Code: https://github.com/ivipsourcecode/DS-SLAM
Main idea (semantics + geometry):
1. Semantic segmentation with SegNet (run in a separate thread);
2. Outlier detection between consecutive frames via epipolar geometry;
3. If an object contains too many outliers, it is judged dynamic and its points are removed;
4. Builds a semantic octree map.
Discussion: this four-thread architecture and the epipolar-constraint outlier detection (a minimal sketch follows below) have been adopted by many later papers. Its drawbacks: 1. the epipolar-constraint check cannot find every outlier, and it fails when an object moves along the epipolar line; 2. judging whether an object is moving by the proportion of outliers among its feature points has limitations, since the number of feature points depends heavily on the object's texture; 3. SegNet is a semantic segmentation network released in 2016, so its segmentation quality leaves considerable room for improvement, and the experimental results fall short of DynaSLAM.
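Below is a minimal Python sketch of the epipolar check in step 2, assuming OpenCV's RANSAC fundamental-matrix estimate and an illustrative 1-pixel threshold (DS-SLAM's actual parameters may differ):

```python
import numpy as np
import cv2

def epipolar_outliers(pts1, pts2, thresh=1.0):
    """Flag matches far from their epipolar line as candidate dynamic points.
    pts1, pts2: Nx2 float32 arrays of matched keypoints in frames t-1 and t."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    x1 = np.hstack([pts1, np.ones((len(pts1), 1), np.float32)])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1), np.float32)])
    lines = x1 @ F.T                           # epipolar lines in frame t: ax+by+c=0
    num = np.abs(np.sum(lines * x2, axis=1))   # |x2^T F x1|
    den = np.hypot(lines[:, 0], lines[:, 1])   # line normalization
    return (num / den) > thresh                # True = violates epipolar geometry
```

The failure mode from drawback 1 is visible here: a point moving along its own epipolar line keeps this distance small and is never flagged.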
Paper: Detect-SLAM: Making Object Detection and SLAM Mutually Beneficial
Code: https://github.com/liadbiz/detect-slam
Main idea:
The object detection network cannot run in real time, so detection is performed only on keyframes, and the results are propagated to ordinary frames through feature points.
1. Run SSD object detection only on keyframes (yielding rectangular regions with confidence scores), then use graph cut to strip the background and obtain a finer dynamic region;
2. On ordinary frames, propagate a dynamic probability to each feature point via two mechanisms, feature matching + matching point expansion (sketched after this list);
3. An object map helps extract candidate regions.
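A hedged sketch of the probability propagation in step 2; the inheritance rule, neighborhood radius, and blending weight here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def propagate_dynamic_probs(prev_probs, matches, cur_kps, radius=20.0, alpha=0.3):
    """prev_probs: per-keypoint dynamic probabilities in the previous frame.
    matches: list of (idx_prev, idx_cur) feature matches.
    cur_kps: Nx2 keypoint coordinates in the current frame."""
    cur_probs = np.full(len(cur_kps), 0.5)          # unknown prior
    matched = np.array([c for _, c in matches], dtype=int)
    for i_prev, i_cur in matches:                   # feature matching:
        cur_probs[i_cur] = prev_probs[i_prev]       # inherit the probability
    for j in range(len(cur_kps)):                   # matching point expansion:
        if j in matched:
            continue
        d = np.linalg.norm(cur_kps[matched] - cur_kps[j], axis=1)
        near = matched[d < radius]
        if near.size:                               # blend toward nearby matched points
            cur_probs[j] = (1 - alpha) * cur_probs[j] + alpha * cur_probs[near].mean()
    return cur_probs
```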
Paper: VDO-SLAM: A Visual Dynamic Object-aware SLAM System
Code: https://github.com/halajun/vdo_slam
Main idea:
1. Moving-object tracking; a fairly complete SLAM + motion-tracking system;
2. Optical flow + semantic segmentation (a small flow-based sketch follows).
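For flavor, a minimal sketch of the optical-flow ingredient: warping an instance mask into the next frame with OpenCV's dense Farneback flow. The real system uses a learned flow network and its own association logic, so treat this purely as an illustration:

```python
import numpy as np
import cv2

def warp_mask_with_flow(prev_gray, cur_gray, mask):
    """Propagate a binary instance mask one frame forward via dense flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(mask)
    nx = np.clip((xs + flow[ys, xs, 0]).round().astype(int), 0, mask.shape[1] - 1)
    ny = np.clip((ys + flow[ys, xs, 1]).round().astype(int), 0, mask.shape[0] - 1)
    warped = np.zeros_like(mask)
    warped[ny, nx] = True                 # holes can be closed with morphology
    return warped
```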
Paper: Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
Code: https://github.com/martinruenz/co-fusion
Main idea: learn and maintain a 3D model for each object, improving the models by fusing observations over time. A classic system that many later papers compare against.
Paper: Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation
Code: https://github.com/NVlabs/learningrigidity.git
Main idea:
1. An RTN network estimates the pose and the rigid region, and a PWC network computes dense optical flow;
2. From these outputs, estimate the relative pose of the rigid region;
3. Compute the 3D scene flow of the rigid bodies (see the sketch after this block).
The authors also developed REFRESH, a tool for generating semi-synthetic dynamic scenes.
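A sketch of step 3 under stated assumptions: given the rigid region's relative pose (R, t) and frame-1 depth, the 3D scene flow is the gap between the observed frame-2 points and the rigid-motion prediction. The variable layout (pts3d_2 as per-pixel 3D correspondences obtained from flow + depth) is illustrative:

```python
import numpy as np

def rigid_scene_flow(depth1, pts3d_2, K, R, t, mask):
    """depth1: HxW depth of frame 1; pts3d_2: HxWx3 corresponding 3D points in
    frame 2; K: 3x3 intrinsics; mask: HxW bool rigid region."""
    h, w = depth1.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth1
    x = (us - K[0, 2]) * z / K[0, 0]          # backproject frame-1 pixels
    y = (vs - K[1, 2]) * z / K[1, 1]
    p1 = np.stack([x, y, z], axis=-1)[mask]   # Nx3 points in the rigid region
    p1_pred = p1 @ R.T + t                    # where rigid motion sends them
    return pts3d_2[mask] - p1_pred            # ~0 for a perfectly rigid region
```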
Paper: ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals
Code: https://github.com/PRBonn/refusion
Main idea:
Mainstream dynamic SLAM methods rely on a neural network for instance segmentation, which requires predefining the potentially dynamic object classes and heavy training on datasets, severely limiting where they can be used. ReFusion instead segments dynamic regions purely geometrically: building on the KinectFusion dense SLAM system, it computes a per-pixel registration residual, extracts a rough dynamic region by adaptive thresholding, and refines it into the final dynamic mask with morphological operations (sketched after the discussion below); a TSDF map of the static background is obtained at the same time.
Discussion: one of the few dynamic SLAM systems that does not use a neural network.
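A minimal sketch of that residual-based segmentation; the mean + k·std adaptive threshold and the morphology kernel size are stand-in assumptions for ReFusion's exact choices:

```python
import numpy as np
import cv2

def dynamic_mask_from_residuals(residuals, k=2.0, ksize=5):
    """residuals: HxW per-pixel registration residuals from model-to-frame
    tracking (NaN where depth is invalid). Returns a bool dynamic mask."""
    valid = np.isfinite(residuals)
    thresh = residuals[valid].mean() + k * residuals[valid].std()   # adaptive
    rough = (np.where(valid, residuals, 0.0) > thresh).astype(np.uint8)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    clean = cv2.morphologyEx(rough, cv2.MORPH_OPEN, se)   # drop speckle noise
    clean = cv2.dilate(clean, se)                         # cover object rims
    return clean.astype(bool)
```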
Paper: RGB-D SLAM in Dynamic Environments Using Static Point Weighting
Code: https://github.com/VitoLing/RGB_D-SLAM-with-SWIAICP
Main idea:
1. Track using only foreground depth edge points (Foreground Depth Edge Extraction);
2. Insert a keyframe every n frames and compute the pose between the current frame and the keyframe;
3. Compute a static/dynamic quality score for each point from its projection error, supplying per-point weights for the IAICP below (a small weighting sketch follows the discussion);
4. Propose IAICP, an ICP variant that fuses intensity information, to compute the frame-to-frame pose.
Discussion: edge points of a foreground object characterize the whole object well, and the experimental results are refreshing. However, the ICP used in the paper is not what edge-based SLAM usually employs; edge SLAM typically describes the edge-point error with a distance transform (DT), which avoids explicit point-to-point matching. The paper "Robust RGB-D visual odometry based on edges and points" proposes an interesting alternative: compute the pose from feature points and delineate dynamic regions with edge points, drawing on the strengths of both.
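A tiny sketch in the spirit of step 3, mapping per-point residuals to static weights for the weighted IAICP step; the Huber-style kernel and its scale are my assumptions, not the paper's actual weighting function:

```python
import numpy as np

def static_weights(residuals, scale=0.05):
    """residuals: per-point reprojection/depth errors (meters).
    Returns weights in (0, 1]: near 1 for points consistent with a static
    scene, small for points that keep violating the rigid prediction."""
    r = np.abs(residuals)
    return np.minimum(1.0, scale / np.maximum(r, 1e-9))
```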
Paper: RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods
Code: https://github.com/yubaoliu/RDS-SLAM.git
Main idea: overcome the problem that semantic segmentation cannot run in real time.
1. Run semantic segmentation on the most recent keyframe;
2. Bayesian probability propagation (a log-odds sketch follows this list);
3. Obtain the current frame's outliers from the previous frame and the local map;
4. Weight the pose computation by each point's motion probability.
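A compact log-odds sketch of the Bayesian update in step 2; the hit/miss likelihoods are illustrative values rather than RDS-SLAM's:

```python
import numpy as np

def update_moving_prob(prior, observed_dynamic, hit=0.7, miss=0.3):
    """One Bayes update of a map point's moving probability from a (possibly
    delayed) semantic observation made on the nearest keyframe."""
    l = np.log(prior / (1.0 - prior))          # prior log-odds
    p = hit if observed_dynamic else miss      # inverse sensor model
    l += np.log(p / (1.0 - p))                 # fuse the observation
    return 1.0 / (1.0 + np.exp(-l))            # back to probability
```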
Additional related papers:
Pfreundschuh, Patrick, et al. “Dynamic Object Aware LiDAR SLAM Based on Automatic Generation of Training Data.” (ICRA 2021)
Canovas, Bruce, et al. “Speed and Memory Efficient Dense RGB-D SLAM in Dynamic Scenes.” (IROS 2020)
Yuan, Xun, and Song Chen. “SaD-SLAM: A Visual SLAM Based on Semantic and Depth Information.” (IROS 2020)
Dong, Erqun, et al. “Pair-Navi: Peer-to-Peer Indoor Navigation with Mobile Visual SLAM.” (ICCC 2019)
Ji, Tete, et al. “Towards Real-Time Semantic RGB-D SLAM in Dynamic Environments.” (ICRA 2021)
Palazzolo, Emanuele, et al. “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals.” (IROS 2019)
Arora, Mehul, et al. “Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation.”
Chen, Xieyuanli, et al. “Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data.” IEEE Robotics and Automation Letters, 2021
Zhang, Tianwei, et al. “FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow.” (ICRA 2020)
Zhang, Tianwei, et al. “AcousticFusion: Fusing Sound Source Localization to Visual SLAM in Dynamic Environments.” (IROS 2021)
Liu, Yubao, and Jun Miura. “RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods.” IEEE Access, 2021
Liu, Yubao, and Jun Miura. “RDMO-SLAM: Real-Time Visual SLAM for Dynamic Environments Using Semantic Label Prediction With Optical Flow.” IEEE Access, vol. 9, 2021, pp. 106981–97. https://doi.org/10.1109/ACCESS.2021.3100426.
Cheng, Jiyu, et al. “Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal.” IEEE Transactions on Automation Science and Engineering, vol. 17, no. 3, July 2020, pp. 1585–96. https://doi.org/10.1109/TASE.2020.2964938.
Soares, João Carlos Virgolino, et al. “Crowd-SLAM: Visual SLAM Towards Crowded Environments Using Object Detection.” Journal of Intelligent & Robotic Systems, 2021
Kaveti, Pushyami, and Hanumant Singh. “A Light Field Front-End for Robust SLAM in Dynamic Environments.”
Lin, Kuen-Han, and Chieh-Chih Wang. “Stereo-Based Simultaneous Localization, Mapping and Moving Object Tracking.” (IROS 2010)
Fu, H., et al. “LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames.” Remote Sensing, vol. 13, 2021, 3640.
Qian, Chenglong, et al. “RF-LIO: Removal-First Tightly-Coupled Lidar Inertial Odometry in High Dynamic Environments.” (IROS 2021, XJTU)
Dai, W., et al. “RGB-D SLAM in Dynamic Environments Using Point Correlations.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
Huang, C., et al. “YO-VIO: Robust Multi-Sensor Semantic Fusion Localization in Dynamic Indoor Environments.” (IPIN 2021)
“Dynamic-VINS: RGB-D Inertial Odometry for a Resource-restricted Robot in Dynamic Environments.”
Qiu, Yuheng, et al. “AirDOS: Dynamic SLAM Benefits from Articulated Objects.” (arXiv 2021)
Ballester, Irene, et al. “DOT: Dynamic Object Tracking for Visual SLAM.” (ICRA 2021)
Kim, Aleksandr, et al. “EagerMOT: 3D Multi-Object Tracking via Sensor Fusion.” (ICRA 2021)
Rosen, David M., et al. “Towards Lifelong Feature-Based Mapping in Semi-Static Environments.” (ICRA 2016)
Minoda, Koji, et al. “VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments.” (RAL 2021)
Vincent, Jonathan, et al. “Dynamic Object Tracking and Masking for Visual SLAM.” (IROS 2020)
Huang, Jiahui, et al. “ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings.” (CVPR 2020)
Liu, Yuzhen, et al. “A Switching-Coupled Backend for Simultaneous Localization and Dynamic Object Tracking.” (RAL 2021)
Yang, Charig, et al. “Self-Supervised Video Object Segmentation by Motion Grouping.” (ICCV 2021)
Long, Ran, et al. “RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects.” (RAL 2021)
Yang, Bohong, et al. “Multi-Classes and Motion Properties for Concurrent Visual SLAM in Dynamic Environments.” IEEE Transactions on Multimedia, 2021
Yang, Gengshan, and Deva Ramanan. “Learning to Segment Rigid Motions from Two Frames.” (CVPR 2021)
Thomas, Hugues, et al. “Learning Spatiotemporal Occupancy Grid Maps for Lifelong Navigation in Dynamic Scenes.”
Jung, Dongki, et al. “DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes.” (ICCV 2021)
Luiten, Jonathon, et al. “Track to Reconstruct and Reconstruct to Track.” (RAL + ICRA 2020)
Grinvald, Margarita, et al. “TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction.” (ICRA 2021)
Wang, Chieh-Chih, et al. “Simultaneous Localization, Mapping and Moving Object Tracking.” The International Journal of Robotics Research, 2007
Ran, Teng, et al. “RS-SLAM: A Robust Semantic SLAM in Dynamic Environments Based on RGB-D Sensor.”
Xu, Hua, et al. “OD-SLAM: Real-Time Localization and Mapping in Dynamic Environment through Multi-Sensor Fusion.” (ICARM 2020) https://doi.org/10.1109/ICARM49381.2020.9195374.
Wimbauer, Felix, et al. “MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera.” (CVPR 2021)
Liu, Yu, et al. “Dynamic RGB-D SLAM Based on Static Probability and Observation Number.” IEEE Transactions on Instrumentation and Measurement, vol. 70, 2021, pp. 1–11. https://doi.org/10.1109/TIM.2021.3089228.
Li, P., T. Qin, and S. Shen. “Stereo Vision-Based Semantic 3D Object and Ego-Motion Tracking for Autonomous Driving.” (arXiv 2018)
Nair, G. B., et al. “Multi-Object Monocular SLAM for Dynamic Environments.” (IV 2020)
Rünz, M., and L. Agapito. “Co-Fusion: Real-Time Segmentation, Tracking and Fusion of Multiple Objects.” (ICRA 2017)
“TwistSLAM: Constrained SLAM in Dynamic Environment.”