Preface

After much trouble, I finally got RGB-D SLAM running with a Kinect2.
At first, the iai_kinect2 driver would not build on my laptop, because many libopencv files were
missing from /usr/lib/x86_64-linux-gnu, and I could not tell whether ROS or OpenCV was badly
installed. My desktop, with OpenCV 3, had other problems. In the end I copied every file from the
desktop's /usr/lib/x86_64-linux-gnu into the corresponding directory on the laptop, and only then did the build succeed.
Pure voodoo!
The price was pulling an all-nighter; the problem is that I am still wide awake.

Environment

* Ubuntu 14.04
* ROS Indigo
* OpenCV 2.4.11 (installing OpenCV separately seems optional; otherwise the OpenCV 2.4.8 bundled with ROS Indigo is used)
Installing the Kinect2 Driver

The Kinect2 driver consists of libfreenect2 and iai_kinect2. The latter is the driver needed when using ROS, and it depends on the former.

Installing libfreenect2

Reference: https://github.com/OpenKinect/libfreenect2. The steps are straightforward, so I reproduce them in the original English.

* Download libfreenect2:
  git clone https://github.com/OpenKinect/libfreenect2.git
  cd libfreenect2
* Download upgrade deb files:
  cd depends
  ./download_debs_trusty.sh
* Install build tools:
  sudo apt-get install build-essential cmake pkg-config
* Install libusb. The version must be >= 1.0.20:
  sudo dpkg -i debs/libusb*deb
* Install TurboJPEG:
  sudo apt-get install libturbojpeg libjpeg-turbo8-dev
* Install OpenGL:
  sudo dpkg -i debs/libglfw3*deb; sudo apt-get install -f
* Build (if you have run cd depends previously, cd .. back to the libfreenect2 root directory first):
  mkdir build && cd build
  cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
  make
  make install
* Set up udev rules for device access:
  sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/
  then replug the Kinect.
* Run the test program:
  ./bin/Protonect
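One small pitfall in the libusb step above: the ">= 1.0.20" requirement has to be compared numerically, not as a string. A minimal sketch of such a check (the helper name is my own):

```python
def version_at_least(version, minimum):
    """Compare dotted version strings numerically, e.g. libusb "1.0.20"."""
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# libfreenect2 requires libusb >= 1.0.20; a plain string comparison would
# wrongly rank "1.0.9" above "1.0.20", so compare integer tuples instead.
print(version_at_least("1.0.20", "1.0.20"))  # True
print(version_at_least("1.0.9", "1.0.20"))   # False
```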
If the installation succeeds, a window like the following opens:




Installing iai_kinect2

Reference: https://github.com/code-iai/iai_kinect2. Again, I keep the steps in the original English.

* Install ROS Indigo: Instructions for Ubuntu 14.04
  <http://wiki.ros.org/indigo/Installation/Ubuntu>
* Set up your ROS environment
  <http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment>
* Install libfreenect2
  <https://github.com/OpenKinect/libfreenect2/blob/master/README.md#installation>
  Enable C++11 by using cmake .. -DENABLE_CXX11=ON instead of cmake .. .
  If you are compiling libfreenect2 with CUDA, use cmake .. -DENABLE_CXX11=ON -DCUDA_PROPAGATE_HOST_FLAGS=off.
* Clone this repository into your catkin workspace, install the dependencies and build it:
  cd ~/catkin_ws/src/
  git clone https://github.com/code-iai/iai_kinect2.git
  cd iai_kinect2
  rosdep install -r --from-paths .
  cd ~/catkin_ws
  catkin_make -DCMAKE_BUILD_TYPE="Release"
* Connect your sensor and run kinect2_bridge:
  roslaunch kinect2_bridge kinect2_bridge.launch
* Calibrate your sensor using the kinect2_calibration. Further details
  <https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration#calibrating-the-kinect-one>
* Add the calibration files to the kinect2_bridge/data/ folder. Further details
  <https://github.com/code-iai/iai_kinect2/tree/master/kinect2_bridge#first-steps>
* Restart kinect2_bridge and view the results using:
  rosrun kinect2_viewer kinect2_viewer kinect2 sd cloud
You can replace sd with qhd or hd to select a different resolution.
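The sd/qhd/hd names select the stream resolution: per the iai_kinect2 README, hd is full 1080p, qhd is quarter HD, and sd is the depth sensor's native resolution. A small sketch of how the kinect2_bridge topic names are composed (the helper function is my own):

```python
# Resolutions published by kinect2_bridge (from the iai_kinect2 README):
RESOLUTIONS = {"hd": (1920, 1080), "qhd": (960, 540), "sd": (512, 424)}

def kinect2_topic(resolution, stream="image_color_rect"):
    """Build a kinect2_bridge topic name such as /kinect2/qhd/image_color_rect."""
    if resolution not in RESOLUTIONS:
        raise ValueError("resolution must be one of: " + ", ".join(sorted(RESOLUTIONS)))
    return "/kinect2/{}/{}".format(resolution, stream)

print(kinect2_topic("sd", "points"))  # /kinect2/sd/points
```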
If the installation succeeds, a window like the following opens:




Installing RGB-D SLAM

Reference: https://github.com/felixendres/rgbdslam_v2/tree/indigo
The installation steps below are fairly simple. rosdep update may fail, usually because of network
problems; this can be a pain, but if you cannot get past this step you can simply skip it, because the same command was already run when installing ROS.
I recommend OpenCV 2; OpenCV 3 may cause problems.
#Prepare Workspace
source /opt/ros/indigo/setup.bash
mkdir -p ~/rgbdslam_catkin_ws/src
cd ~/rgbdslam_catkin_ws/src
catkin_init_workspace
cd ~/rgbdslam_catkin_ws/
catkin_make
source devel/setup.bash
#Get RGBDSLAM
cd ~/rgbdslam_catkin_ws/src
wget -q http://github.com/felixendres/rgbdslam_v2/archive/indigo.zip
unzip -q indigo.zip
cd ~/rgbdslam_catkin_ws/
#Install
rosdep update
rosdep install rgbdslam
catkin_make
Running RGB-D SLAM with Kinect2

With the two steps above, we have installed the Kinect2 driver and RGB-D SLAM.
Note: rgbdslam.launch and openni+rgbdslam.launch only work with a Kinect1.
To use a Kinect2, the launch file must be modified; see:
https://answers.ros.org/question/230412/solved-slam-with-kinect2/
Create a new rgbdslam_kinect2.launch:
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen">
    <!-- Input data settings-->
    <param name="config/topic_image_mono" value="/kinect2/qhd/image_color_rect"/>
    <param name="config/camera_info_topic" value="/kinect2/qhd/camera_info"/>
    <param name="config/topic_image_depth" value="/kinect2/qhd/image_depth_rect"/>
    <param name="config/topic_points" value=""/><!-- if empty, pointcloud will be reconstructed from image and depth -->

    <!-- These are the default values of some important parameters -->
    <param name="config/feature_extractor_type" value="SIFTGPU"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
    <param name="config/feature_detector_type" value="SIFTGPU"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
    <param name="config/detector_grid_resolution" value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/optimizer_skip_step" value="15"/><!-- optimize only every n-th frame -->
    <param name="config/cloud_creation_skip_step" value="2"/><!-- subsample the images' pixels (in both width and height) when creating the cloud (and therefore reduce memory consumption) -->
    <param name="config/backend_solver" value="csparse"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
    <param name="config/pose_relative_to" value="first"/><!-- optimize only a subset of the graph: "largest_loop" = everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
    <param name="config/maximum_depth" value="2"/>
    <param name="config/subscriber_queue_size" value="20"/>
    <param name="config/min_sampled_candidates" value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates" value="20"/><!-- Frame-to-frame comparisons to sequential frames-->
    <param name="config/neighbor_candidates" value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
    <param name="config/ransac_iterations" value="140"/>
    <param name="config/g2o_transformation_refinement" value="1"/>
    <param name="config/icp_method" value="gicp"/><!-- icp, gicp ... -->

    <!--
    <param name="config/max_rotation_degree" value="20"/>
    <param name="config/max_translation_meter" value="0.5"/>
    <param name="config/min_matches" value="30"/>
    <param name="config/min_translation_meter" value="0.05"/>
    <param name="config/min_rotation_degree" value="3"/>
    <param name="config/g2o_transformation_refinement" value="2"/>
    <param name="config/min_rotation_degree" value="10"/>
    <param name="config/matcher_type" value="SIFTGPU"/>
    -->
  </node>
</launch>
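The three input topics in a launch file like this must all use the same resolution prefix. As a quick sanity check, the file can be parsed with Python's standard library (the helper and the shortened snippet are my own):

```python
import xml.etree.ElementTree as ET

def launch_params(launch_xml):
    """Collect all <param> name/value pairs from a roslaunch XML string."""
    root = ET.fromstring(launch_xml)
    return {p.get("name"): p.get("value") for p in root.iter("param")}

# Shortened example with just the input-topic parameters:
SNIPPET = """<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam">
    <param name="config/topic_image_mono" value="/kinect2/qhd/image_color_rect"/>
    <param name="config/camera_info_topic" value="/kinect2/qhd/camera_info"/>
    <param name="config/topic_image_depth" value="/kinect2/qhd/image_depth_rect"/>
  </node>
</launch>"""

params = launch_params(SNIPPET)
# All input topics should share one resolution prefix (qhd here):
assert all(v.startswith("/kinect2/qhd/") for v in params.values())
print(sorted(params))
```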
Open a terminal and run:
roslaunch kinect2_bridge kinect2_bridge.launch
Open another terminal and run:
roslaunch rgbdslam rgbdslam_kinect2.launch
Note: after opening a terminal, first run source path-to-catkin_ws/devel/setup.bash, otherwise you will get the following error:

[rgbdslam_kinect2.launch] is neither a launch file in package [rgbdslam] nor
is [rgbdslam] a launch file name
The traceback for the exception was written to the log file
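This error means roslaunch cannot find the rgbdslam package. Whether the workspace is on the package path can be checked via the ROS_PACKAGE_PATH environment variable, which devel/setup.bash extends; a small sketch (the helper is my own):

```python
import os

def workspace_sourced(path_fragment, env=None):
    """Return True if any ROS_PACKAGE_PATH entry contains the given fragment,
    i.e. the corresponding devel/setup.bash was sourced in this shell."""
    env = os.environ if env is None else env
    path = env.get("ROS_PACKAGE_PATH", "")
    return any(path_fragment in entry for entry in path.split(":") if entry)

# Example with a fake environment for illustration:
fake_env = {"ROS_PACKAGE_PATH": "/home/me/rgbdslam_catkin_ws/src:/opt/ros/indigo/share"}
print(workspace_sourced("rgbdslam_catkin_ws", fake_env))  # True
```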

A screenshot of a run:




Saving Point Clouds and Trajectories

The author designed the UI carefully; the point cloud, the trajectory, and other data can be saved via the "Save" entries in the menu bar.
A saved point cloud can be displayed with:
pcl_viewer path-to-pcd
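The saved cloud is a standard .pcd file, whose ASCII header records the fields and the point count, so it can also be inspected without PCL. A minimal sketch (the parser is my own and handles only the plain-text header):

```python
def pcd_header(text):
    """Parse the ASCII header of a .pcd file into a dict, e.g. {"POINTS": "2", ...}."""
    header = {}
    for line in text.splitlines():
        if line.startswith("#"):      # comment lines
            continue
        key, _, rest = line.partition(" ")
        header[key] = rest
        if key == "DATA":             # the header ends at the DATA line
            break
    return header

# Tiny hand-written PCD with two points:
SAMPLE = """# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 2
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2
DATA ascii
0.0 0.0 0.0
1.0 0.0 0.0
"""

print(pcd_header(SAMPLE)["POINTS"], pcd_header(SAMPLE)["FIELDS"])  # 2 x y z
```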
For plotting the trajectory, see: https://blog.csdn.net/Felaim/article/details/80830479

Summary

Something that should have been quick was dragged out by strange problems with my machines.
Because the laptop has a lot of carefully configured software, I did not dare change things for fear of breaking them, and that slowed down my progress.
The shortcut I took solved the problem on the surface, but the underlying cause is still unknown, and other problems may surface later.
So it is best to keep a spare machine, but what matters more is the courage to wipe everything and start over: dodging the hassle now only means more hassle later.