Visual SLAM in Practice (Part 2): ORB-SLAM2 with Kinect2


    Preface

      This hands-on series hasn't been updated in a long while. I recently got hold of a decent Thinkpad and a Kinect v2, and ORB-SLAM2 was released just the other day, so I wanted to try ORB-SLAM with the Kinect2. There is nothing technically deep in the whole process; readers can take this post as a write-up of my experimental steps.


    ORB-SLAM

      ORB-SLAM is a monocular SLAM system that came out in 2015, and arguably one of the best monocular implementations around. On top of that, its code is remarkably clean and compiles painlessly, clearly written with us programmers' user experience in mind, which has made it very popular. A few days ago its authors released ORB-SLAM2, which adds stereo and RGB-D interfaces on top of the original monocular pipeline, even though the map is still the sparse feature-point map typical of monocular SLAM. So now we can play with ORB-SLAM through all sorts of sensors! As it happens I have a Kinect v2 on hand, so let's run an experiment with it!

      Not much to say about my setup: Ubuntu 14.04, a Thinkpad T450, and a Kinect2 for Xbox.

      


    Building Kinect v2 support

      To use the Kinect v2 under Ubuntu you need two things: first, build libfreenect2, the open-source driver for the Kinect v2; second, use kinect2_bridge to grab its images under ROS. Both are explained very clearly in hitcm's blog, so I won't repeat them here.

      Please follow hitcm's blog to install libfreenect2 and the iai_kinect2 packages:

      http://www.cnblogs.com/hitcm/p/5118196.html

      iai_kinect2 contains four modules. We mainly use its bridge module to convert the images. Besides that, you need to calibrate the device with the kinect2_calibration module. On an uncalibrated Kinect2 the depth image and the color image are not guaranteed to correspond pixel for pixel, and SLAM will go wrong. So please make sure to calibrate it. Fortunately the author has very kindly documented the calibration procedure in detail, so even a novice like me can get through it!

      The kinect2_calibration module comes with a detailed walkthrough of the calibration procedure:

      https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration
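
      For quick reference, the invocations below are the ones I remember from that README: the chess5x7x0.03 argument describes the printed checkerboard, and the color camera, the IR camera, and the synchronized pair are each recorded and calibrated in turn. Treat this as a sketch from memory and double-check the exact arguments against the README itself.

    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record color
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate color
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record ir
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate ir
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record sync
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate sync
    rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate depth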

      After calibration you end up with the intrinsics and extrinsics of the Kinect2's color, depth, and IR cameras, all stored on your machine in the (dreaded) yaml format. kinect2_bridge automatically detects your calibration files and rectifies the depth images accordingly. The SLAM step later mainly needs the color camera's intrinsics. You can also use the kinect2_viewer module to look at the point clouds and images coming from the Kinect2.
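
      Since those intrinsics get copied into the ORB-SLAM settings file later, here is a small sketch of reading them back with OpenCV. The file name calib_color.yaml and the key cameraMatrix are what iai_kinect2 used when I wrote this; verify both against your own calibration output.

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main()
    {
      // open the calibration result written by kinect2_calibration
      cv::FileStorage fs("calib_color.yaml", cv::FileStorage::READ);
      if(!fs.isOpened())
        return 1;

      // 3x3 matrix [fx 0 cx; 0 fy cy; 0 0 1]
      cv::Mat K;
      fs["cameraMatrix"] >> K;
      std::printf("fx=%.2f fy=%.2f cx=%.2f cy=%.2f\n",
                  K.at<double>(0, 0), K.at<double>(1, 1),
                  K.at<double>(0, 2), K.at<double>(1, 2));
      return 0;
    }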

      

      


    Building orb-slam2

      orb-slam2's github repository is at https://github.com/raulmur/ORB_SLAM2; just clone it to your machine.

      This version of orb-slam can be built outside of ROS, so you don't need ROS installed beforehand (although we installed it anyway, since we are using the Kinect2). Its main dependencies are OpenCV 2.4, Eigen3, DBoW2, and g2o, plus a UI library I had barely heard of: Pangolin. DBoW2 and g2o are already bundled in its Thirdparty folder, so there is no need to download them separately (g2o versioning has always been a pitfall). For installing OpenCV, see the first post of my "Let's do RGB-D SLAM" series. So the only thing left to download is Pangolin.

      Pangolin's github: https://github.com/stevenlovegrove/Pangolin. It is a CMake project with no unusual dependencies; just download, build, and install it.
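
      Concretely, the standard CMake routine should do here (nothing Pangolin-specific about these steps):

    git clone https://github.com/stevenlovegrove/Pangolin.git
    cd Pangolin
    mkdir build && cd build
    cmake ..
    make -j4
    sudo make install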

      Then go into the orb-slam2 folder. The author has thoughtfully prepared a build.sh file for us; running it completes the whole build.
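
      That is:

    cd ORB_SLAM2
    chmod +x build.sh
    ./build.sh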

      Hopefully your build went through as well. The orb-slam authors provide several examples, including stereo on KITTI and monocular/RGB-D on TUM, which we can use as references when writing our own input. If you just want to use orb-slam2 as a single black-box module, you can directly call the SLAM System defined in include/System.h. Now let's throw the Kinect2 at it.
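
      Before wiring up the Kinect2, here is a minimal sketch of that System interface on a single RGB-D frame, modeled on the TUM RGB-D example. The vocabulary ships with the repository as Vocabulary/ORBvoc.txt; the settings file name and the two image paths below are placeholders of mine, and ORB_SLAM2/include must be on your include path.

    #include <opencv2/opencv.hpp>
    #include "System.h"

    int main()
    {
      // vocabulary + settings + sensor type + whether to launch the viewer
      ORB_SLAM2::System slam("Vocabulary/ORBvoc.txt", "kinect2_qhd.yaml",
                             ORB_SLAM2::System::RGBD, true);

      // feed one synchronized color/depth pair (normally done in a loop)
      cv::Mat color = cv::imread("rgb.png", CV_LOAD_IMAGE_UNCHANGED);
      cv::Mat depth = cv::imread("depth.png", CV_LOAD_IMAGE_UNCHANGED);
      slam.TrackRGBD(color, depth, 0.0);   // timestamp in seconds

      // stop all SLAM threads before exiting
      slam.Shutdown();
      return 0;
    }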


    Running orb-slam2 with the Kinect2

      The Kinect2 publishes its topics at three resolutions: hd is 1920×1080, qhd is quarter HD at 960×540, and sd is the smallest. I found that sd gives unsatisfactory results, while the hd images are too large, so I recommend using the qhd size of 960×540. When calling orb-slam you pass the camera parameters in through a yaml file, so you need to write up your Kinect's parameters briefly. For example, like this:

    %YAML:1.0

    #--------------------------------------------------------------------------------------------
    # Camera Parameters. Adjust them!
    #--------------------------------------------------------------------------------------------

    # Camera calibration and distortion parameters (OpenCV)
    Camera.fx: 529.97
    Camera.fy: 526.97
    Camera.cx: 477.44
    Camera.cy: 261.87

    Camera.k1: 0.05627
    Camera.k2: -0.0742
    Camera.p1: 0.00142
    Camera.p2: -0.00169
    Camera.k3: 0.0241

    Camera.width: 960
    Camera.height: 540

    # Camera frames per second
    Camera.fps: 30.0

    # IR projector baseline times fx (approx.)
    Camera.bf: 40.0

    # Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
    Camera.RGB: 1

    # Close/Far threshold. Baseline times.
    ThDepth: 50.0

    # Depthmap values factor
    DepthMapFactor: 1000.0

    #--------------------------------------------------------------------------------------------
    # ORB Parameters
    #--------------------------------------------------------------------------------------------

    # ORB Extractor: Number of features per image
    ORBextractor.nFeatures: 1000

    # ORB Extractor: Scale factor between levels in the scale pyramid
    ORBextractor.scaleFactor: 1.2

    # ORB Extractor: Number of levels in the scale pyramid
    ORBextractor.nLevels: 8

    # ORB Extractor: Fast threshold
    # Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
    # Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
    # You can lower these values if your images have low contrast
    ORBextractor.iniThFAST: 20
    ORBextractor.minThFAST: 7

    #--------------------------------------------------------------------------------------------
    # Viewer Parameters
    #--------------------------------------------------------------------------------------------
    Viewer.KeyFrameSize: 0.05
    Viewer.KeyFrameLineWidth: 1
    Viewer.GraphLineWidth: 0.9
    Viewer.PointSize: 2
    Viewer.CameraSize: 0.08
    Viewer.CameraLineWidth: 3
    Viewer.ViewpointX: 0
    Viewer.ViewpointY: -0.7
    Viewer.ViewpointZ: -1.8
    Viewer.ViewpointF: 500

      The ORB parameters can stay as they are. Then, rewrite kinect2_viewer a bit to plug in ORB-SLAM, and it is ready to run:

    #include <stdlib.h>
    #include <stdio.h>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>
    #include <cmath>
    #include <mutex>
    #include <thread>
    #include <chrono>

    #include <ros/ros.h>
    #include <ros/spinner.h>
    #include <sensor_msgs/CameraInfo.h>
    #include <sensor_msgs/Image.h>

    #include <cv_bridge/cv_bridge.h>

    #include <image_transport/image_transport.h>
    #include <image_transport/subscriber_filter.h>

    #include <message_filters/subscriber.h>
    #include <message_filters/synchronizer.h>
    #include <message_filters/sync_policies/exact_time.h>
    #include <message_filters/sync_policies/approximate_time.h>

    #include <kinect2_bridge/kinect2_definitions.h>

    #include "orbslam2/System.h"

    class Receiver
    {
    public:
      enum Mode
      {
        IMAGE = 0,
        CLOUD,
        BOTH
      };

    private:
      std::mutex lock;

      const std::string topicColor, topicDepth;
      const bool useExact, useCompressed;

      bool updateImage, updateCloud;
      bool save;
      bool running;
      size_t frame;
      const size_t queueSize;

      cv::Mat color, depth;
      cv::Mat cameraMatrixColor, cameraMatrixDepth;
      cv::Mat lookupX, lookupY;

      typedef message_filters::sync_policies::ExactTime<sensor_msgs::Image, sensor_msgs::Image, sensor_msgs::CameraInfo, sensor_msgs::CameraInfo> ExactSyncPolicy;
      typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image, sensor_msgs::CameraInfo, sensor_msgs::CameraInfo> ApproximateSyncPolicy;

      ros::NodeHandle nh;
      ros::AsyncSpinner spinner;
      image_transport::ImageTransport it;
      image_transport::SubscriberFilter *subImageColor, *subImageDepth;
      message_filters::Subscriber<sensor_msgs::CameraInfo> *subCameraInfoColor, *subCameraInfoDepth;

      message_filters::Synchronizer<ExactSyncPolicy> *syncExact;
      message_filters::Synchronizer<ApproximateSyncPolicy> *syncApproximate;

      std::thread imageViewerThread;
      Mode mode;

      std::ostringstream oss;
      std::vector<int> params;

      // the SLAM object
      ORB_SLAM2::System* orbslam = nullptr;

    public:
      Receiver(const std::string &topicColor, const std::string &topicDepth, const bool useExact, const bool useCompressed)
        : topicColor(topicColor), topicDepth(topicDepth), useExact(useExact), useCompressed(useCompressed),
          updateImage(false), updateCloud(false), save(false), running(false), frame(0), queueSize(5),
          nh("~"), spinner(0), it(nh), mode(CLOUD)
      {
        cameraMatrixColor = cv::Mat::zeros(3, 3, CV_64F);
        cameraMatrixDepth = cv::Mat::zeros(3, 3, CV_64F);
        params.push_back(cv::IMWRITE_JPEG_QUALITY);
        params.push_back(100);
        params.push_back(cv::IMWRITE_PNG_COMPRESSION);
        params.push_back(1);
        params.push_back(cv::IMWRITE_PNG_STRATEGY);
        params.push_back(cv::IMWRITE_PNG_STRATEGY_RLE);
        params.push_back(0);

        // adjust these two paths to your own vocabulary and settings files
        std::string orbVocFile = "/home/xiang/catkin_ws/src/walle/config/ORBvoc.txt";
        std::string orbSettingsFile = "/home/xiang/catkin_ws/src/walle/config/kinect2_sd.yaml";

        orbslam = new ORB_SLAM2::System(orbVocFile, orbSettingsFile, ORB_SLAM2::System::RGBD, true);
      }

      ~Receiver()
      {
        if(orbslam)
        {
          orbslam->Shutdown();
          delete orbslam;
        }
      }

      void run(const Mode mode)
      {
        start(mode);
        stop();
      }

      void finish()
      {
      }

    private:
      void start(const Mode mode)
      {
        this->mode = mode;
        running = true;

        std::string topicCameraInfoColor = topicColor.substr(0, topicColor.rfind('/')) + "/camera_info";
        std::string topicCameraInfoDepth = topicDepth.substr(0, topicDepth.rfind('/')) + "/camera_info";

        image_transport::TransportHints hints(useCompressed ? "compressed" : "raw");
        subImageColor = new image_transport::SubscriberFilter(it, topicColor, queueSize, hints);
        subImageDepth = new image_transport::SubscriberFilter(it, topicDepth, queueSize, hints);
        subCameraInfoColor = new message_filters::Subscriber<sensor_msgs::CameraInfo>(nh, topicCameraInfoColor, queueSize);
        subCameraInfoDepth = new message_filters::Subscriber<sensor_msgs::CameraInfo>(nh, topicCameraInfoDepth, queueSize);

        if(useExact)
        {
          syncExact = new message_filters::Synchronizer<ExactSyncPolicy>(ExactSyncPolicy(queueSize), *subImageColor, *subImageDepth, *subCameraInfoColor, *subCameraInfoDepth);
          syncExact->registerCallback(boost::bind(&Receiver::callback, this, _1, _2, _3, _4));
        }
        else
        {
          syncApproximate = new message_filters::Synchronizer<ApproximateSyncPolicy>(ApproximateSyncPolicy(queueSize), *subImageColor, *subImageDepth, *subCameraInfoColor, *subCameraInfoDepth);
          syncApproximate->registerCallback(boost::bind(&Receiver::callback, this, _1, _2, _3, _4));
        }

        spinner.start();

        std::chrono::milliseconds duration(1);
        while(!updateImage || !updateCloud)
        {
          if(!ros::ok())
          {
            return;
          }
          std::this_thread::sleep_for(duration);
        }
        createLookup(this->color.cols, this->color.rows);

        // the CLOUD branch of the original viewer has been stripped out
        switch(mode)
        {
        case IMAGE:
          imageViewer();
          break;
        case BOTH:
          imageViewerThread = std::thread(&Receiver::imageViewer, this);
          break;
        }
      }

      void stop()
      {
        spinner.stop();

        if(useExact)
        {
          delete syncExact;
        }
        else
        {
          delete syncApproximate;
        }

        delete subImageColor;
        delete subImageDepth;
        delete subCameraInfoColor;
        delete subCameraInfoDepth;

        running = false;
        if(mode == BOTH)
        {
          imageViewerThread.join();
        }
      }

      void callback(const sensor_msgs::Image::ConstPtr imageColor, const sensor_msgs::Image::ConstPtr imageDepth,
                    const sensor_msgs::CameraInfo::ConstPtr cameraInfoColor, const sensor_msgs::CameraInfo::ConstPtr cameraInfoDepth)
      {
        cv::Mat color, depth;

        readCameraInfo(cameraInfoColor, cameraMatrixColor);
        readCameraInfo(cameraInfoDepth, cameraMatrixDepth);
        readImage(imageColor, color);
        readImage(imageDepth, depth);

        // IR image input
        if(color.type() == CV_16U)
        {
          cv::Mat tmp;
          color.convertTo(tmp, CV_8U, 0.02);
          cv::cvtColor(tmp, color, CV_GRAY2BGR);
        }

        lock.lock();
        this->color = color;
        this->depth = depth;
        updateImage = true;
        updateCloud = true;
        lock.unlock();
      }

      void imageViewer()
      {
        cv::Mat color, depth;
        for(; running && ros::ok();)
        {
          if(updateImage)
          {
            lock.lock();
            color = this->color;
            depth = this->depth;
            updateImage = false;
            lock.unlock();

            // hand the synchronized color/depth pair to ORB-SLAM2
            if(orbslam)
            {
              orbslam->TrackRGBD(color, depth, ros::Time::now().toSec());
            }
          }
        }
        cv::destroyAllWindows();
        cv::waitKey(100);
      }

      void readImage(const sensor_msgs::Image::ConstPtr msgImage, cv::Mat &image) const
      {
        cv_bridge::CvImageConstPtr pCvImage;
        pCvImage = cv_bridge::toCvShare(msgImage, msgImage->encoding);
        pCvImage->image.copyTo(image);
      }

      void readCameraInfo(const sensor_msgs::CameraInfo::ConstPtr cameraInfo, cv::Mat &cameraMatrix) const
      {
        double *itC = cameraMatrix.ptr<double>(0, 0);
        for(size_t i = 0; i < 9; ++i, ++itC)
        {
          *itC = cameraInfo->K[i];
        }
      }

      // kept from the original viewer; unused by the SLAM node itself
      void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue)
      {
        cv::Mat tmp = cv::Mat(in.rows, in.cols, CV_8U);
        const uint32_t maxInt = 255;

        #pragma omp parallel for
        for(int r = 0; r < in.rows; ++r)
        {
          const uint16_t *itI = in.ptr<uint16_t>(r);
          uint8_t *itO = tmp.ptr<uint8_t>(r);

          for(int c = 0; c < in.cols; ++c, ++itI, ++itO)
          {
            *itO = (uint8_t)std::min((*itI * maxInt / maxValue), 255.0f);
          }
        }

        cv::applyColorMap(tmp, out, cv::COLORMAP_JET);
      }

      // kept from the original viewer; unused by the SLAM node itself
      void combine(const cv::Mat &inC, const cv::Mat &inD, cv::Mat &out)
      {
        out = cv::Mat(inC.rows, inC.cols, CV_8UC3);

        #pragma omp parallel for
        for(int r = 0; r < inC.rows; ++r)
        {
          const cv::Vec3b *itC = inC.ptr<cv::Vec3b>(r), *itD = inD.ptr<cv::Vec3b>(r);
          cv::Vec3b *itO = out.ptr<cv::Vec3b>(r);

          for(int c = 0; c < inC.cols; ++c, ++itC, ++itD, ++itO)
          {
            itO->val[0] = (itC->val[0] + itD->val[0]) >> 1;
            itO->val[1] = (itC->val[1] + itD->val[1]) >> 1;
            itO->val[2] = (itC->val[2] + itD->val[2]) >> 1;
          }
        }
      }

      void createLookup(size_t width, size_t height)
      {
        const float fx = 1.0f / cameraMatrixColor.at<double>(0, 0);
        const float fy = 1.0f / cameraMatrixColor.at<double>(1, 1);
        const float cx = cameraMatrixColor.at<double>(0, 2);
        const float cy = cameraMatrixColor.at<double>(1, 2);
        float *it;

        lookupY = cv::Mat(1, height, CV_32F);
        it = lookupY.ptr<float>();
        for(size_t r = 0; r < height; ++r, ++it)
        {
          *it = (r - cy) * fy;
        }

        lookupX = cv::Mat(1, width, CV_32F);
        it = lookupX.ptr<float>();
        for(size_t c = 0; c < width; ++c, ++it)
        {
          *it = (c - cx) * fx;
        }
      }
    };

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "kinect2_slam", ros::init_options::AnonymousName);

      if(!ros::ok())
      {
        return 0;
      }

      std::string topicColor = "/kinect2/sd/image_color_rect";
      std::string topicDepth = "/kinect2/sd/image_depth_rect";
      bool useExact = true;
      bool useCompressed = false;
      Receiver::Mode mode = Receiver::IMAGE;

      // initialize the receiver
      Receiver receiver(topicColor, topicDepth, useExact, useCompressed);

      //OUT_INFO("starting receiver...");
      receiver.run(mode);

      receiver.finish();

      ros::shutdown();
      return 0;
    }
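
      To try it out, start the bridge first and then this node, along these lines (the executable name kinect2_slam and the package name are my own; substitute yours):

    roslaunch kinect2_bridge kinect2_bridge.launch
    rosrun your_package kinect2_slam

      One thing to watch: the listing above subscribes to the sd topics and loads kinect2_sd.yaml, while I recommended qhd earlier. If you switch to the qhd topics, swap the settings file too so the intrinsics match the image size.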

      As for compiling it, just add orb-slam's headers and library to your CMakeLists.txt and tell CMake you want to link against it. You could even drop the whole orb-slam tree into your source directory and build it all together, but I simply copied the liborb_slam2.so file and the headers over.
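
      In case it helps, here is a sketch of the relevant CMakeLists.txt lines, assuming the copied headers live under include/ of your package and the shared library under lib/ (that layout is my assumption; adjust the paths to your own):

    include_directories(
      ${PROJECT_SOURCE_DIR}/include
      ${catkin_INCLUDE_DIRS}
      ${OpenCV_INCLUDE_DIRS})

    add_executable(kinect2_slam src/kinect2_slam.cpp)
    target_link_libraries(kinect2_slam
      ${catkin_LIBRARIES}
      ${OpenCV_LIBS}
      ${PROJECT_SOURCE_DIR}/lib/liborb_slam2.so)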

      Footage of an actual handheld Kinect2 run (since cnblogs can't host video, I'm using Baidu Yun as a makeshift player): http://pan.baidu.com/s/1eRcyW1s (thanks to my classmate Yedong for the guest appearance...)

      Results on the dataset from the "Let's do RGB-D SLAM" series: http://pan.baidu.com/s/1bocx5s

      Overall it works quite well.


    Summary

      This post showed how orbslam2 performs on a Kinect2; the results are largely satisfying. If you try it yourself, be sure to calibrate your Kinect2 properly, or things are very likely to go wrong.

