
Lidar for Quadruped Robots

2023-05-29

Quadruped robot navigation can be implemented in several ways. Small indoor robots most often run a SLAM algorithm on a 2D lidar to build the map; typical algorithms include Hector SLAM and Gmapping, while Cartographer, released by Google, is widely regarded as the best-performing 2D mapping and localization algorithm at present. Cartographer can fuse lidar, IMU and robot odometry and uses graph optimization, so it builds better maps than the former two; it is the scheme commonly adopted by indoor sweeping robots and wheeled platforms.

A wheeled robot only cares about obstacles within a low band above the ground, so its 2D lidar is usually mounted low in order to detect them.

For a quadruped robot, however, the standing height of the body gives 2D lidar mapping the following characteristics:

1) Low cost: a 2D lidar combined with leg odometry achieves good indoor localization and mapping, which can then be used for navigation; compared with vision, a low-cost 2D lidar is also more reliable;

2) Only an occupancy map at body height can be built: since the lidar is usually mounted on the quadruped's back, the occupancy map only covers that plane, so avoidance based on the 2D lidar alone cannot handle low obstacles;

In summary, besides keeping navigation at body level safe, a quadruped also has to handle low obstacles such as door thresholds, boxes, shoes and other household items; relying on a 2D lidar alone would make the robot step on them and fall. A quadruped therefore depends far more on local map information, and on unstructured terrain it must actively place its feet to step over obstacles or avoid them; a representative work is the elevation mapping project open-sourced by ETH.

1 Fusing the 2D Lidar with a Local Map

Ego Planner is a path-planning algorithm open-sourced by Zhejiang University's FAST LAB. It maintains a local map based on GridMap / occupancy grids, and with only simple vision and odometry data it performs online dynamic trajectory planning, enabling a quadrotor to fly through forests, narrow rings and other demanding scenarios; the latest swarm version also supports multi-robot flight.

The standard Ego Planner uses VINS as the visual odometry and a RealSense depth point cloud to build the local map. In this project a 2D lidar replaces VINS, while Ego Planner is still used to build the local map.

The 2D lidar handles: global occupancy map construction and global path navigation;

Ego Planner handles: local map construction and local path planning around low obstacles;

Remaining problem: since Ego Planner targets UAVs and plans 3D trajectories in the height dimension as well, it does not by itself satisfy a ground robot's need to go around obstacles; much of the time it will simply fly over them!

2 Ego Planner Basics

In this project, Cartographer uses the 2D lidar together with the robot's IMU and odometry to build the global map and localize the robot, and it provides the Odom and TF localization information to Ego Planner, which builds the local map from the depth images of a RealSense D435. Since Ego Planner uses map as its default coordinate frame, when navigating with the lidar a fixed transform between the world and map frames has to be published:

<node pkg="tf" type="static_transform_publisher" name="map_to_world" args="0 0 0 0 0 0 /map /world 100"/>

(1) Topic remapping

The topics the planner subscribes to are configured in advanced_param.xml; remaps for the odometry, camera point cloud, camera pose and depth image need to be added:

<remap from="~grid_map/odom" to="/odom_map"/>
<remap from="~grid_map/cloud" to="/camera/depth/color/points"/>
<remap from="~grid_map/pose" to="/odom_pose"/>
<remap from="~grid_map/depth" to="/camera/depth/image_rect_raw"/>

In the Ego Planner source code you can see two sets of input topics, selected by pose_type: with 0 the first two topics are used, i.e. the camera odometry and point cloud, but subscribing to the point cloud directly produces a large amount of data; with 1 the latter pair is used, which requires the depth camera's pose and consumes the depth image directly:

<param name="grid_map/pose_type" value="1"/>
<param name="grid_map/frame_id" value="map"/>

The concrete implementation is in grid_map.cpp:

// mp_.pose_type_ = POSE_STAMPED;
if (mp_.pose_type_ == POSE_STAMPED)
{
  pose_sub_.reset(
      new message_filters::Subscriber<geometry_msgs::PoseStamped>(node_, "grid_map/pose", 50));
  sync_image_pose_.reset(new message_filters::Synchronizer<SyncPolicyImagePose>(
      SyncPolicyImagePose(100), *depth_sub_, *pose_sub_));
  sync_image_pose_->registerCallback(boost::bind(&GridMap::depthPoseCallback, this, _1, _2));
  std::cout << "Pose mode" << std::endl;
}
else if (mp_.pose_type_ == ODOMETRY)
{
  odom_sub_.reset(new message_filters::Subscriber<nav_msgs::Odometry>(
      node_, "grid_map/odom", 100, ros::TransportHints().tcpNoDelay()));
  sync_image_odom_.reset(new message_filters::Synchronizer<SyncPolicyImageOdom>(
      SyncPolicyImageOdom(100), *depth_sub_, *odom_sub_));
  sync_image_odom_->registerCallback(boost::bind(&GridMap::depthOdomCallback, this, _1, _2));
  std::cout << "Odom mode" << std::endl;
}

(2) Publishing Odom data from the 2D lidar TF

For low-latency mapping the ODOMETRY localization mode is generally used, but it requires the camera pose in the world frame. Lidar localization usually only produces a TF transform rather than an odom topic, so this has to be computed and published ourselves. The method is to look up the camera's TF relative to the map frame and publish it as an Odometry message:

pose_pub = n.advertise<geometry_msgs::PoseStamped>("/odom_pose", 10, true);
odom_pub = n.advertise<nav_msgs::Odometry>("/odom_map", 10);
//tf::TransformListener listener;
tf::TransformListener listener(ros::Duration(5)); // buffer 5 s of TF data to avoid lookup errors
ros::Timer timer = n.createTimer(ros::Duration(0.01),
                                 boost::bind(&transformPoint, boost::ref(listener))); // poll the listener every 0.01 s (100 Hz)

Use the listener to query the TF transform:

void transformPoint(const tf::TransformListener &listener)
{ // listen to TF to obtain the robot body pose
  try
  {
    tf::StampedTransform transform;
    // look up the transform from map to camera_link_odom
    listener.waitForTransform("map", "camera_link_odom", ros::Time(0), ros::Duration(0.005));
    listener.lookupTransform("map", "camera_link_odom", ros::Time(0), transform);
    printTf_camera(transform);
  }
  catch (tf::TransformException &ex)
  {
    //ROS_ERROR("%s", ex.what());
  }
}

Publish the Odom message:

void printTf_camera(tf::Transform tf)
{
  //cout<<"vector from reference frame to to child frame: "<<tfVec.getX()<<","<<tfVec.getY()<<","<<tfVec.getZ()<<endl;
  geometry_msgs::PoseStamped position_3d;
  position_3d.pose.position.x = tf.getOrigin().getX();
  position_3d.pose.position.y = tf.getOrigin().getY();
  position_3d.pose.position.z = tf.getOrigin().getZ(); //odom_3d->pose.pose.position.z;
  if (position_3d.pose.position.z < 0.1)
    position_3d.pose.position.z = 0.1;
  position_3d.pose.orientation.x = tf.getRotation().x();
  position_3d.pose.orientation.y = tf.getRotation().y();
  position_3d.pose.orientation.z = tf.getRotation().z();
  position_3d.pose.orientation.w = tf.getRotation().w();
  position_3d.header.stamp = ros::Time::now();
  position_3d.header.frame_id = "map";
  pose_pub.publish(position_3d);

  nav_msgs::Odometry odom;
  odom.header.stamp = ros::Time::now();
  odom.header.frame_id = "map";
  //set the position
  odom.pose.pose.position.x = position_3d.pose.position.x;
  odom.pose.pose.position.y = position_3d.pose.position.y;
  odom.pose.pose.position.z = position_3d.pose.position.z;
  odom.pose.pose.orientation = position_3d.pose.orientation;
  //set the velocity
  odom.child_frame_id = "base_link";
  odom.twist.twist.linear.x = 0;
  odom.twist.twist.linear.y = 0;
  odom.twist.twist.angular.z = 0;
  //odom_pub.publish(odom); // note: odometry publishing is left disabled here; only the pose topic is actually published
}
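The three snippets above are fragments without a main(). For completeness, a minimal node skeleton that ties them together might look like the sketch below; the node name and the exact set of includes are assumptions rather than part of the original project:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PoseStamped.h>
#include <nav_msgs/Odometry.h>
#include <boost/bind.hpp>

// global publishers used by printTf_camera()
ros::Publisher pose_pub;
ros::Publisher odom_pub;

void printTf_camera(tf::Transform tf);                       // defined above
void transformPoint(const tf::TransformListener &listener);  // defined above

int main(int argc, char **argv)
{
  ros::init(argc, argv, "lidar_tf_to_odom"); // node name is an assumption
  ros::NodeHandle n;

  pose_pub = n.advertise<geometry_msgs::PoseStamped>("/odom_pose", 10, true);
  odom_pub = n.advertise<nav_msgs::Odometry>("/odom_map", 10);

  tf::TransformListener listener(ros::Duration(5)); // buffer 5 s of TF
  ros::Timer timer = n.createTimer(ros::Duration(0.01),
                                   boost::bind(&transformPoint, boost::ref(listener))); // 100 Hz

  ros::spin();
  return 0;
}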

The two parameters below determine how the local map decays behind the robot: the smaller the margin, the sooner it disappears. ground_height sets the modeled ground height; low obstacles that you do not want to consider can be masked out by raising this value (a generic sketch of such ground-height screening follows the parameters):

<param name="grid_map/local_map_margin" value="15"/>
<param name="grid_map/ground_height" value="0.05"/>
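As a rough illustration of what screening by ground height means, any return at or below the configured ground height is simply dropped before it can mark a voxel as occupied. This is a generic sketch, not EGO-Planner's internal code; the function name and the PCL usage are mine:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Generic illustration: discard all points at or below ground_height so that obstacles
// shorter than the threshold (e.g. 0.05 m above) never enter the occupancy map.
pcl::PointCloud<pcl::PointXYZ> filterBelowGround(const pcl::PointCloud<pcl::PointXYZ> &in,
                                                 float ground_height)
{
  pcl::PointCloud<pcl::PointXYZ> out;
  out.header = in.header;
  for (const auto &p : in.points)
    if (p.z > ground_height) // keep only points above the modeled ground plane
      out.push_back(p);
  return out;
}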

3 Simple Modifications to Ego Planner for Ground Robots

As mentioned above, Ego Planner was originally designed for 3D trajectory planning of UAVs, so a drone can simply fly over low obstacles, whereas a ground robot can only go around them; the original mapping mechanism therefore has to be modified. Ego Planner manages the local map with GridMap and performs inflation and trajectory planning on it.

I adopted a fairly simple approach: the original mapping algorithm already reconstructs obstacles in space, so we only need to extend each obstacle cell along the vertical axis, connecting it to the ground and to a virtual ceiling. The planned trajectory is then constrained so that it cannot pass over an obstacle and has to go around it instead. By additionally constraining the ground height, obstacles low enough for the quadruped to step over with its gait can be filtered out:

(1) Occupancy grid modification

The simple modification is made in the obstacle-inflation code in grid_map.cpp. The basic idea is to loop over the height at each obstacle's XY position and set the occupancy flag md_.occupancy_buffer_inflate_[idx_inf] to 1:

// inflate obstacles
for (int x = md_.local_bound_min_(0); x <= md_.local_bound_max_(0); ++x)
  for (int y = md_.local_bound_min_(1); y <= md_.local_bound_max_(1); ++y)
    for (int z = md_.local_bound_min_(2); z <= md_.local_bound_max_(2); ++z)
    {
      if (md_.occupancy_buffer_[toAddress(x, y, z)] > mp_.min_occupancy_log_)
      {
#if 1 // doghome: for ground robot
        for (int z1 = 0; z1 <= 100; ++z1)
        {
          inflatePoint(Eigen::Vector3i(x, y, z1 * mp_.resolution_ * 2), inf_step, inf_pts);

          for (int k = 0; k < (int)inf_pts.size(); ++k)
          {
            inf_pt = inf_pts[k];
            int idx_inf = toAddress(inf_pt);
            if (idx_inf < 0 ||
                idx_inf >= mp_.map_voxel_num_(0) * mp_.map_voxel_num_(1) * mp_.map_voxel_num_(2))
            {
              continue;
            }
            md_.occupancy_buffer_inflate_[idx_inf] = 1;
          }
        }
#else
        inflatePoint(Eigen::Vector3i(x, y, z), inf_step, inf_pts);

        for (int k = 0; k < (int)inf_pts.size(); ++k)
        {
          inf_pt = inf_pts[k];
          int idx_inf = toAddress(inf_pt);
          if (idx_inf < 0 ||
              idx_inf >= mp_.map_voxel_num_(0) * mp_.map_voxel_num_(1) * mp_.map_voxel_num_(2))
          {
            continue;
          }
          md_.occupancy_buffer_inflate_[idx_inf] = 1;
        }
#endif
      }
    }

In addition, the following change can be made when publishing the occupancy map, so that the ground region and the virtual ceiling are not displayed and only the inflated obstacle voxels are shown in RViz:

void GridMap::publishMapInflate(bool all_info)
{
  if (map_inf_pub_.getNumSubscribers() <= 0)
    return;

  pcl::PointXYZ pt;
  pcl::PointCloud<pcl::PointXYZ> cloud;

  Eigen::Vector3i min_cut = md_.local_bound_min_;
  Eigen::Vector3i max_cut = md_.local_bound_max_;

  if (all_info)
  {
    int lmm = mp_.local_map_margin_;
    min_cut -= Eigen::Vector3i(lmm, lmm, lmm);
    max_cut += Eigen::Vector3i(lmm, lmm, lmm);
  }

  boundIndex(min_cut);
  boundIndex(max_cut);

  for (int x = min_cut(0); x <= max_cut(0); ++x)
    for (int y = min_cut(1); y <= max_cut(1); ++y)
      for (int z = min_cut(2); z <= max_cut(2); ++z)
      {
        if (md_.occupancy_buffer_inflate_[toAddress(x, y, z)] == 0)
          continue;

        Eigen::Vector3d pos;
        indexToPos(Eigen::Vector3i(x, y, z), pos);
        if (pos(2) > mp_.visualization_truncate_height_ ||
            pos(2) < mp_.map_draw_range_min || pos(2) > mp_.map_draw_range_max) // doghome
          continue;

#if 0 // for ground robot
        for (int z1 = 0; z1 <= 15; ++z1)
        {
          pt.x = pos(0);
          pt.y = pos(1);
          pt.z = z1 * mp_.resolution_;
          cloud.push_back(pt);
        }
#else
        pt.x = pos(0);
        pt.y = pos(1);
        pt.z = pos(2);
#endif
        cloud.push_back(pt);
      }

  cloud.width = cloud.points.size();
  cloud.height = 1;
  cloud.is_dense = true;
  cloud.header.frame_id = mp_.frame_id_;
  sensor_msgs::PointCloud2 cloud_msg;
  pcl::toROSMsg(cloud, cloud_msg);
  map_inf_pub_.publish(cloud_msg);

  // ROS_INFO("pub map");
}

(2) Quadruped trajectory-tracking control

The map changes above let the robot treat any obstacle as something to go around, which is what a ground robot needs; other cases such as passing through a doorway can likewise be handled just by constraining how the voxels are inflated. Trajectory control, however, is still missing. Ego Planner's original planning model targets quadrotors, and for trajectory control it directly outputs position, velocity and acceleration feed-forward commands:

pos = traj_[0].evaluateDeBoorT(t_cur);
vel = traj_[1].evaluateDeBoorT(t_cur);
acc = traj_[2].evaluateDeBoorT(t_cur);

As time advances, the desired command keeps moving forward along the trajectory. A quadruped, although also an omnidirectional platform, has a different model and struggles to track such a trajectory directly, so a simple look-ahead point + PID scheme is used here: a trajectory target ahead of the robot's current position is picked, and PID control on the resulting error produces the body velocity and yaw-rate commands that track the trajectory (a distilled sketch of this controller is given after the full callback further below). Since feedback control is needed, the body TF must first be published as Odom in the same way as the camera pose above and subscribed to in traj_server:

void odomCallback(const nav_msgs::Odometry::ConstPtr& odom_3d)
{
  nav_msgs::Odometry position_3d;
  position_3d.pose.pose.position.x = odom_3d->pose.pose.position.x;
  position_3d.pose.pose.position.y = odom_3d->pose.pose.position.y;
  position_3d.pose.pose.position.z = odom_3d->pose.pose.position.z;
  position_3d.pose.pose.orientation = odom_3d->pose.pose.orientation;
  position_3d.header.stamp = odom_3d->header.stamp;
  position_3d.header.frame_id = "map";
  odom_base = position_3d;

  tf::Quaternion quat;
  tf::quaternionMsgToTF(odom_3d->pose.pose.orientation, quat); // convert the orientation to a tf quaternion
  tf::Matrix3x3(quat).getRPY(roll, pitch, yaw);                // extract roll, pitch and yaw
  //std::cout << roll*57.3 << ' ' << pitch*57.3 << ' ' << yaw*57.3 << std::endl;
  //std::cout << odom_3d->header.stamp << ' ' << odom_3d->pose.pose.position.x << ' ' << odom_3d->pose.pose.position.y << ' ' << odom_3d->pose.pose.position.z << std::endl;
}

The trajectory-control function is then modified to publish cmd_vel commands:

void cmdCallback(const ros::TimerEvent &e)
{
  /* no publishing before receive traj_ */
  geometry_msgs::Twist cmd_vel;
  if (!receive_traj_)
  {
    cmd_vel.linear.x = 0;
    cmd_vel.linear.y = 0;
    cmd_vel.linear.z = 0;
    cmd_vel.angular.z = 0;
    cmd_vel_pub.publish(cmd_vel);
    return;
  }

  ros::Time time_now = ros::Time::now();
  double t_cur = (time_now - start_time_).toSec();

  Eigen::Vector3d pos(Eigen::Vector3d::Zero()), vel(Eigen::Vector3d::Zero()),
      acc(Eigen::Vector3d::Zero()), pos_f;
  std::pair<double, double> yaw_yawdot(0, 0);

  static ros::Time time_last = ros::Time::now();
  if (t_cur < traj_duration_ && t_cur >= 0.0)
  {
    pos = traj_[0].evaluateDeBoorT(t_cur);
    vel = traj_[1].evaluateDeBoorT(t_cur);
    acc = traj_[2].evaluateDeBoorT(t_cur);

    /*** calculate yaw ***/
    yaw_yawdot = calculate_yaw(t_cur, pos, time_now, time_last);
    /*** calculate yaw ***/

    double tf = min(traj_duration_, t_cur + 2.0);
    pos_f = traj_[0].evaluateDeBoorT(tf);
  }
  else if (t_cur >= traj_duration_)
  {
    /* hover when finish traj_ */
    pos = traj_[0].evaluateDeBoorT(traj_duration_);
    vel.setZero();
    acc.setZero();

    yaw_yawdot.first = last_yaw_;
    yaw_yawdot.second = 0;

    pos_f = pos;
  }
  else
  {
    cout << "[Traj server]: invalid time." << endl;
  }
  time_last = time_now;

  cmd.header.stamp = time_now;
  cmd.header.frame_id = "world";
  cmd.trajectory_flag = quadrotor_msgs::PositionCommand::TRAJECTORY_STATUS_READY;
  cmd.trajectory_id = traj_id_;

  cmd.position.x = pos(0);
  cmd.position.y = pos(1);
  cmd.position.z = pos(2);

  cmd.velocity.x = vel(0);
  cmd.velocity.y = vel(1);
  cmd.velocity.z = vel(2);

  cmd.acceleration.x = acc(0);
  cmd.acceleration.y = acc(1);
  cmd.acceleration.z = acc(2);

  cmd.yaw = yaw_yawdot.first;
  cmd.yaw_dot = yaw_yawdot.second;

  last_yaw_ = cmd.yaw;

  pos_cmd_pub.publish(cmd);

#if 0 // speed control
  float temp[3] = {cmd.velocity.x, cmd.velocity.y, 0};
  float xd = cos(-yaw) * temp[0] - sin(-yaw) * temp[1];
  float yd = sin(-yaw) * temp[0] + cos(-yaw) * temp[1];
  cmd_vel.linear.x = xd;
  cmd_vel.linear.y = yd;
  cmd_vel.linear.z = 0; //cmd.velocity.x;
  cmd_vel.angular.z = yaw_yawdot.second;
#else // pose control
  float traj_dt = 0.1;
  float dis = 0;
  float dis_min = 99;
  int id_closet = 0;
  float pos_now[3] = {odom_base.pose.pose.position.x,
                      odom_base.pose.pose.position.y,
                      odom_base.pose.pose.position.z};
  float pos_tart[3];

  // find the trajectory sample closest to the current robot position
  for (int i = 0; i < 100; i++)
  {
    double t_cur = traj_dt * i;
    if (t_cur < traj_duration_ && t_cur >= 0.0)
    {
      pos = traj_[0].evaluateDeBoorT(t_cur);
      dis = sqrt((pos[0] - pos_now[0]) * (pos[0] - pos_now[0]) +
                 (pos[1] - pos_now[1]) * (pos[1] - pos_now[1]));
      if (dis < dis_min)
      {
        dis_min = dis;
        id_closet = i;
      }
    }
    else
      break;
  }

  // pick a look-ahead target a fixed time ahead of the closest sample
  float t_forward = time_forward_;
  t_cur = traj_dt * id_closet + t_forward;
  if (t_cur > traj_duration_)
    t_cur = traj_duration_;
  pos = traj_[0].evaluateDeBoorT(t_cur);
  pos_tart[0] = pos[0];
  pos_tart[1] = pos[1];

  // speed control
  float temp[3] = {0, 0, 0};
  temp[0] = limit_ws(pos_tart[0] - pos_now[0], -0.25, 0.25) * kp_pos;
  temp[1] = limit_ws(pos_tart[1] - pos_now[1], -0.25, 0.25) * kp_pos;
  temp[0] = limit_ws(temp[0], -max_spd_x, max_spd_x);
  temp[1] = limit_ws(temp[1], -max_spd_x, max_spd_x);
  float xd = cos(-yaw) * temp[0] - sin(-yaw) * temp[1];
  float yd = sin(-yaw) * temp[0] + cos(-yaw) * temp[1];

#if 0
  t_forward = time_forward_;
  t_cur = traj_dt * id_closet + t_forward;
  if (t_cur > traj_duration_)
    t_cur = traj_duration_;
  pos = traj_[0].evaluateDeBoorT(t_cur);
  pos_tart[0] = pos[0];
  pos_tart[1] = pos[1];
#endif

  double yaw_temp = atan2(pos_tart[1] - pos_now[1], pos_tart[0] - pos_now[0]);
  float yaw_spd = To_180_degreesw(yaw_temp * 57.3 - yaw * 57.3) / 57.3 * kp_yaw;
  yaw_spd = limit_ws(yaw_spd, -max_spd_yaw, max_spd_yaw);

  // reduce the linear speed while the heading error is large
  float yaw_weight = 0;
  yaw_weight = limit_ws(fabs(yaw_spd / 0.4), 0, 1);
  if (yaw_weight < 0.2)
    yaw_weight = 0;

  xd = limit_ws(xd, -max_spd_x, max_spd_x);
  yd = limit_ws(yd, -max_spd_y, max_spd_y);
  cmd_vel.linear.x = xd * (1 - yaw_weight) * 1;
  cmd_vel.linear.y = yd * (1 - yaw_weight) * 1;
  cmd_vel.linear.z = 0;

  // stop turning and then stop moving when close to the trajectory end point
  pos = traj_[0].evaluateDeBoorT(traj_duration_);
  dis = sqrt((pos[0] - pos_now[0]) * (pos[0] - pos_now[0]) +
             (pos[1] - pos_now[1]) * (pos[1] - pos_now[1]));
  if (fabs(dis) > 0.1)
    cmd_vel.angular.z = yaw_spd;
  else
    cmd_vel.angular.z = 0;
  if (fabs(dis) < 0.025)
  {
    cmd_vel.linear.x = cmd_vel.linear.y = 0; //yd*(1-yaw_weight)*1;
    cmd_vel.linear.z = 0;
  }
#endif

  //ROS_INFO("%f %f", t_cur, traj_duration_);
  //ROS_INFO("Linear Components:%f [%f,%f,%f]", yaw_weight, cmd_vel.linear.x, cmd_vel.linear.y, -cmd_vel.linear.z);
  //ROS_INFO("Angular Components:[%f,%f,%f]", cmd_vel.angular.x, cmd_vel.angular.y, cmd_vel.angular.z);
  cmd_vel_pub.publish(cmd_vel);
}
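The callback above mixes the original quadrotor command path with the new ground-robot branch. Boiled down, the look-ahead + proportional control it implements is roughly the sketch below; this is an illustration only, with the trajectory type taken as a template parameter (standing in for traj_[0] above), the gains passed in explicitly instead of the globals kp_pos, kp_yaw and time_forward_, and the saturation limits and slow-down near the goal omitted:

#include <Eigen/Eigen>
#include <algorithm>
#include <cmath>

struct BodyCmd { double vx; double vy; double wz; }; // body-frame velocities + yaw rate

// TrajT stands for the planner's position B-spline, i.e. anything exposing evaluateDeBoorT(t).
template <typename TrajT>
BodyCmd trackTrajectory(const TrajT &pos_traj, double traj_duration,
                        const Eigen::Vector2d &pos_now, double yaw_now,
                        double lookahead_time, double kp_pos, double kp_yaw)
{
  // 1) scan the trajectory and find the sample closest to the robot
  double t_closest = 0.0, d_min = 1e9;
  for (double t = 0.0; t < traj_duration; t += 0.1)
  {
    Eigen::Vector3d p = pos_traj.evaluateDeBoorT(t);
    double d = (p.head<2>() - pos_now).norm();
    if (d < d_min) { d_min = d; t_closest = t; }
  }

  // 2) pick the target a fixed time ahead of that closest sample
  double t_target = std::min(t_closest + lookahead_time, traj_duration);
  Eigen::Vector3d target = pos_traj.evaluateDeBoorT(t_target);

  // 3) proportional control on the planar position error, rotated into the body frame
  double ex = kp_pos * (target(0) - pos_now(0));
  double ey = kp_pos * (target(1) - pos_now(1));
  BodyCmd cmd;
  cmd.vx =  std::cos(yaw_now) * ex + std::sin(yaw_now) * ey;
  cmd.vy = -std::sin(yaw_now) * ex + std::cos(yaw_now) * ey;

  // 4) proportional control on the heading toward the look-ahead point
  double yaw_err = std::atan2(target(1) - pos_now(1), target(0) - pos_now(0)) - yaw_now;
  yaw_err = std::atan2(std::sin(yaw_err), std::cos(yaw_err)); // wrap to [-pi, pi]
  cmd.wz = kp_yaw * yaw_err;
  return cmd;
}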

(3) Continuous trajectory replanning

One last point: Ego Planner does not replan continuously. Besides a fixed replanning period, it mainly relies on the trajectory-tracking error, i.e. it only replans once the robot has deviated too far from the trajectory. A ground robot needs to replan in real time, so the replan_fsm state machine has to be modified:

case EXEC_TRAJ:
{
  /* determine if need to replan */
  //en_replan=0;
  replan_cnt++;
  LocalTrajData *info = &planner_manager_->local_data_;
  ros::Time time_now = ros::Time::now();
  double t_cur = 0.0; // 0;//(time_now - info->start_time_).toSec();//doghome
  t_cur = min(info->duration_, t_cur);
  double t_cur1 = (time_now - info->start_time_).toSec(); //doghome
  t_cur1 = min(info->duration_, t_cur1);

  Eigen::Vector3d pos = info->position_traj_.evaluateDeBoorT(t_cur);

  /* && (end_pt_ - pos).norm() < 0.5 */
  if ((target_type_ == TARGET_TYPE::PRESET_TARGET) &&
      (wp_id_ < waypoint_num_ - 1) &&
      (end_pt_ - pos).norm() < no_replan_thresh_)
  {
    wp_id_++;
    planNextWaypoint(wps_[wp_id_]);
  }
  else if ((local_target_pt_ - end_pt_).norm() < 0.15) // close to the global target doghome
  {
    if (t_cur1 > info->duration_ - 1e-2)
    {
      have_target_ = false;
      have_trigger_ = false;

      if (target_type_ == TARGET_TYPE::PRESET_TARGET)
      {
        wp_id_ = 0;
        planNextWaypoint(wps_[wp_id_]);
      }

      std::cout << "Reaching Waypoints_" << std::endl;
      changeFSMExecState(WAIT_TARGET, "FSM");
      goto force_return;
      // return;
    }
    else if ((end_pt_ - pos).norm() > no_replan_thresh_ && t_cur1 > replan_thresh_)
    {
      changeFSMExecState(REPLAN_TRAJ, "FSM");
    }
  }
  else if (t_cur1 > replan_thresh_)
  {
    changeFSMExecState(REPLAN_TRAJ, "FSM");
    replan_cnt = 0;
    std::cout << "replan_cnt replan_thresh_" << std::endl;
  }

  break;
}

That is, a periodic counter check is added while the trajectory is being executed so that the state machine jumps back to the replanning state, and replanning always uses the global replanning function:

case REPLAN_TRAJ: //moshi
{
  bool success = planFromGlobalTraj(20); // zx-todo

  if (success)
  {
    changeFSMExecState(EXEC_TRAJ, "FSM");
    flag_escape_emergency_ = true;
    publishSwarmTrajs(false);
  }
  break;
}

4 Test Results

Finally, this scheme was tested on HelloRobotic's Tinymal quadruped platform, achieving indoor navigation in which the 2D lidar and vision complement each other: the low-cost localization of the lidar is retained, while its inability to detect and avoid low obstacles is compensated for by the depth camera:

For the related source code, please refer to: