
ROS - RealSense D435/D455 - YOLO Integration

Configuring the RealSense Gazebo simulation plugin for ROS

Contents

1. RealSense D435/D455 hardware - YOLO - detect
    References
    Usage
2. ROS with the RealSense D435/D455 hardware
    References
3. ROS - RealSense D435/D455 simulation in Gazebo
    References
    Usage
4. Saving RGB and depth images from the RealSense D435/D455 in ROS
    Explanation
    Code
5. ROS - RealSense D435 simulation - YOLO - detect
    Code

End result: in the Gazebo simulation environment, the RealSense D435 camera detects the target objects in the frame and returns their (x, y, h), i.e. the pixel coordinates of the bounding-box centre plus the depth.

1. RealSense D435/D455 hardware - YOLO - detect

References

Open-source reference for realsense-D455-YOLOV5:

realsense-D455-YOLOV5

Usage

On Windows or Ubuntu: set up the environment, plug in the device, and run python realsensedetect.py. It detects COCO classes and reports their image coordinates and depth. To print the (x, y, h) of a specific target, find the following lines in realsensedetect.py and add the last two:

label = '%s %.2f%s' % (names[int(cls)], np.mean(distance_list), 'm')
plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
c1, c2 = (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3]))
# the three lines above are only landmarks for finding the spot; add the two lines below
if names[int(cls)] == 'mouse':  # using a mouse as the example target
    print('xyh', mid_pos[0], mid_pos[1], np.mean(distance_list))

Here mid_pos is the box-centre pixel and distance_list holds the filtered depth samples around it (both are computed a few lines earlier in realsensedetect.py), so the print gives pixel coordinates plus mean depth.

2. ROS with the RealSense D435/D455 hardware

References

GitHub - IntelRealSense/realsense-ros at ros1-legacy

1. If you hit this error:

(messenger-libusb.cpp:42) control_transfer returned error, index: 300, error

Check that the USB cable is good and that the camera is connected properly.
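As a quick sanity check (assuming the librealsense tools are installed alongside the ROS wrapper), confirm the camera enumerates at all:

rs-enumerate-devices

If nothing is listed, the problem is the cable, port, or power rather than the ROS wrapper.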

2. If you hit this error:

[camera/realsense2_camera_manager-1] process has died [pid 11552, exit code 127, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=realsense2_camera_manager __log:=/home/nico/.ros/log/8a4aa5aa-f974-11ed-ac63-a333cc6ddd13/camera-realsense2_camera_manager-1.log].

This is typically caused by a duplicate installation of ddynamic_reconfigure; remove the apt copy:

sudo apt-get remove ros-melodic-ddynamic-reconfigure
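After removing it, rebuilding the catkin workspace and relaunching should work again; a minimal sequence (adjust the workspace path to yours; the launch-file name is from the ros1-legacy wrapper):

cd ~/catkin_ws && catkin_make
roslaunch realsense2_camera rs_camera.launch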

3. ROS - RealSense D435/D455 simulation in Gazebo

References

Source code:

IntelRealSense / realsense-ros

issaiass / realsense2_description, which provides the Gazebo-compatible plugin

Documentation

ROS1 wrapper documentation: https://dev.intelrealsense.com/docs/ros1-wrapper

Usage

Run:

roslaunch realsense2_description view_d435_model_rviz_gazebo.launch

You should then see the topics:

/camera/color/image_raw

/camera/depth/image_raw
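To verify that both streams are actually publishing, the standard ROS CLI tools work as usual:

rostopic hz /camera/color/image_raw
rostopic echo -n 1 /camera/depth/image_raw/encoding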

4. Saving RGB and depth images from the RealSense D435/D455 in ROS

Explanation

/camera/color/image_raw is straightforward to save.

/camera/depth/image_raw is more error-prone. First confirm the encoding:

rostopic echo /camera/depth/image_raw/encoding

bridge.imgmsg_to_cv2() converts the ROS image message into an OpenCV-compatible array, but saving the depth result directly tends to produce an almost entirely black picture; the normalization step in the code below (adapted from an online answer the original post referenced) addresses this.
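A brief aside on why the naive save looks black: with the 16UC1 encoding the pixel values are typically depth in millimetres, so an indoor scene only uses a few thousand counts out of the 16-bit range of 65535, which renders as near-black. A minimal sketch of stretching one frame into a viewable 8-bit image (the depth_16u variable name is illustrative):

import cv2
import numpy as np

# depth_16u: one 16UC1 frame, e.g. bridge.imgmsg_to_cv2(msg, "16UC1")
vis = cv2.normalize(depth_16u.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth_vis.png", vis.astype(np.uint8))  # stretched for viewing only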

Code

#!/usr/bin/env python3
# coding:utf-8
import os

import cv2
import numpy as np
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class Dataset_Process:
    def __init__(self):
        rospy.init_node('img_process_node', anonymous=True)
        self.bridge = CvBridge()
        rospy.Subscriber('/camera/color/image_raw', Image, self.image_raw_callback)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.image_depth_callback)

    def image_raw_callback(self, data):
        self.color_image = data
        self.color_image_cv = self.bridge.imgmsg_to_cv2(data, "bgr8")

    def image_depth_callback(self, data):
        self.depth_image = data
        self.depth_image_cv = self.bridge.imgmsg_to_cv2(data, "16UC1")  # use "32FC1" if that is what the encoding check reports
        self.depth_array = np.array(self.depth_image_cv, dtype=np.float32)
        self.depth_array[np.isnan(self.depth_array)] = 0
        # stretch to [0, 1] so the depth image becomes viewable instead of almost black
        cv2.normalize(self.depth_array, self.depth_array, 0, 1, cv2.NORM_MINMAX)

    def saveimg(self, imgsaving, filename):
        filestr = "/home/lidia/catkin_ws/src/offboard_node/images/" + filename + ".png"
        os.makedirs(os.path.dirname(filestr), exist_ok=True)  # cv2.imwrite fails silently if the folder is missing
        cv2.imwrite(filestr, imgsaving)


if __name__ == '__main__':
    data_sample = Dataset_Process()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        rate.sleep()
        try:
            time_now_secs = int(data_sample.color_image.header.stamp.secs)
            time_now_nsecs = int(data_sample.color_image.header.stamp.nsecs)
            rgb_file_name = "rgb/img" + str(time_now_secs) + "_" + str(time_now_nsecs)
            depth_file_name = "depth/img" + str(time_now_secs) + "_" + str(time_now_nsecs)
            data_sample.saveimg(data_sample.color_image_cv, rgb_file_name)
            # this saves the raw 16-bit depth; to save the stretched viewable version instead:
            # data_sample.saveimg((data_sample.depth_array * 255).astype(np.uint8), depth_file_name)
            data_sample.saveimg(data_sample.depth_image_cv, depth_file_name)
            print("saving..")
        except AttributeError:  # first frames have not arrived yet
            print("try again")
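Assuming the Gazebo launch from section 3 (or the hardware launch from section 2) is already publishing the two topics, the node can be run directly; the save path above is hard-coded, so change it to your own workspace first:

python3 save_rgb_depth.py   # the script file name here is illustrative

RGB frames land in the rgb/ subfolder and depth frames in depth/, each named by the message timestamp.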

5. ROS - RealSense D435 simulation - YOLO - detect

Combining the previous sections, realsensedetect.py from section 1 can be modified into the code below, so that in the Gazebo simulation environment the RealSense D435 camera detects the target objects in the frame and returns their (x, y, h).

Code

import argparse
import os
import shutil

import cv2
import numpy as np
import torch
import torch.backends.cudnn as cudnn
from numpy import random

from models.experimental import attempt_load
from utils.general import (
    check_img_size, non_max_suppression, scale_coords,
    xyxy2xywh, plot_one_box, set_logging)
from utils.torch_utils import select_device, time_synchronized
from utils.datasets import letterbox

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class Dataset_Process:
    def __init__(self):
        rospy.init_node('img_process_node', anonymous=True)
        self.bridge = CvBridge()
        rospy.Subscriber('/camera/color/image_raw', Image, self.image_raw_callback)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.image_depth_callback)
        self.detect()

    def image_raw_callback(self, data):
        self.color_image = data
        self.color_image_cv = self.bridge.imgmsg_to_cv2(data, "bgr8")

    def image_depth_callback(self, data):
        self.depth_image = data
        self.depth_image_cv = self.bridge.imgmsg_to_cv2(data, "16UC1")  # use "32FC1" if that is what the encoding check reports
        self.depth_array = np.array(self.depth_image_cv, dtype=np.float32)
        self.depth_array[np.isnan(self.depth_array)] = 0
        # min-max normalization makes the values relative, not metres; skip it
        # (or divide the 16UC1 image by 1000.0) if you need metric depth
        cv2.normalize(self.depth_array, self.depth_array, 0, 1, cv2.NORM_MINMAX)

    def detect(self, save_img=False):
        out, source, weights, view_img, save_txt, imgsz = \
            opt.save_dir, opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size

        # Initialize
        set_logging()
        device = select_device(opt.device)
        if os.path.exists(out):  # output dir
            shutil.rmtree(out)   # delete dir
        os.makedirs(out)         # make new dir
        half = device.type != 'cpu'  # half precision only supported on CUDA

        # Load model
        model = attempt_load(weights, map_location=device)  # load FP32 model
        imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size
        if half:
            model.half()  # to FP16

        view_img = True
        cudnn.benchmark = True  # speeds up inference at a constant image size

        # Get names and colors
        names = model.module.names if hasattr(model, 'module') else model.names
        colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))]

        # Run inference
        img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
        _ = model(img.half() if half else img) if device.type != 'cpu' else None  # warm-up pass

        # The pyrealsense2 pipeline from the original realsensedetect.py
        # (rs.pipeline / rs.config / rs.align) is not needed here:
        # the frames now come from the ROS topics instead.

        # wait until the first colour and depth frames have arrived
        while not (hasattr(self, 'color_image_cv') and hasattr(self, 'depth_array')):
            rospy.sleep(0.1)

        while not rospy.is_shutdown():
            sources = [source]
            imgs = [None]
            path = sources
            depth_image = self.depth_array
            imgs[0] = self.color_image_cv  # detection runs on the RGB image
            im0s = imgs.copy()
            img = [letterbox(x, new_shape=imgsz)[0] for x in im0s]
            img = np.stack(img, 0)
            img = img.transpose(0, 3, 1, 2)  # HWC to CHW
            img = np.ascontiguousarray(img, dtype=np.float16 if half else np.float32)
            img /= 255.0  # 0 - 255 to 0.0 - 1.0

            # Get detections
            img = torch.from_numpy(img).to(device)
            if img.ndimension() == 3:
                img = img.unsqueeze(0)
            t1 = time_synchronized()
            pred = model(img, augment=opt.augment)[0]

            # Apply NMS
            pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres,
                                       classes=opt.classes, agnostic=opt.agnostic_nms)
            t2 = time_synchronized()

            for i, det in enumerate(pred):  # detections per image
                p, s, im0 = path[i], '%g: ' % i, im0s[i].copy()
                s += '%gx%g ' % img.shape[2:]  # print string
                gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
                if det is not None and len(det):
                    # Rescale boxes from img_size to im0 size
                    det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                    # Print results
                    for c in det[:, -1].unique():
                        n = (det[:, -1] == c).sum()  # detections per class
                        s += '%g %ss, ' % (n, names[int(c)])

                    # Write results
                    for *xyxy, conf, cls in reversed(det):
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh (only needed for --save-txt)
                        distance_list = []
                        # box-centre pixel: ((left + right) / 2, (top + bottom) / 2)
                        mid_pos = [int((int(xyxy[0]) + int(xyxy[2])) / 2),
                                   int((int(xyxy[1]) + int(xyxy[3])) / 2)]
                        # depth search radius from the shorter box side
                        min_val = min(abs(int(xyxy[2]) - int(xyxy[0])), abs(int(xyxy[3]) - int(xyxy[1])))
                        randnum = 40
                        quarter = max(min_val // 4, 1)  # guard against a zero-width sampling range
                        for _ in range(randnum):
                            bias = random.randint(-quarter, quarter)
                            # numpy indexes [row, col], i.e. [y, x]; clamp to stay inside the frame
                            py = int(np.clip(mid_pos[1] + bias, 0, depth_image.shape[0] - 1))
                            px = int(np.clip(mid_pos[0] + bias, 0, depth_image.shape[1] - 1))
                            dist = depth_image[py, px]
                            if dist:
                                distance_list.append(dist)
                        distance_list = np.array(distance_list)
                        # keep the middle half of the sorted samples (median filter)
                        distance_list = np.sort(distance_list)[randnum // 2 - randnum // 4:
                                                               randnum // 2 + randnum // 4]
                        label = '%s %.2f%s' % (names[int(cls)], np.mean(distance_list), 'm')  # class name plus distance
                        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
                        c1, c2 = (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3]))
                        if names[int(cls)] == 'airplane':  # print (x, y, h) for the chosen target class
                            print('xyh', mid_pos[0], mid_pos[1], np.mean(distance_list))

                # Print time (inference + NMS)
                print('%sDone. (%.3fs)' % (s, t2 - t1))

                # Stream results
                if view_img:
                    cv2.imshow(p, im0)
                    if cv2.waitKey(1) == ord('q'):  # q to quit
                        cv2.destroyAllWindows()
                        return


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default='yolov5l.pt', help='model.pt path(s)')  # downloaded automatically on first run
    parser.add_argument('--source', type=str, default='inference/images', help='source')  # file/folder, 0 for webcam
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-dir', type=str, default='inference/output', help='directory to save results')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--update', action='store_true', help='update all models')
    opt = parser.parse_args()

    with torch.no_grad():  # no gradient tracking during inference
        data_sample = Dataset_Process()
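A minimal way to run the whole pipeline (adjust paths and the target class to your setup): start the simulation, then launch the detector from inside the YOLOv5 repository that contains the modified script:

roslaunch realsense2_description view_d435_model_rviz_gazebo.launch
python3 realsensedetect.py

Each frame prints the detected classes, and for the chosen class (here 'airplane') the (x, y, h) of its box centre; note that with the min-max normalization in the depth callback, h is a relative value rather than metres unless that step is removed.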
