The demand for automation on construction sites has been rising, with the goal of improving efficiency, safety, and scalability. This thesis develops an automated pick-and-place system using a UR10e robot, a Robotiq 2F-85 gripper, and a ZED Mini stereo camera within the Robot Operating System (ROS) 2 framework. The system is designed to recognize construction materials, specifically bricks, estimate their poses, and manipulate them in a construction-like environment. The focus is on combining a You Only Look Once (YOLO)-based object detection model, trained on a custom dataset, with stereo vision to estimate object poses and automate the bricklaying operation both in a Gazebo simulation environment and at the Robot Fabrication Lab. The system's performance is assessed using metrics for detection accuracy, pose estimation accuracy, grasp centering, execution latency, and success rate. Both the simulation and real-world tests highlight the challenges of detection and robot manipulation in dynamic environments. Results of the automated bricklaying show very high detection accuracy but some error in pose estimation, requiring manual adjustments such as centering the grasping position. This thesis demonstrates the feasibility of vision-based robotic systems for automation in construction while discussing the further adjustments and improvements needed for robust manipulation and usability in real-world scenarios.