
Collaborative Robots Streamline Curtain Wall Installation

A recent article accepted for publication in Biomimetic Intelligence and Robotics proposed a human-robot collaboration system, built on the “human-centered with machine support” concept, for handling large-scale, heavy objects such as building curtain walls according to the operator’s real-time intentions.

Study: Human-robot collaborative handling of curtain walls using dynamic motion primitives and real-time human intention recognition. Image Credit: Holmes Su/Shutterstock.com

Background

Due to the complex environment of construction sites, manual assembly of building curtain walls often requires multiple workers. Robots offer a promising alternative and are gradually replacing manual handling. However, traditional robotic handling relies on complex guidance programs written in advance, an approach that is inefficient in complex scenarios and tasks.

Robot skill-learning based on dynamic motion primitives (DMP) theory gives robots enhanced flexibility, learning, and generalization abilities. However, skill-learning for handling tasks generally focuses on optimizing the position of the robot’s end-effector while ignoring speed optimization during motion. Additionally, it is important to understand the operator’s intentions during handling, as these can vary in real time with the operating environment and task conditions.

Effective human-machine collaboration in handling tasks requires the robot to infer the operator’s motion intentions from sensor data. Thus, this study demonstrated a human-robot collaborative curtain wall handling system based on a skill-learning approach.

Methods

Building facades were the operational targets in this study, for which a human-robot collaborative handling system was designed to plan trajectories in real time according to the operator’s intentions. The system comprised three main modules: intent understanding, motion trajectory planning, and execution.

The intent understanding module collected real-time force information from the operator’s grip on the curtain wall through a six-axis force sensor (M4313M4B model from Sunrise Instruments). It also read the motion information of the robot’s (UR5) end-effector to estimate whether the operator intended to accelerate, decelerate, or maintain the current speed. A Kalman filtering algorithm was applied to the readings of the sensor mounted on the curtain wall to smooth the force curves.
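To illustrate the idea, the following is a minimal sketch of how force readings might be smoothed and mapped to intentions. It assumes a scalar constant-state Kalman filter and simple deadband thresholds; the paper’s actual estimator, noise parameters, and thresholds are not reported here and may differ.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Smooth a 1-D force trace z with a scalar Kalman filter (illustrative q, r)."""
    x, p = z[0], 1.0                # state estimate and its variance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                      # predict: variance grows by process noise q
        gain = p / (p + r)          # Kalman gain against measurement noise r
        x += gain * (zk - x)        # correct with the new measurement
        p *= (1.0 - gain)
        out[k] = x
    return out

def classify_intention(force_along_path, deadband=2.0):
    """Accelerate / decelerate / hold, from the guiding force sign (assumed rule)."""
    if force_along_path > deadband:
        return "accelerate"
    if force_along_path < -deadband:
        return "decelerate"
    return "hold"

# Example: a noisy 5 N pull along the handling direction
raw = 5.0 + np.random.normal(0.0, 1.5, 200)
print(classify_intention(kalman_smooth(raw)[-1]))  # -> "accelerate"
```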

The motion trajectory planning module generated robot motion trajectories at different speeds via trajectory learning and generalization models. Finally, the execution module integrated real-time human intentions with the planned trajectories to dynamically generate new trajectories, facilitating the curtain wall assembly.
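Conceptually, the execution step can be thought of as mapping each recognized intention to a playback-speed adjustment for the planned trajectory. The sketch below is only an illustration of that idea; the step size and speed limits are invented for the example and do not come from the paper.

```python
def update_speed(speed, intention, step=0.1, lo=0.2, hi=1.5):
    """Nudge the trajectory playback-speed factor toward the operator's intent.

    step, lo, and hi are illustrative values, not taken from the paper.
    """
    if intention == "accelerate":
        return min(hi, speed + step)
    if intention == "decelerate":
        return max(lo, speed - step)
    return speed  # "hold" leaves the current speed unchanged

# Example: the operator pulls forward twice, then holds
speed = 1.0
for intent in ("accelerate", "accelerate", "hold"):
    speed = update_speed(speed, intent)
print(speed)  # -> 1.2
```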

The robot’s skill-learning framework combined DMP-based trajectory learning and generalization with human intention understanding. A time-dependent nonlinear control term was added so that the trajectory learning model could fit a variety of trajectory shapes.
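For readers unfamiliar with DMPs, the following one-dimensional sketch shows the standard formulation: a forcing term is learned from a single demonstration and then replayed with a new start and goal. It uses classic Ijspeert-style equations with illustrative gains, not the paper’s modified model with its added time-dependent term.

```python
import numpy as np

K, D, ALPHA = 150.0, 2.0 * np.sqrt(150.0), 4.0  # spring, damper (critical), phase decay

def learn_forcing(y, dt):
    """Invert the DMP dynamics to recover the forcing term of a demo y(t)."""
    tau = len(y) * dt
    yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
    s = np.exp(-ALPHA * np.linspace(0.0, 1.0, len(y)))   # canonical phase s(t)
    f = (tau**2 * ydd - K * (y[-1] - y) + D * tau * yd) / (y[-1] - y[0])
    return s, f

def reproduce(s_demo, f_demo, y0, goal, dt, tau):
    """Integrate the DMP with the learned forcing term and a new start/goal."""
    y, v, s, path = y0, 0.0, 1.0, []
    for _ in range(int(tau / dt)):
        f = np.interp(s, s_demo[::-1], f_demo[::-1])     # interp needs ascending x
        vdot = (K * (goal - y) - D * v + (goal - y0) * f) / tau
        y += (v / tau) * dt
        v += vdot * dt
        s += (-ALPHA * s / tau) * dt                     # canonical system decay
        path.append(y)
    return np.array(path)

# Learn from a curved demo, then generalize to a new start and goal
dt = 0.01
t = np.arange(0.0, 1.0, dt)
demo = 0.5 * np.sin(np.pi * t) + t
s_d, f_d = learn_forcing(demo, dt)
new_path = reproduce(s_d, f_d, y0=0.2, goal=1.5, dt=dt, tau=len(demo) * dt)
print(new_path[0], new_path[-1])   # starts near 0.2, converges toward 1.5
```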

The operation of the designed human-machine cooperative curtain wall handling system was demonstrated experimentally on a platform comprising an intent acquisition module and a human-machine cooperation control module. The effectiveness of robot trajectory learning and generalization was assessed by testing the robot’s movement along a given target path on the platform.

Results and Discussion

During collaborative handling, the robot gathered information from the operator via the force sensor to interpret human intentions. It then communicated with a computer through a serial port and executed motion instructions according to the planned trajectories, integrating human intentions into the collaborative handling process.

The experimental results demonstrated the effectiveness of the trajectory learning model in learning and reproducing the teaching trajectory, with a generalization error of less than 0.278%. Notably, the learned robot motion trajectories followed the same trend as the teaching trajectories. Moreover, when tested with different starting and target points, the errors of the motion trajectories generated by the trajectory learning and reproduction model were less than 0.07%.

In the experiment on linear motion in the human-machine cooperative space, the motion trajectories along the X, Y, and Z axes showed initial acceleration followed by deceleration, with respective trajectory errors of 0.03377, 0.02377, and 0.2250. Thus, the robot efficiently recognized acceleration and deceleration intentions during the motion stage.

A transport experiment involving multidirectional and curved movement in space was also performed to verify the accuracy of the robot’s intention understanding. The average trajectory error in this case was less than 0.00055 m, with 100% intention recognition accuracy. Thus, the robot could identify the operator’s intention and cooperate with the operator in handling the curtain wall. Furthermore, whereas traditional curtain wall handling requires at least three workers, the proposed method required only one operator to provide intention guidance while the robot performed the handling, improving handling efficiency by over 60%.

Conclusion

Overall, the researchers successfully designed a collaborative handling system for curtain walls by leveraging human intention understanding to enhance robot handling skills. In this “human-centered with machine support” design, the robot served as the main load-bearer, guided by the operator to handle curtain walls.

The human-robot integration ensured a smooth, flexible, and labor-saving handling process, enhancing accuracy and safety in curtain wall assembly tasks. The researchers suggest optimizing the handling process to further enhance efficiency and improve the flexibility of robots in such collaborative handling scenarios.

Journal Reference

Li, F., Sun, H., Liu, E., & Du, F. (2024). Human-robot collaborative handling of curtain walls using dynamic motion primitives and real-time human intention recognition. Biomimetic Intelligence and Robotics, 100183. DOI: 10.1016/j.birob.2024.100183, https://www.sciencedirect.com/science/article/pii/S266737972400041X



Written by

Nidhi Dhull

Nidhi Dhull is a freelance scientific writer, editor, and reviewer with a PhD in Physics. She has extensive research experience in materials science, focused mainly on the biosensing applications of thin films. During her PhD, she developed a noninvasive immunosensor for the cortisol hormone and a paper-based biosensor for E. coli bacteria. Her work has been published in reputed journals from publishers such as Elsevier and Taylor & Francis, and she has contributed significantly to several pending patents.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Dhull, Nidhi. (2024, September 19). Collaborative Robots Streamline Curtain Wall Installation. AZoBuild. Retrieved on September 19, 2024 from https://www.azobuild.com/news.aspx?newsID=23609.

  • MLA

    Dhull, Nidhi. "Collaborative Robots Streamline Curtain Wall Installation". AZoBuild. 19 September 2024. <https://www.azobuild.com/news.aspx?newsID=23609>.

  • Chicago

    Dhull, Nidhi. "Collaborative Robots Streamline Curtain Wall Installation". AZoBuild. https://www.azobuild.com/news.aspx?newsID=23609. (accessed September 19, 2024).

  • Harvard

    Dhull, Nidhi. 2024. Collaborative Robots Streamline Curtain Wall Installation. AZoBuild, viewed 19 September 2024, https://www.azobuild.com/news.aspx?newsID=23609.
