ArborArm is an aerial retrieval system designed to revolutionise the safety and efficiency of removing objects from elevated locations in the urban landscape. The project grew out of a close collaboration with Mao Sheng QuanJi Construction Pte Ltd, a partnership that underscores its practical, real-world focus.
The primary aim of ArborArm is to provide a robust solution that eliminates the well-documented hazards of traditional manual retrieval methods while significantly improving operational efficiency across Singapore’s diverse urban environments. By leveraging unmanned aerial vehicles (UAVs) and advanced robotics, ArborArm offers an alternative approach to elevated object retrieval in dense urban settings.
Xiong Zehui
Rashmi Kumar
Dominic Quah
How might we develop a safe and efficient retrieval system for landscaping personnel to access and collect items from elevated areas, reducing the risks associated with manual retrieval methods?
This drone system combines advanced mechanical design and computer vision for precise object interaction and retrieval.
8-Inch Quadcopter
The drone is built on a lightweight yet durable 8-inch carbon fiber frame with integrated propeller guards. This ensures flight stability and safety during close-range operations.
Extendable Arm
A central carbon fiber-based arm extends to reach targets with precision and retracts when not in use, maintaining balance during flight.
Modular End Effectors
The arm supports interchangeable tools—a gripper, rammer, or magnetiser—allowing the drone to adapt to different tasks such as grabbing, pushing, or attracting metallic objects.
Computer Vision
The YOLOv11n model powers real-time object detection. Its efficiency and compact size enable the drone to identify and locate targets autonomously during flight missions.
ArborArm is conceived as a comprehensive and fully integrated system that leverages the advanced capabilities of unmanned aerial vehicle (UAV) technology combined with a dexterous extendable arm. This innovative solution is specifically designed to address the challenges of safely and efficiently retrieving objects from elevated locations within the complex urban environment of Singapore. The core concept revolves around utilizing a custom-built drone platform equipped with a sophisticated gripper mechanism to perform retrieval tasks that are traditionally carried out using manual labor or heavy machinery.
The unmanned aircraft (UA) design was adapted from previous iterations of similarly sized UAs. Our design needed to be light yet sturdy to protect the UA and the attachment from impact. For this, we used FDM 3D printing with Bambu Lab’s PLA-Aero filament, which is formulated specifically for lightweight aircraft parts. We also combined the cage and landing gear into a single part, which streamlined both design and production.
To join the cage and landing gear, we used wooden dowels. We deliberately chose a joint that can break away: in an impact, the dowels fail first, and they are easy and inexpensive to replace.
For the UA body itself, we kept the carbon fiber construction. Cut from a carbon fiber plate, the body is sturdy enough to resist flexing under the combined weight of the airframe and the attachment. Carbon fiber also dissipates heat efficiently, allowing the onboard electronics to operate without damaging the UA body.
During the prototyping phase of the extendable arm, we initially considered using a linear actuator. However, we found that the linear actuator was too heavy due to its numerous metal and mechanical components. To address this issue, we conducted further research and discovered a flexible carbon fiber arm that can extend to form a rigid structure—an ideal solution for our object retrieval use case.
Based on this discovery, we designed a custom holder equipped with motor mounts that allow the carbon fiber arm to extend and retract smoothly. The top plate of the holder features strategically placed holes and indentations, providing slots for electrical components and secure mounting onto the drone.
For the claw attachment, we initially bought an off-the-shelf mechanical claw. However, its contact surface was too small to grip items effectively. We therefore decided to build our own claw based on a linear mechanism we found online.
First, we used SolidWorks to create a simple kinematics diagram to visualize and finalize the physics the claw would operate on.
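To make the kinematics concrete, the sketch below computes a jaw opening angle from the slider travel using the loop-closure (law-of-cosines) relation of a slider-crank linkage, which is the standard way such linear claw mechanisms are analysed. The link lengths and travel range here are illustrative placeholders, not the actual dimensions of our claw.

```python
import numpy as np

# Hypothetical link lengths (mm); the real claw's dimensions differ.
A = 20.0   # crank arm: jaw pivot to connecting-rod joint
B = 35.0   # connecting rod: joint to slider

def jaw_angle(d: float) -> float:
    """Jaw opening angle (rad) for a slider at distance d from the jaw pivot.

    Loop closure of the slider-crank: B^2 = A^2 + d^2 - 2*A*d*cos(theta).
    """
    c = (A**2 + d**2 - B**2) / (2 * A * d)
    if not -1.0 <= c <= 1.0:
        raise ValueError("mechanism cannot close at this slider position")
    return np.arccos(c)

# Sweep the slider through its travel to check the jaw's range of motion.
for d in np.linspace(18.0, 54.0, 5):
    print(f"d = {d:5.1f} mm -> jaw angle = {np.degrees(jaw_angle(d)):5.1f} deg")
```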
Next, we designed the individual components of the claw in CAD and assembled them in SolidWorks to assess their functionality. This was followed by several iterations of 3D printing, as some parts were either poorly fitting or too fragile, requiring adjustments and reprints.
After assembling the claw, we tested whether it could effectively retrieve objects of different shapes and sizes.
Raspberry Pi Backend
A Raspberry Pi 5 (RPi 5) serves as a companion computer, offloading advanced computation from the flight controller. The system integrates the RPi with the drone’s flight controller (FC) through a micro XRCE-DDS bridge, enabling bidirectional communication over predefined ROS topics. This architecture supports feedback control of dynamic physical interactions, such as mitigating whiplash effects during object retrieval tasks. Our research showed that the default PX4 flight stack is sufficiently robust for these scenarios, so our modifications focused on extending the bridge’s functionality to the attachment arm subsystem.
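As a minimal sketch of the RPi side of this bridge, the rclpy node below subscribes to a PX4 odometry topic exposed by the micro XRCE-DDS agent. The topic name follows the standard PX4 convention and assumes the px4_msgs package is installed; the exact topics and how the feedback drives the arm differ in our full setup.

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy

# px4_msgs provides the message types the micro XRCE-DDS bridge exposes.
from px4_msgs.msg import VehicleOdometry


class BridgeListener(Node):
    """Minimal listener on the FC-to-RPi side of the XRCE-DDS bridge."""

    def __init__(self) -> None:
        super().__init__("bridge_listener")
        # PX4 publishes with best-effort reliability, so match it here.
        qos = QoSProfile(
            reliability=ReliabilityPolicy.BEST_EFFORT,
            history=HistoryPolicy.KEEP_LAST,
            depth=5,
        )
        self.create_subscription(
            VehicleOdometry, "/fmu/out/vehicle_odometry", self.on_odom, qos
        )

    def on_odom(self, msg: VehicleOdometry) -> None:
        # Position feedback like this could drive arm damping to
        # counter whiplash during retrieval.
        self.get_logger().info(f"position: {msg.position}")


def main() -> None:
    rclpy.init()
    rclpy.spin(BridgeListener())


if __name__ == "__main__":
    main()
```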
Communication Architecture
The bridge initially served as the primary pathway for control signals between the FC and the RPi. Button-toggling commands from the radio controller (e.g., rolling or unrolling the attachment arm, moving the arm up and down) were intended to route through the FC to the RPi via ROS topics, with the RPi executing custom scripts to actuate the arm. However, the FC’s previously flashed and calibrated firmware prevented the transmission of custom messages, necessitating a redesign. Separately, the onboard cameras live-stream via VLC, broadcasting the feed over the local area network (LAN). Users on the same network can view the live feed at IP <IFORGOTTHEIP>.
Signal Handling and Power Constraints
To bypass the FC’s firmware restrictions, an ExpressLRS (ELRS) receiver was integrated directly with the RPi via its GPIO pins, allowing the RPi to interpret radio controller signals without relying on the XRCE-DDS bridge. While effective, this approach significantly increased power consumption because the RPi, ELRS receiver, and onboard cameras operate simultaneously.
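The sketch below shows what reading the ELRS receiver on the RPi might look like: a minimal parser for the CRSF serial protocol that ELRS receivers speak, extracting the 16 RC channels. It assumes the receiver’s UART is wired to /dev/ttyAMA0 at the protocol’s standard 420 kbaud, and it omits CRC validation for brevity.

```python
import serial

CRSF_SYNC = 0xC8           # address byte that starts each frame
RC_CHANNELS_PACKED = 0x16  # frame type carrying the 16 RC channels

def unpack_channels(payload: bytes) -> list[int]:
    """16 channels, 11 bits each, packed little-endian into 22 bytes."""
    bits = int.from_bytes(payload, "little")
    return [(bits >> (11 * i)) & 0x7FF for i in range(16)]

with serial.Serial("/dev/ttyAMA0", 420000, timeout=0.1) as port:
    buf = bytearray()
    while True:
        buf.extend(port.read(64))
        # Frame layout: [sync][len][type][payload...][crc]
        while len(buf) >= 4:
            if buf[0] != CRSF_SYNC or not 2 <= buf[1] <= 62:
                buf.pop(0)            # resynchronise on garbage
                continue
            frame_len = buf[1]        # bytes after the length field
            if len(buf) < frame_len + 2:
                break                 # wait for the rest of the frame
            if buf[2] == RC_CHANNELS_PACKED:
                ch = unpack_channels(bytes(buf[3:25]))
                # Raw 11-bit ticks (~172..1811); a switch channel held
                # high could, e.g., trigger extending the arm.
                print(ch[:8])
            del buf[:frame_len + 2]
```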
Computer Vision
To enable robust object detection for the attachment and retrieval process, a YOLOv11n model was deployed on the Raspberry Pi 5. This lightweight neural network was selected for its balance between inference speed and accuracy, making it suitable for deployment on edge devices with limited computational resources.
Model Training and Dataset
The YOLOv11n model was trained using a custom dataset composed of images capturing the target object from various distances, lighting conditions, and angles. The training was performed on a high-performance local machine before exporting the model to the Raspberry Pi.
To improve robustness in outdoor environments, data augmentation techniques such as random rotation, scaling, brightness shifts, and noise injection were applied during training. The final trained model achieved a mean average precision (mAP@0.5) of 85.2% on the validation set.
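A training run of this kind can be reproduced with the Ultralytics API roughly as follows. The dataset path, epoch count, and augmentation strengths are illustrative rather than our exact values, noise injection would be applied in a separate preprocessing step, and Ultralytics names the pretrained nano checkpoint yolo11n.pt.

```python
from ultralytics import YOLO

# Start from the pretrained nano checkpoint and fine-tune on our data.
model = YOLO("yolo11n.pt")
model.train(
    data="dataset.yaml",   # custom dataset of the target object
    epochs=100,
    imgsz=640,
    degrees=15.0,          # random rotation
    scale=0.5,             # random scaling
    hsv_v=0.4,             # brightness shifts
)

# Evaluate on the validation split; map50 corresponds to mAP@0.5.
metrics = model.val()
print(metrics.box.map50)
```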
Deployment and Integration
The exported .pt model was converted and optimized using Ultralytics’ YOLOv11 framework and deployed via NCNN Runtime on the Raspberry Pi 5. The vision system interfaces with the ROS2 stack to relay detection results (bounding box coordinates and confidence scores) to the onboard control system, which uses this information to guide the UAV for retrieval.
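The export-and-load round trip looks roughly like this with the Ultralytics API; the weight paths are placeholders.

```python
from ultralytics import YOLO

# Convert the trained .pt weights to an NCNN build (run once, on the
# training machine or on the Pi itself).
model = YOLO("runs/detect/train/weights/best.pt")
model.export(format="ncnn")  # writes a *_ncnn_model directory

# On the Raspberry Pi 5, load the NCNN build and run a frame through it.
ncnn_model = YOLO("runs/detect/train/weights/best_ncnn_model")
results = ncnn_model("frame.jpg", conf=0.7)
```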
The object detection module was containerized with Docker for ease of deployment and version control on the Pi. The YOLOv11n model is initialized upon startup and continuously processes frames from the onboard camera. Detected objects are filtered based on a confidence threshold (>0.7), and tracking is performed using a lightweight SORT (Simple Online and Realtime Tracking) algorithm.
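Putting those pieces together, the loop below filters detections at the 0.7 confidence threshold and feeds them to SORT. It is a sketch under two assumptions: the reference SORT implementation is vendored locally as sort.py with its usual update() API, and the onboard camera appears as video device 0.

```python
import numpy as np
from ultralytics import YOLO
from sort import Sort  # reference SORT implementation, vendored locally

CONF_THRESHOLD = 0.7

model = YOLO("best_ncnn_model")  # NCNN build from the export step
tracker = Sort()

# Stream frames from the onboard camera.
for result in model.predict(source=0, stream=True, conf=CONF_THRESHOLD):
    boxes = result.boxes
    if boxes is None or len(boxes) == 0:
        dets = np.empty((0, 5))  # SORT expects an empty (0, 5) array
    else:
        # SORT takes rows of [x1, y1, x2, y2, score].
        dets = np.hstack([
            boxes.xyxy.cpu().numpy(),
            boxes.conf.cpu().numpy().reshape(-1, 1),
        ])
    tracks = tracker.update(dets)  # rows of [x1, y1, x2, y2, track_id]
    for x1, y1, x2, y2, track_id in tracks:
        print(f"track {int(track_id)}: ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```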
Performance Evaluation
Inference Time: ~8 ms per frame (on Raspberry Pi 5)
Detection Accuracy: 85.2% mAP@0.5 on validation dataset
Power Consumption: Approx. 2.1W under full load
We extend our deepest gratitude to the SUTD Capstone Office for fostering an environment where academic rigor meets real-world problem-solving. Our heartfelt thanks to Mao Sheng Quanji Construction Pte Ltd for their unwavering trust, collaborative spirit, and industry insights, which anchored our project in practicality. Dr. Xiong Ze Hui’s mentorship was pivotal, providing technical expertise and strategic guidance that refined our iterative design process. We are indebted to Ms. Rashmi Kumar and Mr. Dominic Edmund Quah Kim San (CWR) for elevating our technical communication, and to Professor Sumbul Khan and Professor Franklin Anariba for their interdisciplinary perspectives, which enriched our stakeholder engagement strategy. Finally, we acknowledge CAAS Singapore for regulatory guidance and the SUTD Aerial Arena for facilitating field testing.
Vote for our project at the exhibition! Your support is vital in recognizing our creativity. Join us in celebrating innovation and contributing to our success. Thank you for being part of our journey!
At the Singapore University of Technology and Design (SUTD), we believe that the power of design is rooted in an understanding of human experiences and needs, creating innovation that enhances and transforms the way we live. This is why we developed a multi-disciplinary curriculum, delivered via a hands-on, collaborative learning pedagogy and environment, that culminates in a Capstone project.
The Capstone project is a collaboration between companies and senior-year students. Students of different majors come together to work in teams and contribute their technology and design expertise to solve real-world challenges faced by companies. The Capstone project will culminate with a design showcase, unveiling the innovative solutions from the graduating cohort.
The Capstone Design Showcase is held annually to celebrate the success of our graduating students and the enthralling multi-disciplinary projects they have developed.