
A gardening robot for rose, hedge and topiary trimming

Periodic Reporting for period 2 - TrimBot2020 (A gardening robot for rose, hedge and topiary trimming)

Reporting period: 2017-07-01 to 2018-12-31

The TrimBot2020 project will develop and evaluate the technology needed by an autonomous hedge trimmer, addressing two core issues:
1) advancing outdoor robotic technology: low cost robot vehicles for
interacting with the natural environment and reliable 3D sensing for robot
localisation and scene understanding, and
2) demonstration of a hedge and rose cutting prototype, for potential economic exploitation.

The overall objectives are to develop and demonstrate:
1) novel robotic end effectors for garden trimming,
2) a low cost mobile platform for deploying the end effectors, and
3) reliable outdoor 3D sensing for robot navigation, object and scene
structure recognition, and trimming control.

This requires innovations in: reliable outdoor 3D sensing,
low-cost ground-based mobile robotics with a manipulator arm,
development of new end effectors for hedge, bush, and rose trimming,
algorithms for scene modelling, scene structure recognition,
robot navigation and visual servoing, deformable map registration,
and cut planning and control.

The project has demonstrated successful autonomous navigation in the test gardens, stand-alone clipping of hedge bushes, and pruning of rose bushes.
Robot platforms 1 & 2 have been built (without the arm), with five stereo camera pairs in a pentagonal sensor.
The platform was demonstrated at the first review (see milestone 2).
Navigation software guides the vehicle (open-loop) to a destination
indicated by the user via the SketchMap interface. See figure integration.png.
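Open-loop guidance of this kind can be pictured as issuing turn-then-drive commands with no feedback during execution. The following is a minimal Python sketch under that assumption; the function and waypoint names are illustrative, not the project's software:

```python
import math

def open_loop_commands(pose, waypoint):
    """Compute a (turn, drive) command pair that moves a robot at
    pose = (x, y, heading) to waypoint = (x, y), open loop: no
    feedback is used while the command executes."""
    x, y, heading = pose
    wx, wy = waypoint
    target_heading = math.atan2(wy - y, wx - x)
    turn = target_heading - heading
    # normalise the turn angle into (-pi, pi]
    turn = math.atan2(math.sin(turn), math.cos(turn))
    drive = math.hypot(wx - x, wy - y)
    return turn, drive

def apply(pose, turn, drive):
    """Simulate perfect execution of a (turn, drive) command."""
    x, y, heading = pose
    heading += turn
    return (x + drive * math.cos(heading),
            y + drive * math.sin(heading),
            heading)

pose = (0.0, 0.0, 0.0)
for wp in [(1.0, 0.0), (1.0, 2.0), (-1.0, 2.0)]:
    turn, drive = open_loop_commands(pose, wp)
    pose = apply(pose, turn, drive)
print(pose[:2])  # ends at the last waypoint, approx (-1.0, 2.0)
```

Because no feedback corrects the motion, any wheel slip or heading error accumulates, which is why the vision-based localisation described below matters.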

The project built two novel servo-motor-controlled end effectors, for
bush clipping (circular counter-rotating saw blades)
and rose stem cutting (a modified commercially available electrical pruner).
Successful operational tests on real plants used the new tools mounted on
a Kinova Jaco2 arm. See figures hedgecutter.png, rosecutter.png, rosecutexample.png, and bushcutexample.png.

A synchronised 10-camera system was built (figure penta_top.png), arranged
into 5 stereo pairs covering the full 360 degree horizontal field of view.
Each stereo pair consists of a colour (for semantic processing) and a grayscale (for stereo) camera.
An FPGA implementation of ETHZ's stereo algorithm produces a 12 FPS data stream, with
a depth image registered to the colour image. See figure new252.png.
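For a rectified stereo pair, depth follows from disparity as depth = focal length × baseline / disparity. A minimal sketch of that conversion, which any such depth stream relies on; the focal length and baseline below are assumed illustrative values, not the TrimBot2020 rig's calibration:

```python
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed value)
BASELINE_M = 0.05   # stereo baseline in metres (assumed value)

def disparity_to_depth(disparity, focal_px=FOCAL_PX, baseline_m=BASELINE_M):
    """Convert a disparity map (pixels) from a rectified stereo pair
    into metric depth via depth = f * B / d."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0          # zero disparity: no match / at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[35.0, 70.0],
                 [0.0,   7.0]])
print(disparity_to_depth(disp))
# 35 px -> 1.0 m, 70 px -> 0.5 m, 7 px -> 5.0 m, 0 px -> inf
```

Note the inverse relationship: depth resolution degrades quadratically with distance, which bounds how far from a hedge the trimmer can sense fine structure.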

A SLAM system using all 10 cameras estimates the motion of the robot and the 3D scene structure.
An online extrinsic parameters self-calibration procedure using the camera rigidity and vehicle motion
was developed. Figure wageningen_SLAM.png shows a 3D SLAM-derived map.
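The rigidity-plus-motion constraint behind such self-calibration can be written in classical hand-eye form (an assumption about the formulation, not taken from the report): if X is the fixed camera-to-vehicle transform, a vehicle motion B induces the camera motion A = X B X⁻¹, so A X = X B. A small numerical check of this constraint, illustrative only and not the project's solver:

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def se3(R, t):
    """Assemble a 4x4 rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Fixed camera-to-vehicle extrinsic X (unknown in practice).
X = se3(rot_z(0.3), np.array([0.2, 0.0, 0.5]))

# A vehicle motion B induces the camera motion A = X B X^-1,
# because the camera is rigidly attached to the vehicle.
B = se3(rot_z(0.7), np.array([1.0, 0.2, 0.0]))
A = X @ B @ np.linalg.inv(X)

# The residual A X - X B vanishes for the true X; a solver would
# recover X by minimising it over many observed motion pairs (A, B).
print(np.allclose(A @ X, X @ B))  # True
```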

Deep convolutional networks were developed that run at frame rates
with state-of-art accuracy and can exploit garden-specific relationships.
The DeMoN two-frame structure-from-motion network and
the DispNet stereo disparity estimation network give depth estimates.
The FlowNet network estimates optical flow from image sequences.
Figure depthstatue.png shows a depth image estimated from colourstatue.png.
Videos on the project's YouTube channel illustrate DeMoN and FlowNet 2.0.

A deep-net architecture solves colour intrinsic image decomposition into albedo and shading,
facilitating garden navigation and trimming by allowing identification of different scene
structures (e.g. grass (driveable) vs. gravel (not driveable)). See figure intrinsicdecomp.png.
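Intrinsic decomposition assumes the multiplicative Lambertian image model I = A · S (albedo times shading). A toy numerical illustration of why recovering one factor determines the other, on synthetic data rather than the network itself:

```python
import numpy as np

rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.9, size=(4, 4, 3))   # material colour
shading = rng.uniform(0.1, 1.0, size=(4, 4, 1))  # grayscale illumination
image = albedo * shading                          # Lambertian image model

# Given a correct albedo prediction, shading is recoverable as I / A
# (and vice versa), so a network only has to get one factor right.
recovered_shading = image / albedo
print(np.allclose(recovered_shading, shading))  # True
```

Albedo is illumination-invariant, which is what makes it a more stable cue than raw colour for telling grass from gravel under changing outdoor light.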

Test gardens were constructed at Wageningen University & Research Centre
and the Robert Bosch Renningen campus.
The robot was demonstrated autonomously navigating to user-selected locations
in the Renningen test garden.
Figures wurgardendesign.png and wurgardenactual.png show the Wageningen garden.
Figure rng_bush_row.jpg shows the Bosch Renningen test garden.

Trimming of bushes and pruning of roses were demonstrated at the second periodic review.

Ground-truth data sets were collected and annotated for evaluating
the position and recognition accuracy of the robot navigation and trimming.
3D semantically labelled point clouds were obtained for both gardens.
Over 700 images were manually labelled with pixel-level scene semantic labels.
A novel GUI tool was developed for the semantic annotation.
Figure leica_wageningen.png shows the Wageningen point cloud.

A Dissemination and Exploitation strategy was developed:
(D8.1 - Dissemination plan, D8.2 - Data management plan, D8.3 - Website and social media presence, and
D8.4 - Report on relevant stakeholders).
The project website has attracted 5000 visitors across 10000 sessions.
The project has social media profiles on Twitter, Facebook, and ResearchGate, and a YouTube channel (13 videos).
The main results to date are:
* A novel low-cost robot for garden trimming was built, based on a Bosch Indego
augmented with sensing, WiFi, enhanced power, communication, and computing capability.
The prototype is capable of vision-guided autonomous navigation and
is guiding the design of a new robot platform.

* A hedge trimming tool with circular counter-rotating saw blades passed initial tests.

* A commercially available Bosch Ciso electrical pruner was modified for robot use.

* A ROS-based state machine was created for outdoor garden navigation, planning, and execution.
Sketch-map based planning takes account of obstacles and driveable regions.
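A navigation state machine of this kind can be sketched in plain Python; the state and event names below are illustrative, not the project's (the actual system is ROS-based):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    PLAN = auto()
    DRIVE = auto()
    AVOID = auto()
    DONE = auto()

# Transition table: (state, event) -> next state.  Events such as
# "obstacle" would in practice come from the 3D sensing pipeline.
TRANSITIONS = {
    (State.IDLE, "goal_received"): State.PLAN,
    (State.PLAN, "path_found"): State.DRIVE,
    (State.DRIVE, "obstacle"): State.AVOID,
    (State.AVOID, "path_clear"): State.DRIVE,
    (State.DRIVE, "goal_reached"): State.DONE,
}

def step(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = State.IDLE
for event in ["goal_received", "path_found", "obstacle",
              "path_clear", "goal_reached"]:
    state = step(state, event)
print(state)  # State.DONE
```

An explicit transition table keeps the executive auditable: every legal behaviour change is one line, which is valuable when debugging a robot operating outdoors.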

* The project developed a pentagonal ring of 10 synchronised cameras, arranged as
5 stereo pairs (each one colour and one monochrome camera), for 360 degree coverage.

* ETHZ's FPGA stereo algorithm was adapted for real-time depth maps from the five stereo pairs.

* A feature-based 360 degree multi-camera SLAM algorithm was
developed, modelling the camera system as a generalised camera.
A novel self-calibration module (based on global SfM) computes the extrinsic
calibration between the cameras using vehicle motion.

* A binocular dominant plane sweep stereo algorithm was enhanced to
preserve quality in textureless areas, with state-of-art performance.
An extension to trinocular stereo combines binocular and motion stereo
by using a third image.
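For rectified images, a plane sweep over fronto-parallel planes reduces to a sweep over candidate disparities. A toy winner-takes-all version conveys the idea; this is a minimal sketch, not the enhanced algorithm described above:

```python
import numpy as np

def sweep_stereo(left, right, max_disp):
    """Winner-takes-all disparity by sweeping fronto-parallel planes:
    shift the right image by each candidate disparity and keep, per
    pixel, the disparity with the lowest absolute-difference cost."""
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        shifted = np.full_like(right, np.inf)   # inf = no valid sample
        shifted[:, d:] = right[:, :w - d]       # shifted[x] = right[x - d]
        costs[d] = np.abs(left - shifted)
    return np.argmin(costs, axis=0)

# Synthetic pair: the right image is the left shifted 2 px leftward,
# i.e. a constant true disparity of 2 everywhere.
rng = np.random.default_rng(1)
left = rng.uniform(size=(5, 12))
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]

disp = sweep_stereo(left, right, max_disp=4)
print(disp[:, 2:])  # interior columns recover disparity 2
```

Real plane sweep systems aggregate the cost over windows and regularise the volume; per-pixel winner-takes-all like this fails in exactly the textureless areas the enhanced algorithm targets.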

* A new benchmark dataset demonstrated the superiority of hand-crafted over deep-learned
features for the task of feature-based SLAM.

* A novel benchmark dataset containing both high-resolution images and video
sequences was created for evaluation of (multi-view) stereo algorithms.

* A large-scale synthetic dataset was created for training and evaluating garden semantic
labelling algorithms, with pixel-wise semantic annotation, intrinsics, depth, scene flow, and surface normals.

* A novel deep-net architecture decomposes images into intrinsic albedo and shading components.

* A deep optical flow network gives state-of-art accuracy at interactive frame rates.

* A two-frame structure-from-motion deep network gives improved accuracy at frame rates.

* A novel outdoor robot navigation evaluation dataset: ROS bags of raw data for 6 runs
captured in the Wageningen test garden, containing 10-camera intensity, colour, and depth video,
3D LIDAR, a high-precision IMU, and external laser-tracking ground truth.

* A new tool for 2D-3D outdoor semantic scene structure labelling (e.g. grass, paths, trees, bushes).

* A semantically annotated garden dataset (2D+3D) was partially released
for ICCV and ECCV workshop challenges. The full dataset will be released soon.
Attached figure captions:
* prototype new rose clipper
* depth image of statue estimated from 2 frames
* Wageningen garden design
* first lab-based cutting of bush using new cutter on arm
* example image set from 10-camera pentagonal camera ring
* demonstrator 1 prototype
* pentagonal camera ring
* original statue colour image (one of 2 frames)
* SLAM-constructed point cloud of Wageningen garden
* first example of rose cutting using test jig
* demonstrator 1 in the garden
* Renningen test garden bushes
* prototype new hedge cutter
* decomposition of colour image into albedo and shading
* Wageningen actual garden