Self-driving cameras: automated camera capture for biological imaging

Efficient quantitative analysis of plant traits is critical to keep pace with advances in molecular and genetic plant breeding tools. Machine learning has shown impressive results in automating many of these analytical processes; however, many of the algorithms rely on a surplus of high-quality biol...

Full description

Bibliographic Details
Main Author: McGirr, Joseph
Format: Thesis (University of Nottingham only)
Language: English
Published: 2021
Subjects:
Online Access: https://eprints.nottingham.ac.uk/64339/
_version_ 1848800119029235712
author McGirr, Joseph
author_facet McGirr, Joseph
author_sort McGirr, Joseph
building Nottingham Research Data Repository
collection Online Access
description Efficient quantitative analysis of plant traits is critical to keep pace with advances in molecular and genetic plant breeding tools. Machine learning has shown impressive results in automating many of these analytical processes; however, many of the algorithms rely on a surplus of high-quality biological imagery. This data is currently collected in labs via static camera systems, which provide consistent images but are challenging to tailor to individual plants, species, or tasks. Current research in autonomous camera systems uses object detection or tracking methods to control the camera. Unfortunately, this quickly breaks down for static biological imagery, as large inter- and intra-species variations, even within the same specimen, make object detection less robust, and stationary targets make tracking unusable. Inspired by the success of deep learning in the autonomous driving space, we apply an end-to-end learned approach to directly map saliency-augmented input frames from an RGB monocular camera to a pan-tilt-zoom (PTZ) actuation. Our results show our model correctly classifies which direction to move the camera in 87% of instances, with average offset errors of 250 and 140 pixels, respectively, for a 1920x1080 image. Results on a much smaller, plant-only dataset demonstrate the applicability of the model to biological imagery, and we demonstrate saliency’s effectiveness in improving accuracy by up to 4%.
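The abstract describes a pipeline in which an RGB frame is augmented with a saliency channel and then mapped to a pan-tilt offset. A minimal sketch of that idea is below; it is not the thesis's method — the real system uses a learned end-to-end model and a proper saliency detector, whereas here the saliency channel is a crude intensity stand-in and the PTZ prediction is a placeholder that aims the camera at the salient centroid. All function names are hypothetical.

```python
import numpy as np

def saliency_augment(frame):
    """Append a saliency channel to an H x W x 3 RGB frame, giving H x W x 4.

    Hypothetical stand-in: normalised grayscale intensity plays the role of
    the saliency map that the thesis's detector would produce.
    """
    gray = frame.mean(axis=2, keepdims=True)              # H x W x 1 intensity
    sal = (gray - gray.min()) / (np.ptp(gray) + 1e-8)     # normalise to [0, 1]
    return np.concatenate([frame / 255.0, sal], axis=2)   # H x W x 4

def predict_ptz(augmented):
    """Placeholder for the learned end-to-end model.

    Returns (pan, tilt) offsets in pixels from the image centre, computed
    here as the centroid of the saliency channel; the thesis instead learns
    this mapping directly from saliency-augmented frames.
    """
    h, w, _ = augmented.shape
    sal = augmented[..., 3]
    ys, xs = np.mgrid[0:h, 0:w]
    total = sal.sum() + 1e-8
    cy = (ys * sal).sum() / total
    cx = (xs * sal).sum() / total
    return cx - w / 2, cy - h / 2   # positive pan = right, positive tilt = down
```

For a 1920x1080 frame with a bright region right of and above the centre, the sketch yields a positive pan offset and a negative tilt offset, i.e. the direction the camera would move to recentre the target.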
first_indexed 2025-11-14T20:46:29Z
format Thesis (University of Nottingham only)
id nottingham-64339
institution University of Nottingham Malaysia Campus
institution_category Local University
language English
last_indexed 2025-11-14T20:46:29Z
publishDate 2021
recordtype eprints
repository_type Digital Repository
spelling nottingham-64339 2025-02-28T15:10:11Z https://eprints.nottingham.ac.uk/64339/ Self-driving cameras: automated camera capture for biological imaging McGirr, Joseph Efficient quantitative analysis of plant traits is critical to keep pace with advances in molecular and genetic plant breeding tools. Machine learning has shown impressive results in automating many of these analytical processes; however, many of the algorithms rely on a surplus of high-quality biological imagery. This data is currently collected in labs via static camera systems, which provide consistent images but are challenging to tailor to individual plants, species, or tasks. Current research in autonomous camera systems uses object detection or tracking methods to control the camera. Unfortunately, this quickly breaks down for static biological imagery, as large inter- and intra-species variations, even within the same specimen, make object detection less robust, and stationary targets make tracking unusable. Inspired by the success of deep learning in the autonomous driving space, we apply an end-to-end learned approach to directly map saliency-augmented input frames from an RGB monocular camera to a pan-tilt-zoom (PTZ) actuation. Our results show our model correctly classifies which direction to move the camera in 87% of instances, with average offset errors of 250 and 140 pixels, respectively, for a 1920x1080 image. Results on a much smaller, plant-only dataset demonstrate the applicability of the model to biological imagery, and we demonstrate saliency’s effectiveness in improving accuracy by up to 4%. 2021-03-15 Thesis (University of Nottingham only) NonPeerReviewed application/pdf en arr https://eprints.nottingham.ac.uk/64339/1/Self_driving_cameras__Automated_Camera_Capture_for_Biological_Imaging_corrected.pdf McGirr, Joseph (2021) Self-driving cameras: automated camera capture for biological imaging. MRes thesis, University of Nottingham.
Deep learning, autonomous camera, tracking, static scene object tracking, camera recentering, image recentering, image relocation, saliency detection, biological imaging
spellingShingle Deep learning
autonomous camera
tracking
static scene object tracking
camera recentering
image recentering
image relocation
saliency detection
biological imaging
McGirr, Joseph
Self-driving cameras: automated camera capture for biological imaging
title Self-driving cameras: automated camera capture for biological imaging
title_full Self-driving cameras: automated camera capture for biological imaging
title_fullStr Self-driving cameras: automated camera capture for biological imaging
title_full_unstemmed Self-driving cameras: automated camera capture for biological imaging
title_short Self-driving cameras: automated camera capture for biological imaging
title_sort self-driving cameras: automated camera capture for biological imaging
topic Deep learning
autonomous camera
tracking
static scene object tracking
camera recentering
image recentering
image relocation
saliency detection
biological imaging
url https://eprints.nottingham.ac.uk/64339/