Deep learning-based motion estimation for uninterrupted tracking of awake rodents in PET
Access status:
Open Access
Type
Conference paper
Abstract
The ability to image the brain of awake rodents using motion-compensated positron emission tomography (PET) presents many exciting possibilities for exploring the links between brain function and behavior. A key requirement of this approach is obtaining accurate estimates of animal pose throughout a scan. Our present motion tracking approach suffers from line-of-sight limitations, which lead to tracking "dropout" and the loss of motion information needed for motion correction. The proportion of a scan affected can range from 5% for sedentary subjects to more than 50% for highly active subjects. The aim of this work was to investigate the feasibility of augmenting optical motion tracking with a video-based deep learning motion estimation method to mitigate the impact of tracking dropout.

A deep convolutional neural network (CNN) based regression approach for estimating six rigid-body motion parameters is proposed. We tested our model using multi-view camera images of a rat phantom under robotic control, with the commanded robot motion providing the labels for our data. We compared the performance of deep learning-based motion estimation for simulated gaps in the motion sequence against the robot ground truth, and also compared deep learning to naïve linear interpolation of motion across the gaps. Deep learning provided promising alignment with the ground truth motion, in many cases to sub-degree/sub-millimeter accuracy. The root mean square errors for the deep learning and interpolation methods versus ground truth were 1.26° and 23.64° (y-axis rotation) and 0.77 mm and 6.57 mm (z-position), respectively.

Deep learning-based rigid-body motion estimation from multi-view video appears promising as a solution for augmenting optical tracking. Future work will focus on (i) the use of a Long Short-Term Memory (LSTM) unit to better model temporal information in the motion trace and (ii) incorporation of the known camera calibration to further constrain pose estimates.
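To make the regression setup concrete, here is a minimal sketch of a CNN that maps a stack of multi-view camera frames to six rigid-body motion parameters, trained against robot-commanded poses. The paper does not specify its architecture; the layer sizes, the choice to stack views as input channels, and all names below are illustrative assumptions, not the authors' model.

```python
# Hedged sketch (PyTorch): CNN regression of six rigid-body motion
# parameters (3 rotations, 3 translations) from multi-view frames.
# Architecture and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class PoseRegressionCNN(nn.Module):
    def __init__(self, num_views: int = 3):
        super().__init__()
        # Grayscale frames from each camera view stacked along the channel axis.
        self.features = nn.Sequential(
            nn.Conv2d(num_views, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: six outputs = (rx, ry, rz, tx, ty, tz).
        self.head = nn.Linear(128, 6)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Training would use the commanded robot poses as regression targets,
# e.g. with a mean-squared-error loss:
model = PoseRegressionCNN(num_views=3)
frames = torch.randn(8, 3, 128, 128)   # batch of stacked multi-view frames
target_pose = torch.randn(8, 6)        # robot-supplied pose labels
loss = nn.functional.mse_loss(model(frames), target_pose)
```

A direct regression loss of this kind treats rotations and translations uniformly; the proposed future use of camera calibration would add geometric constraints beyond this plain formulation.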
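The baseline and evaluation metric are simpler to pin down: linear interpolation of each motion parameter across a simulated tracking gap, scored against ground truth with per-parameter root mean square error (the 1.26° vs. 23.64° and 0.77 mm vs. 6.57 mm figures above). A sketch under assumed array shapes, with hypothetical helper names:

```python
# Hedged sketch (NumPy): the naive linear-interpolation baseline across a
# tracking gap, and per-parameter RMSE against ground truth.
# The (T, 6) trace layout and function names are illustrative assumptions.
import numpy as np

def interpolate_gap(trace: np.ndarray, gap: slice) -> np.ndarray:
    """Fill a gap in a (T, 6) motion trace by linear interpolation between
    the last pose before the gap and the first pose after it."""
    filled = trace.copy()
    gap_idx = np.arange(trace.shape[0])[gap]
    before, after = gap_idx[0] - 1, gap_idx[-1] + 1
    for p in range(trace.shape[1]):  # each of the 6 rigid-body parameters
        filled[gap, p] = np.interp(gap_idx, [before, after],
                                   [trace[before, p], trace[after, p]])
    return filled

def rmse(estimate: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Per-parameter RMSE, as reported for y-rotation and z-position."""
    return np.sqrt(np.mean((estimate - truth) ** 2, axis=0))
```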
Date
2018
Source title
2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings
Publisher
IEEE
Funding information
ARC DE160100745
Licence
Copyright All Rights Reserved
Rights statement
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Faculty/School
Faculty of Engineering, School of Biomedical Engineering