Show simple item record

Field: Value [Language]
dc.contributor.author: Zhang, Shisheng
dc.contributor.author: Balamurali, Mehala
dc.contributor.author: Kyme, Andre
dc.date.accessioned: 2021-01-08T00:03:13Z
dc.date.available: 2021-01-08T00:03:13Z
dc.date.issued: 2018 [en_AU]
dc.identifier.uri: https://hdl.handle.net/2123/24253
dc.description.abstract: The ability to image the brain of awake rodents using motion-compensated positron emission tomography (PET) presents many exciting possibilities for exploring the links between brain function and behavior. A key requirement of this approach is obtaining accurate estimates of animal pose throughout a scan. Our present motion tracking approach suffers crucial line-of-sight limitations which lead to tracking "dropout" and subsequent loss of motion information that can be used for motion correction. The proportion of a scan affected can range anywhere from 5% for sedentary subjects up to >50% for highly active subjects. The aim of this work was to investigate the feasibility of augmenting optical motion tracking with a video-based deep learning motion estimation method to mitigate the impact of tracking dropout.
A deep convolutional neural network (CNN) based regression approach for estimating six rigid-body motion parameters is proposed. We tested our model using multi-view camera images of a rat phantom under robotic control. The commanded robot motion provided the labels for our data. We compared the performance of deep learning-based motion estimation for simulated gaps in the motion sequence against the robot ground truth. We also compared deep learning to naïve linear interpolation of motion across the gaps. Deep learning provided promising alignment with the ground truth motion, in many cases achieving sub-degree/sub-mm accuracy. The root mean square error for the deep learning and interpolation methods versus ground truth was 1.26° and 23.64° (y-axis rotation) and 0.77 mm and 6.57 mm (z-position), respectively.
Deep learning-based rigid-body motion estimation from multi-view video appears promising as a solution for augmenting optical tracking. Future work will focus on (i) the use of a Long Short-Term Memory (LSTM) unit to better model temporal information in the motion trace and (ii) incorporation of the known camera calibration to further constrain pose estimates. [en_AU]
dc.language.iso: en [en_AU]
dc.publisher: IEEE [en_AU]
dc.relation.ispartof: 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings [en_AU]
dc.rights: Copyright All Rights Reserved [en_AU]
dc.title: Deep learning-based motion estimation for uninterrupted tracking of awake rodents in PET [en_AU]
dc.type: Conference paper [en_AU]
dc.subject.asrc: 0299 Other Physical Sciences [en_AU]
dc.subject.asrc: 0801 Artificial Intelligence and Image Processing [en_AU]
dc.subject.asrc: 0903 Biomedical Engineering [en_AU]
dc.identifier.doi: 10.1109/NSSMIC.2018.8824642
dc.relation.arc: DE160100745
dc.rights.other: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
usyd.faculty: SeS faculties schools::Faculty of Engineering::School of Biomedical Engineering [en_AU]
workflow.metadata.only: No [en_AU]
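The abstract above describes a CNN that regresses six rigid-body motion parameters from multi-view video and compares it against naïve linear interpolation across tracking gaps, using RMSE versus the robot ground truth. The following is a minimal sketch of that kind of setup in PyTorch; the architecture, number of camera views, image size, and all names are assumptions for illustration only, not the authors' actual model or data.

import torch
import torch.nn as nn

N_VIEWS = 3          # assumed number of cameras observing the rat phantom
IMG_SIZE = 128       # assumed (grayscale) frame size per view

class MultiViewPoseCNN(nn.Module):
    """Regress six rigid-body parameters from stacked multi-view frames (illustrative)."""
    def __init__(self, n_views: int = N_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_views, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Output: [rx, ry, rz] in degrees and [tx, ty, tz] in mm.
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 6)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_views, H, W) -- one channel per camera view
        return self.regressor(self.features(x))

def interpolate_gap(pose_before: torch.Tensor, pose_after: torch.Tensor,
                    n_missing: int) -> torch.Tensor:
    """Naive linear interpolation of the six pose parameters across a tracking gap."""
    t = torch.linspace(0, 1, n_missing + 2)[1:-1].unsqueeze(1)
    return pose_before + t * (pose_after - pose_before)

def rmse(estimate: torch.Tensor, ground_truth: torch.Tensor) -> torch.Tensor:
    """Root mean square error per pose parameter against the robot ground truth."""
    return torch.sqrt(torch.mean((estimate - ground_truth) ** 2, dim=0))

if __name__ == "__main__":
    model = MultiViewPoseCNN()
    frames = torch.randn(8, N_VIEWS, IMG_SIZE, IMG_SIZE)   # stand-in video frames
    gap_truth = torch.randn(8, 6)                          # stand-in robot poses
    cnn_estimate = model(frames)
    interp_estimate = interpolate_gap(gap_truth[0], gap_truth[-1], 8)
    print("CNN RMSE per parameter:   ", rmse(cnn_estimate, gap_truth))
    print("Interp RMSE per parameter:", rmse(interp_estimate, gap_truth))

In the paper's evaluation, both the learned estimate and the interpolation baseline are scored against the commanded robot motion in exactly this per-parameter RMSE sense (e.g. y-axis rotation and z-position in the abstract); the untrained network above would not reproduce those numbers.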

