Cell images are essential in modern medicine and biological research, for example in cancer diagnosis and drug development. A major obstacle to applying traditional machine learning algorithms is the large variability of the spatio-temporal patterns in cell images. The shape, size, and motion patterns of the same family of cells can vary vastly in visual appearance depending on the physical and biological setting in which the images were acquired.
This thesis addresses the gaps between machine learning methods and their practical applications in cell image analysis by developing new methods that use neural networks to learn to characterize subtle spatio-temporal patterns in cell images, minimizing reliance on human input and maximizing generalizability. To achieve this goal, we introduce four methods for cell analysis: (1) an unsupervised method that learns the distinctions between different components of the spatio-temporal patterns of cells to detect and classify cell events in time-lapse phase-contrast microscopy (PCM) videos; (2) a semi-supervised method that exploits the spatio-temporal patterns of cells in their normal stage in PCM videos and performs unsupervised estimation of the temporal lengths of cell events; (3) a supervised method that enables end-to-end learning and prediction of contextual spatio-temporal patterns in PCM videos; and (4) a transfer learning framework that extracts useful spatial features for cells.
We conducted experiments on public datasets that have been used extensively in previous studies and competitions. These datasets comprise visual data of cells spanning different cell types, various cell shapes and sizes, irregular cell motion patterns, and densely packed cell populations. Our results show that our methods are more accurate, data-efficient, and generalizable than other state-of-the-art methods.