Friday, March 8, 2019

Image Analysis at Run-Time

Bin Li from UW-Madison

Presenting Friday, March 8th at 1:00 PM CST

Whether experiments are focused and small-scale or automated and high-throughput, effectively quantifying images is now a critical, widespread need that continues to grow in many ways:

Scale. Automated microscopes can acquire millions of images, far faster than they can be analyzed by eye. The large-scale computing resources needed for analysis are often inaccessible to the biologists who need them.

Size. Many experiments, from basic research to patient studies, involve huge images, often 10,000+ pixels in each dimension, acquired using light-sheet microscopy and/or large field-of-view detectors.

Dimensionality. Researchers are performing quantitative, higher-throughput experiments on multi-dimensional images acquired via time-lapse, three-dimensional, & multi-spectral imaging.

Scope. Researchers need to create complex workflows involving software for microscope control, high-throughput image processing, cloud computing, deep learning, & data mining.

Modality. Researchers are struggling to identify and apply appropriate analytical methods for the explosion of novel microscopy modalities, including super-resolution, single-molecule, and others.

Complexity. Microscopy is now being used for profiling: extracting a “signature” of a sample from morphological measurements of imaged cells and their microenvironment, patterns often more subtle & complex than humans can perceive (a minimal sketch of such a signature follows below).
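
To make the idea of a morphological signature concrete, here is a minimal sketch in Python. It assumes scikit-image and NumPy are available and runs on a synthetic image; it illustrates the general concept of profiling, not the specific platform discussed in the talk.

# A minimal sketch of morphological profiling: segment objects in a
# 2-D image and summarize each with a handful of shape measurements.
# Illustrative only; assumes scikit-image + NumPy.
import numpy as np
from skimage import filters, measure, morphology

def morphological_signature(image):
    """Return a per-image 'signature': mean and std of simple
    per-object shape features (area, eccentricity, solidity)."""
    # Threshold (Otsu) and remove small debris.
    mask = image > filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(mask, min_size=64)
    labels = measure.label(mask)
    props = measure.regionprops(labels)
    feats = np.array([[p.area, p.eccentricity, p.solidity] for p in props])
    if feats.size == 0:
        return np.zeros(6)
    # Aggregate per-object features into one vector per image.
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Example with synthetic data: smooth random blobs.
rng = np.random.default_rng(0)
img = filters.gaussian(rng.random((256, 256)), sigma=4)
print(morphological_signature(img))

Aggregating per-object measurements into a single per-image vector is what makes such signatures comparable across samples, even when the underlying differences are too subtle to see by eye.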

The overall goal of quantitative, systematic biomedical imaging is a general platform for run-time computer vision applications with hardware acceleration, integrating knowledge from biology, instrumentation, engineering, and computer science. New instruments with such run-time processing ability could address many of the challenges above and allow us to thoroughly study nature’s variability. A rough sketch of the run-time idea appears below.
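
As an illustration of what "run-time" processing means in practice, the sketch below analyzes frames as they stream off the instrument instead of after the experiment ends. The acquire_frames() generator is a hypothetical stand-in for a real microscope-control API, and the per-frame analysis is only a placeholder.

# A minimal sketch of run-time analysis: frames are consumed by a
# worker thread as they arrive, through a bounded buffer.
import queue
import threading
import numpy as np

def acquire_frames(n_frames=10, shape=(512, 512)):
    """Hypothetical acquisition loop; a real system would pull
    frames from camera/microscope control software."""
    rng = np.random.default_rng(1)
    for _ in range(n_frames):
        yield rng.random(shape)

def analysis_worker(frames, results):
    # Consume frames as they arrive; the "analysis" here is just
    # a placeholder intensity statistic.
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: acquisition finished
            break
        results.append(float(frame.mean()))

frames = queue.Queue(maxsize=4)    # bounded buffer: analysis keeps pace
results = []
worker = threading.Thread(target=analysis_worker, args=(frames, results))
worker.start()
for frame in acquire_frames():
    frames.put(frame)              # blocks if analysis falls behind
frames.put(None)
worker.join()
print(f"processed {len(results)} frames at run-time")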