Editor's Note: Although modern microscopes are equipped with advanced hardware systems, the large image datasets they generate often include low-quality or out-of-focus images. To ensure reliable and unbiased scientific analysis, it is crucial to employ high-precision automated image processing techniques. Google researchers have developed deep learning models to address this challenge, enabling scientists to capture and analyze high-quality microscopic images more effectively. The following is a refined version of the original content.
Many scientific imaging applications, especially those involving microscopes, generate terabytes of data daily. These systems benefit greatly from recent advances in computer vision and deep learning. In our collaborations with biologists on robotic microscopy projects, we have found that cleaning and enhancing image data, by removing noise and assessing focus quality, is both challenging and essential. Moreover, many scientists are not programmers, yet they still want to apply deep learning to their images. One key issue we aim to solve is out-of-focus images: even with state-of-the-art autofocus systems, improper setup or incompatible hardware can lead to poor image quality. Automated focus assessment makes it possible to detect, troubleshoot, and remove such problematic images efficiently.
Deep Learning for Image Quality Assessment
In our paper "Assessing Microscope Image Focus Quality with Deep Learning," we trained a neural network to rate the focus quality of microscope images, achieving better results than traditional methods. We also integrated a pre-trained TensorFlow model into Fiji (ImageJ) and CellProfiler, two widely used open-source tools for scientific image analysis. Both tools can be driven from a graphical user interface or through scripting, making them accessible to a broad range of users.
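Because these tools can also be driven through scripts, the following is a minimal Python sketch of what programmatic use of a pre-trained TensorFlow focus model could look like. The SavedModel path, serving signature, input shape, and output names are assumptions made for illustration, not the actual interface shipped with the Fiji or CellProfiler plugins.

    import numpy as np
    import tensorflow as tf

    # Load a hypothetical SavedModel export of the focus-quality network.
    # The path and signature name below are placeholders, not the published interface.
    model = tf.saved_model.load("path/to/focus_quality_savedmodel")
    infer = model.signatures["serving_default"]

    # A single 84x84 grayscale patch scaled to [0, 1]; replace with real image data.
    patch = np.random.rand(1, 84, 84, 1).astype(np.float32)

    # The output is assumed to be a probability distribution over defocus levels.
    outputs = infer(tf.constant(patch))
    for name, tensor in outputs.items():
        print(name, tensor.numpy().shape)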

The workflow of our machine learning project is detailed in the published articles and open-source code (spanning TensorFlow, Fiji, and CellProfiler). We created a training dataset by defocusing 384 cell images, eliminating the need for manual labeling. We then used data augmentation to train a model that generalizes well across different cell types. Finally, we released a pre-trained model that requires no user-defined parameters and provides more accurate focus quality assessments. For transparency, the model evaluates focus quality on 84×84 pixel blocks, and the per-block ratings can be visualized as colored borders overlaid on the original image.
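To make the per-block evaluation concrete, the sketch below tiles an image into 84×84 blocks and scores each one. The score_patch function is a stand-in for the trained network (here a simple gradient-variance sharpness proxy), and the image is synthetic; both are illustrative assumptions rather than the published pipeline.

    import numpy as np

    PATCH = 84  # the model evaluates focus on 84x84 pixel blocks

    def tile_image(image: np.ndarray, patch: int = PATCH):
        """Yield (row, col, block) for every full patch-sized block in the image."""
        rows, cols = image.shape[:2]
        for r in range(0, rows - patch + 1, patch):
            for c in range(0, cols - patch + 1, patch):
                yield r, c, image[r:r + patch, c:c + patch]

    def score_patch(block: np.ndarray) -> float:
        """Placeholder for the trained network: a crude sharpness proxy
        based on the variance of finite-difference gradients."""
        gy, gx = np.gradient(block.astype(np.float32))
        return float(np.var(gx) + np.var(gy))

    image = np.random.rand(420, 420)  # stand-in for a microscope image (5x5 patches)
    scores = {(r, c): score_patch(b) for r, c, b in tile_image(image)}
    print(f"scored {len(scores)} patches; e.g. patch (0, 0) -> {scores[(0, 0)]:.4f}")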
What About Images Without Objects?
One interesting challenge we faced was dealing with blank image patches, i.e., patches that contain no target objects. For such patches there is no meaningful notion of focus quality. Rather than manually labeling them, we designed the model to predict a probability distribution over defocus levels; on blank regions it can spread probability across many levels and thereby express its uncertainty.
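To illustrate how a predicted distribution can express uncertainty, the sketch below computes the entropy of two hypothetical predictions over defocus levels. The number of levels and the logits are invented for the example, not taken from the released model: a confident prediction yields low entropy, while a blank patch with a nearly uniform distribution yields entropy close to the maximum.

    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def entropy(probs: np.ndarray) -> float:
        """Shannon entropy in bits; higher means less certain."""
        p = probs[probs > 0]
        return float(-(p * np.log2(p)).sum())

    NUM_LEVELS = 11  # assumed number of discrete defocus levels (illustrative)

    # A confident prediction concentrates mass on one level...
    confident = softmax(np.array([8.0] + [0.0] * (NUM_LEVELS - 1)))
    # ...while a blank patch could yield a nearly uniform distribution.
    uncertain = softmax(np.zeros(NUM_LEVELS))

    print(f"confident patch entropy: {entropy(confident):.2f} bits")
    print(f"blank patch entropy:     {entropy(uncertain):.2f} bits "
          f"(max {np.log2(NUM_LEVELS):.2f})")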
What’s Next?
Deep learning-based approaches to scientific image analysis promise greater accuracy, a reduced need for manual adjustments, and the potential for new discoveries. It is clear that sharing data, models, and tools is essential for widespread adoption, and making these technologies accessible and effective across all scientific fields remains a critical goal.