A study published in Nature Artificial Intelligence introduces a deep learning method that promises to improve how researchers analyze motion in biological microscopy. The technique, called DEFORM-Net (Displacement Estimation FOR Microscopy), was developed to address long-standing challenges in capturing the complex, often subtle movements of live biological samples.
The accurate estimation of motion, or more specifically, full-field displacement, in biological images is vital for understanding dynamic processes such as tissue mechanics, heartbeats, and cell contractions. Traditional methods like digital image correlation and optical flow have been widely used, but often struggle with the low contrast, noise, and variability typical in microscopy data.
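To give a sense of what these traditional baselines do, the sketch below implements the core least-squares idea behind Lucas-Kanade-style optical flow in plain NumPy, reduced to estimating a single whole-frame translation. This is an illustration of the classical approach the article contrasts against, not DEFORM-Net itself; the function name and the single-translation simplification are assumptions for clarity.

```python
import numpy as np

def lucas_kanade_shift(prev, curr, eps=1e-6):
    """Estimate one (dy, dx) translation between two grayscale frames
    by solving the optical-flow least-squares equations over the whole
    image (a minimal Lucas-Kanade-style sketch, not the paper's method)."""
    # Spatial gradients (central differences) and the temporal gradient.
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    # Normal equations of: minimize || Ix*u + Iy*v + It ||^2 over (u, v).
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A + eps * np.eye(2), b)  # eps guards low-contrast frames
    return v, u  # (dy, dx)
```

Methods like this assume brightness constancy and strong image gradients, which is exactly where low-contrast, noisy microscopy data causes them to struggle.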
As the authors put it: “The estimation of full-field displacement between biological images or in videos is important for quantitative analyses of motion, dynamics and biophysics… DEFORM-Net outperforms traditional digital image correlation and optical flow methods, as well as recent learned approaches, offering simultaneous high accuracy, spatial sampling and speed.”
What sets DEFORM-Net apart is its experimentally unsupervised approach, meaning it doesn’t rely on ground-truth data from real experiments. Instead, it trains using synthetic deformations generated with fractal Perlin noise, a method that mimics the statistical properties of real biological motion. This sidesteps the common problem of lacking accurate training data for biological displacement tasks.
In testing, the system was applied to real videos of beating mouse heart cells and contracting Drosophila tissues, delivering results that matched or exceeded traditional techniques. Perhaps most notably, DEFORM-Net achieved this with a speed approaching real-time. The model is open-source and compatible with common platforms like ImageJ/FIJI, making it accessible for integration into existing research pipelines.
