Virtual Finger to navigate 3D images of complex biological structures

Virtual finger technology to digitally navigate three-dimensional images.

Published: 14 Jul 2014 | Last Updated: 14 Jul 2014

Researchers at the Allen Institute for Brain Science have developed a new technology called Virtual Finger that makes 3D imaging studies more efficient, saving time, money and resources across experimental biology.

To study three-dimensional structures, scientists have had to sift through image slices one at a time, a task that becomes impractical in the era of big data. "Looking through 3D image data one flat slice at a time is simply not efficient, especially when we are dealing with terabytes of data," explains Hanchuan Peng, Associate Investigator at the Allen Institute for Brain Science. "This is similar to looking through a glass window and seeing objects outside, but not being able to manipulate them because of the physical barrier."

Virtual Finger lets scientists reach through the flat surfaces of their computer screens into digital images of small structures such as neurons and synapses. By digitally probing three-dimensional images of objects as small as single cells, researchers can access the information they need far more quickly and intuitively.

Scientists at the Allen Institute are already using Virtual Finger to improve the detection of spikes from individual cells and to better model the morphological structures of neurons. The technology could become a game-changer for many biological experiments and methods of data analysis. It has already been applied to perform three-dimensional microsurgery to study the developing lung, to knock out single cells, and to create a map of all the neural connections in the fly brain.

"Using Virtual Finger could make data collection and analysis ten to 100 times faster, depending on the experiment," explained Hanchuan Peng, associate investigator at the Allen Institute for Brain Science in the US.

"When you move your cursor along the flat screen of your computer, our software recognizes whether you are pointing to an object that is near, far, or somewhere in between, and allows you to analyse it in depth without having to sift through many two-dimensional images to reach it," Peng added.