3D Segmentation
3D Semantic Segmentation
3D Instance Segmentation
3D Part Segmentation
Data acquired from 3D imaging modalities:
Computed Tomography (CT)
Micro-Computed Tomography (micro-CT), an X-ray-based technique
Magnetic Resonance Imaging (MRI) scanners
These data are labeled to isolate regions of interest: any subject or sub-region within the scan that will later be analyzed. This can facilitate, for example, the analysis of part of the human body, or of a specific feature within an industrial component or assembly.
Although direct measurement and analysis of 3D images is possible in some scenarios, segmented images are the basis for most 3D image analysis.
Allows for conversion into 3D models, which permits visualization and quantification of the scanned subject.
Obtaining a physical representation of the subject through 3D printing.
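A minimal sketch of the quantification step mentioned above: once a region is segmented as a binary voxel mask, its physical volume follows from the voxel count and the scanner's voxel spacing. The mask shape and spacing values below are hypothetical placeholders, not taken from any particular scanner.

```python
import numpy as np

# Toy binary segmentation of a 64x64x64 scan volume
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:40, 20:40, 20:40] = True          # a 20x20x20-voxel region of interest

spacing_mm = (0.5, 0.5, 0.5)              # assumed voxel size in mm (z, y, x)
voxel_volume_mm3 = np.prod(spacing_mm)    # 0.125 mm^3 per voxel
region_volume_mm3 = mask.sum() * voxel_volume_mm3
print(region_volume_mm3)                  # 8000 voxels * 0.125 = 1000.0 mm^3
```

The same mask can also be passed to a surface-extraction routine (e.g. marching cubes) to produce a mesh for visualization or 3D printing.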
In a 2D CNN, the convolution kernel slides in two directions (x and y), while in a 3D CNN it slides in three (x, y, and z).
2D convolution can learn only 2D spatial features, while 3D convolution can also extract inter-slice information from adjacent frames.
A 2D CNN accepts 2D images (or matrices) as input and produces corresponding 2D segmentation maps. In a 3D CNN, both the input and output are 3D volumes.
The input and output feature spaces of 2D convolution layers (except in the input and final layers) are 3D in nature (height × width × channels). Similarly, the feature spaces of 3D convolution layers are 4D (depth × height × width × channels).
Because a 3D CNN performs far more convolution operations, it needs more memory to store its parameters and feature maps than a 2D CNN.
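The kernel movement described above can be illustrated with a naive NumPy 3D convolution (no padding, unit stride). The loop indices make explicit that the kernel slides along z (across slices) as well as y and x, so each output voxel mixes information from adjacent slices. The volume and kernel sizes are arbitrary illustrations.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution over a (D, H, W) volume
    with a (kd, kh, kw) kernel."""
    D, H, W = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):          # kernel moves along z (across slices)
        for y in range(out.shape[1]):      # ...along y
            for x in range(out.shape[2]):  # ...along x
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

vol = np.random.rand(8, 32, 32)    # e.g. 8 stacked CT slices of 32x32
k = np.ones((3, 3, 3)) / 27        # 3x3x3 averaging kernel spans 3 adjacent slices
out = conv3d_valid(vol, k)
print(out.shape)                   # (6, 30, 30): output shrinks in all 3 axes
```

A 2D convolution with a 3x3 kernel would shrink only H and W; here the depth axis shrinks too, because the kernel also slides across slices.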
Using interpolation approaches to segment large volumes with greatly reduced manually labeled regions.
Automatically adjusting surface positions to correct for noise, artifacts or other segmentation inaccuracies.
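The interpolation idea above can be sketched very simply: given two manually labeled slices, labels for the unlabeled slices in between are generated by blending the binary masks and thresholding. This is a minimal illustration only; production tools typically interpolate shape more carefully (e.g. via signed distance fields or morphological contour interpolation).

```python
import numpy as np

def interpolate_labels(mask_a, mask_b, n_between):
    """Generate masks for the n_between unlabeled slices between two
    manually labeled slices by linear blending + thresholding at 0.5."""
    slices = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)            # fractional position between slices
        blended = (1 - t) * mask_a.astype(float) + t * mask_b.astype(float)
        slices.append(blended >= 0.5)
    return slices

a = np.zeros((8, 8), dtype=bool); a[2:4, 2:4] = True   # labeled slice k
b = np.zeros((8, 8), dtype=bool); b[2:6, 2:6] = True   # labeled slice k+4
mids = interpolate_labels(a, b, 3)
print([m.sum() for m in mids])             # region sizes of the 3 filled-in slices
```

With only a handful of manually labeled slices per volume, this kind of fill-in greatly reduces the annotation effort compared to labeling every slice.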