Flowchart shows a sample workflow for a radiology-centered three-dimensional (3D) printing process. Digital Imaging and Communications in Medicine (DICOM) images are initially processed with compatible segmentation software, and the segmented anatomy is reviewed by the radiologist. An STL file of the selected tissues is then generated. The anatomic parts defined in the STL file can be 3D printed or further manipulated with compatible computer-aided design (CAD) software to, for example, design prostheses or produce a support platform to hold the parts in place. Final preparation of the tangible 3D-printed model (e.g., cleaning and sterilization) is required before clinical use. Reprinted with permission from Mitsouras et al., Radiographics. 2015
Decisions made in each stage of the process will be driven by several factors, including the imaging modality used, the anatomy modeled, and the intended use of the eventual 3D-printed model. Some of the initial post-processing steps may be familiar to medical imaging experts, as they share common features with the 3D visualization tools used for image post-processing tasks. However, bridging imaging data to 3D printing technologies requires additional steps to refine and properly manipulate the 3D rendering and, finally, to prepare it for 3D printing.
Typically, manipulating DICOM images for 3D printing involves accurate segmentation of the desired tissues via placement of regions of interest (ROIs), followed by creation and refinement of the STL representation of the ensemble surfaces defined by those ROIs. The refinement step is new to imagers and generally requires specialized software and skills used primarily in engineering applications. The operator must also carefully review the final STL model against the source images to ensure quality and accuracy. A number of free and commercial software packages are available to achieve these steps, namely, image segmentation with STL file generation and CAD-based STL manipulation. Examples are Vitrea (Vital Images, Inc., Minnetonka, MN) and OsiriX (Pixmeo, Geneva, Switzerland) for the former task and Geomagic Freeform (3D Systems, Rock Hill, NC) or Meshmixer (Autodesk, Inc., San Rafael, CA) for STL manipulations. Although these are two distinct categories of software, medical 3D printing software suites exist, such as the Mimics Innovation Suite (Materialise, Leuven, Belgium) and Mimics inPrint (Materialise, Leuven, Belgium), that provide a solution combining elements of DICOM image processing and digital CAD.
3.2 Image Segmentation
Imaging modalities utilized for medical 3D printing typically involve high-resolution, cross-sectional imaging, most commonly computed tomography (CT) (Mitsouras et al. 2015; Greil et al. 2007; Schmauss et al. 2015) and magnetic resonance imaging (MRI) (Greil et al. 2007; Yoo et al. 2016). More recently, success has been reported with ultrasound in the cardiovascular field, using 3D transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE) (Mahmood et al. 2015; Olivieri et al. 2015). Finally, rotational digital subtraction angiography, or 3D rotational angiography, has also been employed (Frolich et al. 2016; Ionita et al. 2011; Poterucha et al. 2014). It has also been demonstrated that multiple imaging modalities can be combined to create a hybrid 3D-printed model that leverages the strengths of each modality. For example, combining CT with TEE has been employed to generate a 3D model of the heart capturing both structural and valve morphology (Gosnell et al. 2016).
Sufficient “pre-print” planning that takes into account the modality and the parameters selected for source image data acquisition increases the accuracy of the printed model and the ease of producing it; the quality of the model is tethered to the quality of the images. Optimizing spatial and temporal resolution, along with appropriate contrast in structures of interest, will result in the highest-quality models and the most efficient data processing (Fig. 3.2).
Examples of poor raw image data quality for generating 3D printable files. Panels (a) and (b) show a reconstructed STL file of the femur and tibia from a CT scan with a 3 mm slice increment, resulting in low resolution and missing data in the reconstruction. Panels (c) and (d) demonstrate STL files of the femur and tibia derived from a T2-weighted MRI with poor contrast between bone and surrounding soft tissue
Generally, the thinner the image cross sections (e.g., commonly reported 0.5–1.25 mm for cardiac 3D printing) (Jacobs et al. 2008), the more accurate the delineation of anatomical structures given the enhanced spatial resolution; yet very thin slices can make post-processing cumbersome and are not always recommended. Importantly, the desired image quality should be achieved by selecting appropriate image reconstruction techniques, such as reconstruction kernels: smooth kernels generate images with lower noise but reduced spatial resolution, while sharp kernels generate images with higher spatial resolution, though at the cost of increased noise (Flohr et al. 2007; Matsumoto et al. 2015). Once images have been acquired at the appropriate resolution and quality, segmentation of the DICOM images is the first step toward manufacturing a patient-specific 3D-printed model.
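The noise-versus-sharpness tradeoff of reconstruction kernels can be sketched with a toy one-dimensional example. The snippet below is not a CT reconstruction kernel; it simply applies a box-smoothing filter to a noisy edge profile, showing that smoothing lowers noise while spreading the edge over more samples. The signal, noise level, and kernel width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
edge = np.repeat([0.0, 100.0], 200)            # ideal sharp edge profile
noisy = edge + rng.normal(0, 10, edge.size)    # noisy, "sharp kernel"-like signal

kernel = np.ones(9) / 9                        # simple 9-tap smoothing kernel
smooth = np.convolve(noisy, kernel, mode="same")

# In the flat region, noise drops after smoothing, but the edge
# transition is blurred across roughly the kernel width.
print(noisy[:150].std(), smooth[:150].std())
```

The same principle applies in 2D/3D: a smooth kernel averages neighboring samples, trading spatial resolution for noise suppression.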
A number of software programs and algorithms are available to perform image segmentation, often tailored toward specific imaging protocols or anatomy. Segmentation of appropriate ROIs can be automated, manual, or, most frequently, semiautomated, combining an initial automated segmentation step with manual corrections (Fig. 3.3).
Paradigms of manual vs. automated segmentation in a case of double outlet right ventricle. Upper panels show the process of manual segmentation, which involves thresholding, region growing, and generation of a single STL file including the entirety of the heart and the great vessels. Lower panels demonstrate the automated approach to segmenting the same case, using automated algorithms for thresholding and for separating the heart chambers and great vessels, providing a composite STL model
Automated algorithms include thresholding, edge detection, and region growing. In thresholding, a widely used technique, voxels in the tissue of interest are selected based on the range of intensity values of that tissue (Mitsouras et al. 2015). Although this technique suffices for bone segmentation from CT, because the Hounsfield units (HU) of bone are higher than those of surrounding structures, more complex algorithms are usually necessary, such as dynamic adjustment of the thresholding range. This is especially the case when processing MRI data, where the pixel gray values do not correlate with tissue density. Common imaging artifacts also require interpretation and manual corrections. For example, due to noise or beam hardening in a CT image, a portion of an enhanced vessel lumen may fall outside of the typical enhanced-blood HU range. If dynamic region growing or hole filling is not performed, the printed model may contain a nonanatomical hole or void. A segmentation approach such as “wrapping” of a segmented region can also be used in such cases, or to fill true anatomic voids such as in cancellous bone to produce a simple solid model (Harrysson et al. 2007; Kozakiewicz et al. 2009). Additionally, metal artifact from implants or dental fillings causes streaking that is challenging to handle with automated segmentation processes (Fig. 3.4).
(a, b) Streaking artifact in CT imaging resulting from metal in the body. Manual segmentation processing is typically required to counter the artifact and generate an accurate 3D reconstruction
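As a concrete illustration of intensity thresholding, the snippet below builds a synthetic 2D “CT slice” in NumPy and keeps only voxels inside an HU window. The array contents, random seed, and window bounds are illustrative assumptions, not calibrated clinical values or any vendor's algorithm.

```python
import numpy as np

# Synthetic 2D "CT slice": soft-tissue background (~40 HU) with a
# bone-like block (~700 HU). Values are illustrative, not calibrated.
rng = np.random.default_rng(0)
slice_hu = rng.normal(40, 10, size=(64, 64))
slice_hu[20:40, 20:40] = rng.normal(700, 30, size=(20, 20))

# Global thresholding: keep voxels whose intensity falls inside a window.
lower, upper = 300.0, 3000.0
mask = (slice_hu >= lower) & (slice_hu <= upper)

print(int(mask.sum()), "voxels classified as bone")
```

In practice this binary mask would become one ROI per image slice; the dynamic-range and region-growing refinements described above address cases where a single fixed window fails.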
Region selection (also called region growing) is a useful second step to determine whether segmented voxels belong to a single part or to multiple parts to be 3D printed. Region growing typically reduces the burden of the final step, namely, manual editing (“sculpting”) of the 3D ROIs that surround the segmented voxels, which includes manually manipulating ROI boundaries and manually erasing, combining, and modifying parts.
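The region-growing step can be sketched as a breadth-first flood fill that keeps only the connected component containing a user-placed seed. The `region_grow` helper and the toy two-blob mask below are hypothetical illustrations, not any particular software package's implementation.

```python
import numpy as np
from collections import deque

def region_grow(mask, seed):
    """Return the 4-connected component of `mask` containing `seed`,
    via breadth-first flood fill."""
    grown = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return grown
    queue = deque([seed])
    grown[seed] = True
    h, w = mask.shape
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not grown[nr, nc]:
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown

# Two separate above-threshold blobs; only the seeded one is kept.
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True   # blob A (4 voxels)
mask[5:7, 5:8] = True   # blob B (6 voxels)
component = region_grow(mask, (1, 1))
print(int(component.sum()), "voxels in the seeded part")
```

Extending the neighbor offsets to 6-connectivity in 3D gives the volumetric analogue; thresholded voxels outside the grown region (here, blob B) would be excluded from the printed part.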
It is important to recognize that a 3D-printed model cannot convey information regarding tissues that are either not visualized in the imaging modality used to acquire the source images or that do not have sufficient differences in signal or density from adjacent tissues. For example, nerves are not clearly delineated on a standard CT; thus, it would be challenging to create a 3D model demonstrating the relationship of the brachial plexus to a superior sulcus tumor. This can be overcome by placing geometric objects (e.g., splines) to represent the paths of nerves or small vessels when they cannot be easily segmented from the source images. It is also possible to fuse imaging data from multiple imaging modalities to create such a model; for example, the bone and vasculature can be visualized in a contrast-enhanced CT and the nerves in an MRI of the brachial plexus.
One typically segments only those tissues visualized in the images that are relevant to convey to clinicians. For example, in the case of a chest wall tumor, the adjacent portion of the rib cage and the vascular supply may be deemed pertinent to print in addition to the tumor itself, but not the mediastinal structures, which lie outside the surgical field, or the non-adherent lung, which poses no surgical challenge. This selectivity is necessary not only because segmentation is a time-consuming and currently laborious task but also because the efficacy, and thus clinical utility, of the model hinges on its ability to quickly communicate the relevant information. Thus, while an anterior mediastinal mass model could contain the entire rib cage and thoracic spine, the resulting model would likely make it difficult to clearly visualize the tumor and to comprehend its relationship to the more crucial mediastinal structures. In this context, 3D printing of complex models at present also demands an artistic component, since no guidelines have been clearly established as to which tissues are useful to include in a model for any one particular indication (Giannopoulos et al. 2016). Future work should aim to optimize this aspect of this new modality.
3.3 STL Generation
Since tissues are segmented by demarcating their boundaries in individual, successive 2D cross-sectional images that compose a 3D image volume, the next step required is to assemble a 3D representation of the tissue and produce a closed surface “shell” of each tissue from its individually demarcated 2D cross sections. This shell is almost universally a surface mesh composed of small triangles and stored in the STL file format. The STL file format is to 3D printers what the DICOM format is to radiology workstations. Workstation software knows how to interpret the signal values stored in DICOM files so as to display them as an image on a monitor. Similarly, 3D printer drivers know how to interpret the triangles in an STL file so as to manufacture the physical object enclosed by them.
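To make the analogy concrete, the binary STL layout itself is simple: an 80-byte header, a 32-bit little-endian triangle count, and 50 bytes per facet (a normal vector, three vertices as 32-bit floats, and a 2-byte attribute field). The `write_binary_stl` helper below is a minimal sketch of that layout, not a production exporter.

```python
import struct
import io

def write_binary_stl(triangles, fh):
    """Write [(normal, (v1, v2, v3)), ...] in binary STL layout:
    80-byte header, uint32 facet count, then 50 bytes per facet."""
    fh.write(b"\0" * 80)                          # header (ignored by printers)
    fh.write(struct.pack("<I", len(triangles)))   # facet count
    for normal, verts in triangles:
        fh.write(struct.pack("<3f", *normal))     # facet normal
        for v in verts:
            fh.write(struct.pack("<3f", *v))      # three vertices
        fh.write(struct.pack("<H", 0))            # attribute byte count

# A single right triangle in the z = 0 plane, normal pointing up.
tri = ((0.0, 0.0, 1.0), ((0, 0, 0), (1, 0, 0), (0, 1, 0)))
buf = io.BytesIO()
write_binary_stl([tri], buf)
print(len(buf.getvalue()), "bytes")  # 80 + 4 + 1*50 = 134
```

Because each facet is a fixed 50 bytes, file size grows linearly with triangle count, which is why the mesh-resolution tradeoff discussed below matters for large anatomical models.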
Figure 3.5 illustrates the process of generating an STL model.
Generation of a 3D-printable STL model from a volumetric medical image dataset. The aorta and aortic arch vessels are first segmented from a contrast-enhanced CT (a). The segmented image voxels identify the region of space occupied by blood, and conversely this region of space is entirely filled by the individually segmented voxels (b). If one were to cut through this region, it would simply expose the inner voxels that have been segmented (c). An STL model that can be 3D printed is instead a surface composed of small triangles that enclose the segmented voxels (d; shown in red, with individual triangle outlines shown in inset). Cutting this surface merely exposes the inner side of the triangles (e; shown in green, with individual triangle outlines shown in inset). Reprinted with permission from Giannopoulos et al., J Thor Imag. 2016
Once segmentation of the DICOM images has been performed, the voxel data must be converted to a 3D surface file recognizable by digital CAD software and 3D printers. Many image segmentation software packages can convert the segmented images to a tessellated surface file, most commonly using an implementation of the marching cubes algorithm. After segmentation, most software packages generate a printable 3D STL model of the surfaces surrounding segmented tissues based on algorithms such as interpolation and pattern recognition that preserve anatomical features. The easiest way to understand this step is as follows: using ROIs, operators select voxels that enclose a 3D surface. Conversion of this surface to STL can use any number of triangular facets to fit it; too few will compromise anatomical features in the 3D-printed model, while too many lead to unnecessary roughness in the object if the segmented surface is not smooth (Fig. 3.6). In our experience, STL-based models provide no added benefit to the provider once they exceed a given threshold of triangles for some common models (Mitsouras et al. 2015) (Table 3.1).
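For a rough sense of where the triangles come from: marching cubes fits facets along the boundary between segmented and unsegmented voxels. The toy function below is not marching cubes; it merely counts the exposed voxel faces of a binary volume (a naive voxel-surface mesh would emit two triangles per exposed face), illustrating how triangle counts scale with surface area and voxel resolution.

```python
import numpy as np

def exposed_faces(mask):
    """Count voxel faces on the boundary of a binary volume.
    A face is exposed wherever occupancy changes along an axis."""
    m = np.pad(mask.astype(bool), 1)              # pad so the border counts
    faces = 0
    for axis in range(3):
        # XOR with the shifted volume marks occupancy transitions.
        faces += int((m ^ np.roll(m, 1, axis=axis)).sum())
    return faces

# A 2x2x2 solid block of segmented voxels: 6 sides x 4 voxel faces = 24.
cube = np.ones((2, 2, 2), dtype=bool)
faces = exposed_faces(cube)
print(faces, "exposed faces ->", 2 * faces, "triangles in a naive mesh")
```

Halving the voxel size quadruples the number of exposed faces for the same anatomy, which is why decimation and smoothing are routinely applied before the triangle count exceeds what the model (or the printer) usefully needs.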