
IMPPACT Intervention Planning and Intervention Training System -- IPS

1 Overview

The IMPPACT intervention planning system (IPS) combines the research results of the IMPPACT consortium in a fully functional clinical research prototype. The application of this system ranges from a clinical stand-alone application to a complete intervention planning environment. The IPS is easy to use and allows conclusions about the lesion state one week after the actual RFA treatment.

The system's features have been approved by medical experts from the Medical University of Leipzig and the Medical University Graz, and the system is currently being tested at those sites for real intervention planning.

2 Details

Figure 1 gives an overview of the internal data dependencies of the IPS. This diagram also includes tracking input and all necessary dependencies for extensions to the basic system. Section 2.1 describes the basic system, which is currently in use by our medical partners. Section 2.2 outlines a possible extension towards safe access path planning. Section 2.3 describes a Virtual Reality examination of ablation results, Section 2.4 gives an overview of how the IPS can be used as a training tool, and Section 2.5 provides a link to a snapshot of the basic IPS source code.


Figure 1: Overview of the current steps required for the RFA Planning Prototype. Input data consists of polygon meshes (e.g. liver segmentation, vessel segmentation, tumor segmentation), unstructured grid data used for simulation, and the CT datasets of a patient. Some of these datasets have to be converted to a suitable internal representation in a conversion step before they are used in the RFA planning environment.

2.1 Basic system for clinical stand-alone use

Figure 2 shows an overview of the steps required in the IPS before simulation results can be evaluated by medical professionals. These steps can be performed by a medical professional or assisting staff between a diagnostic scan, the pre-intervention scan (approx. one day before the actual intervention) and the intervention. The interface of the final IPS is shown in Figure 3.

The original agreement of the IMPPACT consortium was to use openSUSE 11.1 as the base platform. However, since this operating system is not common in clinical practice, we also provide a Microsoft Windows version of the IPS. Because the simulation relies on openSUSE-specific functions, the simulation itself has to run in a virtual machine, i.e. a virtual PC running on a host system such as Windows. We provide an easy-to-use Windows interface to this virtual simulation PC, shown in Figure 4. The final simulation result is shown in 3D and as an overlay over conventional radiological slice images, as shown in Figure 5. The green outline indicates the segmentation result of the tumorous region, and the red area shows cells which are very likely to be dead one week after the intervention.


Figure 2: Complete clinical simulation workflow diagram.


Figure 3: Interface of the final basic clinical stand-alone IPS.


Figure 4: Interface of the Windows version to a virtual tumor ablation simulation PC.


Figure 5: Computed coagulation regions in 3D and 2D (red with white boundary) for a specified needle setup, enclosing the tumour (green).

2.2 Extension 1: finding a good access path to the tumor

Planning an optimal access path to harmful structures within the human body plays an essential role in many medical procedures. Traditionally, decisions about medical tool trajectories or resection areas have been made empirically by doctors, relying mainly on their experience with similar interventions and their general knowledge about vulnerable anatomical structures. Consequently, the possible treatment success depends on the person performing the intervention, and in certain cases this unsupervised accessibility planning might be harmful, if not deadly, to the patient in the long run. While abdominal interventions are often performed entirely without computer assistance for the path planning step, interventional navigation systems can be considered standard at least in neurosurgery, and in the meantime most medical suppliers offer commercial solutions to this problem. However, the planning input for these systems is still based on the empirical decisions of the performing doctor. Both approaches -- trajectory planning for later use in a navigation system and completely unassisted intervention -- will benefit from an accessibility visualization of the target structure during the planning stage. This visualization should only provide additional information to the performing doctor, leaving the final decision to them. A fully automatic determination of the ideal access path is of course desirable, but it is currently neither possible in every case, because of literally hundreds of degrees of freedom, nor well accepted by medical doctors, and even less so by patients.

Besides that, the evaluation of all possible and impossible access paths has always been computationally very expensive; a full evaluation might take up to several years using conventional CPU-based iterative approaches. However, modern GPU programming languages such as Nvidia's CUDA make it possible to parallelize these accessibility algorithms and to evaluate the whole input space within a few seconds.

Furthermore, most medical accessibility planning approaches consider only one possible representation: either a 3D representation, as often used for robotic interventions, or 2D projections provided to assist an operating surgeon. Each representation offers different advantages but also disadvantages for intervention planning and intervention assistance. To our knowledge, no medical accessibility visualization and intervention training system has been proposed so far which combines the advantages of different representations in a multi-stage accessibility planning and training system, or which is able to visualize the various degrees of freedom of accessibility considerations. Therefore, in this work we propose a novel multi-stage tumor accessibility planning and intervention training visualization approach which is able to model a normal RFA intervention completely as it is and, in addition, to display and rate all possible access paths by their safety without putting additional limits on the decision of the performing doctor. In order to present the data in an intuitive way, we base our visualization method on the following natural phenomenon: when the sun shines through clouds or trees into air that contains enough scattering particles (such as dust or moisture), an observer can see several ray bundles shaped by the obstacles. We think of the tumor as the light source and of vulnerable structures as the obstacles.

All possible access paths can be encoded in a volume of arbitrary size and resolution centred at the centroid of the target structure. Our method can use one or arbitrarily many input volumes. A voxel size similar to that of the input volume(s) is optimal, since the ray volume's accuracy does not increase beyond the accuracy of the input volume. As a first step, the voxels of the input volume(s) have to be classified into areas which are safe to pass, areas which might cause problems, and impassable structures. This process is quite common in medicine and is normally referred to as segmentation. For segmentation we use the tools provided by Aalto University, Helsinki.

The output of the segmentation software directly defines the input for our visualization preprocessing step. For the liver RFA case, a useful assignment would be, for example, to treat fat and regular liver tissue as safe to pass, smaller vessels as uncertain, and large vessels, bones and neighbouring organs as impassable.
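
As an illustration, such an assignment can be thought of as a lookup table from segmentation labels to vulnerability values. The following Python sketch uses hypothetical labels and numbers; the actual assignment used by the IPS may differ:

    # Hypothetical lookup table mapping segmentation labels to
    # vulnerability values: 0.0 = safe to pass, intermediate values =
    # uncertain, infinity = impassable. Labels and numbers are
    # illustrative only, not the values used by the IPS.
    VULNERABILITY = {
        "fat":          0.0,            # safe to pass
        "liver_tissue": 0.1,            # slightly vulnerable
        "small_vessel": 0.5,            # uncertain, risk to be weighed
        "large_vessel": float("inf"),   # impassable
        "bone":         float("inf"),   # impassable
    }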

After the tissue classification is done, our algorithm sends rays from every voxel of the tumor in every direction of the ray volume. Each potential access path starts with the value zero at a given tumor voxel. If the ray hits an impassable or highly vulnerable structure, its value is set to the highest available value. The attenuation through uncertain areas depends on a lookup table which maps intensity values to vulnerability; it is continuously added to the ray value while the ray passes the structure. Since every tumor voxel emits rays in all possible directions, several rays might cross. In this case, value accumulation clearly separates our ray volume into fully safe regions around the intensity value 0, uncertain or slightly vulnerable areas of low intensity, and impassable paths with very high intensity values.
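
The accumulation step for a single ray can be sketched as follows. This is a minimal CPU version in Python/NumPy, assuming a volume of per-voxel vulnerability values as produced above; the actual system evaluates all rays in parallel on the GPU:

    import numpy as np

    def cast_ray(vuln_volume, start, direction, step=1.0,
                 max_dist=200.0, max_value=1e6):
        # vuln_volume: 3D array of per-voxel vulnerability values
        # start:       tumor voxel; the path value starts at zero here
        # direction:   unit vector of the ray
        pos = np.asarray(start, dtype=float)
        d = np.asarray(direction, dtype=float)
        value = 0.0
        travelled = 0.0
        while travelled < max_dist:
            i, j, k = np.round(pos).astype(int)
            if not (0 <= i < vuln_volume.shape[0] and
                    0 <= j < vuln_volume.shape[1] and
                    0 <= k < vuln_volume.shape[2]):
                break                     # ray left the volume
            v = vuln_volume[i, j, k]
            if not np.isfinite(v):
                return max_value          # impassable structure hit
            value += v * step             # attenuation through uncertain areas
            pos += d * step
            travelled += step
        return value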

For the final visualization, the ray volume has to be presented together with a representation of the anatomical conditions around the tumor and, optimally, with an enhanced tumor area. The most obvious way to achieve this is to use the polyhedral DVR system which has been developed during the IMPPACT project. It scales only with the available graphics memory, and its performance does not depend strongly on the number of volumetric intersections in the scene. The required transfer function can be chosen automatically for all regions. For example, assigning a green color with a steep falloff in a small area around 0 and a color gradient between yellow and red for medium to high values will encode every path in a way similar to a traffic light. Consequently, green areas correspond to all safe paths, for which no vulnerable structure will be hit on a straight trajectory to the tumor. Yellow areas correspond to those regions where the doctor may decide that it is worth the risk to follow that trajectory if, e.g., a better working area in the operation theatre is provided from that side. Red areas, however, correspond to regions which contain impassable or very dangerous structures on the way to the tumor.
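
A traffic-light transfer function of this kind could be sketched as below; the exact thresholds are assumptions, not the values chosen by the system:

    def traffic_light_tf(value, safe_eps=0.05, max_risky=10.0):
        # Map an accumulated ray value to an RGBA colour (components in
        # 0..1). Values near 0 are safe (green), medium values blend
        # from yellow to red; thresholds are illustrative assumptions.
        if value <= safe_eps:
            return (0.0, 1.0, 0.0, 1.0)    # safe path: green
        t = min(value / max_risky, 1.0)    # 0 -> yellow, 1 -> red
        return (1.0, 1.0 - t, 0.0, 0.7)    # yellow..red gradient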

This representation gives a clear overview of all accessible areas, since impassable areas can be switched off and uncertain areas can be regulated depending on their vulnerability. However, an important decision criterion for a certain access region is the amount of safety margin. This safety margin corresponds to the area which is spanned by the ray bundles of a similar safety level. To identify the safety margin, we introduce a second, 2D stage of the planning procedure, which is described in the next paragraph.

Depending on the desired safety level, we can calculate a geometric approximation of the ray volume in parallel to the ray volume calculation. This is necessary to augment the pure projection of the ray volume onto 2D slice views with information about the overall extent of a given region. In 3D these regions can be identified at a glance, but 2D slices lack the required depth information. For the extrusion of the geometric tumor segmentation, we first have to approximate the tumor surface by a convex ellipsoid. This is necessary to prevent non-manifold faces and overlaps during the extrusion process. For the approximation, we align the ellipsoid with the three main axes of the tumor. The tessellation level of the approximated tumor further defines the accuracy.
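
One way to obtain such an ellipsoid is a principal component analysis of the tumor voxel positions. The following sketch illustrates the idea; it is not necessarily the exact fitting method used in the IPS:

    import numpy as np

    def tumor_ellipsoid(tumor_voxels):
        # tumor_voxels: (N, 3) array of tumor voxel coordinates.
        # Returns the centre, the three main axis directions and the
        # half-lengths of the approximating ellipsoid.
        pts = np.asarray(tumor_voxels, dtype=float)
        centre = pts.mean(axis=0)
        cov = np.cov((pts - centre).T)            # 3x3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)    # principal axes
        half_lengths = 2.0 * np.sqrt(eigvals)     # approx. 2 sigma extent
        return centre, eigvecs.T, half_lengths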

Every vertex is subsequently extruded in the direction of its normal vector. If a certain vulnerability threshold is exceeded along the vertex's path, the extrusion stops. The resulting geometry is then separated into vertices which have been extruded until a static distance is reached and those which hit an obstacle. Because of this binary separation, we can directly identify connected regions which have hit no obstacle or only a negligible one (depending on the given safety threshold). These regions are disjoint and are further evaluated with respect to their mass properties. Finally, the size of the region to which a vertex belongs is stored as a scalar with every vertex.
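
A single extrusion step can be sketched as follows, assuming the same vulnerability volume as above; step size, maximum distance and indexing are illustrative:

    import numpy as np

    def extrude_vertex(vertex, normal, vuln_volume, threshold,
                       max_dist=150.0, step=1.0):
        # Move the vertex along its normal until either the static
        # maximum distance is reached (returns True) or a voxel above
        # the vulnerability threshold stops it (returns False).
        pos = np.asarray(vertex, dtype=float)
        n = np.asarray(normal, dtype=float)
        travelled = 0.0
        while travelled < max_dist:
            i, j, k = np.round(pos).astype(int)
            if vuln_volume[i, j, k] > threshold:
                return pos, False         # obstacle hit: extrusion stops
            pos += n * step
            travelled += step
        return pos, True                  # full static distance reached

The boolean flag is what later separates the vertices into the two classes from which the connected regions are derived.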

For visualization of this surface in 2D we use the contour that results from the intersection of the extruded geometry and the desired cutting plane. In order to convey the information about the safety margin and thereby the size of the complete surface area, which is invisible in 2D, we draw different parts of the projected contour with different thickness and color. An obvious scheme for this encoding is thick and green lines for large areas providing a lot of elbow space and thin and red lines for very small and delicate areas.
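
The mapping from region size to line style can be as simple as the following sketch; the size thresholds and pixel widths are assumptions:

    def contour_style(region_area, small=2.0, large=20.0):
        # Map the surface area of the region a contour vertex belongs
        # to (units as in the dataset) to a line width in pixels and an
        # RGB colour: thin/red for small regions, thick/green for large
        # ones. Thresholds are illustrative.
        t = (region_area - small) / (large - small)
        t = min(max(t, 0.0), 1.0)
        width = 1.0 + 4.0 * t              # 1 px (small) .. 5 px (large)
        colour = (1.0 - t, t, 0.0)         # red (small) .. green (large)
        return width, colour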

Figure 6 shows an example for the 2D projection scheme in an axial slice of an artificial dataset while Figure 7 shows the corresponding 3D representation.


Figure 6: 2D concept of our accessibility visualization and training system.

Figure 7: 3D representation of our accessibility visualization and training system.

2.3 Extension 2: Virtual Reality ablation result examination

One part of our setup enables the user not only to navigate in 3D but also to see in 3D. This is made possible by a stereo back-projection system with special glasses worn by the user. Our projector is able to render two overlapping screens at a very high frame rate, which means that the system can be driven by any PC supporting two display outputs. Stereo rendering is then easily achievable by rendering the same scene once on the left screen with camera settings for the left eye, and once on the right screen with camera settings for the right eye. So-called shutter glasses worn by the user rapidly occlude the two screens in alternation. This happens so fast that the two different views are merged into the impression of a 3D scene in the user's brain.
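
Conceptually, the two camera settings differ only by a horizontal offset along the camera's right vector; a minimal sketch (the eye separation value is an assumption):

    import numpy as np

    def stereo_eye_positions(camera_pos, view_dir, up, eye_sep=0.065):
        # Shift the camera by half the eye separation (metres,
        # illustrative value) to the left and right along the camera's
        # right vector; each eye is then rendered to its own output.
        right = np.cross(view_dir, up)
        right /= np.linalg.norm(right)
        offset = right * (eye_sep / 2.0)
        return camera_pos - offset, camera_pos + offset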

For optimal preparation, we provide an examination and evaluation tool. Using a stereoscopic projector and an infrared tracking system, we provide an immersive framework which combines pre-recorded CT scans, segmented parts of the liver, as well as the simulation result, in a three-dimensional environment.

By linking head tracking to the point of view in our system we achieve a high degree of immersion in an egocentric perspective. By including the results of an RFA simulation using volume rendering and enabling control over the amount of data presented, we also provide uncertainty visualization in real time.

The major usage of the tool is to compare simulation results with expected cell death. This is achieved by overlaying the simulation with a wireframe representation of the segmented tumour. The borders of the tumour, combined with the uncertainty visualization of the simulation, can be used to determine proper parameters for the RFA. We use animation as the communication channel for the uncertainty information. A picture of this setup is shown in Figure 8.


Figure 8: A prototype of our Augmented Reality (AR)/ Virtual Reality (VR) RFA training environment at Graz University of Technology (TUG).

2.4 Extension 3: Training of RFA interventions

In case a tracking system is available, we provide three additional training modes as an extension of the IPS software, to best match the requirements of our medical partners. A virtual operating room, featuring all IPS extensions and training options, is shown in Figure 8.

Anatomical Training Mode
This mode provides an opportunity to combine real-time tracking of the needle prop with examination of the provided anatomical dataset. By coupling the tracking information to the virtual needle model, trainees are able to evaluate their performance in needle placement. The immediate visual feedback can be used to check different approaches for insertion channels. This way, the depth of intrusion and the correct orientation towards a possible tumor are trained in an environment which does not directly punish trainees for errors.

Furthermore, the trainees get more experienced in using the provided anatomical datasets for estimating correct needle placement in real scenarios.

RFA Intervention Training Mode
While the anatomical training mode is aimed at inexperienced surgeons, the intervention training mode is made for trainees with a higher skill level. We simulate the workflow of a real RFA: in real scenarios, surgeons use the visualized anatomical dataset to determine the correct needle placement.

The intervention training mode does not provide immediate visual feedback of the tracked needle position. Instead, the trainee first needs to place the needle at the desired position and then scan the needle position by pressing the corresponding button. After scanning, the system provides the desired feedback by placing the virtual needle according to the tracked position. Of course, it is possible to adjust the needle position: repositioning the real needle, followed by a re-evaluation of the tracking data, leads to an adjusted position of the virtual needle.
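
The scan-on-demand behaviour can be thought of as a small state update that copies the tracked pose to the virtual needle only when the button is pressed. The following sketch is hypothetical; the tracker interface is an assumption:

    class InterventionTrainingMode:
        # The virtual needle is updated only on an explicit scan
        # request, mimicking the workflow of a real RFA (illustrative
        # sketch; tracker.current_pose() is a hypothetical interface).

        def __init__(self):
            self.virtual_needle_pose = None   # hidden until first scan

        def on_scan_button(self, tracker):
            # Freeze the currently tracked pose of the needle prop; the
            # virtual needle stays at this pose until the next scan.
            self.virtual_needle_pose = tracker.current_pose()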

Needle Access Path Evaluation Mode
This mode makes the accessibility visualization described in Section 2.2 available during training: all possible access paths are encoded in a ray volume centred at the centroid of the target structure and rated by their safety.

2.5 Obtaining the software

A source code snapshot can be downloaded here.
