The ANALYZE software system, developed by the Biomedical Imaging Resource (BIR) at the Mayo Clinic in Rochester, has proved so valuable for surgical planning and post-operative evaluation across different medical fields that it has been restructured into a library of functions called AVW, or A Visualisation Workshop. The AVW library offers ready extensibility and supports renewed development and implementation of advanced imaging algorithms, procedures and customised applications. In turn, AVW has led to the creation of the Virtual Reality Assisted Surgery Programme, known as VRASP, which brings the assets of ANALYZE and AVW into the hospital operating room (OR) itself, to optimise planning, rehearsal and, eventually, even the surgical intervention.
Dr. Richard Robb, BIR director at the Mayo Clinic, states that the AVW library currently contains over 250 imaging functions. Refinement of the programme code has made many AVW functions perform up to 50% faster than their ANALYZE counterparts. Individual developers can write their own specialised code within the framework of the package's comprehensive imaging capabilities, built around the fully user-interface-independent functions they select. The possibilities of three-dimensional imaging will thus extend into ever wider clinical environments and will also reach medical education and research, alongside Computer Aided Surgery (CAS) and Radiation Treatment Planning (RTP).
At present, the ANALYZE system is still being enhanced with fully automatic tissue segmentation and classification; multi-modality image fusion; effective image database compression and management; and, possibly, a friendlier, more powerful user interface. Building on the high performance that AVW brings to ANALYZE, an implementation for the hospital OR is being developed under the name VRASP, which offers the surgeon flexible intra-operative computational support. The basic idea is to deliver virtual imagery in real time that the surgeon can modify and control. The VRASP project runs in three phases. The first phase is currently operational and consists of surgical planning based on pre-operative CT and MRI scans, rendered into interactive 3-D images.
The second stage adds rehearsal with Virtual Reality (VR) equipment: a high-performance computer, a head-mounted display, an interactive data glove and a customised surgeon interface. Several departments at the Mayo Clinic are engaged in validating this rehearsal procedure. The third phase will bring the VRASP system into the OR itself, where the surgeon will manipulate the virtual images through audio control or interactive input devices. Wearing the VR equipment and handling the customised interface, the surgeon can view the pre-planned and ad hoc volumetric images in virtual space without interference with normal surgical activities. This stage will be implemented after satisfactory laboratory and clinical evaluation of the second phase.
The real-time rendering of the 3-D images constitutes the greatest challenge for the use of VR in surgery planning. The VR system has to generate smoothly animated images at a rate of 30 frames per second and should ideally respond to the user's commands within 10 milliseconds. Rendering algorithms can already produce photorealistic images from the dense volumetric data generated by medical imaging systems, but ray tracing algorithms have great difficulty sustaining the visual update rate necessary for real-time display. The problem is how to derive good polygonal surface representations from a variable but limited number of polygons, so as to optimally preserve useful detail and produce realistic geometric models from patient-specific volumetric data. Experts have now developed a method which requires only 10 minutes for the entire process of image segmentation, surface detection, feature extraction and surface tiling for models of 10,000 polygons.
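The 30 frames-per-second and 10-millisecond figures translate into a hard per-frame time budget, which is why bounded polygon models fit where ray tracing does not. A minimal sketch of that arithmetic, where the triangle-throughput figure is a hypothetical value chosen purely for illustration:

```python
# Illustrative timing budget for real-time VR rendering.
# The 30 fps and 10 ms figures come from the text above; the triangle
# throughput is a made-up number used only to show the reasoning.

TARGET_FPS = 30
FRAME_BUDGET_MS = 1000 / TARGET_FPS      # ~33.3 ms available per frame
RESPONSE_BUDGET_MS = 10                  # ideal latency for user commands

def max_polygons(throughput_tris_per_sec: float, budget_ms: float) -> int:
    """How many polygons fit inside a per-frame time budget."""
    return int(throughput_tris_per_sec * budget_ms / 1000)

# With a hypothetical throughput of one million triangles per second,
# a 33 ms frame accommodates roughly 33,000 triangles -- comfortably
# above the 10,000-polygon patient models mentioned in the text,
# whereas a full ray tracing pass over a dense volume would overrun
# the budget many times over.
print(round(FRAME_BUDGET_MS, 1))
print(max_polygons(1_000_000, FRAME_BUDGET_MS))
```

The point of the sketch is simply that a fixed polygon budget makes frame time predictable, while ray tracing cost scales with volume density and resolution.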
Due to the frequent occurrence of prostate cancer, this method is primarily used for examination of the prostate gland. Starting from MR images, the desired anatomical objects are semi-automatically or manually segmented and transformed into the required models, which faithfully represent the patient-specific anatomy. Surgeons prefer these models to organ volume renderings because they require less computing time. Moreover, unlike models, volume renderings often exhibit irregular surfaces, texture artefacts and partial volume effects, which might hamper pre-operative treatment planning.
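The simplest form of the semi-automatic segmentation step described above is intensity windowing: the user supplies an intensity range and the program marks the voxels that fall inside it, yielding a binary mask from which a surface model can later be tiled. A toy sketch, where the 4x4 "slice" and the window values are invented illustration data, not real MR intensities:

```python
# Toy intensity-window segmentation of a single 2-D image slice.
# A user-chosen window [lo, hi] stands in for the interactive part of
# the semi-automatic process; all numbers here are fabricated examples.

def segment(slice_, lo, hi):
    """Return a binary mask: 1 where lo <= intensity <= hi, else 0."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in slice_]

mr_slice = [
    [ 10,  12, 200, 210],
    [ 11, 205, 215,  13],
    [198, 220,  12,  10],
    [ 14,  11,  10,  12],
]

mask = segment(mr_slice, 190, 230)   # hypothetical window for the organ
for row in mask:
    print(row)
```

A real pipeline would apply such a mask slice by slice through the volume and then run surface detection and tiling on the result; manual editing handles the voxels the window misclassifies.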
The VRASP system is ultimately based on the need to rapidly generate accurate models from medical 3-D scans, in order to visualise and manipulate them in real time while assessing the OR data, qualitatively and quantitatively, against the pre-operative plan. Finally, VRASP should also serve in post-operative evaluation of outcome, comparing pre-operative plans with operating-room results. The resulting statistics are extremely important for determining the efficiency of the VRASP procedure and for the analysis of patient morbidity and health care costs. More news about VRASP is offered by Dr. Robb and his colleagues at the Biomedical Imaging Resource site of the Mayo Clinic and in our article Virtual Reality merges with Euromed's Virtual Medical Worlds through supercomputing. For more details on ANALYZE, we refer to our article The blessings of ANALYZE for Computed Aided Surgery and Radiation Treatment Planning.