Parallelized Branch-and-Bound algorithm speeds up angiographical 3D data reconstruction

Amsterdam, 14 April 1999. Before the medical applications conference track audience at the HPCN'99 event, Dr. Yudith Cardinale from the Department of Computing and Information Technology at Simon Bolivar University in Venezuela presented the potential of parallel algorithms for the 3D reconstruction of angiographic images. The academic research team in Caracas developed ANIA, a tool for Angiographic Image Analysis and study, which makes it possible to create the solution space for the problem of reconstruction from biplane images. This process involves the combinatorial generation of feasible solutions, guided by a priori knowledge about coronary arterial conditions. The job is performed by a sequential Branch-and-Bound algorithm. Since its execution takes about seven days, the team has parallelized the algorithm to reduce the runtime while simultaneously achieving good load balancing.

Angiography is an imaging technique to diagnose dysfunction of the coronary arteries. Examination and therapeutic results can be greatly improved via 3D visualization, based on two orthogonal images taken at the same time. In the reconstruction process, the research team applies a set of pre-established discrimination conditions in order to reduce the number of erroneous solutions. The starting point is a Total Solution Tree (TST), which contains all the possible representations of the coronary tree. From the TST, the Partial Solution Trees (PSTs) are derived that satisfy all the discriminating conditions. These include exclusive ramification through bifurcation; short superposition, used to identify successful pairs of edges, the so-called unitary threads; completion, meaning that every segment in the 2D projections corresponds to an image in the PST; verticality of the segments from the main artery down to the heart's inferior extreme; and no traversing of the heart or of the blood flow.
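The pruning scheme described above follows the standard Branch-and-Bound pattern: partial solutions are expanded and any branch that violates a discrimination condition is discarded before it is explored further. A minimal, hedged sketch of that pattern in Python (the names `branch_and_bound`, `children` and the toy problem are illustrative, not from ANIA):

```python
# Hedged sketch of Branch-and-Bound enumeration with pruning predicates
# standing in for ANIA's discrimination conditions. Not the team's code.

def branch_and_bound(root, children, conditions):
    """Enumerate complete solutions, pruning every partial solution
    that violates one of the discrimination conditions."""
    solutions = []
    stack = [root]
    while stack:
        node = stack.pop()
        # Bound step: drop partial solutions failing any condition.
        if not all(cond(node) for cond in conditions):
            continue
        kids = children(node)
        if not kids:               # leaf: a complete, valid solution
            solutions.append(node)
        else:                      # branch step: expand the partial solution
            stack.extend(kids)
    return solutions

# Toy usage: enumerate bit strings of length 3 containing at most one '1'.
root = ""
children = lambda s: [s + "0", s + "1"] if len(s) < 3 else []
conditions = [lambda s: s.count("1") <= 1]
print(sorted(branch_and_bound(root, children, conditions)))
# → ['000', '001', '010', '100']
```

In ANIA the nodes would be Partial Solution Trees and the predicates the five conditions listed above; the toy problem merely shows how bounding cuts off whole subtrees of the search.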

These five conditions provide the bounds for the algorithm. The first PST is built from the root of the TST, where there are no edges yet. For each node there are one or two segments, and new segments may be added to the current tree; segments that do not satisfy the conditions are eliminated. The generation of expanding PSTs involves an exhaustive combination of the TST's edges. Therefore, Dr. Cardinale and her team propose a parallel solution that builds the unitary threads and combines them in all possible ways. Pairs of threads with common points form combinations, and each combination satisfying the conditions constitutes a valid solution. The unitary threads are numbered from 1 to n to determine how many combinations, or work units, each processor has to perform. In addition, the master processor generates all the unitary threads and broadcasts them to the other processors.
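The work-partitioning idea can be sketched as follows. This is an assumption-laden illustration, not the team's implementation: it numbers the unitary threads 1..n, treats every pair of threads as a candidate work unit (in ANIA only pairs with common points actually form combinations), and deals the pairs out round-robin so each processor receives a similar load. The name `partition_work` is hypothetical.

```python
# Hedged sketch: distribute candidate thread pairs over processors
# round-robin for load balancing. Illustrative only.

from itertools import combinations

def partition_work(n_threads, n_procs):
    """Assign each pair (i, j) of unitary threads to a processor."""
    pairs = list(combinations(range(1, n_threads + 1), 2))
    work = [[] for _ in range(n_procs)]
    for k, pair in enumerate(pairs):
        work[k % n_procs].append(pair)   # round-robin assignment
    return work

# With 27 unitary threads (as in the reported experiment) and 8
# processors, the 351 candidate pairs split into chunks of 43 or 44.
work = partition_work(27, 8)
print([len(w) for w in work])
# → [44, 44, 44, 44, 44, 44, 44, 43]
```

The even split is what gives the near-equal work units the article credits for the good load balancing; the master's broadcast of all unitary threads ensures every processor can evaluate its assigned pairs independently.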

The research team assessed the performance of the parallel application using 8 nodes of their Parsytec Xplorer platform. Each processor node consisted of a PowerPC 601 with 32 MB of RAM and a 32 KB on-chip cache holding both instructions and data. The system computed 27 unitary threads with 130 segments from the Total Solution Tree, yielding 67 million combinations to be calculated. The runtime on one processor was more than 8 hours; with eight processors the result came within 79 minutes. With 64 processors the answer can be provided in 16 minutes, whereas 256 processors would do the job in only 6.2 minutes. Parallel algorithm programming has thus proven to considerably reduce the runtime while maintaining satisfactory load balancing. As a next step, the team plans to experiment with other discrimination conditions. More information, in Spanish, is available on the team's Web site at the Department of Computing and Information Technology of Simon Bolivar University.
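The reported runtimes can be turned into the usual speedup and efficiency figures. The single-processor time is quoted only as "more than 8 hours", so the snippet below assumes 480 minutes as a conservative lower bound; the resulting numbers are therefore lower bounds on the actual speedup.

```python
# Speedup = T1 / Tp and efficiency = speedup / p, using the runtimes
# quoted in the article. T1 is assumed to be 480 minutes, a lower
# bound on the reported "more than 8 hours".

t1 = 480.0                         # minutes (assumed lower bound)
runs = {8: 79.0, 64: 16.0, 256: 6.2}

for p, tp in runs.items():
    speedup = t1 / tp
    efficiency = speedup / p
    print(f"p={p:3d}: speedup >= {speedup:5.1f}, efficiency >= {efficiency:.2f}")
```

With these assumptions the 8-processor run achieves a speedup of at least 6.1 (efficiency about 0.76), which is consistent with the article's claim of a considerable runtime reduction with satisfactory load balancing; efficiency naturally drops as the processor count grows toward 256.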


Leslie Versweyveld
