This year's Supercomputer Seminar, which, according to tradition, took place in Mannheim from June 18th to 20th, had an instructive evening session in store to close the first conference day with a touch of philosophy. In the beautiful Aula of the University Schloss, the senior vice-president of Silicon Graphics Inc., Dr. John F. Vrolyk, gave a fascinating talk on the strategy SGI is following with regard to supercomputer architectures and their scientific applications in the coming era. The company is showing a strong will to meet the tremendous challenges of the new millennium with a multiplicity of architectures and high-quality software. In San Diego, the Congress of Vascular Surgeons has already received a powerful foretaste of this huge computational strength.
In High Performance Computing and Networking (HPCN), experience has taught us to follow the right sequence, as Dr. Vrolyk points out. Rather than designing programmes for a specific architecture, we should first allow the data structures and algorithms, as they gradually develop through the course of the underlying science, to guide us to the appropriate architecture. Currently, multiple-Teraflop machines represent the leading edge in computing power required by industry and research laboratories. A wave of very fast and large machines offering incredible opportunities will be built in the years to come to satisfy the growing appetite of engineers and scientists, while the American government actively supports the massive change in attitude towards HPCN.
Alongside the Teraflop machines, SGI will continue to focus on both the T3E architecture and the good old vector computers, which have proven their superiority in certain specific applications. Dr. Vrolyk insists on these three fundamentally different approaches, all of which can meet similar performance criteria and deliver computational power on the scale of Teraflop computing. At the forefront of the HPCN picture stands the benefit to mankind. If these types of machines are to be used for applications relating to the safety of the population, we must pay the utmost attention to their reliability, functionality and robustness, as Dr. Vrolyk states, referring to examples such as weather forecasting and tornado prediction.
There is a tight coupling between the ability to simulate, which requires a huge amount of computation, and the ability to understand, which is achieved by visualization. Recently, Dr. Vrolyk was asked to speak before the Society of Vascular Surgeons in San Diego. On an SGI workstation, a colleague from Stanford displayed the blood flow through a real patient's arterial system. Doctors are now able to study blood flows, including the elasticity of the arteries and the pulsations. Blood pressures can be read directly on the arteries as the heart pumps. The various data are gathered by CT, MRI and ultrasound. The surgeons can draw in a bypass around the blockage, after which they rerun the simulation by means of the programme, which is written in Java and viewable with a normal browser.
The supercomputer recalculates the blood flow with the bypass data and a new drawing appears on the little workstation's screen. As a result, the doctor can figure out what would happen if he were to place the bypass on the indicated spot. Until now, surgeons could never see the results of a bypass operation until after they had performed it. This posed a great risk to the patient, because massively increasing the blood flow above the bypass can leave the arterial structure below suffering severely from high blood pressure. It took the supercomputer almost an hour to perform the processing needed for this kind of pre-operative planning for one patient. This might lead to a world where there is a Teraflop processor in the basement of every hospital, as Dr. Vrolyk imagines, to accurately simulate patient care prior to surgical interventions, in order to guarantee a longer and healthier life for us all.
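The hemodynamic effect described above can be illustrated with a toy calculation, far simpler than the actual Stanford simulation: if artery segments are modelled as Hagen-Poiseuille resistances, a bypass graft placed in parallel with a narrowed segment sharply lowers the total resistance, multiplying the flow delivered downstream. All geometry, viscosity and pressure figures below are illustrative assumptions, not patient data.

```python
import math

def poiseuille_resistance(length_m, radius_m, viscosity=3.5e-3):
    """Hagen-Poiseuille resistance of a cylindrical vessel segment.
    viscosity: blood, roughly 3.5 mPa*s (illustrative value)."""
    return 8 * viscosity * length_m / (math.pi * radius_m ** 4)

# Illustrative geometry: a narrowed (stenosed) segment has half the
# radius of a healthy one, so 16 times the resistance (r**4 law).
stenosed = poiseuille_resistance(0.10, 1.0e-3)  # 10 cm, 1 mm radius
bypass = poiseuille_resistance(0.12, 2.0e-3)    # slightly longer graft

# The bypass runs in parallel with the stenosed segment.
parallel = 1.0 / (1.0 / stenosed + 1.0 / bypass)

# For a fixed pressure drop dP across the segment, flow Q = dP / R,
# so the drop in resistance translates directly into extra flow
# (and extra pressure) reaching the vessels below.
dP = 2000.0  # pressure drop in Pa (illustrative)
flow_before = dP / stenosed
flow_after = dP / parallel

print(f"flow increase factor: {flow_after / flow_before:.2f}")
```

The large jump in flow is exactly why, as Dr. Vrolyk notes, the downstream arterial structure must be checked for its tolerance to the new pressure before the graft is placed.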
Initially, the T3E machine offered scientists a chance to let their imagination run free. It also brought a fundamental change in the industry, in which the best central processors are now supplied by commercial vendors. We can now leverage industry standards to build incredibly fast machines. In fact, only two companies in the world possess sufficient financial muscle to construct these processors, namely Intel and IBM, and we should urge them to build exactly the architectures we need. In addition, the usefulness of a uniform memory model has to be fully recognized, because shifting computational power to just the place where it is needed is far easier there than with a cluster model.
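The contrast Dr. Vrolyk draws can be sketched in a few lines, here in Python rather than the Fortran or C of the day, and with the "cluster" merely simulated in one process. Under a uniform memory model every worker sees the same array, so work can be handed to any idle processor without moving data; under a cluster model each node owns a private partition, and redirecting work means shipping the data along with it.

```python
from queue import Queue, Empty
import threading

# Uniform (shared) memory model: all workers see the same arrays, so
# any worker can pick up any chunk of work -- no data ever moves.
data = list(range(1_000))
results = [0] * len(data)
tasks = Queue()
for start in range(0, len(data), 100):      # ten chunks of work
    tasks.put((start, start + 100))

def worker():
    while True:
        try:
            start, end = tasks.get_nowait()
        except Empty:
            return
        for i in range(start, end):
            results[i] = data[i] * data[i]  # computed in place, no copies

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Cluster model (simulated): the data must first be partitioned and
# copied out to each node; rebalancing later means copying it again.
def cluster_node(partition):
    return [x * x for x in partition]       # each node works on its own copy

partitions = [data[i:i + 250] for i in range(0, len(data), 250)]
cluster_results = [y for part in partitions for y in cluster_node(part)]

assert results == cluster_results
```

Both models reach the same answer; the difference, as the talk emphasizes, lies in how easily computational power can be moved to where the data already is.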
In modern supercomputer building, at least a thousand processors will be needed in the future to provide the power demanded by researchers. Until now, systems have scaled to 256 processors, so great effort is going into integrating more and faster processors. The hardware usually does not constitute any real problem, but writing the software is likely to turn into an enormous job. Companies like SGI, however, are willing to listen to the scientists in order to know which memory structures have to be addressed to suitably match the various research issues. Dr. Vrolyk is fully aware that the Age of Imagination is only on the verge of truly unfolding.