At the Institute "Mihajlo Pupin" in Belgrade, a research team has developed a visualisation and sonification method that has been integrated into a temporal electro-encephalography (EEG) mapping programme, referred to as TEMPO. Professor Emil Jovanov explained and demonstrated the system at the ITIS'98 Conference. A standard browser that supports Java and the Virtual Reality Modelling Language (VRML) suffices to create the synaesthetic effects, enabling the physician to simultaneously view the spatial distribution of the patient's brain activity and perceive the symmetry between the left and right hemispheres.
Conventional health care has several limitations because it largely depends on the physical presence of the physician to examine the patient and, possibly, to consult other qualified experts. The patient's medical record is often incomplete or unavailable as well. An Internet-based Virtual Medical Worlds environment can provide all the required data, since they are stored on an application server, while the doctor, patient, and specialists communicate remotely over the Web. The Jovanov research team has built its EEG visualisation and sonification method on this facility. It has created a wide range of parameters to generate a three-dimensional head model for the interpretation of encephalograms.
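As an illustration of the principle, the sketch below generates a minimal VRML 2.0 scene in which a single sphere stands in for the head and its diffuse colour reflects a normalised activity level. Both helper functions are invented for this example; the institute's actual head model and parameter set are far richer:

```python
def activity_to_color(activity):
    """Map normalised EEG activity (0.0-1.0) to an RGB colour:
    blue for low activity shading through to red for high activity."""
    a = max(0.0, min(1.0, activity))
    return (a, 0.0, 1.0 - a)

def head_model_vrml(activity):
    """Emit a minimal VRML 2.0 scene: a sphere standing in for the
    scalp, coloured according to the current activity level."""
    r, g, b = activity_to_color(activity)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        f"  appearance Appearance {{ material Material {{ "
        f"diffuseColor {r:.2f} {g:.2f} {b:.2f} }} }}\n"
        "  geometry Sphere { radius 1.0 }\n"
        "}\n"
    )
```

Because the output is plain VRML text, any standard VRML-capable browser of the kind mentioned above could render it without dedicated client software.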
As a result, complex topographic maps in real-time 3D animation have been developed for a multi-modal presentation. The pure visualisation has been extended with sonification and force-feedback modalities within an immersive environment. The problem of information overload has been countered with the introduction of synaesthetic effects, or additional channels. Sonification is the presentation of information by means of sound. The US army was the first to experiment with sonification, to prepare radar information for its pilots. Auditory processing happens much faster than visual processing, and it is easier to focus and localise attention in space. This additional information channel offers good temporal resolution and permits the presentation of multiple data streams.
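Topographic EEG maps of this kind are typically built by interpolating the values measured at a small number of scalp electrodes across the whole head surface. The sketch below uses simple inverse-distance weighting, a common interpolation scheme; whether TEMPO uses this particular method is an assumption made for illustration:

```python
def interpolate_scalp(electrodes, point, power=2):
    """Inverse-distance-weighted interpolation of electrode values.

    electrodes: list of ((x, y), value) pairs, electrode positions on
    a 2D projection of the scalp with their measured activity values.
    point: (x, y) location on the scalp for which a value is wanted.
    """
    num = den = 0.0
    for (ex, ey), value in electrodes:
        d2 = (point[0] - ex) ** 2 + (point[1] - ey) ** 2
        if d2 == 0.0:
            return value  # exactly at an electrode: no interpolation needed
        w = 1.0 / d2 ** (power / 2)  # nearer electrodes weigh more
        num += w * value
        den += w
    return num / den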
However, this method has some disadvantages too. It is difficult, for instance, to perceive exact frequencies. The spatial resolution is limited and the sound parameters are not fully independent. Interference with other sources, such as speech, cannot be avoided, and sound lacks persistence. Obviously, the user's perception is highly individual. Still, sound parameters can be of great service to the clinical examination. There are four of them, namely pitch, timbre, loudness, and location, and each can be switched on or off. The researchers prefer to use a natural sound trace, such as the sound of a creek, positioned in real time, so that the listener can perceive both the brain activity and the symmetry between the left and right hemispheres.
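A mapping of hemispheric activity onto two of these four parameters can be sketched as follows: overall activity drives pitch, while the left/right asymmetry drives the apparent location of the sound (its stereo position). The function and its base pitch are illustrative assumptions, not the mapping actually used in TEMPO:

```python
def sonify(left_power, right_power, base_pitch=220.0):
    """Map hemispheric EEG power to two sound parameters:
    pitch (overall activity) and location (stereo pan derived
    from left/right asymmetry)."""
    total = left_power + right_power
    pitch = base_pitch * (1.0 + total)  # more activity -> higher pitch
    # pan in [-1, 1]: -1 fully left, +1 fully right, 0 for symmetric activity
    pan = 0.0 if total == 0 else (right_power - left_power) / total
    return pitch, pan
```

With a natural sound trace, as preferred by the researchers, the same pan value would simply position the creek sound in the stereo field, so that an asymmetry between the hemispheres is heard as the sound drifting to one side.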
Since the programme runs in a standardised VRML environment, this method presents an acceptable and affordable solution. Professor Jovanov is convinced that the multi-modal presentation improves the immersion and the temporal experience. Ideally, differences in the data should be mapped onto maximal distances in the perceptual domain. It is important to perform a fusion with other diagnostic procedures, as well as with anatomic models based on Magnetic Resonance Imaging (MRI) data of the real patient. Until now, the team has been using an artificial model. The introduction of a real person's model for the mapping of the brain activity would make it possible to optimise the perception of the exact processes going on in the brain.