In “And Your Bugs Can Sing” in September’s PragPub, Brian wrote about turning your log files into music. But what if the bugs you’re looking for are the kind that make you sick?

Recently on NPR I heard a report about doctors using music to monitor a patient’s condition. The idea is that there are many medical instruments in an ER, each reporting on a patient’s real-time condition. So many instruments, in fact, that it can be hard to notice changes in the values. It’s as though a bunch of musicians are jamming, but each is playing her own tune.

To make it easier to follow what’s going on, these doctors are playing an orchestral piece of music and associating each medical instrument in the ER with a different musical instrument in the piece. When the medical instrument’s output is in a safe or normal range, the musical instrument plays normally. If the medical instrument’s output is not in the safe range, the musical instrument’s track is tweaked so that it sounds wrong (by raising or lowering the pitch, for example).

Hey, That’s a Lot Like Log4JFugue!

This caught my attention because it’s very similar to my Log4JFugue project, which converts computer program log files into a music stream. Both projects use music as a way to deal with an overwhelming volume of data. I like how the doctors in this project are harnessing the human ability to process music. An orchestra performing a piece creates a huge volume of acoustic data, yet it all comes together in such a way that the human brain can process it in real time and isolate subtle variations.


Although this project is similar to Log4JFugue, there are a number of interesting differences between our approaches.

The medical music system seems to be optimized to detect deltas from a norm, whereas Log4JFugue is optimized to detect deltas in rates. This leads us in different directions. In the medical scheme, any departure from the norm is converted into a bending of a particular instrument’s track. So if the heartbeat gets too high, the violin track might be bent, while if the O2 saturation gets too low, the flute track might be bent.
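To make that concrete, here is a minimal sketch of the bending idea, assuming the JFugue Java music library (the same library behind Log4JFugue) and a hypothetical safe range; it is an illustration, not the medical team’s actual system. Each vital sign owns a melodic line, and an out-of-range reading shifts that line a semitone so it sounds wrong against the rest of the orchestra.

```java
// A sketch only: the safe range, the melody, and the one-semitone bend
// are all invented for illustration. Uses JFugue 4's org.jfugue.Player;
// in JFugue 5 the class lives in org.jfugue.player.
import org.jfugue.Player;

public class WardOrchestra {

    // Hypothetical safe range for heart rate, in beats per minute.
    static final int HEART_LOW = 60, HEART_HIGH = 100;

    // The violin's track: its normal line when the reading is safe,
    // the same line bent up a semitone (audibly "off") when it is not.
    static String heartTrack(int beatsPerMinute) {
        boolean safe = beatsPerMinute >= HEART_LOW && beatsPerMinute <= HEART_HIGH;
        return safe
            ? "V0 I[VIOLIN] C5q D5q E5q C5q"
            : "V0 I[VIOLIN] C#5q D#5q F5q C#5q";
    }

    public static void main(String[] args) {
        new Player().play(heartTrack(125)); // 125 bpm is out of range: plays the bent line
    }
}
```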

This approach seems as though it would work best for detecting a single instrument going out of range. If many medical instruments reported bad values at the same time, the result would seem to be a literal cacophony.

Log4JFugue, on the other hand, is optimized for detecting rate differences without judging any value as normal or abnormal. If the program’s log shows an increase or decrease in the rate of some message being measured, Log4JFugue raises or lowers the tempo of the associated percussion instrument. When the system being measured speeds up as a whole, every percussion track speeds up with it, which still sounds fine but is noticeably faster.
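As a sketch of that rate-to-tempo mapping (my illustration, not Log4JFugue’s actual code), imagine counting how often a watched log message appears each second and letting that count set the tempo of a percussion track. The scaling constants here are arbitrary:

```java
import org.jfugue.Player;

public class LogDrummer {

    // Assumed scaling: a busier log means a faster beat, capped at 240 bpm.
    static int tempoFor(int messagesPerSecond) {
        return Math.min(60 + 10 * messagesPerSecond, 240);
    }

    // Voice 9 is the MIDI percussion channel in JFugue's MusicString syntax.
    static String drumTrack(int messagesPerSecond) {
        return "T" + tempoFor(messagesPerSecond)
             + " V9 [HAND_CLAP]q [HAND_CLAP]q [BASS_DRUM]q [HAND_CLAP]q";
    }

    public static void main(String[] args) {
        new Player().play(drumTrack(8)); // 8 matching messages/sec plays at tempo 140
    }
}
```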

The medical system solves a problem that Log4JFugue never solved: how to use non-percussion instruments. Both systems get a single rate variable for each item being measured. That works fine for unpitched percussion, but most instruments require both a pitch and a rate. The medical system takes the fascinating approach of starting with an orchestral score, which provides constantly changing pitch and rate for each instrument, and overlaying the single rate value from the medical instrument on top of it. Each out-of-norm value perturbs the delicate balance of the orchestra, and that perturbation is easy for a listener to notice.
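A hedged sketch of that overlay idea: start from a scored phrase and let the distance from the norm decide how far to transpose it, so a small deviation nudges the line slightly off-key and a large one drags it badly off. The phrase, the norm, and the semitone math are all invented for illustration.

```java
import org.jfugue.Player;

public class ScoreOverlay {

    // The flute's scored phrase as MIDI note numbers (JFugue accepts [60]q etc.).
    static final int[] PHRASE = {60, 62, 64, 65};

    // Assumed norm: O2 saturation of 95% or better plays the line as written;
    // each point below 95 transposes the whole phrase up one more semitone.
    static String fluteTrack(double o2Saturation) {
        int offset = o2Saturation >= 95 ? 0 : (int) Math.ceil(95 - o2Saturation);
        StringBuilder track = new StringBuilder("V1 I[FLUTE] ");
        for (int note : PHRASE) {
            track.append('[').append(note + offset).append("]q ");
        }
        return track.toString();
    }

    public static void main(String[] args) {
        new Player().play(fluteTrack(91.0)); // four points low: the phrase sits four semitones sharp
    }
}
```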

The NPR report did not discuss the mechanism for bending the music, but it implied that it could be done in real time to any music stream.

The Unharnessed Power of Sonification

Bubbling up a level, it is intriguing to see continuing interest in sonification as opposed to visualization.

Humans are primarily visual creatures: in fact, “attention” is generally equated with where your gaze is directed. The flip side is that vision is optimized for focusing on a single target at a time. Hearing, on the other hand, is a much less directional sense, optimized for more holistic processing. There are lots of signals in our world that demand our complete and focused attention, and those generally need to be visual. Lots of other signals, though, can be attended to in a less focused manner and are good candidates for acoustic presentation.

This leads to the question of how we can improve our computer interfaces by making more intelligent use of non-visual signals. I see many articles these days discussing how to use the vast new expanse of the iPad’s screen. I don’t, however, hear people talking about ways to use sound to enhance the hopefully wonderful iPad experience.

Yes, I know what you’re thinking. But before you remind me that silence is golden, let me wholeheartedly agree. Extra sound for the sake of sound would be just as bad as the flashing screen icons that defaced many early web pages.

On the other hand, there are relatively non-intrusive ways to add acoustic feedback to signals. For a truly geeky example, it used to be fun to watch Star Trek TNG on a sound system with a great subwoofer: the low rumbling sound of the engines made an impressive background roar. Most of us run a CPU monitoring tool on our development boxes; imagine if its output were a modulated low bass rumble rather than a widget in the tray.
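For the curious, here is a rough sketch of that rumble using nothing but the JDK’s built-in sound and management APIs. The 40 Hz pitch and the load-to-volume scaling are arbitrary choices, and getSystemLoadAverage() returns -1 on platforms that don’t support it:

```java
import java.lang.management.ManagementFactory;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class CpuRumble {
    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(44100f, 8, 1, true, false); // 8-bit signed mono PCM
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[4410]; // a tenth of a second of audio per pass
        double phase = 0;
        while (true) {
            double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
            double volume = Math.min(Math.max(load, 0) / 4.0, 1.0); // assume ~4 cores = full rumble
            for (int i = 0; i < buf.length; i++) {
                phase += 2 * Math.PI * 40 / 44100;   // 40 Hz: subwoofer territory
                if (phase > 2 * Math.PI) phase -= 2 * Math.PI;
                buf[i] = (byte) (Math.sin(phase) * 127 * volume);
            }
            line.write(buf, 0, buf.length);          // blocks, pacing the loop in real time
        }
    }
}
```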

I think this medical musical data reduction project is an interesting example of sonification, and I’m keeping my ears open for other examples.

Brian Tarbox is a Principal Staff Engineer at Motorola, where he designs server-side solutions in the Video On Demand space. He writes a blog on the intersection of software design, cognition, and Tai Chi at briantarbox.blogspot.com.

Send the author your feedback or discuss the article in the magazine forum.