
Mark Ballora, professor of music technology at Pennsylvania State University, is changing the way researchers interpret scientific data by transforming datasets into musical scores.

Ballora has worked with former Grateful Dead percussionist and ethnomusicologist Mickey Hart and Nobel Laureate George Smoot on the DVD project “Rhythms of the Universe.” More recently, Ballora received two seed grants from the National Academies Keck Futures Initiative (NAKFI) and the Gulf Research Program to create sonifications in the area of ocean research.

Managing Editor Lauren Scrudato spoke with Ballora to learn more about the field of sonification and how it can offer a new perspective on analyzing research results. 


LS: Can you tell me about your career in music technology and the type of work you conduct at Penn State? 
MB: After collecting tape decks, synthesizers and personal computer gear in the 1980s, I went to grad school in the 1990s to study music technology and then composition at NYU, and then for a Ph.D. in computer music applications at McGill. At Penn State, my job was to set up a music technology program for students in the School of Music. We’ve now got a minor in music technology, and we started a BA in music technology in 2015. I teach courses in musical acoustics, basics of music production with a laptop, history of electroacoustic music, and software programming for music.

LS: Can you walk through the steps of transforming datasets into music? 
MB: I might start in a spreadsheet program and just explore the dataset a bit, plotting it in various ways, just to get an idea of its behavior, value range, and so forth. From there, I’ll export it as a .csv or space-delimited file and bring it into SuperCollider, an audio synthesis program that is well suited for sonification work. Then it’s a matter of designing an instrument to play the dataset like a musical score, and stepping through the dataset, applying its values to the instrument so that they change its characteristics such as pitch, vibrato rate, volume, pan position, and so on.
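
As a rough illustration of that pipeline, here is a minimal SuperCollider sketch. The file path, value ranges, and parameter mappings are hypothetical stand-ins, not Ballora’s actual code:

    (
    // Load a single-column .csv of numbers (hypothetical file path).
    var data = CSVFileReader.read("~/data/series.csv".standardizePath)
        .flatten.collect(_.asFloat);
    var lo = data.minItem, hi = data.maxItem;

    // A simple "instrument": a short sine tone whose pitch and pan
    // position are set from each data point.
    SynthDef(\dataTone, { |freq = 440, pan = 0|
        var sig = SinOsc.ar(freq, 0, 0.1)
            * EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
        Out.ar(0, Pan2.ar(sig, pan));
    }).add;

    // Step through the dataset like a score, one point every 150 ms.
    Routine({
        s.sync;
        data.do { |v|
            var norm = (v - lo) / (hi - lo);        // scale to 0..1
            Synth(\dataTone, [
                \freq, norm.linexp(0, 1, 220, 880), // value to pitch
                \pan,  (norm * 2) - 1               // value to stereo position
            ]);
            0.15.wait;
        };
    }).play;
    )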

The fun part is in coming up with a sound that can illustrate the data’s behavior effectively, and that makes sense on an intuitive level. This varies, depending on the type of data. In a recent example, [I was working with] pulsar data. The dataset was the pulsation cycle of a number of electromagnetic frequencies. So it was a matter of transposing these frequencies down many octaves so that they corresponded to audible sound frequencies, then creating a sine wave oscillator that played each of these frequencies, with loudness levels that followed the contours of the pulsations. I also slowed the pulsation cycle down quite a bit so that it was audible. This produced the “chord of the pulsar,” transposed and slowed down to ranges suitable for human ears.
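
The octave transposition he describes is straightforward arithmetic: halving a frequency lowers it by one octave, so repeatedly halving any frequency eventually brings it into hearing range. A sketch of the idea in SuperCollider, using made-up frequencies rather than real pulsar data:

    (
    // Hypothetical electromagnetic frequencies (Hz), far above hearing.
    var emFreqs = [4.2e8, 6.6e8, 9.1e8];
    var audible = emFreqs.collect { |f|
        while({ f > 1000 }, { f = f / 2 }); // drop octaves until below 1 kHz
        f
    };
    audible.postln;
    // Play the transposed "chord of the pulsar" as a bank of sine tones.
    { Mix(SinOsc.ar(audible, 0, 0.3 / audible.size)) ! 2 }.play;
    )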

For solar wind data, I used frequency modulation synthesis to come up with a shimmery sound that reminded me of northern lights. For earthquake data, I came up with a sound like the wooden rubbing sound you get with a guiro, which gave a throbbing sound that seemed suited for the sound of moving earth. For tropical storms, I started with filtered noise that sounded like swirling wind, and had the rate and coloration change with air pressure levels to reflect the intensity of the storms.
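
To make one of these concrete, the tropical-storm idea might be sketched in SuperCollider along the following lines, with an invented, normalized pressure control standing in for real storm data: filtered noise whose swirl rate and brightness rise with intensity.

    (
    SynthDef(\stormWind, { |pressure = 0.5| // 0 = weak storm, 1 = intense
        var rate   = pressure.linlin(0, 1, 0.2, 4);    // swirl speed
        var center = pressure.linexp(0, 1, 400, 4000); // brightness
        var sweep  = LFNoise2.kr(rate).range(center * 0.5, center);
        var sig    = BPF.ar(WhiteNoise.ar(0.5), sweep, 0.3);
        Out.ar(0, sig ! 2);
    }).add;
    )
    // x = Synth(\stormWind, [\pressure, 0.8]);
    // then x.set(\pressure, ...) as each new data point arrives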

It takes some trial and error. The first try usually doesn’t sound that great, even if it may accurately reflect the data’s behavior. Sometimes a researcher will make a suggestion about something to change to reflect the data in a slightly different way that she finds more informative or intuitive.

LS: What first inspired you to transform data sets (which can be very dry and technical) into a more creative musical product?
MB: When I was at McGill, I got a call from a physiologist, asking me if I had any interest in rendering heart rate variability datasets as music. It sounded intriguing enough that I said “yes” before I really had any understanding of what heart rate variability was. I soon learned that the physiologist, Leon Glass, was a pioneer in nonlinear dynamics (chaos theory) who also played French horn. He had a creative approach to searching for patterns in physiological signals, as well as a musical bent. As a composer/sound designer with science envy, I found the idea appealing enough that I changed my thesis topic to this work. I also learned that sonification was an emerging field in informatics, started in large part by Gregory Kramer, the founder of the International Community for Auditory Display. It seemed that there was some low-hanging fruit there, as it was a new field with many opportunities for discovery. There is an annual conference in this area; we hosted this year’s at Penn State in June. A number of people work in this area: Bruce Walker has a sonification lab at Georgia Tech, as do S. Camille Peres at Texas A&M, Thomas Hermann at Bielefeld, and Myounghoon Jeon at Michigan Tech. A number of composers and sound artists also work in the area, such as Margaret Schedel at Stony Brook University, Bruno Ruviaro at Santa Clara University, and composer Natasha Barrett.

LS: How exactly can these musical scores be used? 
MB: There are three reasons I can think of to justify work in sonification. I think of them as a three-legged stool, since any one of them can lead to the others.

One of the best uses of sonification has been by Wanda Diaz Merced, an astronomer who was rendered blind by an illness. She developed software that allows her to listen to graphs. In her TED talk, she describes quite compellingly how she is now able to work at the same level she worked at when she was sighted. What’s more, by listening to graphs, she was able to detect electromagnetic resonances that no one had noticed in visual graphs. Her sighted colleagues find they also like working with the sonification software, as there are often patterns in the data that are more readily heard than seen. This is an example of sonification that was created for reasons of accessibility (one stool leg), yet brought about new discoveries as a side effect (another stool leg).

When I heard that Mickey Hart and George Smoot were looking for sonifications for their film, it was an exciting project because I knew the sonifications would have to work on a musical level if Hart and Smoot were to use them. Here, the purpose was outreach (leg number three), with the primary goal of engaging non-scientists.

There are any number of examples of people gaining insights through the use of sound. Proof of concept has been established. What strikes me as more interesting at this point is how sonification can evolve from a novelty into a standard method of research and outreach, alongside visualization.

LS: Any exciting projects coming up in relation to this technique?
MB: In addition to the two seed grants from the National Academies, I’ve been working with a meteorologist at Penn State named Jenni Evans, who has had me sonify tropical storm data. We’re working on getting funding to broaden the outreach of these sonifications to large populations of students. I’ll also be working with Joseph Schlesinger, an anesthesiologist at the Vanderbilt Medical School, who is committed to improving the acoustic environment of operating rooms and hospitals, and to creating monitoring methods and alarms that are not stressful and cluttered, as is too often the case.

LS: Is there any type of dataset this doesn’t work for? 
MB: Whether or not this works depends on what someone is looking for. If you need to know specific values of data points, then this is likely not the right tool for the job, given that data values are typically transposed to serve as pitches or other auditory parameters. But if what you need is to understand the behavior of a function, then using the ears to track dynamics can be extremely useful.

As far as creating sonifications for the purposes of research and discovery goes, time-based datasets, particularly multivariate datasets, are good candidates for sonification. These leverage the particular strengths of the auditory system.
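
As a hedged example of what a multivariate, time-based mapping might look like in SuperCollider: with two invented, time-aligned series, temperature could drive pitch while wind speed drives vibrato depth, so the ear can follow both streams at once.

    (
    // Hypothetical time-aligned series; real data would be read from files.
    var tempC  = [12, 14, 13, 17, 21, 19];
    var windKt = [ 5,  8, 20, 35, 12,  6];
    Routine({
        tempC.size.do { |i|
            var freq = tempC[i].linexp(10, 25, 330, 660); // temperature to pitch
            var vib  = windKt[i].linlin(0, 40, 0, 0.15);  // wind to vibrato depth
            {
                var mod = 1 + (SinOsc.kr(6) * vib);       // 6 Hz vibrato
                SinOsc.ar(freq * mod, 0, 0.1)
                    * EnvGen.kr(Env.perc(0.01, 0.25), doneAction: 2) ! 2
            }.play;
            0.2.wait;
        };
    }).play;
    )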

Non-time-based sets present different kinds of challenges. Images that are typically seen all at once, such as maps, require a different approach, since they aren’t time-based. Again, this kind of thing is usually exploratory. Someone studying the demographics of various regions who wants to hear the relations among various characteristics (income levels, racial/ethnic/religious populations, types of industry, etc.) may be well served by an interactive representation in which an image can be touched or activated by mouse, and different characteristics of a region can be heard.
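
A minimal sketch of that interactive idea in SuperCollider, with the mouse standing in for a touch interface and a handful of invented regional values heard as pitches:

    (
    // Hypothetical per-region values (e.g., scaled income levels).
    var regions = [3, 7, 2, 9, 5, 4, 8];
    {
        var index = MouseX.kr(0, regions.size - 1).round; // mouse selects a region
        var value = Select.kr(index, regions);
        SinOsc.ar(value.linexp(regions.minItem, regions.maxItem, 220, 880), 0, 0.1) ! 2
    }.play;
    )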

Examples of Ballora’s work can be found at www.markballora.com. “Rhythms of the Universe,” the film by Mickey Hart and George Smoot, features Ballora’s sonifications.
 
