Over the 2019 spring semester at the University of Iowa, I studied the physical representation of acoustics in "Spectral Nature of Sound" under Jean-François Charles, professor of Composition and Music Theory. The course focused on analyzing, interpreting, and manipulating acoustic recordings in the spectral domain using FFT-based analysis software, phase vocoders, and patches made with Max MSP. "From Image to Sound", a Max MSP patch created by Jean-François Charles, can take an image and turn it into a collection of sine waves by assigning a bin of frequencies to each region of the image along the horizontal axis. As the playhead scrubs and loops from the left side of the now black-and-white image to the right, it passes over bright spots in the image, which tell the corresponding frequency bins to activate. The result is, in its own way, beautiful...
Image of an icicle and windows in the distance (left)
The same image with a black-and-white outlining effect used for processing (right)
Audio captured in the "From Image to Sound" Max patch created by Jean-François Charles
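For readers curious about the mechanics, here is a minimal sketch of the kind of mapping the patch performs, written in Python rather than Max MSP. The file names, frequency range, and scan duration are my own illustrative stand-ins, not values taken from the patch itself:

```python
import numpy as np
import wave
from PIL import Image

SR = 44100                      # sample rate
DURATION = 8.0                  # seconds to scan the full image (assumed)
F_LOW, F_HIGH = 80.0, 8000.0    # frequency range of the bins (assumed)

# Load the image as grayscale; each row becomes a frequency bin and each
# column a moment in time as the playhead scans from left to right.
img = np.asarray(Image.open("icicle.png").convert("L"), dtype=np.float64) / 255.0
img = np.flipud(img)            # put low frequencies at the bottom of the image

n_bins, n_cols = img.shape
freqs = np.geomspace(F_LOW, F_HIGH, n_bins)   # one sine wave per row
t = np.arange(int(SR * DURATION)) / SR

# Stretch the image's columns across the output's duration.
col_index = np.minimum((t / DURATION * n_cols).astype(int), n_cols - 1)

out = np.zeros_like(t)
for row, f in enumerate(freqs):
    amp = img[row, col_index]   # bright pixels activate this bin
    if amp.max() > 0:
        out += amp * np.sin(2 * np.pi * f * t)

out /= max(np.abs(out).max(), 1e-9)   # normalize before writing

# Write the result as a 16-bit mono WAV file.
with wave.open("image_to_sound.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((out * 32767).astype(np.int16).tobytes())
```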
The audio file above came from the image I used to create this piece. I tested at least 12 images, and from the results I figured out that I was looking for an image with simple geometry, high contrast with large patches of dark space, and one that produced (relatively) harmonically rich audio. Without those guidelines in mind, much of the audio came out as a single pitch, static noise, or inharmonious gobbledygook. With too high-quality a photo and too many small angles and lines, the audio becomes almost entirely noise without any detectable pitch. With too many changes in shape and too much gradual variation, the audio becomes an indistinct wash with no clear pitch structure.
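I can't reproduce the patch's exact outlining effect here, but a rough stand-in using Pillow's built-in edge filter captures the spirit of the preprocessing (the file names and threshold value are my own assumptions):

```python
from PIL import Image, ImageFilter, ImageOps

# Approximate the black-and-white outlining step: grayscale, edge-detect,
# then boost contrast so only strong outlines survive as bright pixels.
img = Image.open("icicle.png").convert("L")
edges = img.filter(ImageFilter.FIND_EDGES)
outlined = ImageOps.autocontrast(edges)

# Threshold: everything below the cutoff becomes pure black, preserving the
# large dark patches that produced cleaner, more harmonic audio.
bw = outlined.point(lambda p: 255 if p > 64 else 0)
bw.save("icicle_outline.png")
```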
After I had found which image and audio I would use for my piece, I converted the output audio file into MIDI data with Ableton Live's "Audio to MIDI" converter, a tool that maps MIDI notes to their respective frequencies and scans an audio file for transients, pitch, and timing. Ableton Live offers three "Audio to MIDI" conversion techniques. The first, "Melody to MIDI", looks for a single melodic line in an audio file by detecting the most prominent frequencies at a given time and triggers a MIDI note based on the formation of the note's transient. The second, "Harmony to MIDI", looks at any frequency peak that passes a certain threshold and counts that frequency/MIDI note as an active note. The final technique, "Drums to MIDI", is only useful for its namesake: it looks for transients in the audio file and, working from a very limited number of frequency bins (I've only ever gotten 4 separate MIDI notes out of a conversion), converts the audio into notes with higher precision and a higher likelihood that the same tones will be paired to the same MIDI note.
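Ableton's actual detection algorithms are proprietary, but a naive sketch of the "Harmony to MIDI" idea (any spectral peak over a threshold becomes an active note) could look like this; the FFT size, hop, and threshold are arbitrary stand-ins, not Ableton's values:

```python
import numpy as np

SR = 44100
FFT_SIZE = 4096
HOP = 1024
THRESHOLD = 0.1          # relative magnitude a peak must exceed (assumed)

def freq_to_midi(f):
    # Standard equal-temperament mapping: A4 = 440 Hz = MIDI note 69.
    return int(round(69 + 12 * np.log2(f / 440.0)))

def harmony_to_midi(signal):
    """Return a list of sets: the MIDI notes active in each analysis frame."""
    window = np.hanning(FFT_SIZE)
    freqs = np.fft.rfftfreq(FFT_SIZE, 1 / SR)
    frames = []
    for start in range(0, len(signal) - FFT_SIZE, HOP):
        spectrum = np.abs(np.fft.rfft(signal[start:start + FFT_SIZE] * window))
        spectrum /= spectrum.max() or 1.0
        active = set()
        # Any local peak above the threshold counts as a sounding note.
        for i in range(1, len(spectrum) - 1):
            if (spectrum[i] > THRESHOLD
                    and spectrum[i] > spectrum[i - 1]
                    and spectrum[i] > spectrum[i + 1]
                    and 20 < freqs[i] < 8000):
                active.add(freq_to_midi(freqs[i]))
        frames.append(active)
    return frames
```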
After testing out the three conversion styles, I settled on running the previously mentioned image (here) through the Max patch (here) and then into Ableton Live's "Harmony to MIDI" conversion. The audio produced after feeding the output MIDI data into GarageBand's stock acoustic piano plugin was both inspiring and comical. The audio made by the Max patch sounded similar to the melody made up by the piano track. The software's loose restrictions on transient detection and its specificity when determining frequency-to-MIDI-note conversion made the outcome a fast-paced, semi-coherent display of how subjective and nuanced programming for the acoustic medium can be.
The final step was to take this MIDI data into a digital audio workstation and compose a piece. I first went to Avid's Pro Tools because it was the DAW I had been working with most heavily at the time, but I found myself almost exclusively making music that reminded me of my own previous works. Making more of "my own-sounding music" didn't seem thematically relevant, and I didn't want the outcome of my project to sound inappropriate in its given context, so I switched over to GarageBand for its simplicity and its "on the spot" music-making layout. I started making more cohesive, less half-thought-through music, but throughout the creation process I realized a few key issues I would have to work through with my given MIDI data: key signature, note structure, and note timing.
If I was going to make a piece as beautiful and unassuming as the textures I find in ice, snow, and water, I would need to take some notes out of the MIDI data. I searched for the 5 least-used pitches in the data to delete (altogether, I think 20-30 notes were deleted in the process), and in the end the key of the song came out as B-flat major. I had to quantize to 32nd notes and split the MIDI file into sections to restructure the piece at all, but those changes were somewhat unavoidable. With a two-voiced arpeggiator, a bass drone in B-flat major, the Max MSP patch version of the audio, and multiple self-recorded samples, I created a piece that I was, and still am, proud of.
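The cleanup I did by hand amounts to something like the sketch below, assuming notes are stored as simple (pitch, start, length) tuples; the tick resolution is a stand-in, not what GarageBand actually uses internally:

```python
from collections import Counter

TICKS_PER_BEAT = 480            # assumed tick resolution
GRID = TICKS_PER_BEAT // 8      # a 32nd note at this resolution

def clean_midi(notes, drop_count=5):
    """notes: list of (pitch, start_tick, length_tick) tuples.
    Deletes every occurrence of the `drop_count` least-used pitches,
    then snaps the surviving note starts to a 32nd-note grid."""
    counts = Counter(pitch for pitch, _, _ in notes)
    rare = {p for p, _ in counts.most_common()[-drop_count:]}
    kept = [n for n in notes if n[0] not in rare]
    return [(pitch, round(start / GRID) * GRID, length)
            for pitch, start, length in kept]
```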
"A Reflection on Ice"
written and recorded by Ethan Fagre