Stylometry and Music
One of the chief interests of the Music Cognition and Computation Lab (MCCL) at LSU is using digitized musical scores for various kinds of analysis. Once digitally encoded, a score can be searched for patterns just like a text document. These patterns can then be used to shed light on musicological questions ranging from authorship in the music of Josquin, to schemata in bebop improvisations, to relationships between the structural features of melodies and their cultural significance. Not only does MCCL analyze digitized scores, but we also make an active effort to encode music outside computational musicology's usual repertoire for scholarly use. Recently the lab has encoded Dvořák's complete musical themes, Shostakovich's 24 Piano Preludes, Op. 34, and a rich data set of improvisations by Clifford Brown, Charlie Parker, and Dizzy Gillespie. Below you can find links to posters and papers detailing our work in music and musical style.
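To give a flavor of what "searching a score like a text document" can mean, here is a minimal sketch (not the lab's actual pipeline) in which melodies are encoded as lists of MIDI pitch numbers, converted to interval sequences, and then scanned for a query pattern. The melodies and the helper names are illustrative assumptions; working in intervals rather than raw pitches makes the search transposition-invariant.

```python
# Minimal sketch of pattern search over symbolically encoded melodies.
# Melodies are lists of MIDI pitch numbers (hypothetical data); converting
# to interval sequences makes matching independent of transposition.

def intervals(pitches):
    """Successive melodic intervals in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def contains_pattern(melody, pattern):
    """True if the melody contains the query's interval pattern anywhere."""
    m, p = intervals(melody), intervals(pattern)
    return any(m[i:i + len(p)] == p for i in range(len(m) - len(p) + 1))

# Opening figure of "Twinkle, Twinkle" in C, and the same figure in G:
theme_c = [60, 60, 67, 67, 69, 69, 67]
theme_g = [67, 67, 74, 74, 76, 76, 74]
query = [60, 67, 67, 69]  # a fragment of the figure

print(contains_pattern(theme_c, query))  # True
print(contains_pattern(theme_g, query))  # True: same contour, different key
```

Real corpus work would of course use richer encodings (rhythm, meter, voice), but the same idea of reducing music to searchable symbol strings underlies much of this kind of analysis.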
The human ear is extremely sensitive to micro-timings in musical performance, and variations in these timings shape judgments about a performance's quality. If a performance is too strictly metrical, a listener might find it robotic, but too much randomness in playing can also make a performance sound un-human. One project the lab is interested in is determining the degree to which the onsets of notes in a musical performance affect a listener's perception of who (or what!) is creating the musical experience. Preliminary research on this subject can be seen below.
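One simple way to quantify the micro-timings described above, offered here as a toy sketch rather than the lab's method, is to measure how far each performed onset falls from a strict metronomic grid. The onset times and tempo below are hypothetical.

```python
# Toy sketch: signed deviations of performed note onsets from a
# metronomic beat grid. All data here are hypothetical.
import statistics

def timing_deviations(onsets, tempo_bpm):
    """Signed deviation (seconds) of each onset from the nearest beat."""
    beat = 60.0 / tempo_bpm
    return [t - round(t / beat) * beat for t in onsets]

# A hypothetical performance at 120 bpm (beats every 0.5 s):
performed = [0.01, 0.52, 0.98, 1.51, 2.03]
devs = timing_deviations(performed, 120)
print([round(d, 3) for d in devs])  # [0.01, 0.02, -0.02, 0.01, 0.03]

# The spread of these deviations is one crude index of "humanness":
jitter = statistics.stdev(devs)
```

A perfectly quantized performance would give all-zero deviations (the "robotic" end of the spectrum), while large, unstructured deviations would push toward the random, un-human end.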
The ability to replicate findings is central to the scientific process. In collaboration with music cognition labs in London, Bremen, and Leipzig, the Music Cognition and Computation Lab is currently working to replicate important and often-cited music science research. If you would like to help out the music cognition community and have a pair of headphones and 30 minutes to spare, please help us (and the rest of the music cognition community) out!