Music is widely regarded as an expression of emotion, and the link between music and emotion is well documented through decades of psychological and psychophysical research with consistent findings. Yet even with the common understanding that music can both subconsciously affect emotion and consciously represent a mood, emotion is rarely used in the software interfaces of programs created to store, sort, and play digitized music. More commonly, these programs sort by metadata such as artist, genre, and album. A few exceptions exist, such as Pandora.com and MoodLogic, which offer listening options based on manually categorized music. In an attempt to assist the user with more meaningful playlist formation, Thomson Inc. developed a proprietary algorithm, Digital Signal Processing and Advanced Acoustical Analysis technology, that clusters music based on mathematical relationships derived from analysis of the actual audio content. The outcome can be unexpected: a single cluster may contain songs by artists ranging from Wayne Newton to Wojciech Kilar whose tones and qualities nevertheless sound similar. But do such clusters possess emotional cohesiveness? This study tests whether music samples drawn from different algorithm-created clusters elicit statistically different affective responses among human participants.