Building Interactive-Drama applications, in which users act out their roles and build a story in cooperation with virtual characters, poses several challenges. One of these challenges is to build Autonomous Affective Characters that are able to establish an affective interaction with the users. Several approaches have been taken to achieve this goal, most of them using gestures and facial expressions that rely solely on the Visual Channel. Other approaches use the Auditory Channel, either as the characters' speech or as background music. However, most of these approaches use pre-defined samples, which contrasts with the emergent approach taken in the Visual Channel. With I-Sounds we want to increase the Affective Bandwidth of an Interactive Drama system called I-Shadows by implementing a fully emergent system that generates affective sounds based on music theory and on the emotional state of the characters. The project's main goal is to build a software system able to translate emotions into music generated in real time. We want to have a "virtual composer" able to deliver emotionally contextualized music.
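The core idea of translating an emotional state into musical parameters could be sketched as below. This is a minimal illustrative sketch, not the system's actual design: the valence/arousal representation of emotion, the function name, and the specific parameter ranges are all assumptions made for the example, drawing on common music-theory associations (major mode for positive valence, faster tempo and louder dynamics for higher arousal).

```python
# Hypothetical sketch: map a character's emotional state, expressed as
# valence (pleasantness) and arousal (activation), both in [-1.0, 1.0],
# to basic musical parameters. All names and ranges are illustrative
# assumptions, not the actual I-Sounds mapping.

def emotion_to_music(valence: float, arousal: float) -> dict:
    """Return a small set of musical parameters for the given emotion."""
    # Major mode is conventionally associated with positive valence,
    # minor mode with negative valence.
    mode = "major" if valence >= 0 else "minor"
    # Scale arousal from [-1, 1] into a tempo range of 60-180 BPM.
    tempo_bpm = int(60 + (arousal + 1.0) / 2.0 * 120)
    # Louder dynamics (higher MIDI velocity, 40-120) for more aroused states.
    velocity = int(40 + (arousal + 1.0) / 2.0 * 80)
    return {"mode": mode, "tempo_bpm": tempo_bpm, "velocity": velocity}

# Example: a happy, excited character yields a fast, loud, major-mode setting.
print(emotion_to_music(0.8, 0.9))
# → {'mode': 'major', 'tempo_bpm': 174, 'velocity': 116}
```

A real-time composer would re-evaluate such a mapping as the characters' emotional state changes and feed the resulting parameters into a generative music engine.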