This paper discusses the implementation of a complex event generation model with a simple feedback loop, presents its sound synthesis results, and investigates the overall system behaviour. The system is based on the Cosmos model, a self-similar structure that distributes events across different time scales with interdependencies between them. The user intervenes in the system in real time by providing a live sound source as input and by controlling, through the user interface, the parameters of the macro, meso and micro time scales. Owing to the complex dynamic behaviour and modulation scheme, the system can create a timbre space of unique textures.
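To give a rough intuition for a self-similar, multi-time-scale event generator with a simple feedback loop, the following is a minimal Python sketch. It is not the Cosmos model itself; all names (Event, generate_events, feedback, the rate and subdivision values) are hypothetical and chosen only for illustration: each macro event spawns meso events, each meso event spawns micro events, and the micro-event density of one pass nudges the rates of the next pass.

```python
# Minimal illustrative sketch (not the authors' implementation) of a
# three-level, self-similar event generator with a toy feedback loop.
import random
from dataclasses import dataclass

@dataclass
class Event:
    onset: float      # start time in seconds
    duration: float   # length in seconds
    level: str        # "macro", "meso" or "micro"

def generate_events(onset, duration, level, rates, subdivisions):
    """Recursively distribute events on the macro, meso and micro time scales."""
    events = [Event(onset, duration, level)]
    if level == "micro":
        return events                              # leaf level: no further subdivision
    child = {"macro": "meso", "meso": "micro"}[level]
    n = max(1, int(duration * rates[child]))       # how many child events fit in this span
    for _ in range(n):
        child_onset = onset + random.uniform(0.0, duration)
        child_duration = duration / subdivisions[child]
        events += generate_events(child_onset, child_duration, child, rates, subdivisions)
    return events

def feedback(events, rates, gain=0.1):
    """Toy feedback loop: micro-event density modulates the rates of the next pass."""
    density = sum(1 for e in events if e.level == "micro") / 10.0
    return {k: min(50.0, max(0.5, v + gain * (density - v))) for k, v in rates.items()}

if __name__ == "__main__":
    rates = {"meso": 2.0, "micro": 20.0}           # child events per second, per level
    subdivisions = {"meso": 4, "micro": 8}         # duration ratio between parent and child
    for generation in range(3):
        events = generate_events(0.0, 4.0, "macro", rates, subdivisions)
        rates = feedback(events, rates)
        print(f"pass {generation}: {len(events)} events, rates={rates}")
```

In an actual real-time system the micro-level events would drive the synthesis engine and the live sound input and user-interface parameters would modulate the per-level rates; the sketch only shows the hierarchical scheduling and feedback idea in schematic form.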