This work presents an algorithm that achieves novel spatialization effects on multitrack audio signals. It relies on a cross-adaptive framework that dynamically assigns azimuth positions to each track's time-frequency bins, reducing masking between source signals by separating them across space. The outputs of this system are compared to traditional panning strategies in a subjective evaluation; the scores indicate that it performs well as a novel effect suited to live sound applications, creative sound design, and mixing.
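A minimal sketch of the idea follows, assuming a simple per-bin magnitude-ranking heuristic: tracks that dominate the same time-frequency bin are spread to different azimuths via an equal-power pan law. The function name, ranking rule, and pan law are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_pan(tracks, fs, n_fft=2048, max_azimuth=1.0):
    """Cross-adaptively pan each time-frequency bin of each track.

    tracks      : list of equal-length mono signals (1-D arrays)
    fs          : sample rate in Hz
    max_azimuth : spread in [-1 (full left), +1 (full right)]
    Returns a (2, n_samples) stereo array.
    """
    # STFT every track with identical parameters so bins align.
    specs = [stft(x, fs=fs, nperseg=n_fft)[2] for x in tracks]
    S = np.stack(specs)                      # (n_tracks, n_bins, n_frames)
    n_tracks = S.shape[0]

    # Rank tracks per bin by magnitude: sources sharing energy in a bin
    # are the ones prone to masking, so they get pushed apart in space.
    # This cross-adaptive step makes each track's pan depend on all tracks.
    order = np.argsort(np.argsort(np.abs(S), axis=0), axis=0)

    # Map rank 0..n-1 to an azimuth in [-max_azimuth, +max_azimuth].
    if n_tracks > 1:
        azimuth = (2.0 * order / (n_tracks - 1) - 1.0) * max_azimuth
    else:
        azimuth = np.zeros(S.shape, dtype=float)

    # Equal-power (sine/cosine) pan law applied per bin.
    theta = (azimuth + 1.0) * np.pi / 4.0    # [-1, 1] -> [0, pi/2]
    left = np.sum(S * np.cos(theta), axis=0)
    right = np.sum(S * np.sin(theta), axis=0)

    # Invert both channels back to the time domain.
    _, yl = istft(left, fs=fs, nperseg=n_fft)
    _, yr = istft(right, fs=fs, nperseg=n_fft)
    return np.stack([yl, yr])
```

In practice the per-bin azimuths would need smoothing over time and frequency to avoid audible artifacts from abrupt pan changes; the sketch omits this for brevity.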