The present paper details a set of subjective measurements carried out to investigate the perceptual fusion and segregation of two simultaneously presented ERB-bandlimited noise samples as a function of their frequency separation and their difference in direction of arrival. The research was motivated by the desire to gain insight into virtual source technology in multichannel listening and virtual acoustics applications. The segregation threshold was measured in three spatial configurations, namely with a 0°, 22.5°, or 45° azimuth separation between the two noise signals. The tests were arranged so that the subjects adjusted the frequency gap between the two noise bands until, in their opinion, they were at the threshold of hearing two separate sounds. The results indicate that the frequency separation threshold increases above approximately 1.5 kHz, whereas the effect of the azimuth separation between the ERB bands was less significant. It is therefore assumed that the results can be accounted for by the loss of accuracy in the neural analysis of the fine structure of the complex stimulus waveform. The results also diverge considerably between subjects, which is believed to indicate that sound fusion is an individual concept that partly relies on higher-level processing.
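As background to the ERB-bandlimited stimuli described above, the equivalent rectangular bandwidth of the auditory filter is commonly approximated by the Glasberg and Moore (1990) formula, ERB(f) = 24.7 (4.37 f / 1000 + 1) with f in Hz. The following minimal sketch (the function name is ours, not from the paper) evaluates this bandwidth near the 1.5 kHz region highlighted in the results:

```python
def erb_bandwidth(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at f_hz, after Glasberg & Moore (1990):
    ERB = 24.7 * (4.37 * f/1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# One ERB around the ~1.5 kHz region where the segregation
# threshold was found to increase:
print(round(erb_bandwidth(1500.0), 1))  # → 186.6
```

This illustrates why frequency separations in such experiments are naturally expressed on the ERB scale rather than in raw hertz: the bandwidth grows roughly linearly with centre frequency.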