If you need any control over the relative timing of the two sounds (for example, so that they start simultaneously), using multiple processes is probably not a good solution. Instead, mix the signals within your application and write out a single audio stream.
Since you are using audiolab, you already have the data in numpy arrays. This gives you all the flexibility you need to mix audio:
import numpy as np
from scikits import audiolab

frames1, fs1, enc1 = audiolab.wavread('audio1.wav')
frames2, fs2, enc2 = audiolab.wavread('audio2.wav')
mixed = frames1 + frames2   # assumes equal length and equal sample rate
audiolab.play(mixed, fs=fs1)
If this causes the signal to clip (you hear clicks or pops), pre-scale the data before mixing so the sum stays within range:
mixed = frames1 / 2 + frames2 / 2
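To see why halving helps, here is a small numpy-only sketch (the sample values are made up for illustration) showing that the straight sum can leave the valid [-1.0, 1.0] range while the pre-scaled mix stays inside it:

```python
import numpy as np

# Two made-up float signals, each already within [-1.0, 1.0]
frames1 = np.array([0.8, -0.6, 0.9])
frames2 = np.array([0.5, -0.7, 0.4])

raw = frames1 + frames2           # peaks above 1.0 -> would clip
safe = frames1 / 2 + frames2 / 2  # peaks at half that -> stays in range

print(np.max(np.abs(raw)) > 1.0)    # True: this mix would clip
print(np.max(np.abs(safe)) <= 1.0)  # True: this mix is safe
```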
...and if the sounds do not have equal length, it takes a little more work:
# Output buffer sized for the longer signal; the shorter one is zero-padded
mixed = np.zeros(max(len(frames1), len(frames2)), dtype=frames1.dtype)
mixed[:len(frames1)] += frames1 / 2
mixed[:len(frames2)] += frames2 / 2
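Putting the pieces together, here is a self-contained sketch of the unequal-length case. It uses synthetic sine tones in place of wavread (the file names above are placeholders), and the sample rate and frequencies are assumptions for the example:

```python
import numpy as np

fs = 44100  # assumed sample rate for this example

# Synthetic stand-ins for frames1 / frames2: a 1 s tone and a 0.5 s tone
t1 = np.arange(fs) / fs
t2 = np.arange(fs // 2) / fs
frames1 = np.sin(2 * np.pi * 440.0 * t1)  # 440 Hz, 1 second
frames2 = np.sin(2 * np.pi * 660.0 * t2)  # 660 Hz, 0.5 seconds

# Zero-filled buffer as long as the longer signal, then add each half-scaled
mixed = np.zeros(max(len(frames1), len(frames2)), dtype=frames1.dtype)
mixed[:len(frames1)] += frames1 / 2
mixed[:len(frames2)] += frames2 / 2

print(len(mixed) == len(frames1))    # True: mix spans the longer signal
print(np.max(np.abs(mixed)) <= 1.0)  # True: half-scaling prevents clipping
```

The resulting `mixed` array can be passed to `audiolab.play` or written back out with `audiolab.wavwrite` just like the equal-length mix above.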