Question

This question is about the Windows Audio Session API (WASAPI). After reading Microsoft's sample code, I see that after obtaining an IAudioClient, its GetService method is called to obtain an IAudioRenderClient, and then the GetBuffer and ReleaseBuffer methods of the IAudioRenderClient are called to supply buffers to be played.

My question:

  1. Why is IAudioRenderClient necessary? Why not put the GetBuffer and ReleaseBuffer methods directly on IAudioClient? In other words, in what case would we need multiple IAudioRenderClient objects for a single IAudioClient?

  2. Since we always need an IAudioClient to play audio, and we call GetService on that IAudioClient to obtain an IAudioRenderClient, we never get a chance to call CoCreateInstance() to create an IAudioRenderClient, do we? Then why does IAudioRenderClient need an IID?

==============================================================================

After some more reading, I think an IAudioClient can act as either a render client or a capture client. But another question arises: why do we call GetService to obtain the IAudioRenderClient rather than QueryInterface?


Solution

  1. Because IAudioCaptureClient also has GetBuffer and ReleaseBuffer methods. If GetBuffer and ReleaseBuffer lived directly on IAudioClient, how would you know whether you were dealing with capture buffers or render buffers? This design also keeps IAudioClient from getting cluttered with too many methods for all the possible services. (This is called the "interface segregation principle", and it is considered good design.)
  2. You don't use CoCreateInstance because your system could have many sound cards installed: if you passed in only the IID of IAudioRenderClient, how would the system know which device and stream you wanted to render to? A render client must be bound to a specific, initialized audio stream, which is exactly what GetService gives you. The IID is still needed, though, because GetService takes it as its first parameter to say which service you are asking for.
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow