Robin E Davies

According to the SDK docs, I am supposed to get the benefits of the Multimedia Class Scheduler Service (MMCSS) when I open an audio device in exclusive mode with a buffer size of less than 10 ms. But the results I'm getting don't seem to bear that out.

In particular, I get audio stuttering when screen rendering takes place. With no heavy graphics activity, I can render audio stably with buffers as small as 2 ms. But to avoid stuttering during heavy graphics activity, I have to back the buffer size out to 15 or 20 ms! Which makes me wonder whether everything is really working as it should. It's hard to say whether MMCSS is kicking in, because none of the MMCSS attributes are readable unless your code created the task handle. There are no MMCSS APIs that report the active MMCSS settings for the current thread! So I kind of have to take it on faith that WASAPI is doing everything it should.

Questions to which answers would be helpful:

- Is the priority boost totally automatic? I'm assuming that once I meet the preconditions, everything happens automatically. Or do I have to explicitly place my service thread into the "Pro Audio" category? It all seems a bit curious given the nature of the MMCSS APIs, which seem to be built to serially execute service threads; but the only scheduling hook I get from IAudioClient::Initialize is an event, rather than a handle to an MMCSS task.

- Does the implementation support FireWire devices? (The device in question is a Focusrite Saffire, a FireWire pro-audio device.) If not, what kinds of devices do benefit from MMCSS scheduling improvements?

- Which thread gets the priority boost? The thread that calls IAudioClient::Start, or the one that calls Initialize? I've tried it both ways (calling Start on my service thread, and calling both Initialize and Start on my service thread), but I'm hoping it's applied to the thread that calls Start, not the Initialize thread.

- Is use of the following API worth exploring: DwmEnableMMCSS(FALSE)? (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=353783&SiteID=1) If so, this really should be mentioned PROMINENTLY in the WASAPI documentation.

- Can I/should I set thread and process priority (to realtime, for both) independently of MMCSS?
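One way to probe the first question would be to place the service thread into the "Pro Audio" category explicitly and see whether behavior changes. A minimal sketch of the explicit route via avrt.h, assuming Vista or later and linking against avrt.lib (whether WASAPI already does this on your behalf is exactly the open question above):

```cpp
// Sketch: explicitly joining the MMCSS "Pro Audio" task from the audio
// service thread. Assumes Windows Vista+ and linking with avrt.lib.
#include <windows.h>
#include <avrt.h>
#include <cstdio>

void AudioServiceThread()
{
    DWORD taskIndex = 0;
    // The task name must match one of the tasks listed under
    // HKLM\...\Multimedia\SystemProfile\Tasks.
    HANDLE mmcssHandle = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);
    if (mmcssHandle == NULL)
    {
        printf("AvSetMmThreadCharacteristics failed: %lu\n", GetLastError());
        return;
    }

    // ... the event-driven render loop runs here at boosted priority ...

    AvRevertMmThreadCharacteristics(mmcssHandle);
}
```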



Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

Robin E. Davies

And the answer is......

If I call DwmEnableMMCSS(FALSE), I get perfect, rock-solid, stable audio with 2 ms buffers + 0.5 ms latency, compared to semi-stable audio with 4 ms buffers (+ 5 ms output latency) under ASIO.

Hey !!! That is very very exciting!!!!

I'm thinking this really should be mentioned in the documentation!

Next step: capture devices. And after that: measure latency, to see what I really get. But this is promising.
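For anyone following along, the call itself is a one-liner from dwmapi.h (link against dwmapi.lib); this sketch just wraps it with error reporting. Whether disabling the Desktop Window Manager's MMCSS participation has side effects on your app's compositing is something to evaluate for yourself:

```cpp
// Sketch: turning off the Desktop Window Manager's MMCSS participation,
// which is what fixed the glitching described above. Assumes Vista+,
// dwmapi.h, and linking with dwmapi.lib.
#include <windows.h>
#include <dwmapi.h>
#include <cstdio>

bool DisableDwmMmcss()
{
    HRESULT hr = DwmEnableMMCSS(FALSE);
    if (FAILED(hr))
    {
        printf("DwmEnableMMCSS(FALSE) failed: 0x%08lx\n", (unsigned long)hr);
        return false;
    }
    return true;
}
```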





Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

plgDavid

And any _other_ app can call this to turn it on/off at any time?

How long before firewalls, viruses, and antivirus programs start calling the MMCSS stuff?
So much for real-time audio.
Sigh.




Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

Maurits

The way MMCSS works is it gives you really-really-high-priority for a very short period of time, and then really-really-low-priority for a longer period of time. So you'll get called regularly, and you'll have a guaranteed time slice in which to do your stuff. This maps really well to multimedia apps, which are of the "do something fast; wait a very strictly determined length of time; do something fast; wait a very strictly determined length of time; ..." model.

Other apps, viruses, and firewalls are usually of the "I have a large amount of stuff to get done and I want as much of the CPU as possible for a long sustained period" model. MMCSS would not be attractive to them.





Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

plgDavid

Thanks for this.

The problem with us "outsiders" is that we don't know the scheduling details of the Vista kernel with respect to MMCSS, and there's a lot of trial and error going on (make that something closer to reverse engineering). I was very thrilled with WASAPI at first, but after hitting so many snags with the currently limited number of drivers and the limited documentation/examples (especially for low-latency work), I left it (the PortAudio WASAPI implementation) as is, and I guess I'll await SP1. I'm really saddened to say this, but it's back to ASIO for our users in the meantime.

What we would like to see from you guys is sample code, in the next rev of the Vista SDK, with a real example of low-latency duplex processing using the event method, in both exclusive and shared modes. (Sleeping in a main loop really doesn't cut it, sadly.)
That way users AND driver vendors would have a test application to start with.
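Until such a sample exists, a rough sketch of the event-driven exclusive-mode render side might look like the following. This is an outline of the call sequence only, not tested code: format negotiation, device activation, the capture half of the duplex path, and all error handling are omitted.

```cpp
// Sketch of the WASAPI event-driven exclusive-mode call sequence.
// Assumes an already-activated IAudioClient* and a negotiated
// WAVEFORMATEX*; COM init, enumeration, and error checks are omitted.
#include <windows.h>
#include <audioclient.h>

void RunRenderLoop(IAudioClient* client, WAVEFORMATEX* format,
                   REFERENCE_TIME period /* e.g. 20000 = 2 ms */)
{
    // In exclusive event-driven mode, the buffer duration and the
    // periodicity passed to Initialize must be equal.
    client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                       AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                       period, period, format, NULL);

    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);
    client->SetEventHandle(event);   // must happen before Start

    UINT32 bufferFrames = 0;
    client->GetBufferSize(&bufferFrames);

    IAudioRenderClient* render = NULL;
    client->GetService(__uuidof(IAudioRenderClient), (void**)&render);

    client->Start();
    for (;;)   // real code needs a shutdown condition
    {
        WaitForSingleObject(event, INFINITE);  // fires once per period
        BYTE* data = NULL;
        render->GetBuffer(bufferFrames, &data);
        // ... synthesize/copy exactly bufferFrames frames into data ...
        render->ReleaseBuffer(bufferFrames, 0);
    }
}
```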

Your post also suggests that a real-time audio app will always require a "strictly determined length of time". In the case of a software synthesizer, and even a normal DAW, this doesn't apply: the slice of CPU time this kind of app requires is always moving, depending on the number of voices playing (or the tracks/effects playing at a specific point in a "song", in the case of a DAW).
No software synth that I know of ever busy-waits on the worst-case scenario of the maximum number of voices in order to keep CPU requirements constant.

Please advise.




Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

Robin E. Davies

What we would like to see from you guys is sample code, in the next rev of the Vista SDK, with a real example of low-latency duplex processing using the event method, in both exclusive and shared modes. (Sleeping in a main loop really doesn't cut it, sadly.)
That way users AND driver vendors would have a test application to start with.

I'll second that. What's *with* the disabling of the exclusive-mode code in the sample app, anyway?

I got rendering working nicely on one of three audio devices, but when I tried bringing up audio capture, I ran into the same problem with the capture client on my one working device that I had with the render client on the other two adapters (the capture event gets set once, and then never again). I've also tried switching over to not using AUDCLNT_STREAMFLAGS_EVENTCALLBACK, but the results are pretty strange: a huge gaping 24 ms buffer (when I asked for 2 ms latency), and GetCurrentPadding always returns zero, so I don't even have buffer pointers to chase (hateful as that was in DirectSound).
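For context, the padding-driven pull loop I expected to fall back on looks roughly like this (PumpOnce is a hypothetical helper name, and this is a sketch with error handling omitted). It depends entirely on GetCurrentPadding advancing, which is why a driver that always reports zero leaves nothing to chase:

```cpp
// Sketch of one pass of the timer-driven (non-event) fill loop: ask how
// many frames are still queued (the padding) and write into the rest.
// Assumes an initialized IAudioClient* (no EVENTCALLBACK flag) and an
// IAudioRenderClient* obtained via GetService.
#include <windows.h>
#include <audioclient.h>

void PumpOnce(IAudioClient* client, IAudioRenderClient* render,
              UINT32 bufferFrames)
{
    UINT32 padding = 0;
    client->GetCurrentPadding(&padding);   // frames queued but unplayed
    UINT32 writable = bufferFrames - padding;
    if (writable > 0)
    {
        BYTE* data = NULL;
        render->GetBuffer(writable, &data);
        // ... fill 'writable' frames into data ...
        render->ReleaseBuffer(writable, 0);
    }
}
```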

Please, please, please: working real-time pro-audio sample code. (Or a frank admission that pro audio is not quite ready for prime time in this initial release of Vista, and that we should stick to ASIO, would work too.)

For what it's worth, the MMCSS documentation seems to suggest that Pro Audio category threads do not have an imposed CPU-use limit. I did notice that in passing.





Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

Robin E. Davies

David,

Just out of curiosity, I noticed that you have a reference to AvRtCreateThreadOrderingGroup in your code. When I tried it, it failed, apparently because the "Thread Ordering Server" service was not started by default on my machine. Did you run into this problem? How did you get around the UAC requirements to start the service?
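For anyone else who hits this: one programmatic route is to start the service through the Service Control Manager, which requires an elevated process. The sketch below assumes the service's short name is "THREADORDER" (an assumption to verify against services.msc on your machine):

```cpp
// Sketch: starting the "Thread Ordering Server" service (short name
// assumed to be "THREADORDER"). Must run elevated; real code should
// check every return value and report GetLastError on failure.
#include <windows.h>

bool StartThreadOrderingServer()
{
    SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
    if (!scm) return false;

    SC_HANDLE svc = OpenServiceW(scm, L"THREADORDER", SERVICE_START);
    bool ok = false;
    if (svc)
    {
        ok = StartServiceW(svc, 0, NULL) != FALSE;
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return ok;
}
```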





Re: Vista Pro-Audio Application Development MMCSS and audio in Exclusive mode.

plgDavid

The AvRtCreateThreadOrderingGroup stuff was just me, back on the beta SDKs, trying to figure out how to get low latency; it's there in my PortAudio branch as an "artifact" of previous experimentation.

Reference:
http://www.freelists.org/archives/wdmaudiodev/03-2006/msg00080.html
(wow, that's more than a year old!)

To this day I still don't know how it relates to MMCSS, WASAPI, and the normal SetThreadPriority rings... someone at MS really should make a nice PDF/PowerPoint explanation for us. :)