ReneC

Hi all,

I'm on the tail end of the initial implementation of sounds created with the XACT engine, and I figured this forum deserved a little postmortem (even though the project isn't quite dead yet) covering the XACT tool, the process, and the implementation.

A quick note about who I am: my name is Rene Coronado and I work for Dallas Audio Post Group. I'm an audio engineer and sound designer with a history in television and film, voice production, and long-form work like corporate training. I'm not a programmer, level designer, writer, or producer. I am a gamer as well as a fan of film, good TV, and sports.


My company was charged with redoing the sound for a first-person-shooter MMO-type game that has already been released in Asia. The game programmers are all based in Asia, and the distribution is US-based. We re-recorded the voices in English, redid all of the weapon and vehicle sound design, and are in the process of mixing.

Initially the game was programmed using the Miles Sound Engine. The implementation was a little rudimentary: while the basics were covered, the depth of the audio programming wasn't up to par yet. When we took the project's sound over, the developer agreed to reprogram the game to call XACT cues and use the XACT engine instead of the Miles engine. In its current state the game actually uses both engines, but the Miles engine will be phased out entirely once the XACT mix is completed. During the course of the project I was charged with learning the XACT authoring tool, sound designing weapons and vehicles inside the tool, and packaging it all up in a state the programmers could map into the game.

Here are the things that I think went well:

-the tool does randomization really well.

I was able to use relatively few samples to create good, diverse sounds for everything I was charged with creating. I made heavy use of pitch randomization alongside sample randomization to liven up all of my weapons and foley, and I can already hear a pretty marked difference from certain games on the market with more straightforward soundtracks. I've noticed that games like Call of Duty 3 tend to take pitch randomization a little too far in spots, so I generally stuck to the concept of pushing it until it's gone a little too far and then backing it off a notch.
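For readers who haven't used the tool, the combination described above boils down to something very simple. This is a conceptual sketch, not the XACT API; the function and parameter names here are made up for illustration:

```python
import random

def pick_weapon_shot(samples, pitch_spread=1.5):
    """Sketch of what an XACT sound with random sample selection plus
    pitch variation effectively does on each trigger: pick one sample
    from the pool, then pick a random pitch offset (in semitones here,
    'pitch_spread' being a hypothetical tuning knob, not an XACT name)."""
    sample = random.choice(samples)                     # sample randomization
    pitch = random.uniform(-pitch_spread, pitch_spread)  # pitch randomization
    return sample, pitch

sample, pitch = pick_weapon_shot(["shot_a.wav", "shot_b.wav", "shot_c.wav"])
print(sample, round(pitch, 2))
```

The point of the "push it too far, then back it off" approach is just that `pitch_spread` is audibly too big before it's right; a handful of samples times a modest spread yields far more perceived variety than either alone.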

-limiting the number of instances and crossfading worked well.

One of the biggest obstacles in creating a mix for an interactive platform is retaining focus when multiple important events are going on. I'll pick on Call of Duty 3 again (PS2 version), though it may be the port, and I'm picking on it because I'm addicted to the game. :) In CoD3, when multiple machine-gunners are firing simultaneously, the mix just washes out into a sea of rifle fire. There's no punch because there's no room for punch (and because all punch comes from negative sonic space). In my mix I made heavy use of the limit-instances property of the weapon cues, and with very brief fadeouts I was able to make rapid-fire weapons that didn't wash into their own reverb tails. This also controlled my headroom nicely, since there wasn't much additive gain happening.
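The instance-limiting behavior can be sketched as a tiny voice manager. Again this is an illustration of the concept, not XACT's actual implementation; the class and its names are invented:

```python
class CueInstanceLimiter:
    """Conceptual sketch of 'limit instances' with a brief crossfade:
    when the cap is hit, the oldest playing instance is faded out
    instead of letting instances stack up additively."""

    def __init__(self, max_instances=2, fade_ms=30):
        self.max_instances = max_instances
        self.fade_ms = fade_ms
        self.playing = []  # oldest instance first

    def play(self, cue_name):
        if len(self.playing) >= self.max_instances:
            oldest = self.playing.pop(0)
            # in the engine this would be a quick volume ramp to silence,
            # short enough (tens of ms) to be inaudible under the new hit
            print(f"fading out {oldest} over {self.fade_ms} ms")
        self.playing.append(cue_name)
        return cue_name
```

With a cap of two and a 30 ms fadeout, a rapid-fire weapon only ever contributes two overlapping transients plus one dying tail, which is why the headroom stays predictable no matter how fast the trigger events arrive.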

-engine RPCs worked well

I'm still no master of videogame engine design, but I'm happy with what I have working right now. My workflow was generally to design an 'idle' track and master it to 0 dBVU, then create a 'friction' track and do the same. The friction track was bow wash for boats, wind for aircraft, and running dirt for trucks and tanks. Within the XACT engine I'd set the RPC to start my idle track at -8 on my meters and my friction track at -infinity. As my 'speedfactor' variable was increased by the game, my 'idle' track would increase in volume until it maxed out at 0 VU, and my 'friction' track would rise into audibility and get mixed in to taste. Pitch would increase and affect the entire sound instead of each individual track. Of course these weren't linear curves, and some vehicles withstood more pitch manipulation than others: a motorcycle will rev higher than a tank, for instance. My general philosophy of setting my samples at a standardized place in the analog VU world meant that each curve could be applied to a large number of very different vehicles and make the effect and mix sound right across the board.
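An RPC curve is, at bottom, a piecewise-linear map from a game variable to a volume in dB. A rough sketch of the setup described above, using the -8 dB idle start from the post (the -96 dB value stands in for -infinity, and the curve points are otherwise illustrative, not pulled from the actual project):

```python
def rpc_volume(speedfactor, points):
    """Evaluate a piecewise-linear RPC curve. 'points' is a sorted list
    of (variable_value, dB) knuckles, like the ones drawn by hand in
    XACT's RPC editor; the variable is normalized to 0..1 here."""
    if speedfactor <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if speedfactor <= x1:
            t = (speedfactor - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# idle starts at -8 dB and rises to full scale as speedfactor climbs;
# friction stays silent until the vehicle is actually moving
IDLE_CURVE = [(0.0, -8.0), (1.0, 0.0)]
FRICTION_CURVE = [(0.0, -96.0), (0.3, -96.0), (1.0, -6.0)]

print(rpc_volume(0.5, IDLE_CURVE))  # -4.0
```

The payoff of mastering every idle and friction track to the same 0 dBVU reference is visible here: one `IDLE_CURVE` and one `FRICTION_CURVE` shape can be reused across boats, trucks, and tanks without re-tuning per vehicle.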

-integration and testing

Once the sounds were mapped into the game by the programmers, integration and testing ran relatively smoothly. I don't have the hooks built into the game that would allow me to connect the XACT tool directly to it, but I've been tweaking parameters, rebuilding, and rebooting the game with immediate results, so that's been fine.

-to sum up-

the tool does the things it says it will do pretty well: good randomization and control of cue instance limits, good RPC control, and straightforward integration once it's been mapped into the game.

-----------------

Now that that's out of the way, here are the things that I either had issues with or would like to see improved:

-documentation

in my opinion this is the single most pressing need the tool has. The best practices section of the C++ programmer's manual spells out which parts should fall into the programmer's domain and which into the sound designer's. Nowhere, however, does it spell out the specific places where the programmer will touch the things the sound designer has done, and this is a glaring defect. I'll list here the things that I think the sound designer has to create and the programmer has to manipulate in some way:

-sound bank names and indexes

-wave bank names and indexes

-cue names and indexes

-relationship of cues to soundbanks, wavebanks, and variables

-stop loops on looping cues

-variables that control interactive cues

-variable ranges

-RPC variables

-RPC ranges

-categories

-DSP functionality for reverbs

that's roughly a dozen different ways the programmer will have to put his hands on the sound designer's work. We need comprehensive documentation functionality that spells all of these things out, please. The best practices part of the manual says that we should get together with the programmer and figure lots of this stuff out beforehand. That'd be very cool, but I think we need a fallback mechanism for when things aren't quite so utopian.

Pretty please. I'd like an Excel spreadsheet, but I'll take a txt file. Really. Please.
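Since the tool doesn't offer this export, here is a hypothetical sketch of the kind of handoff sheet being asked for: a plain tab-separated dump of everything the programmer has to touch. Nothing here is an XACT feature; the function and the cue-table shape are invented for illustration:

```python
def build_handoff_sheet(cues):
    """Build a tab-separated handoff sheet for the programmer.
    'cues' is a hypothetical mapping of
    cue name -> (sound bank, wave bank, cue index, controlling variable);
    a variable of None means the cue isn't driven by one."""
    lines = ["cue\tsoundbank\twavebank\tindex\tvariable"]
    for name, (sbank, wbank, index, var) in sorted(cues.items()):
        lines.append(f"{name}\t{sbank}\t{wbank}\t{index}\t{var or '-'}")
    return "\n".join(lines)

sheet = build_handoff_sheet({
    "gun_fire":   ("weapons.xsb", "weapons.xwb", 3, "distance"),
    "tank_idle":  ("vehicles.xsb", "vehicles.xwb", 0, "speedfactor"),
})
print(sheet)
```

Even this much, generated automatically at build time, would cover most of the bank/cue/variable list above and replace a lot of IM back-and-forth.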

-quicker confirmation of which RPC a sound is attached to

I kind of have to take it on faith that my mouse stopped at the right spot when I let go of the drag-and-drop. If I'm not sure, it's several clicks before I can be sure. Documentation of the sort pleaded for above would help with troubleshooting this kind of thing as well.

-search function

as projects get bigger, a search function becomes more and more necessary. I'd like to be able to search for samples, sounds, and cues if possible.

-RPC grids and snap to grid

sometimes I want things to be a little more exacting than my floating mouse allows me to get. Also, sometimes I want things pretty linear, and sometimes I'd just like a frame of reference for how far I've gone with my line drawings.

-Different RPC default for Orientation Angle variable.

In an ideal world the Orientation Angle RPC would open up with a grid, a line drawn with three knuckles instead of two (the third knuckle being dead center), and a mirroring capability. Then, as I draw a volume rolloff on the left side, the right side would automatically update to mirror the curve. Maybe add a checkbox to turn this functionality off, but I'd default it to on myself.

-folders

I'd really like to see folders inside the soundbank and wavebank lists, and possibly inside the RPC lists as well. No folder structure, no search function, and no documentation makes for a lot of hunting around and time lost.

-reverb ping

It's nice that we've got this fairly wide bank of reverb presets, but it's a huge pain to hear what one sounds like. Currently we have to 1) have a sample created, 2) program that sample into a sound, 3) assign that sound to an RPC, 4) set up a reverb DSP, 5) pick a reverb preset, 6) draw a curve in the RPC that sends to the reverb, and 7) play the sound. I'd love a little reverb click that I could use when setting up a verb initially to see whether I'm going to like it or not. This would also be very useful for tweaking verb parameters.

-obstruction/occlusion

There's no way for the sound designer to tackle this one right now. This would be a good thing to take out of the programmer's hands and move into the studio.

-basic sample editing

I don't terribly mind using Sound Forge to manipulate samples, but a rudimentary sample editor that just allowed me to trim heads and tails and maybe do a fade-in/fade-out would be an incredible time-saver.

-dynamics processing

the single most egregious programmer audio error is using up all of the headroom too quickly. This is further complicated by interactive play, where events can pile up or thin out in unpredictable ways. I know that implementing a limiter would run the risk of creating further loudness problems, but put in the hands of an audio person, maybe it could be used for good instead of evil. Compressors for categories would be cool as well. I like audio, and the most important audio manipulation is dynamics processing.
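To make the headroom argument concrete, here is a toy hard limiter. A real category limiter would use lookahead and gain smoothing to avoid distortion; this only shows why a ceiling protects the mix when interactive events stack additively:

```python
def limit(samples, ceiling=0.9):
    """Clamp float samples (nominally -1..1) to +/- ceiling.
    Two full-scale events landing on the same frame sum past 1.0
    and would clip; the ceiling catches that worst case."""
    out = []
    for s in samples:
        if s > ceiling:
            out.append(ceiling)
        elif s < -ceiling:
            out.append(-ceiling)
        else:
            out.append(s)
    return out

# e.g. two overlapping explosions summing to 2.0 on one frame
print(limit([0.5, 1.0, 2.0, -1.4]))  # [0.5, 0.9, 0.9, -0.9]
```

The "good instead of evil" point is exactly this: the clamp is a safety net for unpredictable pile-ups, not a license to master every asset hot in the first place.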

-output meters

Or any meters. Anywhere. I'd love to see exactly how far I am from digital clipping at any given point. Currently I just run the digital output of the PC back into my DAW and use that for peak metering, but I'd prefer to see it within the tool.
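The math behind the peak meter being asked for is one line. This is the standard dBFS definition, not anything XACT-specific:

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS for a block of float samples in -1..1.
    0 dBFS is digital full scale; anything that would exceed it clips."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return float("-inf")  # digital silence
    return 20.0 * math.log10(peak)

print(round(peak_dbfs([0.25, -0.5, 0.1]), 1))  # -6.0
```

A per-category readout of this number inside the tool would remove the round trip through the DAW entirely.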

--------

-to sum up-

the program lacks basic documentation features and basic organizational features, and is not generally very ergonomic. The documentation issues are probably the biggest barrier to this tool's success, followed collectively by the poor organizational and ergonomic setup. There's no dynamics processing, no metering, no occlusion/obstruction, and no search.

---------

On the tail end of this I was able to start IMing one of the Asian programmers, who both spoke English and was in charge of mapping the sounds into the game. This sped the process up 100x and is clearly a very, very important part of the process. The nature of differing programming philosophies means that lots of little things need to be worked out with the programmers in real time, and IM is a great way to keep a record of what the consensus was, as well as to stay in constant contact without intruding on personal space. I should have pushed to have this contact set up much earlier in the process.




Re: Game Technologies: Audio / XACT XACT postmortem from a sound designer

ReneC

A couple of other feature requests that I neglected:

1) Please make wavebank and soundbank names correspond to what they're called in the session. The default of 'soundbank.xsb' is a potential cause of much consternation and confusion.

2) I'd love a way to build just the soundbanks without rebuilding the wavebanks. I don't know whether there's a way to do this right now, but it's a little painful having to wait for XACT to recompress and rewrite a bunch of files in order to hear a sound programming change in-game.

thx!





Re: Game Technologies: Audio / XACT XACT postmortem from a sound designer

Brian MS

First of all, thank you for your post mortem. I think a lot of people will benefit from it.

For your item 2) above: when building with the command-line tool, xactbld.exe, you can specify the /X:WAVEBANK flag. This will prevent wavebanks from being built.

-Brian






Re: Game Technologies: Audio / XACT XACT postmortem from a sound designer

Mescalamba

Hi, you've worked on WarRock, haven't you? XACT doesn't work well with this game; it causes HW lags. I don't know if it's a bug in the engine or what. Do you have any idea how to switch the game back to using the Miles Sound System only? Just a question from one gamer. E-mail me at Daemonius@centrum.cz or write something here, please. Thank you.




Re: Game Technologies: Audio / XACT XACT postmortem from a sound designer

ReneC

well, there was certainly a lag on initial release, but I'm pretty sure it was not related to the sound engine. It seemed to resolve itself the next day, so the programmers must have addressed something on the server side.

I've been playing the last few days off of the commercial version with no lag at all.





Re: Game Technologies: Audio / XACT XACT postmortem from a sound designer

Mescalamba

http://warrock.net/news.php?id=32&ntitle=1

Read this. Problem is, when 2 or more weapons are firing at same time. The game almost freeze after this. Normaly I have about 200 FPS, but when I start firing or somebody near me. FPS go down to almost zero. Its not problem of graphic part of game, or network. Its in sound engine, or maybe just sound. Or maybe we dont have sound card that can handle this, can I ask you what sound card do you have in your PC I think it will be some hi-end. Try to play with some SB Audigy 2 and you will see. Srry for my English, I hope you understand what Im trying to say.. But I dont understand one thing, Xact is using only containers for wave or mp3 files. Most of games are using same types of files, why it isnt working for this game Maybe too much sounds at once Just idea.. btw. Ive played with you few times. :)