Thomas Lindquist

I have a fair number of shader/.fx files, each containing several techniques for different shader models and levels of detail. In the worst case this means a lot of near-identical code duplicated across the various vertex and pixel shaders, which is a maintenance nightmare if left like that: change the code in one place and five or more other places need the same change. That leads me to the following idea:

Split the code into functions and place them in suitable include files, then include the needed files in a given shader and call the functions. A few examples of such functions: translatePosition(), getSpecular(), getDiffuse(), getNormalMapSpecular(), getShadowValue(), and so on. That way each task exists in exactly one file and one function, which would make everything very maintainable. However, as with ordinary code, a large number of function calls introduces a certain amount of overhead. My questions are:
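As a sketch of that approach (the file name and function bodies here are hypothetical, not from the original shaders), a shared include file might look like:

```hlsl
// lighting.fxh -- shared lighting helpers (hypothetical example)

// Standard Lambert diffuse term.
float3 getDiffuse(float3 normal, float3 lightDir, float3 lightColor)
{
    return lightColor * saturate(dot(normal, lightDir));
}

// Blinn-Phong specular term.
float3 getSpecular(float3 normal, float3 lightDir, float3 viewDir,
                   float3 lightColor, float specPower)
{
    float3 halfVec = normalize(lightDir + viewDir);
    return lightColor * pow(saturate(dot(normal, halfVec)), specPower);
}
```

Each .fx file then pulls in only what it needs, e.g. `#include "lighting.fxh"` at the top, and its pixel shaders call getDiffuse()/getSpecular() instead of repeating the lighting math inline.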

Since shader efficiency is important for maintaining the highest possible frame rate, especially in pixel shaders, will doing this make my shaders slower, even if more maintainable?

Is there a way to force inlining of such code, or does the compiler perhaps already handle that?

If the compiled shader does suffer a slowdown, are there any statistics on how much, or does anyone have first-hand experience with this?

/Thomas Lindquist

Re: Game Technologies: Graphics HLSL design and compiler cleverness

Stuart Yarham

All the functions will almost certainly be inlined. The HLSL compiler is very good at dealing with this and keeping the generated code optimal; in fact, I don't recall ever seeing a call or callnz instruction in my asm output. It's probably best to check your own asm output, too, to see what's happening. Unless you're running out of shader instruction slots, subroutines shouldn't be needed.
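A quick way to confirm this yourself: in HLSL, `inline` is the only function storage class and is implied even when omitted, and fxc can emit an assembly listing you can search for call/callnz. A minimal sketch (entry-point and file names below are placeholders):

```hlsl
// HLSL functions are expanded inline by the compiler; the optional
// 'inline' keyword is the only function storage class and is the
// default even when you leave it off.
inline float3 getDiffuse(float3 normal, float3 lightDir, float3 lightColor)
{
    return lightColor * saturate(dot(normal, lightDir));
}

// To verify, dump the assembly listing and search it for call/callnz
// instructions (placeholder names):
//   fxc /T ps_2_0 /E psMain /Fc shader.asm shader.fx
```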

As for tools for checking performance, PIX is your best bet there.