LKeene

Hello all,

I'm trying to learn DirectX9 (unmanaged C++) from 3 different books and I still have a few questions that I can't deduce the answers to. I'm hoping some of the gurus here can shed some light on these subjects:

1) How does Direct3D know when I'm feeding it pre-transformed vertex coordinates? If I neglect to call D3DXMatrixPerspectiveFovLH (or to set any other transformation matrix, for example), will Direct3D assume that the vertex buffer data is already in screen coordinates?

2) I've seen, in my book, an example of a custom vertex structure meant to contain transformed data (it's introduced before the chapter on 3D transforms). It looks like the following:

struct ColorVertex
{
   float x, y, z, rhw;
   DWORD color;
};

In the above structure, what is the meaning of the "z" coordinate if the structure is meant to contain coordinates that have already been transformed to 2D screen space, and on what basis would I select a value? If I want a vertex in the top-left corner of the screen, do I specify x=y=z=0.0f? For some reason the author has specified z=0.5f, but she doesn't explain why.

3) I've seen examples (in Frank Luna's book) where he draws a quad (2 triangles) and maps a texture bitmap to it, but he only specifies the "D3DXMatrixPerspectiveFovLH" projection matrix and not the World/View/Viewport transforms. Since the projection transform describes only the details of the camera but not its location, how does Direct3D know where to draw the objects relative to screen coordinates? Are all the other transforms optional?

4) Lastly, in the "D3DXMatrixPerspectiveFovLH" function call, is the aspect ratio argument supposed to be the aspect ratio of the monitor itself, or the aspect ratio of the client area of the window in which I'm rendering?

Thanks in advance everyone! This is a lot trickier than I imagined.



Re: Game Technologies: DirectX 101 Beginner transform matrices question

Bad Habit

DX knows what type of data it is when you set the vertex format via IDirect3DDevice9::SetFVF().
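For example, a minimal sketch (ColorVertex just repeats the layout from your question; pDevice and pVB stand in for an already-created device and filled vertex buffer):

#include <d3d9.h>

struct ColorVertex
{
    float x, y, z, rhw;   // pre-transformed position
    DWORD color;          // diffuse colour
};

// D3DFVF_XYZRHW is what tells Direct3D the positions are already transformed
#define COLOR_VERTEX_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

// ... inside your render code, after creating the device and vertex buffer ...
pDevice->SetFVF(COLOR_VERTEX_FVF);
pDevice->SetStreamSource(0, pVB, 0, sizeof(ColorVertex));
pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);   // one quad's worth of triangles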

Try changing the value from 0.5 to 5000 or any other value and see if it makes a difference. Yes, (0,0) would be the top left of the client area when using transformed co-ordinates.

The viewer starts at the origin looking down the z-axis, so Frank more than likely knew where to place the textured quad if he was using untransformed co-ordinates.

It's the aspect ratio of the client area.
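Something like this, for instance (hWnd is assumed to be the window you render into; the FOV and clip planes are arbitrary, and you need d3dx9.h / d3dx9.lib for the D3DX call):

RECT rc;
GetClientRect(hWnd, &rc);
float aspect = (float)(rc.right - rc.left) / (float)(rc.bottom - rc.top);

D3DXMATRIX proj;
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f, aspect, 1.0f, 1000.0f);
pDevice->SetTransform(D3DTS_PROJECTION, &proj);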





Re: Game Technologies: DirectX 101 Beginner transform matrices question

Robert Dunlop

1-2) It's the inclusion of the rhw element in the definition of your vertex format, either by using the FVF flag D3DFVF_XYZRHW or the D3DDECLUSAGE_POSITIONT element in a vertex declaration structure. When this is present, Direct3D expects vertices with an rhw coordinate and will pass them straight to the rasterizer without transforming or lighting them.
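As a rough sketch of the two routes (the offsets assume the ColorVertex layout from the question, i.e. four floats followed by a DWORD colour):

// FVF route:
pDevice->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);

// Vertex-declaration route, using D3DDECLUSAGE_POSITIONT:
D3DVERTEXELEMENT9 elems[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT4,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITIONT, 0 },
    { 0, 16, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,     0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* pDecl = NULL;
pDevice->CreateVertexDeclaration(elems, &pDecl);
pDevice->SetVertexDeclaration(pDecl);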

The x and y coordinates are in pixels, with the upper left being (0,0). The z coordinate specifies the depth of the vertex, that is, how far it is into the screen. In a pre-transformed vertex it will have to be in the range of 0.0 to 1.0 to appear on the screen; anything outside that range will not display. If you are using a depth buffer, the z value you use will determine what happens when two or more polygons overlap on the screen - regardless of what order they are rendered in, the pixels with the lowest z will be rendered over pixels with higher z values. When rendering pre-transformed vertices, z does not have any effect on your x and y coordinates; it just determines what's on top.

The rhw value is a bit harder to explain; when rendering in 2D you can just set it to 1.0, though just about any positive non-zero value will suffice.
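For instance, a quad covering the top-left corner of the client area might look like this (a sketch; the size and colours are arbitrary):

// x/y are in pixels from the top-left of the client area, z is anywhere
// in [0.0, 1.0] (it only matters for depth-buffer overlap), rhw is 1.0.
ColorVertex quad[4] =
{
    {   0.0f,   0.0f, 0.5f, 1.0f, 0xFFFF0000 },  // top-left
    { 256.0f,   0.0f, 0.5f, 1.0f, 0xFF00FF00 },  // top-right
    {   0.0f, 256.0f, 0.5f, 1.0f, 0xFF0000FF },  // bottom-left
    { 256.0f, 256.0f, 0.5f, 1.0f, 0xFFFFFFFF },  // bottom-right
};

pDevice->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);
pDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(ColorVertex));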

Yes, the view and world matrices are "optional". Actually they still apply, but if you don't set them they contain an identity matrix, so they have no effect. Using just the projection transform is like setting the camera at (0,0,0) facing in the +z (0,0,1) direction.
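In code terms, leaving the world and view transforms alone is equivalent to something like this (a sketch; aspect would be computed from the client area as shown earlier):

D3DXMATRIX identity;
D3DXMatrixIdentity(&identity);
pDevice->SetTransform(D3DTS_WORLD, &identity);   // default: no object transform
pDevice->SetTransform(D3DTS_VIEW,  &identity);   // default: camera at (0,0,0) looking down +z

D3DXMATRIX proj;
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f, aspect, 1.0f, 1000.0f);
pDevice->SetTransform(D3DTS_PROJECTION, &proj);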