Julian Klappenbach

I've been looking over the Mesh methods and haven't found anything that would enable framework-level resampling of the vertices in a mesh, or anything along those lines. The use case is rendering objects at various distances: obviously, the farther an object is from the camera, the fewer vertices are needed to render it.
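For reference, this is the kind of distance-based level-of-detail selection I have in mind. It's just a sketch; SelectLodLevel and the distance thresholds are made up for illustration.

using Microsoft.Xna.Framework;

static class LodSelector
{
    // Pick a level of detail (0 = full resolution) from the camera-to-chunk
    // distance. The thresholds are arbitrary illustration values.
    public static int SelectLodLevel(Vector3 cameraPosition, Vector3 chunkCenter)
    {
        float distance = Vector3.Distance(cameraPosition, chunkCenter);
        if (distance < 500f)  return 0; // every vertex
        if (distance < 1500f) return 1; // skip one point
        if (distance < 3000f) return 2; // skip two points
        return 3;                       // skip three points
    }
}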

IIRC, the C-based API supports this, but I haven't seen anything to support it in XNA.

Also, a utility to perform this on simple triangle lists would be nice. In my case, I'm rendering terrain from USGS data, which usually comes in files supplying 1000x1000 grids, or around a million vertices. That appears to be about the limit of what one can supply to device.Vertices[0].SetSource. Obviously, frustum culling and other techniques can help limit the number of vertices thrown into the pipeline, but I think it's also useful to break the full set of vertices into smaller grids. I've chosen a resolution of 128x128 and thrown each chunk into a QuadTree to optimize frustum culling (see the sketch below).
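To make the chunking concrete, here's roughly what I'm doing. This is a sketch, not framework code; names like TerrainChunk, BuildChunks, and chunkSize are my own. Each 128x128 block gets its own vertex array and a bounding box, and per frame I keep only the chunks whose bounds intersect the view frustum; the QuadTree just lets me reject whole branches at once instead of testing every chunk.

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class TerrainChunk
{
    public VertexPositionNormalTexture[] Vertices;
    public BoundingBox Bounds;
}

static class TerrainChunker
{
    // Slice a gridWidth x gridHeight vertex grid into chunkSize x chunkSize
    // blocks. Adjacent chunks share an edge row/column so the terrain has no gaps.
    public static List<TerrainChunk> BuildChunks(VertexPositionNormalTexture[] grid,
                                                 int gridWidth, int gridHeight, int chunkSize)
    {
        var chunks = new List<TerrainChunk>();
        for (int cy = 0; cy < gridHeight - 1; cy += chunkSize - 1)
        {
            for (int cx = 0; cx < gridWidth - 1; cx += chunkSize - 1)
            {
                int w = System.Math.Min(chunkSize, gridWidth - cx);
                int h = System.Math.Min(chunkSize, gridHeight - cy);
                var verts = new VertexPositionNormalTexture[w * h];
                var positions = new Vector3[w * h];
                for (int y = 0; y < h; y++)
                {
                    for (int x = 0; x < w; x++)
                    {
                        var v = grid[(cy + y) * gridWidth + (cx + x)];
                        verts[y * w + x] = v;
                        positions[y * w + x] = v.Position;
                    }
                }
                chunks.Add(new TerrainChunk
                {
                    Vertices = verts,
                    Bounds = BoundingBox.CreateFromPoints(positions)
                });
            }
        }
        return chunks;
    }

    // Per frame: yield only the chunks whose bounds intersect the camera frustum.
    public static IEnumerable<TerrainChunk> VisibleChunks(List<TerrainChunk> chunks,
                                                          BoundingFrustum frustum)
    {
        foreach (var chunk in chunks)
            if (frustum.Intersects(chunk.Bounds))
                yield return chunk;
    }
}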

When I resample, I create lists of VertexPositionNormalTexture elements, along with indices, picked from the original list by selecting only specific points (skipping by one, two, or three points). As a design decision, I can either create a separate list for each resolution, duplicating the vertices and indices, or keep each resolution as a set of lookups into a single static master list. The latter is problematic, since VertexPositionNormalTexture requires the equivalent of a UV coordinate per vertex; the lists for the lower sample rates will have their UV coordinates wrong, so I'd have to constantly revise the master list for each render. The former avoids that, but I pay a penalty in (mostly) duplicated data.
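For the "lookups into a master list" option, the index generation itself is simple; here's a sketch (BuildLodIndices is my own name). The indices reference the full-resolution vertex grid at a given stride, so one master vertex list could in principle serve every resolution; whether the UVs hold up at the coarser strides is exactly the problem described above.

using System.Collections.Generic;

static class LodIndices
{
    // Build a triangle-list index set that samples a gridWidth x gridHeight
    // vertex grid at the given stride (1 = every vertex, 2 = every other, ...).
    // If the stride doesn't divide the grid evenly, the last row/column is dropped.
    public static int[] BuildLodIndices(int gridWidth, int gridHeight, int stride)
    {
        var indices = new List<int>();
        for (int y = 0; y + stride < gridHeight; y += stride)
        {
            for (int x = 0; x + stride < gridWidth; x += stride)
            {
                int topLeft     = y * gridWidth + x;
                int topRight    = y * gridWidth + (x + stride);
                int bottomLeft  = (y + stride) * gridWidth + x;
                int bottomRight = (y + stride) * gridWidth + (x + stride);

                // Two triangles per quad.
                indices.Add(topLeft);  indices.Add(bottomLeft); indices.Add(topRight);
                indices.Add(topRight); indices.Add(bottomLeft); indices.Add(bottomRight);
            }
        }
        return indices.ToArray();
    }
}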

Is there a better approach? It would be nice to place each 128x128 block of points into a mesh and have framework methods perform the resampling.

Any suggestions?