representing models in structures

cwright:

In a version or two of GLTools (I'm thinking 2, since I've already got a new one with texture mapping etc. in the pipe), we'd like to be able to represent geometry in structure form. This will allow you to draw several OpenGL primitives with a single patch; useful for object loading. As I've been thinking it out, though, it seems as though the amount of data per vertex is really quite large (which makes it cumbersome to work with in pure QCStructure form).

As I understand it now, we have the obvious xyz triplet for location. We have an rgba quartet for color (this is multiplied with the texture, if there is one). We have a uv pair for texture mapping (does anyone actually use 3D textures without cheating and using procedural texturing? If so, I guess it'd be an rst triplet instead), and then an xyz triplet for the vertex normal (used for lighting and making smooth surfaces). So the grand total is something like "XYZRGBA(UV|RST)NxNyNz". That's kinda big, just for a single vertex. Then there's the problem of partially shared vertexes, where, for example, the xyz triplet is shared, but with different values for everything else. Is it worth it to merge coincident points?
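A rough sketch of that per-vertex layout as a C struct, assuming 2D (uv) texture coordinates; the type and field names here are hypothetical, not from any actual GLTools source:

```c
/* Hypothetical per-vertex layout matching "XYZRGBAUVNxNyNz".
   Twelve floats, 48 bytes per vertex. */
typedef struct {
    float x, y, z;        /* position */
    float r, g, b, a;     /* color, multiplied with the texture if present */
    float u, v;           /* 2D texture coordinates (rst would add a third) */
    float nx, ny, nz;     /* vertex normal, used for lighting */
} KinemeVertex;
```

At 48 bytes per vertex, a modest 10,000-vertex mesh is already roughly half a megabyte of raw data, which is part of why carrying it as individually named structure members feels heavy.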

Should we keep these as a bunch of accessible numbers in a structure, or devise a new 'KinemeVertex' port (with patches that'll assemble vertexes from structures/values, and disassemble vertexes into structures/values, etc.)?

Geometry, thankfully, is simply a bunch of indexes into the vertex table. This could probably get its own port type too, since it's just a list.
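A minimal sketch of that index-list idea: a quad built from two triangles that share vertices through the index table (all names illustrative):

```c
/* A tiny vertex table (positions only, for brevity). */
typedef struct { float x, y, z; } Vertex;

static const Vertex table[4] = {
    {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}
};

/* Two triangles forming a quad; vertices 0 and 2 are shared
   purely through indexing, with no duplicated vertex data. */
static const unsigned indices[6] = { 0, 1, 2,   0, 2, 3 };
```

The geometry "list" is then just the `indices` array; the vertex data itself never needs to be repeated.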

With that, I think that's about all I've worked out before going all out and whipping something up. Are there any obvious points that I've left out?


smokris:
simplicity for pass 1?

I'd suggest going with a simpler subset for this initial version. Support a reasonable feature set with the "point structure" inputs. Keep the structures as simple as possible for the user to construct and manipulate.

My suggestions for pruning:

  1. Don't share vertexes. (Later add a different set of optimized patches for rendering vertex-reference-lists, which could take these "point structures" and a structure list of face-vertex-references. Or for the optimized large-dataset patches, don't use QCStructures at all if it's better for performance.)

  2. Don't include vertex normals. The surface normal is by definition perpendicular to the surface; the vertex normal is by definition the average of the normals of the surfaces using the vertex. And the sign of the surface normal is defined by the ordering of the vertexes used. So I'm thinking it shouldn't be necessary for an input mesh to carry this normal triplet. It would probably make sense to calculate this and cache it internally, but passing it around between patches doesn't really make sense to me (especially since deforming the mesh would require recalculating vertex normals anyway).

  3. Don't bother with 3d textures (rst mapping coordinates) for now.

  4. Use standard QCStructures.

So, we're left with XYZRGBAUV, which isn't quite as unwieldy.

cwright:

  1. Agreed.

  2. Imported objects will have normal data, as will objects triangulated from splines. NURBS can allow for discontinuous surfaces, so normals have to be passed along to be correct. It will not always be a smooth shape, nor will it always be a faceted shape. I'd rather the user have control over it from the beginning (it's a negligible amount of code to add).

  3. Again, a negligible amount of additional code; utility is the questionable part. If we change index offsets or something, it'll break backwards compatibility later if we don't plan for it here.

  4. Is there a reason to? Nothing generates models in structures, and you can't iterate over each vertex in JavaScript because they're not laid out as an array but as explicitly named members (input_0, input_1, instead of input[x]). I personally see no benefit to using QCStructures for geometry when we can make our own ports that'll handle it natively.

smokris:

I was thinking that QCStructures would be useful because we could potentially reuse the existing (work-in-progress) structure manipulation patches on them.

If we take, say, the Mouse patch, feed it into Structure Assemble, and feed that into Structure Record, we get a growing list of points, which we could feed directly into GL Point Structure. This alone isn't exciting, but as we invent other structure manipulation patches, we could add them to the stream. If we add some sort of recursive structure numerical manipulator patch (that, say, recursively changes all structure leaves named "x" by adding 0.5), we then have a simple dynamic particle system.
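The recursive leaf-manipulator idea could look roughly like this in C, using an invented tree node type as a stand-in for a nested QCStructure (which is really a CoreFoundation-backed dictionary; everything here is illustrative):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical nested-structure node: either a named numeric leaf
   or a named branch holding child nodes. */
typedef struct Node {
    const char *name;
    int isLeaf;
    double value;          /* valid when isLeaf */
    struct Node *children; /* valid when !isLeaf */
    size_t childCount;
} Node;

/* Recursively walk the structure and add `delta` to every leaf
   whose name matches `target` (e.g. shift all "x" leaves by 0.5). */
void addToLeavesNamed(Node *n, const char *target, double delta) {
    if (n->isLeaf) {
        if (strcmp(n->name, target) == 0)
            n->value += delta;
        return;
    }
    for (size_t i = 0; i < n->childCount; i++)
        addToLeavesNamed(&n->children[i], target, delta);
}
```

Applied once per frame to a growing point structure, a traversal like this is the whole "simple dynamic particle system": every point's "x" leaf drifts by the delta each tick.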

cwright:
good points

That makes sense, and actually sounds pretty cool (for many things other than managing geometry) :) Perhaps a structure<->native (custom port) bridge?

I like the utility of this method of creation, but I also like the advantages of native ports (especially for large datasets, where half a dozen or more messages per vertex will accumulate a lot of unnecessary overhead), so to me a bridge (or, optionally, renderers intelligent enough to deal with either input) seems like a powerful combination that doesn't leave anyone in the dark.

smokris:

Yes, either a (two-way) bridge or intelligent ports would satisfy my hypothetical needs.

franz:
regarding other graphical apps

Max/MSP's Jitter, for instance, provides bridge objects for data manipulation that are very handy, like jit.matrix, which lets you convert QuickTime movies to a matrix datatype (then you can do whatever you want with it, even interpret it as sound).

A custom "KinemeType" would definitely go beyond 3D geometry utility, and could be used to deal with all Kineme-related data, current and upcoming.