Generative graphics with quartz composer?

apalomba's picture

Hey guys,

I am new to QC and am trying to figure out what is the best way to design my graphics system.

I basically want to create 3D geometry on the fly and control its generation from an external source like audio or sensors. I want to then send that to VDMX for presentation.

I am familiar with graphics scripting environments like Processing and Max. I would ideally like to do things in a programmatic way, which would give me the most freedom.

Is it possible to do real-time generative geometry/graphics with Quartz Composer?

Thanks, Anthony

Serious Cyrus's picture
Re: Generative graphics with quartz composer?

apalomba wrote:
Is it possible to do real-time generative geometry/graphics with Quartz Composer?

Short answer: yes. There are lots of different ways to do it, interacting with many kinds of input, but you'll need to be more specific, since different approaches suit different things.

apalomba's picture
Re: Generative graphics with quartz composer?

Well, basically I want to create geometry on the fly. As the audio changes, I want to generate vertices and connect them in real time.

The vertex generation would be controlled by any of a number of different algorithms. Is this possible in Quartz Composer?

smokris's picture
Re: Generative graphics with quartz composer?

@apalomba: There are a few possibilities for creating 3D geometry on the fly:

  • You can generate a plane mesh (using either Kineme 3D Plane Generator or Kineme Super GLSL Grid) and deform it using a GLSL shader. See the GLTools-superglsl.qtz sample composition included with GLTools. This route is the most efficient, since the geometry transformations are happening in parallel on the GPU (but it's also the most restrictive as to how you can perform the transformations, and probably the most difficult to debug).
  • Kineme3D's Kineme 3D Parametric Surface patch lets you specify parametric equations for generating geometry. See the Parametric Torus.qtz sample composition included with Kineme3D. This route is pretty efficient since it uses just native C arrays.
  • You can write JavaScript code to produce a structure of triangle vertices, and feed that to the GLTools GL Triangle Structure patch. See the GLTools-structure-syncom-sphere.qtz sample composition included with GLTools (it uses GL Line Structure instead of triangles, but it's the same basic idea), and the rough sketch after this list. This route is less efficient than the ones above, since it involves a round trip through a QCStructure, which is relatively inefficient compared to native C arrays.
  • You can write JavaScript code to produce a structure of triangle vertices, and feed that to the built-in Mesh Creator and Mesh Renderer patches. Again, this uses QCStructures, so it's relatively inefficient.
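
To make the JavaScript route a little more concrete, here's a rough sketch of the kind of code you'd put in a JavaScript patch: it takes an audio level (wire in, say, the peak output of an audio input patch) and the patch time, and builds a ring of triangles whose outer edge follows the audio. The port names (audioLevel, time, vertices) and the [x, y, z]-triple vertex layout are just assumptions for illustration; check the GLTools sample composition for the exact structure format GL Triangle Structure expects.

function (__structure vertices) main (__number audioLevel, __number time)
{
	var result = new Object();
	var verts = new Array();

	// Build a ring of triangles whose outer edge is pushed outward by the audio level.
	var segments = 64;
	var inner = 0.25;
	var outer = 0.25 + audioLevel;

	for (var i = 0; i < segments; i++)
	{
		var a0 = (i / segments) * 2.0 * Math.PI;
		var a1 = ((i + 1) / segments) * 2.0 * Math.PI;
		var mid = (a0 + a1) / 2.0;

		// One triangle per segment: two points on the inner circle, one on the outer,
		// with a small z wobble driven by time. Each vertex is assumed to be an [x, y, z] triple.
		verts.push([inner * Math.cos(a0), inner * Math.sin(a0), 0]);
		verts.push([inner * Math.cos(a1), inner * Math.sin(a1), 0]);
		verts.push([outer * Math.cos(mid), outer * Math.sin(mid), 0.1 * Math.sin(time + mid)]);
	}

	result.vertices = verts;
	return result;
}

The same basic pattern applies to the last option; you'd just reshape the output structure into whatever the Mesh Creator's vertex input wants instead of the GLTools layout.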

apalomba's picture
Re: Generative graphics with quartz composer?

I see, thanks for the detailed info. Part of what I am doing is trying to design the best system for creating real-time generative graphics. So far these are the choices:

Option 1: Max -> Syphon -> VDMX
Pros: integrated signal processing/video/3D graphics environment, infinitely extendable.
Cons: less integration with VDMX.

Option 2: Quartz Composer -> VDMX
Pros: nice integration with VDMX.
Cons: not as flexible.

I would be interested in hearing people's thoughts on which of these two options is better, or maybe another option that I have not thought of.

Thanks, Anthony

gtoledo3's picture
Re: Generative graphics with quartz composer?

I don't think there's really such a thing as a "best system", just one that fits your working style and allows you to achieve the results you want. So, for me, having to run graphics through something like VDMX winds up being a gigantic impediment, because it doesn't fit my flow or working style (and I've never enjoyed the aliasing it does on graphics... not sure if that's ever been resolved).

I'm curious what you mean by Max being infinitely extendable. Do you mean that it has a ton of objects available, or that you believe there's something about the API that makes it more extendable? I can see that in some ways, but if you're used to Objective-C, it's a pain calling Cocoa functions in the middle of a C program.

I don't think either one is better; they're both pretty great. Some really great, fast, results can be had by using both in tandem, shuffling around data with syphon or OSC, etc. (not to encourage bad habits though).

Another route that might be interesting for graphics is using fragment shaders instead of relying on drawing everything with vertices. In some cases performance and subjective "quality of look" can be vastly better (though there are obviously frag shaders that aren't really capable of being rendered in real time on current GPUs). I personally find QC to be better for working with shaders on the fly.

apalomba's picture
Re: Generative graphics with quartz composer?

Max is extendable in the sense that it is a graphical programming environment, so I can build whatever functionality I want using a library of objects. It also has an open architecture and provides an SDK that allows me to create my own audio/video/control objects in C/C++.

Of course, total freedom can also be a lot of work. Ideally I want an environment that allows me to create with ease, not create more work. I definitely see the benefits of QC, but it will take some time to learn it. Since I already know Max, for now I am leaning towards it.

Thank you for your insight; it is very helpful. If you do not mind me asking: if you do not use VDMX for presentation, what do you use?

gtoledo3's picture
Re: Generative graphics with quartz composer?

Since QC is based on Apple tech/Cocoa, you can use Obj-C, C, C++, and probably other wacky languages to create custom plugins for QC, if desired. There's a public API provided by Apple, as well as a pretty complete reverse engineering of the private API, available as the "skankySDK". So, that may be worth thinking about.

If you know Max well, there's a ton you can do with it. Gen has improved the performance that's possible, too. The big thing, for me, that makes QC nice is that it integrates well into Xcode projects, has methods that feel familiar if you work with other Cocoa frameworks, and makes it really easy to write plugins that use OpenGL. If I had to do audio processing, or something else that Max is really good at, I would just be pragmatic and use Max.

I've always just whipped up some kind of QC composition to control playback. In the Leopard era, I probably would have just run it from the editor app. QuartzBuilder is nice for quickly turning a composition into a self-contained app, and it gets better performance than the editor nowadays, so I have used that occasionally. These days, I have a "go to" Xcode project that loads the qtz's I need and provides some simple niceties, like better FPS, etc.