WWDC 2008

For those of you attending WWDC 2008 in San Francisco this year, keep an eye out for @smokris or @cwright, and their cool new patches in progress.

We'll be showing off some improved Kineme3D features, an enhanced OpenCV patch, an enhanced audio/video input patch, and possibly an improved Particle System and tweaked GLSL patch.

And we might also spend some time tinkering on iPhone stuff. Who knows?! :)

franz's picture
sneak peek...

enhanced a/v input!!!! that's a scoop... i guess it's time to make a buzz...

it also seems that you've been secretly working on interesting features... too bad i won't make it to the U.S.

toneburst's picture
Exciting....

not gonna make it to the US, though, sadly.

alx

Quartz Composer Blog: http://machinesdontcare.wordpress.com

Music Site: http://www.toneburst.net

jean_pierre's picture
wicked, i will be attending

wicked, i will be attending as well and on the lookout during the media-themed sessions...

psonice's picture
Good luck

I won't be attending, but good luck with it.

Thinking of anything interesting on the iphone? Or is it a secret project? One thing I've been really lusting for is an iphone port of QC (or at least the frameworks so a composition will play). Doesn't look like it's happening though. :(

cwright's picture
QC Session

Tomorrow (Thursday) at 2:00pm in Russian Hill there's a session on QC integration and extension. Be there or be square ;)

tobyspark's picture
...a row of virtual us'es

hahaha, go out and buy a load of macbooks from an apple store that you can return the next day, put ichat/skype on them, we dial in, video fullscreen + volume to the max... and cause them hell!

ah, apart from this being a wishful joke, i've just remembered there's a restocking fee in the us

psonice's picture
Video conference, 30" monitor

With ichat in videoconferencing mode and a 30" screen, he could get a few of us in easily. I think it's a maximum of 3 for ichat, but maybe it could be extended a bit with some hackery.

Christopher: don't suppose you'd mind asking if/when it's likely the glsl bug will get squashed? :D Not an important one, but it'd be nice to have some idea of when my work will actually.. work.

toneburst's picture
On the sampler2D/Vertex Shader Bug

Just heard from someone 'in the know' that apparently, Apple's video drivers don't support vertex displacement mapping anyway, so even with that bug squashed, it's always going to fall back to software rendering. Good news for those with fast processors and poor graphics cards (like MacBook owners) maybe, but bad news for the rest of us. Explains why VDM was always much slower than it should be.

On the other hand, only newer NVIDIA GPUs actually currently support sampler2D input to the Vertex Shader, anyway.

All in all, looks like custom-rolled VBO-based heightfield plugins have to be the way to go.


psonice's picture
sampler2d

Ah, thanks. That explains why things are a bit buggy around displacement mapping. I had it working nicely with software CI too.

Time to figure out another way of accomplishing the same thing then...

toneburst's picture
Normal-Mapping

You might be able to get a similar effect using normal/bump-mapping methods (though of course the edges of the mesh would still be flat).


psonice's picture
Normal + displacement

I'm actually using normal mapping in combination with displacement, to get a pretty high-res-looking object without too many polys. Pure normal mapping wouldn't really cut it for most effects; parallax occlusion mapping is better, but only for quite 'shallow' depth effects.

Generating the mesh outside QC is looking like the only way to go (unless the "pull the texture out of QC, set the colorspace attribute to 'GenericRGB' and send it back to the GLSL patch" method works. Has anybody actually tested that to be sure? )

cwright's picture
validity

I can't confirm the validity of that, but we sat through a number of OpenGL sessions (including sessions with NVidia and Intel GPU Engineers) that listed some common Do's and Don'ts, and VDM wasn't on the list.

If VDM were software-only all the time, why would switching to software OpenGL fix it (since, in essence, it wouldn't be doing anything differently)?

I'll try to snag an ATI engineer at some point (or one of the many Apple OpenGL Driver guys) and confirm/deny this, but my understanding is that texture sampling in a vertex shader isn't difficult anymore, so long as the hardware supports it (the X1600 doesn't, which is a fairly common card in MBPs from a year ago or so?)

toneburst's picture
My understanding is that

My understanding is that only NVIDIA GPUs support it at present. My 'source' (albeit 2nd-hand) is an ATI driver developer working on Apple drivers for their cards. Switching to software opengl would maybe fix it because the problem occurs when the texture is piped into the GPU (which doesn't happen with software rendering). The software renderer presumably isn't as picky about image formats/colorspaces as the GPUs. Just a guess....


psonice's picture
SoftwareCI/GL

Thinking about it, the 'fix' is softwareCI, not softwareGL. Doesn't that leave GL in hardware accelerated mode, but put CI in software? Which would mean the vertex shader is running in hardware anyway, but the input texture is perhaps software generated and has no colorspace. At least it feels like that, as some functions are still very fast.

yanomano's picture
displacement: emulate None QT codec ?

As you know, if you export a displacement map (fractal noise or other) with the QuickTime codec "None", all is fine (you can plug the video in as a displacement map in GLSL). I'm on an old NVIDIA 6800 Ultra... So isn't there a way to transcode or emulate this image format on the fly (with a custom patch)? For info: there are fewer problems with Max/MSP for this kind of thing; it has worked there for a long time. See the example in the attached file (already posted).

Attachment: DisplaceWithQuickime.zip (2.22 MB)

cwright's picture
transcode

transcoding is possible, it's just slow (looking for a faster way to do it) -- right now, it requires loading the texture from the GPU, and then reuploading it with different colorspace flags. The load cycle shouldn't be necessary, and it destroys performance.

Looking for a way to change an image's colorspace on the fly without the load cycle. Since I don't do GLSL all that often, it's not an urgent problem for me...

yanomano's picture
For me either...

It's not urgent for me either, but I understand it can be for someone... I remember my first test with 3D displacement in Max/MSP... tears... :) But I see: double loading = bad FPS.

toneburst's picture
true, yanomano, but I think

true, yanomano, but I think what most people want to do is displace the vertices with a live video image, or a displacement map that can be altered in realtime. I know that's what interests me, anyway. Being able to displace with a QuickTime movie doesn't really cut it, sadly.

I still think there's a pretty good case to be made for a HeightField patch as part of Kineme GL Tools. With a VBO heightfield, you can guarantee GPU execution, which has to be faster than software fallback, and it additionally neatly sidesteps the colorspace issue. If the heightfield can have normals and texture coords so it can be textured and lit in a GLSL shader, that would be even cooler! And if it could be 'super-ised', even better!!


cwright's picture
normalize

generating normals from an input image is tricky. texture coords aren't (they're input-invariant).

I'll try to adapt the Apple example into Kineme GL sometime...

toneburst's picture
I thought

I thought you could generate normals from the mesh?


cwright's picture
... abstractly ....

Kineme3D does, because it keeps mesh vertices in system RAM and has the CPU do calculations on them. The height field essentially renders an image to a VBO (a PBO, actually) and reads that directly for vertex data -- the CPU doesn't look at the texture, or know anything about the mesh vertices.

When geometry shaders become popular, normal calculation can be done on-GPU. Until then, to get accurate normals we'll have to either manually sample the texture with the CPU (slow); render a second, specially crafted image to another PBO that represents normal data (this is the most promising, and still really fast -- not sure how to generate such a texture off the top of my head, though); or ignore normals altogether (simplest for lazy coders like myself ;).

toneburst's picture
You can generate a normal

You can generate a normal map from the heightmap image using sobel filtering in GLSL shader code. You'd then need to transform the normal data into object space though. I did something similar just using shaders, but I was never particularly happy with the results.

http://machinesdontcare.files.wordpress.com/2008/05/sobelnmapvdm_03_2205...


cwright's picture
surface to object space

The surface-space to object space transform is where it gets ... impossible? :) not sure.

psonice's picture
Normals

It's possible to generate normals in the vertex shader, if you're using some kind of algorithm to generate the vertex positions. It's just a case of applying the same algorithm to points slightly offset on each side of the vertex, which tells you the slope of the surface at the vertex, then calculating the normal from that. I found an example shader doing just that, linked from this site somewhere, I believe (I don't have the link now, unfortunately).

On the other hand, it's not always necessary to use the normals anyway. I'm doing lighting purely by using the emboss filter on the displacement map, then using the embossed version as a light map. It looks good, and you can add multiple lights with some creative light textures fed into the emboss filter.

Just to get the 'workaround by stripping the colorspace data from the texture' thing straight - has anyone actually tried it to confirm it either way? I was under the impression it had been tried and didn't work, but if not I'll give it a go. It's not particularly urgent for me right now as I've started a fresh project in the meantime, but I really want that fixed so I can use my 3d paint tool in anger :D

toneburst's picture
Hiya, I think you're

Hiya, I think you're thinking of this technique: http://tonfilm.blogspot.com/2007/01/calculate-normals-in-shader.html

I've done it a couple of times myself. The problem is, it's pretty GPU-intensive, because you're basically running the displacement algorithm three times: once to calculate the new vertex position, then twice more at small offsets to estimate the derivatives. Then you have to take the cross-product of the two difference vectors. This technique isn't really suitable for displacement-mapping-created meshes anyway, I don't think, and wouldn't work for heightfields.


psonice's picture
That's the one

Yeah, that's the method I was thinking of. I used it (or at least my own hacky variant of it) initially, before I changed my lighting method. Actually, it's not that heavy on the GPU if you're not applying a complex algorithm.

I also noticed that you can use the extra 'samples' to do a kind of AA on the mesh positions - very handy for displacement mapping as it smoothes out the kinks in the mesh and looks much better in the end!
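That smoothing amounts to a small box filter over the height samples before displacing each vertex. A rough CPU-side sketch of the idea (illustrative names, and a 3x3 kernel is an assumption):

```c
/* Clamped heightmap lookup (illustrative helper). */
static float tap(const float *img, int w, int h, int x, int y) {
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return img[y * w + x];
}

/* Average a 3x3 neighborhood so the displaced vertex doesn't kink on
   hard texel edges -- the 'AA on the mesh positions' trick above. */
static float smoothed_height(const float *img, int w, int h, int x, int y) {
    float sum = 0.0f;
    int dx, dy;
    for (dy = -1; dy <= 1; dy++)
        for (dx = -1; dx <= 1; dx++)
            sum += tap(img, w, h, x + dx, y + dy);
    return sum / 9.0f;
}
```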

toneburst's picture
Nice example, by the way

Nice example, by the way, Yanomano. I stand by what I said about static displacement movies, though. I mean, if you could even run it through a CIFilter to do something with it, I'd be semi-happy. As it is, though, as soon as you try to do that... boom...


psonice's picture
Agreed.

It's nice to have SOMETHING working, but having it animated in realtime has to be the goal. I have a whole bunch of awesome effects waiting to be done using this. I've had some of them running in realtime with the softwareCI workaround, and they look great but sooo slow.

psonice's picture
QC Session?

What happened at the QC session? Was anything new announced, or were any exciting new tricks learned?

cwright's picture
N to the D to the A

[NDA = Non-Disclosure Agreement]

"Was anything new announced" -- yes.

"were any new tricks learned" -- yes.

that's about all I can say ;)

psonice's picture
Full disclosure!

We demand it!

Ok, just joking :) One question though that you hopefully can but possibly can't answer.. can we expect any disruptive changes in the near future?

cwright's picture
disruptive

If you mean "Disruptive" as in "Holy Crap, everything is different, and backwards compatibility is broken", then No. If you mean it as in "Holy Crap, There's So Much More I can Do With QC Now!" then Yes :)

psonice's picture
What i like to hear :D

Sounds like excellent news!

Another question, and I'm sure we can get this past the NDA... don't suppose you asked about the GLSL bug? :D (sorry to keep asking about that, but it's really winding me up... I really need to use vertex samplers to achieve a whole bunch of new effects, and modifying the GL height field patch to suit is a lot of work + learning, so it's going slowly :/ )

toneburst's picture
How are you getting on with

How are you getting on with that GL Heightfield thing? I'd be interested to know what you want it to do. I have a few ideas there myself.


psonice's picture
Not so well really. It kind

Not so well really. It kind of does about 10% of what I need, and the other 90% is tricky.

As a minimum I need something like the height field with texturing, as I need a textured and lit mesh. I'd really like it to work in a GLSL patch (or better still have integrated GLSL) so I can use shaders with it.

The vertex shader I made using the crashy sampler actually used multiple samples per vertex (I found using a texture with the same number of pixels as the mesh has vertices worked best, but multisampling it too got rid of a lot of mesh ripple, which is an issue in some cases). It'd be good to keep that too.

I still think using the GLSL patch and making a plugin to work around the bug somehow would be best, but it doesn't seem like it's possible somehow. Although I've never got a really straight answer on whether anyone has tried stripping the colorspace from the texture - anyone tried that for sure? Mr. Wright?

cwright's picture
provisional

Despite my silence on this topic for the past while, I've actually been poking around quite a bit to find a good solution to this. Unfortunately, I don't know if there's a good one "from the outside" (i.e. outside of having QC source access). (There's a bunch of educated guessing going on here, as well as some uneducated guesses. Don't take this as Gospel Truth just yet.)

It works something like this:

Images in QC3 are just very thin wrappers that look like this:

@interface QCImage : QCObject
{
    id <QCImageProvider> _provider;
    NSAffineTransform *_transformation;
    QCRegion *_domainOfDefinition;
    void *_unused2[4];
}

All the image data is stored in the _provider member variable, and there are a bunch of different provider classes (NSImage, CoreGraphics, CoreVideo, CoreImage, QC-internal ones, and some other minor classes, I guess).

In raw OpenGL mode (where GLSL operates), "textures" are collections of bytes with a specific format (RGB, BGR, ARGB, RGBA, et al.). QC abstracts that a bit to allow all kinds of colorspace images, and mostly does "The Right Thing" to get them into a format OpenGL uses in a timely fashion (i.e. before dropping to raw OpenGL mode). This is done with a high-level call in patch code that looks like this:

[inputImage setOnOpenGLContext: context];
[ ... do cool stuff ... ]
[inputImage unsetOnOpenGLContext: context];

and that's all anyone really knows (from the outside).

Colorspace is maintained on a per-provider, per-object basis:

@protocol QCImageProvider
...
- (struct CGColorSpace *)colorSpace;
...
@end

(where "struct CGColorSpace *" is a CGColorSpaceRef in proper typedef'd CoreGraphics parlance)

So, to "strip/modify" the colorspace, every provider needs to be modified. This in itself is a bit tedious, but nothing impossible.

The badness comes when dealing with non-RGB-like colorspaces. If it's an RGB-like colorspace, it's a snap: simply pretend; the colors will be a bit off, but nobody gets hurt. BUT if you're dealing with YUV (Ycc_601 or whatever QC calls it), or a mixture (a CoreImage chain with video and RGB inputs simultaneously), simply pretending it's RGB-like gets people killed (colors are completely wrong, and the images are smaller than their RGB equivalents, which makes things crash). So we need to explicitly handle the image's colorspace, and Do The Right Thing if it's weird. There are dozens of QC colorspaces, so this becomes tedious squared compared to the above.

And then the killer: colorspace conversion is usually hardware-accelerated by the libraries, which isn't painful. But there doesn't seem to be a way to force this conversion to happen before the GLSL patch gets to it. It's possible and simple to handle it in software whenever we want (the OpenCV workaround does exactly that, in a somewhat inelegant way), but it's slower than all get-out. It shouldn't be that slow, but I've not been able to find any working accelerated methods to get RGB-like information from arbitrary images.
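For reference, the software path for the YUV case boils down to a per-pixel conversion like this (a plain-C sketch using standard full-range BT.601 coefficients; QC's actual Ycc_601 handling may use different ranges, so treat this as illustrative):

```c
#include <stdint.h>

/* Clamp a float into the 0..255 byte range, rounding to nearest. */
static uint8_t clamp8(float v) {
    if (v < 0.0f) return 0;
    if (v > 255.0f) return 255;
    return (uint8_t)(v + 0.5f);
}

/* Full-range BT.601 Y'CbCr -> RGB for one pixel. A software fallback does
   this for every pixel on the CPU, which is part of why the CV-style
   workaround is so much slower than a GPU-side conversion. */
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr, uint8_t rgb[3]) {
    float fy  = (float)y;
    float fcb = (float)cb - 128.0f;
    float fcr = (float)cr - 128.0f;
    rgb[0] = clamp8(fy + 1.402f    * fcr);                    /* R */
    rgb[1] = clamp8(fy - 0.344136f * fcb - 0.714136f * fcr);  /* G */
    rgb[2] = clamp8(fy + 1.772f    * fcb);                    /* B */
}
```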

(For what it's worth, tracing the Movie Exporter example plugin indicates that it follows the same path as the CV conversion... so there's no official "faster way" in any existing sample code to pattern after.)

So: if the slow CV path is "good enough", I can streamline that a bit, but it's not going to be much faster than the current workaround.

psonice's picture
Big Smilie Face

That's put a huge smile on my face :D

As unlikely as it is in real life, I happen to want only the most simple and fastest case: my inputs are always going to be LinearRGB! I'm only interested in generated inputs (i.e. mixing + processing textures, which will be in an RGB format) so I can skip anything YCC altogether. Yay!

Of course I bet everyone else wants to use live video and such with YCC formats :D (actually, I'd love YCC too... I've no serious use for it, but you can have so much fun with a camera and a pixel shader! Gratuitous worm-shader example :) )

So do you have some code 'on the way' for this (either as a plugin, or just a chunk of source I could adapt)? Otherwise, I'll get back onto it myself, and subject to getting it actually working, it can go on here somewhere.

toneburst's picture
If my information is

If my information is correct, though, whatever method you use to get sampler2Ds working in the Vertex Shader, it's ALWAYS going to end up with the shader executing on the CPU, so it's almost inevitably going to be slower than the heightfield method. If you're using the OpenCV plugin to convert the image format, you may as well use the heightfield. Obviously, it needs some tweaks to do what we want, but it's got to be the way to go, in the end.

Having said that, if Apple's NVIDIA drivers are updated to support sampler2Ds in the VS, then things might be different.


cwright's picture
future

For now, I think you're right, ATI-wise. Current ATI hardware lacks the ability to do texture lookups in the vertex shader. Current NVIDIA hardware does have it, and I believe the upcoming ATI cards will support it too, so it's only a matter of time...

psonice's picture
Software VS is fine

I think for my case it doesn't matter much if it gets executed in the CPU. I'll be using low res textures (probably ~96x96, 16bit colour), and my composition is currently pretty much limited by the GPU speed. Offloading might even help a little, as it'll be a fairly small texture transfer but a fair bit of vertex processing.

Anyway, I've got a plugin that 'works' (after several false starts, including one where I got an error-free build that showed up in QC with no name or description, and caused a nasty crash :/ ). It just doesn't actually do anything yet :D (Well, it accepts an input image. Gotta figure out how to output the image again now.)

psonice's picture
Spot the mistake

"@property(assign) id<QCPlugInInputImageSource> outputImage;"

I really hate it when that happens :)

So, I have a working plugin, with an input image and an output image, all seems well except that nothing is output yet as I haven't connected the two together yet. How do I modify the QCPlugInOutputImageProvider to output with GenericRGB and point it at the input image?

psonice's picture
Can has plugin

After much headscratching (mostly pixelformat related), I have a working, colorspace stripping plugin. The input image is linearRGB, the output image is GenericRGB.

It doesn't seem to work, though... I no longer get QC crashes, which is a good sign, but the vertex shader doesn't seem to do its job. I only had a few minutes to test, though, and it looked like something strange was going on (the GLSL grid was massive, despite being set to size one).

I'll test more later on tonight.

psonice's picture
(No longer) Has broken plugin :(

Edit: It was another stupid mistake :( Anyway, I have it working, and outputting images in ARGB8, GenericRGB. That's the same as the Plasma patch, except that Plasma outputs with GenericRGB (Uncorrected). I still get a crash from GLSL. :/

Anyone know how Uncorrected GenericRGB can be obtained? I see no reference to it in the docs...

/edit

Right, much testing later (and enough head scratching to cause baldness), I've come to the conclusion that I'm doing it wrong.

To test the plugin, I'm using a Plasma patch connected to a GLSL patch with a basic displacement shader. It works fine. Then I connect the Plasma into my 'colorspace remover' plugin, and connect that to a billboard to make sure it works (it does, and the gamma is noticeably different). Then I connect it to the GLSL texture input, and enable the GLSL patch. Guaranteed crash :(

A quick summary of what I've done in the plugin: (I'll post up the project if it's useful, but I'll have to tidy it first ;)

  • I have a standard QCPlugInInputImageSource bringing the image in, and an output image to send it back to QC.
  • I create a new output image provider, using the attributes from the input image, and use that for the output image. However, I change two attributes: the pixel format (which becomes RGBAf; anything else gives me a stretched image and wrong colours) and the colorspace.

For the colorspace, I've tried GenericRGB, GenericRGBLinear, and a DeviceRGB space. None worked.

I think the problem could be one of two things: the pixel format (which in theory shouldn't change from whatever the input is, but does... maybe I'm getting the bytesPerRow wrong?) or the colorspace (the Plasma patch outputs "GenericRGB (Uncorrected)", while my plugin outputs plain GenericRGB).
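On the bytesPerRow guess: one plausible gotcha is row padding. Sketched in C (the 16-byte row alignment here is my assumption; check the QCPlugIn docs for the real requirement):

```c
#include <stddef.h>

/* Round a row length up to a 16-byte boundary. If rows are padded this
   way, bytesPerRow isn't simply width * bytes-per-pixel, and reading the
   buffer with the unpadded value would skew the image and its colours. */
static size_t aligned_bytes_per_row(size_t width, size_t bytes_per_pixel) {
    size_t raw = width * bytes_per_pixel;
    return (raw + 15) & ~(size_t)15;
}
```

Notably, RGBAf is 16 bytes per pixel, so every row is already aligned in that format, which may be why it's the only one that looks right.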

So, any pointers?