Deeper understanding of graphic manipulation

Raconteur's picture

Hi gang,

I posted this in another thread, but should have started a new one instead (Hey! I'm a poet!) Sorry for the faux pas.

My abilities with QC are severely limited by my ignorance of 3D math beyond a basic level.

Can anyone point me in the right direction for some reading material (books or web sites) to help me understand the hows and whys of things like the JS functions attached to the CI Optical Flow node in protarco's Leaves.qtz comp?

From a programming standpoint, it makes sense; I just don't know WHY or HOW that stuff was arrived at.

Thanks for any guidance.

C

gtoledo3's picture
Re: Deeper understanding of graphic manipulation

I don't think there's a good resource for JavaScript that is focused on QC and that would inform you about the actual logic behind why things are a particular way.

There is the original QC Japan site and its JavaScript-in-QC notes, and then Cybero's transcription of that. Those give examples of acceptable syntax, and make note of things that work in QC and some that don't, but they're not going to inform you about WHY or HOW.

I find the "Lost Manual" series and "For Dummies" series of books to be really quick and easy reads (OT, and about 3D modeling, but the Blender "For Dummies" book helped me tremendously, as did the book on Cocoa), and they did more to help me understand JavaScript than any QC resource. Unfortunately, many resources will be web-centric, not QC-centric. Still, that's going to be the best bet for actually learning underlying principles. From there, it's much easier to figure out what works in QC and what doesn't.

Raconteur's picture
Re: Deeper understanding of graphic manipulation

Hi George,

Thanks for the input. I think I wasn't very clear in my question... Javascript, Cocoa and QC I am fine with. What I am ignorant of is how you know what needs to be coded in Javascript in something like http://kineme.net/files/Leaves.qtz

In the CI Optical Flow node, there is code like this:

kernel vec4 energyComputation(sampler image1, sampler image2)
{
   vec4 E;
   vec2 xy = destCoord();
   float Eijk = sample (image1, samplerTransform(image1, xy)),
      Eijpk = sample (image1, samplerTransform(image1, xy + vec2(0.,1.))),
      Eipjpk = sample (image1, samplerTransform(image1, xy + vec2(1.,1.))),
      Eipjk = sample (image1, samplerTransform(image1, xy + vec2(1.,0.))),
      Eijkp = sample (image2, samplerTransform(image2, xy)),
      Eijpkp = sample (image2, samplerTransform(image2, xy + vec2(0.,1.))),
      Eipjpkp = sample (image2, samplerTransform(image2, xy + vec2(1.,1.))),
      Eipjkp = sample (image2, samplerTransform(image2, xy + vec2(1.,0.)));
 
   E.x = 1./4.*(   Eijpk - Eijk + Eipjpk - Eipjk + 
            Eijpkp - Eijkp + Eipjpkp - Eipjkp);
   E.y = 1./4.*(   Eipjk - Eijk + Eipjpk - Eijpk + 
            Eipjkp - Eijkp + Eipjpkp - Eijpkp);
   E.z = 1./4.*(   Eijkp - Eijk + Eipjkp - Eipjk + 
            Eijpkp - Eijpk + Eipjpkp - Eipjpk);
   E.w = 1.;
 
   return E;
}
 
kernel vec4 neighborAverage(sampler u)
{
   vec2 xy = destCoord();
   vec4 res = (     sample (u, samplerTransform(u, xy+vec2(-1.,-1.)))/12. + sample (u, samplerTransform(u, xy+vec2(-1.,0.)))/6.
         + sample (u, samplerTransform(u, xy+vec2(-1.,+1.)))/12. + sample (u, samplerTransform(u, xy+vec2(0.,+1.)))/6.
         + sample (u, samplerTransform(u, xy+vec2(0.,-1.)))/6. + sample (u, samplerTransform(u, xy+vec2(+1.,-1.)))/12.
         + sample (u, samplerTransform(u, xy+vec2(+1.,0.)))/6. + sample (u, samplerTransform(u, xy+vec2(+1.,+1.)))/12.);
   res.a = 1.;
   return res;
}
 
kernel vec4 iteration(sampler energy, sampler u_average, float alpha)
{
   vec2 xy = destCoord();
   vec4 E = sample(energy, xy),
      u_av = sample(u_average, xy),
      u = vec4(0.);
 
   u.x = u_av.x - E.x * (E.x*u_av.x + E.y*u_av.y + E.z) / (alpha*alpha + E.x*E.x + E.y*E.y);
   u.y = u_av.y - E.y * (E.x*u_av.x + E.y*u_av.y + E.z) / (alpha*alpha + E.x*E.x + E.y*E.y);
   u.w = 1.;
 
   return u;
}

My ignorance is that I don't get things like why these particular functions are implemented, what functions are available as hooks (these seem like hooks from the way they are implemented), how the engine applies the hooks / what the flow of control is, etc.

Also, for something like: kernel vec4 iteration(sampler energy, sampler u_average, float alpha) {...} iteration is the function, but what is the return type? Is it vec4? If so, what is "kernel" there for?

Stuff like that... where can I learn it, without leaning as heavily on you guys as I already have/am? :)

Thanks!

vade's picture
Re: Deeper understanding of graphic manipulation

To learn a bit about Core Image, honestly, your best bet is to start learning about GLSL.

Core Image is a subset of GLSL with some things removed and some functions added (it does some extra stuff behind the scenes that is pretty awesome, but requires hiding some functionality to gain the optimizations).

GLSL is a programming language used to perform custom processing in different stages of OpenGL's rendering pipeline.

Older graphics cards (pre-programmable-pipeline) had 'hard coded' hardware paths for rendering. You could not fundamentally change how it textured, how it placed vertices, etc., except when you initially handed off the data to the graphics card. To change anything, you pretty much had to go back to the CPU, and then back to the GPU. This was slow.

GLSL allows you to program different parts of the pipeline using "shaders". There are now three main kinds: Geometry, Vertex, and Fragment (using OpenGL parlance).

Geometry shaders can't be used in QC (there is no patch for them), but they allow one to emit new primitives.

Vertex shaders let you mess with attributes of a single vertex, such as position, normal vectors, per vertex lighting information, etc.

Fragment shaders let you mess with the final stage of texturing and filling in a pixel, i.e. the rasterization phase, where you can do fun things like convolution (like my blurs), per-pixel lighting, and bump mapping/normal mapping.

Fragment shaders are the closest thing to Core Image, but understand that Core Image is designed specifically for 2D image processing; you can't "get at" lighting information passed in from the GL world, it's not there.

So learn GLSL, learn how the old GL pipeline works at least in general, and then start playing with GL Shader Builder (in Dev Tools), QC's Core Image Kernel patch and GLSL programming patches. You will start to get some ideas of how it fits together.
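
If it helps to see the moving parts outside of QC, a vertex + fragment pair gets compiled with the plain GL C API roughly like this. This is an untested sketch; the function name, shader source and the darkening effect are just made up for illustration, and QC's GLSL Shader patch does all of this for you behind the scenes:

#include <OpenGL/gl.h>
 
// A trivial pass-through vertex shader and a fragment shader that darkens
// each rasterized pixel -- the same kind of code you paste into QC's GLSL patch.
static const char *vertSrc =
   "void main() {"
   "   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
   "   gl_TexCoord[0] = gl_MultiTexCoord0;"
   "}";
 
static const char *fragSrc =
   "uniform sampler2D tex;"
   "void main() {"
   "   vec4 c = texture2D(tex, gl_TexCoord[0].st);"
   "   gl_FragColor = vec4(c.rgb * 0.5, c.a);"   // darken every fragment
   "}";
 
GLuint buildProgram(void)
{
   GLuint vs = glCreateShader(GL_VERTEX_SHADER);
   glShaderSource(vs, 1, &vertSrc, NULL);
   glCompileShader(vs);
 
   GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
   glShaderSource(fs, 1, &fragSrc, NULL);
   glCompileShader(fs);
 
   GLuint prog = glCreateProgram();
   glAttachShader(prog, vs);
   glAttachShader(prog, fs);
   glLinkProgram(prog);   // check GL_COMPILE_STATUS / GL_LINK_STATUS in real code
   return prog;           // glUseProgram(prog) before drawing
}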

Raconteur's picture
Re: Deeper understanding of graphic manipulation

Thanks, Vade, that was EXACTLY what I needed.

Much obliged!

cybero's picture
Re: Deeper understanding of graphic manipulation

Coding Leaves in JS alone would in all probability be pretty pointless with regard to the resulting performance. Core Image is just much faster, and I don't think I can add much to what George has already said about the need to learn the lingos.

The JavaScript in the Core Image patch differs slightly, in what can and can't be done, from the JavaScript patch, and is meant to handle the dynamic variables of the CI kernel.

Raconteur's picture
Re: Deeper understanding of graphic manipulation

What would you recommend in terms of learning that stuff, though? Any sites, docs, or books you can point me to?

cybero's picture
Re: Deeper understanding of graphic manipulation

Research, Experimentation, Practice & Study.

I find that OpenGL Shader Builder is useful for pointing one in the right general direction.

Often OpenGL code, especially, say, when modelled upon what one has gleaned from the GLSL tutorials section of opengl.org, will link A-OK in the builder, but require some clarification / rectification to run sans hiccups on OS X.

OpenGL Shader Builder can, effectively, be a good debugger for GLSL that doesn't say it's got a problem in the GLSL patch, but just doesn't work.

The best site by far is actually Apple's.

Try http://developer.apple.com/mac/library/samplecode/GLSLShowpieceLite/Intr...

Thereafter, sites like this forum, blogs & 3rd party guides abound, but not in great plurality. To be expected.

Well, OS X is a variant of Unix, after all [& Unix desktops account for a relatively small percentage of the whole desktop population].

See Resources for a page with some programming links, amongst other Quartz Composer related links. It needs to be updated, in fact. Especially for Core Image.

cwright's picture
Re: Deeper understanding of graphic manipulation

New in OpenGL 4.0, there are two shader stages added between the vertex and geometry shaders. They are

  • the (Tessellation) Control Shader
  • the (Tessellation) Evaluation Shader

(There's also a "Primitive Generator", the tessellator, between those two, but it's not really a shader).

These allow something incomprehensibly awesome: recursive subdivision to arbitrary precision. So instead of sending polygon meshes, you can send cubic bezier spline-defined models (or even NURBS-defined surfaces, though that'd be quite expensive) and get per-pixel polygon output, all done on the GPU. Holy Freaking Crap.

(This is all borrowed from DX11)

If you think it's complicated now, I have a feeling it's just beginning :)

I think learning fixed-function stuff is antiquated and mostly useless in modern times -- shaders (even just vertex and fragment) are a much better superset of the entire fixed-function pipeline. Learning the basic flow of data might be useful, but that's about it (all the multitexture stuff from fixed-function land is dead, and has been for almost a decade now, thank goodness).

vade's picture
Re: Deeper understanding of graphic manipulation

Yeah, but OS X will get GL 4.0 support in 2020 :P

usefuldesign.au's picture
Re: Deeper understanding of graphic manipulation

cwright wrote:
If you think it's complicated now, I have a feeling it's just beginning :)
Yeah I think it's really complicated already!!

Sounds awesome to be able to send a NURBS (or cubic spline) model to the GPU for pixel accurate lighting, bumping etc though. Of course QC seems to be the bottleneck on performance from what Vade and yourself said in other threads.

Any advice for a CI Filter primer, cwright, like a "CI Filters for Dummies" level?

usefuldesign.au's picture
Re: Deeper understanding of graphic manipulation

vade wrote:
Yeah, but OS X will get GL 4.0 support in 2020 :P
cwright wrote:
(This is all borrowed from DX11)
So that puts OS X ten years behind Windows in at least one respect, when OpenGL 4 support is added (one out of a million respects, to put that in perspective)

;)

cwright's picture
Re: Deeper understanding of graphic manipulation

Hopefully it won't take all that long (I honestly have no idea, and no inside knowledge on this) -- OpenCL already exposes most of what's necessary for this, so it's not like there's a ton of catch up to be made.

Of course, there is a precedent for OS X taking waaaay too long to get any of the good stuff... :/

cwright's picture
Re: Deeper understanding of graphic manipulation

For CI, poking about in existing filters, learning GLSL (mostly the fragment program), and learning basic image-filtering paradigms are all handy. It's a widely open-ended technology, so asking for a primer on CI is like asking for a primer on ObjC -- there's no possible way to know what facet you're actually interested in, and no possible way to cover the whole thing.
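
As a concrete starting point, poking at an existing filter from Obj-C is only a few lines; something like this (a rough sketch, and the function name and radius here are arbitrary):

#import <QuartzCore/QuartzCore.h>
 
// Apply a stock Core Image filter to a CIImage -- useful for poking at what
// existing filters do before writing your own kernels.
CIImage *blurredImage(CIImage *input)
{
   CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
   [blur setDefaults];
   [blur setValue:input forKey:kCIInputImageKey];
   [blur setValue:[NSNumber numberWithFloat:8.0] forKey:kCIInputRadiusKey];
   return [blur valueForKey:@"outputImage"];
}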

QC's not too much of a bottleneck if the composition isn't doing much -- i.e. you load a mesh and send it to the renderer. It's when you start iterating, or filtering textures, or deforming vertices that it starts to bog itself down.

vade's picture
Re: Deeper understanding of graphic manipulation

I don't know, QC is kind of a hog, something I am finding out about first-hand on the OpenEmu project.

For example, we have a pretty straightforward drawing case:

OpenGL Texture rect -> CIImage imageWithTexture -> QC -> Render to CAOpenGLLayer

The QC comp loaded is literally an image to a billboard.

vs

OpenGL Texture Rect -> draw to a Quad via vertex arrays.

I was profiling and noticed a shit ton of time being spent in QC. We use QC for advanced filtering options and so users can have fun, but for straight drawing it's a huge loss. I'm kind of surprised how bad it is, actually.

I did a test just now (I'm working on optimizing OpenEmu for a pure 10.6 release):

QC: drawing 5 games using a billboard: ~40%+ CPU usage. GL: drawing 5 games straight: ~20% CPU.

QC: drawing 1 game is around 10-12% CPU. GL: drawing 1 game is around 5-6% CPU.

You can see the commented code here; there are two cases, one straight GL and one QC.

http://openemu.svn.sourceforge.net/viewvc/openemu/branches/iosurface/Ope...

edit: check it out around line 252; you can see my two cases. Note I also tested to see if the colorspace we were using factored in; it does not make any meaningful difference.

cwright's picture
Re: Deeper understanding of graphic manipulation

QC's a hog in 4 places that I can think of:

  • Graph evaluation (it's hyper-object oriented, which is bad)
  • Image IO (because it has the ability to handle non-GPU textures, including vector images) -- this could be something you're running into in this case (I'm really interested in this use case, so please make a simple "this shouldn't take forever" app or two, do some profiling, and file a radar -- you've done most of that already, it looks like, including performance stats)
    • By file a radar, I mean file a radar. You(vade)'re typically good about this, but other QC users (myself included) have been really bad about that.
  • doing massive input/output to/from JavaScript (a ton of unavoidable format conversion happens there)
  • possibly GLSL/CI/CL inputs (some more format conversion happens there, but it's not nearly as costly as for JS).

Your code is making a GL texture from an IOSurface -- congrats, that's free (IOSurfaces are GL textures, so no additional processing needs to happen) -- that's a huge win. The QC variant is feeding in a CI Image, which is not actually a GL Texture. Texturizing CIImages might not be as cheap as we'd like (and of course there is QC overhead, which in this case shouldn't be too bad, but I could be wrong ;)
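
For reference, the "free" path is just wrapping the IOSurface in a texture; roughly this (an untested sketch, assuming a BGRA surface and a rectangle texture target, and the function name is made up):

#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#import <OpenGL/CGLIOSurface.h>
#import <IOSurface/IOSurface.h>
 
// Wrap an existing IOSurface as a GL texture -- no copy, no format conversion.
GLuint textureFromSurface(CGLContextObj cgl_ctx, IOSurfaceRef surface)
{
   GLuint tex;
   glGenTextures(1, &tex);
   glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
   CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA8,
                          (GLsizei)IOSurfaceGetWidth(surface),
                          (GLsizei)IOSurfaceGetHeight(surface),
                          GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                          surface, 0);
   glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
   return tex;
}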

QC not supporting IOSurfaces as an input could possibly be considered a bug? (I don't know enough about IOSurfaces to know if that makes sense).

vade's picture
Re: Deeper understanding of graphic manipulation

Well, if you look closely, they're both using the same IOSurface; the QC case is using [CIImage imageWithIOSurface:]. The IOSurface is a texture attachment from a background app's FBO, where we render each game core's texture individually (we do this to keep the tons of globals in each emulator core from over-writing state in another instance of the same core, ugh, not fun).

There is (possibly) a conversion there in the CIImage stage for QC; the IOSurface is an RGBA8 GL_UNSIGNED_INT_8_8_8_8_REV texture format, so maybe CI is rendering it to a 32-bit float PBuffer or something for its native linearized high-bit-depth version? I don't know; all I know is it slows things down!

Looking at the Shark traces, most of the time was spent in CIContext, within the QC framework, so it could be something like that. I'd love it if there was a way to feed a texture directly to Quartz Composer somehow, rather than having to use a CIImage, or a CVOpenGLTextureRef (which you cannot make out of thin air from an existing texture)

[myQCRenderer setValue:myTextureID forKey:@"inputImage"];

but, you'd need to have a way to declare the texture size, flippedness, etc. Hm.

cwright's picture
Re: Deeper understanding of graphic manipulation

For all QC's flexibility, I too ran into the same problem of saying "Here, I've got a nice shiny GL texture on the proper context already set up, please eat it without doing any extra work!". I've used CI to create a CIImage from a GL Texture, but I don't think it's anywhere near as cheap as it appears :/

The size of a GL texture can be queried internally: glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &param), so declaring the size isn't necessary. Flippedness would be necessary; not sure about format etc.
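
In full it'd be something like this little helper (hypothetical name, 2D target assumed; use GL_TEXTURE_RECTANGLE_ARB for rect textures):

#include <OpenGL/gl.h>
 
// Ask GL for the dimensions of an existing texture object.
void textureSize(GLuint tex, GLint *w, GLint *h)
{
   glBindTexture(GL_TEXTURE_2D, tex);
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, w);
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, h);
}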

If you have some time, I'd be interested in snooping around the shark traces to see what QC's doing (I can do this myself, of course, but I hate writing examples that try to simulate performance problems other people are experiencing ;).

vade's picture
Re: Deeper understanding of graphic manipulation

Yeah, but doing glGets is a bad thing; doesn't it stall the pipeline, or act as a synchronization point?

usefuldesign.au's picture
Re: Deeper understanding of graphic manipulation

Please continue…

cwright's picture
Re: Deeper understanding of graphic manipulation

It's a sync point, but it only has to happen once (unless you change the image every frame, which is plausible and maybe even likely). I'm just saying it's not essential, not that it would be a good idea :)

I'm really not sure what the best way to go about exposing this would be, but it'd definitely be a massive performance win, if CI is in fact the bottleneck.

Have you profiled other image types (CGImageRef, NSBitmapImageRep being my first two picks, since they're lightweight, but not vram-backed)? That might involve a roundtrip from vram to sysram and back, but knowing if that helps or hurts could be an interesting data point.
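
e.g. for the bitmap case, something like this (a sketch; the port name "inputImage" is whatever your composition actually calls it):

#import <Cocoa/Cocoa.h>
#import <Quartz/Quartz.h>
 
// Feed a CPU-side NSBitmapImageRep to a QCRenderer instead of a CIImage,
// just to see how the image type affects QC's per-frame cost.
void renderBitmapFrame(QCRenderer *renderer, NSBitmapImageRep *rep, NSTimeInterval t)
{
   [renderer setValue:rep forInputKey:@"inputImage"];
   [renderer renderAtTime:t arguments:nil];
}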

(the more I think about it, the less I think glGet has to be a sync point -- texture sizing functions aren't too frequent, and it would in theory be possible for the GL implementation to walk up the command queue to find the "freshest" texture state changer -- I might be missing some blindingly obvious details on that though, just idly speculating.)

cwright's picture
Re: Deeper understanding of graphic manipulation

If you're asking about exposure, CL's already feature-complete (you can do anything). As such, you can feed CL vertices (equivalent to GL), it can have processing stages (equivalent to GL vertex/geom shader stages), which could include the 4.0 processing stages, and then rasterization/fragment emission (the final step of the GL pipeline). By doing all of the above you'd in effect be reimplementing all of GL in CL, and it would be a ton of work, but because it's possible Right Now it's not like there's a fundamentally new feature in GL 4.0 (it's just that it's been standardized, and has potential hardware support). Shaders were a fundamentally "new" thing in GL -- prior to that, you couldn't fake them in hardware. Geometry shaders were a fundamentally new thing to GL, since you couldn't fake them any other way (maybe via FBO-as-vertex-arrays, but not generically as far as I know). After CL, though, adding arbitrary shader stages isn't "new" anymore, because it's been fully generalized (I can send whatever data I want, do whatever generic processing I want, and read/apply the results however I want).

Raconteur's picture
Re: Deeper understanding of graphic manipulation

Thank you muchly! Will check it out and see how my brain holds up. :)

cybero's picture
Re: Deeper understanding of graphic manipulation

I'll be looking forward to seeing the results that you get, Raconteur. Think OO :-).

Raconteur's picture
Re: Deeper understanding of graphic manipulation

LOL! I certainly will... I see the entire world through OO-eyes... I think it is part of my DNA after 20+ years of it! :)

usefuldesign.au's picture
Re: Deeper understanding of graphic manipulation

Quote:
Of course, there is a precedent for OS X taking waaaay too long to get any of the good stuff... :/
That was what I was asking the answer to, actually.

vade's picture
Re: Deeper understanding of graphic manipulation

Do you want me to actually save and send you traces in Shark? I can do that. I don't know about a simple demo app though. Hm. Maybe.

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

vade wrote:
Well, if you look closely, they're both using the same IOSurface; the QC case is using [CIImage imageWithIOSurface:]. The IOSurface is a texture attachment from a background app's FBO, where we render each game core's texture individually (we do this to keep the tons of globals in each emulator core from over-writing state in another instance of the same core, ugh, not fun).

There is (possibly) a conversion there in the CIImage stage for QC; the IOSurface is an RGBA8 GL_UNSIGNED_INT_8_8_8_8_REV texture format, so maybe CI is rendering it to a 32-bit float PBuffer or something for its native linearized high-bit-depth version? I don't know; all I know is it slows things down!

Looking at the Shark traces, most of the time was spent in CIContext, within the QC framework, so it could be something like that. I'd love it if there was a way to feed a texture directly to Quartz Composer somehow, rather than having to use a CIImage, or a CVOpenGLTextureRef (which you cannot make out of thin air from an existing texture)

[myQCRenderer setValue:myTextureID forKey:@"inputImage"];

but, you'd need to have a way to declare the texture size, flippedness, etc. Hm.

Sorry to raise an old thread. I'm new to Objective-C and trying to figure out how to properly package some of my QC apps, and I'm having real trouble figuring out how to feed images into my QC composition in Xcode. You come up a lot in my searches as I try to get a screen capture to work, and this thread has many answers... Many thanks.

So far I've got an IOSurfaceRef from CGDisplayStream (as used by yourselves). I want to feed this into my QC comp rather than use the plugin. I thought that once I got the stream working it would be fairly easy, but it seems very hard to get an IOSurfaceRef into the comp, or I'm doing something very wrong.

In my receive-new-frame method, I'm creating a CIImage using imageWithIOSurface:, as you had in your emu app, and setting it as the input image value of the QC comp, but it's just not working: when I build and run, performance starts dropping dramatically and no image is shown (except once, just so I know it can work...). I've used IOSurfaceIncrementUseCount/IOSurfaceDecrementUseCount to make sure the IOSurface is marked in use until the next frame is available, but it doesn't seem to make a difference.

I notice in your app the QC comp was rendered each time the image was set; I've just set up my QCView as in the docs and it handles its own rendering. Could that be an issue? Ultimately I don't want the app to be based on the refresh rate of the display stream.

Should I be creating a new CIImage each time? It worries me; do they look after themselves and get released after they stop being used? I read somewhere in the Core Image docs that you should avoid creating contexts, but CIImage doesn't seem to have any facility for contexts. Should I use something else that could be updated?

Anyway, it's had me stuck for a couple of days. This thread and its links have given me much to look into; hope someone can give me some pointers or insights.

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

Well, progress anyway: I subclassed QCView and changed the renderAtTime: method to create and set the CIImage, using IOSurfaceLock/Unlock while making the CIImage and rendering the composition. The display stream runs on the app delegate and just passes updated IOSurfaceRefs to the QCView; the QCView should only use the surface ref current at render time. I now get the screen image, but it gets progressively slower and slower till it either hangs or crashes.

vade's picture
Re: Deeper understanding of graphic manipulation

That sure sounds like a leak to me. Ensure you are properly managing the lifetime of your objects and properly releasing resources. If you are using a CVDisplayLink on the desktop, ensure you wrap your method in an autorelease pool: since it's on a separate thread, you don't inherit a release pool from the main run loop on thread 0.
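
i.e. in the display link callback itself, something like this (a sketch; the callback name and body are placeholders):

#import <CoreVideo/CoreVideo.h>
 
// CVDisplayLink calls back on its own thread, so wrap the per-frame work in an
// autorelease pool; otherwise autoreleased objects (CIImages etc.) pile up.
static CVReturn displayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
   @autoreleasepool {
      // grab the latest IOSurface, build the image, render the frame...
   }
   return kCVReturnSuccess;
}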

Additionally, you only need to lock / unlock the IOSurface if you hit it on the CPU side and modify it (ie, write into it, or read from it).

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

Thanks. I'm struggling a bit with making sure objects are released. Going through example code, I see code for releasing objects, but a lot of it now seems to be disallowed by ARC when I try it; that's why I was worried about whether the CIImage gets released after rendering, because I couldn't seem to do it explicitly.

vade wrote:
Additionally, you only need to lock / unlock the IOSurface if you hit it on the CPU side and modify it (ie, write into it, or read from it).

If I don't lock the surface during CIImage imageWithIOSurface: then I just get a black image; I guess CIImage is doing something it shouldn't. I'm trying to figure out if there is another class that might be better.

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

Still pulling my hair out over this... I managed to get some improvement using [[CIImage alloc] initWithIOSurface:surface] over imageWithIOSurface:, as I read somewhere it might help. It does, but the app still slows to a standstill; it just takes a bit longer.

I found the page with the image classes accepted by the QC renderer; I was looking at the wrong class (still hunting around a lot): https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Q...

Image types: NSImage, NSBitmapImageRep, CGImage object, CIImage, CVPixelBuffer object, CVOpenGLBuffer object, CVOpenGLTexture object, or an opaque QCImage (that is, an optimized abstract image object only to be used with setValue:forInputKey: of another <QCCompositionRenderer>)

It's not been much use to me yet. It was fairly easy to make a CVPixelBufferRef out of an IOSurface, but can I get the buffer object itself? I don't know how; very frustrating.

The only way I've seen IOSurface working is in QCPlugIns, using the internal QC image protocols; maybe that's the only way to go.

vade's picture
Re: Deeper understanding of graphic manipulation

Er, no.

You can 100% get IOSurface working correctly in QC - Apple does it, we do it for the v002 Movie Player and for Syphon.

Post some code?

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

vade wrote:
Er, no.

You can 100% get IOSurface working correctly in QC - Apple does it, we do it for the v002 Movie Player and for Syphon.

Post some code?

Sorry, that's what I meant. I've seen it used in QC; I've looked at the plugins and seen Apple's example implementations with the movie player. But they all (that I've seen) do so via a QC plugin, setting the OpenGL texture and using the QCPlugInOutputImageProvider protocol to provide the image output of the plugin.

I wanted to supply this from outside of QC and feed the image into the QC comp without using a custom plugin. While I want to figure out how to start replacing my comps with OpenGL, I still want to be able to load them, as they're a pretty handy way to do some things quickly, and I can run with my existing stuff while I improve on it.

I'm new to Objective-C, so don't be too cruel to the code... although if you see some screamers, please point them out.

New to GitHub too; hope I've got that set up correctly: https://github.com/seriouscyrus/Quartz-Experiments

EDIT: The app is rough and ready; to access the screen capture, you need to enter 3 in the second text field and then click the QC view (details, details).

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

I hadn't realised custom plugins could be packaged and referenced entirely within an application; this might be the way to go for me, and I can start with the QTVideo examples. I'd still like to know an efficient way to do it that's ready for a QC input.

vade's picture
Re: Deeper understanding of graphic manipulation

Looking at the code quickly, without running it:

I see you are using the new 10.8 CGDisplayStream with a callback. However, that means you are running your QCView / renderer on something other than the main thread; in fact, a dispatch queue can run on any particular set of threads. It sounds odd, but the OS manages a pool of threads on your behalf for Grand Central Dispatch, and that in turn is an issue for Quartz Composer, as it pre-dates that API.

A restriction on a QCRenderer is that you must run it from the same thread you started it on. Secondly, if you are using a QCView, that must hit the main thread.

Try switching

displayQueue = dispatch_queue_create("scqcexp.mainqcview.displayQueue", DISPATCH_QUEUE_SERIAL);

to

displayQueue = dispatch_get_main_queue();

Also, I'm not sure you need the increment/decrement IOSurface use count in there; it seems unnecessary, as the CIImage will do the right thing for you.

Serious Cyrus's picture
Re: Deeper understanding of graphic manipulation

Thanks, I switched the queue; not sure if it made a difference. I also figured out how to use a CVPixelBuffer instead of a CIImage, and that seems to eliminate any need to lock and unlock the IOSurface; I couldn't get the CIImage to work without them. Using the pixel buffer also seems to stop the application slowing down. It's not as fast as it could be, but I feel better for making some progress.
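
For anyone following along later, the pixel buffer route boils down to roughly this (simplified from my code, so treat it as a sketch; ARC is assumed for the cast, the function name is made up, and "inputImage" is just my comp's port name -- a QCView works the same way via setValue:forInputKey:):

#import <Quartz/Quartz.h>
#import <CoreVideo/CoreVideo.h>
#import <IOSurface/IOSurface.h>
 
// Wrap the incoming IOSurface in a CVPixelBuffer and hand it to the composition;
// no CIImage and no manual IOSurface locking needed. The caller releases the
// returned buffer (CVPixelBufferRelease) once the frame has been rendered.
CVPixelBufferRef pushSurface(QCRenderer *renderer, IOSurfaceRef surface)
{
   CVPixelBufferRef buffer = NULL;
   if (CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface,
                                        NULL, &buffer) != kCVReturnSuccess)
      return NULL;
 
   [renderer setValue:(__bridge id)buffer forInputKey:@"inputImage"];
   return buffer;
}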

I think I now need to figure out how to properly control the rendering of the QC comp. As it is, I can't understand when the renderAtTime: method is called; sometimes it runs all the time, sometimes not, depending on what I do inside and outside the method.

dust's picture
Re: Deeper understanding of graphic manipulation

Thank you, Kineme. You guys help people solve undocumented issues like IOSurface etc... please maintain Kineme even when view is finished ;)