|
How do you read a composition from data using an NSOpenGLContext for QCRenderer?

I've been steadily building my own application in Xcode. As part of this I've broken the initial comp down into processor compositions that I can manipulate in code, with a final consumer composition to do the final rendering. Eventually I want to encode all the comps for the app as data, and use the QCComposition method compositionWithData: to read them at runtime. My trouble is the final consumer comp. With the processor comps I can use QCRenderer's initWithComposition:colorSpace:, but for OpenGL there's only initWithOpenGLContext:pixelFormat:file:; it doesn't seem to be possible to init with a composition. There is initWithCGLContext:pixelFormat:colorSpace:composition:, but it didn't work for me, very likely because I'm not using it right. I have the NSOpenGLContext and NSOpenGLPixelFormat for my OpenGL view, and can get the equivalent underlying CGL versions of each. I tried to init with those, but my app stopped rendering, and wouldn't work again until I restarted and put it back how it was. I suspect I need more steps if this is the right way to go, but I'm not sure how to get from the CGL context and draw back to the NSOpenGLContext. Any pointers?
|
Look at some example code; it shows clearly how to use OpenGL with Quartz Compositions and how to pass the output keys of one comp to the input keys of a second, to daisy-chain processing.
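A minimal sketch of the daisy-chaining idea, assuming two QCRenderers already exist and that the comps publish ports named "outputImage" and "inputImage" (those key names are hypothetical; use whatever your compositions actually publish):

```objc
// Render the processor comp, then feed its output into the consumer comp.
NSTimeInterval time = /* your render time */ 0.0;
[processorRenderer renderAtTime:time arguments:nil];
id image = [processorRenderer valueForOutputKey:@"outputImage"];
[consumerRenderer setValue:image forInputKey:@"inputImage"];
[consumerRenderer renderAtTime:time arguments:nil];
```

For image ports this is cheap as long as both renderers were created against the same (or shared) GL contexts, since the image can stay on the GPU.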
If you do the above, you need to ensure that all compositions init with compatible OpenGL contexts (either the same, or that all have a root share context).
Short story is:
1. Init a GL pixel format.
2. Init a GL context and bind it to an NSView.
3. Create a QCComposition.
4. Create a QCRenderer linked to the composition and the GL context.
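The steps above can be sketched roughly as follows (myView and compositionData are assumed to exist; error handling omitted):

```objc
#import <Cocoa/Cocoa.h>
#import <Quartz/Quartz.h>

// 1. Pixel format.
NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFADepthSize, 24,
    0
};
NSOpenGLPixelFormat *pixelFormat =
    [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];

// 2. Context, bound to a view.
NSOpenGLContext *glContext =
    [[NSOpenGLContext alloc] initWithFormat:pixelFormat shareContext:nil];
[glContext setView:myView];

// 3. Composition from data, exactly as you planned.
QCComposition *composition = [QCComposition compositionWithData:compositionData];

// 4. Renderer linked to the composition via the underlying CGL objects.
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
QCRenderer *renderer =
    [[QCRenderer alloc] initWithCGLContext:(CGLContextObj)[glContext CGLContextObj]
                               pixelFormat:(CGLPixelFormatObj)[pixelFormat CGLPixelFormatObj]
                                colorSpace:colorSpace
                               composition:composition];
CGColorSpaceRelease(colorSpace);
```

After each renderAtTime:arguments: call you still need to flush the NSOpenGLContext yourself (e.g. flushBuffer for a double-buffered context) for the result to appear in the view.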
If you want to load more compositions and render to the same context, just keep re-using it. If you want to render things offscreen, to separate but compatible contexts, you need to do some more work: make a root GL context, make other offscreen contexts that share with your root context, and init the various QCRenderers from those.
Well, it's been a while; I now have a better understanding of OpenGL and its contexts, and have implemented multiple contexts as you suggested.
One thing I wondered about with offscreen contexts: I'm using them with FBOs, but found I couldn't create an FBO unless the context had a view or a pixel buffer set. I've seen a CLI tool that seemed to be able to do this without that step, but it wouldn't work in my app. I solved it by adding a pixel buffer of size 1 to the context, but I see that pixel buffers are deprecated, so that's not a good solution. Is there a better way to do it? The offscreen rendering is handled by a class that doesn't really have access to a view, as it's never tied to a single one; otherwise I would just tie the offscreen contexts to a common view.
To create the offscreen contexts, I generate a new context using a shared primary context and a single pixel format that's used throughout the app, to ensure OpenGL resources can be shared between everything in my app.
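For reference, a sketch of that setup plus a texture-backed FBO, assuming app-wide rootContext and appPixelFormat objects (names hypothetical) and the old EXT framebuffer-object entry points of that era:

```objc
#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>

// Offscreen context sharing resources with the app's root context.
NSOpenGLContext *offscreen =
    [[NSOpenGLContext alloc] initWithFormat:appPixelFormat
                               shareContext:rootContext];
[offscreen makeCurrentContext];

// Colour texture to render into (512x512 chosen arbitrarily).
GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);

// FBO with the texture as its colour attachment.
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // handle incomplete framebuffer
}
```

Whether FBO creation succeeds without any drawable attached to the context varies by driver, which may explain why the CLI tool worked where the app didn't.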
Incidentally, I had completely failed to grasp that the CGLContextObj can be obtained from the NSOpenGLContext via its CGLContextObj method, which was the reason for the original question.
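For anyone else who missed it, both NS-level objects expose their CGL counterparts directly (a cast may be needed on older SDKs where the accessors return void *):

```objc
CGLContextObj cglContext = (CGLContextObj)[nsContext CGLContextObj];
CGLPixelFormatObj cglFormat = (CGLPixelFormatObj)[nsPixelFormat CGLPixelFormatObj];
```

These are exactly the values initWithCGLContext:pixelFormat:colorSpace:composition: expects.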
EDIT: I replaced my pixel buffers with offscreen views, this still doesn't seem right, but at least I'm not using deprecated pixel buffers anymore.