Did 32bit image handling in QC change?

psonice

I've got a major problem just now with 32bit modes not working. I won't post the comp; it's rather complex (it's a new "pipelined" raytracer I'm building), but it works roughly like this:

  1. Render a scene in 32bit float format (using RII, set to 32bit, depth disabled, color correction disabled). The output isn't colour, but a mix of depth and coordinates, so there's a wide range of values including negatives.

  2. Use that as a texture in the next stage.

It falls down here. The tooltips tell me the image that stage 2 receives is RGBAf (32-bit float), which is correct. However, it contains NO negative values, and positive values seem to be limited to 8-bit accuracy! So it seems to be telling me it's RGBAf while it's actually RGBA8.

Any idea what might be going on? I did install the latest iOS 5 SDK earlier, which could have updated something (I have QC 4.0/103.1 and framework 4.2/156.30).

dust
Re: Did 32bit image handling in QC change?

I just upgraded to Xcode 4.2 / iOS 5 and can confirm that what you're saying is happening to me as well. It seems like the problem is with disabling color correction. I ran the RII through a CL kernel with 32-bit input and output, and the native color space setting set to absolute linear. That seems to put the image into a known (and not uncorrected) color space, but it also turns GL backing on, which may or may not put you back into the same image as if the RII was set with color correction and the depth buffer on.

I don't know how critical color correction etc. is to your project, but using a CL kernel puts your image into rgbaFFFF even though it reports rgbaF. I don't really have an answer as to why; I can only offer my test results. I could guess what's going on, but normally I'm completely wrong whenever I assume or guess at something, so I'll refrain from offering any speculation as to what the issue is and leave that up to the big boys.
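
For reference, the kernel was basically just a pass-through, something along these lines (a sketch only - the kernel and argument names are made up, and the 32-bit input/output depth is chosen in the patch settings, not in the code):

    // pass-through kernel: read every pixel as float4 and write it back.
    // doing nothing to the data still forces the image through a known
    // 32-bit float pixel format
    __kernel void passthrough(__read_only image2d_t src,
                              __write_only image2d_t dst)
    {
        const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE |
                              CLK_FILTER_NEAREST;
        int2 pos = (int2)(get_global_id(0), get_global_id(1));
        float4 px = read_imagef(src, smp, pos);   // always float4 ("FFFF")
        write_imagef(dst, pos, px);
    }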

Attachments: 32_cl_linear.png (30 KB), 32_rii.png (73.2 KB), Screen Shot 2011-07-11 at 1.29.46 PM.png (16.9 KB), rgbaFFFF.qtz (5.08 KB)

cybero
Re: Did 32bit image handling in QC change?

Unable to reproduce your problem, following your disclosed parameters, unless they also included zero width and height parameters on the RII patch, which was the only setup that reproduced nil colour values. As soon as I introduced any width or height, the RII worked. Please note, I'm not running iOS as yet, although I am running Xcode 4.1 in this testing instance.

I get LinearRGB Colorspace and Internal_RGBAf as the Native Pixel Format. My guess is the primary contributing factor might be something to do with the raytracer GLSL, unless that is what the texture is being piped into rather than derived from.

psonice
Re: Did 32bit image handling in QC change?

Did some more testing. It's... an odd one! I've attached a test comp. Switch between the two tests with the 'switch test' button, and turn testing on + off with the other.

When the test mode is on, the picture SHOULD look normal, so long as it's a 32-bit texture (the top test moves the image to the range -0.5 to 0.5 in the first RII, then back to 0 to 1 in the second; the bottom test moves the range to 0 to 0.01, then back to 0 to 1). Alpha is always set to 1 to avoid any premultiplication crap.
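
For clarity, the top test amounts to this kind of encode/decode round trip (just a sketch, written as a single CL kernel; the real comp splits it across two RIIs, and all names here are invented):

    // sketch of the top test as a single encode/decode round trip
    __kernel void roundtrip(__read_only image2d_t src,
                            __write_only image2d_t dst)
    {
        const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE |
                              CLK_FILTER_NEAREST;
        int2 pos = (int2)(get_global_id(0), get_global_id(1));
        float4 px = read_imagef(src, smp, pos);
        float4 enc = px - 0.5f;    // range [-0.5, 0.5]: negatives appear here
        // in the comp, enc is written to a texture by the first RII; if
        // that texture is really RGBA8, everything below 0 clamps to 0
        float4 dec = enc + 0.5f;   // back to [0, 1]: should match px exactly
        dec.w = 1.0f;              // alpha forced to 1, no premultiply issues
        write_imagef(dst, pos, dec);
    }

The bottom test is the same idea with a scale instead of a shift: in a genuine float texture the [0, 0.01] range survives intact, but in RGBA8 it quantises to only two or three levels.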

Both tests are failing for me regardless of the color correction setting, and the tooltips are showing some strange behaviour.

With colour correction disabled, I see:
Native pixel format: Internal_RGBAf (that's it! Exactly what I want!)
Texture backing: Internal_RGBA8 (wtf! 8-bit unsigned integer?!)

With colour correction enabled I get the same.

With colour correction enabled on the first RII but disabled on the second, I see:
Native pixel format: Internal_RGBAf
Texture backing: Internal_RGBAf

I mean, what the hell?! But it still fails the test!

This is on the last regular release of Xcode too, not a beta, so the bug is present in the standard production build as well. I'll get it reported.

Attachment: test32.qtz (17.38 KB)

psonice
Re: Did 32bit image handling in QC change?

Thanks for the corroboration. Disabling colour correction is pretty critical to this: I'm basically doing a low-res first-pass render which just outputs xyz coords + object ID for each pixel, then passing that to a second shader which does edge detection and renders at higher res where there's an edge, or just interpolates everywhere else (roughly as sketched below). It's at least 2x faster, and the quality difference is near zero :) But colour correction moves the coordinates, and then the edges don't match the rest :(
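
The per-pixel decision is trivial - something like this (a rough sketch only; the real shaders are GLSL, and the names and channel layout here are invented):

    // hypothetical second-stage edge test: compare object IDs from the
    // low-res first pass and only re-trace where neighbouring IDs differ
    __kernel void refine(__read_only image2d_t firstPass,
                         __write_only image2d_t mask)
    {
        const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE |
                              CLK_FILTER_NEAREST;
        int2 pos = (int2)(get_global_id(0), get_global_id(1));
        // assumed layout: xyz coords in .xyz, object ID in .w
        float id    = read_imagef(firstPass, smp, pos).w;
        float right = read_imagef(firstPass, smp, pos + (int2)(1, 0)).w;
        float down  = read_imagef(firstPass, smp, pos + (int2)(0, 1)).w;
        // edge: IDs differ, so raytrace at full res; otherwise interpolate
        float edge = (id != right || id != down) ? 1.0f : 0.0f;
        write_imagef(mask, pos, (float4)(edge));
    }

Which is also why colour correction kills it: shift the stored values and the comparisons stop matching.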

Anyway, doesn't look like colour correction actually matters.

What's RGBAffff btw? Isn't that the same as RGBAf? (RGBAf = FFFF, RGBA8 = 8888). And is that a CI filter you used? Not seen one like that before :)

dust
Re: Did 32bit image handling in QC change?

rgbaFFFF just denotes the float4 data type in OpenCL. It's basically the same as rgbaF in that both are four-component float vectors, FFFF being floats in the range 0 to 1. I'm not sure about the inheritance, i.e. whether FFFF is a vector of plain C floats or Float32s; I guess it all depends on the interpretation. For instance, if a plugin is accessing the FFFF data, its components are NSNumber types, which means they could be set to Float64 or even UInt32, or whatever you need, I suppose, if you're just dealing with the numbers. I'm pretty sure CL downsamples the floats to ints automatically when 8-bit output is selected for rendering in GL. I'd have to look up the kernels in the QC framework to know for sure, because normally you deal with FFFF, not IIII, in CL.
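
To illustrate the distinction (just a sketch, not anything from the QC framework):

    // inside a CL image kernel you always get float4 ("FFFF") back:
    //     float4 px = read_imagef(src, smp, pos);
    // converting that down to 8-bit-per-channel ("8888") clamps each
    // component to [0, 1] and quantises it to 1/255 steps:
    uchar4 to_rgba8(float4 px)
    {
        return convert_uchar4_sat_rte(px * 255.0f);
    }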

psonice
Re: Did 32bit image handling in QC change?

Ah, CL filter, not CI filter! I thought that looked totally wrong for Core Image :D

dust
Re: Did 32bit image handling in QC change?

I was able to get both tests to pass, top and bottom. A CL kernel after the RII in the 32-bit bottom test fixed the image for me. The top test passes for me as-is; I don't think I changed anything in it. I'm sure there's a way to use just a CL kernel to get the image into the format you need without the RII, as there are many CL data types; just converting the data source to a known type to begin with would be my first approach. Normally an RII will do this for you. For whatever reason I'm getting fails with color correction because the native pixel space is unknown; it's not until I put the image into a known pixel space that it renders correctly for me.

Attachments: 32_top.png (41.79 KB), 32_input.png (22.61 KB), 32_out.png (22.75 KB), test322.qtz (18.14 KB)

gtoledo3
Re: Did 32bit image handling in QC change?

Does your patch have anything that turns the RII into an external provider/purple? It's a long shot, but if so I'd try removing it, because that changes aspects of the RII version that's called, afaik. My guess, though, is that it's just screwed... Any driver updates along with the other stuff recently?

Edit: I see it's not the RII mode.

psonice
Re: Did 32bit image handling in QC change?

Well, that's um... interesting? Your "fix" doesn't work for me. But if I add an extra CL patch between the two RIIs, it DOES fix it. It seems to force the RII output to RGBAf.

So thanks for finding a "fix" if we can call it that. I can at least get to work on my tracer :)

Interesting side-effect: adding a CL filter between the last RII and the billboard corrects the colour. That's potentially worrying - if I need one between my 2 raytrace stages, and CL messes up the colour, it's not going to be good. Time for a test...

psonice
Re: Did 32bit image handling in QC change?

Works like a charm! \o/

No colour changes, luckily. And with the new pipelined setup for the raytracer, it's flying along at 120fps at 800x600 (and still 30fps at 1920x1200!), even on my old Radeon 2600 :D OK, so it's just showing a nice Phong-lit sphere on a textured floor, but still. There's lots of optimising still to be done too, so speed should end up quite a lot higher. My dream of raytracing a game on iOS is still possibly possible :)

Ooh, that's a thought. My card doesn't support OpenCL. It's presumably copying the texture to main memory and back, once per frame. That will be slowing it down.

gtoledo3
Re: Did 32bit image handling in QC change?

I think you totally need to test that on multiple machines and in Lion as well, before feeling even remotely comfortable about it, and even then, I still wouldn't feel great.

I really like OpenCL and some of the things it allows one to do in QC, but support across machines/GPUs is willy-nilly. In my real-world experience I've learned that I can never take for granted that something I do with OpenCL will work the same across Macs, especially across even minor version updates, and definitely major ones.

It looks like it doesn't matter much though, since it reads as though you're just using this to demo your shader chains for something you're going to do in iOS.

psonice
Re: Did 32bit image handling in QC change?

Yeah, I'm not really concerned so long as it runs on my work + home boxes (which it does!). If/when this gets released, it'll be without QC. It'll hopefully be an iOS app (this is going to push the GPU VERY hard on a mobile device, so we'll see), or if not I'll use it to make a demo (which would be pure OpenGL).

Anyway, here's a high-quality interlude courtesy of unc + brothom states (mostly raymarched, I suspect - be sure to watch in HD):

(this utterly kills Vimeo's compressor, but there's an HQ video here: ftp://ftp.untergrund.net/users/ized/prods/nang.zip )

gtoledo3
Re: Did 32bit image handling in QC change?

I'm still a little bit curious about the problem in QC... I can't see it on my SL install, because I didn't update Xcode in SL, and I haven't looked at it more there.

Looking forward to checking out the clips posted :)

Out of curiosity, if you pass the vid input through a CI kernel, like:

    kernel vec4 coreImageKernel(__table sampler image)
    {
        return sample(image, samplerCoord(image));
    }

...and just pass the image through, does it cause it to render correctly?

Just idly curious whether treating the color data as a table of color values at some point in the chain would be an alternate workaround (maybe more likely to work on more machines, if it would work at all?). As an aside, doing that does something kind of funny to the tooltip data on my system: the texture that renders in the tooltip occupies something that looks like 1/16th of the entire texture, starting at the origin... it still renders fine.

psonice
Re: Did 32bit image handling in QC change?

Nope, adding a table filter step makes no difference. Unfortunate, as none of my Macs support hardware OpenCL, and now I can't remove the CL step to see true performance :(

The __table keyword tells it you're going to use the image as a LUT. I suspect it sets the texture up for no filtering (maybe not - filtering is very useful in a LUT in some cases) and no scaling. You generally use a table by sampling pixels at a certain pixel offset; if the texture gets scaled, the offsets become unpredictable - which is why it crops instead of scaling in the preview.
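
For example, a typical LUT read looks something like this (a CL-style sketch with invented names; the same idea applies to a CI table sampler):

    // hypothetical example: remap a channel through a 256x1 lookup image.
    // the LUT is data, not colour, so it must be read with nearest
    // filtering at exact integer coordinates - scale it and the index
    // maths lands on the wrong entries
    __kernel void applyLUT(__read_only image2d_t src,
                           __read_only image2d_t lut,
                           __write_only image2d_t dst)
    {
        const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE |
                              CLK_FILTER_NEAREST;
        int2 pos = (int2)(get_global_id(0), get_global_id(1));
        float4 px = read_imagef(src, smp, pos);
        int i = clamp((int)(px.x * 255.0f), 0, 255);
        float4 mapped = read_imagef(lut, smp, (int2)(i, 0));
        write_imagef(dst, pos, mapped);
    }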

The clips are not mine btw, they're from unc + brothom states. They're just generally awesome (and use similar tech to what I've been doing lately, raymarching - at least in the first one).

dust
Re: Did 32bit image handling in QC change?

Actually, putting the CL kernel first (before the RII images) or last (before the billboard) seems to work as well.

gtoledo3
Re: Did 32bit image handling in QC change?

psonice wrote:

The __table keyword tells it you're going to use the image as a LUT. I suspect it sets the texture up for no filtering (maybe not - filtering is very useful in a LUT in some cases) and no scaling.

Exactly. The connotation is that you're using the input color as data values, so it's important that they not be shifted willy-nilly. It felt like there was a glimmer of hope for a non-CL approach that your GPU might be more likely to support. Oh well.

I'm not sure about the theory of unpredictability/cropping instead of scaling in the tooltip preview. My opinion is that it's because a table function gives instructions not to do any affine transforms and such, which affects the tooltip by not allowing the image to be reduced there. Maybe we're saying the same thing in different ways, but I don't think so. My hunch is that it doesn't have to do with general usage pattern/offset issues; it's about what it does to the image and what can be done with it afterwards.

QC just doesn't do anything to mitigate it in the tooltip like it does with other stuff, probably rightly so... at least it shows that the __table function is working in that way.

psonice
Re: Did 32bit image handling in QC change?

I think we are actually talking about the same thing there. If you take the table image as some kind of important data, it's effectively a grid of numbers. If you scale that, it's like somebody giving you a spreadsheet that contains the data you asked for, but compressed into 10% of the space by averaging. In other words, it's pretty useless to look at. One page of the spreadsheet at least contains some meaningful numbers.

And yeah, pity about the CI not working. Maybe there's another way (I wrote a plugin to get around an issue like this some years back), but for now it'll do.

gtoledo3
Re: Did 32bit image handling in QC change?

psonice wrote:
I think we are actually talking about the same thing there. If you take the table image as some kind of important data, it's effectively a grid of numbers. If you scale that, it's like somebody giving you a spreadsheet that contains the data you asked for, but compressed into 10% of the space by averaging. In other words, it's pretty useless to look at. One page of the spreadsheet at least contains some meaningful numbers.

And yeah, pity about the CI not working. Maybe there's another way (I wrote a plugin to get around an issue like this some years back), but for now it'll do.

Ah, ok, we mean exactly the same thing ;)

A TOTAL pity! I felt like that would be super awesome, because there was that mild chance of it working with the GPU.

I wonder if this was working for you at some point, and whether it's actually a GPU driver change thing... I've definitely had that bite me in the kind of "pushing the edge / things had better be darn right" scenarios before. Really frustrating, because there's not much you can do until an update. This bites me all the time with CL, so it's ironic that CL is proving to be your salvation here!

I'm pretty darn sure that a plugin is the most solid/least likely to break route, but again, probably not worth it for your endeavor. Kinda curious about this in general though, because it never hurts to hammer bugs out of QC.

I wonder if it makes the most sense to make an environment patch like an RII that provides some output type options, so that QC/RII shifting around and/or breaking becomes less of an issue, along with other weird RII quirks... output color doesn't seem like a wise thing for the user to control, but it seems kind of necessary for some scenarios, come to think of it.

I'm pretty sure that can be set in that scenario, instead of making a processing plugin to pipe the image through and tweak it there. If an RII-esque thing needs to be employed, it would probably make the most sense to address it that way. I don't know - speculation... too much so for something you're just using to prototype with, but I find your endeavor interesting (as usual).

dust
Re: Did 32bit image handling in QC change?

The SGX543 is an OpenCL-enabled device. I can't say with any certainty, but I'm pretty sure OpenCL has been private since iOS 4.3.

psonice
Re: Did 32bit image handling in QC change?

Private APIs aren't much use if you want to put the app on the store. I've not heard of OpenCL being supported though, privately or not. From a quick Google search, it seems the SGX543 support was leaked via a driver in a beta of 4.3, and because the hardware supports OpenCL there was some speculation that that might mean iOS would support it. Maybe that's where you got it from.

Anyway, I'll stick to pure GLSL; I've learned the hardware's ins and outs well enough to optimise things to a pretty hardcore level now :)

dust
Re: Did 32bit image handling in QC change?

Check out your private frameworks folder in /sdk/sys/library and hope for the best.