ML, Video Input and GLSL, texture2D vs. texture2DRect

gtoledo3

I was looking into a "problem" that popped up with QC on Mountain Lion, and just realized there is really no problem.

Some old compositions in which a Video Input patch outputs its image directly to a GLSL shader now seem to have their texture coordinates screwed up. I had been "working around" this by popping a Core Image patch in between, which sets the proper image attributes.

This was bothering me, so I did some testing and found a really simple answer:

Video Input now outputs a texture2DRect-type (rectangle) texture rather than texture2D. This of course changes the coordinate system from normalized (0-1) to pixel width/height, which explains why the output looks like something that isn't UV mapped.
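For illustration, here's how fetching the same texel looks with each sampler type; the 1280x720 frame size and the sampler names are just assumed for the example, not anything QC reports:

uniform sampler2D     tex2D;   // classic 2D texture: addressed with normalized coordinates
uniform sampler2DRect texRect; // rectangle texture: addressed with pixel coordinates
 
void main()
{
   // Fetch the centre texel of an assumed 1280x720 frame with each sampler type
   vec4 a = texture2D(tex2D, vec2(0.5, 0.5));
   vec4 b = texture2DRect(texRect, vec2(640.0, 360.0));
   gl_FragColor = b; // 'a' is only there for comparison
}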

There are 2 possible fixes, in order of correctness (IMO):

1. Make the texture sampler in your shader a rect (sampler2DRect), not just sampler2D... so, à la:

uniform sampler2DRect texture;
 
void main()
{
   // Multiply the vertex color by the rect texture sample
   // (texture2DRect addresses the texture in pixel coordinates)
   gl_FragColor = gl_Color * texture2DRect(texture, gl_TexCoord[0].xy);
}

or...

2. Pass the texture through a Core Image kernel, which will also convert the "2DRect" texture to "2D".
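If you go that route, even a minimal pass-through kernel is enough; a sketch along these lines (I believe this is close to the Core Image Filter patch's default kernel, and the kernel name is arbitrary):

kernel vec4 coreImageKernel(sampler image)
{
   // Sample the input at its own coordinates and return it unchanged;
   // per the above, the result comes back out as a "2D" texture.
   return sample(image, samplerCoord(image));
}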


gtoledo3
Re: ML, Video Input and GLSL, texture2D vs. texture2DRect

Can't edit the post for some reason... I should have phrased it as "there's not much of a problem". I don't want to take anything away from the fact that it's not that intuitive, and it used to "just work".