kineme 3d image input

blackburst's picture

I need a bit of advice on using the image input on the K3D Object Renderer patch. I am trying to texture live video onto the faces of simple, low-poly 3D objects (a pyramid, for instance) which I have modelled and loaded into the patch. Essentially I want to end up with the "Front Image"/"Left Image" type scheme of the built-in primitives, but using my own models. When I feed the input an image, it looks as if only a single pixel of the video is being used: the face just comes out as a solid colour. No amount of image resizing etc. seems to change this. Is it because the faces are single polygons? My models are at 1,1,1 scale in the loader, and I can't seem to texture them in a manageable way. Help would be really appreciated. :)

Attachment: Diamond.qtz (5.7 KB)

blackburst's picture
Re: kineme 3d image input

Came across this eerily similar thread whose OP was using the stock mesh patches rather than the kineme ones. https://kineme.net/forum/DevelopingCompositions/Kineme3Dv002modelimporte...

@cybero mentions texture-fill code in the .dae. Upon looking at the .dae, all I can see that looks related are the following lines (totally guessing):

<input offset="2" semantic="TEXCOORD" source="#ID10" set="0"/> and <bind_vertex_input semantic="UVSET0" input_semantic="TEXCOORD" input_set="0"/>

Am I on the right track? If so, is there a known workflow to export from c4d to have my models ready for an image input?
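For what it's worth, one quick way to sanity-check whether a .dae actually carries UV data is to look for TEXCOORD inputs in its geometry. Here is a minimal Python sketch (the filename "Diamond.dae" is just a placeholder; this only checks that UVs are declared, not that they are sensible):

```python
# Check whether a COLLADA (.dae) file declares UV coordinates
# (TEXCOORD inputs). A mesh without them gives exactly the
# "solid colour" symptom, since only one texel ends up sampled.
import xml.etree.ElementTree as ET

NS = "{http://www.collada.org/2005/11/COLLADASchema}"

def has_uvs(dae_path):
    root = ET.parse(dae_path).getroot()
    # Any <input semantic="TEXCOORD"> anywhere in the file counts.
    for inp in root.iter(NS + "input"):
        if inp.get("semantic") == "TEXCOORD":
            return True
    return False
```

If this returns False for your exported model, the problem is the export settings rather than anything on the QC side.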

harrisonpault's picture
Re: kineme 3d image input

I have found the area of texture application in QC quite confusing as well: not having any formal training in the whole 3d modeling area, and dealing with the triple-threat of the various Apple Mesh, Kineme 3D Object, and v002 Model Loader approaches. So, take what I say with appropriate amounts of salt.

If we stick to the COLLADA standard, just to keep our terminology focus, the .dae can contain information on "materials" which may have image-type textures, as well as "UV maps" which can position a texture explicitly across the faces of the geometry. The .dae may contain references to external image files.

So, one way to approach this is to first externally create a .dae that is structured so that it projects a static image (or images) onto your geometry the way you envision it. E.g., build a UV map that "wraps" an image around your object the way you wish. OS X Preview should display it, and the QC Mesh Renderer should too, inside a Lighting patch.

Then you know what you want to tell the object renderer of your choice: a) Here's the UV Map and b) here's the dynamic image I want to be applied. And you can start looking at (e.g.) the Mesh Component / 3d Object / Structure patches to add the references to your images.

And if anyone can recommend a helpful self-study source on this stuff, net or dead-tree based...I want to learn.

cybero's picture
Re: kineme 3d image input

Regarding the matter of texture-fill image input to a .dae's surfaces that I posted about previously in another thread (texturing a .dae file): that was about a .dae model file that contained a line of code referring directly to the particular static image fill being used.

By removing that entry from the code, the .dae retained its shape but was able to take on an external texture, using the Set Mesh Texture patch fed into a Mesh Renderer patch.
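As a hedged sketch of that workaround: in COLLADA 1.4 the static image fills normally live under <library_images>, so removing that block strips the baked-in texture reference while leaving the geometry intact. The element names here are assumptions about a typical exporter's output; inspect your own file before editing it.

```python
# Sketch: remove the static image references (<library_images>)
# from a .dae, so the mesh no longer carries its own texture fill
# and can accept one supplied externally. Assumes COLLADA 1.4;
# always work on a copy of the original file.
import xml.etree.ElementTree as ET

NS = "http://www.collada.org/2005/11/COLLADASchema"
ET.register_namespace("", NS)

def strip_image_refs(src_path, dst_path):
    tree = ET.parse(src_path)
    root = tree.getroot()
    for lib in root.findall("{%s}library_images" % NS):
        root.remove(lib)
    tree.write(dst_path, xml_declaration=True, encoding="utf-8")
```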

I hadn't even thought of checking out how well such a workaround would work with a Kineme 3D render patch.

I don't think it will work quite as simply and effectively in that case.

Placing the .dae that was working in the exemplar posted in that aforementioned thread, for instance, does not produce a 3D shape congruent with what one would expect from how that model file renders in a Mesh Renderer patch or in Preview (although it should be noted that what renders in QC as a .dae mesh doesn't always render appropriately in Preview).

The code lines

<input offset="2" semantic="TEXCOORD" source="#ID10" set="0"/> and <bind_vertex_input semantic="UVSET0" input_semantic="TEXCOORD" input_set="0"/>

are specifying a UV map for a material instance; that is, a particular part of the model you are interested in retexturing and deploying.

gtoledo3's picture
Re: kineme 3d image input

I'm going to revisit this thread, but a quick note: Kineme3D could expose an image-structure input on a renderer to allow each sub-object to be textured differently when using their structure renderer.

I used this principle with my "Object Kit" plugin, which piggybacks on K3D. I'd like to add a few of those Object Kit patches to K3D, especially the ability to texture per object, but was met with no positive answer (at least that I can recall).

blackburst's picture
Re: kineme 3d image input

Unfortunately the link provided by cybero leads to an access-denied page. That sounds really interesting, George. Maybe someone from Kosada can help with what the renderer expects in terms of model properties and texture size? Has anybody been able to use the image input out of the box? An examination of their model would let me reverse-engineer the requirements.

gtoledo3's picture
Re: kineme 3d image input

Probably not... or rather, there's not much to do, so I'd guess it would have been done already if there were sufficient interest to justify the time, which isn't much, since it is essentially already done.

Take a quad, for an example.

That quad has four vertices. Each of those vertices has an associated UV coordinate, between 0-1.

If it's mapped typically, the image will map the same way that you would expect an image to look when it plugs into a qc "sprite".

Take a sphere: it's comprised of many triangles. Two triangles equal a quad; add a bunch together in the right form and you have a sphere. Each one of those triangles is mapped to render a tiny piece of the texture, so that in aggregate the entire texture maps to the entire sphere surface, instead of repeating once per two triangles, for instance.

However, you could use different UV coordinates to create zooming, skewing, or tiling effects. All of that UV manipulation is pretty much irrelevant when texturing a complex model: it will already have UV coordinates established for each sub-object, which allow textures to wrap onto it as expected.

If one wishes to skew, enlarge, or tile that data, this is still possible via patches like "Image Texturing Properties". It's just not usually needed when loading models; it comes up more often with rendering layers.
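The quad example above can be sketched in a few lines. This is purely illustrative (not Kineme or QC API): each vertex carries a UV pair in 0-1, and scaling those pairs is what produces tiling or zooming, given a repeat-style wrap mode:

```python
# A quad's four vertices, each with a UV coordinate in the 0-1 range.
# Mapped this way, a texture covers the quad exactly like a QC Sprite.
quad_uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def scale_uvs(uvs, factor):
    """Multiply UVs by a factor: 2.0 tiles the texture twice per axis
    (with repeat wrapping), 0.5 zooms into its lower-left quarter."""
    return [(u * factor, v * factor) for (u, v) in uvs]
```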

The Kineme3D Structure Object Renderer currently has one image input, which maps to every object. This is somewhat odd, as almost no model it renders is likely intended to have the same texture applied to each sub-object.

So, what would be done is to put a structure port on it that activates when a user has attached to it and the index is a valid image. The user would connect images to a Structure Maker, JS, or the like, and each image would render on each related sub-object. To get even fancier, the model importer or another patch could scan a folder and build the image structure.

If a user wanted to manipulate uv, they'd break out that index, slap image texturing properties on it, and place it back in the structure.
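The folder-scan idea can be sketched quickly. This is a hypothetical illustration of building an ordered image "structure" (one entry per sub-object); the extension list and name-sorted ordering are my assumptions, not anything K3D actually does:

```python
# Sketch: scan a folder and build an ordered list of image paths,
# one per sub-object, sorted by filename. Extensions and ordering
# are assumptions for illustration only.
import os

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".tiff")

def build_image_structure(folder):
    names = sorted(os.listdir(folder))
    return [os.path.join(folder, n) for n in names
            if n.lower().endswith(IMAGE_EXTS)]
```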

I might add, this isn't just theory on the K3D front; I've already done this and have had it working for a couple of years. It's similar to the released Object Kit Structure Renderer patch, but that one was based on rendering a simple object many times, with a different texture or the same texture across objects (as well as per-object colour control, which can also work through a colour structure, if one exists). All of that is fairly easily obtained with the K3D framework, just not exposed to users.

gtoledo3's picture
Re: kineme 3d image input

Oh, it sounds as though your models may not actually have UV values; that will definitely result in the funky look. It's possible to construct a model without UVs associated per triangle, and then you wind up with what you describe.

I was replying more to harrisonpault's comment about uv's in general.

blackburst's picture
Re: kineme 3d image input

Apart from not having UVs altogether, as George said, the projection method set in Cinema 4D is what lets a texture span more than a single face of the mesh. You choose from Box, Sphere, Shrink Wrapping, Frontal, etc. For anyone using C4D: the material needs to be set to UVW mapping, and in the BodyPaint editor you set the UV layer (the black-and-white checkbox next to your layer) to the mode you want. Once this lets the texture break free of a single side of the mesh, the Kineme Texture Transform patch or Image Texturing Properties can be used to spread it over the shape.