Front Page Posts
I rarely have the compulsion to post stuff because I think it's "so darn cool", but I saw this video on Vimeo recently, and thought it was particularly awesome. It's one of the cooler particle/physics looks I've seen. Not QC.
Particles from stephan maximilian huber on Vimeo.
That's pretty awesome. I thought it was Processing, but it's rendering so fast.
You were right to post this. Kind of the other side of the coin to the first 20 seconds of Memo's particles as discussed on Memo's website. This makes me wish I could code.
Singer Emily Haines sounds so much like Australian (now London based) singer Sarah Blasko it's weird. Or SB sounds like EM — either way they're both 'indie' ;)
I'm never sure exactly how impressed I am with the stuff from that link, from a real time animation standpoint. There are mentions of using rendered After Effects stuff, and indeed, much of it looks like a bunch of rendered Trapcode/After Effects particle looks, and I don't know how much of that "look" is actually real time. Not that it's bad to approach something that way, because it's a good look. No diss intended, it's all about what's right for the project.
I can conceive of how I think I would have to set this up (what I linked to) in QC using the tools available; it would just be a pain in the ass, and I don't know if it would perform well. Maybe I'll take a crack at it tomorrow.
Nevermind. I'm getting around 2fps just from drawing a comparable amount of circles and having them respond to camera and do a similar physics thing, not even getting to the part of sampling color values and making them correspond to the correct circles.
Yeah, didn't somebody make a plugin to do almost that, but with squares on a grid? Was it Lango? Rinboku did something like you're talking about with the PixelS plugin (on Vimeo).
I just meant from an aesthetic POV I liked how the particle splines in the early part of that other video suggest human form coming about, a pre-physical form in a protean state. Then this one is kind of going from the physical to the image to the dissolving of form.
I can't tell how much of it was real time in Memo's. I don't think it matters unless you have the performers to create that movement; then it would be pretty impressive. Video and dance are so tricky to marry IMHO. Often one (usually the video or telekinetic trickery) seduces the audience at the cost of the other (the expression of the performance).
Cool. I saw this last week I think, and also thought it was cool. We must have the same Vimeo subscriptions, George...
OpenCL is the way you'd do this, I think. Or actually, you could probably do it with a relatively simple GLSL displacement shader, once you had the movement/flow vectors.
Definitely cool video, can be done with OpenCL I'm sure; actually uses C++, OpenSceneGraph & cefix.
Their servers [cefix and OpenSceneGraph] are unable to respond to my request for further info, probably bogged down with everyone else asking them for more info because they'd seen this video too :-)
Dudes, are we REALLY this far into OpenCL and think it is the answer for anything at this point, or that it has some kind of great performance inside of QC, as opposed to any other method? It would be nice to be proved wrong.
It's pretty easy to get the vector flow... just getting that and rendering circles that have size influenced by movement is really hard on QC. I don't think OpenCL or GLSL is going to make a difference with that (I'm using GLSL right now for part of an attempt I whipped up... I don't think GLSL can be used for the actual rendering because of the physics of the balls).
I don't find the Rinboku thing to be similar in look... it winds up looking like a Heightfield with GL Point mode. However, I did conceive of trying to set up the circles in a similar iteration method, and then have them be influenced by cam movement. It just crawls with any reasonable amount of objects being iterated. At the same time, that's only part of it, since the circles bounce against each other and the bounds of the rendering space.
Yeah, Rinboku's isn't a particle system in any shape or form; sorry if it sounded like I was suggesting that. I just meant it as a way to determine the colour of the circles, which would have to be generated in a complex system and given animation in some way.
Yeah, gotcha, I was totally thinking something along the lines of ImagePixelS would be a good way to get the color info and send it to the circles as well. I see what you mean. I was just shocked at how much Rinboku's example ended up looking like a Heightfield or Rutt drawing points (I guess it's really not too different when one thinks about it).
The stock optical flow setup provides a way to get little lines to move in a really similar way. It's the physics between the circles and the walls that is weird...
Also, it looks like the source image is being affected by something like the v002 Optical Flow/Andrew Benson GPU datamosh thing.
So, it seems like the steps to get something really similar in QC would be:
-Do something like "datamosh"/liquidize the source image.
-Downsample image, use something like ImagePixels to get source colors.
-Send color info to Circles and iterate? (This is where I think things go awry, because of iteration in QC... turns performance to crap).
-Add x/y info from stock Optical Flow to displace circles with movement.
That gets one pretty close, but it doesn't do anything to achieve the more subtle physics interactions that are happening.
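As a rough sketch of the downsample-and-sample-colors steps, here's the idea in plain Python, standing in for what ImagePixelS plus an Iterator would be doing in QC. The nested list is a stand-in for the downsampled frame, and the brightness-to-size mapping is my own invention (it roughly matches the rgb-average mode discussed later, not necessarily the original video's rule):

```python
def circles_from_image(pixels, max_radius=0.05):
    """pixels: rows of (r, g, b) tuples from a downsampled frame.
    Returns one circle per cell: (x, y, radius, colour), with the
    radius scaled by brightness -- an invented mapping for illustration."""
    rows, cols = len(pixels), len(pixels[0])
    circles = []
    for j, row in enumerate(pixels):
        for i, (r, g, b) in enumerate(row):
            brightness = (r + g + b) / 3.0
            circles.append((
                (i + 0.5) / cols,        # x, centered in its cell, 0..1
                (j + 0.5) / rows,        # y, centered in its cell, 0..1
                max_radius * brightness, # brighter cell -> bigger circle
                (r, g, b),
            ))
    return circles

# a tiny 2x2 "frame": white, black, grey, red
frame = [[(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)],
         [(0.5, 0.5, 0.5), (1.0, 0.0, 0.0)]]
circles = circles_from_image(frame)
```

In QC this is per-iteration structure lookups rather than a loop, which is exactly where the performance dies with any real cell count.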
It's pretty elementary to get one circle (or whatever), bouncing against limits in QC. One can do it in a similar way to the Inertia examples that are in SL, or via the same method that was employed in Teapotii.qtz.
It's just getting that happening with a group of "particles" or renderers as well as the image bounds, so that there is the kind of physical reaction that is present in this video that is the part of this video that I find so appealing... the look of the circles flinging around so wildly, then returning to their rightful place.
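For what it's worth, the single-circle bounce is just reflect-at-the-walls; a minimal sketch in plain Python of what the Inertia/Teapotii-style approach does per frame on one axis (the constants are made up for illustration):

```python
def bounce_step(pos, vel, dt=1.0 / 60.0, lo=-1.0, hi=1.0):
    """Advance one axis of a particle one frame, reflecting the
    position back inside the bounds and flipping the velocity
    when it crosses a wall."""
    pos += vel * dt
    if pos < lo:
        pos, vel = lo + (lo - pos), -vel
    elif pos > hi:
        pos, vel = hi - (pos - hi), -vel
    return pos, vel

# a particle heading right at 2 units/s, just inside the wall:
# it crosses the boundary and comes back reflected
p, v = bounce_step(0.99, 2.0)
```

The hard part, as said above, isn't this per-circle logic; it's running it for hundreds of circles (plus circle-vs-circle response) without the iterator grinding to a halt.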
This really is making me realize how limited the abilities of QC are in the particle department, especially with ParticleTools in its current state (not really working in 64-bit, not working with K3D anymore). The stock particle system is useful, but sort of a joke compared to particle systems in some other environments (it's like a reinvention of the cheesy explosion particle system that a local TV station would have used in the '80s).
Yeah, the trick is to get the velocity data generated for each circle based on changes in the source image. Then to give the circles (a) some velocity based momentum and (b) 'gravity' or 'elastic' type return-to-point-of-origin acceleration/force. Gravity: as in fixed force. Elastic: as in proportional to distance away from point of origin.
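That update rule can be sketched in a few lines of plain Python (one axis, showing the 'elastic' variant; the constants are invented for illustration, and `flow` stands in for the per-circle motion-vector sample):

```python
def update(pos, vel, origin, flow, k=8.0, damping=0.9, dt=1.0 / 60.0):
    """One integration step for a particle.
    flow    : motion-vector sample at the particle's cell
    k       : elastic stiffness -- force proportional to distance
              from the point of origin (the 'elastic' option above;
              a fixed-force 'gravity' would ignore the distance)
    damping : velocity-based momentum (1.0 = frictionless)"""
    vel += flow                     # impulse from image movement
    vel += k * (origin - pos) * dt  # elastic return-to-origin force
    vel *= damping                  # bleed off energy so it settles
    pos += vel * dt
    return pos, vel

# kick a resting particle once, then let it relax back to its origin
pos, vel = 0.5, 0.0
pos, vel = update(pos, vel, origin=0.5, flow=3.0)
for _ in range(600):                # ~10 s at 60 fps, no further flow
    pos, vel = update(pos, vel, origin=0.5, flow=0.0)
```

With damping under 1.0 the particle flings away on the impulse and then returns to its rightful place, which is the look being chased here.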
I think JS could handle the particle tracking end of it easily enough at adequate speeds for a low number of points (1000?), but generating the movement vectors is more tricky. Perhaps a CIFilter; they're still a bit of a no-go for me unfortunately. EDIT: I just watched it again; that's a damn good particle code, not easy to mimic at all perhaps.
Quote:-Do something like "datamosh"/liquidize the source image.
Was thinking the Kineme Structure Renderer would be better than the Iterator, but we can't get the colour values inside the Structure Renderer patch, can we :? Nor an image structure to feed to a sprite, which would be really rock-n-roll (smokris, this has to be the feature I virtually ask for the most haha)
A guy I used to make videos with showed me a Mac App that is just particle systems. Can't remember what it was, but he used to generate projections with it then shoot live action with the projections on faces and bands and stuff. Hmm, guess I'll ask Google.
Quote: A guy I used to make videos with showed me a Mac App that is just particle systems.
This rings a bell with me too. I remember some quite impressive clips of text dissolving into particles and that kind of thing. Let us know if you find it.
The original Optical Flow setup (Apple) has an example where you get x/y vectors and connect them to a line with a start x/y and an end x/y. I'm having a pain in the butt right now getting the output structure of that integrated into the x/y position of the Image Pixel circle setup. Even then, it doesn't really help with the physics aspect of it.
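The line-drawing part of that Apple example is straightforward once you have the vector structure: each line runs from its sample point to the point plus its scaled flow vector. A trivial Python stand-in for that wiring (the function name and shapes are mine, not the sample's):

```python
def flow_lines(points, vectors, scale=1.0):
    """Pair each sample point (x, y) with an endpoint displaced by
    its motion vector (dx, dy) -- i.e. the start/end inputs you'd
    feed to a Line renderer per structure member."""
    return [((x, y), (x + scale * dx, y + scale * dy))
            for (x, y), (dx, dy) in zip(points, vectors)]

lines = flow_lines([(0.0, 0.0), (0.5, 0.5)],
                   [(1.0, 2.0), (-1.0, 0.0)], scale=0.5)
```

The fiddly bit in QC is just index-matching this structure against the circle positions; the physics still has to come from somewhere else.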
Are you talking about Adrien Mondot? I can't remember the name of the app...
To elaborate a bit: the killer for CL in QC is that it's almost always a guaranteed round trip to the GPU and back (upload a bunch of data, do some ops, download it to structures for iteration). For almost every use of CL in QC that I've seen, I'm almost certain that you could do it faster on the CPU in a custom patch because you skip the upload/download steps. The only exception is when you're working exclusively on images (which stay on the GPU), or when you're really doing a ton of math (as in, thousands and thousands of ops per element).
cwright wrote:To elaborate a bit: the killer for CL in QC is that it's almost always a guaranteed round trip to the GPU and back...
Is this the case for mesh data, too? I always assumed that if you used an OpenCL Kernel to create or do something to a mesh, then piped the result to a Mesh Render patch (via whatever other Mesh patches are needed), the data remained on the GPU all the way through. Is this not the case?
I believe that's the case as well (though the dataset is typically smaller), but not necessarily always; there are a lot of factors involved.
Ok, I got some pretty darn similar physical reaction going.
I haven't looked at the clip since I started futzing with this... I'm sure it's not dead on with the physical reactions, but similar...
It's sorta sloppy; I really need to tweak it to work with arbitrary pixel w/h appropriately.
In my fairly limited tinkerings with OpenCL in QC so far, I'd say it has potential despite the flaws. Certainly when you start to use powerful GPUs on a Mac Pro it starts to get quite good, but you can tell it is limited compared to well-written standalone OpenCL-utilising apps, of which there also aren't many yet! Certainly it's not good to hear it touted as the solution to all our performance bottlenecks, because in many (at present, most) cases with QC it isn't. However, I still feel that George is excessively negative about it; he clearly had some good reasons to be, but since it's been a lot better in 10.6.3 I tend to feel this negativity is now slightly out of step with present reality. And some of the ongoing issues are due to silly issues with the OpenCL patch code in a few of Apple's samples, e.g. a couple of the mesh filters, which we discussed elsewhere along with some workarounds.
I suppose one of the reasons I still like OpenCL, apart from having good hardware to try it on, is that someone like me with limited programming skills can easily poke around in the patch code to learn something and modify the work of others. Obviously I could do that with some of the plugins that have been open-sourced, but it's a fair bit more hassle for someone who doesn't really know what they are doing.
Anyway, on my last foray into OpenCL in QC I was trying to make an equivalent to the mesh blending stuff from Kineme, by using an OpenCL patch to blend between two meshes. I ran into a nasty performance problem with the Mesh Creator patch. I meant to ask you about it, but now I've forgotten the details; I'll get back to you on that one.
My 20 minute, super-hacky, unfinished attempt. Just uses a plain iterator to recreate the effect, with image pixel to grab the sprite colour (and motion vector, if you can call it that..)
It's badly bugged and not right... I suspect a touch of feedback in the motion part (which is the two CI filters in the root of the patch), or cunning use of an Integrator, might improve it plenty.
Have to download the optical flow and the v002 plugins... is this Leopard friendly?
gtoledo3 wrote: I really need to tweak it to work with arbitrary pixel w/h appropriately.
Aha, ok, I've looked at the issue I was having some weeks back with OpenCL & mesh performance.
I'm loading in a DAE and getting the mesh components' normals and vertices. I am feeding these into an OpenCL patch, which outputs modified normals & verts.
If I then use 2 x Set Mesh components to apply these new normals and verts to the original model, and they are updating continually, then with this particular model, which has just over 98000 norms & verts, I only get about 24fps.
However, if I use a Mesh Creator instead of Set Mesh, then I get more than 300fps! (GTX285). The problem is that the model doesn't look right, and I don't know if it is possible to recreate the model properly using the Mesh Creator; e.g. in triangle strip mode I get the basic shape but with some additional points joined that should not be. I don't know if using the Indices input on the Mesh Creator would help at all, or if it is just too limited to recreate models with certain structures.
So, assuming the Mesh Creator can't do it, I go back to wondering why using 2 x Set Mesh patches causes such a large slowdown. Even if I only use 1 Set Mesh patch, disregarding the changed normals that my OpenCL patch creates, I only get around 40fps. And as these patches are used by Apple's example mesh filters, I thought it might be worth asking if there could be some avoidable bottlenecks in these Set Mesh patches that could make a big difference to performance if fixed, or if it's just an unavoidable limitation of the stuff Set Mesh is doing. And if this is so, what is it about the Mesh Creator that manages to avoid these performance hits in a big way?
I've got all the Vade plugins (as far as I know) and I still get missing-patch errors:
OpticalFlow Downloader was part of the Leopard Developer Tools examples, but doesn't seem to have made it to the new developer.apple.com website. I filed radr://8052314 on this.
Image PixelS can be downloaded here: http://www.magdatt.nl/software.html
Here's an "app" version (same as the qtz... I may have re-ordered some layers) that has all needed resources. Click on the app icon, go into contents, resources, and then get whatever plugins are needed.
In retrospect, I could have made mention of all of the plugins, since it uses a few. Sorry.
What I meant was the pixel w/h that are attached to the resize. Somewhere I don't have something referencing that and it makes the composition go a little screwy when changing the pixel w/h res on the resize patch.
In my qtz, what is probably muting the colors is a gamma patch (Point Gamma). On my system, the pixels end up too dark by default, so I had to put a gamma in to get it to be the same as the video feed.
The circles don't really get any bigger in one mode on the qtz I posted (the one that uses rgb average), but in the mode that uses alpha as part of the size calc, darker pixels are smaller. I liked that look, so I included it, even though it has nothing to do with the original video at the top of the thread.
I see now on the orig, after looking again, how bright areas have denser concentrations of smaller particles. Gee whiz, this is a p.i.t.a.
I'm linking to a clip so you can see what it "should" look like (this has the op flow feedback billboard on and the point size beefed up a little from default). The app version has a clear background.
Ok, I totally think my version is better >) (psonice vs gt round 2 rah rah rah)
I cleaned the version/app from above up a little. I had the pixel calc mode reversed on the index, and forgot to tie one billboard to x/y offsets. I also sorta resolved the pixel density (changing the resize w/h).
Ok, I'm totally obsessed with this. I'm attaching another version that has a choice for sprites instead of Points, and a published source image.
I'm excessively negative about OpenCL? I think I'm realistically negative.
-How is it better since 10.6.3? Fewer things work for me than did in the GM, 10.6.
-You just read how it rarely gives better performance than plugins, from someone who works developing QC.
-Post the workarounds you mention, and have them be correct.
-How could someone of limited coding ability poke around in OpenCL and do something correctly? It's a moving target, and a new language.
-Apple shipped a major feature in a non-working state, and this long later, it still doesn't work and WORKS LESS THAN IT DID IN THE RECENT PAST. IT HAS NOT GOTTEN BETTER AS OF 10.6.3 BY ANY STRETCH.
-You wind up by describing how when trying to actually do something useful with OpenCL, how you have had nasty performance problems.
I'm not going to get into a protracted argument with you about this, especially if it causes you stress. But I will post a little more on this subject for now.
I'm not saying it's perfect; I'm saying it's a lot better in 10.6.3 than it was in previous released versions of Snow Leopard. OpenCL in QC does not do my head in any more, although there are performance issues which I am interested in, as I am unsure if they are natural limitations or can be improved upon in future.
I only wish that more people could talk about their experiences with OpenCL, because there are not many of us talking about it at all. I would like to know if your poor experience of 10.6.3 is typical, because so far you are the only person I know who says that 10.6.3 is worse than prior recent releases. It was broken when SL came out; then they fixed some stuff and broke something else; most of the something-elses have since been fixed, making 10.6.3 the least broken release in my book. Some of the most glaring outstanding issues can be fixed by tweaking a couple of the mesh filter quartz compositions. This is hardly perfect, but to discuss this further you need to mention specific problems. Shouting about it generally not working and not getting any better does not enlighten me, and makes me tend to think there is something about your hardware, or your OS, or what you are trying to do with OpenCL in QC that is causing you to have a far worse time with OpenCL than I am.
Please don't get me wrong, I was very upset with some of the problems that got introduced, and perhaps my initial position on OpenCL was too optimistic, but even so that alone is not enough to explain why we have such different opinions of OpenCL in QC at this stage, and is certainly not enough for me to consider it a broken shambles or technology that is useless.
Well @gtoledo3 & @SteveElbows, I'm a great OpenCL fan and am finding better ways every day to make use of OpenCL in visualizations, with all the Mesh Creator types and with finding better ways to proof kernels.
The rapid creation of complex structures still leads to the matter of visualizing the resulting structure, for which purpose the Mesh Renderer and also the Vertices and Normals display patches help a great deal.
However, I do find that some examples that worked A-OK in 10.6.0 now don't, especially the Advection example [Wind Tunnel] and the structured Images [Pages Jiggle] :-).
I think I'd broadly agree that it came out of the oven kind of half baked. I'd also agree that broadly speaking, OpenCL is more fixed and more reliable and predictable than at any other stage of roll out.
There always was a lot of stuff that worked pretty well in OpenCL - 2D Simulation, for instance. There was also some stuff that worked way faster with alternate solutions for precisely the same problem, albeit in some instances with one using a 3rd party alternate patch.
Dynamic Meshes, even mixing vertices with calculated and dynamically created Normals, are becoming pretty stable, and making the Volume patch take varying dimensions now works reliably too, although I have yet to create an appropriately formed structure for that port without a working backend that I know of :-).
I could wax lyrical about OpenCL and I do look forward to its ongoing development.
However, it is not insignificant that my forays into OpenCL have sometimes resulted in a bounce back into CoreImage related alternative solutions, sometimes before I rebound back with an OpenCL variant.
The real gnarls for me are ensuring that I have a cogent and compilable kernel, and avoiding changing the inputs too much in the Editor if I'm concurrently rendering.
Overall, in regard to the items originally 'placed on the menu' as sample code by Apple, both original examples and subsequent updates/changes, there are still items that await rectification.
A mixed bag still then, but I don't think it's going to be a mixed blessing in the future, even if it seems like one right now.
I rarely had serious issues in the initial versions of SL, with OpenCL specifically (though I had issue with other QC bugs that were introduced).
If the OpenCL compiler accepted code that it no longer accepts as valid, and that code is then used in stock patches that aren't updated, it's not a good thing or an improvement.
I'm not aware of what issues you were having that were resolved by 10.6.3 specifically, so if it happens to have worked better for you for some reason (no one has seemed to be able to point to specific reasons it's better), then that's great.
I also don't believe the mesh filters being broken is a trivial issue, and though I finally got them working on my system (noise is unstable, even with the correct mods), I've yet to see anyone post the correct modifications, though I've seen some discussion and stabs at it.
I have access to three different Macs regularly, and often have access to more models. When I make an assessment on OpenCL in general, it's based on regularly using a macbook, a macbook pro, and mac pro towers, from old to brand new, and not specific to a given computer.
I do consider it broken, but I don't consider it a shambles or useless. Rendering collada objects without any manipulation is fairly solid, unless one wishes to use different view modes, like FOV or Ortho, etc... I like being able to get coordinate info for normals or vertices of models so easily as well. However, both of those things have more to do with the mesh loading engine than the OpenCL kernel/compiler in QC and how it actually performs.
To me, it's just a language that's been welded onto QC that is no more useful, and probably less reliable than say, the LUA plugin that was made by just one guy. Using the collada and deformers is less reliable, flexible, or as good performance wise as K3D, which is happening on the CPU, and again, was made in a scenario of less resources than Apple. I guess I don't see how it's possible to be anything but underwhelmed. That said, I hold hope for many of QC's bugs getting fixed, and the dust settling with OpenCL at some point and it hopefully delivering on promises of being able to do something better or faster than alternative methods.
As far as "...there is something about your hardware or your os or what you are trying to do with OpenCL in QC that is causing you to have a far worse time with OpenCL than I am.", my reaction is that I use QC a tremendous amount, for a wide variety of projects. I've also never blindly stabbed in the dark at doing weird things with OpenCL; it's all been based on sample code, or principles that "should work".
Yeah, that's really all I have to say about it. My initial response was to the idea that using OpenCL to accurately emulate what was happening in the video would somehow be easier or better than other methods.
Ok, this is actually getting somewhat far away from the vid I posted, but I had the idea in the middle of messing with this...
This version has a Lighting mode added, and has options for Sprites, Cubes, and Spheres, to be the object rendered. The Cube mode with Lighting looks pretty wild to me.
It also has a "rotation queue" option that will make the objects do a rotation chase. It takes forever to load because of the queue size, but it works interestingly, especially once it fully loads, in Cube mode. There's a control for monitoring whether the rotation queue is full.
The Get Mesh and Set Mesh components are no guarantee of 'architectural' fidelity.
As you can see from the attached example, there are distinct differences between the ways in which the data is rendered, even though we are dealing with the same information. However, I think you can get a pretty good representation of the original with Point Sprite mesh type. This however, does not necessarily prove to be the case with other models.
BTW, the key thing with the Grid Indices and Normals generators is to pick an x and y count whose product exactly (or closely) matches the number of vertices in the original model.
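As a concrete illustration of that multiply-up rule, finding the most-square exact factor pair for a given vertex count is a small search; a plain Python sketch (98000 is the vertex count mentioned earlier in the thread, and the function name is mine):

```python
def grid_counts(vertex_count):
    """Return the most-square (x, y) pair with x * y == vertex_count,
    so a grid generator's output lines up exactly with the model's
    vertices. Falls back toward 1 x N for awkward (e.g. prime) counts."""
    x = int(vertex_count ** 0.5)
    while vertex_count % x:   # walk down to the nearest exact divisor
        x -= 1
    return x, vertex_count // x

x, y = grid_counts(98000)     # 98000 vertices -> a 280 x 350 grid
```

For counts with no divisor anywhere near the square root you'd have to settle for a close-but-inexact grid instead, which is where the "close correspondence" caveat comes in.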
You can get some entertaining results by mixing up the expected inputs.
You can also get some stupendous crashes playing with such data input types.
Update: attached is an example of creating a dynamic mesh (varying x, y count) and running those components into a Mesh Creator to recreate that mesh, which I seem to be able to do a better job of than faithfully recreating a DAE mesh from its components.
Weird, you'd maybe think it should be the other way around - I do, but it isn't.
OpenCL Circles - uses the GL Tools OpenCL circle to get the vertices.
CoreImage works on all Quartz Extreme capable machines, which is pretty well every Apple Mac that's been produced over the past seven - eight years.
OpenCL works [with some occasional problems] with a relatively limited number of Apple Macs produced during that period, and probably will only work with, mmm, about the last three years' Apple Macs, oh, except those that have ATI GPUs of course.
That's the key difference & gnarl - OpenCL isn't actually full range ready as yet.
We've all been there, a machine that doesn't have this or that particular chipset or facility.
I wonder what fraction of currently 10.6.x capable Mac users are currently losing out, that's all •~
If the OpenCL compiler accepted code that it no longer accepts as valid, and that code is then used in stock patches that aren't updated, it's not a good thing or an improvement
I have recently come across this OpenCL Compiling on losingfight and have begun to find it useful.
I like the comments regarding the lack of useful feedback in the editor panel.
Not really surprising, after the hectic rush I had to just get something that works before leaving work (it's a small miracle that it works at all ;). I was also somewhat crippled by working on my 'production' box at work... where I don't want ANY foreign plugins installed (other than my own, at least) and where QC is fixed at 10fps. In fact I'm still not totally sure what mine looks like or how fast it runs :D
Time for a quick play with it perhaps, see what happens.
Thanks for the detailed response.
In terms of what I think 10.6.3 fixed, well, my memory may be faulty, but as far as I remember I was deeply annoyed by OpenCL's failure to fall back to CPU properly within QC. This was fixed by 10.6.2, but they broke some stuff at the same time, like mesh filters, to the point that the template wizard for those crashed QC. With 10.6.3, notwithstanding the failure of Apple to update a couple of their mesh filter examples to work properly with the updated OpenCL, I at least finally had a release that should fall back to CPU ok on most machines, without users having to tick an obscure setting, and without too many other things being broken. I ranted a lot about 10.6.2, so I was bound to make a noise about some of those horrors being fixed in 10.6.3; it was a milestone in my book, though far from perfection. I think some of the iterator performance issues were also fixed in 10.6.3, but not all of them.
As for why I remain cheery about this stuff despite the past problems, a lot of it is because I am very attracted to being able to scale up certain computationally intensive tasks by upgrading the GPU. Clearly there are bottlenecks, as usual, and especially usual for QC, but there are still some things I can now do on a fast GPU that I couldn't easily obtain before. I've yet to do much on the CPU, but in a more limited sense I may be able to get slightly better use out of 8 CPU cores than I could with most other QC stuff.
Anyway, I'll leave it at that; I think I've said everything I could possibly say about OpenCL at this stage. I look forward to a time when I get round to sharing some OpenCL compositions (my stuff is currently too half-baked and I'm distracted by the iPad), and I hope that both hardware and software eventually reach a point where you can gain a little more from OpenCL than you have so far. I suppose whether we are underwhelmed by OpenCL or not depends on our expectations, although it's still a bit odd, because I'm not underwhelmed even though it could be considered that I had pretty high expectations going in.
It's fair to say that I am not blown away by its complete awesomeness either, and beyond the QC community the original Apple SL hype did seem to lead quite a number of users to expect various apps to start using OpenCL quickly and offering dramatic performance leaps, which has been shown to be quite unrealistic. Again, away from the QC world, I have been vaguely impressed by a few demos, and I really do like some of the stuff that is being done to use it for speeding up raytracing/rendering. From what I've seen of benchmarks from those sorts of apps, OpenCL really starts to have a clear advantage once you get to a high-ish end desktop-class GPU. Take this idea back to QC and it may also hold true: your example of Kineme3D performing better may well be true for most of the laptop and iMac type machines that are out there, but Kineme3D doesn't make full use of all the CPU cores on the Mac Pro, and its mesh deforming stuff gets no faster if I upgrade the GPU, whereas OpenCL at least begins to hint at its potential when given a GTX285 or better to chew on. It's annoying that QC eats away at some of that advantage (though not all of it), which is why I was asking questions earlier about specific mesh patches.
Anyway, I said far more than I meant to originally, but hopefully I've run out of things to say now :D
To be frank, I am always interested in your approach though; this one is real clever.
On my system, the pixels don't really line up correctly (there are some overlapping strips that don't seem to be in the right rows/columns)... it's something with the horizontal rows: I get a light-colored row horizontally at a place towards the top, which makes me think the sprites aren't iterated exactly how they need to be.
It's pretty simple how you're getting your output velocities, and I like that aspect... hmm. Obtaining the output velocities with Image Pixel doesn't seem to quite get the same action as the Optical Flow Downloader, though (I do like to cut out as many patches as possible when possible... even if that's not always evident!).
This guy still has a really slick thing going, because the density of particles is more concentrated in bright areas, and the particles are smaller. The particles also seem to be able to bounce a bit independently of one another, against the bounds of the image and recoil. The effect of what looks like the source image "liquidizing" or feeding back, and the "particles" moving to that is pretty pronounced with the orig too. All the fine points that go into something being ultra snazzy.
I may like this Cube with Lighting look I stumbled on more though! I always like that cube/lego evocative stuff... I like the fact it ends up more 3D and kind of wall-like.
Although I get a similar fps on both, the patch-based flow works much faster than the Core Image one you did, @psonice; nice example though.
There is an immediate level of responsiveness and also a higher degree of luminosity in the Optical Flow based examples, no doubt testament to their being patch facilitated.
To be honest though, the OpenSceneGraph based example that inspired this thread works way more smoothly, and that is a testament to some fine coding on the author's part, which enables such a large number of points to be affected by the color and brightness read from the changing video image.
IMHO it's definitely a custom-patch thing to get as good a result as in that video.
BTW, I'm still trying to get OpenSceneGraph to compile on 10.6.x, loads of source files.
All this GPU speed-up is currently a horses-for-courses thing; just look at how Flash GPU acceleration works well with video but lacks proper tie-ins with any pre-OpenCL-era ActionScript. Once reliable, components can literally flash past [pun intended].
Thanks for these examples, I think it's the first time that I've seen the ImagePixels plugin put to such inventive use - what a sheltered life I do lead •~
Successfully deploying the Optical Flow Downloader is one of those very careful balancing act things - the slightest change to some parameter and pretty much everything is lost to a spinning beach ball.
I would still be curious to see the actual program from the original post - I have no idea whether it's realtime or offline. If it's offline, it's not quite as impressive. However, there are particle engines and methods that can render more particles than are present in these qtz examples, and more efficiently. The fact that the iterator is involved in these qtz's doesn't help.
Ironically, I was thinking about how to do this fully in OpenCL (at least I think I know how), and I'll likely take a stab at that as well.
I'm not 100% sure that I agree it requires a new custom plugin, but it likely does, because I have trouble getting a given sprite inside the iterator to have noticeable bouncy limits along with the other stuff going on (motion displacement). It also seems, the more I look, that the pixels in the OpenSceneGraph/C++/Cephix thing may have different weights depending on color. The physics part is more involved than a simple displacement based on left/right movement, that is for sure. The higher density of particles, and the decreasing size of the particles at bright spots, are also things that aren't very straightforward to do with an iterator-based approach.
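For what it's worth, the physics I'm picturing there could be sketched roughly like this in Python: per-particle mass derived from the source pixel's brightness, a push from the flow/displacement vector, and a recoil off the image bounds. All the names and constants here are my own guesses for illustration, not anything from the original app:

```python
# Hedged sketch: per-particle physics with brightness-derived mass,
# a flow-vector push, and bouncy limits at the image edges.
# Constants (mass mapping, restitution) are illustrative assumptions.

def step_particle(p, flow, dt=1.0, width=640, height=480, restitution=0.8):
    mass = 0.5 + p["brightness"]           # darker pixels = heavier, react less
    ax, ay = flow[0] / mass, flow[1] / mass
    p["vx"] += ax * dt
    p["vy"] += ay * dt
    p["x"] += p["vx"] * dt
    p["y"] += p["vy"] * dt
    # recoil off the image bounds, losing a little energy each bounce
    if p["x"] < 0 or p["x"] > width:
        p["x"] = min(max(p["x"], 0), width)
        p["vx"] *= -restitution
    if p["y"] < 0 or p["y"] > height:
        p["y"] = min(max(p["y"], 0), height)
        p["vy"] *= -restitution
    return p
```

Doing this per sprite inside an iterator is exactly the part that gets painful in QC; as a plain per-particle update loop it's trivial.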
I had used Image Pixel before, but today was the first time I've used Image PixelS for an actual qtz. I did check it out for a few minutes when it came out to see what it was about. So I've been learning / experimenting aloud with these today :-)
What is it that ends up causing the beachballing? I can vaguely recall having something like that happen with the Optical Flow Downloader at some point, but I can't remember what I was doing (I'm talking a year or more ago). In general, it's been stable for me. It is a 32-bit plugin, so if one wants to run the qtz rather than the demo apps I posted, it requires running QC in 32-bit mode.
This one is way pared down, chopped out the lighting and different renderers, because I wanted to concentrate on the aspect of density.
This one has a "crossover" control that makes bright areas denser by a factor of 4; by default, any pixel above a certain brightness threshold becomes smaller, but is represented by 4 particles instead of 1.
I tied both the "bright pixel" rendering group and the "darker pixel" to the same image, but fwiw, it's also kind of cool to feed them separate images - all the bright pixels can be squares, and all the darker ones, round, or that kind of thing. This same kind of setup could be used to make different types render depending on brightness/color, or more levels of density depending on color thresholds.
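The crossover idea boils down to something like this pure-Python sketch; the threshold value and the 2x2 sub-pixel layout are assumptions to illustrate the idea, not the actual patch internals:

```python
# Hedged sketch of the "crossover" density control: pixels over a
# brightness threshold are emitted as four half-size particles on a
# 2x2 sub-grid; everything else stays one full-size particle.
# Each particle is (x, y, size); threshold/layout are assumptions.

def crossover(pixels, threshold=0.75):
    particles = []
    for (x, y, brightness) in pixels:
        if brightness > threshold:
            for ox, oy in ((0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)):
                particles.append((x + ox, y + oy, 0.5))   # 4 small particles
        else:
            particles.append((x + 0.5, y + 0.5, 1.0))     # 1 full-size particle
    return particles
```

The same branch point is where you could swap in different renderers or more density levels per color/brightness threshold, as described above.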
edit-Ok, I updated this a couple of times, and added in some color controls.
I did have a go at getting a fake fluids type effect working with my setup yesterday, but failed spectacularly. Somehow trying to do feedback in GLSL with that motion detection filter I have just wasn't working, and I've no idea why... I'll put that down to tiredness (didn't sleep much the night before as I was ill, and it was 1am :( ).
Is it just me that finds that original video fairly unimpressive then? :D It's fun, but it's little more than optical flow (we've had it for ages) plus simple fluids (2d navier stokes or similar, it's been around for ages and it's been done WAY better before now) plus a real basic particle system. It would be impressive if it was done in QC because QC lacks some of the things needed to do it in a straightforward way, but otherwise it's pretty basic. Check out fairlight's 'blunderbuss' and 'agenda circling forth', that's where I see some impressive particle systems ;)
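For anyone curious, the heart of those "simple fluids" solvers is usually a semi-Lagrangian advection step: trace each cell backwards along the velocity field and bilinearly sample the old field there. A minimal pure-Python sketch (nothing to do with the original app's code; field and velocities are lists of lists):

```python
# Hedged sketch of semi-Lagrangian advection, the core step of
# Stam-style real-time fluid solvers. No pressure projection or
# diffusion here; this is just the advection building block.

def advect(field, u, v, dt=1.0):
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # trace backwards along the velocity, clamped to the grid
            sx = min(max(x - u[y][x] * dt, 0.0), w - 1.001)
            sy = min(max(y - v[y][x] * dt, 0.0), h - 1.001)
            x0, y0 = int(sx), int(sy)
            fx, fy = sx - x0, sy - y0
            # bilinear sample of the old field at the traced point
            top = field[y0][x0] * (1 - fx) + field[y0][x0 + 1] * fx
            bot = field[y0 + 1][x0] * (1 - fx) + field[y0 + 1][x0 + 1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

Chain that with a pressure solve and you have the 2D Navier-Stokes look; on its own it already gives the smeary "liquidizing" motion.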
GT, this tear you are on is hugely appreciated over here. Super cool stuff.
Changing the Flow Step manually is what gives me a real beach ball moment :-).
"Ironically, I was thinking about how to do this fully in OpenCL (at least I think I know how), and I'll likely take a stab at that as well."
Ironically enough, it looks like someone has been working on this already: OpenCL Optical Flow.
And being a fast cook, I've adapted the OpenCL kernel routine posted on that site, and I'm posting the results of my recent research and development here :-)
Still a work in progress - fun, though.
BTW - the kernel code was originally adapted by Marek Bazera from the GLSL shader posted by @vade as part of the v002 Optical Flow package.
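I can't reproduce the actual kernel here, but the shader-style flow it derives from reduces, per pixel, to pushing the frame-to-frame brightness difference along the spatial brightness gradient (a "normal flow" estimate). A rough Python sketch over greyscale frames stored as lists of lists; all names are my own, not from the kernel or the v002 shader:

```python
# Hedged sketch of gradient-based ("normal flow") optical flow at a
# single pixel: temporal difference scaled along the spatial gradient.
# eps keeps flat image regions from dividing by zero.

def normal_flow(prev, curr, x, y, eps=1e-4):
    # central-difference spatial gradient on the current frame
    gx = (curr[y][x + 1] - curr[y][x - 1]) * 0.5
    gy = (curr[y + 1][x] - curr[y - 1][x]) * 0.5
    gt = curr[y][x] - prev[y][x]          # temporal difference
    mag2 = gx * gx + gy * gy + eps
    # flow points against the gradient, scaled by the frame difference
    return (-gt * gx / mag2, -gt * gy / mag2)
```

Run over every interior pixel per frame pair, that's the kind of loop that maps naturally onto an OpenCL work-item per pixel.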
Question -- how could one render this patch in QuartzCrystal?
I was half inclined to say just drop it onto the QuartzCrystal application and let it run. Not that simple though.
The setup does work A-OK if given a video file, and will preview live in QuartzCrystal, but obviously our 10 seconds of video might take up to 2 minutes to render offline, and success seems codec-dependent, H.264 being more reliable than, say, Ogg.
However it doesn't click with the live video input into Quartz Crystal - back to screen capture [again].
BTW - great use of the blur patches in this growing collection of Optical Flow examples.
Yeah, I guess the question was premature re: live video input to QuartzCrystal.
Yep, it's definitely the collision effect that is lacking, in terms of how easy it is to do in QC with the available tools.
Huh, I wonder what the impasse on the feedback was. What I used for the fluid aspect was to feed Image PixelS an image that has been passed through the v002 Optical Flow GPU Distortion effect, which has that liquid-sim sort of look... that's based on 2D Navier-Stokes to my understanding, but it doesn't have quite the same physics reaction as what's in the vid.
I do agree with you about flow, though I think the motion of the particles should be affected by the motion of the subject, not just random motion... the pixel colors should definitely "rest" in basically correct coordinates. That said, I have seen some things where color values get assigned to different objects and those objects randomize - a program that made people look like they were composed of swarming insects - and it looked very awesome.
So, I don't find it impressive from a coding standpoint, I just like the look. I'm a sucker for image-to-particle type stuff and liquidy stuff, and this combines both. In addition, QC's lack of a "really awesome" particle engine makes stuff like this stick out more to me.
I totally agree with your comments about fairlight's stuff, it's really amazing looking - it's much more developed and nuanced than this.
Now that's interesting, because when I use that kernel in place of the v002 Optical Flow Displace object in my patch, it does a good job of yielding a similar look, but fps reduces.
...I was thinking of an OpenCL based image to particle type system thing actually.
Yeah, definitely just create an output splitter for the video patch, if I didn't already, and then feed a movie file to it. Then offline render in QuartzCrystal. A screengrab would work too (but I sure hate how any I'm aware of tend to make QC slow down).
This plugin (PixelS) is exactly what I've been looking for to make some of my projects alive in Quartz. Thanks for bringing that up!
Yes, it's unique in the functions it opens up.
Pity about the lack of speed, but it's okay if you use it on a still image and do all your pre-calculation before your comp starts flowing.