Infinite Loop 2 (Composition by gtoledo3)

Author: gtoledo3
License: (unknown)
Date: 2010.07.28
Compatibility: 10.6
Categories:
Required plugins:
(none)

This patch shows how you can take the events inside of an iterator, record them to a queue, and then Spooky them outside of the iterator so that they can be used elsewhere.

This principle makes it possible to create "interaction" structures where every vertex can be "grabbed" via the interaction patch, and they are all inherently "hot".

Attachment: Infinite_Loop 2.qtz (12.02 KB)

cybero's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

A really nice construct and it also shows how well the Spooky patch can work, which is a patch I'm only recently re-introducing myself to.

dust's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Why do you think using two iterators, one with renderers and one without, is bad form? Is it an FPS thing, or are you thinking it's cleaner to use Spooky or OSC?

gtoledo3's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Well, correct me if I'm wrong, but you have to, because this uses Interaction. It's not a problem to create a struct in an iterator, publish the results, and then render them using another iterator (or a structure renderer, or a mesh creator/renderer), but there is no way to write a queue of interaction events. If the Interaction patch is inherent to the construct, then the iterator actually has to be a consumer. Then, the only way to get the structure out is to spooky it, which actually works.

Also, in case it's not already patently obvious, this can be used with GL Tools.

Attachment: Infinite_Loop GL.qtz (12.97 KB)

gtoledo3's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Dude, I swear I want to go out and buy you SL! ;-)

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Thanks, I swear I want you to splash on a new MacPro for me. SL comes free with each unit ;)

Does anybody know if those new ATI cards are OpenCL compliant, and how compliant? I read a post on Ars Technica suggesting Apple doesn't want anything to do with the Intel on-board GPUs until they get them OpenCL compliant, as they want everything going forward to be OpenCL safe. That would suggest the ATI stuff is OpenCL, but that's not exactly a guarantee, is it?

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

usefuldesign.au wrote:
Does anybody know if those new ATI cards are OpenCL compliant, and how compliant?

CL compliance is a boolean -- either you are, or you aren't. I've not seen a shipping ATI card that supports CL on OS X.

[disclosure: my work desktop is ATI powered]

usefuldesign.au wrote:
I read a post on Ars Technica suggesting Apple doesn't want anything to do with the Intel on-board GPUs until they get them OpenCL compliant, as they want everything going forward to be OpenCL safe. That would suggest the ATI stuff is OpenCL, but that's not exactly a guarantee, is it?

I don't know where Ars suggested that, but that's not what I've seen -- where is CL actually used? Apple has many more interesting technologies affecting their bottom line (iOS devices, for example, pay over half the bills I think?) that can't even make a mockery of CL support.

[more disclosure: my work desktop's lack of CL hasn't affected my user experience in any appreciable way]

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

cwright wrote:
CL compliance is a boolean -- either you are, or you aren't. I've not seen a shipping ATI card that supports CL on OS X.
Anecdotally, some cards sound more capable than others, don't they, in that more comps run without errors.

Quote:
I don't know where ars suggested that, but that's not what I've seen -- Where is CL actually used?
The notion, I guess, is about future proofing current hardware, although I agree USB3/latest FW would seem more urgent than OpenCL but I don't get to see the road map.

http://arstechnica.com/apple/news/2010/07/counterpoint-intel-and-apple-c... A polemic article (one side of a debate I gather) not a fact finding mission ;)

cwright wrote:
Apple has many more interesting technologies affecting their bottom line (iOS devices, for example, pays over half the bills I think?) that can't even make a mockery of CL support.
Yes desktop and laptop devices need only apply.

Does that mean OpenCL is currently a no-go for FxPlug effects plugin developers for FCP, Motion and also AE? If those things are written in Obj-C/C++ then maybe it's not such a gain but in QC the speed delta for processing float/point coord data in OpenCL over JS seems to be factors of 10 — even on existing laptop GPU cards.

Quote:
[more disclosure: my work desktop's lack of CL hasn't affected my user experience in any appreciable way]
That's 'cause you don't use big data sets in MS Excel ;)

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

usefuldesign.au wrote:
Anecdotally, some cards sound more capable than others, don't they, in that more comps run without errors.

Those are driver related more than hardware related, sadly. But as far as hardware goes, I don't think there are any ATIs capable of offering the full CL spec, so there won't be a 90% driver (at least, I certainly hope not).

usefuldesign.au wrote:
The notion, I guess, is about future proofing current hardware, although I agree USB3/latest FW would seem more urgent than OpenCL but I don't get to see the road map.

http://arstechnica.com/apple/news/2010/07/counterpoint-intel-and-apple-c... A polemic article (one side of a debate I gather) not a fact finding mission ;)

I've read that, but I don't think it holds any water. (or at least, not very much). I mean, CL's definitely a rad technology that needs to happen one way or another, but it's niche enough that there aren't a lot of actual uses for it. That means it's not an essential feature. For 99% of users, I can't think of many things they do day to day that depend on CL in any way. The current GPU offerings are fairly dated likely for similar reasons: 99% of users don't care.

usefuldesign.au wrote:
Does that mean OpenCL is currently a no-go for FxPlug effects plugin developers for FCP, Motion and also AE? If those things are written in Obj-C/C++ then maybe it's not such a gain but in QC the speed delta for processing float/point coord data in OpenCL over JS seems to be factors of 10 — even on existing laptop GPU cards.

I don't see the mismatch -- if I've got C code, I can augment it with CL (provided I'm clever enough to make wise use of it). There's nothing preventing that. The delta on QC is largely due to over-encapsulation -- a structure of floats is too-far removed from a float *, and the price to pay for that is steep. I'd expect JS to be maybe half as fast as C (assuming conversions were free, which they're not, both on the JS side and the QC side). And CL can still work on the CPU, so it's not like a GPU that doesn't support CL makes it totally dead on the platform.
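
To make the float * point concrete, a minimal sketch (not from this thread or QC, just an illustration against the standard OpenCL 1.x C API; the kernel and variable names are made up, and error checking is omitted): a flat float buffer, one trivial per-element kernel, and a fall-back to a CPU device when no CL-capable GPU is available.

/* Minimal OpenCL 1.x sketch: scale a flat float array on whichever device is present. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *kSource =
    "__kernel void scale(__global float *v, const float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] *= k;\n"
    "}\n";

int main(void) {
    enum { N = 4096 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU, but fall back to the CPU device -- the point being that the
       CL work still completes on a machine whose GPU doesn't support CL. */
    cl_device_id device;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The data is just a flat float * -- no per-element structure wrapping. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "scale", NULL);

    float k = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(buf), &buf);
    clSetKernelArg(kernel, 1, sizeof(k), &k);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[1] = %f\n", data[1]); /* expect 2.0 */

    clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}

The "fallback" is nothing more than asking for CL_DEVICE_TYPE_CPU instead of CL_DEVICE_TYPE_GPU; the kernel source itself doesn't change, which is why a CL workload still completes on hardware whose GPU can't do CL.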

usefuldesign.au wrote:
That's 'cause you don't use big data sets in MS Excel ;)

nah, it's because, at the end of the day, CL works on the CPU, so any CL workload still gets completed. Would it be faster on a more capable GPU? probably. but it's already fast enough for what I do that it doesn't matter (most of my day is spent waiting for Instruments and the Compiler and GDB to do their thing without chewing up all my memory -- CL cannot help with any of that). I've seen very few uses of CL in QC that actually made reasonable use of CL; mostly it's just to work around JS being too slow, which is useful, but a bit of overkill.

cybero's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

I'd be very surprised if in fact OpenCL, which ATI/AMD have a current OpenCL SDK / API for, doesn't get extended by Apple to those ATI cards that are supported by ATI's SDK. It is, after all, part and parcel of what these newer ATI cards are built to do. Plus ATI cards do the GLSL / CoreImage stuff beautifully.

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

cwright wrote:
And CL can still work on the CPU, so it's not like a GPU that doesn't support CL makes it totally dead on the platform.

Well I think that fact of CPU fallback, which I'd overlooked, wins the standardised hardware argument hands down.

Speeding up JS is overkill. Really? JS is my bottleneck in at least 50% of under-performing comps — too bad that's irrelevant. And it's more like 0.1% efficient, not <50%, I would guess. Yes, I'm on old hardware, but even the latest isn't going to give me a 100x performance boost, and what I write in QC I'd like to work on machines less powerful than the latest and greatest. If QC could be improved to expose the float * more efficiently to the CPUs without OpenCL then great — bring it on!!

As for the "99% of users don't want it" argument, while I know where you're coming from — essentially a bang-for-buck software engineering development cost argument — I just don't know where to start… when I hear 'average user' I'm usually listening to a swap-meet PC motherboard dealer. Toyota didn't become a leading seller of cars to average drivers by trying to make an average car; quite the reverse, they tried to get out in front of the trends, towards fuel efficiency, technological advancement* and so on...

Back to Apple, in the pre-OSX days Apple computers always maintained an edge over PCs with artists, DTP and others because the hardware and software could generally cope with the demands of shuffling all those bits around without choking like most PCs did. Even so, a 3-year-old Mac has always struggled with the latest round of Apple software offerings, e.g. the iLife apps. iPhoto was a sort of poster-boy app for selling Macs to average users, but streaming 100s of image previews off the HD to scroll through your images actually sucked on an older Mac. Apple software demands that the hardware keep moving; MS with DirectX will push their equivalent to OpenCL and Apple needs an answer. 'Non-linear' editing, same story: today video editing on your PC is an average-user requirement.

Adobe and others writing their own GPU code to do zoom/pan and other redraw acceleration mitigates against having an OS X framework to do it, but I'd hazard a guess that more than 1% of Mac users are earning their livelihood using Adobe and other apps that have GPU-specific code. Waiting for Adobe to write code specifically for Apple after doing it for PC has always been interesting too. As sjobs noted, Adobe was the last major app developer to rewrite their Carbon apps in Cocoa — >3 years I think.

  • afterthought… maybe Toyota weren't so big in the US, where the art of selling by engaging the physiology of muscle, followed by hard-outside-soft-inside and bigger-is-better, has been a much more favoured approach

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

cybero wrote:
It is, after all, part and parcel of what these newer ATI cards are built to do.

keyword is "newer" -- until a few hardware bumps, I don't think it's likely ;)

cybero wrote:
Plus ATI cards do the GLSL / CoreImage stuff beautifully.

GLSL / CoreImage is a laughable subset of what CL demands (integer types, atomic operations, some semblance of branching) -- fully supporting GLSL/CI doesn't mean you have the guns to do CL; fully supporting CL means you have the guns to do GLSL/CI.

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

usefuldesign.au wrote:
Speeding up JS is overkill. Really? JS is my bottleneck in at least 50% of under-performing comps — too bad that's irrelevant.

I think you misunderstood (or I wasn't very clear) -- I didn't at all mean CL didn't solve a useful, relevant problem in QC (slow JS), I meant that the tool used to do so (CL) is generally overkill. A parallel is a typical american SUV owner -- they have a car capable of transporting half a dozen humans, but 99% of the time it's transporting exactly 1 (under-utilizing capabilities). for CL in QC, it happened because there isn't really any middle ground between CL (atomic screwdriver) and JS (dramatically slower). For many (but not all!) CL uses, a middle-ground tool of some kind would be a better solution. (I have no idea what that middle ground is, and suspect it doesn't exist, otherwise it would have been used as well :) CL's designed to handle really big data sets with really demanding compute requirements, but I typically see small datasets (a few thousand elements) with trivial calculations (a couple multiplies or adds per element, chained together with other few-op kernels -- completely awful).
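
To make the "few-op kernels chained together" complaint concrete, a small OpenCL C sketch (the kernel names are invented for illustration): three separate trivial kernels, each costing a dispatch and another pass over the buffer, versus one fused kernel doing the same arithmetic in a single pass.

/* Anti-pattern: three dispatches (and intermediate passes) for a few flops each. */
__kernel void scale(__global float *v, const float k)  { size_t i = get_global_id(0); v[i] *= k; }
__kernel void offset(__global float *v, const float c) { size_t i = get_global_id(0); v[i] += c; }
__kernel void square(__global float *v)                { size_t i = get_global_id(0); v[i] *= v[i]; }

/* Fused: one dispatch, one pass -- the per-element work is still tiny,
   which is the point: the data set rarely justifies the machinery. */
__kernel void fused(__global float *v, const float k, const float c) {
    size_t i = get_global_id(0);
    float x = v[i] * k + c;
    v[i] = x * x;
}

Each extra dispatch spends runtime overhead on a couple of flops per element, which is the "completely awful" part; fusing them at least amortizes that, though the data set may still be too small to justify CL at all.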

usefuldesign.au wrote:
Toyota didn't become a leading seller of cars to average drivers by trying to make an average car, quite the reverse they tried to get out in front of the trends, towards fuel efficiency, technological advancement* and so on...

I admit I'm not well-versed in automotive history, but I'm going to bet that Toyota came ahead because of 1 crucial optimization: cost. The same thing Ford did in his day, and the same thing Microsoft did in the 90's. Toyota today is perhaps a trend leader (in hybrids at least), but that wasn't the initial attack vector.

usefuldesign.au wrote:
Back to Apple, in the pre-OSX days Apple computers always maintained an edge over PCs with artists, DTP and others because the hardware and software could generally cope with the demands of shuffling all those bits around without choking like most PCs did.

You mean edges like no preemptive multitasking, no protected memory? Windows 95 could recover from application crashes, and Linux has always been memory protected (so a dying app didn't kill the system), but Mac OS didn't get that till 2001 with OS X 10.0. By that fact alone, I am seriously in doubt of it "generally coping" particularly well with anything demanding. (The software was no doubt top-notch, which is why it didn't die entirely, but superior it clearly was not).

On the hardware side, I'll grant them one bit of genius: a memory mapped framebuffer, and 32 bit address space. Those hindered PCs for a very long time (and still do -- why does BIOS still exist, exactly?). in raw clock-for-clock power, it was likely a tie at best, or the crown going to intel (except for maybe the G4/G5, and when Altivec came about and kicked MMX's butt).

usefuldesign.au wrote:
Even so, a 3-year-old Mac has always struggled with the latest round of Apple software offerings, e.g. the iLife apps. iPhoto was a sort of poster-boy app for selling Macs to average users, but streaming 100s of image previews off the HD to scroll through your images actually sucked on an older Mac. Apple software demands that the hardware keep moving; MS with DirectX will push their equivalent to OpenCL and Apple needs an answer. 'Non-linear' editing, same story: today video editing on your PC is an average-user requirement.

Direct Compute (DX11) came after CL. CUDA and GPGPU and CTM came way way before (shortly after shaders in general, with some GPGPU stuff actually going back to fixed-function era (holy crap that's hardcore)). It's not like anyone has been jump-for-joy excited about any of those. They're all cool technologies, and useful, and they serve a small but valid niche.

I've edited video maybe twice. awright (spouse) has done it.. zero times. awright (my sister) has done it zero times. my parents, zero times. kwright (youngest sister), again zero times. I still doubt the average user requirement bit, and with a non-zero, non-cherry-picked sample set.

usefuldesign.au wrote:
Adobe and others writing their own GPU code to do zoom/pan and other redraw acceleration mitigates against having an OS X framework to do it, but I'd hazard a guess that more than 1% of Mac users are earning their livelihood using Adobe and other apps that have GPU-specific code. Waiting for Adobe to write code specifically for Apple after doing it for PC has always been interesting too. As sjobs noted, Adobe was the last major app developer to rewrite their Carbon apps in Cocoa — >3 years I think.

And with spotty CL support, that problem isn't solved. Just yesterday I was having a chat with a CoreImage engineer about how you cannot write a (non-trivial) CL kernel that performs well on both CPUs and GPUs, because they're fundamentally different processors that handle data in fundamentally different ways. So when performance actually matters (as in, when it's measured and used for benchmarks, not when it's sold to users who don't know a GPS from a GPU), you're still ... you guessed it, writing 2 kernels (and if you're going to play on windows, you're going to write HLSL/DC as well, so that's 3, and maybe SSE, though that could work on both. But windows doesn't demand SSE, so you may need a fallback... oh crap, the problem isn't solved at all really... )

I'm really not trying to poo-poo CL or anything; I just think it's mildly delusional to expect GPU-accelerated CL on an ATI based on past trends. It's likely to happen someday, but it's not something so critically important that it's a business-changing decision. A contrast is the GMA offerings -- those were anemic to the point where they got ditched for better offerings from other vendors.

gtoledo3's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

I would think that the average user of Apple products doesn't even use macs, considering the proliferation of music players and phones. I don't know.

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Putting it that way, I think you're on to something :)

gtoledo3's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

I totally agree with the thought that laptop/desktop is niche, compared to portables and mid size portables like the iPad, which every non-tech person I know is in love with, and many tech people as well. People mostly want these devices for consumption of media and social networking. It's armchair philosophy, but I think Apple is uniquely poised as the leader in computer based appliances now.

If QC enabled a box where you could shoot people on your TV and rack up points like Duck Hunt, or slime people or whatever, THEN you might have a widespread-appeal QC-based product that makes sense to run in a bigger-footprint device like AppleTV and actually directly relates to money, but I don't think it would be as popular as Xbox. At that, I'm not necessarily sure it would have to, or make sense to, use QC (though it's a heck of an established tool). All conjecture anyway...

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

I don't know why pre-OSX Macs were so superior on graphics (or perhaps why I and so many were delusional in that belief) but it was a widely held belief in the graphic design and multimedia world, with the exception of anything 3D, which was almost totally PC (software mainly?). I think typically PCs weren't spec'd on the hardware side, with narrow buses, no SCSI drives, low memory, crap GPUs, no second display without a mini-computer price tag?

I'm sure a custom-built, highly spec'd PC handled typical DTP better than the average PC, but the ones I used used to go down ALL the time and definitely crawled on image-based tasks. Macs went down a fair bit, but not nearly as much, and you got to know the kinds of tasks that would haze them in a work routine and be careful in those areas. Whereas PCs seemed to be flaky all over the shop when doing a heavy workload (maybe it was just the Adobe app code? I can't say). As you said, when the G4 hit, the gap was obvious. All that was because Apple invested in advancing the benchmark hardware, either off the shelf or getting their hands dirty when it seemed worth it. Linux, what was that? I used a primitive DTP package that employed a mouse but still used command-line interface stuff on MS-DOS or Linux before I got a copy of ReadySetGo! and then Quark; the difference was profoundly stark (even with PageMaker, which always sucked IMO).

On the car industry, which I know very little about, I can tell you hybrids are nothing more than a PR exercise at this stage for Toyota, and I've read they haven't even paid off the capital investment of development yet. My opinion is that all-electric will kill hybrid in >9 out of 10 sales by the end of next decade, so whether or not Hybrid was a good investment long term for Toyota remains to be tested.

Also, the cost thing was important for Toyota, but not in the way you seem to be suggesting. Reliability, performance, and therefore resale value and reputation were what they went for, along with advancements in robotic assembly lines and so on. They really invested massively in that area, and it was a long time before European and finally US and Aust makers (car labourers being a protected species in both countries) went down that path, along with just-in-time manufacture etc. Continually cutting waste was a big cost & time saver. I've been told it was an American businessman who took a philosophy of continuous improvement and quality-first to Japan after being rejected in his homeland and was embraced by the Japanese — who love to do things well for the sake of it and (with exceptional humility, recasting a feudal society overnight) looked to the US for leadership post-WWII.

Paralleling QC, Section II of the Toyota Way relates to pull systems of production (Principle 3: Use "pull" systems to avoid overproduction). So perhaps the other sections of this philosophy are germane to QC development too. Check it out! (NB: Apple could probably teach Toyota a thing or two these days about the Toyota Way, especially the way it introduces new consumer devices to a market that's 'already done the tablet PC', for example.)

Even if nobody uses video on their desktop (and your sample is so small statistically as to be irrelevant, but I'm sure you did stats in Yr 10 maths ;-) ), just the internet browser requirements, multitasking, and iWork glossy Cocoa interface overhead are constantly driving the base requirements of average Mac/PC hardware up. It's not like Moore's law is halving the price of an entry-level PC or Mac every 18 months, although a $1200 MacBook is a bit less than the point of entry 10 years ago, I suppose.

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Third that. To think how many years Apple's been holding back the release of some of this stuff (e.g. the iPad) — not that it was ready all those years ago as some people infer — but the concepts and continual refinement to the point of release. Patently Apple has some interesting home automation via iOS device control stuff. Potentially another genre move for Apple. I wonder if there's any aspect of human activity they aren't considering ripe for a digital revolution (read: market encapsulation). It seems to be the seduction of the interface that's driving the love affair/demand for these devices. How many Kindles sold? How many tablet PCs, other than to Coke reps?

I still think that people who work on computers in design/engineering/science/finance professions will still want trucks for a long time to come; even if their homes are full of scooters, cars and pushies.

gtoledo3's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Not getting into every point that's being discussed, but I know that when cwright starts throwing out 2001-ish era Macs, the first thing that jumps to mind is using RADAR recording systems with BeOS, because Macs weren't trustworthy (and neither were Windows-based PCs). But that was audio, which was pretty demanding - and asking for something that could save the audio right up to the point that you yank the cord out of the wall was something that really only BeOS could pull off at the time. As a matter of fact, I'm not sure if OS X gives that kind of reliability on writing to file or not; never tested that...

cybero's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

Well, I guess it's all down to where there's a will - the lack of full support is especially galling though in some ways considering the fact that ATI cards are seemingly the standard in the new iMac range.

Still, one could either go the Mac Mini route or get a workstation Mac.

I don't have a clue as to how well these newer iMacs do OpenCL on their CPUs. Probably faster than on my 9400 iMac :-).

BTW, the observation about GLSL/CoreImage was based upon seeing one of the ATI Radeons running a GL Tools / GLSL composition of mine, and it looked markedly different in terms of sprightliness of performance. I do get that having a Quartz card doesn't mean it's a GPU / OpenCL card too.

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

unfortunately, it's sometimes that "sprightliness" that causes non-conformance. For instance, on older ATIs they focused on 24bit floating point (depth buffers are 24 instead of 32, and the extra 8 bits is used for the stencil buffer) to the point where 32bit floats either weren't supported at all, or didn't have a full gamut of operations supported. So that optimization (chopping off 8 functional bits -> less bandwidth used, less memory used, less propagation/cascade delay) actually crippled any chance of supporting 32bit floats (which is required by CL). NV, on the other hand, had pipes for both 24 and 32, and that seemed to have fared much better. I don't think ATI does that any more (I'm super-rusty on GPU microarchitecture, so I could be totally making all this up), but it sure feels like there's "something" missing when it comes to missing CL support after all this time...

cwright's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

usefuldesign.au wrote:
I don't know why pre-OSX Macs were so superior on graphics (or perhaps why I and so many were delusional in that belief) but it was a widely held belief in the graphic design and multimedia world, with the exception of anything 3D, which was almost totally PC (software mainly?).

I don't have stats, but I think that Mac OS carved out the top 1GB of the 4GB 32 bit address space for graphics -- it was absolutely insane at the time (we're just now getting 1GB vram gpus), but it allowed for extraordinarily simple blitting/drawing. PCs, on the other hand, had set aside a whopping 64k (!) at 0xA0000 (with ~32k of text mode stuff at 0xB8000 or so), so as screens got bigger, you started needing to do bank switching (bleh -- I remember writing that kind of code as a kid -- made fast blitters nearly impossible), or writing GPU-specific code (which had its own pitfalls). VESA 2.0 amended this with Linear FrameBuffer support, but that had its own complexities (32bit apps + not in protected mode + raw hardware access). That made Macs worlds easier to program when it came to graphics. IBM clones were still totally designed around text-displaying teletypes (and still are today -- look at bios screens; EFI has existed for how long now?)
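
For anyone who never wrote that kind of code, a rough sketch of what the 64k window at 0xA0000 meant in practice (real-mode, Turbo C-era code that won't build on a modern compiler; vesa_set_bank() is a hypothetical helper wrapping the VESA window-control call, INT 10h / AX=4F05h):

/* Mode 13h: 320x200x8 = 64000 bytes, which fits entirely in the A000:0000 window,
   so a pixel write is a single store. */
unsigned char far *vga = (unsigned char far *)0xA0000000L;  /* segment A000h, offset 0 */

void put_pixel_13h(int x, int y, unsigned char color) {
    vga[y * 320 + x] = color;
}

/* 640x480x8 needs ~300k, so the same 64k window has to be re-mapped ("banked")
   whenever the target byte falls outside the currently mapped slice. */
void vesa_set_bank(unsigned int bank);  /* hypothetical wrapper around INT 10h, AX=4F05h */

void put_pixel_banked(int x, int y, unsigned char color) {
    unsigned long addr = (unsigned long)y * 640 + x;
    vesa_set_bank((unsigned int)(addr >> 16));      /* assumes 64k window granularity */
    vga[(unsigned int)(addr & 0xFFFFu)] = color;
}

Paying for a possible bank switch on every pixel write (or contorting the blitter around bank boundaries) is what made fast PC blitters so painful next to a flat, memory-mapped framebuffer.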

usefuldesign.au wrote:
I think typically PCs weren't spec'd on the hardware side, with narrow buses, no SCSI drives, low memory, crap GPUs, no second display without a mini-computer price tag?

bus was hit or miss -- 8088 was "poor man's PC" with an 8 bit bus, 8086-80186 had a 16 bit bus, 80286 was 16 still, 80386-80486 gained 32bit buses, and 80586 (pentium) and later had 64 bit with non-trivial DRAM protocols that made actual bus width "irrelevant" (it's important, but there are several factors at play that make it non-trivial to deduce which is "best"). SCSI was always too expensive for what it offered -- it had some really cool features (command queuing, for example, didn't hit IDE till the SATA era of the past ~5-7 years), but overall its cost/benefit ratio was inferior to IDE (I'm sure that'll start a flamewar; bring it on). Every GPU of the era was crap (either a dumb framebuffer + ramdac, a sprite rasterizer, or something with some simple primitives like points and lines) -- it was the accessing the framebuffer that made things nice.

usefuldesign.au wrote:
Whereas PCs seemed to be flaky all over the shop when doing a heavy workload (maybe it was just the Adobe app code? I can't say).

When PCs were flaky, it was almost 100% the OS's fault (almost invariably windows) -- even in college (2001-2004), I'd absolutely hammer on my linux laptop because it was stable, while my windows-toting cohorts would cringe at how hard I pushed my machine. "Aren't you afraid you're going to crash it?" they'd ask. I think I can count the number of panics I had on no hands (my desktop, however, did panic due to some quirky chipset effects that AMD/Nvidia didn't clearly document until much after the fact). 1 sample's a poor statistic, I know, but the hardware has never been a weakness of PCs, only the software.

usefuldesign.au wrote:
As you said, when the G4 hit, the gap was obvious. All that was because Apple invested in advancing the benchmark hardware, either off the shelf or getting their hands dirty when it seemed worth it. Linux, what was that? I used a primitive DTP package that employed a mouse but still used command-line interface stuff on MS-DOS or Linux before I got a copy of ReadySetGo! and then Quark; the difference was profoundly stark (even with PageMaker, which always sucked IMO).

Linux was useful for non-trivial tasks in the mid-late nineties (smokris and I cut tons of code that only worked on linux, including audio processing and graphics stuff). There wasn't anything off the shelf, but the capabilities were there. Like you said, it was due to investment (not just for benchmarks, but for software in general -- great hardware is useless without great software).

usefuldesign.au wrote:
...so whether or not Hybrid was a good investment long term for Toyota remains to be tested.

Wasn't it John Maynard Keynes who said "in the long run, we're all dead"? :) It's very difficult to class decisions as purely good or bad with so many variables (that's more philosophical than I'd like to get with this discussion, just saying). Toyota's sunk costs in hybrids/electrical vehicle R&D can very much apply to all-electric vehicles, and unless there's an absolute breakthrough in battery energy density, the US at least will probably not go all-electric in my lifetime (unless someone invents a shrink ray to undo suburban sprawl) -- there simply isn't enough volume in an average US-sized car to hold a battery capable of propelling said car (plus additional battery weight plus cargo) to distances comparable to gasoline, and at cost parity (now, if gasoline truly cost what it incurs environmentally, that picture would be radically different, but that's another politico-economic quagmire for another day ;). No one likes it that way (except for oil companies), but it is what it is :/

usefuldesign.au wrote:
Also, the cost thing was important for Toyota, but not in the way you seem to be suggesting.

I think what you're describing is "TCO" (total cost of ownership) -- that's what I meant by cost too (though I wasn't as explicit). resale value is very much why they dominated, which is a cost thing (if you expect to realistically be able to sell the car for half its value, it really only costs half of what you're paying - apologies for the split infinitive, english purists ;). How they went about doing that is quite fascinating, as I'm sure you're aware, but those are details that no one cares about when purchasing a vehicle. Their car-purchasing decision is based essentially on costs and advertising effects. If Toyota could sacrifice baby seals to cut their costs in half, you can bet they'd do it (EPA/PETA notwithstanding) and they'd still sell tons of cars. Nike + child labor sell boat loads of cheap shoes at astronomical markups.

usefuldesign.au wrote:
Even if nobody uses video on their desktop (and your sample is so small statistically as to be irrelevant, but I'm sure you did stats in Yr 10 maths ;-) ), just the internet browser requirements, multitasking, and iWork glossy Cocoa interface overhead are constantly driving the base requirements of average Mac/PC hardware up. It's not like Moore's law is halving the price of an entry-level PC or Mac every 18 months, although a $1200 MacBook is a bit less than the point of entry 10 years ago, I suppose.

The stats are small, but I think it has some merit -- it covers a fairly wide demographic (though the sample count is tiny, so the confidence is indeed very low as you note -- it's also cherry-picked, because I do know some people who do edit video, but that's their job so I'm not sure how to factor that in :). I'm willing to bet that the number of people who actually edit video more than once per year is less than half of all users -- therefore, less than average :). Of those that do, simple titling and cross-fading is probably sufficient for what many need, so I don't consider it "editing" (cutting chunks out, rearranging them, etc.).

I've actually seen a leveling trend in computing power over the past several years -- CPUs are still around 2.4-3GHz (just like in 2003), memory buses still have terrible latency, hard drives don't read significantly faster (except for good SSDs). Some of those are due to physical limits (higher CPU clocks = more power dissipation, capacitive memory can never be as low-latency as SRAM, hard drives can only read so many magnetic transitions per second), but many are also economic: no one needs anything faster. (Note how hard drive size is still bursting at the seams -- there's apparently demand for that.)

A lot of the requirement creep is because, unfortunately, many programmers have ridden the Moore's-law wave for a long time, and never learned good engineering practices and how to always be mindful of algorithmic complexity. Paired with shipping-deadline pressure, writing great code is often sacrificed when good-enough code is already available. Technologies like Garbage Collection and GCD and OpenCL have been invented in the hope that they make it easier to write great code, but I have yet to really see that take place generally (there are great examples, of course). GCD is by far the best of the bunch -- it actually does simplify a ton of threading stuff when paired properly with Blocks, but even that has pitfalls that you need to be aware of to write truly robust code. Very few people use GC because it's still possible to leak memory (via overrooting), and you still end up needing to do memory management sometimes (nulling-out dead pointers, etc). Additionally you need to be mindful of __weak vs. __strong (which is way easier than -retain/-release, right? err...).

Very few apps use CL because ... very few apps actually work on really large data sets in ways that are conducive to CL. Outside of QC, I don't think many of my computer-aided tasks benefit from CL in any meaningful way. Web browsing? no. Email? no. Watching video? maybe, but h.264 hardware decode is the right answer there. Spotlight searching? no. Compiling code? no. Listening to music? maybe, but hardware-accelerated AAC/MP3 is the right answer there. Copying files? no. Doing my finances? no (ha, I wish I had enough transactions/dollars to make CL worthwhile for my finances ;). Image editing? Yes, but CoreImage/GLSL already exists, and can do most of what I need image-wise (this is a solid use case).
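
As an aside on the GCD-plus-Blocks remark, a minimal sketch (plain C with clang's Blocks extension and libdispatch, nothing QC-specific) of the kind of embarrassingly parallel loop GCD makes nearly free:

/* Builds on OS X with: clang demo.c (Blocks and libdispatch ship with the system).
   Elsewhere you'd need -fblocks and a libdispatch port. */
#include <dispatch/dispatch.h>
#include <stdio.h>

#define N 100000
static float data[N];

int main(void) {
    /* dispatch_apply spreads the iterations across a global concurrent queue;
       each block invocation receives one index i, and the call returns only
       when all iterations have finished. */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(N, q, ^(size_t i) {
        data[i] = (float)i * 0.5f;   /* independent per-element work, no locking needed */
    });

    printf("data[10] = %f\n", data[10]);
    return 0;
}

The pitfalls mentioned above are real, though: capture the wrong thing, or touch shared state inside the block without synchronization, and it's just as easy to write subtly broken code.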

dust's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

OK, that makes sense. Interactions are weird; it is possible to queue them up with a Queue, but I have yet to figure out how to get them out of a queue and into a Structure Splitter of type "interaction", which is why I more often use a custom hit test with things like TUIO, although interactions are awesome. I just found out the crosshair/arrow-looking button on the viewer window turns on interactions for all interaction objects. Even though I just found this feature out, I keep forgetting about it, as I have grown accustomed to using numbers to move things around now. But either way, good example of how to use interactions within a consumer iteration context, George.

usefuldesign.au's picture
Re: Infinite Loop 2 (Composition by gtoledo3)

cwright wrote:
SCSI was always too expensive for what it offered -- it had some really cool features (command queuing, for example, didn't hit IDE
Except IDE pulled CPU cycles every time it blinked; with virtual memory demands being what they were, this may have been a factor? Don't underestimate the irritation of even momentary cursor lock, menu delay etc.; it wasn't just the progress-bar madness for every second PS>Filter>… that used to drive us crazy.

cwright wrote:
I think I can count the number of panics I had on no hands
Wish I could say that for my current Mac (dG5, 10.5.8). So much for the rock-solid Unix core, Apple!

cwright wrote:
Toyota's sunk costs in hybrids/electrical vehicle R&D can very much apply to all-electric vehicles, and unless there's an absolute breakthrough in battery energy density, the US at least will probably not go all-electric in my lifetime (unless someone invents a shrink ray to undo suburban sprawl) -- there simply isn't enough volume in an average US-sized car to hold a battery capable of propelling said car (plus additional battery weight plus cargo) to distances comparable to gasoline, and at cost parity (now, if gasoline truly cost what it incurs environmentally, that picture would be radically different, but that's another politico-economic quagmire for another day ;). No one likes it that way (except for oil companies), but it is what it is :/

A lot of hybrid engineering is about the 'camel' problem of an engine && electronic drive chain inside one hood. Straight electrics are so simple that backyard operators are retrofitting them to regular ICE model cars — and car companies don't like cars they can't sell parts for to reclaim the car purchase cost many times over. A new car bought as parts costs about 10x the purchase price. ICEs need regular repair and maintenance parts over their life cycle; electrics do not.

There are thousands of threads on oildrum.net etc. devoted to the remaining assertions quoted, and this is too big to cover in convincing technical detail, but I would note that billions of dollars of market capitalisation doesn't share your view. Australia is one of five test-case locations chosen by Better Place to prove all-electric for most of the car-driving population in any location, precisely because of the long driving distances we do (okay, maybe not proof for Antarctica — the batteries might get too cold ;). Do you know of Shai Agassi? He started a software company with an Apple ][ and a loan from his dad. SAP bought him out and he was being groomed for CEO when he attended Davos and was asked how he could make the world a better place. He gave his attention to the problems of peak oil and the energy descent, along with the global-warming contribution of fossil fuel use. He looked at it for two or three years before arriving at all-electric, rejecting hydrogen, biofuels and others along the way. Demand for commuter vehicles (India, China and South America about to sign a free trade agreement as we speak) is set to skyrocket. Supply pressure is also set to ramp as the remaining oil gets more and more expensive to extract (did I mention future liability insurance costs!) and many of our 50% remaining 'reserves' (some of what Bucky Fuller referred to as Spaceship Earth's capital account) are low grade, like tar sands (did I mention the env. disaster waiting to happen).

When Agassi sought a car company to partner with BP (Better Place), the company he left SAP to start, Renault was the only one to even give him the time of day. Since placing the first order for 100,000 cars from Renault, every major car collective has magically revealed concept/pre-production all-electric cars. Read betterplace.com for how they think they will solve the range issues. It's a multi-prong approach, including a battery-exchange technology Japan has built for them that is robotic, is faster than filling with gas, and doesn't require you to leave your car — the snacks are brought to you, I guess ;). Also remember battery technology has improved in every metric (not just density but also re-use cycles, material re-cycling (100%), etc.) over the last 20 years, and the curves haven't shown a plateau yet. BP (oh, the irony!) will own the batteries too — which relieves the consumer of a large part of the capital investment in an electric car (the total COST of ownership that you identified). As tech improves, batteries can be updated at no cost to the car owner. Like mobile phone companies selling calls/data with a one-off handset cost at the start of the contract, BP intend to sell miles along with a one-off car purchase, the central component of which can be replaced (that's Agassi's line anyhow… he's a slick salesman). If a consumer charges the car from their own supply source (PV panels) they pay 'nothing' for fuel; if they charge off the grid or exchange batteries, they just pay for the electrons, not the swap. Range is actually met in a very high percentage of travel cases with one fully charged battery (distributed grid charging at carparks, work, home etc.); in cases where it isn't, there's exchange at key locations, eventually ubiquitous like petrol.

It actually irks me that technically gifted people such as yourself, cwright, do a back-of-the-envelope two-variable linear programming exercise and completely rule a technology out the way you just did. This happens all the time on sites devoted to these questions: armchair debunkers by the million. Less intelligent people in your sphere will listen to you and think (and repeat) "oh, forget about all-electric, that will never have the energy/volume ratio to get up"… It's not that the hard eng. questions don't have to be answered (they do, of course), but they should be answered by those who have given the problem enough patient attention to see through to viable solutions — solutions a safe climate is pretty desperate for, to be truthful. That's when we call those people geniuses, I guess, because they gave the focus, determination and patience that their creativity required to get useful answers.

(Mind you, my dad (a physicist) did the same about the hydrogen car for 30 years, i.e. the energy cost to make and store hydrogen was too high for it to be a good fuel (even though the density is great), and so far he's right, unless you can own a very costly BMW, and yes he's dead as well as right already ;) )

cwright wrote:
I think what you're describing is "TCO" (total cost of ownership) -- that's what I meant by cost too (though I wasn't as explicit).

Principle 1
Base your management decisions on a long-term philosophy, even at the expense of short-term financial goals.
People need purpose to find motivation and establish goals.

That's what I'm talking about. Ask a focus group to design a car and get what? Invest for the future, don't target the fashion of the day. E.g. "whatever device a future tablet PC is, it will be running Windows" (implication: who needs a touch-specific OS?). Well, that would be what iOS is, and that took a modicum of forethought and investment.

cwright wrote:
If Toyota could sacrifice baby seals to cut their costs in half, you can bet they'd do it (EPA/PETA notwithstanding) and they'd still sell tons of cars. Nike + child labor sell boat loads of cheap shoes at astronomical markups.
Haha, a Japanese whaling ** by stealth — I love it! An irony of high-tech Japan is low-tech Japan, and part of that is the backyard sweatshop supply chain of small-parts manufacturers who are the 'baby seals' left out of the Toyota family (part of the Toyota Way philosophy is to look after the family). Consumer activist backlash caused Nike to seriously address child labour, btw. I'm not aware of the state of play on that issue today.

cwright wrote:
many programmers have ridden the moore's-law-wave for a long time, and never learned good engineering practices and how to always be mindful of algorithmic complexity. Paired with shipping deadline pressure, writing great code is often sacrificed when good-enough code is already available.
How many years before machines write better software than humans? That was the doomsday robots-rule prediction of some kids' parents when I was in primary school.