Darwin and API's

gtoledo3's picture

I have a number of thoughts on this one:

-Is it theoretically possible to take an Apple API (à la Cocoa, Quartz, CI) and reverse engineer it to work in the Darwin environment, or some subset of that?

Also, in a roundabout way: after using VMware Fusion and thinking it was OK, but also thinking that an app should just be able to open up with a wrapper of sorts rather than having to have a one-window virtual host, I discovered this:

http://en.wikipedia.org/wiki/Wine_(software)

Which I thought was novel.

Then, even more fascinatingly, to me...

http://en.wikipedia.org/wiki/Executor_(software)

I thought that was cool because it sought to translate system calls from the Mac into Windows equivalents.

-Have there been any efforts by any groups to have a "system bridge," so to speak? I know there are many products that aim to make it easier for developers to do their thing and then deploy for Mac, or then for Windows, and make it more of an automated process.

So it would be that an app would detect what OS it's on (for the sake of argument, OS X, Linux, Win), and then run the appropriate virtual bridge in its own space.

Has any company attempted to make that kind of universal shell? I know that is a big task on the face of it, but it seems maybe not so much with certain approaches?

In any event... I'm surprised that there isn't even a small group of some kind trying to do anything along those lines.

dust's picture
Re: Darwin and API's

I'm not sure about porting QC. Darwine or Wine is handy, as is installing Linux stuff via Fink to run on OS X. I've been using VirtualBox lately -- have you tried it? VirtualBox is open source, made by Sun.

gtoledo3's picture
Re: Darwin and API's

Oh yeah, I'm not talking about porting QC, I'm really talking about something fairly different... That's why this is in "general/talk about whatever" :)

What I am thinking is more along the lines of this:

If you do something in OpenGL or Mesa 3D, that theoretically gives cross-compatibility, as long as the manufacturer supports it -- except that OpenGL only supports rendering functions and has no windowing, etc.

I'm wondering if anyone has ever endeavored to do a kind of emulation system where apps can run, or appear to run, natively. One would install this program on a given computer (OS X, Linux, Vista, etc), and it would install the parts of the given OS that it needs to make said app function in a given environment. Bonus points if it didn't have to do that, and apps didn't have to deposit files like that everywhere.

I'm just surprised to not even see a tiny "org" doing anything like that. I see some of the emulation organizations, but that's not exactly what I'm thinking either.

It would be along the lines of something like VMware, except there wouldn't be a humongous partition/one-window setup. You would have the frameworks loaded on your computer, and the Windows app just opens... or vice versa. If you installed the product on Windows, it would install the Linux, Mac OS, or whatever things that it needed, but not the bloat of the entire system.

MOST importantly, it would be designed so that it could be "smart" and take advantage of technologies that are part of the given system WITHOUT emulating anything... as in, instead of emulating a Linux OS function, it could take advantage of a technology that is part of Vista or OS X... or any permutation of that.

I guess I'm suggesting somewhat of a major hack.

cwright's picture
Re: Darwin and API's

Wow, that's a doozy (one of my life-long dreams is to essentially complete a project like that -- a sort of "run everything, everywhere" super emulator. Unfortunately, I'll need about 2 million USD to retire comfortably on before I can even get started, as it's a decades-long project...)

(zeroth: I actually wrote a Win32 VST loader as an experiment for VST/AudioUnit crossover a couple years ago, with a tiny bit of success -- I know a bit about this trans-platform thing)

First things first, I'm going to kill the "single app that just runs by detecting what OS it's on" idea with this (note to the uninitiated -- "binaries" == "apps"):

Mac OS X apps are in the Mach-O file format. This format is not used outside of the Mac OS X operating system (perhaps some small Mach-kernel based OSs use it too?)

Linux binaries can be "a.out" (if you're stuck in the 1980s), ELF, or a couple other obscure formats. ELF is the normal one.

Windows binaries are in PE format, or a variant thereof.

So right from the get-go, there's no native way to even load an OS X app into memory on a windows machine, and start "running it" (all library calls aside). Similarly, there's no way to load PE files on OS X, and have them just work. There are loader projects to sort of accomplish that (wine, etc), but they all have pretty severe caveats. Linux might have support to load other binary formats, but I've never seen it actually used/deployed.
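
To make the "can't even load it" point concrete, here's a rough sketch (purely illustrative C I'm making up on the spot, not code from any real loader) of the magic-byte sniffing a would-be universal loader has to do before it can even think about running anything:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Guess a binary's container format from its first four bytes.
       Mach-O thin binaries start with 0xfeedface/0xfeedfacf (or the
       byte-swapped versions), fat/universal binaries with 0xcafebabe,
       ELF with "\x7fELF", and PE files with the old DOS "MZ" stub. */
    int main(int argc, char *argv[]) {
        if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        unsigned char buf[4] = {0};
        fread(buf, 1, 4, f);
        fclose(f);

        uint32_t magic;
        memcpy(&magic, buf, 4);
        if (magic == 0xfeedface || magic == 0xfeedfacf ||
            magic == 0xcefaedfe || magic == 0xcffaedfe)
            printf("Mach-O (thin)\n");
        else if (magic == 0xcafebabe || magic == 0xbebafeca)
            printf("Mach-O fat/universal (same magic as a Java class file!)\n");
        else if (memcmp(buf, "\x7f" "ELF", 4) == 0)
            printf("ELF\n");
        else if (buf[0] == 'M' && buf[1] == 'Z')
            printf("PE (Windows/DOS)\n");
        else
            printf("no idea -- and neither does the OS\n");
        return 0;
    }

And that's just identifying the container; mapping segments, fixing up imports, and jumping to the entry point is where the real pain starts.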

Fun Fact: Sony PS2 games (and possibly PSone and PS3 games?) are also in ELF format, but this doesn't mean they work on Linux (explained in the next section).

Next up, we take a swing at architectures. Mac OS X currently runs on PowerPC (ppc) and Intel (x86) processors. The Mach-O format is designed to allow multiple application architectures to live in a single app binary file (so-called "Fat binaries" or "Universal binaries") -- variations on this include ARM (for iPhone apps), ppc64 (64bit ppc apps), and x86_64 (64bit Intel apps). So if you're particularly hard-core, you can have a single file run on 5 different CPU architectures, with OS-level support. I've never seen this elsewhere (excluding Java).
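
If you're curious what that OS-level support actually looks like on disk, here's a hand-rolled sketch of walking a fat/universal header (illustrative only -- the real structs live in <mach-o/fat.h>, and the CPU-type constants below are copied from Apple's headers). Everything in the fat header is stored big-endian, even on Intel Macs:

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    static const char *cpu_name(uint32_t cputype) {
        switch (cputype) {
            case 7:          return "i386";
            case 0x01000007: return "x86_64";
            case 18:         return "ppc";
            case 0x01000012: return "ppc64";
            case 12:         return "arm";
            default:         return "unknown";
        }
    }

    int main(int argc, char *argv[]) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        unsigned char hdr[8];
        fread(hdr, 1, 8, f);                 /* struct fat_header: magic + count */
        if (be32(hdr) != 0xcafebabe) { printf("not a fat binary\n"); return 0; }

        uint32_t nslices = be32(hdr + 4);
        printf("%u architecture slice(s):\n", (unsigned)nslices);
        for (uint32_t i = 0; i < nslices; i++) {
            unsigned char arch[20];          /* struct fat_arch: 5 x 4 bytes */
            fread(arch, 1, 20, f);
            printf("  %-8s offset=%u size=%u\n",
                   cpu_name(be32(arch)), (unsigned)be32(arch + 8),
                   (unsigned)be32(arch + 12));
        }
        fclose(f);
        return 0;
    }

Run it against a Universal app's executable and you'll see the slices the OS picks between at launch time.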

PE only supports a single architecture per file -- fun fact, in the 1990s, there were ALPHA (another CPU type that's all but dead now) PE files, as well as MIPS (another CPU that's not quite dead) PE files. Windows NT ran on ALPHA and MIPS machines. Today Windows binaries (PE files) are just x86 or x86_64, for the most part. (Disclaimer: I've not worked on 64bit Windows before, so some things might be incorrect regarding 64bit PE files.)

ELF only supports a single architecture per file -- MIPS, Sparc, Alpha, ARM, x86, PPC, all kinds of CPUs. The PS2 uses MIPS, so your run-of-the-mill Linux box (likely an x86 or possibly a PPC) can't actually execute it.

Wine currently takes advantage of the common x86 base on OS X (since 2006), Linux (usually), and win32 platforms. If you stray from x86, you won't succeed.

Next up, we get to decide how we'll actually run the code. If the host CPU is the same (x86, generally), we can sort of cheat and just run the code natively. This requires calls to be mapped, and a compatibility layer to be written (to map calls that don't map 1:1 with host OS calls). Wine is one such compatibility layer -- it reimplements much of the Win32 API, and can load PE files without OS support -- it then maps in its compatibility calls where appropriate, and says "run". As long as the program doesn't do anything too crazy, stuff more or less works. The result is Win32 programs that can run on Linux, OS X, or any other x86-based operating system (as long as the compatibility layer works). The alternative to this is emulation -- you then have to write a CPU interpreter, and do some fun stuff that's generally slow or complicated (faster emulators will re-compile on the fly to improve performance, but then you have to check for all kinds of crazy stuff, in addition to writing a compiler, etc).
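
Conceptually (and this is just a toy sketch, nothing like Wine's actual code -- the my_* reimplementations here are invented), the compatibility-layer half boils down to a big lookup table: when the loader fixes up the guest app's import table, every imported Win32 name gets pointed at a reimplementation built on host calls instead of at a real Microsoft DLL:

    #include <stdio.h>
    #include <string.h>

    typedef void *(*shim_fn)(void);

    /* hypothetical reimplementations, built on host (POSIX/Cocoa/X11) calls */
    static void *my_CreateFileA(void) { /* open(2) underneath */ return NULL; }
    static void *my_MessageBoxA(void) { /* a host dialog underneath */ return NULL; }

    static struct { const char *win32_name; shim_fn impl; } shim_table[] = {
        { "CreateFileA", my_CreateFileA },
        { "MessageBoxA", my_MessageBoxA },
    };

    /* called while fixing up the guest app's import table */
    static shim_fn resolve_import(const char *name) {
        for (size_t i = 0; i < sizeof shim_table / sizeof shim_table[0]; i++)
            if (strcmp(shim_table[i].win32_name, name) == 0)
                return shim_table[i].impl;
        fprintf(stderr, "unimplemented Win32 call: %s\n", name);
        return NULL;   /* a real layer would stub this out or bail */
    }

    int main(void) {
        /* pretend the PE loader found these names in an import table */
        resolve_import("MessageBoxA");
        resolve_import("DirectSoundCreate8");   /* not implemented -> warning */
        return 0;
    }

Multiply that table by tens of thousands of entry points (plus all the behavioral quirks apps depend on) and you get a feel for why Wine has been at it since 1993 and still has caveats.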

Further up the chain, there are System-level frameworks that touch drivers (opengl, opencl, coreimage, some parts of quicktime, quartz) -- in these cases, pseudo-drivers need to be written that behave just like native apple drivers. This can be cake (opengl's pretty straightforward these days) or impossible (opencl doesn't have hardware support outside of OS X 10.6, so it'd all have to be done in software/an emulation layer of some sort, which would be a disaster).

The real thing that makes it easier for developers to work on multiple platforms is a consistent OS API -- for example, it takes almost no additional work to make a 4-architecture program on OS X that works. This is because Apple has made the API identical on all their supported platforms, so the developer doesn't have to care. When you move off OS X, you have lots of different APIs to fight with: win32 and win64 are subtly different. Linux is a joke (every nerd in their mom's basement can have a subtly different setup, so you have to compile a huge static binary to get anything to work anywhere, and even then you sometimes lose). It's these slight differences that make the cross-platform thing non-automatic. Even on OS X, going from 32bit x86 to 64bit ppc isn't completely automatic -- there are cases where you have to step in and tweak a few lines of code. Now imagine every system call requiring said tweaks, and it's a nightmare.
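
Even the "subtly different" part bites at the level of basic C types. A tiny, self-contained example (nothing assumed beyond a C compiler): 64-bit Windows keeps long at 4 bytes (LLP64), while 64-bit OS X and Linux make it 8 (LP64), so code that quietly assumes a pointer fits in a long works on one and silently truncates on the other:

    #include <stdio.h>

    int main(void) {
        /* LP64 (64-bit OS X/Linux): long is 8 bytes.  LLP64 (Win64): long is
           still 4.  Stuffing a pointer into a long is fine on one and corrupts
           data on the other -- one of those "step in and tweak a few lines"
           cases. */
        printf("sizeof(long)  = %u\n", (unsigned)sizeof(long));
        printf("sizeof(void*) = %u\n", (unsigned)sizeof(void *));
        return 0;
    }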

There's OpenStep, which reimplements much of the Cocoa framework. Cocoa doesn't really touch hardware though, so it's pretty simple. OpenGL's present everywhere, so that's more or less done. Writing a CoreImage framework would be lots of work. Writing a Quartz framework would be a lot of work. Porting CoreGraphics would be a lot of work. CoreAudio, lots of work. OpenCL, lots. QuickTime (needs to be redone because some parts, like QTKit, aren't on win32, and QT's entirely absent on Linux), lots of work.

So, you'll be able to run small toy OS X apps elsewhere without too much effort (for still large values of "too much"), but anything that does anything remotely interesting will require significant amounts of work.

cwright's picture
Re: Darwin and API's

gtoledo3 wrote:
I guess I'm suggesting somewhat of a major hack.

Ha! You don't even know how major that would be :) Conceptually, it's simple ("OpenGL's everywhere!"), but even then, you need to tweak stuff (does the host OS use the same calling convention? If not, you corrupt the stack and die strange deaths) for every call to every library.
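
Here's roughly what the calling-convention problem looks like, as a toy (gcc/clang attribute syntax, only meaningful when building 32-bit x86, and the function names are invented): Win32 system libraries are stdcall (the callee pops its arguments), Unix-side C defaults to cdecl (the caller pops), so the compatibility layer ends up wrapping every guest entry point in a little thunk so each side cleans up the stack it expects to:

    #include <stdio.h>

    /* pretend this symbol was resolved out of a loaded PE file */
    static int __attribute__((stdcall))
    guest_MessageBoxA(void *hwnd, const char *text, const char *caption,
                      unsigned type) {
        (void)hwnd; (void)type;
        printf("[guest] %s: %s\n", caption, text);
        return 1;
    }

    /* the thunk the compatibility layer provides: plain cdecl on the outside,
       stdcall on the inside; the compiler knows both conventions, so it emits
       the correct stack cleanup on each side of the call */
    static int host_MessageBoxA(void *hwnd, const char *text,
                                const char *caption, unsigned type) {
        return guest_MessageBoxA(hwnd, text, caption, type);
    }

    int main(void) {
        host_MessageBoxA(NULL, "hello from the host side", "thunk demo", 0);
        return 0;
    }

Cast a stdcall pointer to a cdecl type and call it directly instead, and it compiles fine and corrupts the stack at runtime -- those are the strange deaths.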

You also can't legally distribute Apple or Win32 libraries, so you have to write your own (or break the law, or sell a license, which Apple doesn't allow for non-Apple-label hardware).

gtoledo3's picture
Re: Darwin and API's

cwright wrote:
Wow, that's a doozy (one of my life-long dreams is to essentially complete a project like that -- a sort of "run everything, everywhere" super emulator. Unfortunately, I'll need about 2 million USD to retire comfortably on before I can even get started, as it's a decades-long project...)

Done. Uh. But wait, you say to even get started? Geeeeez. What a hustla.

cwright wrote:
(zeroth: I actually wrote a Win32 VST loader as an experiment for VST/AudioUnit crossover a couple years ago, with a tiny bit of success -- I know a bit about this trans-platform thing)

Oh really? Interesting... did it see release?

cwright wrote:
First things first, I'm going to kill the "single app that just runs by detecting what OS it's on" idea with this (note to the uninitiated -- "binaries" == "apps"):

Mac OS X apps are in the Mach-O file format. This format is not used outside of the Mac OS X operating system (perhaps some small Mach-kernel based OSs use it too?)

Linux binaries can be "a.out" (if you're stuck in the 1980s), ELF, or a couple other obscure formats. ELF is the normal one.

Windows binaries are in PE format, or a variant thereof.

So right from the get-go, there's no native way to even load an OS X app into memory on a windows machine, and start "running it" (all library calls aside). Similarly, there's no way to load PE files on OS X, and have them just work. There are loader projects to sort of accomplish that (wine, etc), but they all have pretty severe caveats. Linux might have support to load other binary formats, but I've never seen it actually used/deployed.

So, one has to resort to making an install of the OS, and running a window that it operates in. That totally makes sense... but could the OS be stripped down? I guess that's kind of abstract.

Using the VMware Fusion-to-Windows paradigm:

If I wanted to run some Windows app on a Mac, at the time of installing VMware I would say, "I want to run Cubase." Then it would install only what it needed to accomplish said function. Also, when I launched VMware, I wouldn't see Windows, I would just see Cubase... for example.

So, ostensibly, someone could run an app on Windows, have an install that actually sets up a Windows OS, and whenever the program runs, Windows is running in the background. I'm thinking of that kind of ultra-crude approach as the starting point. The flipside of that is that the same could be done in reverse, and an OS X kind of bundle would be installed on a Windows or Linux-powered computer.

cwright wrote:

Fun Fact: Sony PS2 games (and possibly PSone and PS3 games?) are also in ELF format, but this doesn't mean they work on Linux (explained in the next section).

Next up, we take a swing at architectures. Mac OS X currently runs on PowerPC (ppc) and Intel (x86) processors. The Mach-O format is designed to allow multiple application architectures to live in a single app binary file (so-called "Fat binaries" or "Universal binaries") -- variations on this include ARM (for iPhone apps), ppc64 (64bit ppc apps), and x86_64 (64bit Intel apps). So if you're particularly hard-core, you can have a single file run on 5 different CPU architectures, with OS-level support. I've never seen this elsewhere (excluding Java).

PE only supports a single architecture per file -- fun fact, in the 1990s, there were ALPHA (another CPU type that's all but dead now) PE files, as well as MIPS (another CPU that's not quite dead) PE files. Windows NT ran on ALPHA and MIPS machines. Today Windows binaries (PE files) are just x86 or x86_64, for the most part. (Disclaimer: I've not worked on 64bit Windows before, so some things might be incorrect regarding 64bit PE files.)

ELF only supports a single architecture per file -- MIPS, Sparc, Alpha, ARM, x86, PPC, all kinds of CPUs. The PS2 uses MIPS, so your run-of-the-mill Linux box (likely an x86 or possibly a PPC) can't actually execute it.

Wine currently takes advantage of the common x86 base on OS X (since 2006), Linux (usually), and win32 platforms. If you stray from x86, you won't succeed.

Interesting rundown, and good points about the limitations there... giving me some thoughts to chew on.

I would definitely think that x86 would be the only way to go, and hadn't even seriously considered any other future (even though this is all a total fantasy/conjecture discussion anyway). Interesting point about the MIPS and the fact that the PS2 games don't run on Linux because of that!

cwright wrote:

Next up, we get to decide how we'll actually run the code. If the host CPU is the same (x86, generally), we can sort of cheat and just run the code natively. This requires calls to be mapped, and a compatibility layer to be written (to map calls that don't map 1:1 with host OS calls). Wine is one such compatibility layer -- it reimplements much of the Win32 API, and can load PE files without OS support -- it then maps in its compatibility calls where appropriate, and says "run". As long as the program doesn't do anything too crazy, stuff more or less works. The result is Win32 programs that can run on Linux, OS X, or any other x86-based operating system (as long as the compatibility layer works). The alternative to this is emulation -- you then have to write a CPU interpreter, and do some fun stuff that's generally slow or complicated (faster emulators will re-compile on the fly to improve performance, but then you have to check for all kinds of crazy stuff, in addition to writing a compiler, etc).

Is there a middle of the road approach that would use the compatibility layer approach, plus emulation when necessary? I'm laughing because that makes no technical sense, but I'm a provocateur, what can I say?

cwright wrote:

Further up the chain, there are System-level frameworks that touch drivers (opengl, opencl, coreimage, some parts of quicktime, quartz) -- in these cases, pseudo-drivers need to be written that behave just like native apple drivers. This can be cake (opengl's pretty straightforward these days) or impossible (opencl doesn't have hardware support outside of OS X 10.6, so it'd all have to be done in software/an emulation layer of some sort, which would be a disaster).

I've been curious about that. OpenGL is obviously universal, and that's the whole point. That point about OpenCL has totally gone over my head, and I hadn't realized that hardware is unsupported outside of 10.6... I mean, I had, but had not consciously thought about it in those terms.

cwright wrote:
The real thing that makes it easier for developers to work on multiple platforms is a consistent OS API -- for example, it takes almost no additional work to make a 4-architecture program on OS X that works. This is because Apple has made the API identical on all their supported platforms, so the developer doesn't have to care. When you move off OS X, you have lots of different APIs to fight with: win32 and win64 are subtly different. Linux is a joke (every nerd in their mom's basement can have a subtly different setup, so you have to compile a huge static binary to get anything to work anywhere, and even then you sometimes lose). It's these slight differences that make the cross-platform thing non-automatic. Even on OS X, going from 32bit x86 to 64bit ppc isn't completely automatic -- there are cases where you have to step in and tweak a few lines of code. Now imagine every system call requiring said tweaks, and it's a nightmare.

Yeah, that's a whole barrel of worms I hadn't thought of like that.

cwright wrote:

There's OpenStep, which reimplements much of the Cocoa framework. Cocoa doesn't really touch hardware though, so it's pretty simple. OpenGL's present everywhere, so that's more or less done. Writing a CoreImage framework would be lots of work. Writing a Quartz framework would be a lot of work. Porting CoreGraphics would be a lot of work. CoreAudio, lots of work. OpenCL, lots. QuickTime (needs to be redone because some parts, like QTKit, aren't on win32, and QT's entirely absent on Linux), lots of work.

No doubt... I kind of find the history of NeXT, and the development of NeXTSTEP and OpenStep, sort of fascinating...

In that regard... can Quartz/CI/blah blah just be viewed as OpenStep programming inside of a kind of separate program shell, and something like a qtz is just an XML property list?

I think the whole GNUstep/HippoDraw thing is also a very interesting historical note... it's like "let's not rewrite the program, let's just rewrite the layer"!

cwright wrote:

So, you'll be able to run small toy OS X apps elsewhere without too much effort (for still large values of "too much"), but anything that does anything remotely interesting will require significant amounts of work.

Now, I would imagine that, as an alternative to doing things the "right" way, someone could make a Mac partition on a Windows machine, defeat the chip rigmarole à la Psystar/Hackintosh stuff, and then you could click on a Mac app and it would open, and you wouldn't see a Mac desktop. The app would just open up in its own window. I would also bet that the same could be done with a Windows app on a Mac, without having to see something like the Windows desktop. I would go even further to say that I would guess that large parts of each OS install could be lopped out... aaand, I would go EVEN further to say that it could be made part of an installer where a user says what OS they are on, and it would make the right kind of partition and automate a bunch of it. In lieu of an actual OS install, an emulator of sorts could run on that partition.

Ah... to dream! Someday someone is going to do something approximating this, I'm sure of it.

gtoledo3's picture
Re: Darwin and API's

Oh, I totally know how major, that's why I thought it would be fun to post this instead of emailing it to you... this is a fun kind of concept.

I do totally realize that to do this correctly, libraries would have to be written. On the conceptual level of a total hack, without writing libraries, it seems doable within a human lifetime. With each company actually consenting to this activity, it seems more doable.

I have to admit, I think it is a wee bit lame of Apple to do that with the software license, but I also understand why. My contention is that they should just offer a different software license and pricing to run on non-Apple hardware, but then again, they would probably have to offer some level of support. I would likely make the same decision as Apple from a business perspective. Totally digressing, but I think they should offer a wider array of hardware if they want to be that way about the software.

cwright's picture
Re: Darwin and API's

Apple was run into the ground by software licensing in the past. Since then, it looks like their approach has been "sell good software, and only on hardware that makes it look good". To illustrate, when the MacBook Air first came out, it featured solid state drives. However, all but 2 or 3 SSD manufacturers produced absolute garbage (4k random writes would stall the machine for 1 second -- even spinning platter disks aren't that bad. OS's do small writes like that all the time, so it made a bad impression of SSDs in general, even though it was entirely a poorly-written-firmware problem, not an inherent hardware problem) -- instead of using trash, Apple simply picked the ones that didn't exhibit that particular problem, because of image (even though it cost ~2x-3x more). I'm guessing they still don't want their software touching anything that will risk making it look bad. MS is still catching flak for marking Intel GPUs as Vista Capable.

It's totally doable in a human lifetime; it just takes a small, nimble team to work on things. APIs change very quickly, so it takes continually evolving work to make and maintain such a portability layer. If I started now and finished in 2014 (just 5 years), no one would care that there's a working Win32-OS X 10.5 compatibility layer, because both platforms would have moved significantly by then (Windows 7 will be dated in 2014, and Snow Leopard will probably have a successor by then). With company support it would go much faster (much less reverse engineering), but it would cut into their bottom line (how much does Apple make because of Final Cut?), so that's unlikely :/

cwright's picture
Re: Darwin and API's

gtoledo3 wrote:
Done. Uh. But wait, you say to even get started? Geeeeez. What a hustla.

It's something I'd love to do, and it would be fully engrossing -- once I started, I couldn't do anything else (including working at a real job ;)

gtoledo3 wrote:
Oh really? Interesting... did it see release?

Nope, purely proof-of-concept. Actually making a full-featured Win32 VST layer requires much of Wine (the win32 compat layer), and it's not viable anyway because everything's moving to 64bit.

gtoledo3 wrote:
So, one has to resort to making an install of the OS, and running a window that it operates in. That totally makes sense... but could the OS be stripped down? I guess that's kind of abstract.

It's possible, and even done today (iPhone is a stripped-down OS X, AppleTV uses a stripped-down OS X, many routers and PTZ cameras use stripped-down Linuxes. There are a few options for stripped-down Windows as well, I'm sure).

However, just running the app doesn't make stuff just work -- if apps need to interact with other apps, you have problems (drag and drop isn't cross-platform, and there's no real solution to that that I've seen; filesystem stuff is icky, but there are hacks to at least make it work, if not automatically).

gtoledo3 wrote:
I would definitely think that x86 would be the only way to go, and hadn't even seriously considered any other future (even though this is all a total fantasy/conjecture discussion anyway). Interesting point about the MIPS and the fact that the PS2 games don't run on Linux because of that!

They also didn't run because the hardware was fundamentally different, even if the CPU was the same.

x86 means 32-bit and 64-bit, so there's still not a single "oh, this is everything!" option.

gtoledo3 wrote:
Is there a middle of the road approach that would use the compatibility layer approach, plus emulation when necessary? I'm laughing because that makes no technical sense, but I'm a provocateur, what can I say?

This is actually what VMware, Parallels, and some other virtualizers (Xen) do -- they run native instructions until said instructions fault (faults happen if an application tries to do an OS thing, like touch hardware directly or control physical memory) -- they then catch the fault, figure out what the OS was trying to do, and emulate what should have happened ("oh, looks like it was trying to write to video memory. I'll do that using a system call instead... done, return control to the virtualized app as though nothing happened"). It's doable, and it works, but it's not fast (some memory management stuff is a single instruction in x86, but requires several thousand emulation instructions in virtualized mode).
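
A cartoon version of that trap-and-emulate loop, in plain POSIX C (this is a toy, not how VMware actually does it): map the "guest's video memory" with no permissions so the guest's store faults, catch the fault, pretend to forward the access to the host, then let execution continue:

    #include <signal.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static volatile unsigned char *guest_vram;

    static void fault_handler(int sig, siginfo_t *info, void *ctx) {
        (void)sig; (void)ctx;
        fprintf(stderr, "trap: guest touched %p -- emulating the access\n",
                info->si_addr);
        /* a real VMM would decode the faulting instruction and emulate just
           that one; here we simply open the page up and let the store retry */
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        void *page = (void *)((uintptr_t)info->si_addr & ~(pagesz - 1));
        mprotect(page, pagesz, PROT_READ | PROT_WRITE);
    }

    int main(void) {
        struct sigaction sa;
        sa.sa_sigaction = fault_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);
        sigaction(SIGBUS, &sa, NULL);       /* OS X reports this as SIGBUS */

        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        guest_vram = mmap(NULL, pagesz, PROT_NONE,
                          MAP_PRIVATE | MAP_ANON, -1, 0);

        guest_vram[0] = 0x42;               /* faults, gets "emulated", retries */
        printf("guest write landed: 0x%x\n", guest_vram[0]);
        return 0;
    }

The whole signal-handler round trip is the "several thousand instructions for what used to be one store" overhead, just in miniature.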

gtoledo3 wrote:
I've been curious about that. OpenGL is obviously universal, and that's the whole point. That point about OpenCL has totally gone over my head, and I hadn't realized that hardware is unsupported outside of 10.6... I mean, I had, but had not consciously thought about it in those terms.

Universal isn't even accurate -- there are extensions (Apple has a bunch of GL extensions that aren't found elsewhere), there are calling conventions (win32 and OS X 32 use different conventions, so every call needs some stack thunking/cleanup (did that a bit for the VST experiment))... the basics are there (drawing vertices in immediate mode), but as you get fancier, it gets riskier and riskier...
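
For the extensions point, this is the dance every "portable" renderer ends up doing -- a fragment rather than a full program, since it assumes a current GL context already exists (and creating one is itself platform-specific, which is sort of the point); GL_APPLE_client_storage is a real Apple-only extension, the rest of the names are made up:

    #include <stdio.h>
    #include <string.h>
    #ifdef __APPLE__
      #include <OpenGL/gl.h>
    #else
      #include <GL/gl.h>
    #endif

    /* old-school (GL 1.x/2.x) extension query -- fine for the era we're
       talking about */
    static int has_extension(const char *name) {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;
    }

    static void pick_texture_path(void) {
        if (has_extension("GL_APPLE_client_storage"))
            printf("using Apple's zero-copy texture upload path\n");
        else
            printf("falling back to plain glTexImage2D\n");
    }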

gtoledo3 wrote:
In that regard... can Quartz/CI/blah blah just be viewed as OpenStep programming inside of a kind of separate program shell, and something like a qtz is just an XML property list?

Functionally, everything is a box -- you put stuff in (function calls and data), and it gives you stuff back (or does something interesting -- OpenGL doesn't really give much back to the program, but it does draw pretty pictures). CI could be all software, and ultimately anything can be all software (any turing-complete CPU can emulate any other turing-complete CPU, given sufficient time and memory). The trade-off is complexity and speed.

A QTZ on disk is indeed a property list (usually binary, not XML, but the underlying info is the same). However, the useful aspect is the QC framework (and, by extension, the OpenGL, CI, QT, and other frameworks that make QC what it is). Saying a qtz is just an XML plist is like saying that your hard drive is just ones and zeros. It's true, but it's oversimplifying.
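
If you want to see that for yourself, a .qtz opens like any other property list. An OS X-only sketch in plain CoreFoundation (build with: cc peek_qtz.c -framework CoreFoundation -- the file name is just an example):

    #include <CoreFoundation/CoreFoundation.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
        if (argc < 2) { fprintf(stderr, "usage: %s file.qtz\n", argv[0]); return 1; }

        CFURLRef url = CFURLCreateFromFileSystemRepresentation(
            NULL, (const UInt8 *)argv[1], strlen(argv[1]), 0);
        CFReadStreamRef stream = CFReadStreamCreateWithFile(NULL, url);
        CFReadStreamOpen(stream);

        CFPropertyListFormat fmt;
        CFPropertyListRef plist = CFPropertyListCreateWithStream(
            NULL, stream, 0 /* read to end */, kCFPropertyListImmutable,
            &fmt, NULL);

        if (plist) {
            printf("parsed a %s property list\n",
                   fmt == kCFPropertyListBinaryFormat_v1_0 ? "binary" : "XML/other");
            CFShow(plist);           /* dumps the patch graph to stderr */
            CFRelease(plist);
        }
        CFReadStreamClose(stream);
        CFRelease(stream);
        CFRelease(url);
        return 0;
    }

But like I said, the plist is just the serialization -- all the interesting behavior lives in the frameworks that interpret it.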

gtoledo3 wrote:
Ah... to dream! Someday someone is going to do something approximating this, I'm sure of it.

I'm actually kinda surprised at the lack of OS X compat work done by the OSS community. I don't know if such a project will ever finish while it's still relevant.

psonice's picture
Re: Darwin and API's

If you want Windows apps running "natively" (as in appearing to do so) in OS X, I think both Parallels and VMware will do it already. VMware has "Unity" mode where the windows, err, windows will show inside OS X, so Windows itself is hidden. Can't remember what the Parallels equivalent is called. You can launch Windows apps from within OS X, and Windows will load in the background and the app will fire up when it's ready.

Wine: yeah, it works... for some stuff. Have a look at CrossOver (basically Wine for OS X, designed mainly for running games... I've had no luck at all getting demos to run in it, but perhaps I was unlucky/doing it wrong). There's also something called Cider, which is based on Wine I think; it's intended as a wrapper, but it's made for game developers looking to port their games to OS X quickly.

Cross-platform stuff: as cwright said, a single binary is pretty much out of the question, but writing cross-platform apps isn't all that hard. You can stick to a common language (i.e. avoid Cocoa, or pretty much anything other than plain C/C++) and use a framework like SDL to handle windowing, UI widgets, etc. There's still some work to get it running on each platform, but it's not a whole lot. And you still need to compile for each platform, of course.
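
A minimal sketch of that approach (SDL 1.2-era API; the window caption and colours are just made up) -- the same C source builds on OS X, Linux, and Windows, with SDL owning the window and the event loop:

    #include <SDL.h>

    int main(int argc, char *argv[]) {
        (void)argc; (void)argv;
        if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
        if (!screen) { SDL_Quit(); return 1; }
        SDL_WM_SetCaption("cross-platform toy", NULL);

        int running = 1;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = 0;

            /* software rendering, like the Fit demo below: just fill the frame */
            SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 32, 32, 96));
            SDL_Flip(screen);
            SDL_Delay(16);
        }
        SDL_Quit();
        return 0;
    }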

Speaking of cross-platform, there's a demo group called Fit that are well into that. Check the list of platforms this runs on! http://www.pouet.net/prod.php?which=13047 That's a result of using SDL and avoiding platform-specific features. They're using software rendering too, so there's no reliance on OpenGL even.

dust's picture
Re: Darwin and API's

I have space on my machine to run pretty much all the operating systems I want; I have a few: Ubuntu, BackTrack, XP, Windows 7, Leopard, SL, etc.

VirtualBox has a bunch of systems in it I have never heard of. The thing is, what Darwine or CrossOver can't do, Parallels and VMware can. Even then some things don't work -- like, I cannot run Unity3D in a virtual environment.

Things like C and C++ with Mono (I think it's called), the new C# cross-platform IDE, seem to be bridging the library/computer-science side of things, but then there is always Java -- it seems to work on everything, even my iPod.

As far as designing a system that runs everything, well, that's a bunch of licensing headaches, but it would be a profitable venture. That's just some low-down dirty assembly work in the bowels of computer science; there are not many that like the guts level, but some do. I'm more of a high-level, if not visual, guy, so that's not a job I would want to go for. I think with the open-source Darwine and VirtualBox a lot of work is done for you. I don't really know -- I love Macintosh.

I felt completely daft the other day with Vista when I couldn't figure out how to get my mate's computer on the internet at my house -- I couldn't find the DHCP/TCP-IP setting in Vista.

I actually want to try Hackintosh. I have not done this yet, but my mate just got a brand new Intel laptop for like 200 bucks at Best Buy -- couldn't believe it.

gtoledo3's picture
Re: Darwin and API's

I thought this was interesting:

http://www.cocotron.org/

gtoledo3's picture
Re: Darwin and API's

cwright's picture
Re: Darwin and API's

Yellow Box and Cocotron are both libraries -- these are only useful if you have the source available. For closed-source apps, they provide absolutely no benefit.

gtoledo3's picture
Re: Darwin and API's

I know this... I always wish that Apple hadn't drifted into the "black box" zone with its stuff, but it's understandable.

Cocotron is really incomplete. I think it's a cool effort though.

I find the Yellow Box stuff interesting from the perspective of thinking that, in essence, at one point everything could have gone down the road of compatibility. It's more of a historical footnote than anything else. I actually remember reading about Yellow Box/Blue Box in a computer magazine while waiting for a haircut, and the whole system being a kind of mindbender to me at the time... it was an innocuous moment, but it stands out. I think it's interesting that the thrust was initially to allow one to build for a variety of platforms.

I don't post that stuff to suggest in any way that it benefits QC.

cwright's picture
Re: Darwin and API's

I think the black box is common to all platforms though -- win32/.NET (which is sorta cross-platform? -- like Java). There's glibc on Linux (and its 80-gajillion versions, all mutually incompatible).

I kind of like the variety -- it helps cover lots more ground and see pitfalls/benefits, without having them all on a single platform (Linux has this "feature", which makes apps on Linux a comical effort in futility). I can totally see Apple sticking to their guns, simply because there's a technical advantage to a lot of their things (and a lot of problems as well -- don't worry, I haven't finished all the kool-aid yet ;). Aiming for interoperability means picking lowest-common-denominator type stuff (no Core Image, Java only recently getting OpenGL, wonky file name constraints, path separators, single-root/multi-root filesystems, a million little details that go all the way down to the kernel, like semaphores and Mach ports).

I know you're not posting this to benefit QC -- I'm not replying to do so either. It's more of a stream-of-consciousness of the history of design/implementation of various computer technologies over the past 15 years. All reaching for different goals, etc, etc...