Kinect on OSX !

mfreakz

Hi there! Here is a beginning for all those who dream about integrating the new Microsoft Kinect camera into a Quartz Composer project: http://www.maclife.com/article/news/xbox_kinect_modded_work_mac_os_x?utm...

Please, please... developers, jump into that project! Then our comps could have:

  • Z-depth real-time video, mapping, detection, etc.
  • A "Z threshold" to improve OpenCV detection.
  • A good HD webcam + a basic stereo microphone.

After the WiiMote patch, the Kinect patch should mobilise us!

monobrau
Re: Kinect on OSX !

I am sooo waiting for this to happen!!

franz
Re: Kinect on OSX !

Theo Watson from OF has already released a library to use it: http://theo.tw/deliver/kinect/000-libfreenect-modded-osx.zip

The Kinect outputs a Z-depth image, but you'll have to do the processing yourself... I've already ordered mine... waiting for the package to arrive!

gtoledo3
Re: Kinect on OSX !

Is it just me, or can't two cams do this job? It's nice that this is a self-contained hardware package, but is it really a big deal?

vade
Re: Kinect on OSX !

I don't believe so; two cameras cannot get the same depth map as the Kinect, which uses disparity mapping via an IR point cloud, so I don't think you will get the same accuracy.

gtoledo3
Re: Kinect on OSX !

Point taken about the depth map. I was figuring that an OpenCL kernel could be used to derive a depth map from two cams; I'm looking at that right now, as a matter of fact. If this uses infrared, that's an interesting approach... definitely not the same as two cams.

If I place two cams at the correct (or hell, even incorrect) interocular distance and checkerboard them correctly, I can play them back as 3D with convincing imaging. It seems like if that can work, I can derive a depth map from the two channels using OpenCL. I already have one working in CI that looks way better than the gingerbread man standing in front of a sprite with a shadow from that webclip. The CI version seems sort of slow though.

I think the Kinect approach of one channel IR and one RGB is interesting though.
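
For reference, the two-cam computation described above reduces to a per-pixel search along a scanline. Here is a minimal CPU sketch of plain SAD block matching over an already-rectified pair; none of the actual CI/OpenCL kernels are posted in this thread, so every name and parameter below is illustrative only:

    // Brute-force SAD block matching: for each pixel in the left image,
    // find the horizontal offset in the right image whose surrounding
    // window matches best. Assumes 8-bit grayscale, rectified inputs.
    #include <cstdint>
    #include <cstdlib>
    #include <limits>
    #include <vector>

    std::vector<uint8_t> disparitySAD(const std::vector<uint8_t>& left,
                                      const std::vector<uint8_t>& right,
                                      int w, int h,
                                      int maxDisp = 64, int win = 4)
    {
        std::vector<uint8_t> disp(w * h, 0);
        for (int y = win; y < h - win; ++y) {
            for (int x = win + maxDisp; x < w - win; ++x) {
                int bestD = 0;
                long bestCost = std::numeric_limits<long>::max();
                for (int d = 0; d < maxDisp; ++d) {        // scanline search
                    long cost = 0;
                    for (int dy = -win; dy <= win; ++dy)
                        for (int dx = -win; dx <= win; ++dx)
                            cost += std::abs(int(left [(y+dy)*w + (x+dx)]) -
                                             int(right[(y+dy)*w + (x+dx-d)]));
                    if (cost < bestCost) { bestCost = cost; bestD = d; }
                }
                disp[y*w + x] = uint8_t(bestD * 255 / maxDisp); // scaled for display
            }
        }
        return disp;  // nearer objects -> larger disparity -> brighter pixels
    }

This is exactly the sort of embarrassingly parallel inner loop that maps well onto an OpenCL kernel, one work-item per pixel.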

vade
Re: Kinect on OSX !

My understanding is that the depth map from the Kinect is 12-bit precision; the most you will get from 8-bit cameras is 8 bits. Sure, you can do disparity checking and depth mapping with cameras; Apple has shipped a CI kernel to do this for a long time now. But I'm pretty sure the people at MS are not stupid, and found an appropriately fast and accurate algorithm that meets their accuracy needs (which are most likely higher, by a decent margin, than most new installations will have). The Kinect does all of this in hardware. Using CI for these sorts of things is indeed slow.

gtoledo3
Re: Kinect on OSX !

The RGB cam is 8-bit and the depth cam is 11-bit apparently...

It seems like the interesting advantage will be in dim lighting.

It's sort of a stretch to abstract it all as "happening in hardware", but I think I get what you mean.

Using CI for something like this can be ultra slow. Using OpenCL can be extremely quick; I'm outputting pixel difference values from two cams right now with the kernel seemingly adding no performance lag. I don't have a proper depth map going yet though, kind of figuring it out...
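
On those bit depths: the Kinect's raw 11-bit values are not metres, and the conversion is non-linear. A sketch using an empirical fit that circulated in the OpenKinect/libfreenect community (the constants are community-derived approximations, not official PrimeSense calibration):

    // Convert a raw 11-bit Kinect depth reading (0..2047) to approximate
    // metres. This linear-in-inverse-depth fit is an empirical
    // approximation; treat the constants as ballpark, not gospel.
    float rawDepthToMeters(int raw)
    {
        if (raw >= 2047) return 0.0f;   // 2047 flags "no reading"
        return 1.0f / (raw * -0.0030711016f + 3.3309495161f);
    }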

cybero
Re: Kinect on OSX !

I haven't had a chance to look at or use the Kinect camera.

It would be hugely ironic if the device could be exploited more effectively on the Mac, via Quartz Composer, than it is at present on the Xbox, where the results are, according to some sources, underwhelming.

memo
Re: Kinect on OSX !

You can use two cameras to get a depth map, but it involves a lot of heavy calculation for relatively inferior results. First you need to calibrate both cameras (calculate the coefficients necessary to completely undo the lens distortion of each cam), then rectify the pair (find their precise transformation relative to each other), so that eventually each pixel in one camera's image appears on the same scanline in the other camera's image; only then can you calculate the disparity. The slightest offset in the cameras will cause you to lose accuracy. You could invest a couple $K in an industrial stereo cam like this http://www.ptgrey.com/products/stereo.asp But then you're still dependent on the lighting and the detail of the subject. If you're wearing relatively smooth, non-detailed clothing, there will be no detail for the algorithms to latch onto, and as you move you're going to get noisy depth maps, full of holes, while tying up a lot of your CPU or GPU. http://www.flickr.com/photos/golanlevin/2564500689/in/set-72157600974132...

Or for £130 you could get one of these, which gives you a much cleaner depth map essentially for free :)
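
For the curious, the calibrate/rectify/disparity pipeline described above maps onto OpenCV roughly like this. A condensed sketch in the modern cv:: C++ API (the 2010-era C API spelled these differently), assuming a chessboard calibration step has already produced the intrinsics (K, D) and the inter-camera transform (R, T):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    cv::Mat depthFromStereo(const cv::Mat& leftGray, const cv::Mat& rightGray,
                            const cv::Mat& K1, const cv::Mat& D1,
                            const cv::Mat& K2, const cv::Mat& D2,
                            const cv::Mat& R,  const cv::Mat& T)
    {
        cv::Size sz = leftGray.size();
        cv::Mat R1, R2, P1, P2, Q;

        // 1. Rectify: rotate both views so epipolar lines become scanlines.
        cv::stereoRectify(K1, D1, K2, D2, sz, R, T, R1, R2, P1, P2, Q);

        cv::Mat m1x, m1y, m2x, m2y, rleft, rright;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, sz, CV_32FC1, m1x, m1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, sz, CV_32FC1, m2x, m2y);
        cv::remap(leftGray,  rleft,  m1x, m1y, cv::INTER_LINEAR);
        cv::remap(rightGray, rright, m2x, m2y, cv::INTER_LINEAR);

        // 2. Disparity: block matching along the now-aligned scanlines.
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
        cv::Mat disparity;
        bm->compute(rleft, rright, disparity);

        // 3. Reproject: disparity -> per-pixel XYZ via the Q matrix.
        cv::Mat xyz;
        cv::reprojectImageTo3D(disparity, xyz, Q);
        return xyz;
    }

Everything before the compute() call can be precomputed once per rig; it's the block matcher itself that eats the CPU at runtime.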

gtoledo3
Re: Kinect on OSX !

...or you can use two crappy Logitech cams and some OpenCL...

The calcs aren't heavy at all; it's the noise/cam quality that's more of an issue. Here, I'm analyzing two video feeds, preparing a normal-map image, and then rendering, projecting the normals onto a sprite along with the color texture from one of the cams.

If I had taken the time to really calibrate the cams, I'm sure the results would have been nicer. Tomorrow I'll actually calibrate them, instead of just clipping them onto my laptop, and then play with it some more.

The Kinect seems like a cool device, and I'm interested to check it out, especially the IR. The calcs don't seem like a big deal, and now I'm more interested in getting an IR sensor than a Kinect, but I'm kind of DIY.

I'm well aware of the available hardware and how much money can be spent.

Maybe it sounded like I was asking a question or something... (?)

dust
Re: Kinect on OSX !

This thing looks sweet even if it's not as accurate as a Wiimote; I look forward to a QC plugin. I might order one of these things, or I might wait till there is a plug-in. I'm a little uncertain whether the camera itself outputs some sort of point cloud or accelerometer-type data in addition to the images; I suppose right now it's up to you to derive any useful information from the images? I have ideas for a few uses for this type of camera, but no clear need for it other than furthering my understanding of computer vision.

Might be an excuse for an Xbox, though.

This video explains a bit more of what's going on with this thing.

gtoledo3
Re: Kinect on OSX !

Double rainbow!

When the person talks to the Kinect, I kind of wish it talked back like KITT from Knight Rider.

waxtastic
Re: Kinect on OSX !

Kinect depth image in Quartz Composer.

I used Theo's openFrameworks project (http://www.openframeworks.cc/forum/viewtopic.php?f=14&t=4947&sid=3066689...) and just added ofxSyphon to it.

vade
Re: Kinect on OSX !

Not to be an ass, but that hardly looks anywhere near the quality/level of detail that the Kinect gets you.

http://www.openframeworks.cc/forum/viewtopic.php?f=14&t=4947

gtoledo3
Re: Kinect on OSX !

I don't take it that way. I would in no way compare 15 minutes of putting together some stuff I already had, hooking up cams and nudging them into place without checking whether the 3D imaging was really correct, to the work of the entire Microsoft corporation and the hack that has brought it to OS X.

My point wasn't about quality at this stage; it was that taking two cam feeds and deriving a depth channel can be quick. I wasn't necessarily shooting for quality off the cuff, just showing that I could get a normal map quickly using OpenCL on OS X.

I have to say, I'm not ultra convinced of the quality from anything I've seen, but I do think its accuracy for Minority Report-style "wave hands in air" menu stuff is very nice indeed. The dots in the room and the quality of the Z channel, I'm not so convinced about yet, but hopeful.

vade
Re: Kinect on OSX !

Did you look at the images in that thread I linked? Nothing I've seen derived from two images comes close to that.

gtoledo3
Re: Kinect on OSX !

I don't know... this looks like more info than the post below by waxtastic; note the extra depth info on my face, not just a gingerbread man. And it's happening at 60fps, not some onerous and horrible number crunch.

I'm still waiting to see something awesome, but I'm hopeful about the IR aspects and low-light scenarios.

Attachment: dual cam normals_.png (258.14 KB)

gtoledo3
Re: Kinect on OSX !

Yeah. I see a thresholded gingerbread man with a slight shadow cast behind him, then I see some pics of what looks pretty close to the quality of a single-cam depth map rendered with a bunch of color points. Not trying to be flip, it just doesn't look so awesome or groundbreaking at this point.

Again, I'm interested to see the hardware and use it. The cool part seems to be the concept of using IR.

Attachment: dual cam normals 2__.png (489.7 KB)

vade
Re: Kinect on OSX !

Never mind.

memo
Re: Kinect on OSX !

Only problem is, that's not a depth map :) I'm not saying it's a bad depth map; I'm saying it isn't a depth map at all. You could pass it off as a bump map or a normal map, but not a depth map. If you were to try to create a point cloud out of that data (à la "House of Cards") you wouldn't get a 3D model of your face; you'd just get a flat wavy surface with bumps and ridges where your features are. And that isn't because the cameras aren't calibrated properly, but due to the algorithms. But please carry on trying, I'm sure it's a fun challenge :)

Anyway, like I said, getting an accurate depth map from a stereo image pair is perfectly possible. People dedicate years of their lives to this research, and OpenCV has a huge section on it. To do this with OpenCV you need to use a lot of the functions on this page http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_r... You can get a decent disparity map from that, but at the cost of eating up all of your CPU (I know, I've done it; for a very good quality dataset, expect 5-10 fps, or 30 fps if you're prepared to make big compromises). Sure, you can look at GPU solutions, but you can't deny that when the Kinect can give you detailed, accurate data like this http://www.flickr.com/photos/kylemcdonald/5167174610/ for £130, out of the box, with no calibration requirements and no processing overhead other than reading the stream from USB, that's pretty impressive :P
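
To make the depth-map-versus-normal-map distinction concrete: a true depth map can be back-projected through the camera intrinsics into a point cloud, which is what the linked renders do. A sketch; the focal length and principal point below are ballpark figures of the kind community Kinect calibrations report, assumptions rather than official specs:

    #include <vector>

    struct Point3 { float x, y, z; };

    // Back-project a metric depth image into camera-space 3D points.
    std::vector<Point3> depthToPointCloud(const std::vector<float>& depthMeters,
                                          int w, int h)
    {
        const float fx = 594.0f, fy = 591.0f;  // focal length, pixels (approx.)
        const float cx = 339.0f, cy = 242.0f;  // principal point (approx.)
        std::vector<Point3> cloud;
        cloud.reserve(w * h);
        for (int v = 0; v < h; ++v)
            for (int u = 0; u < w; ++u) {
                float z = depthMeters[v * w + u];
                if (z <= 0.0f) continue;       // skip "no reading" pixels
                cloud.push_back({ (u - cx) * z / fx,
                                  (v - cy) * z / fy,
                                  z });
            }
        return cloud;
    }

Feeding a normal or bump map through the same code produces exactly the "flat wavy surface with bumps and ridges" described above, because those values encode orientation or relative relief, not distance.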

cwright
Re: Kinect on OSX !

Is it perfectly possible from a stereo pair? I'd think there can be pathological cases (e.g., where there aren't good/reliable edges to match, or when something is occluded from one view, but not the other).

I have to admit I'm a bit rusty on these kinds of things (smokris probably deals with this kind of stuff more than I do).

cwright
Re: Kinect on OSX !

Here's something potentially awesome:

Get 3 or 4 Kinects and point them all at some focal point. (I hope they won't interfere with each other? If they do, this won't work... I don't know the details of the IR tracking; it looks like it's strobing in the videos, but that could be a camera artifact.)

You then have a 3D volume of whatever's inside. You could run marching tetrahedrons or marching cubes over this data set to generate actual 3D models (and, paired with camera input, you could texture them correctly too).

Maybe this would work with just 2, pointing at each other. Stuff on-edge (perpendicular to the axis between the two devices) wouldn't be well-captured then, but it might also interfere less.
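
A toy sketch of the multi-Kinect fusion idea: transform each unit's point cloud into a shared frame, mark occupied voxels, then hand the volume to marching cubes/tetrahedrons for a mesh. The per-Kinect poses are assumed to come from some external calibration step; nothing here models how the actual hardware behaves or interferes:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Pose { std::array<float, 9> R; Vec3 t; };  // row-major rotation + translation

    static Vec3 toShared(const Pose& p, const Vec3& v)
    {
        return { p.R[0]*v.x + p.R[1]*v.y + p.R[2]*v.z + p.t.x,
                 p.R[3]*v.x + p.R[4]*v.y + p.R[5]*v.z + p.t.y,
                 p.R[6]*v.x + p.R[7]*v.y + p.R[8]*v.z + p.t.z };
    }

    // Mark every voxel in an n^3 grid (centred on the origin) that any
    // camera saw a surface point in. 'clouds' holds one point cloud per
    // Kinect, in metres, each in its own camera frame.
    std::vector<uint8_t> fuseOccupancy(const std::vector<std::vector<Vec3>>& clouds,
                                       const std::vector<Pose>& poses,
                                       int n, float cellSize)
    {
        std::vector<uint8_t> grid(n * n * n, 0);
        for (std::size_t c = 0; c < clouds.size(); ++c)
            for (const Vec3& pt : clouds[c]) {
                Vec3 w = toShared(poses[c], pt);
                int i = int(w.x / cellSize) + n / 2;
                int j = int(w.y / cellSize) + n / 2;
                int k = int(w.z / cellSize) + n / 2;
                if (i >= 0 && i < n && j >= 0 && j < n && k >= 0 && k < n)
                    grid[(std::size_t(k) * n + j) * n + i] = 1;  // occupied
            }
        return grid;  // feed to marching cubes to extract a mesh
    }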

vade
Re: Kinect on OSX !

That does not look like a depth map at all. That looks like luminosity displacement à la Rutt-Etra, with some lighting and normals thrown in.

vade
Re: Kinect on OSX !

Yeah, this would be an amazingly cheap 3D digitizing solution if someone way smarter than I am could figure it out :D Combine that with a mesh output system and voilà.

If they did interfere, you could potentially hook up some genlocked gating system so they each strobe at timed intervals, out of phase with one another.

gtoledo3
Re: Kinect on OSX !

.

Attachment: 5167174610_8be6e17c55.jpg (98.86 KB)

vade
Re: Kinect on OSX !

Yes. That looks like a depth map. Notice the lack of background: not because the background is black, but because it's not picked up in the depth scan at all.

memo
Re: Kinect on OSX !

Yeah, you can get semi-decent results; see some of the images on this page http://disparity.wikidot.com/ Bear in mind these stereo image pairs are already rectified, i.e. lens undistortion and other transformations have already been applied so that for each pixel in one image, the corresponding pixel in the other image lies on the same scanline.

Of course you will always have the occlusion problems, so you can't get a full 3D model from just one POV; you'd need to travel around the model (or rotate the model) to build a full 3D dataset. The same applies to the Kinect: you can only get the 3D data relative to its POV, obviously.

And yeah, four Kinects in the center of a room looking out, or holding a Kinect in your hand and waving it around to scan an entire space, are all exciting ideas which will probably become reality in the near future!

gtoledo3
Re: Kinect on OSX !

The IR sends out 60 pulses a second.

That is a cool concept. I saw a kid doing that a year or so ago with a setup he put together using the two-cam method you describe, along with IR.

The thing can actually pivot the cam automatically, at least when controlled by the Xbox software. There's also some audio localization stuff. Those aspects are pretty cool, and I didn't get that from the tech writeups.

gtoledo3
Re: Kinect on OSX !

Have you had much success with getting more detail yet?

gtoledo3
Re: Kinect on OSX !

I think that's the very method implemented in CI in the Apple dev examples, and what I'm running in OpenCL. Sooo, if that's a depth map, then yes, I'm deriving a depth map. Now, I did then convert it to a cruddy normal map and render it that way. Maybe I should have just rendered it to a billboard or something to show the depth channel purely. Ehhh...

gtoledo3
Re: Kinect on OSX !

Are the places on the face black because of lighting applied after the fact, or because the IR didn't pick them up? (No challenge meant, just asking.) That looks like luminosity thresholding to me, and like the background actually was dark.

vade
Re: Kinect on OSX !

If it were luminosity, then features like the dark shadowed area of his ears would not be near his ears, but in the background. The Kinect does not do luminosity-based pseudo-3D like the Rutt; it gets legitimate 3D data. If you were all in white or all in black, you'd still get valid points. This is my understanding.

To be honest, I don't know the details of the shoot, but the all-white rendering would indicate that if there were a background, it would show up, in white, in the background. Also, look at his hoodie: it has similar luminosity values yet different 3D positions, as does the actual hood, which is shrouded in shadow in its folds yet appears where it ought to.

Point cloud renderings like this make things look alike, sure, but the devil is in the details. MS needed real 3D data without having to rely on the luminosity of the scene, which is why the Kinect uses the technique it does, regardless of the living room it's in. You can't have someone wearing different outfits get different results from day to day, etc.

gtoledo3
Re: Kinect on OSX !

I got you. There are things in the shots that look like they couldn't be occluded and yet aren't rendering, though... like by the nose, etc. I think maybe some lighting was applied later (?). Perhaps that's from him adding color back onto the points. That would explain a lot.

I guess, in my mind, if I can take two color-channel images and play back 3D imagery with good imaging, it doesn't seem insane to think that I could derive usable depth info in a similar manner.

It's hard for me to wrap my mind around the concept that this could be any better, save for mitigating ambient light, which is no trivial issue, for sure! There is still going to be occlusion. If the interocular distance between two RGB cams can yield depth to create 3D imaging, then it seems like that depth can be extracted similarly, given good lighting conditions.

Secondly, I don't think the calculations are really taxing on the computer... maybe it's more than I think, but it seems like doing the calc on the two channels is quick; it's just that I don't think I'm preparing the color quite correctly either.

I may be choosing poor ways to represent it, and it's pretty lame to have two USB cams hanging on my laptop instead of the little calibrated setup I have (a case for the Kinect, I guess), but in my testing, by hovering over nodes, I can tell I'm getting a very clear three-channel map, where foreground is distinct from background. It's looking "weak" though... not as saturated (don't crucify me here, I know that's not the right word) as what I'm getting from stereo photo sets.

I'm more interested in investigating what I'm doing with the OpenCL kernels right now, with the thought of getting hold of some cheap IR sensors and getting an even better deal than the Kinect. I can see where it might not be worth the trouble, but it doesn't seem really hard. If infrared sensors are cheap, it could be a fun project. (For reference, I enjoy building electronics for fun, especially audio, so putting the hardware together to do multiple IR, augmenting it with some RGB cams, and then having my own software sounds awesome. I'll definitely get a Kinect to tear apart, though.)

memo
Re: Kinect on OSX !

For the benefit of anyone wondering what this depth map thing actually is: the brightness of every pixel in the depth map indicates the distance to the camera. And what appears to be a non-detailed, flattish gray gingerbread man is in fact very detailed. You shouldn't expect to be able to pick out features like eyes and nose. The distance between the tip of the nose and the cheekbone is only a few cm, and the depth resolution of the Kinect is <2cm-10cm (the further away you are, the less resolution). So you should expect a 0.05% intensity difference between the tip of the nose and the cheekbone (1/2^11), and likewise very, very subtle intensity differences around the face. Way too subtle for the eye to pick up. However, take that exact same data stored in what appears to be a non-detailed, flattish gray gingerbread man, and use it to displace a point cloud, and you get this: http://farm5.static.flickr.com/4085/5174106004_7874829d9f_o.png http://vimeo.com/16788233

As you can see, that's pretty damn detailed and accurate :)
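
The arithmetic behind that, spelled out (these numbers just restate the post, not measured data):

    #include <cstdio>

    int main()
    {
        const int rawLevels  = 1 << 11;            // 11-bit depth: 2048 steps
        const double oneStep = 100.0 / rawLevels;  // % of full intensity range
        std::printf("one raw step = %.3f%% of full range\n", oneStep); // ~0.049%

        // A ~3 cm nose-to-cheek feature at ~2 cm depth resolution spans only
        // a step or two of raw depth, and displaying 2048 raw steps in a
        // 256-level grey image hides another factor of 8, hence the
        // apparently featureless gingerbread man.
        std::printf("raw steps per 8-bit grey level = %d\n", rawLevels / 256);
        return 0;
    }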

usefuldesign.au
Re: Kinect on OSX !

Kinect for Xbox 360 – Preview Video

MS are kinda heavy on the body in this promo; there's very little on-screen action, and it doesn't start until quite a way in. Something different for gaming promos ;-)

usefuldesign.au
Re: Kinect on OSX !

Speaking of point clouds, Memo (and comparing them with two-camera setups), I can vouch for the fact that in architectural 3D data acquisition there is absolutely no comparison as regards accuracy.

Laser point-cloud generators cost about $1,000 a day to have somebody come and set up multiple point clouds when they became publicly available about 15 years ago (prior to that, military use only). I'm sure they're cheaper now. I always had ideas for doing a music video with them; eventually somebody did it with the Radiohead video. They made the cloud data available, but it's not clean: they messed with the acquisition to be arty (shooting through running water on perspex, etc.), which negated the point of releasing the source clouds, to my way of thinking.

Regarding cwright's suggestion of multiple Kinects:

Obviously the laser device derives the depth data by itself, by timing the reflection (proportional to distance), whereas the Kinect must use post-processing to infer depth. But even on its own, one point cloud isn't so useful. Special reflective sphere markers are placed as references and multiple clouds are acquired; these are then superimposed (using the reference markers) to make a composite cloud. This is then sliced through to reveal sections of the building (say, inside a massive paper manufacturing plant with undocumented pipes going everywhere).

So multiple Kinects would need some kind of IR markers to composite the clouds, I think. And that's if the IR band is wide enough, or the strobing/encoding fast enough, to allow multiple spreads at the same time (big ask, no?).

From the sliced sections, 3D drawings can be built up, often using special pipe-tracking tools, etc.

By contrast, the last time I saw two-camera 3D acquisition software was pre-2000, and very rudimentary it was too. There was a plugin for various 3D apps that claimed to be able to take two shots of a city block of tall buildings (kind of axonometric, angled shots) and generate 3D rectilinear blocks by mapping the image. The demo shots looked great, but I note the technique wasn't around for long, so presumably people voted with their feet on that technology.

gtoledo3
Re: Kinect on OSX !

FYI, parallax should be set fairly low if you're trying to do this with color channels. My tinkering yesterday used much higher parallax than is ideal for deriving depth from a stereo pair...

http://www.3dphotopro.com/soft/depthmap/help.htm

I find it interesting that reviews of the Kinect say it's less accurate than the Wiimote! That's a head-scratcher.

I did see it being tested on Late Night a few months ago, and I think I got a bad impression because of the gameplay. Watching people do car racing seemed really unnatural, and the only thing that looked like it was working the way the users expected was jumping. However, these demo vids that use it to control menus are pretty cool.

I'm curious how much depth comes into play besides calibrating where the person is generally, and how much is x/y.

cwright
Re: Kinect on OSX !

60 pulses per second is perhaps a bit misleading; where did you find that stat?

In the "nightvision" videos you can clearly see hundreds of IR dots. I'm guessing each "pulse" above that you mention includes all of those dots (hundreds or maybe thousands). Each of those dots is likely a pulse itself (otherwise there'd be a continuous IR line connecting all the dots).

In the end, you get 60 depth-frames per second, each composed of many depth-pixels (each requiring a pulse).

gtoledo3
Re: Kinect on OSX !

I read an article where the person wrote that the pulses were updated 60 times per second. In reality, this just referred to the cam's 60fps maximum frame rate.

This is the data sheet from the people that make the sensor...

http://www.primesense.com/files/FMF_2.PDF

gtoledo3
Re: Kinect on OSX !

The cool thing about the original implementation is that the middleware was doing work to derive skeletal info:

http://www.primesense.com/?p=515

Supposedly that aspect is not part of the Kinect product... I can't find the link to that; I read it last night.

gtoledo3
Re: Kinect on OSX !

Reading more about it, it looks like each system would have to put out a unique infrared pattern if you wanted to use the technique with multiple systems. It's pretty fascinating.

The color method is pretty low quality with small-resolution images, and I see the futility of it after messing around more with reference stereo photo sets. The calculation is much quicker in OpenCL than with CI, but it doesn't look great, even with the reference sets (as opposed to USB cams).

However, I've seen a system like the Kinect that uses dual stereo color channels and floods the scene with known visible light. That produces nice depth channels, but seems pointless next to the infrared.

I guess it's moot, since Steve just released a Kinect beta! I'm pretty eager to get hold of one after seeing more video. I definitely see the interest now.

dust
Re: Kinect on OSX !

Well GT, I don't see how you can go wrong with one of these things. I mean, RGB-D cameras are like thousands of dollars, so you're basically getting a system like that for a few hundred, if you can order one. You're getting an object's accurate Z depth down to centimeters or millimeters from a meter or so away. Recognizing an object seems like it might be the tricky part. It seems the RGB image is color-coded by motion velocity in the image; that is why waving your hand around is easy to track. I suppose color tracking might be the simplest approach, but if you're using two hands, there goes that theory.

The funny thing is Microsoft says it doesn't work if you're sitting down. Or at least the Xbox games don't work sitting down, with the exception of the menu scrub thing, which would be easy anyway.

I think this is a very big step toward the stand-a-few-meters-away interactive TV experience that MS has been pushing for some years.

I'm sure there are solutions to problems that MS has not thought of, even if MS can get the best people. I can see typing being a bit funky; maybe take the one-button Mac approach... see vid.

I mean, that's a funny joke, but it's not too far off from how you would have to type. A scroll wheel with a selection button? Maybe even a giant keyboard, with a second-hand wave for an enter button, would work as well.

Besides all the cool robot vision stuff you can make, this cam seems awesome for interactive artist (QC) types of people. Doing a realtime chroma-key thing will now be so simple and accurate: no need for fancy reflective green-screen rigs; just using a variation of the depth image as a mask takes care of that stuff. Once you have a clean plate, tracking gestures will be a lot easier, etc...
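
A bare-bones version of that depth-keying idea, as a plain C++ pass over a metric depth buffer (in QC this would be a one-line kernel; the near/far cutoffs here are arbitrary example values):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Build a matte from depth alone: keep pixels inside the [near, far]
    // slab, drop everything else. Multiply the result against the RGB
    // frame to key out the background, no green screen required.
    std::vector<uint8_t> depthKeyMask(const std::vector<float>& depthMeters,
                                      float nearM = 0.5f, float farM = 1.5f)
    {
        std::vector<uint8_t> mask(depthMeters.size());
        for (std::size_t i = 0; i < depthMeters.size(); ++i)
            mask[i] = (depthMeters[i] > nearM && depthMeters[i] < farM) ? 255 : 0;
        return mask;
    }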

I'm with you though, George: a modified IR webcam and some IR LEDs sewn into some long johns, and/or an IR flood lamp and ping-pong balls, would probably produce a better game solution. Sort of make-your-own mocap for cheap.

I can't really say anything, as I haven't played, used, or even seen one of these games yet; the graphics certainly look better than the Nintendo Wii's demo games.

Has anyone tried a PlayStation Move yet?

gtoledo3
Re: Kinect on OSX !

I'm interested in how this works if it's not using time-of-flight or structured light to derive the depth (according to PrimeSense, at least). Maybe it's a semantics thing. I can see why, this being so cheap, it totally makes sense for experimentation... I'm interested in the mechanics of how it works, so that it can be done for larger areas. It's interesting checking out variations on the principles that have been put together.

This is the method I was referring to that uses a visible-light dot pattern with two color channels to do something similar to the Kinect: http://www.willowgarage.com/sites/default/files/humanoids09.pdf

It seems like there's no shortage of spins on it.

dust
Re: Kinect on OSX !

Hmm, that abstract is interesting, GT. I like the part about CRFs probabilistically classifying your points with hidden Markov annotations. They're using p(y|x) instead of p(y,x), I suppose because it's a faster calculation.

I see how this kind of relates to the Kinect, but this is a robot picking up a cup. I don't think you're going to see a robot picking up a cup using a Kinect sensor anytime soon.

Well, unless it was a giant robot with 2-meter-long arms.

gtoledo3
Re: Kinect on OSX !

That's one they should put a bounty on! A robot picking up a cup with a Kinect...

Though I already noted it above: in messing with the color channels more, I see it's an exercise in futility to derive a true depth map, just like memo and vade noted. That's why they're flooding the scene with visible light. I've seen some other stuff where a mix of different visible light colors is sent.

Looking at the materials for the Panasonic depth cam, they explain in depth the concept of using the near-IR method, the placement of their IR sensors, and the way they make a grid on the victim.

From the first vid that was making the rounds, I didn't really realize that the Kinect was actually putting out a high-quality depth channel. It seemed more like a really good threshold-filtering thing.

The other thing I was thinking was that even if it turns out you can't use Kinects facing each other, one could possibly set them side by side (?) to cover a large area.

usefuldesign.au
Re: Kinect on OSX !

3D Video Capture with Kinect (one to watch): http://www.youtube.com/watch?v=7QrnwoO1-8A

By the way, those laser scanners will show you the roofing purlins, the corrugations in the roof metal, and even the hexagonal bolt heads on the steel column joints from 50 metres away.

offonoll
Re: Kinect on OSX !

I just found this project on Kickstarter; thanks to this hardware, we are able to use the Kinect!!!

http://www.kickstarter.com/projects/bushing/openvizsla-open-source-usb-p...

dust
Re: Kinect on OSX !

Here is a list of some Kinect hacking blog sites; they seem to be growing in number.

freenect.com
matrixsynth.blogspot.com
modmykinect.com
musikgear.com
kinecthacks.net
adafruit.com
kinect-hacks.blogspot.com
boonism.net
pcbheaven.com
worldlingo.com
kinectable.net
mikekotsch.tumblr.com
gigantico.squarespace.com
ovelf.com
anythingbutlogs.tumblr.com
paper.li
netvibes.com
logic-sunrise.com
blog.livedoor.jp
xbox-360.logic-sunrise.com
games-hack.fr
musheen.com