MacBook vs MacBook Pro graphics

snerg

Hi,

I upgraded my Mac from a 2GHz C2D MacBook (which is now being used by my girlfriend) to a 2.4GHz C2D (Penryn) MacBook Pro; both have 2GB of RAM.

I upgraded because I thought the Nvidia graphics chip would speed QC things up quite a bit (I noticed that the GMA950 really sucks at 3D). After using the MacBook Pro for two weeks I really value the extra screen size, backlit keyboard and FW800 connection.

However, I think that some compositions actually run slower on the MacBook Pro than on the MacBook. How can this be?

I ran some tests, and in some cases the old MacBook actually is a little faster; in other cases, like with GLSL patches, the MacBook is slower and doesn't even output anything above 1024x768 resolution.

The MacBook has an Intel GMA950 with shared RAM and the MacBook Pro has an Nvidia 8600M GT with 256MB of VRAM.

Does anyone have any insight on this?

Snerg

Some benchmarks done with the stock QC3 (63) examples (maximum framerate: unlimited):

MBP (fps) MB (fps) Example Composition
23 20 Conceptual / Noise 3D
26/30 29/30 Conceptual / Image TV (minimal fps, 683x384)
40/50 30 Coreimage / Blurrier (512x384)
60 20 Coreimage / Star Shine (fullscreen, 1440x900/1280x800)
60 0 Core Image Filters / dejong (doesn't seem to run at all on MB)
30 8 Core Image Filters / Julia Iteration 2 (6% CPU MBP vs 63% CPU on MB) 1060x673
60 10/30 GLSL / Julia
30 60 GLSL / VertexNoise (MB is faster?)
60 30 Particle Systems / Particle System

cwright
(un)unlimited ...

To actually enable unlimited framerate mode, you need to hold "Option" when you open the preferences pane, and then under (Editor) enable the "DisableVBLSync" option -- this should make the framerate differences really shine, and may change which is faster.

For vertex stuff, it's a bit weird: on Intel graphics, vertex shaders are typically executed on the CPU, and of course Intel can write fast code for an Intel CPU (a Core 2 Duo, in this case). In contrast, your vertex noise might be just difficult enough to split into two passes (internally, due to complexity, not externally as in multi-pass effects), which would force it through the GPU twice -- honestly, I'm not sure how this sort of thing is handled, but it's a possibility. Also, if your shader code contains branches, it's usually dropped to CPU mode (rough sketch of that below).
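
As a rough sketch of the branching point (this is a hypothetical displacement shader, not the stock VertexNoise composition, and the "amount"/"threshold" uniforms and the fake noise are just made-up stand-ins): the if() version below is the kind of dynamic branch that can push a vertex shader onto a CPU path on GMA950-class hardware, while the commented-out step() rewrite does the same thing without a branch.

    // Hypothetical legacy-style GLSL vertex shader -- not the stock QC example.
    uniform float amount;      // assumed: displacement strength
    uniform float threshold;   // assumed: cutoff below which nothing is displaced

    void main()
    {
        vec4 v = gl_Vertex;
        // cheap stand-in for a real noise function
        float n = sin(v.x * 10.0) * cos(v.y * 10.0);

        // Branching version: a dynamic branch like this is what can drop
        // the whole shader to a CPU/software vertex path on older hardware.
        if (n > threshold)
            v.xyz += gl_Normal * n * amount;

        // Branchless rewrite of the same idea: step() returns 0.0 or 1.0,
        // so the displacement simply multiplies away below the threshold.
        // v.xyz += gl_Normal * n * amount * step(threshold, n);

        gl_Position = gl_ModelViewProjectionMatrix * v;
    }

Whether a given driver actually falls back to software for a particular branch is up to the driver, so swapping in the branchless form is just a way to test the theory, not a guarantee.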

Effects with lots of VRAM-to-system-RAM transfers can be faster on Intel because VRAM is system RAM, so no bus transaction needs to take place -- this is kinda dodgy, but I've seen a few places where it happens, with unexpectedly large performance gains.

That's about all I can think of. Software rendering (also available in the Option-preferences) is a good test platform for seeing actual system differences, rather than GPU differences.

snerg
I didn't disable VSync so

I didn't disable VSync, so that might indeed change the benchmark results a little. The GMA950 actually does a pretty decent job for QC work even without hardware T&L.

I guess I got carried away with benchmarking instead of learning QC.

It's time to get back to QC programming!

Snerg