Normal Map demo (Composition by cwright)

Author: cwright
License: MIT
Date: 2010.05.17
Compatibility: 10.5, 10.6
Categories:
Required plugins:
(none)

For whatever reason, there has been a lot of back-and-forth on bump mapping, normal mapping, and whatnot. To make matters worse, I actually talked about them as if I knew something (generally falling back to harping on needing vertex attributes to do normal mapping, due to tangent vectors etc.).

Lately, however, I've been experimenting with deferred shading (not in QC, of course, because you'd need a bunch of Render in Image patches, which is slow and annoying) for some demos, and that's when I realized how normal mapping actually works.

With that knowledge in hand, I whipped up a quick-and-dirty sample composition to show off how this can be done within a GLSL shader.

Sorry for being an idiot, and thanks to psonice for talking about this a long time ago :)

[Also, I think tangent vectors can be faked via dFdx/dFdy in the fragment shader -- handy little functions.]

Attachment: normalMap.qtz (329.61 KB)

usefuldesign.au's picture
Re: Normal Map demo (Composition by cwright)

Hey cwright, any idiocy you feel (nice to know even cwright's QC knowledge has limits) is more than outweighed with this comp. It's blitzing fast even on my old 2xG5. Obviously I'm on 10.5 so I'm seeing a little clipping on hot edges and general jaggies on edges. Would like to see a still of this running under SL with AA set to on. Awesome comp!

Oh and love thyself even when feeling stupid! (though I suspect it's merely modesty in your case)

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

Here's something I've wondered along these lines... is there a common filter routine that can make a normal map out of the standard texture?

(edit: Nevermind, I see why that isn't always a good idea, because of the fact that one might not want all the info on the texture map to get the normal mapping.)

Nice one.

psonice's picture
Re: Normal Map demo (Composition by cwright)

Cool! I'll have a good play with this at some point. I've written something along the lines of normal mapping ages back, but it was for a 'special case' where I could predict the normals pretty easily (the mesh was defined in a texture). Wonder if this is faster?

I'm interested to hear what you're doing with the deferred shading. Can you share it? And is it along the lines of what Smash did/discussed on his blog? (Speaking of smash, check out his recent particle stuff if you missed it, it's amazing: http://directtovideo.wordpress.com/ )

(And I also feel like an idiot seeing what people like smash manage to do ;)

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

Actually...

There is a real lack of documentation about making normal maps, besides using nVidia's app, or a couple other apps (photoshop plugins, gimp plugins, etc).

From what I can tell, the "cheap" method of creating a normal map from a texture image (not necessarily the best) is to convert the texture to greyscale and then map the greyscale into red, green, and blue channels, where dx=red, dy=green, and dz=blue. I also see ideas about blurring to get rid of extraneous detail from photos, compositing stacks of blurred images, etc.

What I'm running into is that I can't figure out what corresponds to what in the greyscale -- does green=white, red=grey, blue=black? That doesn't seem quite right; I think I'm missing something. Can this be done reasonably by (blurring if desired and) converting to monotone, then using Spot Colors, or should it really be done with a custom filter? It seems like having something that brought out edges, like a Sobel, might be useful...

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

I hope that cwright is looking into deferred shading to deal with the craptastic shadow engine that keeps revealing its flaws... Using my powers of observation ;-) I mean, that is a glaring GLSL-related problem in QC right now...

psonice's picture
Re: Normal Map demo (Composition by cwright)

It's a normal map, not a bump map - the normal of the surface at the point where the texture pixel lies is stored in the texture, so RGB simply means the XYZ vector of the normal. I.e. the normal map pixel contains the angle of the surface at that point, not the 'height' or anything like that. You then use the angle to determine lighting etc.

This way your object basically stores a low res mesh, but the normal data for a high res mesh, so when you light it you get the appearance of a really high res mesh (except near the edges, where it's suddenly blocky again ;)
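For anyone following along, the RGB-to-XYZ decode psonice describes is just a remap from the texture's [0,1] range to the normal's [-1,1] range. A minimal Python sketch (the helper name is mine, not from the composition):

```python
def decode_normal(r, g, b):
    """Map a normal-map texel (RGB, each 0..1) to an XYZ normal (each -1..1)."""
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)

# the characteristic "flat" blue of a normal map, (0.5, 0.5, 1.0),
# decodes to the straight-up normal (0, 0, 1)
print(decode_normal(0.5, 0.5, 1.0))
```

This is also why normal maps look mostly blue: a surface facing the camera has its normal pointing along +z, which encodes to a texel with blue at full strength and red/green at 0.5.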

cwright's picture
Re: Normal Map demo (Composition by cwright)

If you can wait till tonight, I'm going to try and mock up a version that works with arbitrary meshes (not just planes) -- this version works by simply replacing the normal at each fragment with one from the texture, multiplied by the normal matrix (to work with rotation), which is fine for planes, but bad for non-planes :)

(I think I have the math worked out, I just need to do some tests, and not let it get in the way of other work ;)

Looking into Deferred mostly for personal edification (and definitely not for shadow stuff -- that's QC's execution/idle state paranoia at play, not the shadow algorithm itself. Sorry george :/) -- I made a cheap-o teapot with 96 point lights, and it renders at ~100fps, which was absolutely incredible to me.

I think Smash's blog somewhat inspired me latently (since I read it like 4 months ago?). Later, I was reading up on PS3 engines (I was interested in stream processors, incidentally), and saw some cool tricks using deferred shading + the Cell stream processors (DoF, volumetric lighting, antialiasing), and it all kinda clicked. I mean, doing all this stuff in a single geometry pass is astoundingly, wildly efficient. It also reminded me of how RenderMan worked (at least, back in the early days) -- they'd essentially store per-pixel position, normal, depth, etc., and post-process the attributes to give the final image. It's not entirely applicable to deferred shading (since they did sub-pixel surface subdivisions, which aren't really possible till OpenGL 4.0 at best), but there are enough similarities that I can start to understand it :)

cwright's picture
Re: Normal Map demo (Composition by cwright)

Just for kicks -- the "blocky edge" thing is called the silhouette (I know you know this Chris, just for other readers who may be interested). For example, when you render a sphere it's not perfectly circular because it's still composed of triangles (unless you make them so small that there's only 1 per pixel, which is a lot of geometry to push around :).

Great explanation of the normal map. If it's still not clear, please keep asking (this is really cool to discuss :)

psonice's picture
Re: Normal Map demo (Composition by cwright)

Reading up on PS3 deferred rendering? Umm, there's a good chance that had something to do with smash too :D He works on deferred rendering stuff for sony's ps3 r+d team ;)

dust's picture
Re: Normal Map demo (Composition by cwright)

i have been using normal maps and displacement maps for some time when modeling. i think its good practice to use them, particularly in a realtime environment, but equally so in a non-realtime environment.

anytime you can use a low res poly mesh and get as much detail from it as possible, its a good thing. the workflow i use to get my normals is a bit tedious, as i have to use a few autodesk programs.

first i make a model in mudbox then paint the texture on the model. then i export the normal map and displacement map plus uv textures etc...

all these textures are made from a high res model. when the model is made i make sure to save the low res mesh and then work from a hi res subdivided mesh. that way all the details from the hi res get stored in the texture files.

the next step is to import the low res model into maya, then re-graph the shading network in maya and bake the texture to a uv map.

once this is done... in my experimentation with displacement maps i was able to then re-graph the shaders in qc. i was super stoked when qc 4 came out and i could get this working in qc. that was until i tried to light the patch and found out the old lighting issue with not being able to get the displaced vertex data back out of the shader to the lighting patch ;(

i have since given up, but now i'm thinking that if the normal mapping works thats good enough, as i can fake the displacement in the baked texture. i know it sounds messed up because im building a shader at each step (3 of them in total) but thats the price you have to pay to get the detail. so this is exciting chris.

and to the other chris, that direct to video stuff is amazing. the blog is very informative, as i now understand what was going on in that demo -- i couldn't figure out if that particle demo was 3d or 2d. i guess it was kind of both: 2.5d.

like i mentioned, i use mudbox to make my normals. it seems possible to make a normal map with the nvidia plugin for photoshop, which might save time, but for whatever reason working in 3d in photoshop is still weird to me.

how did you make your normal map chris ?

dust's picture
Re: Normal Map demo (Composition by cwright)

this is really slick... even if its only working on planes right now. this example could make some really nice 3d button pushes i guess by simply multiplexing the lights....

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

psonice wrote:
It's a normal map, not a bump map - the normal of the surface at the point where the texture pixel lies is stored in the texture, so RGB simply means the XYZ vector of the normal. I.e. the normal map pixel contains the angle of the surface at that point, not the 'height' or anything like that. You then use the angle to determine lighting etc.

This way your object basically stores a low res mesh, but the normal data for a high res mesh, so when you light it you get the appearance of a really high res mesh (except near the edges, where it's suddenly blocky again ;)

I didn't say bump map anywhere....maybe the process I'm describing is related to doing bump maps, but I was reading about it on a tutorial about setting up normal maps (maybe that author was giving erroneous info?). I also didn't think that height info was contained anywhere; that doesn't make sense.

I understand that the RGB relates to the XYZ vector, which is exactly what I'm relating in the post above. When I've looked on the web about generating normal maps, it seems that the process is to make a greyscale image, then to convert the greyscale to the colored/filtered image. There is just little (well none that I can find) info about the filtering process after turning it to greyscale. I can't find much info about it at all other than a few apps that setup normal maps for you.

You know way more about this than me, so shedding light on it is appreciated.

cwright's picture
Re: Normal Map demo (Composition by cwright)

I know -- I think it was one of his papers (among a few others that definitely weren't his). I know for sure that I found one of them from his blog :)

It's kinda weird -- I've been a huge proponent of multithreading/parallel processing (right now I'm harping on smokris and bmellen to do some threading in places I can't touch anymore), and as such I was reading about PS3's cell because it's an interesting/different paradigm when it comes to threading. So it all kinda comes full circle -- GPUs are massively parallel (but not the same as threading in the CPU context), so there's a lot of overlap in the research being done. Whichever end I started from, they both wound up at the same place :)

cwright's picture
Re: Normal Map demo (Composition by cwright)

regarding bump data, there are actually some shaders that do crazy per-pixel displacement stuff (though it's rare, because it's expensive and you need to do weird stuff with your geometry).

generating a normal map requires special tools -- dust's reply above seems to describe the typical workflow (from my brief skimming). http://www.bencloward.com/tutorials_normal_maps3.shtml has some tools that do it.

Generally (as far as I understand it), you have a high-poly mesh and a low-poly mesh, and you feed them both into the tool. The tool then takes the high-poly normal detail and bakes it into a texture suitable for placing on the low-poly mesh. I don't think it's really something that can be done in real time (at least, I've never seen non-procedural normal mapping, but maybe I'm wrong).

I'm not sure where the greyscale thing comes up -- I don't think that's really a necessary step. All the normal maps I've seen are blueish, with cyan/magenta tinting where the non-camera-facing bits are. Grey, when converted to a surface normal, would give you a zero vector (no normal), which isn't useful (unless you don't want lighting at that particular point).

cwright's picture
Re: Normal Map demo (Composition by cwright)

I just grabbed some normalmaps off the net (google image search for normal maps :) -- I don't have access to any tools that can make them, unfortunately. Smokris may still have access to a maya lab, and endless numbers of grad students who might know something more?

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

This is the page I'm referring to that talks about the greyscale stuff when describing doing what I'm talking about; getting something that will "sorta work" as a normal map from your source texture when you don't have a real normal map available...

http://www.katsbits.com/tutorials/textures/how-not-to-make-normal-maps-f...

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

This seems to work ok even though it was originally Tiger era.

http://homepage.mac.com/nilomarabese/Menu13.html

dust's picture
Re: Normal Map demo (Composition by cwright)

oh man... whenever i put in some time trying to figure out glsl i get all overwhelmed. its not that the syntax is complicated, but its really complex stuff. i decided to try and implement the steep parallax mapping shader and im down to one error, but im having a hard time wrapping my head around all this.

i barely got a displacement map working a while ago. i wish i could find it now. i got the basic concepts of parallax down, as i did a stereo project for a class in perception.

im glad there are guys out there that understand this stuff and take the time explaining and sharing because its one thing for me to drag some nodes around and make a shading network in maya but its entirely different to lets say make your own in qc...

i know that steep parallax method is 101 gl programming stuff but they don't teach this stuff at my school.

psonice's picture
Re: Normal Map demo (Composition by cwright)

The greyscale image = bump map, which is where I picked that up from. They're converting an image to a bump map, then converting a bump map to a normal map. You can do that, but it's not a clever way to work ;)

Normal maps are used for storing high res mesh details in a lower res mesh's texture like mr. Wright said. Really, you want to generate the normal map from a high res mesh, ideally with proper tools. If you want to try it yourself, the process should be something like this (disclaimer: i've not tried this):

  1. take a high res mesh, convert it to a low res mesh.
  2. for each poly in the low res mesh, create a texture (or part of a texture with some UV/unwrapping magic).
  3. work out which part of the high-res geometry maps to this particular area of the low-res poly, and store the normals in the texture. You'd probably do some kind of projection using the normals of the low-res poly to select a volume of the high res mesh.. this part will be hard ;)
  4. bake your newly created normal map into the low res mesh. Now when you render it with lighting, it lights up pretty much like the high res mesh (except the silhouette which still has sharp corners..)

You COULD create normal maps from photos, but it's either going to be 'abstract', 'wrong', or 'very hard' ;) Creating a normal map from a photo is easy, but creating one that looks good and is realistic is really difficult - you have to analyse the lighting to determine the shape, taking into account shadows and surface colours.. only recommended if you're into hardcore maths. I'd walk away from this job fast, and look for some (probably expensive) software to do the job.

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

Ohhh, I gotcha about greyscale equaling bump map.

Now I understand it much better, given your breakdown.

In experimenting more with filtering setups that are "supposed" to make normal maps from regular textures/photos, I see exactly what you mean about creating a normal map being easy, but creating one that looks realistic being "hard".

However, with simple things (like ultra-simple iconic images... a thought bubble, heart, etc.) I'm finding it works pretty well. I have been doing some brick walls, etc... all look fairly reasonable. Now, a diamond plate metal texture looked like crap when trying to make a normal map using one of these little apps. So, I see what you mean about it not necessarily being a clever way to work, but if you don't have a high res mesh to generate the normal map from to begin with, doing the "pseudo normal map" from a picture could possibly be better than nothing (...but maybe not if it's plain ugly in the end result).

I still think that having a little routine that does something like the app I linked to via CI would still be nice, even if not completely accurate. It really seems like QC/CI could do the exact same job that normal making app is doing (right or wrong).

I don't know... I don't remember thinking Photoshop did any better of a job at generating normal maps. I never used the nVidia app, but it looks like you just feed it a simple source texture as well.

cwright's picture
Re: Normal Map demo (Composition by cwright)

Generating a normal map from a height map should be pretty simple in CI -- just sample the surrounding pixels, and find the average "direction" as if each sample was a height. I might whip something like this up eventually, just for kicks...

Anything that takes an image input to generate a normal map won't be too great -- good normal maps require good tools that take meshes as inputs. [In other words, I doubt the Photoshop tool was all that good.]

cwright's picture
Re: Normal Map demo (Composition by cwright)

dust, can you create a post where you elaborate on "shading networks" in maya? (with pictures)? I'm interested to see how they tackle creating shaders without requiring artists to have an advanced mathematics degree...

psonice's picture
Re: Normal Map demo (Composition by cwright)

From what I remember of it (it's getting on for 10 years since I last touched maya..), think QC filters. You have a bunch of standard generator patches, combining patches etc. The end result would be a glsl shader though, rather than a straight image. For anything not built-in though, you need that mathematics degree ;)

psonice's picture
Re: Normal Map demo (Composition by cwright)

I wrote one of these ages back, but it was built into the glsl shader rather than being a normal map generator (my 'bump map' stage was animated by QC, and the mesh was generated using a lower-res version of the bump map so that made more sense). I'll dig it out if I can find it.

usefuldesign.au's picture
Re: Normal Map demo (Composition by cwright)

cwright wrote:
If it's still not clear, please keep asking (this is really cool to discuss :)

I have a remedial question, more theoretical I guess, but just asking.

I can see how you would be able to calculate a normal for any point on a height-field texture by comparing its value to neighbouring pixels -- 4 pixels in a cross (say). How do you do it for a vertex mesh?

For instance, take the vertex of a primitive like a cube or a pyramid: how do you calculate the normal? I guess if you were working out the normal for the edge of a cube you would average the two coincident planes and draw a perpendicular line, but I can't see how you calculate a mesh vertex. Do you just average all the coincident planes? Math question I suppose…
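(For reference, that "average the coincident planes" idea, sketched in Python -- a cross product of two edge vectors gives each flat face's normal, and the vertex normal is the renormalized average. The function names are mine; production tools often weight the average by face area or angle rather than averaging plainly.)

```python
import math

def face_normal(a, b, c):
    """Normal of a flat triangle: cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(x*x for x in n))
    return [x / length for x in n]

def vertex_normal(face_normals):
    """Plain average of the normals of all faces sharing the vertex, renormalized."""
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = math.sqrt(sum(x*x for x in s))
    return [x / length for x in s]

# two cube faces meeting at an edge: one facing +z, one facing +x;
# the averaged normal points diagonally out of the edge (45 degrees between them)
print(vertex_normal([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]))
```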

cwright's picture
Re: Normal Map demo (Composition by cwright)

See Normal Map II demo -- it uses texture coordinate derivatives to approximate the original (non-deformed) normal of an arbitrary surface, about which you can then deform the normal.

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

A few things have come to mind...

-As I read about normal mapping even more, many articles and explanations simply call it a subtype/form of bump mapping.

-What would the steps be to get something that works as well (or like "it should") with simple greyscale bump mapping?

-I do think that really useful results can occur from generating normal maps from textures, even if it can be a hit or miss proposition. For example, I'm attaching a composition where I generated a normal map from a simple texture and it works in a way that looks accurate. In this scenario, anything is going to look like a higher poly mesh than a simple GLSL grid anyway.

-I should post this in Normal Map II's conversation... but it seems like if I have a Sphere with a low stack/slice count, and put a normal map that would be "flat" that it should actually smooth out the look of the Sphere, but I don't see that happening. I don't know if that's a quirk specific to QC, or something that was a concept specific to whatever video game engine was being referred to in the blog/tutorial I was reading.

  • Given the pretty darn good results with using photo textures for normal maps, I am firmly re-resolved that, even if it's not a totally standard approach, a CI kernel routine should be developed that does as close to what NMG does as possible.

So, in the example attached, I have a basic texture, and the normal map generated in NMG, from photo texture. The lighting adjusts/shadows just as I'm expecting it to, even if originally generated from photo texture. So, I don't think this method has to be "artistic" or inaccurate looking all the time.

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

On that note, I posted something below that shows that pretty good normal maps can be generated from photo texture, if enough time is spent on tweaking, and if the source texture lends itself to it.

The more color variation/high level of detail, the more it falls apart. For example, I've made a few normal maps for some MD2's that leave a lot to be desired, because of the drabness of the original colors.

I totally "get" the concept of making the normal map from a high res mesh, and why it is better, etc., just to reiterate.

cwright's picture
Re: Normal Map demo (Composition by cwright)

gtoledo3 wrote:
-What would the steps be to get something that works as well (or like "it should") with simple greyscale bump mapping?

Read the pixel and its surrounding pixels. Generate vectors based on texture coordinate and a virtual height based on texture (luma = height, red = height, whatever).

Then do some cross product magic, normalize where appropriate, and map the resultant virtual normal to 0-1 (so it fits in a texture -- typically you add 1 and then divide by 2, since normals are from -1 to 1 for x, y, and z).
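That recipe, sketched on the CPU in Python (one texel at a time -- a CI kernel would do the same math per pixel; the function name and the `strength` parameter are mine, and the central-difference formula here is the common shortcut where the cross product of the two slope vectors reduces to a closed form):

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """Build one normal-map texel from a 2D height field (list of rows),
    using central differences on the surrounding pixels."""
    h, w = len(height), len(height[0])
    # sample the four neighbours, clamping at the edges
    left  = height[y][max(x - 1, 0)]
    right = height[y][min(x + 1, w - 1)]
    down  = height[max(y - 1, 0)][x]
    up    = height[min(y + 1, h - 1)][x]
    # cross product of the x- and y-direction slope vectors, reduced:
    nx = (left - right) * strength
    ny = (down - up) * strength
    nz = 2.0
    # normalize to unit length
    length = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    # remap from -1..1 to 0..1 so it fits in an RGB texture (add 1, divide by 2)
    return ((nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2)

# a perfectly flat height field yields the classic "flat" blue texel
flat = [[0.3] * 3 for _ in range(3)]
print(height_to_normal(flat, 1, 1))  # → (0.5, 0.5, 1.0)
```

The `strength` knob just scales how steep the virtual bumps are; sign conventions for which direction is "up" vary between tools, so expect to flip the green channel for some engines.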

gtoledo3 wrote:
-I should post this in Normal Map II's conversation... but it seems like if I have a Sphere with a low stack/slice count, and put a normal map that would be "flat" that it should actually smooth out the look of the Sphere, but I don't see that happening. I don't know if that's a quirk specific to QC, or something that was a concept specific to whatever video game engine was being referred to in the blog/tutorial I was reading.

At this point QC has almost no role in what's happening (only that we have to do a lot of work per pixel on the GPU because we can't provide tangent vectors). To smooth a low-poly mesh, you simply need to normalize the normal per-pixel (GL by default will not do this, which causes normals to be interpolated linearly, yielding non-unit-length normals. When these are fed to the lighting calculation, their non-unit length causes inaccuracies) -- you can find that type of shader at http://www.lighthouse3d.com/opengl/glsl/index.php?pointlight (it can't change the silhouette, and there's only so much it can do when the stack/slice count drops too much).
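The effect is easy to see numerically; a small Python sketch (not QC-specific) of what happens between two vertex normals 90° apart:

```python
import math

def lerp(a, b, t):
    """Linear interpolation, component-wise -- what GL does to varyings by default."""
    return [a[i] * (1 - t) + b[i] * t for i in range(3)]

def length(v):
    return math.sqrt(sum(x * x for x in v))

# two unit normals 90 degrees apart, as at adjacent vertices of a coarse sphere
n0 = [0.0, 0.0, 1.0]
n1 = [1.0, 0.0, 0.0]

mid = lerp(n0, n1, 0.5)   # the interpolated value a fragment halfway between gets
print(length(mid))        # ~0.707 -- too short, so lighting comes out too dark
renorm = [x / length(mid) for x in mid]
print(length(renorm))     # unit length again after a per-pixel normalize()
```

That shortfall (about 29% here) scales the diffuse term directly, which is why coarse meshes look faceted until the fragment shader renormalizes.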

gtoledo3 wrote:
- Given the pretty darn good results with using photo textures for normal maps, I am firmly re-resolved that, even if it's not a totally standard approach, a CI kernel routine should be developed that does as close to what NMG does as possible.

Definitely -- it's not something you'd want to do every frame if you could avoid it, but it's not intractable. It's also possible you could write a shader that outputs normal map data, and use it + Render in Image to generate normal maps based on various geometry + filters or something.

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

A pretty awesome tool for turning texture images into high quality normal maps:

http://www.polycount.com/2010/10/06/ndo-normal-map-creation-toolkit/

"Yesterday at about 10:43am (PST), teddybergsman posted a thread in the Pimping & Previews forum that caused quite a stir and also caused a lot of jaws to drop. With his first post, he became an instant legend in the Polycount community by sharing his normal mapping toolkit called nDo. nDo has a wide variety of normal map creation features, which gives the user great control to create quality normal maps all within Photoshop in a simple and easy to use script. nDo has the basic features of converting your colour map into a normal map, but with far more control than most of the other normal map generators. Not only that, but you can create normals from selections (marquee, lasso tools), paths and you are even able to rotate/skew/resize/flip elements using the transform tool while still maintaining the correct normal information. Better yet, you are even able to “sculpt” your normals using the default brushes as well as your own custom brushes! Mind grapes = Blown!"

This is the tutorial that shows some of the capabilities:

http://philipk.net/tutorials/ndo/ndo.html

What makes it killer is the ability to add extra "sculpting" to the normal map.

It really seems like this should be possible to do with CI, and I know there is a bit of an explanation from cwright above. I remember trying back when this thread was new and not having the color values quite correct.

Attachment: peterk_ndo_02_thumb.jpg (7.73 KB)

gtoledo3's picture
Re: Normal Map demo (Composition by cwright)

I just figured something out, and mileage may vary.

When using a setup like this, if one forces mipmapping of the normal texture (by running the normal map through an Image Texturing Properties patch set to mipmapping enabled/target 2D), the result with lighting looks more "correct" if you're dealing with big textures and a big sprite/extreme angles (anything that would bring on a moiré effect).

Since mipmapping makes the color homogeneous at far-away pixels, it does the trick.

Attachments: enabled.png (1.52 MB), disabled.png (1.62 MB)