
Thread: Why shouldn't Bump, Normal and Displacement maps be gamma corrected?

  1. #1
    Join Date
    Jun 2014
    Location
    Göteborg, Sweden
    Posts
    50

    Question: Why shouldn't Bump, Normal and Displacement maps be gamma corrected?

    So I'm trying to wrap my head around this from a fairly technical point of view.

    When you add a Bump, Normal or Displacement map to your shader, it should not be gamma corrected, but diffuse textures should be. Why?

    When you save a file in 8-bit (or 16-bit integer) with a format like JPEG, PNG or TIFF, a gamma of 0.4545 (1/2.2) gets burnt in. So a camera captures light from the real world (which is linear), and when the photo is saved as a JPEG a gamma of 1/2.2 is applied as a way to compress the information into 8 bits; then, when you view the image, the monitor applies a gamma of 2.2, making the luminance linear again.

    So when using a diffuse texture (like a photo) you need to "remove" that 1/2.2 gamma to make it linear (called linearizing). The texture now looks darker, but the values are correct from a mathematical standpoint, so the renderer makes the correct calculations. Side note: the software uses a display gamma of 2.2 to make sure the textures still look correct; it's only for the internal calculations that they are linearized. But when I create a Bump map in Photoshop and save it as a JPG or PNG, that 1/2.2 gamma is burnt into that image as well, so why doesn't the Bump map need to be linearized too?
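    In code terms, the round trip I'm describing looks roughly like this (a minimal sketch, assuming a pure 2.2 power curve rather than the exact piecewise sRGB function):

    Code:
        def encode(linear, gamma=2.2):
            """What gets 'burnt in' on save: linear light -> stored value."""
            return linear ** (1.0 / gamma)

        def decode(stored, gamma=2.2):
            """What the monitor (or the renderer's linearization) does."""
            return stored ** gamma

        linear_light = 0.5                # mid-grey luminance from the camera
        stored = encode(linear_light)     # ~0.73, the value saved in the JPEG
        displayed = decode(stored)        # ~0.5, linear again on the monitor
        print(stored, displayed)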

    If I load a Bump map in 3ds Max with an input gamma of 2.2, just like I would do with a Diffuse texture, the diffuse looks right but the bump looks wrong, so I have to set the input gamma to 1.0 for the bump to make it look right. But as I said: why? Both image files have the 1/2.2 gamma burnt in, right?

    I did a test in 3ds Max. I created a 256x1 px image with a gradient from black to white, one pixel for each value (0, 1, 2, ..., 255), and saved it as an 8-bit TIFF. I then applied the gradient as a displacement map on a plane with 256 segments.

    The plane on the left uses an input gamma of 2.2 for the displacement map (the same setting I would use for diffuse textures), and it looks wrong since the gradient is linear.
    The plane on the right uses an input gamma override of 1.0, and now the displacement effect is linear.
    [Attachment: Displacement.jpg – render of the two displaced planes]
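    For reference, this is roughly what the two settings do to my stored gradient values (a sketch; the pure 2.2 curve stands in for whatever 3ds Max does internally):

    Code:
        import numpy as np

        # The 256-step ramp: stored values 0..255, numerically linear.
        ramp = np.arange(256, dtype=np.float64) / 255.0

        # Input gamma 2.2 (the diffuse setting): the renderer linearizes,
        # bending the straight ramp into a curve -> displacement looks wrong.
        height_gamma_22 = ramp ** 2.2

        # Input gamma 1.0: values pass through untouched -> linear displacement.
        height_gamma_10 = ramp

        print(height_gamma_22[128], height_gamma_10[128])   # ~0.22 vs ~0.50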

  2. #2
    Join Date
    Nov 2008
    Location
    Mantova
    Posts
    352

    Default

    A diffuse map is the albedo of a surface, i.e. the incident light that is reflected by the surface, captured linearly by the camera sensor and then simply gamma'ed to please our eyes, which have a non-linear response to light. So a photo holds luminance values: a RAW image holds linear luminance values, while an sRGB image, for example, holds gamma'ed luma values.

    A scalar map, instead, holds mere 'values'. It's a simple container for numbers: you don't call it 'an image' or 'a photo', you call it a 'map', a container for values, because you want to map different values onto your surface. It's like having an Excel or CSV file with a list of values. Why should you process those values if they are already what you want to pass to your renderer?
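    A minimal sketch of that distinction (the function names are mine, just for illustration, and a pure 2.2 curve is assumed):

    Code:
        def load_diffuse(stored, gamma=2.2):
            # A photo holds gamma'ed luma: linearize before rendering.
            return stored ** gamma

        def load_scalar(stored):
            # A bump/displacement map holds plain numbers: pass them through.
            return stored

        print(load_diffuse(0.73))   # ~0.5 linear reflectance
        print(load_scalar(0.73))    # 0.73, exactly the number that was painted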

  3. #3
    Join Date
    Jun 2014
    Location
    Göteborg, Sweden
    Posts
    50

    Default

    Yes, but both the Diffuse texture and the Bump texture are being edited in Photoshop and saved out as image files, and thus the reverse gamma (1/2.2) gets burned into the image. Photoshop doesn't know that one of the images is supposed to be a bump map and skip the reverse gamma for that particular image, does it?

  4. #4
    Join Date
    Nov 2008
    Location
    Mantova
    Posts
    352

    Default

    Nothing gets 'burned in'. A value of 1 is a value of 1 when you save an image from Photoshop.

    However, if that stored value holds luminance, it needs to be raised to the power 2.2 to be in linear space: a stored mid-grey of 0.5 becomes pow(0.5, 2.2) ≈ 0.22. If it doesn't hold luminance, it remains what it is.

    If you paint a diffuse albedo texture instead of taking a photograph, Photoshop will save it out with the values you input. However, since you're working on a monitor, what you see is gamma'ed, so the luminance you're painting is what your eyes see, not what you would see linearly. That's why it still needs to be de-gamma'ed. But when you paint a scalar texture, you aren't painting because of what you see, but because of what you want as a numeric input.
    Last edited by maxt; August 12th, 2016 at 18:44.

  5. #5
    Join Date
    Jun 2014
    Location
    Göteborg, Sweden
    Posts
    50

    Default

    In V-Ray there is an option in the Color mapping section: "Mode – Specifies if color mapping and gamma are burned into the final image."

    And in the 3ds Max help file it says:

    "When saving a bitmap file, 3ds Max saves a gamma value if that is possible."

    "Certain file formats contain the applicable gamma values as metadata, and others do not."

    "You can avoid worry about gamma correction by using only image formats that do embed the gamma (such as PNG) rather than those that do not (such as JPG and BMP)."

    Source: http://help.autodesk.com/view/3DSMAX...3-DCF5172219B5

    And I've seen a demonstration by Zap Andersson showing that the mr Photographic Exposure Control (the 3ds Max name; the mia_exposure_photographic lens shader, or something like that, in Maya) also burns the gamma in.

    So I don't understand where or when this reverse gamma (1/2.2) is applied, if it is not baked into the image file.

    Here is a YouTube video that also explains that the stored value in an image has the 1/2.2 gamma applied (he simplifies it and just says it's the square root; there is a disclaimer about that in the text):
    Computer Color is Broken - MinutePhysics
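    His broken-blending point in code terms (a minimal sketch, using the pure 2.0 'square root' curve the video uses):

    Code:
        # Naive average of black and white in gamma space:
        stored_black, stored_white = 0.0, 1.0
        naive = (stored_black + stored_white) / 2        # 0.5 stored
        print(naive ** 2.0)                              # 0.25 linear: too dark

        # Correct: linearize, average, re-encode.
        linear_avg = (stored_black ** 2.0 + stored_white ** 2.0) / 2   # 0.5 linear
        print(linear_avg ** 0.5)                         # ~0.71 stored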

  6. #6
    Join Date
    Nov 2008
    Location
    Mantova
    Posts
    352

    Default

    Kristoffer, gamma is not burned into any image; however, image pixels in sRGB space, for example, are in gamma space by definition. That simply means that anything you see or paint on a computer that looks right is, of course, kind of gamma'ed.

    A value in gamma space, if it holds luminance, has to be raised to the power 2.2 to get its linear-space value (a stored 0.5 is pow(0.5, 2.2) ≈ 0.22 linear), but that same value, if it holds displacement, is simply what it is. So we can say that a pixel as luminance has a 'perceptual' value, or if you want, it has gamma embedded. But that doesn't mean Photoshop is embedding gamma into your pixels; it's you painting them who does so! Pixels you paint are already in gamma space because you painted them perceptually on a monitor.

    Some renderers may take advantage of metadata and such, but that's not even the issue here. Everything should be linearized: you shouldn't tell a DCC when to linearize an image, but rather when not to! And you don't want linearization when a map holds generic values instead of lightness.
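    For completeness, the exact sRGB decode is not a pure 2.2 power but a piecewise function. A minimal sketch of the standard definition:

    Code:
        def srgb_to_linear(c):
            """Exact sRGB decode: linear toe near black, 2.4 power above."""
            if c <= 0.04045:
                return c / 12.92
            return ((c + 0.055) / 1.055) ** 2.4

        print(srgb_to_linear(0.5))   # ~0.214, close to 0.5 ** 2.2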
    Last edited by maxt; August 13th, 2016 at 01:41.

  7. #7
    Join Date
    Jun 2014
    Location
    Göteborg, Sweden
    Posts
    50

    Default

    What if I paint the Diffuse texture in Photoshop? I use no photographs, I just paint some sort of cartoon texture and save it out as an image file. Would that texture then need gamma correction when I use it as the diffuse color in my shader, or should I use a gamma of 1.0, just as I would with a painted displacement or bump map?

  8. #8
    Join Date
    Nov 2008
    Location
    Mantova
    Posts
    352

    Default

    Anything you paint or just apply to a diffuse albedo channel needs to be linear.

    If what you paint looks right to your eyes on a monitor, it needs to be considered as having gamma applied. Technically, color image data has always been coded (and thus exchanged) in terms of monitor R'G'B'; that is, image colors are implicitly perceptually coded (i.e. assumed to have monitor gamma applied), because historically the R'G'B' values from the (c)lut table translated linearly into the voltage applied to the monitor.

    Even if you choose a single flat color, say you color-pick the green in the NVIDIA logo, it needs to be linearized.
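    For instance (a minimal sketch; the hex value is approximate and the pure 2.2 curve is a simplification of the exact sRGB decode):

    Code:
        # Color-picked green, roughly NVIDIA's #76B900, as stored 8-bit sRGB.
        picked = (0x76, 0xB9, 0x00)

        # Linearize each channel before using it as albedo.
        albedo = tuple((c / 255.0) ** 2.2 for c in picked)
        print(albedo)   # ~(0.18, 0.49, 0.0)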

    On the other hand, when you paint a displacement texture, even if you do it artistically, you're not passing to the renderer what you see but the numbers you know you painted into the texture. That said, generally people just paint a displacement texture and then tweak it in the renderer (by applying a multiplier) until the displacement looks good; that's just bad practice (and it exists because different renderers have different ranges for displacement).
    Last edited by maxt; August 13th, 2016 at 01:31.

  9. #9
    Join Date
    Jun 2014
    Location
    Göteborg, Sweden
    Posts
    50

    Default

    If I paint a gradient in Photoshop, let's say 256 pixels wide, and I make sure those pixels are linear, going from 0, 1, 2, 3, ... to 255 in grayscale value, then I have created a linear ramp. And if I use that as displacement in 3D without any gamma correction in the 3D software, I get a linear displacement effect, right? That's what I did with the rendering above.
    But does that same gradient look linear on screen in Photoshop?
    And if I wanted to use that same gradient as a diffuse texture, I would have to apply gamma correction to it, right?

    So when piping the map into Displacement you keep the pixels as they are (you keep them linear), but for Diffuse you have to linearize the texture by gamma correcting it? But I already made sure the pixels were linear as I painted them in Photoshop; this is what's so confusing to me! It sounds like the displacement shader knows they are linear, but the diffuse shader doesn't, so I have to gamma correct them to make them linear, even though they already are, at least according to Photoshop and the displacement shader?!

  10. #10
    Join Date
    Nov 2008
    Location
    Mantova
    Posts
    352

    Default

    So for our summer talks..

    A numerically linear gradient is not a perceptually linear gradient, i.e. it doesn't look linear to your eyes on a monitor. In fact, if you code a linear (progressively discrete) 256-step gradient, technically you're coding luminance, but what you see is lightness (as the perceptual response to luminance is called).

    If you want a perceptually linear gradient (i.e. one that looks linear to you) out of Photoshop, you can first paint a numerically linear gradient and then apply a reverse gamma correction (which technically means applying a transfer function that mimics the lightness sensitivity of vision). To my eyes, on my monitor, a perceptually linear gradient corresponds to a numerically linear gradient with a gamma correction of about 0.56. In other words, to display luminance as uniform lightness, the codes should be assigned to intensities according to the properties of perception.
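    A minimal sketch of the two ramps (the 0.56 exponent is my eyeballed value from above, applied here as a direct power; your paint tool's gamma slider may define it as the reciprocal):

    Code:
        import numpy as np

        position = np.arange(256) / 255.0

        numerical = position             # linear in coded value
        perceptual = position ** 0.56    # eyeballed perceptual correction

        # As 8-bit codes for a 256x1 strip:
        print(np.rint(numerical * 255).astype(np.uint8)[:4])    # [0 1 2 3]
        print(np.rint(perceptual * 255).astype(np.uint8)[:4])   # [ 0 11 17 21]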

    Ideally, with a color gradient, luminance would first be matrixed, i.e. formed as a weighted sum of the linear-light (tristimulus) RGB signals (different channels have a different impact on human vision *). Then the CIE L* transfer function (which behaves almost logarithmically, because lightness perception is roughly logarithmic **) would be applied, to code the signal into a perceptually uniform domain. At the end, the inverse of the L* function (i.e. a gamma) would restore luminance, and the inverse matrix would reconstruct RGB.
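    A minimal sketch of that pipeline for a single grey, using the Rec. 709 luma weights and the standard CIE L* definition:

    Code:
        # Rec. 709 weights: each linear-light channel's contribution to luminance.
        def luminance(r, g, b):
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        # CIE L*: relative luminance (0..1) -> perceptually uniform lightness (0..100).
        def cie_lstar(y):
            return 903.3 * y if y <= (6 / 29) ** 3 else 116.0 * y ** (1 / 3) - 16.0

        # Inverse: lightness back to luminance (the 'gamma' step above).
        def cie_lstar_inverse(L):
            return L / 903.3 if L <= 8.0 else ((L + 16.0) / 116.0) ** 3

        y = luminance(0.5, 0.5, 0.5)      # mid-grey, linear light
        L = cie_lstar(y)                  # ~76: half luminance looks far brighter
        print(L, cie_lstar_inverse(L))    # round trip back to ~0.5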

    That said, once you've painted your gradient the only thing you should ask yourself is whether it's a perceptual artwork that you want to look the same once rendered. If so, you apply a reverse gamma to it. If instead the values you have are physically measured light intensities, then you don't want to de-gamma your image, and you should save it in a RAW format; doing that, you're telling the renderer: this is not a perceptual image, it is a container that holds linear luminance values. So, roughly speaking, the same rules as for displacement maps apply: just think about whether you're passing values or something you see directly with your eyes.


    * Camera sensor photosites, by the way, have different sensitivities over RGB, so we should first apply sensor response functions before converting RAW image data into a color space. http://www.rombo.tools/2016/04/18/se...nse-functions/

    ** Historically, the CRT's power function has always been seen as similar to the inverse of the L* function, so instead of encoding luminance using the L* transfer function, we encode RGB intensities with the inverse of the CRT's function. That's why you hear about 'de-gamma', 'inverse gamma' or 'reverse gamma' rather than log transfer functions.
    Last edited by maxt; August 13th, 2016 at 02:07.
