RAW, actually

What’s actually in a RAW file, and how does software convert it to something tangible that we can view and edit? Let’s find out.

If you’re a photographer, you’ve no doubt come across RAW. Our general understanding is that it behaves a bit like a ‘digital negative’, rewarding us with more image ‘information’ and more flexibility during editing. But why does it offer us these benefits? What actually is RAW data? And why can you so easily make dramatic white balance changes, pull highlights and push shadows, and perform numerous other tweaks that are more successful with RAW than with JPEG? Let’s have a look.

Contents of a RAW

Let’s start with what’s in a typical RAW file. Most camera manufacturers use their own proprietary RAW format, and they can have various filename extensions such as .ORF (Olympus), .CR2 (Canon), .NEF (Nikon) and .RW2 (Panasonic) just to name a few examples. Despite this variance, these RAW files all follow a typical structure:

  • A header: common in most file formats. Contains information such as identifiers, byte-ordering etc. Although interesting, it’s not our main concern for now.
  • Metadata: this includes both camera metadata, e.g. camera settings like ISO, shutter speed, aperture, and sensor metadata, which is used to aid the software RAW processing.
  • Sensor image data: this is the scene information captured by the sensor, and is used to produce the final image we see on screen.
  • Embedded JPEG: a JPEG is often embedded into the RAW file, at either full or reduced resolution. It’s typically used to preview the image on the camera’s LCD screen, and can also serve as an initial preview in RAW processing software whilst the actual RAW data is being processed and thumbnailed in the background.

Now that we know what’s in a RAW file, let’s take a closer look at the sensor image data. Certain RAW tools allow us to extract this data before any significant processing is applied to it. This processing may include:

  • Demosaicing
  • Hot pixel remapping
  • Lens corrections
  • Translation to a colour space
  • Gamma correction
  • Tone mapping

Let’s strip all that away and look at an unprocessed image. Without further ado, here it is!

It's... just black?

…at first glance, it’s a little underwhelming. Without wanting to get too complicated, this is down to the bit depth of the captured sensor information. Most RAW data is captured at 12-bit or 14-bit precision, which determines how many levels of brightness the image can contain: 12-bit gives 4,096 levels, for example, whilst 14-bit gives 16,384. For our purposes, the unprocessed image data is represented in 16-bit, which contains 65,536 levels. The RAW file I’ve chosen is 12-bit, so we need to remap its brightest possible pixel value (4095) to the equivalent in 16-bit (65535).
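To make that remap concrete, here is a minimal sketch in Python with NumPy. The sample values are hypothetical; real RAW data would come from a decoder such as LibRaw or dcraw:

```python
import numpy as np

# Hypothetical 12-bit sensor samples, stored in a 16-bit container.
raw_12bit = np.array([0, 1024, 2048, 4095], dtype=np.uint16)

# Remap so the 12-bit maximum (4095) becomes the 16-bit maximum (65535).
# Widen to 32-bit first to avoid overflow during the multiply.
remapped = (raw_12bit.astype(np.uint32) * 65535 // 4095).astype(np.uint16)
```

After the remap, the brightest 12-bit value maps to 65535, so the image uses the full 16-bit range instead of sitting in the darkest sixteenth of it.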

Whilst rudimentary, for demonstration purposes we can achieve this with a simple Levels adjustment, bringing the white point right down to perform the remap. We’ll also want to apply some gamma conversion. This is because the RAW sensor data is linearly encoded, whereas we perceive light in a non-linear fashion: the shadow range is expanded and the highlight range compressed, so we’ll want to account for this. Similar to the white point remapping, we can approximate a gamma tone curve by using a Curves adjustment:

With Levels adjustment and gamma correction

Much better - we can actually see the subject now!

Demosaicing

You’ll notice that the image we’re working with is greyscale—so where’s the colour? Most cameras use what’s called a colour filter array in front of the sensor—this is often a Bayer filter. Light passes through this filter onto the sensor’s photosites, but the sensor doesn’t capture ‘colour’—rather, it captures brightness values. The sensor data is in a greyscale, mosaic format. It’s then up to the software to interpolate these mosaic patterns using the appropriate Bayer matrix and produce a full colour output. Since each pixel only contains brightness information for one colour, the other two colours must be interpolated using information from neighbouring pixels. This is called demosaicing.

Cameras can use a variety of mosaic patterns, but a commonplace pattern is a 2x2 RGGB (Red, Green, Green, Blue) matrix. This pattern mimics the characteristics of the human eye: we are more sensitive to brightness than to colour, and our eyes are significantly more sensitive to green light than to red or blue. The RGGB layout takes advantage of this by capturing twice as many green samples, giving finer luminance detail where our eyes are most sensitive to it.
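To illustrate, here is a toy RGGB mosaic with made-up sample values, and one naive interpolation step. Real demosaicing algorithms are considerably more sophisticated, but the principle of borrowing from neighbours is the same:

```python
import numpy as np

# A tiny 4x4 sensor readout (hypothetical values); each photosite holds one
# brightness sample behind an RGGB colour filter:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
mosaic = np.array([
    [200,  90, 210,  95],
    [ 85,  40,  88,  42],
    [205,  92, 215,  96],
    [ 86,  41,  90,  44],
], dtype=np.float64)

# Demosaicing estimates the two missing channels at each site. For example,
# green at the red site (row 2, col 2) can be interpolated by averaging its
# four green neighbours (a naive bilinear approach):
green_at_red = (mosaic[1, 2] + mosaic[3, 2] + mosaic[2, 1] + mosaic[2, 3]) / 4
```

Every pixel in the final image is assembled this way: one measured channel plus two interpolated ones.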

The image without demosaicing. Note the mosaic pattern

It likely goes without saying that because constructing a full colour image requires a lot of ‘guesswork’ on behalf of the software doing the demosaicing, issues can arise with the final output: these include colour bleeding, false colour artefacting and moiré patterns.

Chromatic noise following a mosaic pattern

Interpolating colour information also leads to chromatic noise, which in the majority of cases is undesirable for the end user. Most software removes this before the user even sees it.

A hot pixel in a processed image - notice it has spread to neighbouring pixels as a result of demosaicing

Some corrective steps are also best applied before demosaicing. A great example of this is hot or stuck pixel remapping: if software removes a hot pixel from the mosaic image, it only has to remap one value. If it waits until after demosaicing, it has to remap multiple pixels, because the neighbouring pixels will have averaged the hot value in when reconstructing their colour information. Users who do lots of long exposure or high ISO photography will likely be more familiar with seeing hot pixels in their images.

A hot pixel in a pre-demosaic image - only one pixel is affected
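A pre-demosaic remap can be sketched as follows. This is an illustrative approach, not any particular converter’s algorithm; note that in a Bayer mosaic, same-colour sites sit two pixels away:

```python
import numpy as np

def remap_hot_pixel(mosaic, y, x):
    """Replace a hot photosite with the median of its same-colour neighbours.

    In a Bayer mosaic, same-colour sites are two pixels away, so remapping
    before demosaicing corrects exactly one value.
    """
    neighbours = [mosaic[y - 2, x], mosaic[y + 2, x],
                  mosaic[y, x - 2], mosaic[y, x + 2]]
    fixed = mosaic.copy()
    fixed[y, x] = np.median(neighbours)
    return fixed

# Hypothetical 5x5 mosaic patch with a hot pixel at the centre:
patch = np.full((5, 5), 100.0)
patch[2, 2] = 4095.0  # stuck at maximum

fixed = remap_hot_pixel(patch, 2, 2)  # centre restored to 100.0
```

Run after demosaicing instead, and the same defect would require correcting a whole cluster of pixels across all three channels.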

Note that, whilst the majority of cameras use this Bayer filter approach, some cameras actually capture full colour information at each pixel location and therefore their images don’t require demosaicing. Although RAW files from these cameras can skip the demosaicing stage, the software still needs to translate the colours from the camera’s internal colour matrix to something that we can see and edit on screen.

Interestingly, more and more cameras are becoming available that feature a high resolution capture mode, such as the Olympus E-M1 mk2, Olympus E-M5 mk2 and Panasonic G9. The sensor is shifted rapidly and several exposures are captured and merged together, but it’s the difference between the JPEG and RAW output that is most intriguing: for example, the E-M1 mk2 produces a 50 megapixel JPEG and an 80 megapixel RAW file. The RAW file requires demosaicing as usual, but the JPEG’s lower resolution suggests that the sensor shifting is used to capture full colour information for every pixel. The benefits here are notably reduced (or altogether absent) chromatic noise and better colour precision. Despite the JPEG being 8-bit, its tonal information also appears easier to manipulate without causing posterisation or banding.

A standard 20 megapixel image - small details appear jagged

A High Resolution, 50 megapixel capture. Note the increased colour accuracy - more colours are represented in the sign and the pixels are smoother

Colour

Unlike JPEGs, RAW files are not bound by a standardised colour space. The colour space option you see in your camera menu (usually offering sRGB or Adobe RGB) does not apply to RAW. It simply tells the camera which colour space to assign when the in-camera JPEG is produced. RAW processing software can sometimes take its cue from this option as well when determining the default output colour profile.

During RAW development, the software assigns the image a colour space, and translates the colour values from the camera’s colour matrix to that colour space. Most software will typically ‘develop’ in a wide colour space (such as ProPhoto), which provides more flexibility for manipulating colour values, before converting to a standardised colour space like sRGB upon saving. Affinity Photo is no different in this regard: in its Develop Persona, colour-based adjustments are performed within ROMM RGB (otherwise known as ProPhoto, a large gamut profile), which allows for colour values far outside the range of sRGB. When the RAW image is finally developed and passed to the main Photo Persona, it is converted to sRGB by default. This output profile is of course configurable, should you wish to continue working in a different (often wider) colour space.
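The translation itself is typically a 3x3 matrix multiplication applied per pixel. Here is a sketch with placeholder numbers; real matrices are derived from camera calibration data and carried in the file’s metadata:

```python
import numpy as np

# A hypothetical 3x3 matrix taking camera-native RGB to a working space.
# These coefficients are placeholders for illustration only; each row sums
# to 1.0 so that neutral grey stays neutral after the transform.
cam_to_working = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.55,  1.50],
])

camera_rgb = np.array([0.40, 0.50, 0.30])   # one pixel, normalised 0..1
working_rgb = cam_to_working @ camera_rgb   # pixel in the working space
```

Developing in a wide space like ROMM RGB simply means the destination of this transform has a large enough gamut that few of the resulting values need clipping.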

What this all means for the end user is flexibility. A JPEG has already been mapped to a colour space and its white point has already been defined. Any colour values outside of the colour space range (e.g. sRGB/Adobe RGB) will be clipped and discarded. Add to this JPEG quantisation and compression, which further discards detail, and it becomes clear you’re working with a very limited subset of image information. You can only push the tonal information so far, and major alterations such as white balance prove very difficult to achieve. With RAW, you can push colour intensity values much further before clipping them, and changing the white balance produces a more natural, accurate result.

White balancing a JPEG vs RAW file. Left: original image with incorrect white point. Middle: JPEG with attempted white balancing. Right: RAW with superior white balancing

Dynamic Range

One of the most frequently cited benefits of shooting RAW is the increased detail you can gain from an image’s shadow and highlight tonal areas. Rather than suggesting an increased dynamic range (thus opening up an argument about bit depth versus dynamic range and whether they are intertwined), we can put this down to RAW software being able to compress or boost information that already exists.

Think of it this way: firstly, a JPEG output needs to have gamma conversion to make it look perceptually correct. On top of that, an additional tone curve is applied (otherwise the image would look washed out), and that’s before additional processing is applied based on the picture profile settings: most cameras have contrast, sharpness and saturation/vibrance sliders, along with several presets so the user can pick a particular style or look. It’s entirely possible that the application of the tone curve may clip pixels to pure white or pure black—at this point, they’re unrecoverable. In addition, we then have JPEG quantisation: a process that maps many values to a smaller set of values to achieve lossy compression. This results in less precision, particularly in the darker areas of an image, meaning there’s less information to work with when boosting shadow detail.
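The clipping and quantisation described above can be sketched in a few lines. The S-curve here is a hypothetical linear contrast stretch, not any camera’s actual tone curve:

```python
import numpy as np

# Linear tonal values 0..1; a steep contrast curve pushes tones past the ends.
linear = np.array([0.02, 0.30, 0.70, 0.98])

# A hypothetical contrast stretch around the midpoint...
contrasty = (linear - 0.5) * 1.4 + 0.5

# ...then clip to the displayable range. Clipped pixels are unrecoverable
# once the JPEG is saved.
clipped = np.clip(contrasty, 0.0, 1.0)

# Quantising to 8-bit then collapses nearby values to the same level, which
# is why shadow detail is the first casualty of heavy JPEG editing.
eight_bit = np.round(clipped * 255).astype(np.uint8)
```

In this toy example both end values hit pure black and pure white after the curve; in a RAW workflow those same tones would still hold recoverable detail.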

Highlight recovery. Left: Original JPEG with clipped highlights. Middle: highlight recovery on a JPEG. Right: RAW with superior highlight recovery and no banding.

With RAW data, software can ‘rescue’ clipped highlights and push shadow detail. The key is that all the information already exists within the RAW data: it’s simply up to the software (and user) to shape the tones in a way that all the pixel values can be displayed. It’s possible to take this too far and end up with an unnatural result that’s difficult to edit further. A common technique is called ETTR—expose to the right—whereby you expose your image so that the brightest part of the histogram is just short of clipping. During editing, the shadow detail is then pushed to produce a tonally compressed image. This sounds fine in principle, but many use it in scenarios with extreme contrast (such as a bright sky and dimly lit foreground) and end up sacrificing precision in the shadow tones, resulting in a muddy, noisy foreground.

With the realisation that highlight tones appearing clipped on the preview JPEG may still be present in the RAW data, it may be worth experimenting with eschewing ETTR and instead exposing for a more balanced histogram; you would be surprised at just how much highlight detail is hiding initially behind the default tone curve applied to a RAW image.

Left: Shooting with a balanced histogram, initial image. Highlights appear to be clipped but actually aren't. Right: Same image but with a custom tone curve and highlight recovery

The above image is a great example. If I had exposed this using ETTR, the shadow detail would have been pushed even further left on the histogram, resulting in less detailed foreground information. Instead, by exposing for a more evenly distributed histogram, I was able to remove the default tone curve (Affinity Photo, amongst other software, allows you to do this), bring the highlights back and use tonal adjustments to balance the foreground and sky.

RAW. What is it good for?

Now that we’ve explored the meat of what a RAW file is, we can round up and touch upon the advantages and disadvantages of shooting and editing with RAW.

Advantages

  • Precision: as mentioned previously, RAW files are usually 12 or 14-bit, whereas JPEGs are 8-bit. 8-bit only offers 256 values of intensity per channel; as we’ve observed previously, 12-bit offers 4,096 values, and 14-bit offers 16,384. Some high end cameras produce full 16-bit RAW files, which contain 65,536 values. Having more values translates to smoother gradients and variations in colour. You’ll notice this with blue skies, especially when you try to push them tonally: 8-bit JPEGs ‘fall apart’ very quickly, with banding becoming quite problematic, whereas higher bit depth images maintain fine detail and can withstand heavier tonal work.

  • Full control: if you want complete control over how your image is processed, RAW is the way to go. JPEG is hugely convenient and fast to work with because it already contains a number of adjustments out of the camera, such as sharpening, contrast, noise reduction, lens corrections and white balance. With RAW, these adjustments are all carried out by the software, giving you more control over the end result because you can customise the parameters.

  • Flexibility: adding to the notion of full control, RAW offers you a large degree of flexibility depending on your chosen software, and you can develop your own workflow around the treatment of RAW files. One such example is using Affinity Photo’s Develop Persona to create a flat image with maximum shadow and highlight detail, then developing to a wider colour space such as Adobe RGB in 16-bit precision. This then allows a completely non-destructive workflow, making full use of layers and the various adjustments, filters and tools available in Photo’s main Photo Persona.

  • Archival: RAW files essentially act as digital negatives, meaning you will always have original versions of your images that retain their full quality. RAW files are never directly overwritten: they are always processed and saved to another format, and any development settings are stored separately in ‘sidecar’ files. You can always revisit RAW files years later, and unlike with JPEG there is no danger of accidentally overwriting them.

  • Noise: Digital noise is typically reduced as part of a camera’s processing when saving to JPEG. Whilst this does produce more pleasing images straight out of the camera, it also takes away control over how the noise is treated. As RAW remains unprocessed, the user is given more flexibility when deciding how to process the noise. One such example is removing the chrominance noise, leaving just the luminance noise to produce an image with a grainier texture. For removing unwanted noise completely, there are dedicated software solutions that offer better results than those produced by the camera’s processing.
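The precision point above is easy to demonstrate: quantise a subtle shadow gradient at both bit depths and count how many distinct levels survive (illustrative values):

```python
import numpy as np

# A subtle shadow gradient spanning just 1% of the tonal range.
gradient = np.linspace(0.10, 0.11, 1000)

# Quantise as 8-bit and as 16-bit, as JPEG and RAW containers would,
# then count the distinct levels remaining in each version.
levels_8  = np.unique(np.round(gradient * 255)).size
levels_16 = np.unique(np.round(gradient * 65535)).size

# 8-bit collapses the gradient to a handful of levels (visible banding
# once pushed); 16-bit preserves hundreds of distinct values.
```

Push that 8-bit gradient a few stops brighter and the handful of surviving levels spread apart into visible bands; the 16-bit version stays smooth.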

Disadvantages

  • Workflow: shooting and editing RAW demands an extended workflow. Whereas you can copy JPEGs across from a memory card, open them up and get editing straight away, you must develop RAW files first, adding extra time to the process. Furthermore, the results from developing a RAW file will vary between software, as will the processing speed—both of these factors should be taken into consideration.

  • Storage: a cold, hard fact is that RAW files occupy more drive space than JPEGs. JPEGs are an 8-bit lossy delivery format, whereas RAWs contain higher precision image data, often 12/14-bit. Most RAW formats are lossless, although some camera models offer compressed variations that, despite notable file size savings, are more complex for software to support.

  • Integrity: a more recent concern is the manipulation and subsequent integrity of image data, which is a particular concern for photojournalism where an image can tell a powerful story. The news agency Reuters has issued a ban on images processed in RAW over concerns of journalistic integrity, which obviously has ramifications for photographers who might choose to shoot exclusively in RAW. A further argument is that the discipline of shooting JPEG can help photographers focus more on the story being told in front of the lens, and reduce the temptation of heavy post-processing or editing later on.

  • Support: this runs contrary to the archival advantage above. Although somewhat unlikely, we can speculate that support for RAW formats could be discontinued in years to come. Perhaps software developers would not want to maintain older RAW formats past a certain point, and would gradually phase out support for older cameras as time moves on. This wouldn’t be a huge concern if manufacturers adopted the open DNG format, but many choose not to do this, instead opting to implement their own proprietary RAW formats.


Both JPEG and RAW clearly have their advantages and disadvantages. JPEG offers speed, convenience and reliability. It’s a tried and tested format, readable on practically any device or with any piece of software. Being a lossy format, there is a technical quality drop over shooting in RAW, but it’s up to the user to determine whether this is an acceptable compromise.

To Conclude

RAW offers the maximum potential quality from the camera’s processing, and allows for much more flexibility when editing. You can choose to shoot RAW simply for the peace of mind that you are obtaining the best possible quality at the source. A big factor in shooting RAW is control. You retain ultimate control over your final image: its tone, colours, sharpness and noise profile. You get to decide how cool or warm an image is, bring back or crush shadow detail, saturate or drain the colour and find the best balance between sharpness and noise.

Many cameras allow you to shoot a combination of both RAW and JPEG together, meaning you can reap the benefits of both formats at the expense of greater storage requirements.

Hopefully this article has in some small way highlighted the complicated process of taking RAW sensor data and creating a result that looks good to us, the end users. With very few exceptions, the process involves guesswork (interpolation) on behalf of the software developing the RAW images, so perhaps we should take a step back from time to time and appreciate how great a job software does of delivering us the final results.

Affinity Photo and RAW

Finally, do have a look at Affinity Photo. Its RAW conversion capability has come a long way since its initial release in 2015, and with the ability to process and work in 32-bit unbounded, coupled with the option of removing the tone curve and applying your own, it’s a solid choice if you’re looking for more control over RAW processing.

See the following videos for more information on Affinity Photo’s RAW processing: