1D vs 3D LUTs

What are LUTs, and what’s the difference between a 1D and a 3D LUT? Let’s find out.
Various LUTs that can be applied non-destructively in Blackmagic Design’s DaVinci Resolve

The acronym LUT stands for ‘Look Up Table’, and its meaning differs depending on the context. Generally speaking, however, a LUT is used to map a set of input values to a set of output values. Within the realms of image editing and video, we’ve traditionally used LUTs to map one colour space to another, both for non-destructive previewing and for colour grading. More recently, however, LUTs have taken on a creative purpose too—from dramatic day to night tonal shifts to film ‘looks’, you’ve probably come across at least one pack of LUTs provided either as a free or paid download. Modern image editors support LUT import and export too, so the convenience and power of this system are not restricted to just video.

Here’s a great example: image editors may have to quickly produce several versions or ‘looks’ of an image or render. Using LUTs, what would usually be a time-consuming process of mixing adjustment layers and blend modes is reduced to a few clicks.

A LUT designed for a quick “day to night” workflow.
A LUT that lends the image a blue tone (designed for images with artificial lighting).

The above examples demonstrate LUTs being used for tonal effect, but as mentioned previously, there’s also colour space conversion. LUTs have a more practical use when it comes to converting footage from a camera’s colour space to a standard video colour space like Rec.709 or Rec.2020.

You’ll typically see LUTs in use when working with LOG or flat footage. Many cameras offer the ability to shoot with a logarithmic or tonally flat colour profile in order to obtain a greater usable dynamic range, but the resulting footage often looks washed out and desaturated. LUTs are often used to non-destructively convert and ‘preview’ the footage whilst it’s being edited, and are sometimes used to reach the final colour grade too.

A LUT used to convert from a camera’s LOG colour space to Rec.709, producing a usable image.

So what’s the difference?

Now that we’ve explored a little about LUTs and their uses, let’s look at how a 1D LUT differs from a 3D LUT. I’m passionate about communicating this, because there doesn’t seem to be an explanation out there that breaks it down in simple, practical terms.


1D LUTs remap individual pixel values to new values. For example, a LUT might specify that an input value of 4 (near black) becomes 230 (near white). It’s elegantly simple, because you can be incredibly precise—consider that you might be working with a 10-bit video clip. In order to work out the number of pixel values available, we can calculate:

2^10 = 1024

In our LUT, then, we only need to specify remapping for 1024 values.
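To make this concrete, here’s a minimal sketch (in Python, using a hypothetical inversion curve) of how a 1D LUT is just a direct table lookup:

```python
# A minimal sketch of a 1D LUT: one output value stored per input value.
# For 10-bit video there are 2**10 = 1024 possible input values.

BIT_DEPTH = 10
SIZE = 2 ** BIT_DEPTH  # 1024 entries

# An identity LUT maps every value to itself.
identity_lut = list(range(SIZE))

# A hypothetical LUT that inverts the image:
# input 4 (near black) becomes 1019 (near white).
invert_lut = [SIZE - 1 - v for v in range(SIZE)]

def apply_1d_lut(lut, pixel_values):
    """Remap each pixel value with a direct table lookup."""
    return [lut[v] for v in pixel_values]

print(apply_1d_lut(invert_lut, [4, 512, 1020]))  # [1019, 511, 3]
```

The whole transform is baked into the table up front, so applying it is a single lookup per pixel value.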

A single 1D LUT on its own, however, can only adjust brightness, contrast and black/white levels. In this case, the same remapping is applied across the Red, Green and Blue channels together, which means you can’t adjust those colour channels individually.

What you can do instead is use a 1D LUT composed of three separate tables (or matrices), known as a 3x 1D LUT. Each table contains input and output values for one of the Red, Green and Blue channels, meaning you’re also able to tweak the colour values separately. This triples the number of values to be remapped, so 1024 values become 3072 with a 10-bit LUT, but that’s still a relatively small number.
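As a sketch, a 3x 1D LUT can be modelled as three independent lookup tables, one per channel (the curves below are hypothetical):

```python
# A sketch of a 3x 1D LUT: three independent tables, one per channel.
SIZE = 1024  # 10-bit

# Hypothetical curves: lift the blue channel slightly, leave red and green alone.
red_lut   = list(range(SIZE))
green_lut = list(range(SIZE))
blue_lut  = [min(v + 50, SIZE - 1) for v in range(SIZE)]

def apply_3x1d_lut(r_lut, g_lut, b_lut, pixel):
    """Each channel is remapped through its own table; the channels never mix."""
    r, g, b = pixel
    return (r_lut[r], g_lut[g], b_lut[b])

print(apply_3x1d_lut(red_lut, green_lut, blue_lut, (100, 200, 300)))  # (100, 200, 350)
```

Note that the red output depends only on the red input, and likewise for green and blue—this independence is exactly the limitation discussed below.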

A representation of a 3x 1D LUT. Notice the red, green and blue lines denoting how to remap values for the individual RGB channels.

A Curves adjustment is a good example of how a 3x 1D LUT would function:

A practical example of how a 1D LUT works: a typical Curves adjustment in Affinity Photo.

In terms of equivalency, a Curves adjustment in Affinity Photo is like a 3x 1D LUT. We can adjust the red, green and blue channels separately, and the given input values are then mapped to the new output values (which are defined by the spline curve we draw by adding nodes).

From this, then, we can understand that the limitation of a 1D LUT (or indeed a 3x 1D LUT) is that it offers no way of mixing the channel information, which can be important for both colour space conversions and colour grading. This is restrictive, as it doesn’t accurately represent the sheer complexity of colour and its nuances.


3D LUTs are a different proposition entirely. There are two key aspects of a 3D LUT to note:

  • They work in 3D space, otherwise known as the XYZ colour space. The X, Y and Z axes map to red, green and blue.
  • They can utilise floating point values to convey complex mappings, e.g. 0.00452, 0.74002 and so on.

A 3D LUT opened in a text editor.

The way a 3D LUT is applied is also very different to a 1D LUT. Whereas a 1D LUT contains explicit input and output values based on the given bit depth (e.g. 1024 values for 10-bit), a typical 3D LUT will not contain values for every possible combination in the XYZ colour space. As an example, let’s take the 3D LUT created in Affinity Photo shown in the image. This LUT’s size is 64 (or 64x64x64), which means there are 64 input-to-output points along each axis (X, Y and Z).
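For illustration, here’s a sketch of what a much smaller 3D LUT looks like in the common .cube text format: a hypothetical 2x2x2 identity LUT, where each data line is an output RGB triple and the red axis varies fastest from line to line:

```
# A hypothetical minimal .cube file: a 2x2x2 identity 3D LUT
TITLE "Identity sketch"
LUT_3D_SIZE 2
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
0.0 0.0 1.0
1.0 0.0 1.0
0.0 1.0 1.0
1.0 1.0 1.0
```

A real export would of course use a larger size such as 64, giving 64x64x64 data lines.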

For a 64x64x64 3D LUT, we would calculate 64^3, which equals 262,144 values. That’s still not enough to cover the full range of 10-bit footage in the XYZ space, though: to get that number, we would calculate 1024^3, which equals 1,073,741,824. Quite a difference, and the resulting file size of a LUT containing this many values would be around 4GB. This is prohibitive for both storage and performance, since caching and reading a LUT file this large would be difficult, so a 3D LUT would never realistically contain this many values.

Even for 8-bit footage, a LUT that contained every possible value would still need 16,777,216 entries (calculated from 256^3), resulting in a file size of around 65MB.
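The figures above are easy to verify:

```python
# Reproducing the entry counts discussed above.
cube_size = 64
print(cube_size ** 3)  # 262,144 lattice points in a 64x64x64 3D LUT
print(1024 ** 3)       # 1,073,741,824: every possible 10-bit RGB combination
print(256 ** 3)        # 16,777,216: every possible 8-bit RGB combination
```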

Since the LUT doesn’t contain every possible value across this range, the values in between must be interpolated. The method of interpolation varies depending on the software used (common methods include trilinear, prism, pyramidal and tetrahedral interpolation). For example, these two mappings may be mandated by the LUT:

0.083188 0.158670 0.298659

0.087048 0.160843 0.302750

The software then has to interpolate the values that fall between these two mappings, based on the relative relationship between the three axes.
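As a sketch of that interpolation step, here’s a simple linear blend between the two mappings quoted above (real software interpolates across all three axes of the cube, using the methods mentioned):

```python
# Simple linear interpolation between two neighbouring lattice mappings.
# This is a one-dimensional sketch; trilinear interpolation repeats this
# blend along all three axes of the cube.

lower = (0.083188, 0.158670, 0.298659)
upper = (0.087048, 0.160843, 0.302750)

def lerp_mapping(lo, hi, t):
    """Blend two lattice outputs; t is the input's position between them (0..1)."""
    return tuple(a + (b - a) * t for a, b in zip(lo, hi))

midpoint = lerp_mapping(lower, upper, 0.5)
print(midpoint)  # approximately (0.085118, 0.159757, 0.300705)
```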

So… why?

A 3D point cloud demonstrating the distribution and complexity of an image’s colour in 3D (XYZ) space.

At first glance, then, the approach of a 3D LUT must seem slightly bizarre—why use a system that requires interpolation when you could just use a 1D LUT that offers precise mapping?

To answer this, we have to appreciate the complexity of colour science. Mathematically, having an absolute set of input and output values would of course be preferable, but it’s not the best representation of colour: it doesn’t describe the behaviour of colour. Colour can be erratic, erroneous, random, and that’s before we even get to the devices that actually try to reproduce colour in a way that looks meaningful to us.

In addition, colours don’t behave independently of one another: they mix and interact. A 1D LUT cannot replicate this behaviour because the red, green and blue channels are mapped individually, so it can’t match the level of complexity that advanced colour manipulation requires.

A still frame grab from a video clip with no LUT applied. Notice the over-saturation and general red cast in the image.

Here’s an example: a 1D LUT could be used to colour correct a scene. You might, for example, choose to boost the blue channel and weaken the red channel, thereby removing the red colour cast in the image. This is fine, but there’s no interaction between red and blue—they’re tweaked individually with no regard to one another.

Here, a simple 1D LUT is applied that boosts the blue channel and weakens the red channel. Notice the blue cast in the people’s faces, however.

This example has been applied to the above image as a 3x 1D LUT. While it cools down the red tones in the background, the people’s faces have taken on an unflattering pale blue cast.

The result of applying a 3D LUT, which is able to change the colour tone of the yellows and blues in the background without altering the tones in the people’s faces.

Here, a 3D LUT is applied for much finer tonal control: notice the yellow tones in the background have been corrected and tamed slightly, and the blues have been made deeper. Crucially, though, the skin tones are left intact and do not suffer from a colour cast.

Looking at the cube image here, imagine that colour is represented in this 3D space. Reds, greens and blues are no longer constrained to their own channels or ‘planes’ (as mentioned previously, the X, Y and Z axes represent the three colours): instead, they are free to move wherever they like, meaning very precise colour adjustments can be achieved. A simplified explanation is that you can change any colour to any other colour.

Ironically, despite the requirement for interpolation, we can now argue that a 3D LUT with its XYZ colour space offers more precision for transforming colour than a 1D LUT bound to a fixed number of input and output values.

Both an HSL and a Channel Mixer adjustment represent how a 3D LUT would behave:

A practical example of how a 3D LUT works: an HSL adjustment and Channel Mixer adjustment.

Both of these adjustments ‘mix’ colour information in a way that’s typical of a 3D LUT. The adjustments are used to bring out the red tones on the wheel, whereas the yellow tones are desaturated to remove the light pollution in the sky. Yellow tones can’t be manipulated individually through the red, green or blue channels, so you couldn’t achieve this with a 1D LUT.
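As a sketch of why this mixing needs more than per-channel tables: a Channel Mixer is essentially a 3x3 matrix, so each output channel depends on all three input channels at once (the coefficients below are hypothetical):

```python
# A sketch of channel mixing: each output channel is a weighted sum of
# ALL three input channels, which a per-channel 1D LUT cannot express.
# Hypothetical coefficients, channels in 0.0-1.0 range.

MIX = [
    [0.9, 0.1, 0.0],   # new red   = 0.9R + 0.1G
    [0.0, 1.0, 0.0],   # new green = G
    [0.1, 0.0, 0.9],   # new blue  = 0.1R + 0.9B
]

def channel_mix(pixel):
    r, g, b = pixel
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in MIX)

print(channel_mix((1.0, 0.0, 0.0)))  # pure red leaks into blue: (0.9, 0.0, 0.1)
```

A 3D LUT can bake in this kind of cross-channel behaviour (and far more complex, non-linear versions of it) because each lattice point stores a full RGB output for a full RGB input.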

There’s one final avenue to explore before we wrap up. As mentioned previously, colour doesn’t behave in a linear, predictable fashion. This is especially true of organic material like film stock, and of camera sensors, where you can have unpredictable variations in colour, gamma, brightness and saturation. A 3D LUT can accommodate all of these variations and correct or refine them to a very precise level, since values can be remapped anywhere within the 3D XYZ space.

A 3D representation of converting BMDFilm to Rec.709

A practical example of this would be converting LOG footage from a specific camera to a normalised colour space such as ACES, Rec.2020, Rec.709 etc. The provided LUT would be created specifically for that camera and its individual characteristics, including the variations mentioned above. With a 1D LUT, you simply wouldn’t have the flexibility to correct or accommodate these characteristics. The image here demonstrates a conversion from BMDFilm (a colour space used by Blackmagic Cinema cameras) to the standardised Rec.709 colour space.

Wrapping up

Let’s summarise, then. Despite the importance of using 3D LUTs for colour-critical transformations, 1D LUTs still have their uses.

1D LUTs:

  • Have values for every input to output value—they are accurate within their confines and require no interpolation.
  • Are useful for changes in brightness/contrast/gamma.
  • Can be used for colour alterations where there is no required interaction between the three primaries (RGB).
  • Are mainly useful for basic colour grading and conversions to and from standard colour spaces, e.g. Rec.709 to sRGB, Rec.2020 to Rec.709 and so on.

3D LUTs:

  • Operate in 3D space, known as XYZ.
  • Can handle complex operations like gamut alteration, saturation and channel mixing because colour values can be altered relative to one another. Colours can also be changed entirely (so a blue could become a green, or vice versa).
  • Require interpolation, because producing floating point values for every single point in the 3D space would result in a costly file size. It is up to the software to determine the interpolation’s implementation.
  • Are ideal for converting from and between camera colour spaces where the colour values are unpredictable and varied.

It’s also not uncommon to have scenarios where both types of LUT are used in conjunction. Camera colour space conversions, especially through OpenColorIO configurations, will typically use a 1D LUT as a ‘shaper’—usually to convert to linear. This then reduces the precision requirement of the 3D LUT, meaning a smaller cube size such as 17x17x17 can be used. The combination works because the initial ‘shaper’ conversion does not require the complexity of a 3D LUT, especially if it is a straightforward gamma transform.
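A sketch of that two-stage pipeline, with a hypothetical gamma 2.4 shaper and the 3D LUT stage stubbed out for illustration:

```python
# A sketch of a shaper (1D) + 3D LUT pipeline, in the spirit of
# OpenColorIO-style configurations. The shaper is a hypothetical
# gamma 2.4 -> linear transform; the 3D LUT stage is a stub.

def shaper_to_linear(v, gamma=2.4):
    """1D stage: linearise the input so the 3D LUT needs fewer lattice points."""
    return v ** gamma

def tiny_3d_lut(r, g, b):
    """Stand-in for a small (e.g. 17x17x17) 3D LUT lookup with interpolation."""
    return (r, g, b)  # identity, for illustration only

def apply_pipeline(pixel):
    linear = tuple(shaper_to_linear(c) for c in pixel)
    return tiny_3d_lut(*linear)
```

Because the shaper has already removed the gamma curve, the 3D stage only has to handle the cross-channel work, which is why a coarser cube size suffices.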

LUTs and Affinity Photo

Affinity Photo has full LUT import and export capability. It can export to the .cube, .csp and .3dl formats, which are 3D, and to .look, which is 1D.

Affinity educator
James is the voice of Affinity Photo and creates most of our Affinity Photo tutorial videos as well as providing in-house training. A self-proclaimed geek, James’ interests include video, programming and 3D, though these are eclipsed by his passion for photography which has now reached an obsessional level.
Credits & Footnotes

Photography by James Ritson (http://www.jamesritson.co.uk)
3D LUT Cube images and spline graph captured from Lattice (https://lattice.videovillage.co)
3D Point Cloud image generated using G’MIC (https://gmic.eu)