As a photography enthusiast, I wanted to invest in a high-quality monitor for photo editing. Naturally, I turned to Google and searched for "best monitor for photo editing". I assumed that the most important factor for photography was a monitor that could accurately reproduce colors as they appear in real life. But which specifications truly matter? I had often come across the term color gamut coverage, but I never fully understood it. Specifications like 98% sRGB or 95% AdobeRGB appeared frequently, and while it was clear that higher percentages were better, I wanted to dive deeper. What do these values actually represent? When researching color gamut, sRGB, and AdobeRGB, I repeatedly encountered this diagram, called the CIE 1931 xy-chromaticity diagram.
Determined to finally demystify this diagram, I embarked on an in-depth investigation. This article is the result of my findings. By the end, you'll be able to answer these questions:
What is a chromaticity diagram? I would quickly describe the chromaticity diagram as a representation of the Human Visual Gamut, the full range of colors that the human eye can perceive. But then you might ask: all the colors? Then where is grey? Or brown, or burgundy? And why would a color have an (x, y) coordinate? Does that mean a color can be represented mathematically? Then I would try to be more specific: consider that a color can be fully characterized by two attributes: its chromaticity and its luminance. The chromaticity diagram represents only the chromaticity of the Human Visual Gamut, ignoring luminance. For example, the white point in this diagram has a chromaticity of (x = 0.33, y = 0.33), but its luminance isn't represented.
If this short explanation does not satisfy you, I invite you to keep reading and start by asking yourself a simple question: what is a color?
You've probably learned in physics class that light is an electromagnetic wave, and the portion visible to the human eye falls within the wavelength range of approximately 380 nm to 750 nm. You've likely seen a diagram illustrating this visible spectrum, where violet appears at the shortest wavelengths (~380 nm) and red at the longest (~750 nm).
Each wavelength in this range seems to correspond to a specific color. But here's an interesting question: does this mean that every color we perceive can be produced by an electromagnetic wave with a specific wavelength? In other words, if you could generate light at any wavelength and any intensity, would you find one that corresponds to pink? Let's test this idea with a simple simulation. Try matching these patch colors:
Not possible, right? That brings us to an important conclusion: not every color we perceive can be produced by light of a single wavelength. Light composed of a single wavelength is called monochromatic light. The colors produced by monochromatic light belong to a group known as the spectral colors. But what happens when we mix multiple monochromatic lights? We get polychromatic light. And this is how we get the other colors.
A perfect example is sunlight, which is a polychromatic light made up of all the wavelengths in the visible spectrum. You've probably seen this in action with a prism experiment, where white light is split into the different spectral colors.
Now, imagine we have a device similar to a prism that not only decomposes light into its individual monochromatic components but also measures the amount of energy carried by each wavelength. Good news: such a tool exists, and it's called a spectrometer! The spectrometer's measurements allow us to create a graph known as the Spectral Power Distribution (SPD), which shows the flux density Φ(λ) [W/nm] at each wavelength. Here are some examples of SPDs.
We might imagine that to create a color from its SPD, we could combine all the monochromatic lights of the visible spectrum, adjusting their intensities according to the SPD. However, this approach is impractical for two main reasons: first, pure monochromatic light does not truly exist, and second, we would need an infinite number of such lights. There must be an easier way!
Let's take a classic color: yellow. As seen in the visible spectrum, it seems to correspond to the spectral color at about 575 nm. But you've also known your whole life that you can mix red and green to get yellow. So it seems we could create yellow in two different ways:
On the graph below, you can visualize the SPD of the monochromatic yellow light, a single peak at 575 nm, and the SPD of the combined red and green light, with peaks at 650 nm and 545 nm.
So two different SPDs but the same color! At this point, you might wonder 🤔: "Maybe it's not really two different SPDs, because mixing two wavelengths could create a single monochromatic wave whose wavelength is their sum?" Not quite! For one, wavelengths don't add up like numbers: if you recall some trigonometry, adding two sinusoidal waves doesn't simply result in another sine wave of their summed wavelength. Plus, even if you did add the wavelengths of green and red, you wouldn't get the wavelength of yellow (545 nm (green) + 650 nm (red) ≠ 575 nm (yellow)).
This brings us to an important concept: lights with different SPDs can have the same color! This phenomenon is called Metamerism: two light sources with different SPDs are perceived as the same color by the human eye. Perhaps, instead of defining color based on the physical properties of light, we should focus more on how the human eye perceives it.
In the early 1800s, Thomas Young proposed a groundbreaking idea: the human eye has three types of receptors, each sensitive to a different range of wavelengths, corresponding roughly to red, green, and blue light. This meant that color perception is determined by how these receptors respond to incoming light. Yes, you've guessed it, these receptors are what we now call cones! Since different light spectra can produce the same cone response, this explains metamerism, why different SPDs can result in the same perceived color. In a few words,
A color is a specific cone response.
As a qualitative example of cone response, imagine looking at blue light. The first and second types of cones would show low sensitivity, while the third type would have high sensitivity. Later, you'll see that we can represent cone responses as a triplet of numbers; for example, a blue light might be expressed quantitatively as (0.1, 0.1, 0.9).
This was purely based on experiments at the time. Scientists in the 1800s had no way to physically analyze the eye's cells. But in the 1960s, technology finally caught up. Using microspectrophotometry, a technique that measures the light absorption of individual cells, researchers confirmed the existence of three distinct types of cones:
This discovery provided definitive proof of trichromatic vision!
If we can determine the full range of possible cone responses, this range corresponds to the complete set of colors humans can perceive, also known as the Human Visual Gamut. Yes, this is the same gamut we mentioned at the start of this article! Remember, we're working toward explaining the chromaticity diagram, which represents the chromaticity attribute of each color of the Human Visual Gamut.
But here's an important point: the CIE 1931 xy-chromaticity diagram, the very diagram we're trying to explain, was created in 1931, before scientists could directly measure cone sensitivity. So how did researchers determine cone responses and construct a representation of the Human Visual Gamut before they had the technology to analyze the eye's cells? To answer this, we need to step back into the 1920s, where two scientists, John Guild and William Wright, working independently, devised a similar way to map human color perception. Their key breakthrough came from an experiment known as the Color Matching Experiment. In the next section, we'll break down this experiment and by the end, you'll understand key concepts such as: primaries, reference white, chromaticity, luminance, color matching functions.
A natural conclusion of the trichromatic vision theory is that all perceivable colors can be created by mixing just three primary colors in varying proportions. The Color Matching Experiment was designed to determine how humans perceive color by recreating a test light, a monochromatic light of varying wavelength, using a mix of three primary monochromatic lights, referred to as the primaries. Here's how it worked:
Before conducting the actual Color Matching Experiment, the primaries need to be calibrated against a white light, called a Reference White. But we can't just pick any light that looks white; otherwise, others wouldn't be able to reproduce the experiment consistently. To ensure standardization, scientists use Standard Illuminants, specific light sources defined by their SPD. These illuminants serve as reference white lights for color experiments. Some commonly used standard illuminants include:
What does calibration mean? Calibration is the process of adjusting the intensity of the primaries so that their combination matches the reference white. For example, at the start of the experiment, let's say we control the amount of light emitted by each primary by setting its power in watts; we might then find that to match the reference white, the red primary needs to be set at 60 W, the green primary at 30 W, and the blue primary at 90 W. It means that after calibration:
These units define a new system for quantifying an amount of each primary. Using this new unit system, we can say that the sum of one unit of each primary matches the reference white.
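To make the bookkeeping concrete, here is a minimal Python sketch of this unit system, using the illustrative calibration values from the example above (60 W, 30 W, 90 W); the numbers and the function name are only for illustration, not real experimental data.

```python
# Illustrative calibration: watts of each primary needed to match the reference white
# (the 60/30/90 W values are the example numbers from the text, not measured data).
CALIBRATION_WATTS = {"R": 60.0, "G": 30.0, "B": 90.0}

def watts_to_units(primary: str, watts: float) -> float:
    """Convert a power setting of a primary into calibrated primary units.
    One unit of a primary is, by definition, the power that primary was set to
    when the three primaries together matched the reference white."""
    return watts / CALIBRATION_WATTS[primary]

# After calibration, one unit of each primary mixed together matches the reference white:
print(watts_to_units("R", 60.0), watts_to_units("G", 30.0), watts_to_units("B", 90.0))
# -> 1.0 1.0 1.0
```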
You can try calibrating the primaries yourself using the animation below. Adjust each primary with the arrows until it matches the reference white. Once you're satisfied, click Calibrate. The three calibrated values Rp, Gp, and Bp will appear. These notations will be used in the explanation later on.
Now that the system is calibrated, what we want to measure is the chromaticity of a color.
In the color matching experiment, what matters is the amount of each primary required to match the test light set at a specific wavelength. At every wavelength, the participant was asked to change the intensity of each primary to match the test light, and the amount of each primary was recorded. Mathematically, we could represent this match as an equation called the color matching equation:
The unusual symbols \(\triangleq, \hat{+}\) indicate that these are not algebraic operations but physical ones, such as the result of mixing lights.
Since there is no unit defined for \(C(\lambda)\), we don't know the amount \(\alpha\). But multiplying each side by the same factor keeps the match: it is equivalent to increasing the intensity of the test light and of each primary by a factor \(s\).
Wright chose to define the equation so that 1 unit of C corresponds to an equation where the sum of the coefficients \(r, g, b\) equals 1.
From this definition, the normalized coefficients \(r, g, b\) can be computed from the measured amounts of primaries \(r', g', b'\).
The equation can be rewritten as
meaning that 1 unit of color C is matched by \(r\) units of Red, \(g\) units of Green and \(b\) units of Blue. Remember that \(r, g, b\) are not the initial amounts found during the experiment; they are normalized values such that \(r+g+b = 1\). These coefficients \((r, g, b)\), computed for each wavelength, are called the chromaticity coefficients.
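As a minimal sketch, assuming the measured amounts \(r', g', b'\) are already expressed in calibrated primary units, the normalization could look like this (the function name is mine, not part of the original experiment):

```python
def chromaticity_coefficients(r_meas: float, g_meas: float, b_meas: float):
    """Normalize the measured primary amounts (r', g', b') so that r + g + b = 1.
    The resulting triplet is the chromaticity coefficients of the test color."""
    total = r_meas + g_meas + b_meas
    return r_meas / total, g_meas / total, b_meas / total

# Example: a match using 0.2, 0.5 and 0.3 units of R, G, B has the same
# chromaticity as one using 2, 5 and 3 units, just at a lower intensity.
print(chromaticity_coefficients(0.2, 0.5, 0.3))  # (0.2, 0.5, 0.3)
print(chromaticity_coefficients(2.0, 5.0, 3.0))  # (0.2, 0.5, 0.3)
```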
In the animation below, you can see these coefficients represented on a 2D graph. By hovering over it, you'll directly observe their relationship with the experiment. I've used the calibrated values \(R_p=60, G_p=30, B_p=90\) from the previous animation. You may notice that some spectral colors cannot be matched simply by mixing the three primaries. The only way to match them is by adding a negative amount of a primary, which is equivalent to adding a positive amount on the test light side, as shown in the animation.
Now is the time to give a proper definition for chromaticity! As we saw earlier, the chromaticity coefficients represent the relative amounts of the primaries that, when summed, match a given color. Since these coefficients are relative, we can multiply them by the same factor and still obtain the same perceived color, but with a different intensity. Here, the word color can be misleading. One might argue that blue and dark blue are different colors, and that's fair! Instead, we can say: blue and dark blue have different intensities but share the same chromaticity. In this context, intensity is a vague term. A more intuitive way to describe it might be luminosity or brightness, but the most precise term used in color science is luminance. We'll explore luminance in the next section, but to conclude with chromaticity, here is the definition:
The chromaticity is only one of the two attributes to describe a color. The other attribute is the luminance. To fully characterize a color, we need both. The chromaticity graph is not enough, we need another graph to take into consideration the luminance.
The luminance describes how bright an object appears. The term luminance can be confusing because it has different meanings in photometry and radiometry:
In this article, luminance refers to the photometric luminance.
To illustrate luminance, let's take a monochromatic red light and a monochromatic green light. If both emit the same power (say, 1 watt), you might expect them to appear equally bright. But when you look at them, the green light appears brighter than the red, even though they emit the same amount of energy! Our eyes perceive some wavelengths as brighter than others, even when their physical energy output is identical. Therefore we can only compare the brightness of colors relative to each other.
Imagine we have a monochromatic light source that emits exactly 1 watt of power at any given wavelength. Now, if we ask people to compare the brightness of different wavelengths, they will likely report that the brightest color they perceive is around 570 nm (yellow-green light). Since brightness is a relative measure, we can assign a value of 1 to this maximum perceived brightness and scale all other brightness perceptions accordingly. By gathering these observations across the entire visible spectrum, we obtain a curve known as the photopic luminosity function V(λ).
This unit-less curve, derived from brightness-matching experiments with human observers, illustrates how our eyes perceive brightness under well-lit conditions. So, how does luminance relate to the photopic curve?
Luminance is computed based on the photopic function V(λ). Therefore, it remains a subjective measure, meaning it reflects human visual perception rather than an absolute physical quantity. However, unlike raw brightness perception, luminance has a unit and can be measured. Given a light source having a Spectral Power Distribution (SPD) \(\Phi(\lambda)\), its luminance is defined as:
To simplify notation, I often omit the constant $C_l$ and express luminance in watts instead. The purpose of luminance is to match perceived brightness rather than actual emitted power. For example: A red light and a green light with equal luminance will appear equally bright, even though they may emit different amounts of physical power. This makes luminance an essential metric in color science, display technology, and lighting design, where human perception matters more than raw energy output.
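Numerically, this definition boils down to a weighted integral of the SPD. Here is a minimal sketch, assuming the SPD and \(V(\lambda)\) are sampled on the same wavelength grid; the bell-shaped stand-in for \(V(\lambda)\) is only a placeholder (in practice you would load the tabulated CIE photopic curve), and \(C_l = 683\) lm/W is the usual constant.

```python
import numpy as np

wavelengths = np.arange(380.0, 751.0, 1.0)   # nm, 1 nm steps
d_lambda = wavelengths[1] - wavelengths[0]

# Placeholder photopic curve: a rough bell centered near 555 nm.
# Replace with the tabulated CIE V(lambda) for real computations.
V = np.exp(-0.5 * ((wavelengths - 555.0) / 45.0) ** 2)

C_l = 683.0                                  # lm/W

def luminance(phi: np.ndarray) -> float:
    """Luminance of a light with SPD phi (W/nm on the grid), following the
    article's definition: C_l * integral of phi(lambda) * V(lambda) d(lambda)."""
    return C_l * np.sum(phi * V) * d_lambda

# Two SPDs with roughly the same total power but different spectra:
red   = np.where(np.abs(wavelengths - 650.0) < 5.0, 0.1, 0.0)   # ~1 W near 650 nm
green = np.where(np.abs(wavelengths - 545.0) < 5.0, 0.1, 0.0)   # ~1 W near 545 nm
print(luminance(green) > luminance(red))     # True: green looks brighter
```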
Now that we understand what luminance is, we can combine it with chromaticity to fully describe a color! In the next section, we'll modify the trichromatic graph to take into account the luminance information of a color. The result is three functions called the Color Matching Functions.
Remember the 2D graph representing the trichromatic coefficients? It allowed us to describe colors independently of luminance, but now, with our understanding of luminance, we can enhance this representation to fully account for both chromaticity and luminance. To achieve this, we introduce in this section three key functions known as the Color Matching Functions (CMFs).
In the Color Matching Experiment, the goal was to match not only the chromaticity of a test color but also its luminance. This means that when mixing the primary lights, their combined luminance had to be equal to the luminance of the test color. When we write
using the chromaticity coefficients \(r(\lambda),g(\lambda),b(\lambda)\), this equation only describes the relative amounts of chromaticity; it tells us nothing about luminance. The resulting color C has an unknown luminance. We are now looking for an equation that involves the color's luminance. To determine the luminance of a color obtained by mixing three primaries, we use Grassmann's Law, which states that: the luminance of a color is the sum of the luminances of the individual lights that compose it. In other words, when mixing three primary lights, the total luminance of the result is simply the sum of their individual luminances. Let's denote by \(L_r, L_g, L_b\) the luminance of one unit of each primary \(\mathcal{R},\mathcal{G},\mathcal{B}\) respectively. Let's choose a specific spectral color C of wavelength \(\lambda_C\), whose chromaticity coefficients are \(r, g, b\). Mixing the primaries according to these chromaticity coefficients would result in an unknown luminance \(L_c\). Mathematically,
Now let's decide that we want the luminance of a color C that emits 1 watt of light (\(L_C^1\)). We then have to scale the amount of each primary by the same factor $v$ to keep the chromaticity of the color.
This equation tells us that to match the color C with a luminance of 1 watt, you need to mix \(vr\) units of \(\mathcal{R}\), $vg$ of \(\mathcal{G}\) and $vb$ of \(\mathcal{B}\). So far, here are the unknowns: \(v, L_r, L_g, L_b\). We actually know the luminance \(L_C^1\): we said it is the luminance given by 1 watt of C. Remembering the luminance equation
If we omit the constant \(C_l\), \(V(\lambda)\) is the known photopic curve, and since for a monochromatic light $\lambda_C$ emitting 1 watt, its SPD is just a peak at \(\lambda_C\) and zero elsewhere, we can write
But what are the luminances of one unit of Red, Green, and Blue: \(L_r, L_g, L_b\)? Actually, we don't need the absolute luminance values; only their relative luminances are needed here. Here is what I mean. Without modifying the problem, we could have said that the factor $v$ could instead be written as \(v = k\frac{1}{L_r}\). Equation \(\eqref{eq:photopic-equation}\) becomes
This change of variable illustrates that only the relative luminances are necessary. These coefficients \(L^r, L^g, L^b\) are called the luminance coefficients (notice the superscript instead of the subscript). Injecting them into \(\eqref{eq:photopic-equation}\) gives this luminance equation involving the relative luminances.
But we are still missing \(L^g\) and \(L^b\)! Wright measured that one unit of the Green primary was about 4.4 times brighter than one unit of the Red primary, and that one unit of the Blue primary was only about 0.04 times as bright as one unit of Red. The exact values given by the CIE are the following coefficients:
If you are curious about how to find those coefficients, here is the procedure.
Let's generalize equation (\(\ref{eq:specific-lum-equation}\)) for any spectral color \(\lambda\),
which tells us that to match the luminance of 1 watt of a color whose chromaticity coefficients are \(r(\lambda), g(\lambda), b(\lambda)\), you need
Now you have it, the three Color Matching Functions \(\bar{r}(\lambda), \bar{g}(\lambda), \bar{b}(\lambda)\). To give a proper definition
In the animation below, you can hover over the color matching functions graph to see the mathematical relationship with the chromaticity coefficients and the photopic luminosity function \(V(\lambda)\).
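If you want to reproduce that relationship yourself, here is a minimal sketch of how the CMFs can be assembled from the chromaticity coefficients and \(V(\lambda)\), assuming all curves are sampled on the same wavelength grid; the luminance coefficients below are the commonly quoted CIE values, and the function name is mine.

```python
import numpy as np

# Relative luminances of one unit of each CIE RGB primary (red as the reference).
L_R, L_G, L_B = 1.0, 4.5907, 0.0601

def color_matching_functions(r, g, b, V):
    """Build r_bar, g_bar, b_bar from the chromaticity coefficients
    r(lambda), g(lambda), b(lambda) (arrays summing to 1 at every wavelength)
    and the photopic curve V(lambda) sampled on the same grid."""
    v = V / (L_R * r + L_G * g + L_B * b)   # per-wavelength scaling factor
    return v * r, v * g, v * b
```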
In the next section, we'll see how we can use those functions to compute tristimulus values, the three values used to describe any color!
Why are the CMFs important? Remember, at the beginning of this article, I suggested the idea that to create a color, we could just try to reproduce its SPD. I said that it was not realistic to take every monochromatic light with a different intensity to try to reproduce the SPD. But then we discovered the concept of metamerism: only three monochromatic lights are actually needed to reproduce any color. We just didn't know how to mix them together. Thanks to the CMFs, we now know the amount of each primary required to match a specific SPD. To reproduce any color C (not just a spectral color) having an SPD \(\Phi(\lambda)\), the CMFs tell us that we need these amounts of each primary:
$R, G, B$ are called the tristimulus values of C. They are the amounts of Red, Green and Blue primary respectively required to match the color C in chromaticity and luminance. Note that the mix $R, G, B$ does not reproduce the SPD of C; it creates a light whose color is perceived the same as the one with SPD \(\Phi(\lambda)\). In the animation below, you can select a color using the dropdown menu. You will see how its SPD is multiplied by the CMFs. The area under each resulting curve represents the tristimulus values.
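In code, "multiplying the SPD by the CMFs and taking the area under each curve" is just a numerical integration. Here is a minimal sketch, assuming all arrays share the same uniform wavelength grid (the function name is mine):

```python
import numpy as np

def tristimulus_values(wavelengths, phi, r_bar, g_bar, b_bar):
    """Tristimulus values of a light with SPD phi(lambda):
    R = integral of phi * r_bar (and similarly for G and B)."""
    d_lambda = wavelengths[1] - wavelengths[0]   # assumes a uniform grid
    R = np.sum(phi * r_bar) * d_lambda
    G = np.sum(phi * g_bar) * d_lambda
    B = np.sum(phi * b_bar) * d_lambda
    return R, G, B
```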
With the tristimulus values, we have found a way to represent any color mathematically using three numbers. Remember that we said that a color is a specific cone response, and that if we could find all the cone responses, they would compose all the colors a human can perceive, i.e. the Human Visual Gamut. The R, G, B tristimulus values are those cone responses! More precisely, they are Guild and Wright's attempt to represent the cone responses. In another chapter, we'll compare the real cone responses (L, M, S), found later by scientists, to these RGB tristimulus values. Since the tristimulus values are 3 numbers, we can use them as dimensions on a 3D graph! Each axis will represent the amount of one primary. If you forgot what one unit of the R primary is, it is the amount of the Red primary found during calibration! Our unit system is the one decided during calibration.
Let's start by plotting the tristimulus values of the spectral colors corresponding to monochromatic lights that emit one watt! Why? Because we already have those tristimulus values: they are exactly the CMF values! Remember that the CMF values are the amounts of primary needed to match a monochromatic light emitting one watt. Mathematically, you could express the SPD of a monochromatic light as a Dirac delta function \(\Phi(\lambda) = P_0\delta(\lambda-\lambda_0)\), with \(P_0 = 1\) watt. Therefore the tristimulus values of a monochromatic light of wavelength \(\lambda_m\) emitting 1 watt are given by
Fine, now we have the spectral colors, but what about the other colors? Remember when we talked about polychromatic light: you can combine monochromatic lights to create other colors. Mathematically, we can also express this in the 3D graph. Let's take the colors red and green. If you mix half of each, you do the same with their tristimulus values and you get a resulting color C. Name it as you want (maybe yellow?). What your brain perceives is a linear combination of both spectral colors. On the graph you can notice that it almost coincides with the yellow spectral color. Yes, metamerism! In your brain, the response of the cones will be similar to the response given by monochromatic yellow light, so mixing red and green gives you the same response. But wait, R, G, B are not the brain's response, right? You're right, but remember: following Thomas Young's theory, Guild and Wright assumed that each cone was more sensitive to one primary color, so they used the primaries as a way to simulate the cone responses. And later, when scientists were able to actually measure the cone responses (we'll talk later about LMS), the 3D graph turned out to look very similar!
So by combining the monochromatic lights with different weights, we are able to make the brain perceive other colors, and we can plot them in the 3D graph. There are infinitely many possible combinations, but some of them will give the same color! Taking two spectral colors and generating all the colors in between already gives us many colors, which, as we can visualize in the animation, form an "envelope" of the Human Visual Gamut.
Note that this envelope of colors generated from the spectral colors only contains colors produced by lights emitting 1 watt! If you remember the general color matching equation using the tristimulus values
If the luminance corresponds to a light emitting 1 watt of power, we can use the CMF values as tristimulus values and write
and for any other luminance $l$, you can simply scale each coefficient by the same factor $s$ to keep the same chromaticity
Varying $s$ will vary the luminance of the color but keep the same chromaticity. Graphically, changing the luminance while keeping the chromaticity boils down to moving the tristimulus values along the isochromatic line, a line of constant chromaticity.
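In other words, scaling the three tristimulus values by the same factor only slides the color along its isochromatic line. A tiny sketch of that idea:

```python
def scale_luminance(R, G, B, s):
    """Scale the tristimulus values by the same factor s: the chromaticity
    (the relative proportions of R, G, B) is unchanged, only the luminance changes."""
    return s * R, s * G, s * B

# (0.2, 0.5, 0.1) and (0.6, 1.5, 0.3) lie on the same isochromatic line.
print(scale_luminance(0.2, 0.5, 0.1, 3.0))
```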
Since you can increase the luminance indefinitely, there are actually no boundaries to the Human Visual Gamut in this representation. The cone-shaped envelope we see is only a representation of equal-energy colors. However, the chromaticity of the Human Visual Gamut does have boundaries. You'll better visualize those boundaries in the next section, but we can already understand that, because of these well-defined boundaries, we use the chromaticity diagram as a representation of the Human Visual Gamut.
So we could say that this 3D volume is one sample of the Human Visual Gamut in a 3D space where each axis represents the amount of a primary light composing a color. Remember that the data I used are from the CIE. The actual primaries used for this representation are the Red at 700 nm, Green at 546.1 nm and Blue at 435.8 nm, and the sum of one unit of each primary matches the reference white E. These primaries and this reference white define the 3D color space called the CIE 1931 RGB color space (which is still neither the sRGB nor the AdobeRGB color space!).
Here is a quick recap of the different steps of the color matching experiment. By asking humans to match the spectral colors, we ended up being able to represent colors by three tristimulus values in a 3D color space called the CIE 1931 RGB color space.
The next question is: how is the CIE 1931 RGB color space related to the horseshoe-shaped chromaticity diagram? It is a 2D graph, not a 3D one, and the axes are labeled x and y, not R, G or B!
So far we've understood that a color can be represented by its chromaticity coefficients and its luminance, and that all the colors a human can perceive can be represented in a 3D volume with 3 axes R, G and B. Now it's useful to recall the relationship between the different concepts through their mathematical notation.
A color C represented by its SPD \(\Phi(\lambda)\) is fully matched, in terms of chromaticity and luminance, using the tristimulus values:
And from these tristimulus values, we can deduce the chromaticity coefficients, because they are the relative amounts of the primaries needed to produce a certain amount of this color C. In addition, they are defined such that their sum equals 1.
If we represent the chromaticity coefficients in the RGB space, the points \((r, g, b)\) lie on the plane R + G + B = 1, which is equivalent to projecting all the 3D visual gamut points onto this plane, along their lines of constant chromaticity. By omitting \(b\), which can be recovered from \(r\) and \(g\), you can visualize this plane in a 2D rg-graph. This animation shows you the different projections. To make the animation clearer, only the spectral colors are shown.
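Here is a minimal sketch of that projection (the function name is mine):

```python
def rg_chromaticity(R, G, B):
    """Project a tristimulus point onto the plane R + G + B = 1 along its
    isochromatic line, keeping only the first two coordinates."""
    total = R + G + B
    r, g = R / total, G / total
    return r, g                     # b = 1 - r - g

# Two points on the same isochromatic line project onto the same (r, g):
print(rg_chromaticity(0.2, 0.5, 0.1))   # (0.25, 0.625)
print(rg_chromaticity(0.6, 1.5, 0.3))   # (0.25, 0.625)
```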
And there you have it: the chromaticity diagram. But it's still different from the one in the introduction! Be patient, we are almost there. This chromaticity diagram is expressed in the rg-plane, while the one we are looking for is in the xy-plane. The xy-chromaticity diagram is actually the same as this rg-chromaticity diagram but derived from another color space created by the CIE, the CIE 1931 XYZ color space. The latter is a simple linear transformation of the CIE 1931 RGB color space. Why not use the CIE 1931 RGB? Because the CIE wanted a standard color space where the colors could be expressed only with positive values. On top of that, they wanted one of the dimensions to represent the luminance.
If you recall a bit of linear algebra, transforming the coordinates of a point from one coordinate system to another requires a transformation matrix that maps the axes of one system onto those of another.
In the context of color, the tristimulus values (R, G, B) in the RGB color space, defined by the three primary color axes $(\mathcal{R}, \mathcal{G}, \mathcal{B})$, can be transformed into another coordinate system defined by a different set of axes (\(\mathcal{X},\mathcal{Y},\mathcal{Z}\)) with coordinates (X, Y, Z). Using matrix notation:
How was this matrix determined, and how was the new coordinate system (X, Y, Z) chosen? You'll find all the details in this paper.
The key takeaway is that in this new system, all tristimulus values are positive, and the Y component represents the luminance of the color. In this XYZ space defined by the primaries \(\mathcal{X}, \mathcal{Y}, \mathcal{Z}\), the color matching equation of a color with coordinates \((X, Y, Z)\) is now expressed as:
Just like in the RGB space, we can define the chromaticity coefficients in this new space as the relative proportions of the primaries, chosen such that their sum equals 1.
And here they are: the chromaticity coefficients that make up our famous xy-chromaticity diagram, the very one we've been trying to understand.
In the animation below, you can see how the chromaticity diagram is constructed, starting from the RGB tristimulus values of the spectral colors. These tristimulus values are transformed into the XYZ color space using the matrix \(M\). Then, by projecting the resulting points onto the plane \(X + Y + Z = 1\), we obtain the chromaticity coefficients coordinates \((x, y, z)\) which still exist in three dimensions. However, since \(z\) is dependent on \(x\) and \(y\), we can reduce the dimensionality by omitting the \(z\) component. This gives us a 2D representation on the XY-plane: the famous horseshoe-shaped chromaticity diagram.
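Here is a minimal sketch of that construction for a single color, assuming you already have its CIE RGB tristimulus values. The matrix below is the commonly quoted CIE 1931 RGB→XYZ matrix (rounded); treat it as illustrative rather than authoritative.

```python
import numpy as np

# Commonly quoted CIE 1931 RGB -> XYZ transformation (rounded values).
M = np.array([
    [2.7689, 1.7517, 1.1302],
    [1.0000, 4.5907, 0.0601],   # the Y row: Y is the luminance of the color
    [0.0000, 0.0565, 5.5943],
])

def rgb_to_xy(rgb):
    """CIE RGB tristimulus values -> (x, y) chromaticity coordinates."""
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    total = X + Y + Z               # projection onto the plane X + Y + Z = 1
    return X / total, Y / total     # z = 1 - x - y is dropped

# Equal amounts of the three primaries match the reference white E,
# whose chromaticity is close to (1/3, 1/3):
print(rgb_to_xy([1.0, 1.0, 1.0]))
```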
The projection of the spectral colors' RGB tristimulus values traces the outer boundary of the chromaticity diagram and is known as the spectral locus. If we had included more points from within the 3D color volume, the diagram would begin to fill in, revealing more colors within the boundaries. But we can also work directly in the 2D chromaticity diagram itself. To add more chromaticities, we need to generate additional colors. As we've seen earlier when filling in the volume between two colors, mixing colors to produce a new one involves summing their tristimulus values. For example, if we mix two colors \(C_1\) and \(C_2\), the resulting color \(C_r\) can be described by the following color matching equation:
During this color mixing process, what happens to the chromaticity coefficients? Do they simply add up as well? Let's focus on the \(x\) chromaticity coordinate to explore this. By definition
This equation shows the relationship between chromaticities: when mixing two colors, the resulting chromaticity is a weighted sum of their respective chromaticities.
Now, let's return to our chromaticity diagram. Remember, it's a projection onto the plane \(X + Y + Z = 1\). To represent the chromaticity of \(C_r\) correctly within this projection, we must apply the constraint \(X_r+Y_r+Z_r = L_1 + L_2 = 1\). With this constraint, we can rewrite equation \(\eqref{eq:xr}\) accordingly and perform the same derivation for the \(y\) coordinate.
According to these equations, if you take two colors and mix them, the line connecting their respective chromaticity represents the chromaticities of the resulting colors for different mixing ratios. Here's an animation illustrating this concept: you can select two colors on the spectral locus and hover over the connecting line to see the resulting color. The position of the hovered point along the line determines the contribution of each spectral color to the mix: the closer it is to one end, the more that color influences the result. In other words, the distance from each endpoint reflects its weight in the resulting color.
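A minimal sketch of that weighted-sum rule, assuming you know each color's chromaticity \((x_i, y_i)\) and its weight \(L_i = X_i + Y_i + Z_i\) (the notation follows the derivation above; the function name is mine):

```python
def mix_chromaticities(x1, y1, L1, x2, y2, L2):
    """Chromaticity of the mix of two colors, where L_i = X_i + Y_i + Z_i.
    The result lies on the segment joining (x1, y1) and (x2, y2)."""
    w = L1 + L2
    return (L1 * x1 + L2 * x2) / w, (L1 * y1 + L2 * y2) / w

# Equal weights: the mix lands at the midpoint of the connecting line.
print(mix_chromaticities(0.64, 0.33, 1.0, 0.15, 0.06, 1.0))  # (0.395, 0.195)
```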
We can see that the chromaticity diagram is a powerful tool for understanding how colors can be mixed to create new ones. We also observe that different mixtures can produce the same chromaticity. And this principle holds not just for two colors, but for any number of them. In the next animation, you can experiment with mixing three colors. The triangle formed by the three selected points represents the full range of chromaticities you can create from those colors. If you choose a point inside the triangle, it shows a chromaticity that results from mixing the three colors in proportions related to their respective distances from that point.
Does mixing three colors remind you of something? Yes, the color matching experiment, where we mixed the three primaries: red (700 nm), green (546.1 nm), and blue (435.8 nm). We can visualize these three primaries on the chromaticity diagram and see the triangle they form. As we've just learned, this triangle includes all the chromaticities that can be produced by mixing those three primaries. In other words, it represents the gamut of the CIE 1931 RGB color space.
Now, imagine we chose a different set of primaries: the triangle would change, defining a different color space. These new primaries don't necessarily need to lie exactly on the spectral locus. For example, if you choose primaries with chromaticity coordinates (x, y) = (0.64, 0.33) for red, (0.3, 0.6) for green and (0.15, 0.06) for blue, you get the gamut of the sRGB color space. Similarly, with primaries (x, y) = (0.64, 0.33) for red, (0.21, 0.71) for green and (0.15, 0.06) for blue, you get the gamut of AdobeRGB.
Now we truly understand what sRGB and AdobeRGB mean: they are color spaces defined by three specific primaries, and therefore define their own gamuts. This is why you often hear that “AdobeRGB has a wider gamut than sRGB”, it is because the triangle representing AdobeRGB covers a larger area on the chromaticity diagram than the one for sRGB. And if your monitor is said to cover only 95% of AdobeRGB, it means it can reproduce 95% of the colors in the AdobeRGB space, but not the remaining 5%.
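As a rough illustration of why AdobeRGB is called "wider", here is a sketch that compares the areas of the two gamut triangles built from the primaries quoted above. Note that a monitor's "xx% AdobeRGB" figure refers to the overlap between the monitor's own gamut and AdobeRGB, not this simple area ratio.

```python
def triangle_area(p1, p2, p3):
    """Area of a gamut triangle in the xy chromaticity plane (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

srgb_primaries  = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
adobe_primaries = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]

print(triangle_area(*adobe_primaries) / triangle_area(*srgb_primaries))
# > 1: the AdobeRGB triangle covers a larger area of the chromaticity diagram.
```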
Finally, if you close the spectral locus and colorize every point within that shape, you end up with the typical colored representation of the chromaticity diagram. Note that the straight line that closes the spectral locus is called the line of purples; it's not part of the spectral locus because these colors don't correspond to any single wavelength.
A quick note about the colorization: the colors used on the chromaticity diagram can actually be a bit misleading.
First, as we've seen, each point on the diagram represents a chromaticity (x, y), but it doesn't specify the luminance. In theory, any color with the same chromaticity but different luminance could be used. Typically, the color shown is the one with the maximum luminance possible for your display.
Secondly, while the diagram represents the full range of human-visible chromaticities, no existing display can reproduce all these colors. So what colors are shown for those chromaticities that can't actually be displayed? Typically, colors that can't be displayed are replaced by nearby reproducible ones, so the diagram remains visually continuous even if it's not perfectly accurate.
In fact, there's no single way to colorize the chromaticity diagram. If you're curious, I explain how I decided to do it in this article.
Remember, a color can be described by two key attributes: chromaticity and luminance. Chromaticity is represented by the coordinates (x, y) on the chromaticity diagram, while luminance corresponds to the Y component in the XYZ color space. In the animation below, you can explore how any color can be created by selecting a chromaticity point on the diagram and adjusting the luminance via the Y value. As you might expect, changing the luminance while keeping the chromaticity fixed moves you along an isochromatic line, which is illustrated in the 3D graph.