**Hey there 👋** I would like to quickly plug a publication I am working on to help teams build better AI-enabled products. If you are building a software product and want to integrate LLMs or make sure you're ready for going to production, make sure to check it out!

When averaging values, you sometimes want to penalize inequality or variation, for example when you value consistently moderate values more than extreme differences. The arithmetic mean only depends on the sum of the values, so it yields the same result whether the values are all equal or differ wildly.

To illustrate the differences between the arithmetic mean and other measures, let's think of three data sets we'll test against:

```
a <- c(0.1, 0.5, 0.9)
b <- c(0.4, 0.5, 0.6)
c <- c(0.5, 0.5, 0.5)
```

The first case shows a high dispersion of our values, whereas the second only has modest differences, and the third case has equal values.

When computing the arithmetic mean on those sets, we get the same value of `0.5` for each, regardless of the dispersion:

```
mean(a) # 0.5
mean(b) # 0.5
mean(c) # 0.5
```

This may be helpful in a lot of cases, but when we value equality between our values, it doesn't tell the full story. To emphasize the importance of equality between values in our data sets, let's take a look at the geometric mean.

Calculated as the nth root of the product of n numbers, it naturally penalizes dispersion: the more the values spread out, the lower the result.

Calculating the geometric mean yields quite different results compared to the arithmetic mean:

```
prod(a)^(1/length(a)) # 0.3556893
prod(b)^(1/length(b)) # 0.4932424
prod(c)^(1/length(c)) # 0.5
```
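In practice, the geometric mean is often computed via logarithms instead of a raw product, which avoids overflow or underflow for long vectors. A minimal sketch (the helper name `geo_mean` is my own):

```
# Geometric mean via logarithms: exp of the mean log value.
# More robust than prod(x)^(1/length(x)) for long vectors,
# where the running product can overflow or underflow.
# Assumes all values are strictly positive (log is undefined otherwise).
geo_mean <- function(x) exp(mean(log(x)))

geo_mean(c(0.1, 0.5, 0.9)) # 0.3556893
geo_mean(c(0.4, 0.5, 0.6)) # 0.4932424
geo_mean(c(0.5, 0.5, 0.5)) # 0.5
```

Both formulations are mathematically identical; the log form simply trades one `prod` for a `mean` over `log` values.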

As we can see, the closer the values are to each other, the closer the geometric mean gets to their common value. The first data set yields a noticeably lower result because its values vary strongly.

Of course, it's important to think about which data you run this on. Does it make sense to penalize inequality of values? Does a lower average value make sense for data sets with high dispersion?

To give more background information to guide this decision, let us explore some applications of the geometric mean.

To measure a country's level of human development and make it comparable across countries, the Human Development Index (HDI) has aggregated its three normalized dimension indices using the geometric mean since 2010, replacing the arithmetic mean used before. The reason for this change was that the geometric mean directly reflects poor performance in any measured dimension.

Another helpful property of the geometric mean is that low achievement in one dimension is not linearly compensated for by a higher achievement in another dimension. This means that to compensate for a decrease of 1% in one dimension, a country would have to raise another dimension by more than 1% to yield the same outcome.
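A quick numeric sketch of this trade-off, with toy numbers of my own choosing: start from two equal dimensions, lower one by 1%, and see what it takes to restore the original geometric mean.

```
# Toy example: two equal dimensions with geometric mean 0.5
gm <- function(x) prod(x)^(1/length(x))

# Lower one dimension by 1% and raise the other by 1%:
# the arithmetic mean is fully compensated, the geometric mean is not.
mean(c(0.5 * 0.99, 0.5 * 1.01)) # 0.5
gm(c(0.5 * 0.99, 0.5 * 1.01))   # 0.499975

# To restore a geometric mean of 0.5, the second dimension must
# grow by 1/0.99 - 1, i.e. about 1.0101%, slightly more than 1%.
gm(c(0.5 * 0.99, 0.5 / 0.99))   # 0.5
```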

Another area of use is describing proportional growth: The geometric mean of growth rates is known as the compound annual growth rate (CAGR) in business.
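As a sketch with made-up revenue figures: CAGR is the geometric mean of the year-over-year growth factors, minus one.

```
# Hypothetical revenue figures over four years (made-up numbers)
revenue <- c(100, 130, 125, 170)

# Year-over-year growth factors: 1.30, ~0.9615, 1.36
growth <- revenue[-1] / revenue[-length(revenue)]

# CAGR is the geometric mean of the growth factors, minus 1
cagr <- prod(growth)^(1/length(growth)) - 1
cagr # ~0.1935, i.e. about 19.3% per year

# Sanity check: compounding the CAGR reproduces the final value
100 * (1 + cagr)^3 # 170
```

Averaging the growth rates arithmetically would overstate the actual growth, which is exactly the dispersion-penalizing behavior discussed above.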
