
Why I don't like reduce

— JavaScript, TypeScript, Array, reduce — 3 min read

(Cover image: a swiss knife)

The popular eslint-plugin-unicorn recently added a no-array-reduce rule, and it is set to error by default. The argument is that Array.reduce will likely result in code that is hard to reason about, and can be replaced with other methods in most cases (read this Twitter thread for a lengthy discussion if you like).

I have to say: I wholeheartedly agree, and I have personally turned on that rule in some projects.

What is wrong with reduce?

For me, there are many reasons why I rarely like to see reduce when reviewing code. First and foremost, it is hard to grasp. I believe one of the reasons for this is that reduce can do way too much.

  • Need to sum up values?
  • Need to transform Arrays into Objects?
  • Need to build a string?

Array.reduce can do it all.
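To illustrate - a contrived sketch with made-up values - the very same method covers all three of those bullet points:

reduce-does-it-all
const numbers = [1, 2, 3]

// sum up values
numbers.reduce((sum, value) => sum + value, 0) // 6

// transform an Array into an Object
numbers.reduce((result, value) => {
  result[value] = value * 2
  return result
}, {} as Record<number, number>) // { '1': 2, '2': 4, '3': 6 }

// build a string
numbers.reduce((result, value) => result + value, '') // '123'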

While it might sound nice to have such a tool at your disposal, when looking at something implemented with reduce, you don't immediately see what that code is for.

What also adds to the confusion for me is that you cannot read reduce from left to right, top to bottom - at least not in JavaScript. Whenever I see reduce, I usually skim to the very end to get ahold of the initial value, because it will tell me what this reduce is trying to do. Then, I can go back to the beginning and try to understand it.
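Here is a contrived sketch of that reading order problem: the callback alone doesn't tell you whether a string or an Array is being built, because concat exists on both - only the initial value at the very end does.

reading-order
const values = ['a', 'b', 'c']

// same callback, different result - it all depends on the last line
values.reduce((accumulator, value) => accumulator.concat(value), '')
// 'abc'

values.reduce(
  (accumulator, value) => accumulator.concat(value),
  [] as Array<string>
)
// ['a', 'b', 'c']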

This is not the case in other languages, for example Scala, where the initial value is the first parameter:

sum-a-list
val numbers = List(1, 2, 3)

numbers.fold(0)(_ + _) // 6

Try me in scastie

Reduce is so mighty, you can implement all the other Array functions you are using on a daily basis with it:

map-with-reduce
const mapWithReduce = (array, callback) =>
  array.reduce((accumulator, currentValue, index) => {
    accumulator[index] = callback(currentValue, index, array)
    return accumulator
  }, [])

mapWithReduce([1, 2, 3], (value) => value * 2) // 2, 4, 6

I have even seen people re-implement join with reduce:

join-with-reduce
const joinWithReduce = (array, delimiter) =>
  array.reduce(
    (accumulator, currentValue, index) =>
      accumulator +
      currentValue +
      (index === array.length - 1 ? '' : delimiter),
    ''
  )

joinWithReduce(['foo', 'bar', 'baz'], ';') // foo;bar;baz

The question is: why would you? In almost all cases, there are methods that:

  • are not as powerful, with a limited scope
  • have a clear API
  • have a good name, so you know what it is doing

Array.join is a very good example of such a limited method. Everyone understands what is going on when we read:

values.join(';')

Compare that to the implementation above - I think we can agree that the simpler version is preferable.

When is it okay to reduce?

For me, (mostly) only when implementing reusable util methods. It usually doesn't matter how they are implemented. You give them a good name, a clear purpose, write some tests and that's it.

Most usages of reduce I have been reviewing lately fall into one of three categories:

1. Transforming Arrays to Objects

Yes, there is no easy native way to do that, and not even popular util libraries like lodash have a good way of achieving it (keyBy is okay, but doesn't transform values).

In one project, we frequently had the need for such transformations, so we made our own util for it. The implementation is something like this:

to-object
export const toObject = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  array.reduce((result, element, index) => {
    const [key, value] = iteratee(element, index, array)
    result[key] = value
    return result
  }, {} as Record<K, V>)

toObject(['foo', 'bar', 'baz'], (element) => [
  'key-' + element,
  'value-' + element,
])

Good name, strong types, ease of use. The rest is implementation detail (including the type assertion for the initial value).

2. Grouping Arrays

Again, pick a util library (lodash, ramda, remeda, ...) or write your own util. Encapsulate that complex reduce so that you don't have to re-implement it every time you need it.
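For illustration, here is a minimal sketch of what such a util could look like (groupBy is my own naming here, not taken from any specific library):

group-by
export const groupBy = <T, K extends string | number | symbol>(
  array: ReadonlyArray<T>,
  getKey: (element: T) => K
): Record<K, Array<T>> =>
  array.reduce((result, element) => {
    const key = getKey(element)
    // create the bucket on first encounter, then push into it
    if (!result[key]) {
      result[key] = []
    }
    result[key].push(element)
    return result
  }, {} as Record<K, Array<T>>)

groupBy([1, 2, 3, 4], (n) => (n % 2 === 0 ? 'even' : 'odd'))
// { odd: [1, 3], even: [2, 4] }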

3. Doing many things at once

Iterating over big lists many times can be costly, so people often fall back to reduce because it can do everything in one go.

The truth is: usually, it doesn't matter. Even when working with very large lists (tens of thousands of entries), I have found that performance is rarely negatively impacted as long as you keep iterations linear.

Whether your toObject util does one iteration with a reduce or two iterations with a map followed by Object.fromEntries is irrelevant, unless you have measured it and found it to be a bottleneck.
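To make that concrete, the toObject from above could just as well be written with two iterations - a sketch, assuming Object.fromEntries is available in your target environments:

to-object-with-fromEntries
export const toObject = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  // first iteration builds the entries, fromEntries does the second one
  Object.fromEntries(array.map(iteratee)) as Record<K, V>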

Reduce performance pitfalls

Speaking of performance and linear iterations, I've learned the hard way not to do this when working with reduce:

immutability-🎉
export const toObject = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  array.reduce((result, element, index) => {
    const [key, value] = iteratee(element, index, array)
    return {
      ...result,
      [key]: value,
    }
  }, {} as Record<K, V>)

Why should I be dirty and mutate the result, when I can be super fancy instead and create a new object every time? 🤔🤦‍♂️

Here is a perf analysis of how the two compare when run over an Array with 10k entries:

1,700 operations per second vs. 47 operations per second.

Yes, it's that slow, because it has to re-create an ever-growing object with every iteration. The cost grows quadratically with the number of entries in the array. Mutation is not the root of all evil, and it does not have to be avoided at all costs. If the scope is small and the intent is clear - mutate away. 🚀
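If you want to get a feeling for that difference yourself, here is a rough sketch of such a measurement (absolute numbers will of course vary per machine and runtime):

perf-comparison
const entries = Array.from({ length: 10_000 }, (_, index) => `item-${index}`)

console.time('mutate')
entries.reduce((result, element, index) => {
  result[element] = index
  return result
}, {} as Record<string, number>)
console.timeEnd('mutate')

console.time('spread')
entries.reduce(
  (result, element, index) => ({ ...result, [element]: index }),
  {} as Record<string, number>
)
console.timeEnd('spread')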

But still - avoid reduce


Do you like reduce or not? Let me know in the comments below. ⬇️
