Why I don't like reduce
The popular eslint-plugin-unicorn recently added a no-array-reduce rule, and it is enabled as an error by default. The argument is that Array.reduce will likely result in code that is hard to reason about, and that it can be replaced with other methods in most cases (read this Twitter thread for a lengthy discussion if you like).
I have to say: I wholeheartedly agree, and I have personally turned on that rule in some projects.
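For reference, here is roughly how the rule can be turned on - a minimal .eslintrc.js sketch, to be merged into whatever config you already have:

// .eslintrc.js - a minimal sketch
module.exports = {
  plugins: ['unicorn'],
  rules: {
    'unicorn/no-array-reduce': 'error',
  },
}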
For me, there are many reasons why I rarely like to see reduce when reviewing code. First and foremost, it is hard to grasp. I believe one of the reasons for this is that reduce can do way too much.
- Need to sum up values?
- Need to transform Arrays into Objects?
- Need to build a string?
Array.reduce can do it all.
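To illustrate, here is the same method doing three completely unrelated jobs (a quick sketch):

// summing up values
const total = [1, 2, 3].reduce((sum, value) => sum + value, 0) // 6

// transforming an Array into an Object
const lookup = ['foo', 'bar'].reduce<Record<string, number>>(
  (accumulator, element, index) => {
    accumulator[element] = index
    return accumulator
  },
  {}
) // { foo: 0, bar: 1 }

// building a string
const joined = ['foo', 'bar'].reduce(
  (accumulator, element) => accumulator + element,
  ''
) // 'foobar'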
While it might sound nice to have such a tool at your disposal, when looking at something implemented with reduce, you don’t immediately see what that code is for.
What also adds to the confusion for me is that you cannot read reduce from left to right, top to bottom - at least not in JavaScript. Whenever I see reduce, I usually skim to the very end to get ahold of the initial value, because it will tell me what this reduce is trying to do. Then, I can go back to the beginning and try to understand it.
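A sketch of what that skimming looks like (User and users are made up for illustration):

type User = { id: string; name: string }
declare const users: ReadonlyArray<User>

const usersById = users.reduce(
  (accumulator, user) => {
    accumulator[user.id] = user
    return accumulator
  },
  // only down here does the initial value reveal that we are building an Object
  {} as Record<string, User>
)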
This is not the case in other languages, for example Scala, where the initial value is the first parameter:
val numbers = List(1, 2, 3)
numbers.fold(0)(_ + _) // 6
Reduce is so mighty that you can implement all the other Array functions you use on a daily basis with it:
const mapWithReduce = (array, callback) =>
  array.reduce((accumulator, currentValue, index) => {
    accumulator[index] = callback(currentValue, index, array)
    return accumulator
  }, [])
mapWithReduce([1, 2, 3], (value) => value * 2) // 2, 4, 6

I have even seen people re-implement join with reduce:
const joinWithReduce = (array, delimiter) =>
  array.reduce(
    (accumulator, currentValue, index) =>
      accumulator +
      currentValue +
      (index === array.length - 1 ? '' : delimiter),
    ''
  )
joinWithReduce(['foo', 'bar', 'baz'], ';') // foo;bar;baz

The question is: why would you? For almost all cases, there are methods that:
- are not as powerful, with a limited scope
- have a clear API
- have a good name, so you know what it is doing
Array.join is a very good example of such a limited method. Everyone understands what is going on when we read:
values.join(';')
Compare that to the above implementation - I think we can agree that the simpler version is preferable.
So when is it okay to use reduce? For me, (mostly) only when implementing reusable util methods. It usually doesn't matter how they are implemented: you give them a good name, a clear purpose, write some tests, and that's it.
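For example, a tiny helper like this (a hypothetical sum util) is fine by me, because the name alone tells you everything:

export const sum = (numbers: ReadonlyArray<number>): number =>
  numbers.reduce((total, value) => total + value, 0)

sum([1, 2, 3]) // 6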
Most usages of reduce I have reviewed lately fall into one of three categories:
The first is transforming Arrays into Objects. Yes, there is no easy native way to do that, and even popular util libraries like lodash have no good way of achieving it (keyBy is okay, but doesn't transform values).
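To illustrate the limitation - keyBy derives the keys, but the values stay the original elements:

import keyBy from 'lodash/keyBy'

keyBy(['foo', 'bar'], (element) => 'key-' + element)
// { 'key-foo': 'foo', 'key-bar': 'bar' }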
In one project, we frequently had the need for such transformations, so we made our own util for it. The implementation is something like this:
export const toObject = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  array.reduce((result, element, index) => {
    const [key, value] = iteratee(element, index, array)
    result[key] = value
    return result
  }, {} as Record<K, V>)
toObject(['foo', 'bar', 'baz'], (element) => [
  'key-' + element,
  'value-' + element,
])

Good name, strong types, ease of use. The rest is implementation detail (including the type assertion for the initial value).
The second is reduces that have simply become too complex. Again, pick a util library (lodash, ramda, remeda, …) or write your own util - encapsulate that complex reduce so that you don't have to re-implement it every time you need it.
The third is performance optimizations. Iterating over big lists many times can be costly, so people often fall back to reduce because it can do everything in one go.
The truth is: usually, it doesn't matter. Even when working with very large lists (tens of thousands of entries), I have found that performance is rarely negatively impacted as long as you keep iterations linear.
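To sketch what I mean by linear (values is a made-up input) - both of these walk the Array a constant number of times:

declare const values: ReadonlyArray<number>

// two passes - still O(n) overall
const doubled = values.filter((value) => value > 10).map((value) => value * 2)

// one pass with reduce - the same complexity class, just harder to read
const doubled2 = values.reduce<Array<number>>((accumulator, value) => {
  if (value > 10) {
    accumulator.push(value * 2)
  }
  return accumulator
}, [])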
Whether your toObject util does one iteration with reduce or two iterations with map followed by Object.fromEntries is irrelevant, unless you have measured it and found it to be a bottleneck.
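A sketch of that two-iteration variant, under the same signature as toObject above (toObjectViaEntries is a made-up name):

export const toObjectViaEntries = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  // map to [key, value] pairs first, then let Object.fromEntries assemble them
  Object.fromEntries(array.map(iteratee)) as Record<K, V>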
Speaking of performance and linear iterations, I've learned the hard way not to do this when working with reduce:
export const toObject = <T, K extends string | number | symbol, V>(
  array: ReadonlyArray<T>,
  iteratee: (element: T, index: number, array: ReadonlyArray<T>) => [K, V]
): Record<K, V> =>
  array.reduce(
    (result, element, index) => {
      const [key, value] = iteratee(element, index, array)
      return {
        ...result,
        [key]: value,
      }
    },
    {} as Record<K, V>
  )

Why should I be dirty and mutate the result, when I can be super fancy instead and create a new object every time? 🤔🤦
Here is a perf analysis of how the two compare when run over an Array with 10k entries:
1,700 operations per second vs. 47 operations per second.
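If you want to reproduce this locally, a rough sketch (toObjectMutating and toObjectSpreading are hypothetical names for the two versions above; absolute numbers will vary per machine):

// stand-ins for the two implementations shown above
declare const toObjectMutating: (
  array: ReadonlyArray<number>,
  iteratee: (element: number) => [string, number]
) => Record<string, number>
declare const toObjectSpreading: typeof toObjectMutating

const entries = Array.from({ length: 10_000 }, (_, index) => index)

console.time('mutating')
toObjectMutating(entries, (value) => ['key-' + value, value])
console.timeEnd('mutating')

console.time('spreading')
toObjectSpreading(entries, (value) => ['key-' + value, value])
console.timeEnd('spreading')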
Yes, it's that slow, because every iteration has to re-create an ever-growing object, which makes the whole operation quadratic rather than linear: the more entries the array has, the more copying each iteration does. Mutation is not the root of all evil, and it does not have to be avoided at all costs. If the scope is small and the intent is clear - mutate away. 🚀
But still - avoid reduce
Do you like reduce or not? Let me know in the comments below. ⬇️