I’ve started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it’s only really applicable to data processing applications (as opposed to general programming), and there are very few times in my career when I’ve needed to use, much less optimize, data processing code. And when I do, I use someone else’s library.
How often does branchless programming actually matter in the day to day life of an average developer?
It matters if you develop compilers 🤷.
Otherwise? Readability trumps the minute performance gain almost every time (and that’s assuming your compiler won’t automatically do branchless substitutions for performance reasons anyway, which it probably will).
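To make that concrete, here’s a rough sketch in C of what a “branchless substitution” looks like (the function names are just for illustration, and the second version is the classic select-without-a-jump bit trick). At -O2, gcc and clang will usually turn the plain ternary into a conditional move on x86 anyway, so the hand-rolled version mostly just costs readability:

```c
/* Two ways to pick the larger of two ints (illustrative names). */

int max_branchy(int a, int b) {
    return (a > b) ? a : b;            /* a branch in the source... */
}

int max_branchless(int a, int b) {
    /* ...and the branch-free select: when a < b the mask is all ones,
       so the XOR swaps in b; otherwise a is left untouched. */
    return a ^ ((a ^ b) & -(a < b));
}
```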
The better articles and videos also emphasize that you should test and measure, before and after you’ve “improved” your code.
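For what it’s worth, “measure” doesn’t have to mean anything fancy. A rough before/after timing harness in C might look like the sketch below (assuming POSIX clock_gettime; real benchmarking needs warm-up runs, a quiet machine, and care that the compiler doesn’t delete the loop):

```c
#include <stdio.h>
#include <time.h>

/* Rough timing sketch, not a proper benchmark harness. */
static double seconds_now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    volatile long sink = 0;            /* volatile so the loop isn't optimized away */
    double t0 = seconds_now();
    for (long i = 0; i < 100000000L; i++)
        sink += (i & 1) ? i : -i;      /* the code under test goes here */
    double t1 = seconds_now();
    printf("elapsed: %.3f s\n", t1 - t0);
    return 0;
}
```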
I’m afraid there’s no standard, average answer. Trying to optimize your code yourself might very well cause it to run slower.
So unless you have good reasons (good as in “proof”) to do otherwise, I’d recommend aiming for readable, maintainable code, which is often not optimized code.
I only know of a handful of cases where branchless programming is actually being used. And those are really niche ones.
So no. The average programmer really doesn’t need to use it, probably ever.
If you want your code to run on the GPU, the viability of your code can depend entirely on it. But if you just want to run it on the CPU, it is only one of many micro-optimization techniques you can use to shave a few nanoseconds off an inner loop.
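Roughly speaking (plain C here just to illustrate the rewrite, though the point is about GPU kernels): threads in a warp execute in lockstep, so if a branch splits them, both sides run one after the other, while the arithmetic form keeps every thread on the same path.

```c
/* A ReLU-style clamp, two ways. On a GPU the branchy form can make a
   warp execute both paths; on a CPU it's just another micro-optimization
   that may or may not pay off. */

float relu_branchy(float x) {
    if (x < 0.0f)
        return 0.0f;
    return x;
}

float relu_branchless(float x) {
    /* (x > 0.0f) evaluates to 0 or 1; the multiply selects without a jump */
    return x * (float)(x > 0.0f);
}
```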
The thing to keep in mind is that there is no such thing as an “average developer”. Computing is way too diverse for that.
And the branchless version may end up being slower on the CPU, because the compiler does a better job optimizing the branching version.
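For example (a hypothetical illustration, not a benchmark result): a hand-rolled branchless abs() isn’t automatically faster than the obvious one, because gcc and clang already recognize the plain form and emit branch-free code for it.

```c
int abs_plain(int x) {
    return (x < 0) ? -x : x;       /* compilers typically emit this branch-free */
}

int abs_bit_trick(int x) {
    /* assumes 32-bit int and arithmetic right shift of negatives;
       like abs(), undefined for INT_MIN */
    int mask = x >> 31;            /* all ones if x is negative */
    return (x + mask) ^ mask;      /* may be no faster than the plain version */
}
```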
As a webdev I’ve honestly never even heard of it