
If it involves a couple of hacky tricks, I don't care if your code solves the problem 6% faster.

Hardware scales and will improve over time. The human capacity for understanding and maintaining code does not.

There are always exceptions to the rule, but generally speaking:

clean code > performance.

@fribbledom The secret to writing good software is basically maintaining a balance between simplicity, correctness and performance.

@fribbledom Amen brother! The most important attribute for good code is that when someone else sees it for the first time, they understand it right away (in general).

If you have to write code that genuinely needs to be complex or cryptic, then there should generally be one or two big paragraphs of documentation above it, explaining why you did it that way and what it does, in narrative form, the way you'd explain it in person.

@fribbledom You could just have tooted clean code > performance 😂

@fribbledom honestly, it's far more often the case that cleaner, clearer code performs better in the first place. I've seen far too many performance hacks bolted onto shit code that a proper refactor would have obviated.

@fribbledom but separately, respect for user resources matters too. Clean code is good. But so is avoiding the all too common issue of throwing wildly inefficient code at users and making them bear the cost of that inefficiency.

And honestly the disregard for efficient code is a big contributor to the disposable computing problem. I think the right path is more nuanced than "clean code is always better than performance" or any other maxim

@calcifer @fribbledom code your programs like you're gonna use them regularly.

@calcifer @fribbledom From Rob Pike’s third rule of programming: “Fancy algorithms are slow when n is small, and n is usually small.”
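
A minimal C++ sketch of that point, with placeholder containers and values: for a handful of elements, a plain linear scan is usually at least as fast as the asymptotically "better" alternative.

    // Pike's rule in miniature: n is small, so the simple O(n) scan wins
    // (or at least doesn't lose) against the fancier O(log n) structure.
    #include <algorithm>
    #include <iostream>
    #include <set>
    #include <vector>

    int main() {
        std::vector<int> small = {7, 3, 9, 1, 5};             // n is small, and n is usually small
        std::set<int>    fancy(small.begin(), small.end());   // tree nodes, allocations, pointer chasing

        bool in_vector = std::find(small.begin(), small.end(), 9) != small.end();
        bool in_set    = fancy.count(9) > 0;

        std::cout << in_vector << ' ' << in_set << '\n';
        return 0;
    }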

@fribbledom and most importantly, before you reach for any hacky tricks to improve performance: measure, don't speculate.
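
A minimal sketch of "measure, don't speculate", assuming a hypothetical hot spot called work_under_suspicion(): time it first, then decide whether a hack is even worth discussing.

    // Time the suspected hot path before optimizing anything.
    #include <chrono>
    #include <iostream>

    void work_under_suspicion() {
        // Placeholder workload; volatile keeps the compiler from deleting the loop.
        volatile long sum = 0;
        for (long i = 0; i < 10'000'000; ++i) sum = sum + i;
    }

    int main() {
        auto start = std::chrono::steady_clock::now();
        work_under_suspicion();
        auto stop = std::chrono::steady_clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
        std::cout << "took " << ms.count() << " ms\n";  // only now decide if a hacky trick is justified
        return 0;
    }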

@fribbledom I took over a software project at university when starting my thesis. It was a pile of C++ code written in C style “for performance”. It took hours to run. I didn't understand much of the code, so I wrote a naive implementation alongside it (std::vector, no optimization). It was blazing fast.

What happened? My supervisor had tried to optimize the wrong thing, actually introduced overhead with the optimizations, and never realized that running in debug mode carried a performance penalty of its own.
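
Something in the spirit of that rewrite, as a sketch (process() stands in for the thesis code, not the actual project): plain std::vector, a standard algorithm, and the optimizer left to do its job.

    // "Naive" but readable: contiguous storage, standard algorithm, no hand-rolled buffers.
    #include <iostream>
    #include <numeric>
    #include <vector>

    double process(const std::vector<double>& samples) {
        return std::accumulate(samples.begin(), samples.end(), 0.0);
    }

    int main() {
        std::vector<double> samples(1'000'000, 0.5);
        std::cout << process(samples) << '\n';
        // Measure a release build (-O2/-O3); a debug build can be many times
        // slower and tells you nothing about real performance -- the trap above.
        return 0;
    }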

@fribbledom
But if those hacky tricks are the difference between an instance working and not working, I'll trick it all damn day!

@fribbledom oftentimes cleaner code even gets you better performance.
Especially if it's not a mess of unused libs, classes and functions.

@fribbledom And 20 years later, when it turns out your hacky "performance boost" introduced some subtle security vulnerability, it'll be a real testament to your programming abilities.

@fribbledom and additionally: debugging code is twice as hard as writing code. If you make your code as smart as you possibly can, you are by definition not smart enough to debug it!

@elbosso @fribbledom We need a PSA about that.

Write dumb code, people. Write dumb code.

@fribbledom Or we simply use Java, so we get neither and don't need to decide! ;)

@gcrkrause
That's not true, as much as I wish it were. You can write efficient code with primitives, but once you have to deal with autoboxing and lambdas you lose the performance and the JIT can't help you.
Maybe that changes with Project Valhalla and value objects, but until then you sadly can't always do what you want in Java and expect it to perform at max level.

@hamburghammer I just saw a chance to bash Java, I am totally biased there and don't want my mind changed :D

@fribbledom depends on your scale. 6% can be a few seconds and a couple of cents, or it can be several months and a billion-dollar difference. Performance _matters_, and the cult of leaving it up to powerful devices has to stop, because no, hardware doesn't just scale: we're _already_ at the limit of how far we can push single threads. You're missing that hardware has grown immensely, yet software has filled that performance growth like hot air, running slower than ever while not doing all that much more.

@fribbledom leaving it to hardware has also created a 60-million-ton e-waste problem, because lazy software design with quick and easy excuses has left good devices unusable

@evolbug

Sure, it always depends on the context, and as I said, there are exceptions to every rule. As a general rule though, I stand by it.

@fribbledom correct analysis models plus a good model compiler means never having to read code while still getting efficiency. #xtuml

@fribbledom except when you do stuff like bubble-sort. Embrace the features in your libraries and frameworks. Also: write documentation & tests.

@fribbledom Can I tattoo this on the forehead of certain boomer programmers?

@fribbledom [optimal algorithm] > [clean code] > [hacky optimization]

@fribbledom My general policy is "performance only eclipses clean code when you've identified an actual bottleneck"

I've written a ton of very inefficient code, because run time wasn't an issue, and heavily encourage others to do the same :)

I've also seen too many people "optimize" code without actually profiling the run-time, and the end results are often actually slower AND less readable.

Or they really did improve the run time... but that loop only accounts for 1% of the full function's runtime because it has to hit three databases and the network latency eclipses everything else.
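
That last case is just Amdahl's law; a back-of-the-envelope sketch (overall_speedup() is an illustrative helper, the 1% figure comes from the post above): if the loop is 1% of the runtime, even making it infinitely fast buys roughly a 1.01x overall win.

    // Overall speedup when only a fraction of the runtime gets faster (Amdahl's law).
    #include <iostream>

    double overall_speedup(double fraction, double local_speedup) {
        return 1.0 / ((1.0 - fraction) + fraction / local_speedup);
    }

    int main() {
        std::cout << overall_speedup(0.01, 10.0) << '\n';  // 10x faster loop -> ~1.009x overall
        std::cout << overall_speedup(0.01, 1e9)  << '\n';  // "free" loop     -> ~1.010x overall
        return 0;
    }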

@fribbledom Depends. If your code is consuming 60% CPU, 6% less energy usage may be significant. Also, in gaming, 6% better performance means more people able to play the game.
