"Software efficiency halves every 18 months, compensating Moore's Law." (May's Law)
@fribbledom So we're kinda screwed since hardware manufacturers aren't keeping up with Moore's Law these days, aren't we? >_>
@fribbledom No, I think it's not that simple. I think that current devs are not the equals of their intellectual forebears.
Previous generations of devs had to work miracles in highly constrained environments, often working very hard to optimize their code effectively. In the era of "cheap" CPU and RAM, nearly everyone has forgotten the art of optimization.
Seriously, go ask a modern dev to describe the functionality of a specific CPU register of your choice and see what he says.
To borrow a quote from a CS professor at Harvard whose name escapes me just now, "God made NAND and we made everything else."
Below the level of logic gates, you're doing EE. At the level of logic gates and above, you're doing computation.
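If a concrete toy helps: here's a hypothetical little C sketch (mine, not the professor's) that builds NOT, AND, OR, and XOR out of nothing but a NAND primitive.

#include <stdio.h>

/* Everything below is derived from this single NAND primitive,
   illustrating the "God made NAND" line above. */
static int nand(int a, int b) { return !(a && b); }

static int not_(int a)        { return nand(a, a); }          /* NOT a  = a NAND a          */
static int and_(int a, int b) { return not_(nand(a, b)); }    /* a AND b = NOT(a NAND b)    */
static int or_(int a, int b)  { return nand(not_(a), not_(b)); } /* De Morgan                */
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void) {
    /* Print the full truth table to check the constructions. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOT=%d AND=%d OR=%d XOR=%d\n",
                   a, b, not_(a), and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}

Compile and run it and you get the whole truth table; every row is, ultimately, just NANDs.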
@profoundlynerdy @fribbledom I don't remember everything I learned about registers and so on, and I certainly don't keep up to date on that stuff, but I do remember the lessons I learned about efficiency, which is definitely part of the craft. But the number of times I've heard "eh, computers are getting faster"--no! This kind of laziness is what keeps the benefits of these faster computers from reaching users. (Also, part of the fun for me is finding the elegant solution.)
@dlek @fribbledom This is all so fresh in my mind. I'm in the process of drafting a CS course now. I start with the abacus and ask, "What's the simplest general-purpose digital computer you could build?"
Making me think about registers and clock cycles has forced me to revisit some assembly. I've been fiddling with MOS 6502 assembly for days and not writing as much of my course draft as I should have. Hahaha! First World problems.
@dlek @profoundlynerdy @fribbledom This is why I started trying to write programs for my calculator in the last week. 2.5 MiB of storage and ~20 KiB of RAM at 15 MHz, pretty fun, especially when you've just figured out an optimization that boosts the speed by a factor of 10 with hand-tuned assembly, counting the CPU cycles instruction by instruction. https://cybre.space/@niconiconi/100992046537861579
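For anyone who hasn't played this game: here's a rough C sketch of the flavor of optimization meant above. The real work was hand-tuned calculator assembly; scale_by_10 is a made-up example, not code from that project.

#include <stdint.h>

/* Straightforward version: a general multiply. On a small CPU with
   no hardware multiplier, this expands into an expensive loop. */
uint16_t scale_by_10(uint16_t x) {
    return x * 10;
}

/* Hand-optimized version: 10*x = 8*x + 2*x, so two shifts and an
   add replace the multiply. This is the classic trick when you're
   counting cycles instruction by instruction. */
uint16_t scale_by_10_fast(uint16_t x) {
    return (uint16_t)((x << 3) + (x << 1));
}

(Modern compilers do this strength reduction for you; on a 1980s-era part, or a calculator, you did it by hand, and whether it paid off was exactly the kind of thing you settled by counting cycles.)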
If you asked a non-modern dev to describe the functionality of a specific JRE library function or CSS selector, they'd also have trouble. If you measure knowledge by what you can recite, modern devs probably know as much as, if not more than, programmers from the 1980s. But it's different stuff.
(Also, though, programming isn't about reciting facts. It's about creating things.)
If you only need to run a task once, or only rarely, it doesn't matter what language or tools you use so long as it's accomplished in a reasonable time.
If the task is performance-sensitive or needs to run on older hardware, that's when optimisation becomes an issue developers care about.
@profoundlynerdy @fribbledom That might be true, although I think there's a big functionality/performance tradeoff. While some modern abstractions cost a lot in performance, more features get added because the work is less error-prone and there's less reinventing the wheel. Ultimately this is about what the industry is demanding from programmers rather than some innate inability to memorise registers or what have you.
That said, I still cringe whenever I hear of some company replacing their legacy COBOL applications with theoretically equivalent Java code.
While COBOL isn't bulletproof and can certainly ABEND (terminate abnormally), the solution from the Java crowd always seems to be "Hmm... I don't know, try restarting the JVM."
Ok, but go back in time to 1980 and ask a programmer to describe how he'd implement an internet-enabled service with cloud backups.
You're not asking for programmers as knowledgeable as ones from the 80s, you're asking for some union of both sets of knowledge and experience.
That's going to be rare in any era.
@veer66 @fribbledom @profoundlynerdy That as well. But the days when ax (eax, rax) was strictly the accumulator are over; we left that behind with the 8-bitters. Even in x86 it was just a name, while on the 6502, registers were still restricted to certain purposes. Today RISC has won: lots of general-purpose registers. Even x86 is translated to RISC-like instructions internally by the CPU these days.
@veer66 @js @fribbledom Sort of: on the whole, most Intel CPUs use CISC instructions as, well, basically macros for lower-level RISC-like instructions, for performance reasons. I'm not sure if all of this is defined at the microcode level or not, but I'm sure some of it must be.
So CISC persists, but RISC is the real winner here.
@profoundlynerdy @fribbledom Also, ask a modern dev about cache lines. These didn't exist back then, and they open up a lot of optimization today. The difference isn't the era. The difference is people who care about performance and the details of how things work vs. people who don't give a shit and just want to quickly ship something that falls apart next week.
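To make the cache-line point concrete, a hypothetical C sketch (the array size and the clock()-based timing are arbitrary choices of mine): the two loops do identical arithmetic, but walk memory in different orders.

#include <stdio.h>
#include <time.h>

#define N 4096

/* 64 MiB grid. The row-major walk touches memory sequentially, so
   one fetched cache line serves many accesses; the column-major walk
   jumps N*sizeof(int) bytes every step and misses constantly. */
static int grid[N][N];

int main(void) {
    long sum = 0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++)        /* row-major: cache friendly */
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    t1 = clock();
    printf("row-major:    %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < N; j++)        /* column-major: cache hostile */
        for (int i = 0; i < N; i++)
            sum += grid[i][j];
    t1 = clock();
    printf("column-major: %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return (int)(sum & 1);  /* keep sum live so the loops aren't elided */
}

On typical hardware the column-major loop comes out several times slower, purely because of how the data meets the cache; nothing about it shows up in the big-O.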
I see your point, but I wonder if it puts things in the proper light. Modern developers might not need to work small and efficient, and can create more complex and powerful products. Super Mario 64 is 8 megs; Dark Souls III is almost 20 gigs; neither the small nor the large is necessarily superior. Their developers simply had different environments to work with.
@fribbledom May who?
I would call this the "eh, good enough" equilibrium: with better hardware, you don't need programs to be as tightly optimized (and simple) for them to be reasonable to use.
@fribbledom LOL--and this is why I still code in C rather than those new-fangled languages.
@fribbledom Aw, we aren't punning,
and calling it "Leess's Law"?
(or something similar)
@fribbledom memory footprint doubles as well
@fribbledom too real