Realistically, I suspect most of what people use computing devices for could be done with the following:
* 100 MHz CPU with a single 3-5 stage pipeline, an FPU, no branch prediction, no out-of-order execution, and no speculative execution of any kind
* A GPU with enough resources for 2D desktop composition (not necessary but nice to have) and a hardware-accelerated 1080p60 video codec, that can also be used for some light general-purpose workloads
* A crypto coprocessor
@bhtooefr it could be, but software bloat makes that not so feasible
@popefucker Honestly, the biggest problem is the web.
Everything else can be rewritten to work well on this target hardware (and I do mean on it, not offloading to a Xeon in some datacenter), but the web... ugh. There's no good way to make the existing web faster other than offload.
@popefucker I came at this from the direction of "doing tasks", though, and "using the web" is, in 2019, rarely the task done for its own sake. (20 years ago, it was often the task, but nowadays, the web is used as an application platform...)
Rewriting the applications that are currently web-based is really the hardest part, just because there are so damn many.
I've been thinking about this a lot, and tried putting it into words earlier today, and landed on basically this (but I'm not as direct as you, so it took me two paragraphs to say basically the same thing.)
It's the cryptography coprocessor that's going to be the important/difficult part. And it'd need to be upgradeable, explicitly trustworthy, *and* tamper-evident.
But if we could get a standard going on that front, then everything else you've said here is spot on too.
@ajroach42 @bhtooefr I remember the days when moving from a ~Pentium 133 to a Pentium 200 w/ MMX made a *huge* difference in just... playing MP3s in WinAmp; apparently the MMX instructions in particular would knock CPU usage down from ~25-30% to like 5%.
Potentially tied in with the video decoder (audio too?) and/or crypto coprocessor, or potentially just good SIMD / vector units?
@ajroach42 @bhtooefr Also, I am not by any means an EE, but I keep hearing good things about RISC-V, and of course the big commercial team at SiFive apparently has a whole bunch of cores of different capabilities and sizes - I wonder how these would stack up as a potential starting point?
@feld Not just the languages, but the mindless approaches to things from bringing in libraries without understanding what they do.
I always like to point to RISC OS as an example of how much better things could be.
Under the hood, there's massive technical debt, but 15 years of assuming that your software had to be usable on a 202 MHz StrongARM means that you *HAD* to aggressively optimize, and a $5 Raspberry Pi is incredibly fast with it as a result. (Except for the web.)
@feld And, sure, that's an order of magnitude faster than what I'm talking about, but that's still a few orders of magnitude slower than what we expect our *PHONES* to be...
@dokuja I mean, there's open source SPARC designs that IIRC would be suitable - remove the branch prediction from microSPARC IIep?
Or a RISC-V could be configured in the right way, too.
Or there's always some more exotic ideas (Mill, anyone?), but they really start deviating from my point here (but that doesn't make them incompatible with my point).
The idea is something like 486DX4 or ARM7 levels of technology are enough for what people actually need.
@dokuja (Granted, Mill lends itself to speculative execution...)
@bhtooefr RISC architectures are really neat. I suspect those are what you are referencing?
@Anarkat Nah, doesn't have to be RISC specifically, more speaking of a general technology level.
I mean, an Intel 486DX4 meets the CPU portion of this description perfectly.
But, yes, there's a lot of 80s and early 90s RISC designs that also meet it.
@bhtooefr I've got a 2005-era box with a *1.3 GHz* CPU and a couple of gigs of RAM. It's wonderful for shell-based tasks, a reasonably light desktop, and modest GUI apps, under Debian GNU/Linux.
It completely dies using a composited desktop. Tried that, no bueno. GNOME, KDE, Cinnamon.
And it effectively freezes hard under Firefox or Chromium. Just. Won't. Run. Even with just one tab, at least after a few minutes.
And almost *everything* online today is Web-based.
So I disagree.
@bhtooefr Mind that the problem is really that *the Web is just too goddamned bloated.* And that this is reflected in both individual web pages and browsers.
There is no effective constraint or penalty on consuming excessive resources. So websites, and browsers, simply expand to consume all available memory. *Even on a far newer iMac, with 8 GB RAM and an SSD/Hybrid drive*, Firefox kills performance after a day or so (and yes, I have Far Too Many Tabs Open). That's another story.
@bhtooefr In theory, this could be fixed ... somehow. Browsers which treated webpages as individual processes (and which could be terminated with prejudice, to be resurrected on demand). Penalties for excessively complex DOMs and memory (and CPU) utilisation. Vastly more effective state management for browsers (the true root of the tabs problem). And more.
And even then, I don't think you're going to get to a 100 MHz CPU. Though yes, a far thinner resource allocation could work.
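The "pages as killable, resurrectable processes" idea above can be sketched as a toy model. This is purely illustrative Python, assuming an invented `TabManager` with a made-up per-tab memory estimate; no real browser exposes anything like this interface:

```python
from collections import OrderedDict

class TabManager:
    """Toy model of a browser that suspends tabs over a memory budget.

    A suspended tab keeps only its identity (here, the URL; a real
    implementation would also keep scroll position, form state, etc.)
    and is re-rendered on demand.
    """

    def __init__(self, memory_budget_mb):
        self.memory_budget_mb = memory_budget_mb
        self.live = OrderedDict()   # url -> estimated MB, oldest first
        self.suspended = set()      # urls whose state was dropped

    def open(self, url, estimated_mb):
        self.live[url] = estimated_mb
        self.live.move_to_end(url)  # mark as most recently used
        self.suspended.discard(url)
        self._enforce_budget()

    def focus(self, url):
        """Resurrect a suspended tab on demand."""
        if url in self.suspended:
            # A real browser would re-fetch and re-render here;
            # we just pretend that costs a nominal 50 MB.
            self.open(url, 50)
        elif url in self.live:
            self.live.move_to_end(url)

    def _enforce_budget(self):
        # Terminate least-recently-used tabs "with prejudice" until
        # the live set fits the budget (never killing the newest tab).
        while (sum(self.live.values()) > self.memory_budget_mb
               and len(self.live) > 1):
            url, _ = self.live.popitem(last=False)
            self.suspended.add(url)
```

With a 200 MB budget, opening three 100 MB tabs suspends the oldest; focusing it later brings it back and pushes something else out instead.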
@dredmorbius You're thinking of the problem at the wrong level, though. You're thinking the problem to be solved is "run a web browser".
The web browser is the platform, not the task.
The tasks that the web browser is used to perform could, if done competently, be done on the level of hardware I describe, almost universally.
(Note that I'm specifically saying to have hardware to assist with things like compositing, too, and use that GPU for DSP tasks like audio codecs.)
@bhtooefr *As things stand now* the tasks *cannot* be performed without a web browser. You'd have to change literally billions of web pages and applications across millions of websites.
I *use* console-based web tools. I write (and run) scrapers. I do much of that on the 2005-era machine mentioned earlier. One interesting performance statistic from scraping 105,000 Google+ Community pages in recent weeks: it literally took less time to crawl those pages than to parse them WITH CLI TOOLS.
@bhtooefr The crawl itself took 16 hours. Extracting the specific data of interest, using HTML-XML-utils, a command-line parsing tool, plus some HTML pre-processing in awk, *took 46 hours*. Literally longer to walk the DOM in memory than to fetch it over slow broadband. And that's *without* display overhead, just figuring out what's on the damned page.
Trust me, I've got ideas for getting toward the world you describe, but there's a LOT of what the modern Web does that it won't be able to do.
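The kind of extraction being described (walking a page for specific data, no display) can be sketched with nothing heavier than Python's stdlib streaming HTML parser. The tag and class names here are invented for illustration, not Google+'s actual markup, and this stands in for HTML-XML-utils rather than reproducing it:

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Pull the text of elements matching a (hypothetical) class name
    out of a page, streaming, without building a full DOM."""

    def __init__(self, wanted_class):
        super().__init__()
        self.wanted_class = wanted_class
        self.depth = 0        # >0 while inside a matching element
        self.found = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.depth or self.wanted_class in classes:
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.found.append(data.strip())

page = '<div><h2 class="post-title">Hello</h2><p>body</p></div>'
extractor = ClassTextExtractor("post-title")
extractor.feed(page)
# extractor.found is now ["Hello"]
```

Even a streaming parser like this does real per-tag work on every byte, which is part of why parsing a heavyweight DOM can take longer than fetching it.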
@dredmorbius And, ultimately, what does Google+ do that, say, a Mastodon client (something that is practical on my proposed system - I mean, I just saw today where someone had the beginnings of a Mastodon client on an Atari 68k machine) connecting to a Mastodon or Pleroma instance can't?
(Yes, I know, Circles. But other than that, what?)
Yes, you have to actually replace a lot of these sites, and there's network effects at play. But, the *things that they do* can be done.
@dredmorbius And there's always using intermediate servers to prechew stuff until things can be scaled down to actually be practical on native devices.
Opera Mini has been doing that for many years to get the full-on web down to devices as low-end as dumbphones.
@bhtooefr You can pre-chew all you like. If you cannot talk to the API (and the API doesn't allow itself to be talked to in most interesting cases), you're fucked.
There are exceptions. CLI Mastodon tools, CLI Reddit tools.
With A LOT OF HARD WORK that's possible. But for the typical user, IT SIMPLY WON'T HAPPEN.
(Again: I really wish it would: https://old.reddit.com/r/dredmorbius/comments/6bgowu/what_if_the_web_was_filesystemaccessible/)
I've said most of what I'm interested in saying on this, begging off.
@dredmorbius In this particular case, I'm literally meaning "run Chrome on a server and send the rendered page to the device".
That is a form of prechewing that will *always* work.
Opera Mini does it using Presto (Opera's old rendering engine) rendering to a binary markup language designed for low-resource local rendering.
Browsh does it using Firefox rendering to a text console.
Skyfire did it using Gecko and I believe rendering to a video stream.
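The prechew step itself can be tiny on the client's side of the wire. Here's a rough sketch of the server half, assuming already-rendered HTML as input and an invented length-prefixed wire format; real prechewers (Opera Mini's OBML, Browsh, Skyfire) work from a full rendering engine's layout tree, which regex tag-stripping only stands in for:

```python
import re

def prechew(rendered_html):
    """Boil rendered HTML down to a list of plain-text runs.

    Stand-in for what a real prechew server does after its rendering
    engine has produced final page content.
    """
    # Drop script/style bodies entirely - the thin client never sees them.
    text = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", rendered_html)
    # Replace remaining tags with whitespace, then collapse runs.
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    return text.split()

def encode(runs):
    """Length-prefix each run - a toy stand-in for a compact wire
    format a low-resource client can draw directly."""
    out = bytearray()
    for run in runs:
        data = run.encode("utf-8")
        out += len(data).to_bytes(2, "big") + data
    return bytes(out)
```

The client then just draws length-prefixed text runs, which is exactly the sort of work a 100 MHz-class device handles easily.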
@dredmorbius Literally the only way to block it is by IP address, identifying and blocking the prechewing servers.
If they're actually hiding in people's homes, though - "buy this $300 box and it'll save you on data bills and let this smartphone with a month of battery life work on the web" - they can't even do that.
@bhtooefr What Google+ (or Facebook, or Instagram) does is *interact with itself*, and more significantly, *with the social and commercial / political graphs present on those sites.* So long as that's where you want to be, or what you want to interact with, you're stuck with those.
And yeah, I'm kind of aware of this shit -- see https://social.antefriguserat.de and https://old.reddit.com/r/plexodus -- I'm part of the G+ Exodus, and looking at open and federated alternatives is Very Much What I'd Like to Happen.
@bhtooefr And the fundamental problems are pretty much what I'd described before: *there is no penalty to sites or software for imposing excessive overhead on user systems.* Actually, THAT OVERHEAD IS ITSELF A FEATURE. It helps select for those who can afford sufficiently capable (and new) systems. Which, in a commercially-driven Web, is one of the very few market-segmentation mechanisms that works.
You can't open shop in an upscale neighbourhood. But you can demand beefy clients.
@bhtooefr For pure content, there's a hell of a lot of crap you can strip from virtually any Web page. But if you want to interact with systems -- school, work, government, commerce, etc. -- presently a full-featured, JS-enabled, GUI browser named Chrome, Safari, or Firefox is a virtual necessity. And I'm mostly including Safari and Firefox out of professional courtesy: Chrome owns the Web.
(And if you think I'm happy about this, I'm not. At all.)
But until we can knock out the pins...
@dredmorbius I suspect there's also always... you can often strip the crap to get at the real underlying API that these sites use.
It may take a lot of work, but for things that are necessary and sufficiently popular, there might be enough market for that work, to create something that can slam responses into these tools' endpoints using a reimplemented front end.
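A reimplemented front end talking to the site's own underlying API mostly boils down to fetching the JSON the official client consumes and rendering only what a lightweight client needs. A minimal sketch, with a hypothetical endpoint and field names (the approach is the point, not any real service's API):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint - found by watching what the real front end
# requests, which is the reverse-engineering work described above.
FEED_URL = "https://example.invalid/api/v1/feed"

def fetch_feed(url=FEED_URL):
    """Fetch the JSON the site's own front end consumes."""
    req = Request(url, headers={"User-Agent": "tiny-native-client/0.1"})
    with urlopen(req) as resp:
        return parse_feed(resp.read().decode("utf-8"))

def parse_feed(raw_json):
    """Reduce a feed response to just what a lightweight client renders:
    (author, text) pairs, ignoring everything else in the payload."""
    feed = json.loads(raw_json)
    return [(item["author"], item["text"]) for item in feed["items"]]
```

The parsing half is trivially cheap; it's keeping up with an API that was never promised to third parties that costs the ongoing work.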
@dredmorbius And there's also looking at reverse-engineering things like the smartphone apps, especially if it's a service also offered in developing nations - the developing nations APIs may be usable.
@bhtooefr ... underlying this dynamic, IT WILL NOT CHANGE.
Individually, users cannot tell schools, governments, businesses, commerce sites, and employers to Go To Hell. Not /enough/ individuals, at any rate.
Quite honestly, your best tools are probably legal -- GDPR and ADA in the EU and US, respectively. Put some liability into noncompliance. That shifted Google+, in a fairly extreme way.