
Optimize What? • Commune

Silicon Valley is full of the stupidest geniuses you'll ever meet. The problem begins in the classrooms where computer science is taught.

communemag.com/optimize-what/

Interesting how "optimization" here is used *only* in the sense of optimizing *outcomes*, with zero regard for *costs*.

Two more things that mar this mind-blowing read are the word "overeducated", and a serious suggestion that workers seize the means of production.

Dude, the means of production are expensive as all hell, that's how we ended up with capitalism in the blue corner, and centralized economies in the red corner.

@kavbojka “The study of machine learning offers a stunning revelation that computer science in the twenty-first century is, in reality, wielding powers it barely understands.” communemag.com/optimize-what

Very good article.

So-called “soft” questions about society, ethics, politics, and humanity were silently understood to be intellectually uninteresting. They were beneath us as scientists; our job was to solve whatever problems we were given, not to question what problems we should be solving in the first place. And we learned to do this far too well.

@uint8_t That was certainly my experience in college, and it was hugely demoralizing. It felt like my field of study was amoral and inhuman.

@kavbojka @notimetoplay @uint8_t @cstanhope

A long read, and I haven't quite processed it yet. I agree with many things and the general sentiment, though I'm still not sure what the conclusion even is. Here's one thing that stood out to me:

> “Is this a technical problem that can be optimized to find some happy balance, or is it a political question of whether elections should be run this way, with campaign contributions, in the first place?”

Isn't it both? And that class was about the technical problem? At some point it has to be acceptable to discuss narrow technical issues, as long as that's not all we discuss. Should we only be allowed to pick problems that are divorced from reality?
@kavbojka @cstanhope @notimetoplay @uint8_t

I hope we don't have to go down to the level of "O(n log(n)) sorting algos are generally a better choice than O(n^2) ones" to not have someone interject "actually the real problem is global capitalism".

I do see the point that it's easy from a point of privilege to say "that's not what we're discussing right now", but in an engineering context there often really is a technical problem to discuss, and to have a useful and productive discussion about.

Maybe the example under discussion could be chosen to have fewer high-level issues surrounding it, but should that really be necessary in order to be able to discuss the problem at hand?
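As an aside, the asymptotic claim in the post above can be made concrete with a small sketch (illustrative code, not from the thread; the step-counting versions of the sorts are my own): counting basic operations for a quadratic and a linearithmic sort on the same worst-case input.

```python
def insertion_sort(xs):
    """O(n^2) worst case: each element is swapped left one slot at a time."""
    xs = list(xs)
    steps = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            steps += 1
            j -= 1
    return xs, steps

def merge_sort(xs):
    """O(n log n): recursively split, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, a = merge_sort(xs[:mid])
    right, b = merge_sort(xs[mid:])
    merged, steps = [], a + b
    i = j = 0
    while i < len(left) and j < len(right):
        steps += 1  # one comparison per loop iteration
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, steps

data = list(range(2000, 0, -1))  # reversed list: worst case for insertion sort
s1, quadratic_steps = insertion_sort(data)
s2, linearithmic_steps = merge_sort(data)
# Both produce the same sorted output, but the quadratic sort
# performs vastly more basic steps on the same input.
```

On this input the insertion sort performs n(n-1)/2 swaps (about two million), while the merge sort needs only on the order of n log n comparisons, which is the kind of gap that makes the choice uncontroversial regardless of the surrounding politics.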
@kavbojka @cstanhope @notimetoplay @uint8_t My main issue with the article, though, which may clear up if I read it a few more times, is that I don't know what its conclusion is.

It seems to be saying that AIs and optimization processes can become inscrutable, that the economy is an example of an inscrutable process, and that we should therefore replace it with some model and calculation we will somehow be able to understand, without giving any reason why that model would be more understandable than the economy we have.

Alternatively it might be saying, we optimize for the wrong things all the time, therefore we should not optimize things, which seems an even odder conclusion to me.

But I may be a hopeless technocrat.

@clacke @kavbojka @cstanhope @notimetoplay I think the main takeaway is that way too often we are solving technical problems without examining the context, and this is damaging to society. IT hasn't had its Manhattan Project type of reckoning yet. The abuse potential of the systems we build should be examined _all the time_.

@uint8_t @kavbojka @cstanhope @notimetoplay I agree, and I wanted the article to say that, but I don't think it successfully did.

Therac-25 was software, so I think it counts as "ours". I'm disappointed it was never mentioned at uni.

More of an unethical negligence case than unethical requirements/design though.
@kavbojka @cstanhope @notimetoplay @uint8_t I'm sure we should have several examples of unethical design to draw on as well. It's been a few decades.

We have had, and still have, operating systems with anti-features, deceptive ads, popups ...

Nothing quite as dramatic as wiping out whole cities though. Military applications of worms and viruses, perhaps. And worms and botnets blackmailing hospitals.

@kavbojka »So-called “soft” questions about society, ethics, politics, and humanity were silently understood to be intellectually uninteresting. They were beneath us as scientists; our job was to solve whatever problems we were given, not to question what problems we should be solving in the first place.«

That's so glaringly upside-down, it hurts.
