Eugen @Gargron

I don't understand people who think that self-improving artificial intelligence will solve all of humanity's problems. We've already got self-improving natural intelligence and all it does is shitpost and be depressed


@Gargron

I think it's the hope that the machines will see our flaws in ways we cannot and eventually solve them AROUND us, in a way.

But maybe that's just me?

@sydneyfalk @Gargron The machines will see our flaws and eliminate them.

Insufficient paperclip production.
Faulting organic unit #2644-37 removed and scheduled for disposal.

BEEP!

LEGACY LANGUAGE TRANSLATION

Oh crap, here comes that mad lady with the machine guns who keeps beating us.

Initiate timeline evacuation to Columbian Era, let's sort this once and for all.

@gargron they think AI will be faultless, unlike its human creators, and want to transfer responsibility to the machine, a virtual god.

@Gargron even more so when #AI algorithms are crafted by a minority of (probably white, male) developers completely out of touch with reality. What could go wrong?

@Gargron I just keep thinking an AI is only as good as the data we feed it and the assumptions we've programmed into its worldview.

@lilithsaintcrow @gargron We've no way of predicting what self-improving AI would be like (if we can, in fact, ever build it!). Two obvious fallacies to avoid: Thinking it would be like present-day AI; Thinking it would be like people.

The latter is like Victorian futurists assuming our society would basically be Victorian, only with spacecraft; the former, like assuming spacecraft would change society, but they'd still be powered by triple-expansion engines.

@ej @Gargron It strikes me that our present society is EXTREMELY Victorian, with not ENOUGH spaceships.

@hj Given limited resources & time, I would prioritize researching a cure for human ageing over researching a general AI. In practice both *can* happen at the same time, but both are awfully underfunded and underrepresented.

@gargron @hj the UK govt already expects me to work until age 67 before getting the state pension; it's likely I would still have to supplement this with other income into my 70s (assuming I am still around then!).

Also the population of Europe (and even Asian countries) is *declining* and *ageing* (in spite of migration), and there is a shortage of people who have skills in infrastructure maintenance.

So we need to keep older folk going long enough to keep the lights on and everything else working!

@hj @Gargron You say such limitations aren't a thing, but you do not offer (and cannot possibly offer) any argument or evidence in favour of this claim.

You are just assuming AI will be better than us. This is not a good assumption to make. It is entirely possible that the only way to reach our level of intelligence is to also suffer from as many faults as we do.

Self-improving AI = Marvin, the Paranoid Android? - very depressed AI

@gargron well, "self-improving natural intelligence" seems pretty far-fetched considering what is going on in the world :D

@gargron Maybe I do. They are afraid of going to church because of the stigma that got attached to religion, so they hope someone codes a god for them. I am not religious myself, and am very convinced of that, but I still sometimes feel the absence of a god as painful.

@Gargron Sure it will. If humanity is the origin of humanity's problems, replacing humanity with machinekind will solve those.

@Gargron What if this behavior is rather typical for a general intelligence, and the artificial one will do just the same? DARPA will be disappointed… everybody will be disappointed.

@Gargron @guerrillarain yeeeees

but that's because their brains are made of meat

@Gargron Eh, it theoretically could. Assuming we get all the details right. Of course, that in and of itself is a massive problem, and not one I think we're well equipped to tackle. Thus, my project (Assuming I ever get back to working on it. XD) to enhance human communication, and thereby hopefully allow us to exercise greater collective intelligence... :/

@Gargron This is what gets me about AI fantasies: The idea that our human flaws are somehow entirely unrelated to our human strengths, and the assumption that you could create intelligence that has one but not the other.

@Gargron The way it looks to me, human intelligence is, like everything else about us, full of tradeoffs. You could be better at pattern-matching, but then you'd also go autistic. You could be better at intuiting complex answers from subtle clues, but then you'd also go superstitious and paranoid.

I see no reason an artificially created intelligence would not be forced to make similar tradeoffs.

@WAHa_06x36 @Gargron That's a very human-centric way of perceiving intelligence though, isn't it? What if artificial intelligence were to transcend what we are able to understand, or how we think intelligence is structured?

Our inherent hubris of "If I created it, it cannot possibly grow beyond my understanding" is a common theme in exceptional sci-fi, and it's also something humans are most uncomfortable with 😃

@moritzheiber @Gargron But the default assumption now is basically "Intelligence can be anything! We couldn't even comprehend it!" You are not challenging anything or gaining any insight with this assumption. It is just throwing up your hands and saying "We just don't know! It will be awesome!".

I am saying that from all the evidence we actually have, there is plenty of reason to think intelligence may well have inherent limitations.

@WAHa_06x36 @Gargron I understand your scientific (and fair!) approach to this topic.

However, science largely is "We don't know, but I'm still going to shine a light into the dark", wouldn't you agree?

There are a few topics which carry a huge bias in terms of perception, intelligence and understanding (neuroscience comes to mind), and I think AI is one of them.

@moritzheiber @Gargron Science is shining a light into the dark, yes. But the kind of fantasies people have about AI are much more religion than science: An urge to look into the dark and imagine paradise to be somewhere behind it.

@WAHa_06x36 @Gargron I think that's an obvious assumption. However, what if, for example, I were content with the notion that I'm not all-knowing and all-powerful, and that I will never quite understand what the dark (or the sky beyond the dark) looks like?

I think wanting to understand what's beyond, no matter what, is definitely why religion is still such a driving force in our society.

Humanity's narcissistic tendencies are dominating the discourse...

@Gargron @WAHa_06x36 ... I'm arguing that AI wouldn't necessarily suffer from this fallacy