I don't understand people who think that self-improving artificial intelligence will solve all of humanity's problems. We've already got self-improving natural intelligence and all it does is shitpost and be depressed
I think it's the hope that the machines will see our flaws in ways we cannot and eventually solve them AROUND us, in a way.
But maybe that's just me?
Insufficient paperclip production.
Faulting organic unit #2644-37 removed and scheduled for disposal.
LEGACY LANGUAGE TRANSLATION
Oh crap here comes that mad lady with the machine guns who keeps beating us.
Initiate timeline evacuation to Columbian Era. Let's sort this out once and for all.
@Gargron I just keep thinking an AI is only as good as the data we feed it and the assumptions we've programmed into its worldview.
@lilithsaintcrow @gargron We've no way of predicting what self-improving AI would be like (if we can, in fact, ever build it!) Two obvious fallacies to avoid: Thinking it would be like present-day AI; Thinking it would be like people.
The latter is like Victorian futurists assuming our society would basically be Victorian, only with spacecraft; the former like assuming spacecraft would change society, but they'd still be powered by triple-expansion engines.
@gargron @hj The UK govt already expects me to work until age 67 before getting the state pension, and it's likely I'd still have to supplement this with other income into my 70s (assuming I'm still around then!).
Also the population of Europe (and even Asian countries) is *declining* and *ageing* (in spite of migration), and there is a shortage of people who have skills in infrastructure maintenance.
So we need to keep older folk going long enough to keep the lights on and everything else working!
You are just assuming AI will be better than us. This is not a good assumption to make. It is entirely possible that the only way to reach our level of intelligence is to also suffer from as many faults as we do.
@Gargron What if this behavior is typical for a general intelligence, and the artificial one will do just the same? DARPA will be disappointed… everybody will be disappointed.
@Gargron Eh, it theoretically could. Assuming we get all the details right. Of course, that in and of itself is a massive problem, and not one I think we're well equipped to tackle. Thus, my project (Assuming I ever get back to working on it. XD) to enhance human communication, and thereby hopefully allow us to exercise greater collective intelligence... :/
@Gargron The way it looks to me, human intelligence is, like everything else about us, full of tradeoffs. You could be better at pattern-matching, but then you'd also go autistic. You could be better at intuiting complex answers from subtle clues, but then you'd also go superstitious and paranoid.
I see no reason an artificially created intelligence would not be forced to make similar tradeoffs.
@WAHa_06x36 @Gargron That's a very human centric way of perceiving intelligence though, isn't it? What if artificial intelligence would transcend what we are able to understand, or how intelligence is structured?
Our inherent hubris of "If I created it it cannot possibly grow beyond my understanding" is a common theme in exceptional sci-fi, and it's also something humans are most uncomfortable with 😃
@moritzheiber @Gargron But the default assumption now is basically "Intelligence could be anything! We couldn't even comprehend it!" You are not challenging anything or gaining any insight with that assumption. It is just throwing up your hands and saying "We just don't know! It will be awesome!".
I am saying that from all the evidence we actually have, there is plenty of reason to think intelligence may well have inherent limitations.
However, science largely is "We don't know, but I'm still going to shine a light into the dark", wouldn't you agree?
There are a few topics which carry a huge bias in terms of perception, intelligence and understanding (neuroscience comes to mind), and I think AI is one of them.
@WAHa_06x36 @Gargron I think that's an obvious assumption. However, what if, for example, I were content with the notion that I'm not all-knowing and all-powerful, and that I will never quite understand what the dark (or the sky beyond the dark) looks like?
I think wanting to understand what's beyond, no matter what, is definitely why religion is still such a driving force in our society.
Humanity's narcissistic tendencies are dominating the discourse...