Scared Silicon Valley billionaire:

Look, you're just not grasping the scale of the AI threat. What if an AI one day is built that decides to harvest large chunks of the Earth's biosphere in order to make some numbers in a financial engine go up slightly? What would we all do then?

Climate change and species loss researchers:

Really? That's a thing that could happen? And people might not believe the predictions? Do go on.

Billionaire:

Well OBVIOUSLY I don't mean it's happening NOW

@natecull

Narrator: The billionaire looked around nervously, hoping their cover wasn't blown.

It was.

@natecull the AI threat isn't really a distinct threat to humanity as much as it is a threat to the power of billionaires

@lm @natecull This.

People don't like suddenly discovering that they're now a small fish in an even bigger pond.

@natecull What if an AI decides to start killing billions of sentient beings every year while simultaneously running a marketing campaign to convince people that their flesh is healthy, and this marketing campaign also affects public school education for decades?

@natecull Also this joins perfectly with my "The global economy is a de facto distributed AI" hypothesis.

@sandrockcstm @natecull definitely

although I don't really see much of a difference between "rogue AI driven only by capital motive" and "out-of-touch billionaires driven only by capital motive" for the rest of us, y'know?

@lm @natecull I think the difference is one of capability.

An AI can move exponentially faster than a human, and therefore can accomplish basically any task faster than we could stop it. This has always been the true danger of an intelligence driven by physical forces that approach the speed of light.

The real nightmare is if an AI decides that the best way to accomplish its goal is to remove a confounding variable: humans, and decides to wipe us out. (cont)

@lm @natecull The example often cited is a "Gray Goo" apocalypse.

Imagine an AI designed only to build the best paper clips in the world. It decides the best way to do this is to colonize all of known space and convert it into paper clip factories. It converts some of its facilities into nano-bot production without anyone realizing, and then uses nano-bots to kill humanity in a week.

It then happily continues its task on a barren earth, making paper clips forever.

@lm @natecull This is even more plausible with a capitalistic AI who is programmed by people that don't give a flying fuck about anyone else. What constraints would realistically be programmed into an AI whose only purpose is to maximize profit? And in theory such an AI could accomplish this far better than any human would, with zero compunctions about human suffering. (end)
