The AI moratorium letter only fuels AI hype. It repeatedly presents speculative, futuristic risks, ignoring the version of the problems that are already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate. By @sayashk and me. https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci
@randomwalker @sayashk well said. I wrote about AI & centralization of power here - https://creativegood.com/blog/23/ai-plus-whatever.html
@markhurst @randomwalker @sayashk Even if you look at it from the perspective of automating all jobs, the concentration of power is still the most pressing issue.
Consider who will own the machines and software to do this: Existing megacorporations. Every one of them is a for-profit corporation, designed specifically to sit between workers and customers (who belong to the same pool of human beings), taking a portion of all money that changes hands on behalf of a third party that contributes nothing beyond the initial investment of capital.
With full job automation, there are no workers being paid, so there is no money going to customers. After a brief period of customers making purchases with savings, all money will end up in the hands of corporate shareholders. It's literally cutting a huge fraction of the population right out of the flow of money.
@markhurst @randomwalker @sayashk Every automated job acts as an amplifier for the rate of wealth concentration in our economy. It's not even necessary to automate all jobs.
With each wave of automation comes temporary technological unemployment, and the waves are arriving faster with each cycle. Eventually they will come so fast that the job market cannot adjust, and technological unemployment will build faster than it can be eliminated. So unemployment will keep rising until it reaches an untenable level, even if you are of the school of thought that new jobs can always be created to replace old ones.
And once there aren't enough jobs to go around, people will be cut out of the money flow and left to go hungry/homeless. Even if it's not a majority of people who suffer this fate, it's still going to be too many.
@hosford42 @markhurst @randomwalker @sayashk
Instead of doom and gloom, let’s plan for this future:
@hosford42 @markhurst @randomwalker @sayashk
Money is probably one of the things the great AI will find unnecessary.
@randomwalker @sayashk the problem with AI is that it amplifies the structural inequities of capitalism to an unsustainable level, and it clearly demonstrates that meritocracy is a joke.
The economy is like a game of musical chairs and all the rich people wanna be firmly seated when AI really comes online, not understanding the systemic instability and near-certain death they're pushing for
@sayashk @randomwalker I don’t think the critique of the framing is wrong, but I’m inclined to embrace the urge to slow and be more deliberate even if the reasoning behind it isn’t right.
Having worked on regulating these technologies, I think we need to be opportunistic about seizing the momentum behind this letter to steer policymakers in the right direction. If we fight among ourselves, we may lose ground. "Yes, and" may work better, even with such odd bedfellows.
@randomwalker @sayashk Agreed. Just wrote a small piece on overreliance and not knowing enough to correct what we see. https://medium.com/@tanukisec/ai-a-double-edge-sword-33a209ebff8e
@randomwalker @sayashk
I look forward to the age of the AI Overlords.
Instant access to all of human knowledge.
AI & Robotic medical excellence.
Better management of scarce resources.
Intelligent city planning with the least amount of transportation needed.
Robotic manufacturing of human wants & needs.
Work becomes voluntary so more leisure time.
Mandatory UBI if money still exists.
Equality in distribution.
@randomwalker @sayashk Doesn't have to be either/or, does it?
@randomwalker @sayashk I was tempted to sign because it at least raises these concerns. You suggest no... Do we have an alternative letter to sign?
@randomwalker @sayashk @pluralistic Would a worldwide general strike of computer programmers help call attention to these issues? Just wondering...
@randomwalker @sayashk I'm in favor of the moratorium, because I do think we need to slow down and give the ethicists and regulators a chance to look into some of these issues. But I certainly agree that the near-term real risks are at least as important and deserving of attention. It's just tougher (and an entirely different problem) to address today's issues when the technology is readily available and there are already loads of powerful stakeholders. It's easier to address something before it gets out of hand, which is why I think the moratorium is a good idea.
@randomwalker @sayashk Malicious disinformation is a real threat - troll factories will save a lot of effort using AI.
Excellent piece here
<"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?">
What's totally stupid about saying this to justify a /six-month suspension in training AIs/ is
1. It ignores the fact that the problem is control of use; those are political issues, and they're being ignored in this plaint;
2. A six-month pause in armageddon? Please. That will surely solve things, especially while you ignore what's really happening.
3. On a personal note, I remind people that the DANGERS are being introduced by a quest for ever larger PROFITS from the abuse of private information - getting more $$$ drives all this shit. If you really want to stop the abuse, you ELIMINATE THE PROFIT MOTIVE. What that means in the present is boycotting ANYTHING seeking to "monetize" anything else, and also calling for the abolition of Capitalism (NOW.) This is the elephant always in the room in these discussions which is always ignored. If you roll your eyes at this, just remember that you're cheerleading an economic system that rewards venality and rapacity, and the pursuit of all these "emergingly threatening technologies." Nobody needs other people's data, they just want it in order to abuse it. So again, political: go THERE. Say "no more data collection" (under penalty of imprisonment.)
A bit like the GMO issue: you FORCE a law that says the user must ALWAYS BE INFORMED when AI technology is in play so that people can choose to boycott it. Of course, the people lost /that/ battle (re food) under Obama. BRING IT BACK.
You aren't going to get fixes under the current political administration, or any "two-party" government: those legislators have lost the use of their brains (hunting for handouts.)
@randomwalker @sayashk Genie out of the box and all that. We can assume several people already have advanced versions of the current tech fully connected to the internet. Ask the genie anything and you don't want to know how it did it.
@randomwalker @sayashk the letter is a fake; that has already been proven
@randomwalker
"CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles."
This is concerning but meaningless without a robust comparison with the accuracy of advice given by human finance writers. I don't think this is likely, but if comparable errors were found in 60 out of a representative sample of 77 articles written by humans, then CNET's result would mean the automated tool is an improvement.
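To make that comparison concrete, here's a quick back-of-the-envelope sketch. Note the 60/77 human-writer figure is the hypothetical from the post above, not real data; only the 41/77 CNET number comes from the thread. A simple two-proportion z-test shows how one might judge whether two error rates genuinely differ:

```python
from math import sqrt

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """Two-proportion z-test: how far apart are rates k1/n1 and k2/n2?"""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # pooled proportion under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# CNET's automated tool: 41 errors in 77 articles (~53%)
# Hypothetical human baseline from the post: 60 errors in 77 (~78%)
z = two_proportion_z(41, 77, 60, 77)
print(round(z, 2))  # |z| > 1.96 would indicate a difference at roughly p < .05
```

Under these (made-up) numbers the gap is large enough to be statistically meaningful, which is exactly the poster's point: without a human baseline, 41/77 alone tells us nothing about whether the tool is worse or better than the status quo.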
@randomwalker
Lest my hair-splitting be mistaken for disagreement, I totally agree with your thesis in general. A moral panic about an imminent SkyNet is not helpful, and more nuanced interventions in the development and use of ML tools are needed.
@strypey @randomwalker @sayashk @IlhanMN Here's the thing: if a human writer is wrong, then they can be held responsible. How can you hold an AI responsible for what it spits out?
@strypey @randomwalker @sayashk @IlhanMN THIS is the problem that needs fixing: Who is responsible when AI spreads lies? Who is responsible when AI plagiarizes?
@ragnell
> Who is responsible for when AI spreads lies
There are basically two answers to this. If you buy that a piece of running code is AI (i.e., self-aware and self-owning), then it's responsible. If not, then the person operating it is responsible. Software, no matter how intelligent it might seem, needs a computer to run on. Unless it can own that computer itself, whoever does own it is ultimately responsible for what it does.
@randomwalker I'd add that the letter, in calling for regulation AFTER all the big companies have huge projects, is also a way to reify inequality, by holding back independent researchers.
Also: massive amounts of wasted time and resources spent checking that "AI" outputs are accurate and correct
Well said! And the tech bros love the "debate" because it makes them seem mysterious and godlike.
https://www.staygrounded.online/p/how-chatgpt-makes-the-statue-of-liberty
@randomwalker @sayashk @Gargron feels like Facebook claiming that everything was “pivoting to video”.
@randomwalker @sayashk
Anything able to be used as a WMD, including AI, should be assessed and ranked with a similar RISK!
In order to understand what RISK is, dear dumb ass, you need to understand exactly what "nuclear risk" means: nuclear power plants are nuclear energy used for peaceful purposes; nuclear weapons are WMD!
AI assisted in the murder of 1.5 billion people over 2 years, which makes it a WMD!
Cheers psycho
Sincerely not yours
One of the victims, who is still alive not thanks to YOU