If "data science" in the 2020s is starting to have the same social valence of existential terror as "nuclear science" did in the 1980s, which will come next:
Arguably all these have already happened in the last four years; Trump/Brexit was TMI, GDPR was SALT, NSA is NORAD, and a weird temporary alliance of the AI Ethics movement and Effective Altruism (should they ever be able to step into the same room without exploding) is the CND.
Of course what this joke misses is that in the 1980s we were *also* terrified of AIs, just like today, for much the same reasons.
Hence why The Terminator had a robot, not just a bomb, as its villain.
@natecull People were always afraid of AIs, but for the wrong reasons.
Instead of "what if the AI becomes too smart and decides it doesn't want or need humans", the real concern is "what if the AI is really dumb and programmed by really dumb people, but people make important decisions based on its (wrong) conclusions because they think it's smart".
@eldaking @natecull I heard a great comment in an interview with Kate Crawford, author of the Atlas of AI, that concepts of AGI safety based around the fear of AIs spontaneously becoming too smart for humans are like a child's understanding of technology as a form of magic. A term used for this was "enchanted determinism."
I think it perfectly sums up why the Effective Altruism AGI Safety crew loathes the AI Ethics/critical algorithms studies space, and why the AI Ethics space literally doesn't think about the AGI safety crew at all.
@vortex_egg @eldaking @natecull i think there might be a little more to it than that. if you compare to japanese technopocalypse narratives, like, say, Evangelion, superficially it has Judeo-Christian elements; they even dig deep and get it biblically accurate, unlike most western biblically inspired narratives. dig a little deeper, though, and it's just window dressing for "fuck, nuclear weapons are scary, aren't they? like really scary. like these scary things we found in the gaijin book"
For example, the idea of an apocalyptic "singularity" descends pretty directly from the "robot uprising" - which is a metaphor for a revolution or a slave revolt, down to the origin of the word "robot", mixed with fears about automation/machines "replacing" workers.
@vortex_egg @zens @natecull Another, even older, influence on stories about rogue AIs is the classic "human hubris is trying to invade the domain of the gods and do what is forbidden". This is present in Christian mythology, of course, but can also be found in Greek mythology... from where it is endlessly re-told.
In particular, trying to create life is a favorite. In science fiction, we could easily point to Frankenstein, which directly quotes Paradise Lost and so on.
@eldaking @vortex_egg @natecull oh yeah, funny story about that. years ago a Slovakian grandpa who shared my last name started talking to me on Skype. zero English. but he was persistent, so I tried to muddle through with Google Translate.
one day this comes out
“How goes the financial crisis? Here we ride atop robots”
I was really confused. I wrote "i have a job, but sadly I do not have a robot friend", translated it back to Slovak and sent it.
oh yes, like the German Agency for Refugees and Migration, when they started using voice AI to determine where people come from (link in German):
tl;dr Kurdish refugee comes to Germany, they make him take the voice test. Voice AI says he speaks Turkish. His asylum request is denied, because they assume he's lying. His native language wasn't in the database. A human translator confirmed he spoke a Kurdish dialect.
I'm still not convinced that an AI singularity as we think of it can happen.
For a singularity, code needs to get good at writing code that writes code that writes code.
(points at the code landscape around us and the complete lack of higher-level abstractions than "compiler")
That ain't happening and I don't think making neural nets bigger will help.
@natecull wish people were scared of the bomb again... enough of them are still here to be scared of.
If we have a Cuban missile crisis, but with Taiwan, that could be very bad. Recently Daniel Ellsberg leaked papers from around 1960 suggesting the US would use nukes if Taiwan were invaded.
Trump argued it's good to be a little insane, so the world doesn't quite know what the US might do. That's pretty much official policy.
"Trump argued it's good to be a little insane, so the world doesn't quite know what the US might do. That's pretty much official policy."
Yep. Trump's sin was that he said the quiet parts out loud.
Which in itself was still very scary because "not saying the quiet parts out loud" is the cornerstone of diplomacy; when everyone gets extremely honest, it's usually just before the shooting breaks out.