@caseynewton The “mansplaining machine” metaphor continues to establish its superiority.

@dbreunig @caseynewton I know it doesn't have actual expertise, but the Dunning-Kruger Mansplaining Machine has a nice ring to it.

@caseynewton This is going to play out just like self-driving cars. People think it's really close, and just a bit of refinement will make it as good as humans, but that last little bit is exponentially harder than the part that came before.

@sharding @caseynewton the last 10% of the work takes 90% of the time

@caseynewton
Old: Skynet is sentient
New: CNET is stupid

@caseynewton Idiocracy was not imaginative enough. Black Mirror's dystopia wasn't hellish enough apparently either...we as humans feel the need to make the worst possible reality for some reason...even as we warn ourselves and ignore the warnings again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again. and again.

@caseynewton

I haven't even read the article yet, but I can't help remembering an old saying:

"To err is human, but to really foul things up requires a computer."

@caseynewton Won't #AI always be defined by #GarbageInGarbageOut? How can they be certain all of the source data is correct or ethical? How can they be certain the output is not plagiarism or copyright infringement?

@tanquist @caseynewton

> Won't #AI always be defined by #GarbageInGarbageOut?

It's only as defined by garbage in, garbage out as we are.

> How can they be certain all of the source data is correct or ethical?

They shouldn't have to be; it should be treated the same as a human looking at all the materials as research for a piece.

> How can they be certain the output is not plagiarism or copyright infringement?

It should be considered plagiarism or infringement if the output actually infringes, judged the same way we judge human work now.

@caseynewton This is why you don't let artificial intelligences publish things without sending them through a HUMAN editor. 😀

@caseynewton Seems like we need to develop some new ethics and ways of thinking about by-lines. Not only for journalists, but for anybody who publishes anything. Not that AI should be forbidden, but if somebody publishes something (co-)written by AI, they need to have fact-checked it, verified the reasoning, etc. Failing to do so should be treated like plagiarism.

@caseynewton There’s a certain delicious irony when the tool designed to save work and costs ends up creating more work for everybody instead.

@Abazigal @caseynewton this is literally called the paradox of automation:

that the people in the loop of a process/system become _more_, not less, important as its automation increases because we need to monitor and correct both the process/system and the automation

but also, they are valued less, because it's all automatic, isn't it?

@caseynewton I appreciated your kicker quote. After journalism school I worked GA at Bloomberg, turning press releases into news stories. They already had computers doing some of that in 2006. But the habits I developed doing that have served me well as a beat and even feature reporter. Most of all the habit of self-criticism, which the AI will never learn because it has no reputation to protect, no career to advance, not even a name.

@caseynewton Yesterday I had a meeting about content writing for our website and how there is no way we could use AI for the research. There's a difference between research and data analysis, and I wonder how much time it will take for some entities to understand that...

@caseynewton
ChatGPT feels like the logical end of search engine optimisation

@caseynewton The irony of the typo 'world' in a section about how AI-generated documents need to be meticulously reviewed is A+. :) And there's no 'feedback / report an inaccuracy' link either.

@caseynewton Thank you!
While I do admire the work, I remember very well what happened with Watson for Oncology at MD Anderson. I don't want to see that happen here.

@caseynewton Part that the Futurism article missed: text generators have already been used for over a decade in financial and sports news reporting.

The biggest difference (between the previous practice and the latest issues) is that text generation previously used prewritten article-structure templates as a starting point. Now it's a lot more random.

@caseynewton Still higher quality than Motley Fool.