
Jonathan Korman


I was sitting in front of him telling him that the internet, a computer, technology, all these supposedly authoritative things … were wrong. And that I, one person, was right. He basically •couldn’t• believe me.

miniver.blogspot.com/2024/07/a

miniver.blogspot.com: “AI”, students, and epistemic crisis (Personal blog of Jonathan Korman)

@Miniver are you teaching now? I start in Sept with a new cohort and I am bracing for the AI conversation

@gretared I would probably let them ask it why/how you should [do something that's obviously bad for you, like the famous "checking cooking oil temperature with the back of your hand"], and have them marvel at its BS justifications
@Miniver

@gretared let them ask ChatGPT to justify failing students who use ChatGPT, and then ask them if they still think using it is a good idea ;)
@Miniver

@Miniver this is the main problem our society faces now. No critical thinking skills, no understanding of logic and reason can help when your basic facts are incorrect. We cannot communicate and build when we do not share the same reality. People have been conned into thinking they can trust a computer, not understanding that most AIs just parrot words that look like they go together. Without epistemological understanding, we can't trust what we see on the screen.

@Miniver One of the primary cognitive dysfunctions that occurs in psychosis is a failure of what is called "reality testing". The result is that the person cannot tell delusions from reality, and this makes finding a path toward mental health much more challenging.

Generative AI is at its very core psychotic, because it fails so spectacularly at reality testing. Which wouldn't be so bad, except as you note, people treat it as authoritative. A psychotic authority.

@bertwells No, it is not. To be psychotic (or to have hallucinations) it needs to be a person (or, more generally, to have a brain). I don't consider this a minor nitpick: how we discuss these things defines in part how we think about them @Miniver

@pmk @Miniver I understand your disagreement. Words matter.

But it is also important to understand HOW a model fails. The language models underlying so-called generative AI are extremely powerful at outputting well-formed chunks of language, but are incredibly weak at reality testing. In this regard, they are analogous (but not equivalent) to people who are in acute psychosis.

@bertwells @Ooze @Miniver Humans seem to do this with things other than Generative AI as well. Example: Trump and his followers.

@rberger @bertwells @Ooze @Miniver
People hate the burden of decision-making; even choosing your reality is a choice they want to offload to someone else. Trump was willing to take that load for them without question, as long as they didn't ask questions of him, just as AI does.

Seriously. Ask an LLM questions about one thing for a while and you will see that it is not a reliable intelligence, artificial or otherwise.
There's a reason Republicans want to destroy secular education in America.

@Miniver

It highlights one useful distinction I hadn't thought of previously. AI is obviously a b******* machine but worse than that it's a bespoke b******* machine.

@Miniver Well the good news is: if they "search" for the same thing a second time ... they'll get a different answer.

@fishidwardrobe @Miniver
Class Exercise :
1. Create a list of search tools including books.
2. Distribute list among class and give them one thing to search for.
3. Collate results, compare, contrast, discuss.

@Miniver I'm curious who they were getting harassment from on Twitter before they deleted the original post. AI stans?

@Miniver That accurately reflects my recent impressions during an #OpenBooks #exam and students' use of #AI. By now I believe that the students seriously believe that AI (and by extension a lot of tech) is just so much smarter than them :(
That's #sad and #discouraging :(

@Miniver of course for saying this reasonable thing he got shouted down and harassed on fucking twitter.

@Miniver
Looks like a made up case to me.
1. How did the kid get the same unique nonsensical answer repeatedly?
2. What does the author mean by "I alone vs. the entire internet"? He opened a search bar and typed something in. The search results will differ greatly from the ChatGPT answer. It will be "internet vs. internet".
I agree with the general problem, but this case looks made up to me.

@ditol @Miniver the story lacks cohesiveness, but it could simply be that it was crudely anonymized. The missing details that would make it "ring true" would also make it effortless to identify the student.

@tob @Miniver Yes, this is a constant problem with retelling a story without pointing at the person. But my questions relate to the very general setting, not to some details that could get lost due to anonymisation. :)

@Miniver An interesting observation, but a core part of the problem seems to be that kids are doing what they have been taught to do by looking things up like that. Discrimination of sources and verification of information clearly need to be taught much earlier now than in previous generations. It is no longer enough to tell kids to just go to the library. Looking up information is no longer the key skill that it once was.

@Miniver
This goes in my list of the scariest things I ever heard of.
This list is being updated almost daily:-(

#education #ai #search

#democracy is more than how we vote but we must vote to have a democracy!

@Miniver

"The ideal subject of totalitarian rule is...people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist."
- Hannah Arendt. The Origins of Totalitarianism. 1967

[Note: Arendt was a terrible racist, but she literally wrote the book on totalitarianism]

@Miniver
This kind of thing is definitely an increasing worry of mine.

I'd placed what I'd also thought of as the ongoing "Crisis of Epistemology" as starting a bit earlier with social media groupthink and conspiracy discussion boards leading to things like anti-vaxx in the middle of a pandemic, incel mutual support, flat-earthism (!?), etc.

But mad-libs chatbot AI treated as a superior authoritative source is definitely a frustrating recent twist to the problem.

@Miniver

I haven't hit this yet, but dread the day I do. I work with adult learners in the IT/Cybersecurity field so they mainly have at least a vague understanding that LLMs are stupid word-stringing-together boxes.

@johntimaeus @Miniver wish my coworkers (IT professionals who are now even consulting in the “AI” area) understood that…