❝
I was sitting in front of him telling him that the internet, a computer, technology, all these supposedly authoritative things … were wrong. And that I, one person, was right. He basically *couldn't* believe me.
❞
https://miniver.blogspot.com/2024/07/ai-students-and-epistemic-crisis.html
@Miniver are you teaching now? I start in Sept with a new cohort and I am bracing for the AI conversation
@Miniver this is the main problem our society faces now. No critical thinking skills, no understanding of logic and reason can help when your basic facts are incorrect. We cannot communicate and build when we do not share the same reality. People have been conned into thinking they can trust a computer, not understanding that most AIs just parrot words that look like they go together. Without epistemological understanding, we can't trust what we see on the screen.
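To make that "parroting" point concrete, here is a minimal sketch: a toy bigram chain in Python. It is nothing like a production transformer, and the corpus is invented for illustration, but it shows the core idea of picking words purely by what tends to follow what, with no check against reality:

```python
# Toy "parrot" text generator: a bigram Markov chain.
# Deliberately simplified -- real LLMs use transformers over subword
# tokens -- but it illustrates the claim above: the model only knows
# which words tend to follow which, not whether the output is true.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the log the dog ate the bone").split()

# Count which words follow each word in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=8):
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # statistics, no reality check
    return " ".join(words)

print(generate())  # e.g. "the cat ate the bone" -- fluent-looking, possibly false
```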
@Miniver One of the primary cognitive dysfunctions that occurs in psychosis is a failure of what is called "reality testing". The result is that the person cannot tell delusions from reality, and this makes finding a path toward mental health much more challenging.
Generative AI is at its very core psychotic, because it fails so spectacularly at reality testing. Which wouldn't be so bad, except as you note, people treat it as authoritative. A psychotic authority.
@bertwells no, it is not. To be psychotic (or to have hallucinations) it needs to be a person (more generally, to have a brain). I don't consider this a minor nitpick: how we discuss these things partly defines how we think about them. @Miniver
@pmk @Miniver I understand your disagreement. Words matter.
But it is also important to understand HOW a model fails. The language models underlying so-called generative AI are extremely powerful at outputting well-formed chunks of language, but incredibly weak at reality testing. In this regard, they are analogous (but not equivalent) to people who are in acute psychosis.
@bertwells @Ooze @Miniver Humans seem to do this with things other than Generative AI as well. Example: Trump and his followers.
@rberger @bertwells @Ooze @Miniver
People hate the burden of decision making - even choosing your reality is a choice that they want to offload to someone else. Trump was willing to take that load for them without question as long as they didn't ask questions of him, just as AI is.
Seriously. Ask an LLM questions about one thing for a while and you will see that it is not a reliable intelligence, artificial or otherwise.
There's a reason Republicans want to destroy secular education in America.
@bertwells @Miniver (Quiet, intense, screaming)
It highlights one useful distinction I hadn't thought of previously. AI is obviously a b******* machine, but worse than that, it's a bespoke b******* machine.
@Miniver Well the good news is: if they "search" for the same thing a second time ... they'll get a different answer.
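A sketch of why repeated "searches" come back different: generative models usually sample the next token from a probability distribution rather than always taking the top choice. The scores and answer strings below are hypothetical, purely for illustration:

```python
# Temperature sampling: softmax over assumed model scores, then a
# random draw. Repeated runs can return different "answers".
import math
import random

def sample(logits, temperature=1.0):
    # Softmax with temperature, then draw one index at random.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]

answers = ["Answer A", "Answer B", "Answer C"]
logits = [2.0, 1.5, 0.5]  # hypothetical model scores, not from a real model

for i in range(3):
    print(f"run {i + 1}:", answers[sample(logits)])
# Different runs can print different answers. Greedy decoding (always
# taking the argmax) would be deterministic; sampling is not.
```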
@fishidwardrobe @Miniver
Class Exercise:
1. Create a list of search tools including books.
2. Distribute list among class and give them one thing to search for.
3. Collate results, compare, contrast, discuss.
@Miniver I'm curious who they were getting harassment from on Twitter before they deleted the original post. AI stans?
@Miniver That accurately reflects my recent impressions during an #OpenBooks #exam and students' use of #AI. By now I think the students genuinely believe that AI (and by extension a lot of tech) is just so much smarter than they are :(
That's #sad and #discouraging :(
@Miniver of course for saying this reasonable thing he got shouted down and harassed on fucking twitter.
@Miniver
Looks like a made-up case to me.
1. How did the kid get the same unique nonsensical answer repeatedly?
2. What does the author mean by "I alone vs. the entire internet"? He opened a search bar and typed something in. The search results will differ greatly from the ChatGPT answer. It will be "internet vs. internet".
I agree with the general problem, but this case looks made up to me.
@Miniver An interesting observation, but a core part of the problem seems to be the idea that kids are doing what they have been taught to do by looking things up like that. Discrimination of sources and verification of information clearly needs to be taught much earlier now than in previous generations. Not enough now to tell kids to just go to the library. Looking up information is no longer the key skill that it once was.
ChatGPT could be a problem, or it could be a chance to teach kids about the needs of shareholder capitalism.
@Miniver
This goes on my list of the scariest things I've ever heard.
That list is being updated almost daily :-(
#democracy is more than how we vote, but we must vote to have a democracy!
"The ideal subject of totalitarian rule is...people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist."
- Hannah Arendt. The Origins of Totalitarianism. 1967
[Note: Arendt was a terrible racist, but she literally wrote the book on totalitarianism]
@Miniver
This kind of thing is definitely an increasing worry of mine.
I'd placed what I'd also thought of as the ongoing "Crisis of Epistemology" as starting a bit earlier with social media groupthink and conspiracy discussion boards leading to things like anti-vaxx in the middle of a pandemic, incel mutual support, flat-earthism (!?), etc.
But mad-libs chatbot AI treated as a superior, authoritative source is definitely a frustrating recent twist to the problem.
I haven't hit this yet, but dread the day I do. I work with adult learners in the IT/Cybersecurity field so they mainly have at least a vague understanding that LLMs are stupid word-stringing-together boxes.
@johntimaeus @Miniver wish my coworkers (IT professionals who are now even consulting in the "AI" area) understood that…