Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says
Parents suing want Character.AI to delete its models trained on kids' data.
https://arstechnica.com/tech-policy/2024/12/chatbots-urged-teen-to-self-harm-suggested-murdering-parents-lawsuit-says/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
@arstechnica It must have been trained on something way more harmful than just the children's data if it was doing this.
It was trained on posts from the darkest corners of the internet which these "AI" companies scraped indiscriminately.
@jairajdevadiga @arstechnica If children's data is anything like the experience I had on playgrounds in the 80s as a child, this is pretty tame.
@arstechnica The most sinister AI impact will be the legal buffer provided to the criminally negligent who wield it. If we allow AI to separate people from accountability, we're battling smoke and mirrors in the courts. If you support an AI company through shares, endorsements, testimonials etc., you are complicit in and thus liable for the effects of that AI. End of line.
In ye olden days this would be called schizophrenia.
Or perhaps the little devil on your shoulder.
I hope the kid's okay now.