New York Times tech workers are on strike for fairer pay and just cause job protections.
Perplexity, an AI company that has repeatedly ripped off the work of human journalists and created a mess for human workers everywhere, is now offering its help to the New York Times.
See how these AI companies steal jobs from humans and then sell the same services back at a price. https://www.404media.co/perplexity-ai-offers-to-help-new-york-times-with-tech-union-strike/
This is not going to end well for human tech workers, journalists, writers, & artists. The billionaire AI tech bros found a new loophole to make money. The laws are slow to catch up with this kind of abuse. By the time laws are updated, a whole generation of tech jobs will be lost forever. They are already deploying AI in hospitals to replace trained medical professionals. This is not going to reduce your medical bills. You have to be a delusional fool to think this will help you.
Oh, but I'm a software developer. AI can't do my job. From the same site:
Netflix Bullish on Gen AI for Games After Laying Off Human Game Developers https://www.404media.co/netflix-games-ai-exec/
So, video game developers, artists, and voice actors are also being replaced. It is happening way too fast, and nobody is safe. Meanwhile, politicians are busy throwing mud at each other. They don't care as long as AI companies keep paying them. Nobody is going to protect your job and your family.
@nixCraft I don't think Perplexity means what they think it means.
Or maybe it does ;-)
@nixCraft I’m so glad we invented AutoScab technology in the 21st century
There's gonna be a lot of harm done with this shit. Really pisses me off.
@Kishi I agree. People with good hearts and rational thinking know well that this is theft, replacing all jobs with some shitty corporate AI overlords. We already know how much abuse X/Twitter, Facebook, Google, Insta, and TikTok do with misinformation. This is their end goal: 4 or 5 large corporations controlling everything. It sounds like a conspiracy theory, but it is happening right now.
@nixCraft When is AI going to replace the first politician? All they do can be done by a machine. Even better.
I mean, some decision making position like in "Wargames" (https://en.m.wikipedia.org/wiki/WarGames)
@Karsten Add CEOs and C-suites to that list too.
@nixCraft@mastodon.social "Safe" depends on how long you can weather the storm. Can "generative AI" write decent software? No. Can it replace medical professionals? No. Journalists? No. Will businesses try to do it anyway? Likely yes, until things break so badly people go back to people. What worries me is the in-between, both for those who lose their jobs (temporarily), and those harmed by the garbage produced meanwhile.
@nixCraft in the specific case of creative jobs, e.g. game dev you mention, remember this one peculiarity:
To be copyrightable, a work must have been produced specifically *by a human* (not even selfies taken by an ape can be copyrighted).
Thus Netflix cannot claim any copyright protection on any of the botshit they would be generating after sacking the human game makers.
Ripping genAI game assets, or even uploading the whole game publicly, cannot be a copyright violation.
@nixCraft I wonder though what the impact would be for DRM.
You would still need to break the restriction, so maybe Netflix will try to use laws like the US DMCA to sue the authors of circumvention tools that break the DRM on their botshit games.
But on the other hand, we're not speaking about copyright exemptions or fair-use exceptions here; botshit is not even copyrightable to begin with, so maybe the DMCA can't even be applied to the upcoming Netflix botshit games.
( @pluralistic : Do you have any idea?)
@nixCraft Out of all the possible examples, you had to give one of the few cases where machine learning is actually put to good use... Unlike GenAI, which mostly profits from stolen art and scraped social media posts, medical uses such as image classification to assist in cancer detection don't replace trained professionals; they let them focus on the cases that matter.
@Varpie I understand. Every tool, device, medicine, and piece of software used in the medical field is typically certified and approved by the FDA. I don't see Perplexity, OpenAI, or Microsoft obtaining such approval from the FDA. What happens when something goes wrong, or AI misses a cancer in a screening and someone dies? There is no procedure or even law in place to prevent this kind of abuse by AI companies, including responsibility for the loss of human life.
@nixCraft That happens all the time already, and that's why it's not going to replace trained professionals. Let's take another example: with Covid, some companies created tests to detect it. They were not perfect, so what happened when people were misdiagnosed, told they didn't have Covid when they did, and ended up in the hospital a week later? Who was responsible then? It's a different tool, but it's the same thing, and there are already regulations for it.
@nixCraft And regarding the lack of regulations, here are some examples that are FDA approved:
- GI Genius, for colon cancer detection: www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-first-device-uses-artificial-intelligence-help-detect-potential-signs-colon
- DermaSensor, for skin cancer detection: www.dermasensor.com/fda-clearance-granted-for-first-ai-powered-medical-device-to-detect-all-three-common-skin-cancers-melanoma-basal-cell-carcinoma-and-squamous-cell-carcinoma/
I won't look for more because there are plenty, but the FDA has been looking into medical uses of AI/ML since at least early 2021: www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan
@nixCraft It's not just that replacing humans with "AI" in health care won't save you money. It will also result in a reduced standard of care, and more injuries and deaths from mistakes. We have already seen those mistakes with AI transcription, which produces the same "hallucinations" as other uses of LLMs.
So-called AI is not intelligent at all.
@nixCraft companies that do this sort of thing will die sooner rather than later. LLMs aren't in a state to replace even badly trained/skilled people. This article is old but still very relevant - https://gizmodo.com/ai-chevy-dealership-chatgpt-bot-customer-service-fail-1851111825
@nixCraft
if AI...
...writes software and it fails, who do you blame? Who fixes it?
...provides incorrect or inappropriate information, who are you going to sue?
... etc.
What a great name for an AI company
From Cambridge English Dictionary
perplexity
noun [ C or U ]
US /pɚˈplek.sə.t̬i/ UK /pəˈplek.sə.ti/
a state of confusion or a complicated and difficult situation or thing: