Using ChatGPT to make fake social media posts backfires on bad actors
OpenAI claims cyber threats are easier to detect when attackers use ChatGPT.
Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is currently amplifying online security risks.
ChatGPT prompts not only expose which platforms bad actors are targeting, but in at least one case they enabled OpenAI to link a covert influence campaign operating across X and Instagram for the first time.