As a service to security researchers, I added this section to curl's HackerOne page:

AI

If you have used AI in the creation of the vulnerability report, you must disclose this fact in the report and you should do so clearly. We will of course doubt all "facts" and claims in reports where an AI has been involved. You should check and double-check all facts and claims any AI told you before you pass on such reports to us. You are normally much better off avoiding AI.

hackerone.com/curl


@bagder Just curious how it's that much different than, say, using any other autonomous mass scanner. What makes it any better or worse than, say, Metasploit or any of the dozens of other autonomous tool suites routinely used in pentesting? Seems a little hypocritical to selectively single out something that's only SLIGHTLY more automated. It completely ignores the fully autonomous software that protects the internet every day, too. Explain that to this non-hacker, could ya? What's the difference?

@Beggarmidas @bagder Other scanners are based on facts. They actually run curl's code and observe its behavior, or statically analyze the code, i.e. read it and report patterns that are known to be potential sources of problems.

With "AI" on the other hand, everything is fuzzy, everything is statistical probability. It can't reason, it can't observe, it can't really analyze things because it doesn't know what it's doing. It can only output stuff that sounds plausible, whether it's right or not.

@scy @bagder They are capable of generating false leads, too. That's a matter of correction through iteration, not a distinction in kind. To my admittedly outside view it just appears y'all are flipping out over something that at the end of the day is just a tool using other tools using other tools. I've used some of the hacker AIs. They still require extensive guidance, just like the other tools. It just takes it up a notch. It seems a silly point of genuine non-distinction.

daniel:// stenberg://

@Beggarmidas @scy I know of no other tools than AI-based ones that blatantly lie and mislead about their findings.

@bagder @scy Even back before I retired over a decade ago, fellas like y'all that'd come in redteaming our setups were semi-joking that you were busy automating yourselves out of a job. Many of you even set starting around now as a prospective timeline. From an outsider's viewpoint it just looks like y'all are going strangely Luddite as your prophecy starts to come true.

@Beggarmidas @scy @bagder No, we're AI-ing ourselves out of useful automation. By adding LLMs to our scanners we can no longer get useful scanner output.

@Beggarmidas @bagder @scy That assumes that AI would be useful; it's just guessing things, and it's mostly wrong when it comes to facts. Programming is facts layered on top of facts, so AI is useless without a human to check/correct it.

@bagder @Beggarmidas @scy More importantly, other tools consider false positives to be bugs on their part and fix them.