. @HerraBRE OpenAI built a text generation model that can write fairly good essays (about the level of a 45 press conference: decent English but incoherent). So they did not release the full model or the training code, for fear that bad actors would misuse it. Never mind that large companies/states will have no problem replicating the results. https://blog.openai.com/better-language-models/
@Tryphon Yes, I read about it. Sounded quite responsible of them.
Since much of my career was spent fighting spam (or just dealing with the fallout from their trashing of the commons), I'm quite happy to see people aren't giving those low-lifes more things to weaponize.
I take it you disagree. 😁
@HerraBRE I don't necessarily disagree, but I wonder what they were thinking when they started OpenAI? That they would only get results that, magically, could only be used for good?
Also, they did publish. It's just that it will take some time and money to replicate their results. A few weeks at most for Google, Facebook, Amazon or Microsoft.
@Tryphon Which is fine, IMO. Those are not the only bad actors in the world.
Far from it: there are lots and lots of low-lifes out there who are currently held back only by their own ineptitude or lack of resources.
The scientists who worked on nuclear fission and fusion had to confront these issues; I see no reason why compsci and AI should get a pass. These issues are far too complex for all-or-nothing binaries.