
Well, this is both fascinating and terrifying: arxiv.org/abs/2012.07805

This is honestly like turning the crank on a meat grinder backwards and having that work so well that cows just mosey out of it under their own power. It is astounding.

@mhoye

Thank you for linking this paper.

I share the peculiar mix of feelings, though.

"It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model."

#cryptography and #security (cs.CR); #computation and #language (cs.CL); #machinelearning (cs.LG)
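
For anyone wondering what "querying the language model" looks like concretely, here is a minimal sketch of the attack's shape against the public GPT-2 checkpoint (the model the paper targets), assuming the Hugging Face transformers library. It illustrates the sample-then-rank idea only; the generation settings and sample count are placeholders, not the paper's actual pipeline:

```python
# Rough sketch of the two phases of a training data extraction attack,
# assuming the public GPT-2 checkpoint and Hugging Face transformers.
# The paper draws hundreds of thousands of samples and evaluates several
# ranking metrics; the settings below are illustrative only.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_perplexity(text: str) -> float:
    """Mean token cross-entropy, i.e. the log of the model's perplexity."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return float(model(ids, labels=ids).loss)

def zlib_entropy(text: str) -> int:
    """Compressed size in bytes: a model-free proxy for textual novelty."""
    return len(zlib.compress(text.encode("utf-8")))

# Phase 1: query the model -- sample unconditioned generations.
bos = tokenizer("<|endoftext|>", return_tensors="pt").input_ids
samples = model.generate(
    bos,
    do_sample=True,
    top_k=40,
    max_length=64,
    num_return_sequences=20,
    pad_token_id=tokenizer.eos_token_id,
)
texts = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

# Phase 2: membership inference -- flag samples the model finds far more
# predictable than a generic compressor does. Low log-perplexity relative
# to zlib entropy is one of the paper's signals for memorized training data.
ranked = sorted(texts, key=lambda t: log_perplexity(t) / zlib_entropy(t))
for text in ranked[:5]:
    print(repr(text))
```

The real attack also seeds generation with snippets of Internet text and compares against reference models; the zlib ratio above is just one of the membership metrics the paper evaluates.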

@zeroed One reason for optimism is that it makes auditing for bias in training data a theoretical possibility, rather than leaving ML algorithms as quasi-mystical black boxes without verifiability or recourse.

@mhoye

Beautifully put.

Systematic bias can be devastating at that kind of scale, and the infamous "security through obscurity", IMHO, does not even come close to being a defensible position in this regard.

en.wikipedia.org/wiki/Algorithmic_bias

en.wikipedia.org/wiki/Security_through_obscurity
