Collecting useful user data without compromising our privacy: how carefully controlled lying could be the way forward.

@sil I like the idea of lying (you can decide if that's true or not). I'm curious whether we should let the user choose whether they lie or not. In theory this would likely be just as statistically valid, but it would give users the feeling of being in control of the data.

"We're going to send this data about your computer to Canonical:
[Nope] [Adjust values] [Sure]"

@ted the flaw there is that, as the piece outlines, you need to tune the amount of lying to balance "get accurate info in aggregate" against "protect user privacy". If you ask people that question, they won't know how to answer it -- why should I adjust? what's good about it? -- and you'll have no idea how much lying happened, so you won't know how accurate your data is at all, which makes it a lot less useful.
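(The technique being discussed appears to be the classic randomized-response protocol. As a minimal illustration, here is a rough Python sketch, with hypothetical names and numbers, of why the collector needs to know the lie probability: the aggregate is recovered by inverting noise of a known, fixed size.)

    import random

    def randomized_response(truth, p_truth=0.5):
        # With probability p_truth, report the truth;
        # otherwise report a fair coin flip regardless of the truth.
        if random.random() < p_truth:
            return truth
        return random.random() < 0.5

    def estimate_true_rate(responses, p_truth=0.5):
        # Observed "yes" rate = p_truth * true_rate + (1 - p_truth) * 0.5,
        # so invert it using the *known* amount of lying.
        observed = sum(responses) / len(responses)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    truths = [random.random() < 0.30 for _ in range(100_000)]  # 30% really have the trait
    reports = [randomized_response(t) for t in truths]
    print(estimate_true_rate(reports))  # ~0.30, though no single report is trustworthy

With p_truth fixed at 0.5 by the protocol, any individual "yes" is deniable, yet the population rate comes back accurately; the inversion only works because the 0.5 is set by the protocol, not chosen per user.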

@sil I would make sure to get a designer involved for better text; I was more trying to illustrate the point. Certainly knowing the nature of the noise added to the system would give you better results, but I don't think you have to. For instance, it'd be hard to know precisely how many people lie about committing a crime. Your error bars increase, but if things are that close, it's not actionable data anyway.
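(To make the error-bar point concrete: a rough sketch, under the purely hypothetical assumption that 40% of users press [Sure] and skip the lying step while the collector still assumes the standard coin-flip rate. The recovered rate drifts by an amount that depends on the unknown opt-out fraction, which is the extra uncertainty both sides are gesturing at.)

    import random

    def estimate_true_rate(responses, p_truth=0.5):
        observed = sum(responses) / len(responses)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    true_rate = 0.30
    opt_out = 0.40  # fraction who skip the lying step; unknown to the collector
    reports = []
    for _ in range(100_000):
        truth = random.random() < true_rate
        if random.random() < opt_out:
            reports.append(truth)  # user chose to answer truthfully outright
        else:
            # user follows the coin-flip protocol
            reports.append(truth if random.random() < 0.5 else (random.random() < 0.5))
    print(estimate_true_rate(reports))  # ~0.22 rather than ~0.30: the estimate drifts

Without knowing the opt-out fraction, the collector can only bound the true rate within a range, so the honest report is a wider interval rather than a single number.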

@ted perhaps so, yeah, and someone with more data science chops than I could doubtless quantify that so it's OK. My objection to asking is really that it makes people care about a thing that they shouldn't have to care about. It's like popping up a dialog to ask whether the kernel should defragment your memory. I don't know or care; decide for me!

@sil I agree, but I think when it comes to data privacy people are surprisingly sophisticated. Certainly most would use Nope/Sure in my example, but I think giving the option to lie makes it fit into human notions of trust.

We have the computing power today to make computers more human, instead of what we've done in the past where we taught humans how to work with computers.