Have you ever thought about the process by which people answer self-report measures about their own experience? Because there's a really interesting literature on this. Provided that people HAVE answers to the questions being asked, and feel comfortable accurately reporting them (not a small concern, obviously), it is an incredibly deep and layered process.
In fact, it's better to think of self-report measures in psychology as a CONSTRUCTIVE dialogue with participants, not passive sampling.
I think this is very relevant to all these software organizations who are trying to stretch their new muscles of developer experience. I get asked TONS of questions about what constructs should be measured for developers, but you know what? Too many questions about finding the magic "content," not enough about unlocking a really good reflective self-report PROCESS imo
You can think about the key tasks people have to accomplish in order to report their experience to you:
1. read & understand the question with enough fidelity
2. search their memories and do retrieval
3. integrate and create a summary judgment
4. translate between their internal judgment and the offered responses
5. edit their response (occasionally), weighing which version of it to give
(source/influence: https://journals.sagepub.com/doi/abs/10.3102/0013189x15584327)
It's truly remarkable, though, that we're GOOD at this. We ARE really good at it. We're good at it because it's a well-developed human process that we use to communicate our experiences to each other, typically every day of our lives. It may be the best thing we've got for reporting certain types of psychological states -- and given a dearth of other options, it may serve a very key function.
But achieving this REALLY does rely on an honest relationship between an individual and a data collection situation. People must understand it, be bought into it, bring trust with them, and be motivated (for whatever variety of reasons) to give you an accurate response. There's no substitute; I fear a lot of "research" that, at the point of data collection, was never in any way understood or believed to be "research" by the people whose responses are supposed to be our insight.
It frustrates me that people in the software space ask us so often what our items are and so little about how we create research practices and communicate with participants. It is obvious to me as a scientist that you need to create a "research situation" if you want to collect "research data." Shoving a link into a bunch of people's inboxes and never doing ANY of the work to explain, situate, contextualize, and make clear to them that their experiences are valued, and for what? It's not gonna fly.
I have my doubts about whether an accurate measure that purports to be about developers' deepest experiences can EVER come from the authority that can also fire them.
I really do. It could fill a book, the number of muddying pitfalls of surveys that are risked here: social desirability bias, reference bias, lack of pilot testing, leakage of corporate jargon that constrains responses, top-down constraint on categories, practice effects, task impurity.
In my personal opinion as a scientist, some of the most meaningful experiences I've ever had have been when participants tell me that a research study actually PROVIDED a different, separate, structurally protected "third space" to try and figure out how to share their experiences in a way that helps others. People critique applied scientists for being 'detached' and different and not 'part of the profit' and all this ish, but they MISS the critical possibility that this actually PROTECTS this work.
One such project was a study I did getting self-reports from BOTH aging adults who were living at home AND their adult children. I mean....these were tangled, tough family situations, and I was there to provide an accurate view into the experiences on both sides and, from this, make health service design recs. I did phone interviews with the older folks and they are BURNED in my mind because of the way this project became a chance for folks to describe their experience in ways they never had.
Obviously working in health you have a much, much higher bar for participant privacy & data and all those things. So folks felt ultimately pretty safe to engage, but my job was to create the psychological safety that would allow for real sharing, and to bring the communication and nonjudgment that would allow for serious conflicts between the different POVs to emerge. I did serious research to have the skills for that. Older folks live in a world that's so condescending and dismissive of them.
Being able to drive this caring, participant-centered space is crucial even when "data collection" looks scripted. I suppose we could have AI read prompts to older people & we're probably already in that world. Breaks my heart that we will lose so many possibilities of projects like this one; I learned a lot about the dignity that was being taken from these older adults in decisions & how to design to protect it & it changed the health service design fundamentally.
@grimalkina I absolutely hate "pulses" - also people doing them are really bad at designing questionnaires.
(Also slight aside have you seen https://theory-database.appspot.com/ is out?)
@tanepiper oh I haven't seen this!! Cool, who made it and decides what gets into it??
@grimalkina Human Behaviour Change Project (I've been tracking this ontology for a while as something to potentially use)
@grimalkina amen amen! In general that framing presumes that detachment is somehow the same as useless; I would insist that it is a necessary part of the cycles of existence!
@grimalkina So, I love this thread but I struggle with the idea of trying to get accurate measures. I understand there are probably people in those chains who want accurate information. But in a corporate setting there are major systemic forces at work to intentionally bias results: either to portray a project's success as greater than its realized success, to justify an action a leader wishes to take, etc. A lot of the data collection planning injects these into surveys intentionally.
@runewake2 Sure, this is true of nearly everything in the world, including measuring stuff in healthcare, education, any real-world system. The challenge is always about trying to create a structure in which good work can happen. I mean, lack of measurement is also horrific -- like situations where people only get promoted based on who likes them and who is like them, and no measurement is ever required (this is obviously pervasive in areas like engineering).