this has to be one of the weirdest data sets I've ever seen; the result of asking people to pretend to be automated car assistants in order to train better automated car assistants https://nlp.stanford.edu/blog/a-new-multi-turn-multi-domain-task-oriented-dialogue-dataset/
@Tryphon yeah the weird thing about this data set is that (as I understand it?) it's asking people not to adopt the role of a helpful person but specifically to adopt the role *of the car*, reifying the abstract behavior of the automated agent instead of treating the appearance of automation as an undesirable property. like having a dataset for text-to-speech where people are encouraged to read the text in such a way that they sound like a computer
@aparrish yeah, very contrived for sure. Since the goal is to collect data to train a speech recognition or NLP system, it has to be as close as possible to what would be said in an actual human-computer interaction, hence the rigid computer side of the conversation. But as I understand it, only the human side of the conversation is of interest.