mastodon.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
The original server operated by the Mastodon gGmbH non-profit.


#同儕審查 (peer review)


🌘 Dear journals: stop hoarding our papers
➤ Why single-submission policies need to die (and what to do in the meantime).
nature.com/articles/d41586-023
The author argues that it is time to re-evaluate single-submission policies, proposing strategies to help authors make more effective use of the existing publishing landscape and calling on scientific publishers to rethink the policy.
+ Scientific publishers should re-evaluate single-submission policies to better support authors' career development and the rapid dissemination of scientific knowledge.
+ The article offers useful strategies for authors to navigate the current publishing environment and reiterates the call for publishers to re-evaluate single-submission policies.

www.nature.com · Dear journals: stop hoarding our papers · Why single-submission policies need to die (and what to do in the meantime).

🌘 [1806.06237] PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review
➤ A fair and accurate reviewer-assignment algorithm for peer review
arxiv.org/abs/1806.06237
We consider the problem of automated assignment of papers to reviewers in conference peer review, with a focus on fairness and statistical accuracy. Our fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers. We design an assignment algorithm based on an incremental max-flow procedure that we prove is near-optimally fair. Our statistical accuracy objective is to ensure correct recovery of the papers that should be accepted. We provide a sharp minimax analysis for a popular objective-score model as well as for a novel subjective-score model that we propose in the paper. Our analysis proves that our proposed assignment algorithm also leads to near-optimal statistical accuracy. Finally, we design a novel experiment that allows for an objective comparison of various assignment algorithms and overcomes the inherent difficulty posed by the absence of a ground truth in peer-review experiments. The results of this experiment, as well as of other experiments on synthetic and real data, corroborate the theoretical guarantees of our algorithm.
+ An algorithm like this matters for ensuring the fairness and accuracy of peer review, and could be of real value to the academic community.
+ How practical is this algorithm? Has it already been used in actual peer review…

arXiv.org · PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review
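The fairness objective in the abstract above — maximize the review quality of the most disadvantaged paper via an incremental max-flow procedure — can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: the similarity matrix, the per-paper load k, the reviewer cap ell, and the threshold search are all illustrative assumptions, and for simplicity it maximizes the smallest single paper–reviewer similarity used in the assignment rather than the per-paper similarity sum that the paper optimizes.

```python
# Minimal sketch of a max-min fair reviewer assignment via max-flow.
# Illustrative assumptions (not from the paper's code): a dense
# paper-by-reviewer similarity matrix, each paper needs k reviews,
# each reviewer takes at most ell papers. We binary-search the largest
# similarity threshold tau at which a complete assignment still exists.
import networkx as nx
import numpy as np

def feasible(sim, tau, k, ell):
    """Max-flow check: can every paper get k reviewers using only
    (paper, reviewer) pairs with similarity >= tau?"""
    n_papers, n_revs = sim.shape
    G = nx.DiGraph()
    for p in range(n_papers):
        G.add_edge("s", f"p{p}", capacity=k)    # each paper needs k reviews
    for r in range(n_revs):
        G.add_edge(f"r{r}", "t", capacity=ell)  # reviewer load limit
    for p in range(n_papers):
        for r in range(n_revs):
            if sim[p, r] >= tau:
                G.add_edge(f"p{p}", f"r{r}", capacity=1)
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    return flow_value == n_papers * k

def maxmin_assignment_threshold(sim, k=3, ell=6):
    """Largest threshold tau for which a complete assignment exists,
    or None if no complete assignment exists at all."""
    thresholds = np.unique(sim)  # ascending candidate thresholds
    lo, hi = 0, len(thresholds) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(sim, thresholds[mid], k, ell):
            best = thresholds[mid]
            lo = mid + 1         # feasible: try a stricter threshold
        else:
            hi = mid - 1         # infeasible: relax the threshold
    return best

rng = np.random.default_rng(0)
sim = rng.random((10, 20))       # toy data: 10 papers, 20 reviewers
tau = maxmin_assignment_threshold(sim, k=3, ell=6)
print(f"largest feasible min-similarity threshold: {tau:.3f}")
```

Re-running the feasibility check at progressively stricter thresholds mirrors the incremental flavor of the paper's procedure: paper–reviewer edges are effectively admitted in decreasing order of similarity until a complete assignment is no longer possible.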