Co-authorship practices are known to differ among distinct scientific fields, but may do so even within research communities belonging to the same field. As a consequence, standard normalisation procedures may fail and not allow a correct comparison of individual scientific performance within a given field. For instance, in recent years groups of scientists in experimental physics, particularly in the high-energy sector, have started to work within projects involving very large research groups, sometimes comprising thousands of researchers, while other scientists operating in the same general field have continued to work within more traditional, smaller scientific collaborations. As a consequence, evaluating the scientific contribution of researchers in experimental physics has become particularly difficult, even when traditional field normalisation is taken into account. The aim of this paper is to investigate the relevance of this phenomenon and to discuss a method for taking co-authorship into account when evaluating scientists, using the Italian National Scientific Habilitation programme as a case study.
Traag, V.A.; Malgarini, M.; Cicero, T.; Sarlo, S.; Waltman, L. 2018
A common element of all performance-based research funding systems is the need to evaluate research. A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes, or whether they differ substantially. We here study peer review uncertainty at the institutional level. We rely on data collected by ANVUR, the agency tasked with implementing the Italian research assessment exercise known as the VQR. We find that peer review agreement is generally higher at the institutional level than at the publication level. Similarly, correlations between peer review and metrics also tend to be higher at the institutional level. Finally, we find that the correlations between journal metrics in particular and peer review are on par with the correlations between two peer reviewers. Our results support the possibility of using metrics in combination with peer review for evaluation purposes.