Altmetric’s mission is to help others understand the influence of research online. We collate what people are saying about published research in sources such as the mainstream media, policy documents, social networks, blogs, and other scholarly and non-scholarly forums to provide a more robust picture of the influence and reach of scholarly work. Altmetric works with some of the biggest publishers, funders, businesses and institutions around the world to deliver this data in an accessible and reliable format.

Contents:
- Altmetrics, Ten Years Later, Euan Adie (Altmetric (founder) & Overton)
- Reflections on Altmetrics, Gemma Derrick (University of Lancaster), Fereshteh Didegah (Karolinska Institutet & Simon Fraser University), Paul Groth (University of Amsterdam), Cameron Neylon (Curtin University), Jason Priem (Our Research), Shenmeng Xu (University of North Carolina at Chapel Hill), Zohreh Zahedi (Leiden University)
- Worldwide Awareness and Use of Altmetrics, Yin-Leng Theng (Nanyang Technological University)
- Leveraging Machine Learning on Altmetrics Big Data, Saeed-Ul Hassan (Information Technology University), Naif R. Aljohani (King Abdulaziz University), Timothy D. Bowman (Wayne State University)
- Altmetrics as Social-Spatial Sensors, Vanash M. Patel (West Hertfordshire Hospitals NHS Trust), Robin Haunschild (Max Planck Institute for Solid State Research), Lutz Bornmann (Administrative Headquarters of the Max Planck Society)
- Altmetric’s Fable of the Hare and the Tortoise, Mike Taylor (Digital Science)
- The Future of Altmetrics: A Community Vision, Liesa Ross (Altmetric), Stacy Konkiel (Altmetric)

https://digitalcommons.unl.edu/scholcom/170
We address the question of whether altmetrics data can provide both convergent and discriminant validity for assessing societal impact. Using data from the UK Research Excellence Framework (REF) and the company Altmetric, we investigate whether societal impact can be indexed using altmetrics. Our results reveal that paper mentions on Facebook, in blogs, in news, in Wikipedia, and in policy-related documents are indeed convergently and discriminantly valid indicators of societal impact. The results for Twitter, however, suggest that this source is not valid for societal impact assessment.
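For context, such a validity test is essentially correlational: convergent validity shows up as a clear association between an altmetric source and a societal-impact criterion, while discriminant validity requires a markedly weaker association with a different construct. The sketch below is a minimal illustration of that logic with invented data; the variable names and thresholds are assumptions, not the study's actual method.

```python
# Hypothetical sketch of a convergent/discriminant validity check.
# All data and names are invented for illustration; the actual study
# uses REF assessment scores and Altmetric mention counts.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 200  # hypothetical number of assessed units

ref_societal_impact = rng.normal(size=n)   # criterion: societal impact
ref_output_quality = rng.normal(size=n)    # different construct
policy_mentions = ref_societal_impact + rng.normal(scale=0.5, size=n)

# Convergent validity: the indicator correlates with the construct
# it is supposed to measure (societal impact).
r_conv, _ = spearmanr(policy_mentions, ref_societal_impact)

# Discriminant validity: it correlates clearly less with a construct
# it is not supposed to measure (output quality).
r_disc, _ = spearmanr(policy_mentions, ref_output_quality)

print(f"convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")
```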
Scheidsteger, T.; Haunschild, R.; Hug, S.; Bornmann, L. 2018
In order to assess Microsoft Academic as a useful data source for evaluative bibliometrics, it is crucial to know whether citation counts from Microsoft Academic can be used in common normalization procedures and whether the normalized scores agree with the scores calculated on the basis of established databases. To this end, we calculate the field-normalized citation scores of the publications of a computer science institute based on Microsoft Academic and the Web of Science and estimate the statistical concordance of the scores. Our results suggest that field-normalized citation scores can be calculated with Microsoft Academic and that these scores are in good agreement with the corresponding scores from the Web of Science.
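A common form of field normalization divides a paper's citation count by the mean citation count of the papers in its reference set (same field and publication year). The sketch below illustrates that idea with invented data; it is a simplified, MNCS-style calculation, not the paper's exact procedure.

```python
# Minimal sketch of field-normalized citation scores: each paper's
# citation count is divided by the mean citations of its reference set
# (same field, same publication year). Data are invented.
from collections import defaultdict

papers = [
    # (paper_id, field, year, citations)
    ("p1", "computer science", 2015, 30),
    ("p2", "computer science", 2015, 10),
    ("p3", "computer science", 2016, 5),
    ("p4", "computer science", 2016, 15),
]

# Mean citations per (field, year) reference set.
totals = defaultdict(lambda: [0, 0])
for _, field, year, cites in papers:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1
expected = {key: s / n for key, (s, n) in totals.items()}

# Normalized score: observed citations / expected citations.
for pid, field, year, cites in papers:
    print(pid, round(cites / expected[(field, year)], 2))
```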
Haunschild, R.; Marx, M.; French, B.; Bornmann, L. 2018
Three different approaches to field categorization are currently (mainly) used for normalizing citation impact, without a clear preference for one alternative: (1) journal sets, (2) intellectual assignments, and (3) citation relations. In this study, we compare normalized citation scores that have been calculated on the basis of the three approaches to building reference sets. We are interested in whether they lead to the same, similar, or different scores for the same papers if the formula for calculating the scores is held constant. This study focuses on chemistry and related sciences, because we have access to a comprehensive dataset from Chemical Abstracts Service (CAS). The results show that normalized scores based on intellectual field assignments agree more with scores based on journal sets than with scores based on citation relations. Thus, one can expect more similar scores from intellectual assignments and journal sets than from citation relations.
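One simple way to quantify such agreement is to correlate the score vectors that the three reference-set definitions produce for the same papers. The scores below are invented placeholders (the study uses CAS data and a fixed normalization formula), so this is only a sketch of the comparison step.

```python
# Hypothetical sketch: agreement between normalized citation scores
# computed from three reference-set definitions. Scores are invented.
from scipy.stats import spearmanr

journal_set_scores  = [1.2, 0.8, 2.1, 0.5, 1.0]
intellectual_scores = [1.1, 0.9, 2.0, 0.6, 1.1]
citation_rel_scores = [0.9, 1.3, 1.5, 0.8, 0.7]

rho_js, _ = spearmanr(intellectual_scores, journal_set_scores)
rho_cr, _ = spearmanr(intellectual_scores, citation_rel_scores)
print(f"intellectual vs. journal sets:      {rho_js:.2f}")
print(f"intellectual vs. citation relations: {rho_cr:.2f}")
```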
Using the complete Journal Citation Reports (Science Citation Index, SCI, and Social Sciences Citation Index, SSCI) during the period 1994-2016 as data, we address the question of change and stability in the sciences at the level of the (n²) aggregated citation links between (n) journals. Information theory enables us to study longitudinal developments first at the level of cells and then to aggregate, since the Shannon formulas are sums (Σ) over cells. Micro-developments in the data can thus be related to theorizing about the sciences in terms of distributed change (Price, 1976; cf. Kuhn, 1962). Our results suggest that the dynamics can be explained by considering Bak et al.'s (1987) model of "self-organized criticality": the knowledge base can be considered a pile of meta-stable constructs which are continuously disturbed by new knowledge claims that also bring new citation relations. "Avalanches" of variable size can then be expected. The effects, however, are local; the meta-stable regions operate in parallel. The overall system keeps tending towards meta-stability, at "the edge of chaos", because of the ongoing flux of new manuscripts creating and rewriting citation relations at different scales (Zitt et al., 2005).
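To make the aggregation property concrete: with c_ij the number of citations from journal i to journal j, Shannon's measures are sums over the n² cells, so cell-level contributions can be computed first and then aggregated. The following is a standard formulation of that decomposition (our gloss of the approach; the paper's own notation may differ):

```latex
% Relative citation frequencies over the n^2 cells of the
% journal-journal citation matrix:
p_{ij} = \frac{c_{ij}}{\sum_{i,j} c_{ij}}

% Shannon entropy is a sum over cells, so micro-level contributions
% aggregate additively to the system level:
H = -\sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij} \log_2 p_{ij}

% Change between two observation years can be measured as the expected
% information of the message that the prior distribution p changed
% into q (Kullback--Leibler divergence):
I(q \,\|\, p) = \sum_{i,j} q_{ij} \log_2 \frac{q_{ij}}{p_{ij}}
```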
With the program CRExplorer (Cited References Explorer), users can apply Reference Publication Year Spectroscopy (RPYS) to publication sets downloaded from Scopus or Web of Science. RPYS reveals quantitatively which historical papers are of particular importance for a given publication set. In this paper, we present new advanced statistics included in CRExplorer to identify and characterize cited references (CRs) which have been influential across a longer period of time (i.e., many citing years). The indicators N_TOP50, N_TOP25, and N_TOP10 can be used to find those CRs with (significantly) higher impact than comparable CRs from the same publication year. Furthermore, CRExplorer analyses the sequence of citations across many citing years to identify specific types of citation distributions. For example, "hot papers" have early, but not late, impact. The new statistics are demonstrated using all papers published in the journal Scientometrics between 1978 and 2016. The analysis of this example dataset revealed, for instance, that the paper by Lotka (1926) entitled "The frequency distribution of scientific productivity" belongs to the 10% most frequently cited publications in 36 citing years. Such papers are exceptions, however; many publications show citation distributions characterized by changes in citation impact intensity over the citing years.
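On our reading, an indicator such as N_TOP10 counts the citing years in which a CR belongs to the 10% most frequently cited CRs among all CRs sharing its publication year. The sketch below implements that reading with toy data; the function name, data structures, and percentile cutoff are assumptions, not CRExplorer's actual implementation.

```python
# Hypothetical sketch of the N_TOP10 indicator: count the citing years
# in which a cited reference (CR) is among the 10% most frequently
# cited CRs from its own publication year. Data are invented.
import numpy as np

def n_top10(counts, cr_id, same_year_crs):
    """counts[citing_year][cr] -> citations received in that year.
    same_year_crs: all CRs sharing the target CR's publication year."""
    n_years = 0
    for citing_year, per_cr in counts.items():
        peer_counts = [per_cr.get(cr, 0) for cr in same_year_crs]
        threshold = np.percentile(peer_counts, 90)  # top-10% cutoff
        cites = per_cr.get(cr_id, 0)
        if cites > 0 and cites >= threshold:
            n_years += 1
    return n_years

# Toy example: "lotka1926" against two other CRs from 1926.
counts = {
    1995: {"lotka1926": 12, "crA": 2, "crB": 1},
    1996: {"lotka1926": 9,  "crA": 3, "crB": 2},
    1997: {"lotka1926": 1,  "crA": 8, "crB": 4},
}
print(n_top10(counts, "lotka1926", ["lotka1926", "crA", "crB"]))  # -> 2
```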