Recent publications

Langfeldt, L. & Steffy, K. (2024). The plurality and contexts of research quality notions. Science and Public Policy, scae066.

Abstract

Whereas research quality is a key concern in research policy, it is often handled as unitary and rarely interrogated. This paper explores variations in what researchers perceive to characterize the research they value the highest and aims to understand the different sites where research quality notions are formed. Based on a large researcher survey, we find both commonalities and differences across disciplines. Notions appear to vary systematically by researchers’ organizational type, their interaction with clients and practitioners, and their reliance on outside infrastructure and multidisciplinary research. For example, those affiliated with research institutes are more prone than those at universities to value societal impact as a characteristic of the best research. In conclusion, quality notions appear to reflect a multitude of organizational sites, and disciplines account for only part of the variation. Hence, a more nuanced understanding of the plurality and origins of research quality notions is needed.

Full text: https://doi.org/10.1093/scipol/scae066


Svartefoss, S.M., Jungblut, J., Aksnes, D.W., Kolltveit, K. & van Leeuwen, T. (2024). Explaining research performance: investigating the importance of motivation. SN Social Sciences, 4, 105.  

Abstract

In this article, we study the motivation and performance of researchers. More specifically, we investigate what motivates researchers across different research fields and countries and how this motivation influences their research performance. The basis for our study is a large-N survey of economists, cardiologists, and physicists in Denmark, Norway, Sweden, the Netherlands, and the UK. The analysis shows that researchers are primarily motivated by scientific curiosity and practical application and less so by career considerations. There are limited differences across fields and countries, suggesting that the mix of motivational aspects has a common academic core less influenced by disciplinary standards or different national environments. Linking motivational factors to research performance, through bibliometric data on publication productivity and citation impact, our data show that those driven by practical application aspects of motivation have a higher probability of high productivity. Being driven by career considerations also increases productivity but only to a certain extent before it starts having a detrimental effect.

Full text: https://doi.org/10.1007/s43545-024-00895-9


Hylmö, A., Steffy, K., Thomas, D.A. & Langfeldt, L. (2024). The quality landscape of economics: The top five and beyond. Research Evaluation, rvae014.

Abstract

Whereas a growing number of studies evidence that research quality notions and evaluative practices are field- and context-specific, many focus on single evaluative practices or moments. This paper introduces the concept of quality landscape to capture dynamics of interrelated quality notions, evaluative moments and practices in a research field. This concept shifts focus to (1) the field-specific universe of practices, devices and notions of research quality; (2) ways that interrelated valuations provide structure and boundedness to a landscape; (3) ways that perspectives on a shared landscape may change with position within the landscape; and (4) ways in which a quality landscape is intertwined with the field’s socio-epistemic conditions. With extensive interview data from top-ranked departments in three Scandinavian countries, we use economics as a case for exploring the value of a quality landscape lens. We find that the field’s journal hierarchy and its ‘Top 5’ journals dominate the landscape, while other important evaluative practices beyond the top five are interlinked with the journal hierarchy. However, quantitative evaluative metrics common in other fields are virtually absent. We further find that national and local policy reinforce the journal hierarchy emphasis, and that career stages affect quality perspectives. We argue that the quality landscape is structured as a quality hierarchy with a focus on the core ‘general interest’, and suggest the notion of ordinalization (the process of rank ordering) as an organizing principle linking the quality landscape to the field’s socio-epistemic conditions. Finally, we offer suggestions for further research.

Full text: https://doi.org/10.1093/reseval/rvae014


Borlaug, S.B., Karaulova, M., Svartefoss, S.M., Sivertsen, G., Meijer, I., van Leeuwen, T. & Hessels, L.K. (2024). Researchers engaging with society: who does what? Science and Public Policy, scae006.

Abstract

Distinguishing between research collaboration, consultancy, dissemination, and commercialization of research results, this paper analyses the determinants of researchers’ societal engagement. The analytical framework integrates societal engagement as part of the credibility cycle. Several variables extend previous findings on determinants and mechanisms—herein scientific recognition and funding sources. A novel method to investigate the relationship between scientific recognition and societal engagement is explored. Drawing on a large-scale survey of European-based researchers in physics, cardiology, and economics, we find that several factors are associated with different modes of societal engagement in complex and intersecting ways. Scientific recognition is positively associated with research collaboration and dissemination, while organizational seniority is associated with all modes except for research collaboration with non-scientific actors. Female gender is positively associated with dissemination, and external funding sources are positively associated with all. The findings intersect with differences in the three research fields.

Full text: https://doi.org/10.1093/scipol/scae006


Seibicke, H. (2024). Investigating stakeholder rationales for participating in collaborative interactions at the policy–science nexus. Policy & Politics.

Abstract

Contemporary politics has become increasingly reliant on scientific knowledge. In evidence-based policymaking, science is invoked to address complex, ‘wicked’ problems. Yet, policymakers do not necessarily base decisions on the best-available evidence, and models of knowledge used in policymaking have long been criticised as simplistic. Therefore, collaboration with non-scientific actors has emerged as a possible way forward. On both sides of the policy–science nexus, collaborative interactions are extended to include ‘stakeholders’ to improve the impact of knowledge (that is, its usability and applicability). And while stakeholder involvement often follows this overarching justification, the question of stakeholder rationales for participating in these processes has previously received little scholarly attention. To address this gap, this article analyses stakeholder rationales, asking why organisations get involved in collaborative research. The theoretical expectations about divergent organisational rationales, drawing on theories of institutional and organisational logics, are investigated through an exploratory case study of stakeholders engaged in collaborative research projects in Norway. The theoretical and empirical analysis forms the basis for a proposed new typology of stakeholder rationales. In this way, the article contributes towards the development of better tools for understanding and assessing the sources and potential pathways of knowledge, shaped by self-interested actors, making its way into policymaking processes, often as ‘neutral’ evidence.

Full text: https://doi.org/10.1332/03055736Y2023D000000010


Sivertsen, G. (2023). Performance-based research funding and its impacts on research organizations. In: Handbook of Public Research Funding (Chapter 6, pp. 90–106). Cheltenham: Edward Elgar Publishing.

Abstract

Performance-based research funding systems (PBFS) allocate direct institutional funding to universities and other public research organizations based on an assessment of their research. There are three main types: evaluation-based funding, indicator-based funding, and funding contingent on performance agreements. Potential problems with PBFS depend on the type and design of the system, its influence on funding and reputation, and the involvement of the funded organizations in collaboration about the design and implementation. PBFS may create an overview of, and external insight into, the research activities and provide fairer and more transparent funding allocation criteria. However, they also come with a paradox: the validity of the methods, and thereby the usefulness of PBFS, may be reduced when performance assessment or measurement is connected to funding. Minimizing their effects on funding and reputation may support the achievement of their aims.

Full text: https://doi.org/10.31235/osf.io/u54c3


Aksnes, D.W. & Sivertsen, G. (2023). Global trends in international research collaboration, 1980–2021. Journal of Data and Information Science (JDIS), 8(2):26–42.

Abstract

Purpose: The aim of this study is to analyze the evolution of international research collaboration from 1980 to 2021. The study examines the main global patterns as well as those specific to individual countries, country groups, and different areas of research.
Design/methodology/approach: The study is based on the Web of Science Core Collection database. More than 50 million publications are analyzed using co-authorship data. International collaboration is defined as publications having authors affiliated with institutions located in more than one country.
Findings: At the global level, the share of publications representing international collaboration has gradually increased from 4.7% in 1980 to 25.7% in 2021. The proportion of such publications within each country is higher and, in 2021, varied from less than 30% to more than 90%. There are notable disparities in the temporal trends, indicating that the process of internationalization has impacted countries in different ways. Several factors such as country size, income level, and geopolitics may explain the variance.
Research limitations: Not all international research collaboration results in joint co-authored scientific publications. International co-authorship is a partial indicator of such collaboration. Another limitation is that the applied full counting method does not take into account the number of authors representing each country in a publication.
Practical implications: The study provides global averages, indicators, and concepts that can provide a useful framework of reference for further comparative studies of international research collaboration.
Originality/value: Long-term macro-level studies of international collaboration are rare, and as a novelty, this study includes an analysis by the World Bank’s division of countries into four income groups.

Full text: https://doi.org/10.2478/jdis-2023-0015


Zhang, L., Gou, Z., Fang, Z., Sivertsen, G. & Huang, Y. (2023). Who tweets scientific publications? A large-scale study of tweeting audiences in all areas of research. Journal of the Association for Information Science and Technology, 74:1485–1497.

Abstract

The purpose of this study is to investigate the validity of tweets about scientific publications as an indicator of societal impact by measuring the degree to which the publications are tweeted beyond academia. We introduce methods that allow for using a much larger and broader data set than in previous validation studies. It covers all areas of research and includes almost 40 million tweets by 2.5 million unique tweeters mentioning almost 4 million scientific publications. We find that, although half of the tweeters are external to academia, most of the tweets are from within academia, and most of the external tweets are responses to original tweets within academia. Only half of the tweeted publications are tweeted outside of academia. We conclude that, in general, the tweeting of scientific publications is not a valid indicator of the societal impact of research. However, publications that continue being tweeted after a few days represent recent scientific achievements that catch attention in society. These publications occur more often in the health sciences and in the social sciences and humanities.

Full text: https://doi.org/10.1002/asi.24830


Franssen, T. (2022). Enriching research quality: A proposition for stakeholder heterogeneity. Research Evaluation, 31(3): 311–320.

Abstract

Dominant approaches to research quality rest on the assumption that academic peers are the only relevant stakeholders in its assessment. In contrast, impact assessment frameworks recognize a large and heterogeneous set of actors as stakeholders. In transdisciplinary research, non-academic stakeholders are actively involved in all phases of the research process, and actor-network theorists recognize a broad and heterogeneous set of actors as stakeholders in all types of research, as they are assigned roles in the socio-material networks, also termed ‘problematizations’, that researchers reconfigure. Actor-network theorists consider research as a performative act that changes the reality of the stakeholders it, knowingly or unknowingly, involves. Established approaches to, and notions of, research quality do not recognize the heterogeneity of relevant stakeholders, nor do they allow for reflection on the performative effects of research. To enrich the assessment of research quality, this article explores the problematization as a potential new object of evaluation. Problematizations are proposals for how the future might look. Hence, their acceptance concerns not only fellow academics but also all other human and other-than-human actors that figure in them. To enrich evaluative approaches, this article argues for the inclusion of stakeholder involvement and stakeholder representation as dimensions of research quality. It considers a number of challenges to doing so, including the identification of stakeholders, the development of quality criteria for stakeholder involvement and stakeholder representation, and the possibility of participatory research evaluation. It can alternatively be summarized as raising the question: for whose benefit do we conduct evaluations of research quality?

Full text: https://doi.org/10.1093/reseval/rvac012


Tirado, M.M., Nedeva, M. & Thomas, D.A. (2023). Aggregate level research governance effects on particle physics: A comparative analysis. Research Evaluation.

This paper contributes to understanding the effects of research governance on global scientific fields.

Abstract

Using a highly selective comparative analysis of four national governance contexts, we explore how governance arrangements influence the dynamics of global research fields. Our study provides insights into second-level governance effects, moving beyond previous studies focusing primarily on effects on research organizations rooted in national contexts. Rather than studying the more than 100 countries across which our selected CERN-based particle physics global research field operates, we explore conditions for changing the dynamics of global research fields and examine mechanisms through which change may occur. We then predict minimal effects on the epistemic choices and research practices of members of the four local knowledge networks despite variations in governance arrangements, and hence no second-level effects. We assert that a research field’s independence from governance depends on its characteristics and the relative importance to researchers of research quality notions. This paper contributes methodologically and has practical implications for policymakers. It suggests governance arrangements affect the epistemic choices and research practices of the local knowledge networks only when certain conditions are met. Policymakers should consider the context and characteristics of a field when designing governance arrangements and policy.

Full text: https://doi.org/10.1093/reseval/rvad025


Langfeldt, L., Reymert, I. & Svartefoss, S.M. (2023). Distrust in grant peer review — reasons and remedies. Science and Public Policy.

Abstract

With the increasing reliance on competitive grants to fund research, we see a review system under pressure. While peer review has long been perceived as the cornerstone of self-governance in science, researchers have expressed distrust in the peer review procedures of funding agencies. This paper draws on literature pointing out ability, benevolence, and integrity as important for trustworthiness and explores the conditions under which researchers have confidence in grant review. Based on rich survey material, we find that researchers trust grant reviewers far less than they trust journal peer reviewers or their colleagues’ ability to assess their research. Yet, scholars who have success with grant proposals or serve on grant review panels appear to have more trust in grant reviewers. We conclude that transparency and reviewers with field competencies are crucial for trust in grant review and discuss how this can be ensured.

Full text: https://doi.org/10.1093/scipol/scad051


Franssen, T., Borlaug, S.B. & Hylmö, A. (2023). Steering the Direction of Research through Organizational Identity Formation. Minerva, 1–25.

Full text: https://link.springer.com/article/10.1007/s11024-023-09494-z


Reymert, I., Vabø, A., Borlaug, S.B. et al. (2022). Barriers to attracting the best researchers: perceptions of academics in economics and physics in three European countries. Higher Education.

Full text: https://doi.org/10.1007/s10734-022-00967-w


Borlaug, S.B., Tellmann, S.M. & Vabø, A. (2022). Nested identities and identification in higher education institutions—the role of organizational and academic identities. Higher Education.

Full text: https://doi.org/10.1007/s10734-022-00837-5


Nedeva, M., Tirado, M.M. & Thomas, D.A. (2022). Research governance and the dynamics of science: A framework for the study of governance effects on research fields. Research Evaluation.

Full text: https://doi.org/10.1093/reseval/rvac028


Steffy, K. & Langfeldt, L. (2022). Research as discovery or delivery? Exploring the implications of cultural repertoires and career demands for junior economists’ research practices. Higher Education.

Full text: https://doi.org/10.1007/s10734-022-00934-5


Zhang, L., Sivertsen, G., Du, H. et al. (2021). Gender differences in the aims and impacts of research. Scientometrics, 126, 8861–8886.

Abstract

This study uses mixed methods—classical citation analysis, altmetric analysis, a survey with researchers as respondents, and text analysis of the abstracts of scientific articles—to investigate gender differences in the aims and impacts of research. We find that male researchers more often value and engage in research mainly aimed at scientific progress, which is more cited. Female researchers more often value and engage in research mainly aimed at contributing to societal progress, which has more abstract views (usage). The gender differences are observed among researchers who work in the same field of research and have the same age and academic position. Our findings have implications for evaluation and funding policies and practices. A critical discussion of how societal engagement versus citation impact is valued, and how funding criteria reflect gender differences, is warranted.

Full text: https://doi.org/10.1007/s11192-021-04171-y


Scholten, W., Franssen, T. P., van Drooge, L., de Rijcke, S., & Hessels, L. K. (2021). Funding for few, anticipation among all: Effects of excellence funding on academic research groups. Science and Public Policy, 48(2), 265-275.

Abstract

In spite of the growing literature about excellence funding in science, we know relatively little about its implications for academic research practices. This article compares organizational and epistemic effects of excellence funding across four disciplinary fields, based on in-depth case studies of four research groups in combination with twelve reference groups. In spite of the highly selective nature of excellence funding, all groups employ dedicated strategies to maximize their chances of acquiring it, which we call strategic anticipation. The groups with ample excellence funding acquire a relatively autonomous position within their organization. While the epistemic characteristics of the four fields shape how excellence funding can be used, we find that in all fields there is an increase in epistemic autonomy. However, in fields with more individual research practices a longer time horizon for grants, beyond the usual 5 years, would fit better with the research process.

Full text: https://doi.org/10.1093/scipol/scab018


Steffy, K. (2021). Gendered Patterns of Unmet Resource Need among Academic Researchers.

Abstract

An expansive body of literature has documented how academia acts as a gendered organization, characterized by disadvantage at multiple levels. Because of data limitations, we know surprisingly little about whether and how access to the resources needed to carry out high-quality research may be gendered. This study begins to fill this gap using a newly available survey of researchers in three disciplines across five European countries. Across a wide range of resources, findings point to marked gender disparities. Women are more likely than men to say that they do not have the resources they need to do their research well and that having them would make a big difference in their work.
These findings are robust to controls including academic seniority, suggesting that structural sexism contributes to resource disparities in science. Even after overcoming obstacles en route to research positions in competitive fields, women in science remain systematically disadvantaged.

Full text: https://journals.sagepub.com/doi/pdf/10.1177/23780231211039585


Aksnes, D. W. & Aagaard, K. (2021). Lone geniuses or one among many? An explorative study of contemporary highly cited researchers. Journal of Data and Information Science, 6(2):1–26.

Abstract

Purpose: The ranking lists of highly cited researchers receive much public attention. In common interpretations, highly cited researchers are perceived to have made extraordinary contributions to science. Thus, the metrics of highly cited researchers are often linked to notions of breakthroughs, scientific excellence, and lone geniuses.
Design/methodology/approach: In this study, we analyze a sample of individuals who appear on Clarivate Analytics’ Highly Cited Researchers list. The main purpose is to juxtapose the characteristics of their research performance against the claim that the list captures a small fraction of the researcher population that contributes disproportionately to extending the frontier and gaining—on behalf of society—knowledge and innovations that make the world healthier, richer, sustainable, and more secure.
Findings: The study reveals that the highly cited articles of the selected individuals generally have a very large number of authors. Thus, these papers seldom represent individual contributions but rather are the result of large collective research efforts conducted in research consortia. This challenges the common perception of highly cited researchers as individual geniuses who can be singled out for their extraordinary contributions. Moreover, the study indicates that a few of the individuals have not even contributed to highly cited original research but rather to reviews or clinical guidelines. Finally, the large number of authors of the papers implies that the ranking list is very sensitive to the specific method used for allocating papers and citations to individuals. In the “whole count” methodology applied by Clarivate Analytics, each author gets full credit for the papers regardless of the number of additional co-authors. The study shows that the ranking list would look very different using an alternative fractionalised methodology.
Research limitations: The study is based on a limited part of the total population of highly cited researchers.
Practical implications: It is concluded that “excellence” understood as highly cited encompasses very different types of research and researchers of which many do not fit with dominant preconceptions.
Originality/value: The study develops further knowledge on highly cited researchers, addressing questions such as who becomes highly cited and the type of research that benefits by defining excellence in terms of citation scores and specific counting methods.

Full text: https://doi.org/10.2478/jdis-2021-0019


Benneworth, P., Engels, T. C. E., Galleron, I., Kulczycki, E., Ochsner, M., Sivertsen, G., Sinkuniene, J. & Williams, G. (2020). Challenging evaluation in the SSH: Roundtable discussion. Darbai ir dienos 73:107–126.

Full text: https://hdl.handle.net/20.500.12259/109423


Huang, Y., Li, R., Zhang, L. & Sivertsen, G. (2020). A comprehensive analysis of the journal evaluation system in China. Quantitative Science Studies 1–33.

Abstract

Journal evaluation systems reflect how new insights are critically reviewed and published, and the prestige and impact of a discipline’s journals are a key metric in many research assessment, performance evaluation, and funding systems. With the expansion of China’s research and innovation systems and its rise as a major contributor to global innovation, journal evaluation has become an especially important issue. In this paper, we first describe the history and background of journal evaluation in China and then systematically introduce and compare the most currently influential journal lists and indexing services. These are: the Chinese Science Citation Database (CSCD), the Journal Partition Table (JPT), the AMI Comprehensive Evaluation Report (AMI), the Chinese S&T Journal Citation Report (CJCR), “A Guide to the Core Journals of China” (GCJC), the Chinese Social Sciences Citation Index (CSSCI), and the World Academic Journal Clout Index (WAJCI). Some other influential lists produced by government agencies, professional associations, and universities are also briefly introduced. Through the lens of these systems, we provide comprehensive coverage of the tradition and landscape of the journal evaluation system in China, along with its methods and practices, and offer some comparisons to how other countries assess and rank journals.

Full text: https://doi.org/10.1162/qss_a_00103


Kulczycki, E., Guns, R., Pölönen, J., Engels, T. C. E., Rozkosz, E. A., Zuccala, A. A., Bruun, K., Eskola, O., Starčič, A. I., Petr, M. & Sivertsen, G. (2020). Multilingual publishing in the social sciences and humanities: A seven-country European study. Journal of the Association for Information Science and Technology 71:1371–1385.

Abstract

We investigate the state of multilingualism across the social sciences and humanities (SSH) using a comprehensive data set of research outputs from seven European countries (Czech Republic, Denmark, Finland, Flanders [Belgium], Norway, Poland, and Slovenia). Although English tends to be the dominant language of science, SSH researchers often produce culturally and societally relevant work in their local languages. We collected and analyzed a set of 164,218 peer‐reviewed journal articles (produced by 51,063 researchers from 2013 to 2015) and found that multilingualism is prevalent despite geographical location and field. Among the researchers who published at least three journal articles during this time period, over one‐third from the various countries had written their work in at least two languages. The highest share of researchers who published in only one language were from Flanders (80.9%), whereas the lowest shares were from Slovenia (57.2%) and Poland (59.3%). Our findings show that multilingual publishing is an ongoing practice in many SSH research fields regardless of geographical location, political situation, and/or historical heritage. Here we argue that research is international, but multilingual publishing keeps locally relevant research alive with the added potential for creating impact.

Full text: https://doi.org/10.1002/asi.24336


Pölönen, J., Guns, R., Kulczycki, E., Sivertsen, G. & Engels, T. C. E. (2020). National lists of scholarly publication channels: An overview and recommendations for their construction and maintenance. Journal of Data and Information Science 1–37.

Abstract

Purpose:
This paper presents an overview of different kinds of lists of scholarly publication channels and of experiences related to the construction and maintenance of national lists supporting performance-based research funding systems. It also contributes with a set of recommendations for the construction and maintenance of national lists of journals and book publishers.

Design/methodology/approach:
The study is based on analysis of previously published studies, policy papers, and reported experiences related to the construction and use of lists of scholarly publication channels.

Findings:
Several countries have systems for research funding and/or evaluation that involve the use of national lists of scholarly publication channels (mainly journals and publishers). Typically, such lists are selective (do not include all scholarly or non-scholarly channels) and differentiated (distinguish between channels of different levels and quality). At the same time, most lists are embedded in a system that encompasses multiple or all disciplines. This raises the question of how such lists can be organized and maintained to ensure that all relevant disciplines and all types of research are adequately represented.

Research limitation:
The conclusions and recommendations of the study are based on the authors’ interpretation of a complex and sometimes controversial process with many different stakeholders involved.

Practical implications:
The recommendations and the related background information provided in this paper enable mutual learning that may feed into improvements in the construction and maintenance of national and other lists of scholarly publication channels in any geographical context. This may foster a development of responsible evaluation practices.

Originality/value:
This paper presents the first general overview and typology of different kinds of publication channel lists, provides insights on expert-based versus metrics-based evaluation, and formulates a set of recommendations for the responsible construction and maintenance of publication channel lists.

Full text: https://doi.org/10.2478/jdis-2021-0004


Sivertsen, G. (2020). Problems and considerations in the design of bibliometric indicators for national performance-based research funding systems. Przegląd Prawa Konstytucyjnego 55(3):109–118.

Abstract

This article presents an overview of ten specific problems and considerations that are typically involved in designs of bibliometric indicators for national performance-based research funding systems (PRFS). While any such system must be understood and respected against the background of different national contexts, mutual learning across countries can inspire improvements. The paper is partly based on experiences from a Mutual Learning Exercise (MLE) on Performance Based Funding Systems, which was organized by the European Commission in 2016–17 and involved fourteen European countries, and partly on experiences from advising a few other countries in developing such systems. A framework for understanding country differences in the design of PRFS is presented first, followed by a presentation of the five specific problems and considerations that are typically involved in designs of bibliometric indicators for such systems. The article concludes with an overview of how Norway’s PRFS has dealt with the same five problems.

Full text: https://doi.org/10.15804/ppk.2020.03.06


Zhang, L. & Sivertsen, G. (2020). Combination of scientometrics and peer-review in research evaluation: International experiences and inspirations. (In Chinese). Qingbao xuebao 39(8):806–816.

Full text: http://dx.chinadoi.cn/10.3772/j.issn.1000-0135.2020.08.003


Zhang, L. & Sivertsen, G. (2020). The new research assessment reform in China and its implementation. Scholarly Assessment Reports 2(1):3.

Abstract

A radical reform of research assessment was recently launched in China. It seeks to replace a focus on Web of Science-based indicators with a balanced combination of qualitative and quantitative research evaluation, and to strengthen the local relevance of research in China. It trusts the institutions to implement the policy within a few months but does not provide the necessary national platforms for coordination, influence and collaboration on developing shared tools and information resources and for agreement on definitions, criteria and protocols for the procedures. Based on international experiences, this article provides constructive ideas for the implementation of the new policy.

Full text: https://doi.org/10.29024/sar.15


Langfeldt, L, Reymert, I & Aksnes, D. W. (2020). The role of metrics in peer assessments. Research Evaluation 1–21.

Abstract

Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed. We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their field, assessing grant proposals and assessing candidates for positions). A minority of the researchers responding to the survey reported that metrics were reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and assessments of candidates for academic positions. In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all fields analysed; still, the economists relied more on productivity measures than the cardiologists and the physicists. Moreover, reviewers with high scores on bibliometric indicators seemed to adhere to metrics in their assessments more frequently than other reviewers. Hence, when planning and using peer review, one should be aware that reviewers—in particular reviewers who score high on metrics—find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities on the use and misuse of publication metrics.

Full text: https://doi.org/10.1093/reseval/rvaa032


Reymert, I., Jungblut, J. & Borlaug, S. B. (2020). Are evaluative cultures national or global? A cross-national study on evaluative cultures in academic recruitment processes in Europe. Higher Education 1–21.

Abstract

Studies on academic recruitment processes have demonstrated that universities evaluate candidates for research positions using multiple criteria. However, most studies on preferences regarding evaluative criteria in recruitment processes focus on a single country, while cross-country studies are rare. Additionally, though studies have documented how fields evaluate candidates differently, those differences have not been deeply explored, thus creating a need for further inquiry. This paper aims to address this gap and investigates whether academics in two fields across five European countries prefer the same criteria to evaluate candidates for academic positions. The analysis is based on recent survey data drawn from academics in economics and physics in Denmark, the Netherlands, Norway, Sweden, and the UK. Our results show that the academic fields have different evaluative cultures and that researchers from different fields prefer specific criteria when assessing candidates. We also found that these field-specific preferences were to some extent mediated through national frameworks such as funding systems.

Full text: https://doi.org/10.1007/s10734-020-00659-3


Reymert, I. (2020). Bibliometrics in academic recruitment: A screening tool rather than a game changer. Minerva 1–26.

Abstract

This paper investigates the use of metrics to recruit professors for academic positions. We analyzed confidential reports with candidate evaluations in economics, sociology, physics, and informatics at the University of Oslo between 2000 and 2017. These unique data enabled us to explore how metrics were applied in these evaluations in relation to other assessment criteria. Despite being important evaluation criteria, metrics were seldom the most salient criteria in candidate evaluations. Moreover, metrics were applied chiefly as a screening tool to decrease the number of eligible candidates and not as a replacement for peer review. Contrary to the literature suggesting an escalation of metrics, we foremost detected stable assessment practices with only a modestly increased reliance on metrics. In addition, the use of metrics proved strongly discipline-dependent, with each discipline applying metrics in ways that corresponded to its evaluation culture. These robust evaluation practices provide an empirical example of how core university processes are chiefly characterized by path-dependency mechanisms, and only moderately by isomorphism. Additionally, the disciplinary-dependent spread of metrics offers a theoretical illustration of how travelling standards such as metrics are not only diffused but rather translated to fit the local context, resulting in heterogeneity and context-dependent spread.

Full text: https://doi.org/10.1007/s11024-020-09419-0


Costas, R., Mongeon, P., Ferreira, M. R., van Honk, J. & Franssen, T. (2020). Large-scale identification and characterization of scholars on Twitter. Quantitative Science Studies 1(2):771–791.

Abstract

This paper presents a new method for identifying scholars who have a Twitter account from bibliometric data from Web of Science (WoS) and Twitter data from Altmetric.com. The method reliably identifies matches between Twitter accounts and scholarly authors. It consists of a matching of elements such as author names, usernames, handles, and URLs, followed by a rule-based scoring system that weights the common occurrence of these elements related to the activities of Twitter users and scholars. The method proceeds by matching the Twitter accounts against a database of millions of disambiguated bibliographic profiles from WoS. This paper describes the implementation and validation of the matching method, and performs verification through precision-recall analysis. We also explore the geographical, disciplinary, and demographic variations in the distribution of scholars matched to a Twitter account. This approach represents a step forward in the development of more advanced forms of social media studies of science by opening up an important door for studying the interactions between science and social media in general, and for studying the activities of scholars on Twitter in particular.

Full text: https://doi.org/10.1162/qss_a_00047


Franssen, T. (2020). Research infrastructure funding as a tool for science governance in the humanities: A country case study of the Netherlands. In K. Cramer & O. Hallonsten (Eds.), Big Science and Research Infrastructures in Europe (chapter 7, 157–176). Cheltenham: Edward Elgar.

Abstract

This chapter argues that research infrastructure funding functions as a tool that science governance actors can use to steer research. Funding arrangements are developed in relation to science policy discourses and, in that capacity, enact particular normative frames of what “good science” is. This framework is used to study the boom of funding for research infrastructures in the humanities in the Netherlands. I argue that digital research infrastructures in the humanities, and the related emerging research area of digital humanities, are seen to foster collaboration and the coordination of research agendas across (sub)disciplinary boundaries in the humanities. This is important as the humanities are viewed as epistemically fragmented. Through research infrastructure funding science governance actors thus attempt to strengthen particular research traditions in the humanities. This chapter calls for a critical focus on funding arrangements in the sociology of science and the ways in which these shape research practices across scientific domains.

Full text: https://doi.org/10.4337/9781839100017.00013


Kuipers, G. & Franssen, T. (2020). Qualification, or: what is a good something? In J. R. Bowen, N. Dodier, J. W. Duyvendak & A. Hardon (Eds.), Pragmatic Inquiry: Critical concepts for social sciences (chapter 9). London: Routledge.

Abstract

This chapter analyzes the process of assessing whether something is “a good something”. When people encounter something – whether it is new, or a version of something already known – it has to be qualified: people assess simultaneously what something is, and whether this something has quality. The study of quality as a social construct with real consequences received an important impetus from the work of Pierre Bourdieu. Bourdieu argued that seemingly disinterested taste-based evaluations, for instance in music and arts, are shaped by power struggles. Bourdieu’s concept of quality is rooted in the Durkheim-inspired notion of culture as classification system, as it was developed in twentieth-century French anthropology and linguistics. Classification was taken up mainly by institutional and cognitive sociologists. Institutionalists study how classification systems are produced and embedded in social institutions and fields.

Full text: https://doi.org/10.4324/9781003034124


Karaulova, M., Nedeva, M. & Thomas, D. A. (2020). Mapping research fields using co-nomination: the case of hyper-authorship heavy flavour physics. Scientometrics 1–21.

Abstract

This paper introduces the use of co-nomination as a method to map research fields by directly accessing their knowledge networks organised around exchange relationships of intellectual influence. Co-nomination is a reputation-based approach combining snowball sampling and social network analysis. It complements established bibliometric mapping methods by addressing some of their typical shortcomings in specific instances. Here we test co-nomination by mapping one such instance: the idiosyncratic field of CERN-based heavy flavour physics (HFP). HFP is a ‘hyper-authorship’ field where papers conventionally list thousands of authors alphabetically, masking individual intellectual contributions. We also undertook an illustrative author co-citation analysis (ACA) mapping of 2310 HFP articles published 2013–18 and identified using a simple keyword query. Both maps were presented to two HFP scientists for commentary upon structure and validity. Our results suggest co-nomination allows us to access individual-level intellectual influence and discern the experimental and theoretical HFP branches. Co-nomination is powerful in uncovering current and emerging research specialisms in HFP that might remain opaque to other methods. ACA, however, better captures HFP’s historical and intellectual foundations. We conclude by discussing possible future uses of co-nomination in science policy and research evaluation arrangements.

Full text: https://doi.org/10.1007/s11192-020-03538-x


Thomas, D. A., Nedeva, M., Tirado, M. M. & Jacob, M. (2020). Changing research on research evaluation: A critical literature review to revisit the agenda. Research Evaluation, rvaa008:1–14.

Abstract

The current range and volume of research evaluation-related literature is extensive and incorporates scholarly and policy/practice-related perspectives. This reflects academic and practical interest over many decades and trails the changing funding and reputational modalities for universities, namely increased selectivity applied to institutional research funding streams and the perceived importance of university rankings and other reputational devices. To make sense of this highly diverse body of literature, we undertake a critical review of over 350 works constituting, in our view, the ‘state-of-the-art’ on institutional performance-based research evaluation arrangements (PREAs). We focus on PREAs because they are becoming the predominant means world-wide to allocate research funds and accrue reputation for universities. We highlight the themes addressed in the literature and offer critical commentary on the balance of scholarly and policy/practice-related orientations. We then reflect on five limitations to the state-of-the-art and propose a new agenda, and a change of perspective, to progress this area of research in future studies.

Full text: https://doi.org/10.1093/reseval/rvaa008


Sivertsen, G. & Meijer, I. (2019). Normal versus extraordinary societal impact: how to understand, evaluate, and improve research activities in their relations to society? Research Evaluation, rvz032:1–5.

Abstract

Societal impact of research does not occur primarily as unexpected extraordinary incidents of particularly useful breakthroughs in science. It is more often a result of normal everyday interactions between organizations that need to create, exchange, and make use of new knowledge to further their goals. We use the distinctions between normal and extraordinary societal impact and between organizational- and individual-level activities and responsibilities to discuss how science–society relations can better be understood, evaluated, and improved by focusing on the organizations that typically interact in a specific domain of research.

Full text: https://doi.org/10.1093/reseval/rvz032


Schneider, J. W., van Leeuwen, T., Visser, M. & Aagaard, K. (2019). Examining national citation impact by comparing developments in a fixed and a dynamic journal set. Scientometrics 119(2):973–985.

Abstract

In order to examine potential effects of methodological choices influencing developments in relative citation scores for countries, a fixed journal set comprising 3232 journals continuously indexed in the Web of Science from 1981 to 2014 is constructed. From this restricted set, a citation database depicting the citing relations between the journal publications is formed and relative citation scores based on full and fractional counting are calculated for the whole period. Previous longitudinal studies of citation impact show stable rankings between countries. To examine such findings coming from a dynamic set of journals for potential “database effects”, we compare them to our fixed set. We find that relative developments in impact scores, country profiles and rankings are both very stable and very similar within and between the two journal sets as well as counting methods. We do see a small “inflation factor” as citation scores generally are somewhat lower for high-performing countries in the fixed set compared to the dynamic set. Consequently, using an ever-decreasing set of journals compared to the dynamic set, we are still able to reproduce accurately the developments in impact scores and the rankings between the countries found in the dynamic set. Hence, potential effects of methodological choices seem to be of limited importance compared to the stability of citation networks.

Full text: https://doi.org/10.1007/s11192-019-03082-3


Langfeldt, L., Nedeva, M., Sörlin, S. & Thomas, D. A. (2019). Co‑existing notions of research quality: A framework to study context‑specifc understandings of good research. Minerva.

Abstract

Notions of research quality are contextual in many respects: they vary between fields of research, between review contexts and between policy contexts. Yet, the role of these co-existing notions in research, and in research policy, is poorly understood. In this paper we offer a novel framework to study and understand research quality across three key dimensions. First, we distinguish between quality notions that originate in research fields (Field-type) and in research policy spaces (Space-type). Second, drawing on existing studies, we identify three attributes (often) considered important for ‘good research’: its originality/novelty, plausibility/reliability, and value or usefulness. Third, we identify five different sites where notions of research quality emerge, are contested and institutionalised: researchers themselves, knowledge communities, research organisations, funding agencies and national policy arenas. We argue that the framework helps us understand processes and mechanisms through which ‘good research’ is recognised as well as tensions arising from the co-existence of (potentially) conflicting quality notions.

Full text: https://doi.org/10.1007/s11024-019-09385-2


Piro, F. N. (2019). The R&D composition of European countries: concentrated versus dispersed profiles. Scientometrics 119(2):1095–1119.

Abstract

In this study, we use a unique dataset covering all higher education institutions, public research institutions and private companies that have applied for funding to the European Framework Programs for Research and Innovation in the period 2007–2017. The first aim of this study is to show the composition of R&D performing actors per country, which to the best of our knowledge has never been done before. The second aim of this study is to compare country profiles in R&D composition, so that we may analyse whether the countries differ in concentration of R&D performing institutions. The third aim of this study is to investigate whether different R&D country profiles are associated with how the R&D systems perform, i.e. whether the profiles are associated with Research and Innovation performance indicators. Our study shows that the concentration of R&D actors at country level and within the sectors differs across European countries, with the general conclusion being that countries that can be characterized as well-performing on citation and innovation indicators seem to combine (a) high shares of Gross Domestic Expenditure on R&D as percentage of GDP with (b) a highly skewed R&D system, where a small part of the R&D performing actors account for a very high share of the national R&D performance. This indicates a dual R&D system which combines a few large R&D performing institutions with a very large number of small actors.

Full text: https://doi.org/10.1007/s11192-019-03062-7


Franssen, T. & de Rijcke, S. (2019). The rise of project funding and its effects on the social structure of academia. In F. Cannizzo & N. Osbaldiston (Eds.), The social structures of global academia (chapter 9). London: Routledge.

Abstract

In this chapter we analysed the effects of the rise of project funding on the social structure of academia. We show that more temporary positions are created and the temporary phase in the career is extended. Short-term contracts increase job and grant market participation of early career researchers, which, in turn, establishes competition as a mode of governance, reaffirms the individual as the primary epistemic subject and increases anxiety and career uncertainty, all of which affect the social fabric of research groups and departments. Senior staff members promote communitarian ideals, which is necessary to establish the research group as a community, but this cannot resolve the inherent tension because of the structural nature of the mechanisms we describe. We conclude that individual research groups are unlikely to be able to solve these problems and that a more radical shift in the distribution of research funding is necessary.

Full text: https://doi.org/10.4324/9780429465857


Franssen, T. & Wouters, P. (2019). Science and its significant other: Representing the humanities in bibliometric scholarship. Journal of the Association for Information Science and Technology.

Abstract

The cognitive and social structures, and publication practices, of the humanities have been studied bibliometrically for the past 50 years. This article explores the conceptual frameworks, methods, and data sources used in bibliometrics to study the nature of the humanities, and its differences and similarities in comparison with other scientific domains. We give a historical overview of bibliometric scholarship between 1965 and 2018 that studies the humanities empirically and distinguishes between two periods in which the configuration of the bibliometric system differs remarkably. The first period, 1965 to the 1980s, is characterized by bibliometric methods embedded in a sociological theoretical framework, the development and use of the Price Index, and small samples of journal publications from which references are used as data sources. The second period, the 1980s to the present day, is characterized by a new intellectual hinterland—that of science policy and research evaluation—in which bibliometric methods become embedded. Here metadata of publications becomes the primary data source with which publication profiles of humanistic scholarly communities are analyzed. We unpack the differences between these two periods and critically discuss the analytical avenues that different approaches offer.

Full text: https://doi.org/10.1002/asi.24206


Borlaug, S. B. & Langfeldt, L. (2019). One model fits all? How centres of excellence affect research organisation and practices in the humanities. Studies in Higher Education.

Abstract

Centres of Excellence (CoE) have become a common research policy instrument in several OECD countries over the last two decades. The CoE schemes are in general modelled on the organisational and research practices in the natural and life sciences. Compared to ‘Big science’, the humanities have been characterised by more individual research, flat structures, and usually less integration and coordination of research activities. In this article we ask: How does the introduction of CoEs affect the organisation of research and research practices in the humanities? By comparing Norwegian CoEs in different fields of research and studying the specific challenges of the humanities, we find that CoEs increase collaboration between different fields and make disciplinary and organisational boundaries more permeable, but so far they do not substantially alter individual collaboration patterns in the humanities CoEs. They further seem to generate more tensions in their adjacent environments compared to CoEs in other fields.

Full text: https://doi.org/10.1080/03075079.2019.1615044


Aksnes, D. W., Langfeldt, L. & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. Sage Open 9(1):1–17.

Abstract

Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions and how do citations relate to research quality? These and similar issues have been addressed through several decades of scientometric research. This article provides an overview of some of the main issues at stake, including theories of citation and the interpretation and validity of citations as performance measures. Research quality is a multidimensional concept, where plausibility/soundness, originality, scientific value, and societal value are commonly perceived as key characteristics. The article investigates how citations may relate to these various research quality dimensions. It is argued that citations reflect aspects related to scientific impact and relevance, although with important limitations. By contrast, there is no evidence that citations reflect other key dimensions of research quality. Hence, an increased use of citation indicators in research evaluation and funding may imply less attention to these other research quality dimensions, such as solidity/plausibility, originality, and societal value.

Full text: https://doi.org/10.1177%2F2158244019829575


Borlaug, S. B. & Gulbrandsen, M. (2018). Researcher identities and practices inside centres of excellence. Triple Helix 5(14):1–19.

Abstract

Many science support mechanisms aim to combine excellent research with explicit expectations of societal impact. Temporary research centres such as ‘Centres of Excellence’ and ‘Centres of Excellence in Research and Innovation’ have become widespread. These centres are expected to produce research that creates future economic benefits and contributes to solving society’s challenges, but little is known about the researchers that inhabit such centres. In this paper, we ask how and to what extent centres affect individual researchers’ identity and scientific practice. Based on interviews with 33 researchers affiliated with 8 centres in Sweden and Norway, and on institutional logics as the analytical framework, we find 4 broad types of identities with corresponding practices. The extent to which individuals experience tensions depends upon the compatibility and centrality of the two institutional logics of excellence and innovation within the centre context. Engagement in innovation seems unproblematic and common in research-oriented centres where the centrality of the innovation logic is low, while individuals in centres devoted to both science and innovation in emerging fields of research or with weak social ties to their partners more frequently expressed tension and dissatisfaction.

Full text: https://doi.org/10.1186/s40604-018-0059-3


Franssen, T., Scholten, W., Hessels, L. K. & de Rijcke, S. (2018). The Drawbacks of Project Funding for Epistemic Innovation: Comparing Institutional Affordances and Constraints of Different Types of Research Funding. Minerva 56(1):11–33.

Abstract

Over the past decades, science funding has shifted from recurrent block funding towards project funding mechanisms. However, our knowledge of how project funding arrangements influence the organizational and epistemic properties of research is limited. To study this relation, a bridge between science policy studies and science studies is necessary. Recent studies have analyzed the relation between the affordances and constraints of project grants and the epistemic properties of research. However, the potentially very different affordances and constraints of funding arrangements such as awards, prizes and fellowships have not yet been taken into account. Drawing on eight case studies of funding arrangements in high-performing Dutch research groups, this study compares the institutional affordances and constraints of prizes with those of project grants and their effects on organizational and epistemic properties of research. We argue that the prize case studies diverge from project-funded research in three ways: 1) a more flexible use, and adaptation of use, of funds during the research process compared to project grants; 2) investments in the larger organization which have effects beyond the research project itself; and 3), closely related, greater deviation from epistemic and organizational standards. The increasing dominance of project funding arrangements in Western science systems is therefore argued to be problematic in light of epistemic and organizational innovation. Funding arrangements that offer funding without scholars having to submit a project proposal remain crucial to support researchers and research groups to deviate from epistemic and organizational standards.

Full text: https://doi.org/10.1007/s11024-017-9338-9


Aagaard, K. (2017). The Evolution of a National Research Funding System: Transformative Change Through Layering and Displacement. Minerva 55(3):279–297.

Abstract

This article outlines the evolution of a national research funding system over a timespan of more than 40 years and analyzes the development from a rather stable Humboldt-inspired floor funding model to a complex multi-tiered system where new mechanisms continually have been added on top of the system. Based on recent contributions to Historical Institutionalism it is shown how layering and displacement processes gradually have changed the funding system along a number of dimensions and thus how a series of minor adjustments over time has led to a transformation of the system as a whole. The analysis also highlights the remarkable resistance of the traditional academically oriented research council system towards restructuring. Due to this resistance the political system has, however, circumvented the research council system and implemented change through other channels of the funding system. For periods of time these strategies have marginalized the role of the councils.

Full text: https://doi.org/10.1007/s11024-017-9317-1


Aagaard, K. & Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics 11(3):923-926.

Full text: https://doi.org/10.1016/j.joi.2017.05.018


Giménez-Toledo, E., Manana-Rodriguez, J. & Sivertsen, G. (2017). Scholarly book publishing: Its information sources for evaluation in the social sciences and humanities. Research Evaluation 26(2):91-101.

Abstract

In the past decade, a number of initiatives have been taken to provide new sources of information on scholarly book publishing. Thomson Reuters (now Clarivate Analytics) has supplemented the Web of Science with a Book Citation Index (BCI), while Elsevier has extended Scopus to include books from a selection of scholarly publishers. More complete metadata on scholarly book publishing can be derived at the national level from non-commercial databases such as the Current Research Information System in Norway and the VIRTA publication information service (Higher Education Achievement Register, Finland), including the Finnish Publication Forum (JUFO) lists. The Spanish Scholarly Publishers Indicators provides survey-based information on the prestige, specialization profiles from metadata, and manuscript selection processes of national and international publishers that are particularly relevant for the social sciences and humanities (SSH). In the present work, the five information sources mentioned above are compared in a quantitative analysis identifying overlaps and uniqueness as well as differences in the degrees and profiles of coverage. In a second-stage analysis, the geographical origin of the university presses (UPs) is given a particular focus. We find that selection criteria strongly differ, ranging from a set of a priori criteria combined with expert-panel review in the case of commercial databases to in principle comprehensive coverage within a definition in the Nordic countries and an open survey methodology combined with metadata from the book industry database and questionnaires to publishers in Spain. Larger sets of distinct book publishers are found in the non-commercial databases, and greater geographical diversity is observable among the UPs in these information systems. While a more locally oriented set of publishers which are relevant to researchers in the SSH is present in non-commercial databases, the commercial databases seem to focus on highly selective procedures by which the coverage concentrates on prestigious international publishers, mainly based in the USA or UK and serving the natural sciences, engineering, and medicine.

Full text: https://doi.org/10.1093/reseval/rvx007


Hammarfelt, B., de Rijcke, S. & Wouters, P. F. (2017). From eminent men to excellent universities: University rankings as calculative devices. Minerva 55(4):391–411.

Abstract

Global university rankings have become increasingly important ‘calculative devices’ for assessing the ‘quality’ of higher education and research. Their ability to make characteristics of universities ‘calculable’ is here exemplified by the first proper university ranking ever, produced as early as 1910 by the American psychologist James McKeen Cattell. Our paper links the epistemological rationales behind the construction of this ranking to the sociopolitical context in which Cattell operated: an era in which psychology became institutionalized against the backdrop of the eugenics movement, and in which statistics of science became used to counter a perceived decline in ‘great men.’ Over time, however, the ‘eminent man,’ shaped foremost by heredity and upbringing, came to be replaced by the excellent university as the emblematic symbol of scientific and intellectual strength. We also show that Cattell’s ranking was generative of new forms of the social, traces of which can still be found today in the enactment of ‘excellence’ in global university rankings.

Full text: https://doi.org/10.1007/s11024-017-9329-x


Lavik, G. A. V. & Sivertsen, G. (2017). Erih Plus – Making the SSH Visible, Searchable and Available. Procedia Computer Science 106:61–65.

Abstract

The European Reference Index for the Humanities and the Social Sciences (ERIH PLUS) may provide national and institutional CRIS systems with a well-defined, standardized and dynamic register of scholarly journals and series in the social sciences and humanities. The register goes beyond the coverage in commercial indexing services to provide a basis for standardizing the bibliographic data and making them available and comparable across different CRIS systems. The aims and organization of the ERIH PLUS project are presented for the first time at an international conference in this paper.

Full text: https://doi.org/10.1016/j.procs.2017.03.035


Müller, R. & de Rijcke, S. (2017). Thinking with indicators. Exploring the Epistemic Impacts of Academic Performance Indicators in the Life Sciences. Research Evaluation 26(3):157–168.

Abstract

While quantitative performance indicators are widely used by organizations and individuals for evaluative purposes, little is known about their impacts on the epistemic processes of academic knowledge production. In this article we bring together three qualitative research projects undertaken in the Netherlands and Austria to contribute to filling this gap. The projects explored the role of performance metrics in the life sciences, and the interactions between institutional and disciplinary cultures of evaluating research in these fields. Our analytic perspective is focused on understanding how researchers themselves give value to research, and to what extent these practices are related to performance metrics. The article zooms in on three key moments in research processes to show how ‘thinking with indicators’ is becoming a central aspect of research activities themselves: (1) the planning and conception of research projects, (2) the social organization of research processes, and (3) determining the endpoints of research processes. Our findings demonstrate how the worth of research activities becomes increasingly assessed and defined by their potential to yield high value in quantitative terms. The analysis makes visible how certain norms and values related to performance metrics are stabilized as they become integrated into routine practices of knowledge production. Other norms and criteria for scientific quality, e.g. epistemic originality, long-term scientific progress, societal relevance, and social responsibility, receive less attention or become redefined through their relations to quantitative indicators. We understand this trend to be in tension with policy goals that seek to encourage innovative, societally relevant, and responsible research.

Full text: https://doi.org/10.1093/reseval/rvx023


Rushforth, A. & de Rijcke, S. (2017). Quality Monitoring in Transition: The Challenge of Evaluating Translational Research Programs in Academic Biomedicine. Science and Public Policy 44(4):513–523.

Abstract

While the efficacy of peer review for allocating institutional funding and benchmarking is often studied, not much is known about issues faced in peer review for organizational learning and advisory purposes. We build on this concern by analyzing the largely formative evaluation by external committees of new large, ‘translational’ research programs in a University Medical Center in the Netherlands. By drawing on insights from studies which report problems associated with evaluating and monitoring large, complex, research programs, we report on the following tensions that emerged in our analysis: (1) the provision of self-evaluation information to committees and (2) the selection of appropriate committee members. Our article provides a timely insight into challenges facing organizational evaluations in public research systems where pushes toward ‘social’ accountability criteria and large cross-disciplinary research structures are intensifying. We end with suggestions about how the procedure might be improved.

Full text: https://doi.org/10.1093/scipol/scw078


Sivertsen, G. (2017). Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective. Palgrave Communications 3.

Abstract

Inspired by The Metric Tide report (2015) on the role of metrics in research assessment and management, and Lord Nicholas Stern’s report Building on Success and Learning from Experience (2016), which deals with criticisms of REF2014 and gives advice for a redesign of REF2021, this article discusses the possible implications for other countries. It also contributes to the discussion of the future of the REF by taking an international perspective. The article offers a framework for understanding differences in the motivations and designs of performance-based research funding systems (PRFS) across countries. It also shows that a basis for mutual learning among countries is needed more than a formulation of best practice, thereby both contributing to and correcting the international outlook in The Metric Tide report and its supplementary Literature Review.

Full text: https://doi.org/10.1057/palcomms.2017.78


Zhang, L., Rousseau, R. & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS ONE 12(3): e0174205.

Abstract

The scientific foundation for the criticism on the use of the Journal Impact Factor (JIF) in evaluations of individual researchers and their publications was laid between 1989 and 1997 in a series of articles by Per O. Seglen. His basic work has since influenced initiatives such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management. Seglen studied the publications of only 16 senior biomedical scientists. We investigate whether Seglen’s main findings still hold when using the same methods for a much larger group of Norwegian biomedical scientists with more than 18,000 publications. Our results support and add new insights to Seglen’s basic work.

Full text: http://dx.doi.org/10.1371/journal.pone.0174205


Piro, F. N., Aksnes, D. W. & Rørstad, K. (2016). How does prolific professors influence on the citation impact of their university departments? Scientometrics 107(3):941–961.

Abstract

Professors and associate professors (“professors”) in full-time positions are key personnel in the scientific activity of university departments, both in conducting their own research and in their roles as project leaders and mentors to younger researchers. Typically, this group of personnel also contributes significantly to the publication output of the departments, although there are also major contributions by other staff (e.g. PhD students, postdocs, guest researchers, students and retired personnel). Scientific productivity is, however, very skewed at the level of individuals, also among professors, where a small fraction typically accounts for a large share of the publications. In this study, we investigate how the productivity profile of a department (i.e. the level of symmetrical/asymmetrical productivity among professors) influences the citation impact of the department. The main focus is on contributions made by the most productive professors. The findings imply that the impact of the most productive professors differs by scientific field and the degree of productivity skewness of their departments. Nevertheless, the overall impact of the most productive professors on their departments’ citation impact is modest.

Full text: https://doi.org/10.1007/s11192-016-1900-y


Piro, F. N. & Sivertsen, G. (2016). How can differences in international university rankings be explained? Scientometrics 109(3):2263–2278.

Abstract

University rankings typically present their results as league tables, with more emphasis on final scores and positions than on clarifying why the universities are ranked as they are. Finding out the latter is often not possible, because final scores are based on weighted indicators whose raw data and processing are not publicly available. In this study we use a sample of Scandinavian universities, explaining what causes differences between them in the two most influential university rankings: Times Higher Education and the Shanghai ranking. The results show that differences may be attributed both to small variations in what we believe are unimportant indicators and to substantial variations in what we believe are important indicators. The overall aim of this paper is to provide a methodology that can be used to understand universities’ different ranks in global university rankings.

Full text: https://doi.org/10.1007/s11192-016-2056-5