Highlights from NASEM Workshop on Assessment and Incentives at U.S. Academic Institutions 

The National Academies of Sciences, Engineering, and Medicine (NASEM) hosted a two-day workshop titled “Rethinking Researcher Assessment and Incentives at U.S. Academic Institutions.” Professors, researchers, and administrators came together to discuss new approaches for assessing researchers at universities beyond traditional metrics such as grants, publications, and citation counts. Participants also explored how new assessment criteria can be implemented at universities across the country. 

[Videocast] [Agenda] 

Day One 

Drs. Julie Risien, Oregon State University, and Mitul Luhar, University of Southern California, co-chairs of the planning committee, opened the workshop. They noted that while the scientific community has been discussing researcher assessment and the need for reform for years, there has been less implementation than expected at this stage, which was the impetus for bringing this group of stakeholders together. Marcia McNutt, President of the National Academy of Sciences (NAS), also offered opening remarks, noting that the ongoing problem of public (dis)trust in science demands openness and transparency across the scientific community. However, these changes in expectations often are not reflected in researcher assessment, something that must change to sustain the movement toward more trustworthy science.  

The Day One keynote speaker, Dr. Laurie Leshin, California Institute of Technology, spoke about her experience as president of Worcester Polytechnic Institute (WPI), offering a case study of how an institution might change its approach to researcher assessment. Although WPI encouraged faculty to focus on research and educational impact, it did not reward faculty for doing so, instead relying on traditional metrics in promotion decisions. Over four years, WPI implemented new assessment criteria requiring scholarly contributions to have an impact beyond the institution. Dr. Leshin summarized WPI’s approach as “broadening the bar” rather than raising or lowering it, and as aligning institutional values with promotion decisions.  

The rest of the day’s programming featured panel discussions and facilitated breakout sessions. The first panel—which included representatives from academia, publishing, and the nonprofit sector—discussed the current state of researcher assessment and academic advancement at higher education institutions in the U.S. The second panel, which included FABBS scientist Dr. Michael Dougherty, focused on how to initiate culture change at research institutions to incentivize certain actions, such as community-based research, sharing data, and mentorship. 

Day One ended with two breakout sessions covering six topics, each introduced by a flash talk: 

  • Engaging the Community in Research and Addressing Local, Societal, and Regulatory Needs 
  • Conducting Research-Centered Mentorship for Early Career Researchers 
  • Fostering Team Science and Interdisciplinary Collaborations 
  • Contributing to Open Access Publishing and Data Sharing 
  • Enhancing Research Transparency and Strengthening Research Integrity 
  • Facilitating Technology and Knowledge Transfer 

Day Two 

The workshop’s second day featured keynote speaker Dr. Elizabeth Gadd of Loughborough University in the United Kingdom. Dr. Gadd has two roles: Head of Research and Innovation Culture and Assessment at Loughborough and Vice Chair of the Coalition for Advancing Research Assessment (CoARA), a collective of over 700 organizations committed to reforming the methods and criteria by which researchers are evaluated. The coalition has ten core commitments supporting a common vision for assessment reform, including avoiding the use of research organization rankings and recognizing the diversity of research and research careers. During her talk, Dr. Gadd spoke to the lessons she has learned throughout her work on researcher assessment and offered examples of new assessment methods, such as using narrative CVs to highlight both research and innovation. 

Although some institutions have been successful in changing their assessment criteria and methods, Dr. Gadd cautioned attendees about potential pitfalls, suggesting that: 

  1. University officials should think carefully before using assessment to incentivize behaviors.  
  2. Researcher assessment cannot be fixed in isolation, as researchers exist in a wider ecosystem. 
  3. There is no such thing as a perfect assessment; uncertainty is a natural part of assessment due to limited methods, incomplete data, and the lack of a widely shared definition of excellence. 

Dr. Gadd ended her talk with an introduction to the SCOPE framework for research evaluation developed by the International Network of Research Management Societies (INORMS). This five-stage model is built on three core principles: (1) evaluate only where necessary; (2) evaluate with the evaluated; and (3) draw on evaluation expertise. SCOPE offers a step-by-step process to help anyone involved in research evaluation plan new assessments and review existing ones: 

  1. Start with what you value. 
  2. Context considerations. 
  3. Options for evaluating. 
  4. Probe deeply. 
  5. Evaluate your evaluation. 

The remainder of Day Two included reports from the previous day’s breakout sessions and a final panel discussion on opportunities, challenges, and next steps for implementing new assessment methods and criteria at U.S. institutions. 