Challenges in research impact assessment

Deciding on the appropriate distribution and allocation of research funding in any sector is no easy task. In the case of the life, biomedical and health sciences, funders making such decisions may consider the greatest needs in research, existing gaps in topic or disease areas, or research that has the potential to demonstrate the greatest breakthroughs and health returns. While each of these considerations is taken into account to some extent at a national or, in some cases, a global funding level, there is little evidence to suggest that this is done systematically across funders in one country, let alone globally. Part of the challenge is the lack of accurate data both on the inputs into research (funding investments) and on the accompanying attributed outputs and wider outcomes and impacts. The Global Forum for Health Research, for example, has published estimates of global spending on health research for the past 10 years, based primarily on surveys conducted by the OECD (Landriault and Matlin, 2009). In 2014 the UK Clinical Research Collaboration (UKCRC) published the third UK-wide analysis of public and charity funded health relevant research since 2004, for which it used the Health Research Classification System (HRCS) to categorize projects corresponding to £3bn of spend in 2014 (UK Clinical Research Collaboration, 2015). The UKCRC report was helpful in demonstrating, for example, that half of all funding is concentrated in “basic research” (underpinning and aetiology research), although this proportion relative to other research has decreased over the ten-year reporting period. Data were collected from the awards databases of the 64 participating funding institutions. However, the interpretation of this type of data can be complex and resource-intensive (Terry et al., 2012), and is likely to be inconsistent if it is to be done across many different funding institutions globally. Similarly, the inputs and associated outputs, outcomes and wider impacts of research are also difficult to track, mainly because the research system has traditionally relied on academic publications as the main output type that is systematically tracked and documented, and even that is seldom attributed directly to funding sources.

In this article, we describe how, as a research community (funders, administrators, researchers and beneficiaries), we are beginning to create more systematic ways of capturing inputs and then tracking these through to the wider outcomes and impact of research, but we propose that there is still a long way to go. We explore this by using a framework through which research can be assessed for public benefit, centering on three broad elements or “3e’s” of research, shown in Table 1. This framework asks whether research is effective (that is, does it produce any outputs, outcomes and/or societal benefits or impact?), efficient (that is, how productive is the research system? is research happening at an appropriate “rate”? is there waste in research?), and equitable (that is, is the research achieving specific goals, reaching certain beneficiaries, or addressing specific health needs?).

Table 1 The “3e’s” for assessing research, with associated assessment questions

We acknowledge that in order to answer these three questions systematically there is an inevitable cost (both to researchers and to funders), which is spent in collecting and analysing the data required to assess research. In this article we therefore apply the same framework as a lens to explore the exercise of assessing research (that is, whether we have a research assessment system that is effective, efficient, and equitable). In effect, we are exploring two questions: (1) to what extent does the research ecosystem and community have the infrastructure in place to systematically assess research (in line with the 3e’s)? and (2) would the inevitable transaction cost of such systematic assessments be appropriate?

Conceptualizing 3e’s for research impact assessment

The use of 3e’s as an approach to evaluation in general is not new. Effectiveness, efficiency and equity have been used in a range of settings, including general programme evaluation (Reinke, 1994), programme evaluation of quality of health care services (Donabedian, 1988), evaluating hospital performance (Davis et al., 2013), health system performance (Aday et al., 1999; Aday, 2004) and health promotion (Tones and Tilford, 2001). Outside of health, there are also examples of considering 3e’s to assess proposed options for climate change initiatives (Stern, 2007; Angelsen, 2009), and achieving value for money in international development (Department for International Development, 2011; OECD, 2012). Finally, the 3e’s have been conceptualized and used in other forms. For example, DFID’s approach to value for money uses three different terms: economy (where they examine the inputs to their programmes), efficiency and effectiveness (which includes considerations of outputs that lead to equity) (Department for International Development, 2011).

Most of these examples relate to the evaluation of delivering a programme or intervention. While programme evaluation shares characteristics and methodological approaches with research impact assessment, the focus in this article is specifically on how research is assessed. Research impact assessment can be thought of as research on research, with the aim of providing analysis that describes what works in research, helps better allocation of research funding, creates accountability for research, and supports advocacy initiatives in policy and practice (Morgan Jones and Grant, 2013).

Previous studies have reviewed the conceptual tools that have been developed for understanding, assessing and describing research activity (Banzi et al., 2011; Bornmann, 2013; Guthrie et al., 2013; Milat et al., 2015; Greenhalgh et al., 2016). The methods used within these tools include the use of bibliometrics to assess academic impact, quantitative indicators and metrics on economic and health outcomes, qualitative narratives and case studies, and conceptual frameworks such as logic models and related theories of change. All of these require data on the inputs of research and, depending on the questions asked in the assessment, associated data on outputs, outcomes and impact, or a combination of the three. Figure 1 is a simplified illustration of these essential elements of research and how we have conceptualized the 3e’s in the context of these elements. While we are aware of other conceptual frameworks for describing research processes (Buxton and Hanney, 1996; CAHS, 2009; Guthrie et al., 2013; Greenhalgh et al., 2016), we are using this simplified illustration to contextualize the 3e’s framework. The inputs of research include the funding invested, knowledge brought in, and resources required to deliver the research. The research process includes all the activities that enable the research to happen (ie reviewing of evidence, data collection, analysis, reporting and so forth). Asking whether these processes are occurring optimally, whether there is waste or duplication of effort, or indeed whether there is a lack of productivity when comparing across research groups can help assess the efficiency of this process, which we explore further below.

Figure 1

Essential inputs and outputs, outcomes and impact of the research process used to explore the 3e’s (effectiveness, efficiency and equity) in research assessment.

Research activity leads to outputs, outcomes and wider impact, which can serve to tell us whether research has been effective. Finally, the information on inputs, research processes and outputs, outcomes and impact can all serve to determine if research is equitable. We explore assessment for equity further in the following sections, but emphasize that we interpret this in this context as whether the research achieves specific goals, reaches certain beneficiaries, or addresses specific health needs.

Applying the 3e’s to research impact assessment

In the following sections we explore these 3e’s further and demonstrate that the research community as a whole, including funders, researchers and administrators, is potentially in a position where it can assess or evaluate research not just according to academic outputs (production of knowledge), but also its outcomes and/or impact (effects on society). Such data are essential in being able to assess research itself for its effectiveness, efficiency and equity and we explore how each of these are achieved, or could be achieved. Furthermore, we argue that the various assessments of research that currently exist are primarily examining the effectiveness of research, and less attention is paid to whether research is efficient and equitable, mainly because the tools to do so do not yet exist. There is seldom a systematic attempt to gather data that shows whether research was produced in the most optimized way, or benchmarked for performance (efficient), or if it reached certain beneficiaries, or addressed specific health needs (equitable).

Effectiveness in research

Taking our simplified definition of effectiveness (Table 1), assessing whether research is effective simply means finding out if it produced any outputs, outcomes and/or societal benefits or impact. The main unit of analysis required is simply a measure of outputs (or outcomes and/or impact).

At its core, the proximate role of research is to produce new knowledge and understanding and to build on (or challenge) previous knowledge, which can then lead to improved understanding or benefits to society. The academic and funding community monitors, audits and/or evaluates research activity primarily by quality and excellence standards in the production of knowledge through journal articles, and discussions are ongoing on how to improve the measurement of “quality” of research through publications (Boaz et al., 2003). If research is assessed purely for its production of knowledge (ie academic publications), then the growth in publications and attempts to find the highest quality output through methods such as bibliometrics may serve this purpose. Similarly, a combination of publication and patent data can also demonstrate performance, as exemplified by the Elsevier report on comparative performance of UK research (Elsevier, 2013).

If, however, our focus shifts to non-academic societal outcomes and impact, the tools and methods for gathering this evidence vary. Publications alone, while capturing the contributions of research in the knowledge sphere, do not serve to systematically capture the wider impacts on society arising from research. More recently, therefore, research funders are collecting extra data to assess research on its secondary outcomes and benefits to wider society or “impact”. In the United Kingdom, for example, the Higher Education Funding Council for England (HEFCE) for the first time based 20% of the overall assessment on non-academic impact in the 2014 Research Excellence Framework (REF). The National Institute for Health Research (NIHR), the Medical Research Council (MRC) and other funders, including the medical research charities, collect information beyond publications in their progress and annual reports. This means we have a rich database available to us that demonstrates the effectiveness of research.

The ways in which we capture data beyond academic publications have also developed. For REF 2014, HEFCE chose to collect information on impact from UK researchers in the form of “impact case studies” (a four-page narrative), which are available to read in an online searchable database. Similarly, Research Councils and funding bodies in the United Kingdom regularly collect information (as descriptive text in annual reports) on the outputs of the research they have funded, beyond academic publications, and report on these. Much of this data on wider impacts was initially collected in the form of reports, using free text written into word-processing documents. There is now a growing number of tools to facilitate the collection of evidence for these wider outcomes/impact, such as Researchfish®, Symplectic, ImpactStory and Kilola. By adopting these tools, funders can now also analyse the outputs from funded projects and report on them. Reports using Researchfish data have been published by funders including Cancer Research UK, the MRC, and the Science and Technology Facilities Council, and are linked from the Researchfish website; others such as the Association of Medical Research Charities are currently working on the analysis of their Researchfish data.

In these examples, we can see that most assessments of research are assessing whether the inputs of research (that is, funding) are producing outputs (knowledge in the form of publications) and, more recently, other outcomes or wider impact. Taking our simplified definition of effectiveness in Table 1, we see that publication lists, tools that collect research outputs and narrative descriptions of impact on society are therefore effective ways of demonstrating the “input-output” pathway of research. The analyses of the 6,679 impact case studies submitted to REF 2014 concluded that it is possible to extract useful information on the impact of research through impact case studies, especially if used in combination with text mining and other automated tools (King’s College London and Digital Science, 2015). If the role of research assessment is to assess whether research inputs are producing outputs and outcomes/impact, that is, whether research is effective, then all these tools serve this purpose.

Efficiency in research

Efficiency is generally reported in terms of the cost per unit of production (for example, how much does it cost to produce a number of cars in a production line per year?). In research it can essentially be used to test research performance—measuring whether the ratio of output/input of research can be optimized—by comparing, for example, against other research programmes or countries (external benchmarking), or against previous years’ performance (internal benchmarking). We have summarized this in Table 1; the working definition of efficiency in research asks how well the health research outputs, outcomes and impact occur. This often implies asking if there is waste in research, which means we now need two units of analysis: inputs and outputs (or outcomes and/or impact).

Efficiency in terms of academic outputs is already assessed using a crude output/input ratio. In 1997, the UK Chief Scientist Robert May (May, 1997) and Grant and Lewison (1997) were the first to calculate publications and citations per unit of funding spent by country. The UK Department for Business, Innovation and Skills has since published similar calculations in its reports assessing the performance of the United Kingdom compared with seven other research-intensive countries (Elsevier, 2013). For wider outcomes and impact, however, inputs and outcomes/impact are not clearly linked, making this calculation much more challenging. Taking the example of the REF 2014 process, which was largely peer-review based, the data were not available to conduct systematic, rigorous benchmarking of research outcomes and impact. The impact data were available in the form of narrative text, with no requirement to produce standardized reporting of the reach and significance of impact. For example, in our own analysis of these narrative texts, we had envisaged being able to extract quantitative information and to group such information by various indicators, thus enabling us to develop return-on-investment type estimates (King’s College London and Digital Science, 2015). However, this was not feasible because the case studies contained a very large amount of numerical data that was used inconsistently and would have needed converting to standard units. Financial information was expressed in different currencies, while measures and calculations of health gains (in terms of quality adjusted life years, or QALYs) were inconsistent. To calculate a crude estimate of total health gain, we had to supplement and manipulate the data given in the case studies with external data cited in their references or with our own judgement (King’s College London and Digital Science, 2015). Moreover, the actual input data were not available at all, as researchers were not required (thankfully) to link each individual impact to a funding source or proportions thereof.
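As a minimal illustration of this standardization problem, the sketch below (in Python, using entirely hypothetical case-study excerpts and placeholder exchange rates) extracts monetary figures quoted in different currencies from narrative text and converts them to a single unit, the kind of pre-processing that any crude return-on-investment estimate from free-text case studies would require.

```python
import re

# Hypothetical excerpts from narrative impact case studies (illustrative only).
case_studies = [
    "The intervention saved the NHS an estimated £12.5 million per year.",
    "Licensing income of $4.2 million was reinvested in follow-on research.",
    "Screening avoided treatment costs of EUR 3,000,000 across the region.",
]

# Assumed conversion rates to GBP: placeholders, not real market rates.
TO_GBP = {"£": 1.0, "$": 0.75, "EUR": 0.85}

# Match a currency marker followed by a number, with an optional 'million' multiplier.
PATTERN = re.compile(r"(£|\$|EUR)\s?([\d,.]+)\s*(million)?", re.IGNORECASE)

def extract_gbp(text):
    """Return all monetary amounts found in the text, converted to GBP."""
    amounts = []
    for currency, number, million in PATTERN.findall(text):
        value = float(number.replace(",", ""))
        if million:
            value *= 1_000_000
        amounts.append(value * TO_GBP[currency.upper()])
    return amounts

total_gbp = sum(v for text in case_studies for v in extract_gbp(text))
print(f"Total benefit, standardized to GBP: £{total_gbp:,.0f}")
```

Even a toy example such as this glosses over inflation adjustment, double counting and, crucially, attribution to funding sources, which is partly why such estimates were not feasible in the analysis of the REF case studies described above.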

One could foresee, however, a scenario in which this information is captured systematically in more standardized units, whereby the impact of different research projects or programmes could be compared against each other, provided project or programme outputs and impact were also linked to funding. For example, the health gains achieved per £1 spent in one project (in the form of QALYs) could be compared with the health gains per £1 spent in another, provided the research investment, or inputs, behind such outcomes could be attributed to one or more funding sources. Tools such as Researchfish that link inputs to research outputs could in future enable such calculations, allowing funding bodies at least to make decisions about which research is working more efficiently. Within the Researchfish platform, research outputs (and outcomes and impact) are gathered through a “question set” that ranges from academic publications to patents and commercialization activities, to informing policy, products and interventions. Researchers can attribute these entries to research grants and awards, thereby enabling funders to capture a range of data submitted by the researchers they fund and to evaluate the impact of their research funding by various units of assessment (for example, disciplinary focus, research funding mechanism, host institution and so on). Such evaluations strengthen accountability to the taxpayer and donor communities, and can be used to assess the effectiveness of different aspects of research funding (Hinrichs et al., 2015). Tools such as these, if used extensively, could provide funders with agile ways to discover how work across their research portfolio is progressing and what it is producing (such as knowledge, leverage and connections), and enable assessing for efficiency with more standardized data. Further considerations would then have to be made on whether efficiency is more important than, say, equity considerations, and we note the challenge of trade-offs in our discussion.
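To make this scenario concrete, the sketch below (hypothetical data and field names throughout; it does not use the Researchfish platform or its question set) shows how linked records of funding inputs and standardized health-gain outputs could support a crude efficiency comparison of QALYs gained per £1m spent, with each output attributed fractionally across awards.

```python
from collections import defaultdict

# Hypothetical awards (inputs): funder names and spend are illustrative placeholders.
awards = {
    "AW-001": {"funder": "Funder A", "spend_gbp": 2_000_000},
    "AW-002": {"funder": "Funder B", "spend_gbp": 1_500_000},
    "AW-003": {"funder": "Funder A", "spend_gbp": 3_000_000},
}

# Hypothetical outputs: health gains (QALYs) attributed fractionally to awards.
outputs = [
    {"qalys": 450.0, "attribution": {"AW-001": 0.7, "AW-002": 0.3}},
    {"qalys": 120.0, "attribution": {"AW-003": 1.0}},
]

# Aggregate spend and attributed QALYs per funder.
spend_by_funder = defaultdict(float)
qalys_by_funder = defaultdict(float)

for award in awards.values():
    spend_by_funder[award["funder"]] += award["spend_gbp"]

for output in outputs:
    for award_id, share in output["attribution"].items():
        funder = awards[award_id]["funder"]
        qalys_by_funder[funder] += output["qalys"] * share

# Crude efficiency ratio: QALYs gained per £1m spent, per funder.
for funder, spend in spend_by_funder.items():
    ratio = qalys_by_funder[funder] / (spend / 1_000_000)
    print(f"{funder}: {ratio:.1f} QALYs per £1m spent")
```

The critical assumption here is the attribution fraction attached to each output; as noted above, researchers are not currently required to supply this, which is the main obstacle to such calculations in practice.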

In addition to these data-linking challenges, a further challenge to benchmarking impacts for efficiency comparisons is that there are no standardized measures of what constitutes a good impact story, nor of which types of research are producing valuable impacts. It has recently been shown that the general public and researchers value impact in different ways, which calls for the development of future generic measures of “impact utility” using micro-economic approaches such as contingent valuation and discrete choice modelling (Pollit et al., 2016). At present, therefore, a systematic “output/input” or “outcome (and impact)/input” calculation, taken to be our working definition for efficiency in Table 1, is not strictly used, with the exception of studies that benchmark publications and citations per £ spent (Elsevier, 2013). An alternative approach is to measure the rate of return on research, with a view to comparing these efficiencies across funding programmes, disease areas, or countries’ investments in research. There are examples where research benefits have been quantified within a disease area, such as cancer (Glover et al., 2014) and cardiovascular research (Buxton et al., 2008), but limited data availability and the necessary accompanying assumptions mean that direct comparisons of returns on investment should be avoided.

There is also a body of work acknowledging that there may be inefficiencies in research, and that efforts should therefore be made to reduce waste in research (Chalmers et al., 2014). Chalmers and Glasziou (2009) had previously estimated that the cumulative effect of these inefficiencies is that about 85% of research investment is wasted—not taking into account the inefficiencies in regulation and management of research. We note that the concept of waste requires some critical reflection and definition. While in some fields duplicating studies could indicate waste (since existing rigorous studies may have already answered relevant health-related questions), in others such duplication is necessary in order to validate findings that may not yet be definitive. Furthermore, most research systems include an element of competition, which is regarded as beneficial to the research process and may mean that multiple researchers are tackling the same research questions at the same time, thereby encouraging innovation rather than waste. Nevertheless, the principle of reflecting on potential waste still applies in these considerations. Waste can be reduced by ensuring new research builds on previous research and/or best practice, for example, by requiring systematic reviews as part of the research proposal, as encouraged by initiatives such as the EBRNetwork (http://ebrnetwork.org); by making protocols available to the public to ensure study designs build on previous experience; by encouraging publication of raw data (for example, clinical trial data via IMPACT, http://ottawagroup.ohri.ca/disclosure.html); and by encouraging open access, making research findings more accessible so that knowledge is promoted and built upon “efficiently”. The NIHR in the United Kingdom has been identified as championing the reduction of waste in research by requiring systematic reviews for any application submitted to it, involving patients and the public in decision making, and making full protocols available for a number of its research projects (https://monanasser.wordpress.com/2015/12/03/how-to-reduce-waste-in-research-from-edinburgh-to-vienna-and-sarajevo; http://www.nihr.ac.uk/funding/pgfar-application-process.htm).

As a research community and ecosystem, therefore, while we do not yet systematically assess research outputs, outcomes and wider impacts for their efficiency, the tools are available to do so. Furthermore, the initiatives to reduce waste a priori, that is, at the grant application assessment stage, suggest that there is willingness in the research community to reduce waste in research and promote efficiency. However, if the role of research assessment is to assess whether research inputs are producing outputs and outcomes/impact at an appropriate rate, ie whether research is efficient, then better tools are required to link these outputs to research inputs and systematically make such comparisons, especially for research outcomes and impacts that are not counted in the same way as academic publications.

Equity in research

Assessing research for equity involves setting priorities for research and ensuring that inputs and outputs, outcomes and impact are aligned to intended equitable social goals (which include, for example, eliminating extremes of wealth and poverty, avoiding neglect of specific disease areas, and ensuring gender and race equality). Deciding on those particular goals depends on exactly where equity needs to be achieved. Another way to achieve equity is through the equitable funding of researchers, that is, ensuring that the allocation of health research funding (inputs) is done equitably and without biases (for example of gender, age and institutional ranking). In this article, however, because we are focusing on the outcomes and impact of research, we are not referring to equitable funding but rather to the need to conduct research assessment to achieve equity. Consequently, we acknowledge that achieving equity will require a value judgement by those making the decisions on how funds are distributed. This helps us distinguish equity from a broader concept such as diversity (Stirling, 2007), as the intention is to encourage careful thought on allocating research funds according to pressing social goals. A helpful definition of equity in this context is the distribution of benefits in a target population in relation to individual needs (Roemer, 1980; Reinke, 1994), which could be health needs.

Identifying health needs and matching these to funding allocation, however, can be challenging (Guindo et al., 2012). To be equipped to consider equity in research assessment, data is needed on how the outputs, outcomes and impact of research have contributed to specific health needs, or specific beneficiaries of research. We describe three challenges with respect to research assessment for equity below.

First, there is a challenge in mapping health expenditure overall. Few funders publicly report disaggregated statistics on health R&D expenditures, and there is a lack of uniformity in the use of R&D classification systems across different funders (Terry et al., 2012). Some initiatives have begun to address this challenge, such as the WHO Global Health Observatory, which identifies gaps in health R&D (WHO, 2016).

Second, there are challenges in identifying and then prioritizing health needs when it comes to research allocation. In the aforementioned 2014 UKCRC analysis, burden of disease (measured using Disability Adjusted Life Years, or DALYs) was matched against HRCS categories to identify differences between the health research funded and the burden of disease (UK Clinical Research Collaboration, 2015). An analysis by Røttingen et al. of research investment and subsequent outputs confirms that there are substantial gaps in the global landscape of health R&D, especially for and in low-income and middle-income countries (Røttingen et al., 2013). Viergever (2013) has also demonstrated a mismatch between the health R&D that is needed and that which is undertaken, especially in the areas of neglected diseases, neglected populations, and neglected products such as diagnostics and platform technologies, because of the favoured investments in drugs and vaccines. Part of the challenge, he argues, is that R&D is not needs-driven, there is no system to facilitate the prioritization of health needs and, finally, the research system is largely dependent on market incentives. Furthermore, stakeholders outside the research community can also influence prioritization. For example, private investment in research can be driven primarily by the expected rate of return, which can distort equity considerations. Increasingly, public engagement has a role in setting priorities for health research, which can support its societal legitimacy and provide validation for making value judgements in prioritizing research needs.

Finally, a challenge shared with the consideration of “efficiency” in research assessment is that we do not have systematic and standardized reporting on who benefits and on “what works” in research funding (ie the outcomes and impact components of Fig. 1). The “gap maps” from the International Initiative for Impact Evaluation (3ie), for example, demonstrate what is known and not known from impact evaluations and systematic reviews in particular areas such as education, HIV and AIDS, or agriculture (3IE, 2016). Gathering the data for this, however, can be challenging. For example, the latest edition of Millions Saved by the Center for Global Development identified cases of proven success in global health (Glassman and Temin, 2016), but this required issuing public calls for submissions of good practice, reviewing systematic review databases and conducting interviews with subject matter experts. Women, for example, may be disadvantaged as beneficiaries of research, in terms of its health, societal and economic impacts (Sen et al., 2007; Kuhlmann and Annandale, 2015; Schiebinger et al., 2011-2015). There is evidence to suggest that research that does not account for gender differences can result in inaccurate conclusions about how women respond to disease, and this in turn will influence the effectiveness of treatment choices (Bartlett et al., 2005; Johnson et al., 2014).

Much more has been written on the subject of equity in research, and we will not attempt to list all the evidence nor enter into discussion about how equity is judged, as such arguments could generate different subjective views. However, we list the above examples to demonstrate the existence of activity in this area, and to state that to bring equity into research impact assessment, data still need to be collected systematically. Ultimately, this will help prioritize resource allocation, and we acknowledge this will still require value judgements. As discussed earlier, while data on research inputs and processes are collected by different funders, data on outcomes and impact are not yet collected in a systematic and standardized format.

3e’s in research assessment processes

We have so far examined whether current assessment processes are capable of assessing research for its 3e’s, and have argued that despite goodwill, there are still infrastructure, data collection and data sharing challenges to overcome. We now explore the 3e’s in the process of assessing research.

Around the world, most assessments of research performance are based on the number and/or quality of publications; this is the case in Norway, Sweden, Canada, Australia, Italy, Denmark, Spain, Finland and the Czech Republic (Krapels et al., 2016). There has been some debate on whether peer review and bibliometrics are the right tools for assessing research, but they are broadly accepted as key tools to assess research outputs (academic outputs). In practice, we also see much of the information on research outputs, outcomes and impact used effectively—for example, data collected through tools such as Researchfish have enabled funders such as the MRC to make strategic decisions about what works in their research funding portfolio (Hinrichs et al., 2015).

The assessment of non-academic outcomes and impact is relatively new, and such outcomes and impact are therefore more difficult to assess given the lack of systematic and standardized reporting (as noted earlier). Following REF 2014, HEFCE commissioned a number of evaluations and reviews of its assessment process, including an independent review of the REF commissioned by government to provide conclusions and suggestions for the next assessment cycle (Stern, 2016). If we simply wish to answer whether or not the assessment process worked, ie whether or not it is effective as a means of capturing research performance, then we would argue that to a large extent this was the case in REF 2014 (Manville et al., 2015a, b).

What has not yet been demonstrated, however, is whether research assessment is efficient (or whether it is “too expensive” to justify). Research assessment entails an inevitable transaction cost, both to the funder in analysing the outputs and to the research organizations that need to prepare the data to demonstrate outputs. Processes such as peer review have been noted to carry substantial costs for upholding quality (Wessely, 1998), and questions have been raised about their cost-effectiveness (and indeed overall effectiveness) (Godlee et al., 1999).

The total transaction costs for both the universities and the funding councils for REF 2014 were 2.4% of the total money allocated (Technopolis, 2015). To investigate whether this figure was higher or lower than expected, we conducted a brief search for comparable transaction costs elsewhere—both in research assessment and in other areas. We found two ways in which transaction costs of assessment were reported: (i) private or internal transaction costs of organizations being assessed in preparing for assessment (such as universities’ internal costs in preparing REF submissions), and (ii) costs of undertaking assessments for the assessor or funder (usually expressed as a percentage of their total expenditure). We show a sample in Table 2, which includes total transaction costs (that is, costs both to the assessor and to the institutions being assessed, such as HEIs). The resulting figures are not intended to compare like for like, as each calculation differs in the methods employed to estimate individual costs, but they serve to give a rough representation of transaction costs.

Table 2 Transaction costs as percentage of received funding

The first conclusion we drew from this table is that direct comparisons are challenging, given the varying ways in which these estimates were calculated. Costs are more often shown in terms of direct costs to the organization doing the assessment (as in (ii) above), since this is easier to calculate for a single organization than turning to the various assessed organizations to calculate the time they spent preparing for the assessment. An example of this is a report by Morton et al., which compares administration as a percentage of total budget for UK funders such as the Wellcome Trust, MRC and DFID, ranging from 2.8 to 7.1% of their total budgets (Morton et al., 2012). There is one example from another sector that could be comparable to assessment preparation costs (as in (i) above): farmers’ transaction costs (private transaction costs), incurred as a proportion of the premium received for being part of an agri-environmental scheme, have been reported to be as high as 25% (Mettepenningen et al., 2009). We also note that the figures for RCUK, for example, may be higher if calculated today given the fall in success rates (although potentially balanced out again by efficiencies gained in internal administration since then). Although we cannot draw firm conclusions about how transaction costs of assessment in higher education compare with other forms of assessment in other sectors, we can observe that these costs vary and that the process of assessment has the potential to become more efficient.

Finally, considerations of equity within the research assessment process can also drive how individuals, projects and institutions are assessed and rewarded. There is evidence to suggest that performance assessment can serve either to encourage or to discourage equity in the distribution of research. Gender inequity, for example, could arise as a result of gender bias in both research and research assessment; women traditionally have received fewer awards than men, are less often included as beneficiaries of research, and are cited less (Ovseiko et al., 2016). Research impact assessment, if motivated and driven by equitable principles, can become an engine for creating equity in the allocation of research funding (Ovseiko et al., 2016). Part of what may need to improve are the methods we employ within research assessment to avoid inequity or unequal opportunity. For example, drawing on a mixture of disciplines and incorporating diversity in review panels can help avoid unconscious biases among panel members. Adopting the appropriate method, whether it is peer review, the use of metrics, or alternative methods, is therefore important. In the independent review of the role of metrics in research assessment and management, a correlation analysis was undertaken to compare the use of individual metrics with the outcomes of the REF peer review process (Wilsdon et al., 2015). The review found evidence of statistically significant differences in the correlation with REF scores for early-career researchers and women in a small number of Units of Assessment (Wilsdon et al., 2015).

Concluding thoughts

The allocation of research funding can benefit greatly from robust analysis of what has worked in research, and, in turn, these analyses can help advocacy initiatives and demonstrate accountability to taxpayers and donors. Capturing and mapping data on the inputs, processes, outputs, outcomes and impact of research is crucial for these analyses and helps us conduct research on research. We have argued here that the research community as a whole, including funders, researchers and administrators, is potentially in a position where it can assess or evaluate research not just according to academic outputs (production of knowledge), but also according to its outcomes and/or impact (effects on society). Using an exploratory framework that assesses the 3e’s of research and research assessment, we also argue that most assessments primarily examine the effectiveness of research, as tools are not yet available to systematically assess research for its efficiency and equity.

We have also made a distinction between general evaluation and research impact assessment, emphasizing that the latter allows for better allocation of research funding, creates accountability for research, and supports advocacy initiatives in policy and practice (Morgan Jones and Grant, 2013). Each of the 3e’s is an important consideration for improving assessments for these purposes and can help answer crucial funding policy questions. Essentially, the 3e’s framework can help answer the following policy questions with regard to research funding: Which of our funding programmes are effective? Which funding programmes are most efficient? How do we allocate research funding? And, finally, is the transaction cost worth it?

Furthermore, we acknowledge that these 3e’s are not necessarily mutually reinforcing, and combining them may involve trade-offs. For example, it has been argued that “equity” still struggles to find its place as an equal among the traditional public administration values of the 3e’s (Norman-Major, 2011). In his application of the 3e’s to programme evaluation, Reinke (1994) rightly points out, for example, that the high cost of equitably serving hard-to-reach members of the population may require efficiency considerations to be compromised. However, we echo Reinke’s sentiment that these considerations are informative and important in their own right and are increasingly being used in evaluation and assessment in the same or similar forms. Therefore it is important to consider not only what the research funding policy questions are in relation to these 3e’s, but also the inevitable value judgements that will be required. We suggest that this framework does not replace such judgements but helps support those decisions.

We acknowledge that this 3e’s framework needs further refinement and invite readers to examine it critically. Our purpose in writing this article is driven by the fact that assessments occur anyway, and significant investments have gone into reviewing them. The recently published Stern review of the Research Excellence Framework was based on the assumption that research assessment exercises have contributed productively to driving competition and fostering research excellence (Stern, 2016). The existence of the review itself also points to the need for recommendations to shape future assessment exercises. Considering the 3e’s in research assessment, especially in the systematic manner that we are suggesting, will entail inevitable transaction costs. Our crude comparisons have shown that these may actually be comparatively small, although implementing much of what we suggest here could increase those costs. To manage those costs, the research community and infrastructure would have to be tailored to systematically capture the information needed for such assessments. The 3e’s of research assessment provide an alternative approach to framing research assessment so that it more holistically addresses research funding challenges, while remaining mindful of the realistic transaction costs that could be incurred.

Additional information

How to cite this article: Hinrichs-Krapels S and Grant J (2016) Exploring the effectiveness, efficiency and equity (3e’s) of research and research impact assessment. Palgrave Communications. 2:16090 doi: 10.1057/palcomms.2016.90.