Are we investing wisely in research for society?

For decades, conversations between research funders, users, and producers have focused on different aspects of what evidence is, the roles it plays in policy and practice, and the ways in which those roles can be enhanced and supported. Most researchers feel unequivocally that ‘more research’ is always better—and funders and governments seem to agree (Sarewitz, 2018). Governments are increasingly directing investments explicitly towards creating the evidence base for better decision-making. For example, funding has been explicitly focused on the United Nations’ Sustainable Development Goals (UKRI-UNDP, 2018). The UK government has made several targeted investments, including the £1.5 billion Global Challenges Research Fund to address substantive social problems (Gov. UK, 2016; UKRI, 2017), and, in health, the thirteen (up from nine) Collaborations for Leadership in Applied Health Research and Care, which received £232 million over 2008–2019 (NIHR, 2009). This investment looks set to continue, with a further £150 million allocated to the Applied Research Collaborations (NIHR, 2018). In the US, the Trump administration recently signed into law $176.8 billion for research and development, of which $543 million is specifically for translational health research (Science, 2018). These funds are made available to researchers with an effective proviso that the research is targeted towards questions of direct interest to policymakers and practitioners.

There has also been an increase in the infrastructure governments provide, such as scientific advisory posts and professionals (Doubleday and Wilsdon, 2012; Gluckman, 2014), and a range of secondment and fellowship opportunities designed to ‘solve’ the problem of limited academic-policy engagement (Cairney and Oliver, 2018). The UK Government recently asked departments to produce research priority areas (Areas of Research Interest (ARIs)) to guide future academic-policy collaboration (Nurse, 2015). Yet there has been almost no evaluation of these activities. There is limited evidence about how to build infrastructure which supports impactful evidence use, and about the impact of this investment (Kislov et al., 2018). We simply do not know whether the growth of funding, infrastructure, or initiatives has actually improved research quality, or led to improvements for populations, practice or policy.

Thus, despite our ever-growing knowledge about our world, physical and social, it is not easy to find answers to the challenges facing us and our governments. Spending ever-increasing amounts on producing research evidence is unlikely to help if we do not understand how to make the most of these investments. Discussions about waste within the research system often focus on valid concerns about reproducibility and quality (Bishop, 2019), but until we also understand the broader political and societal pressures shaping what evidence is produced and how, we will not be able to reduce this waste (Sarewitz, 2018). In short, our research systems are not guided by current theory about what types of knowledge are most valuable for addressing societal problems, how to produce useful evidence, or how to use this knowledge in policy and practice settings.

Who knows about how to improve evidence production and use?

Fortunately, even if under-used, there is a significant body of academic and practical knowledge about how evidence is produced and used. Several disciplines take the question of evidence production and use as a core concern, and this inherently transdisciplinary space has become populated by research evidence from different academic and professional traditions, jurisdictions and contexts.

Much of the funded research into knowledge production and use has been conducted in health and health care, and in other applied disciplines. Although there are perennial inquiries about the ‘best’ research methods which should inform policy and practice (Haynes et al., 2016), this field has offered some very practical insights, from identifying factors which influence evidence use (Innvaer et al., 2002; Oliver et al., 2014; Orton et al., 2011), to identifying the types of evidence used in different contexts (Dobrow et al., 2004; Oliver and de Vocht, 2015; Whitehead et al., 2004). Researchers have explored strategies to increase evidence use (Dobbins et al., 2009; Haynes et al., 2012; Lavis et al., 2003), and developed structures to support knowledge production and use—in the UK, see, for example, the What Works Centres, Policy Research Units, Health Research Networks and so forth (Ferlie, 2019; Gough et al., 2018). Similar examples can be found in the US (Tseng et al., 2018; Nutley and Tseng, 2014) and the Netherlands (Wehrens et al., 2010). Alongside these practical tools, critical research has helped us to understand the importance of diverse evidence bases (e.g., Brett et al., 2014; Goodyear-Smith et al., 2015) and of including patients and stakeholders in decision-making (Boaz et al., 2016; Liabo and Stewart, 2012), and to contextualise the drive for increased impact outcomes (Boaz et al., 2019; Locock and Boaz, 2004; Nutley et al., 2000).

The social sciences have provided research methods to investigate the various interfaces between different disciplines and their potential audiences. Acknowledging insights from philosophy, critical theory and many other fields (see, e.g., Douglas, 2009), we highlight two particular perspectives. Firstly, policy studies has helped us to understand the processes of decision-making and the (political) role of evidence within them (Dye, 1975; Lindblom, 1990; Weiss, 1979). A subfield of ‘the politics of evidence-based policymaking’ has grown up, using an explicitly political-science lens to examine questions of evidence production and use (Cairney, 2016b; Hawkins and Ettelt, 2018; Parkhurst, 2017). Political scientists have commented on the ways in which scientific knowledge has been leveraged in political debate, with particular focus on social justice and on the uses of evidence to support racist and sexist oppression (Chrisler, 2015; Emejulu, 2018; Lopez and Gadsden, 2018; Malbon et al., 2018; Scott, 2011).

Secondly, the field of Science and Technology Studies (STS) treats the practice and purpose of science itself as an object of study. Drawing on philosophies of science and sociologies of knowledge and practice, early theorists described science as an esoteric activity creating knowledge through waves of experimentation (Kuhn, 1970; Popper, 1963). This was heavily critiqued by social constructivists, who argued that all knowledge is inherently bound to cultural context and practices (Berger and Luckmann, 1966; Collins and Evans, 2002; Funtowicz and Ravetz, 1993). Although some took this to mean that science is just another way of interpreting reality, of equal status with other belief systems, most see these insights as demonstrating the importance of understanding the social context within which scientific practices and objects are conducted and described (Latour and Woolgar, 2013; Shapin, 1995). Similarly, Wynne showed how social and cultural factors determine what we consider ‘good’ evidence or expertise (Wynne, 1992). More recently, scholars have focused on how science and expertise are politicised through funding and assessment environments (Hartley et al., 2017; Jasanoff, 2005; Jasanoff and Polsby, 1991; Prainsack, 2018), through the cultures and practices of research (Fransman, 2018; Hartley, 2016), and through modes of communication with audiences, and on the role of scientific advice around emerging technologies and challenges (Lee et al., 2005; Owen et al., 2012; Pearce et al., 2018; Smallman, 2018; Stilgoe et al., 2013).

Are we acting on these lessons?

However, funders and researchers rarely draw on the learning from these different fields; nor is learning shared between disciplines and professions (Oliver and Boaz, 2018). Thus, we have sociologists of knowledge producing helpful theory about the complex and messy nature of decision-making and the political nature of knowledge (e.g., Lancaster, 2014); but this is not drawn on by designers of research partnerships or evaluators of research impact (Chapman et al., 2015; Reed and Evely, 2016; Ward, 2017). This leaves individual researchers with the imperative to do high-quality research and to demonstrate impact, but with little useful advice about how, as individuals or institutions, they might achieve or measure that impact (Oliver and Cairney, 2019), leading to enormous frustration and to duplicated and wasted effort. Even more damagingly, researchers produce poor policy recommendations, or naively engage in political debates with no thought for the possible costs and consequences for themselves, the wider sector, or publics.

We recognise that engaging meaningfully with literatures from multiple disciplines is too challenging a labour for many. The personal and institutional investment required to engage with the practical and scholarly knowledge about evidence production and use is—on top of other duties—beyond most of us. Generating consensus about the main lessons is itself challenging, although initial attempts have been made (Oliver and Pearce, 2017). Across the diverse literature on evidence use, terms are defined and mobilised differently; working out what these terms imply, and what is at stake in their alternative mobilisations, is a huge task. Many researchers are only briefly able to enter this broader debate, through tacked-on projects attached to larger grants, and there is no obvious career pathway for those who want to remain at this higher level. There are simply too many threads pulling researchers and practitioners back into their ‘home’ disciplines and domains, preventing people from undertaking the labour of learning the key lessons from multiple fields.

Yet the history of research in this area, scattered and patchy though it is, shows us how necessary this labour is if useful, meaningful research is to be done and used (DuMont, 2019). Too much time and energy has been spent investigating questions which have long since been answered: whether RCTs should be used to investigate policy issues; whether we need a pluralistic approach to research design; whether to invest in relationships as well as data production. But governments and universities have also failed to create environments where knowledge producers are welcomed and useful in decision-making settings, and where their own staff feel able to freely discuss and experiment with ideas; and universities consistently fail to reward or support those who want to create social change or work at the interfaces between knowledge production and use.

This failure to draw together key lessons also means that the scarce resources allocated to the study of evidence production and use have been misspent. There has been no sustained interdisciplinary funding for empirical studies of evidence production and use in the UK, and such funding has existed in the US only over the last 15 years (DuMont, 2019). This has led to a dearth of shared empirical and theoretical evidence, but also to a lack of community, which has had a detrimental effect on scholarship in this space. All too often, research funding goes towards already-answered questions (such as whether bibliometrics are a good way to capture impact). We must ensure that new research on evidence production and use addresses genuine gaps. That can only be done by making existing knowledge more widely available and by working together to generate collaborative research agendas.

An unfortunate side-effect of this lack of community is that many who enter the field do so with the sense that it is a new, ‘emerging’ one, which will generate silver-bullet solutions for researchers and funders. Because it is new to them, researchers feel it must be new to all—not realising that their own journey has been undertaken by many others before them. For instance, many initiatives claim to be ‘newly addressing’ the problem of ‘evidence use’, ‘research on research’, the ‘science of science’, ‘meta-science’, or some other variant. Whether they explore the allocation and impact of research funding and evaluation, the infrastructure of policy research units, or the practice of collaborative research, they all make vital contributions. But to claim, as many do, that this is an ‘emerging field’ illustrates how easy it is, even with the best of intentions, to ignore existing expertise on the production and use of evidence. We must better articulate the differences between these pieces of the puzzle, and the difference those differences make; too many are claiming that their piece provides the whole picture. In turn, funders feel they have done their part by funding one small piece of research, but remain ignorant of the existing knowledge, and indeed of the real gaps.

Research on evidence production and use is therefore often not as useful as it should be. Failing to draw on existing literature, the solutions proposed by most commentators on the evidence-policy/practice ‘gap’ often do not take into account the realities of complex and messy decision-making, or the contested and political nature of knowledge construction—leading to a situation where an author synthesising lessons from across the field can end up sharing a set of normative statements implying that there has been no conceptual leap in 20 years (see e.g., French, 2018; Gamoran, 2018).

Evidence and policy/practice studies: our tasks

There are therefore two key tasks for those primarily engaged in researching and teaching evidence production and use for policy and practice: (1) to identify and share key lessons more effectively, and (2) to build a community enabling transdisciplinary evidence to be produced and used, evidence which addresses real gaps in the evidence base and helps decision-makers transform society for the better. We close with some suggestions about possible steps we can take towards these goals.

Firstly, we must better communicate our key lessons. We would like to help people articulate the hard-won, often discipline-specific lessons from their own work for others—and to work with partners to embed these lessons into the design, practice and evaluation of research. For instance, critical perspectives on power can illuminate the lines of authority and the institutional governance surrounding decision-making (Bachrach and Baratz, 1962; Crenson, 1971; Debnam, 1975); the interpersonal dynamics which determine everything from the credibility of evidence to the placement of topics on policy agendas (Oliver and Faul, 2018; Tchilingirian, 2018; White, 2008); and the practice of research itself, including the ways in which assumed and enacted power leads to the favouring of certain methodologies and narratives (Hall and Tandon, 2017; Pearce and Raman, 2014). How might this translate into infrastructure and funding to support equitable research partnerships (Fransman et al., 2018)? What other shared theory and practical insights might help us transform how we do and use research?

Secondly, we must generate research agendas collaboratively. In our view, the only way to avoid squandering resources on ineffective research on research is to work together to share emerging ideas, and to produce genuinely transdisciplinary questions. We made a start on this task at recent meetings. A 2018 Nuffield Foundation-funded symposium brought together leading scholars, practitioners, policymakers and funders to share learning about evidence use and to identify key gaps. We followed this with a broader discussion at the William T. Grant Use of Research Evidence meeting in March 2019, which has also contributed to our thinking.

We initiated the conversation with a Delphi exercise to identify key research questions prior to the meeting. We refined the list, and during the meeting we asked participants to prioritise these questions. This was a surprisingly challenging process, which revealed that, even to reach common understanding about the meaning of a research question, let alone its importance, discussants had to wade through decades’ worth of assumptions, biases, preferences, language nuances and habits.

Based on this analysis, we identify three main areas of work required to transform how we create and use evidence (Table 1):

  1. Transforming knowledge production

  2. Transforming translation and mobilisation

  3. Transforming decision-making

Table 1 Emerging research agenda for evidence use studies, with illustrative topics

The topics below were selected to indicate the broad range of empirical and normative questions which need broader discussion, and they are by no means definitive. Much research on some of these topics already exists, but we have included them anyway: even where research has been done, it is not widely enough known to routinely inform research users, funders or practitioners about how to better produce or use evidence. We observe that much of the very limited funding to investigate evidence production and use has gone either to developing metrics (responsible or otherwise; Row 2, column 4) or to tools to increase uptake (Row 2, column 4), to the relative neglect of everything else. There are significant gaps which can only be addressed jointly across disciplines and sectors, and we welcome debates, additions, and critiques about how to do this better.

A shared research agenda

As we note above, these topics are drawn from questions proposed and discussed by an interdisciplinary group of scholars, practitioners, funders and other stakeholders. It became clear during this process that many were unaware of relevant research which had already been undertaken under these headings. The topics reflect our own networks and knowledge of the field, so cannot be regarded as definitive; we need and welcome partnership with others working in this space to broaden the conversation as much as possible. Below, we highlight a selection of these topics to illustrate a number of points.

First, no one discipline or researcher could possibly have the skills or knowledge to answer all of these questions. Interdisciplinary teams can be difficult to assemble, but they are clearly required. We need leadership in this space to help spot opportunities to foster interdisciplinary research and learning.

Second, all of these topics could be framed and addressed in multiple ways, and many have been. Many are discussed, but there is little consensus; or there is consensus within disciplines but not between them. Some topics have been funded and others have not. We feel there is an urgent need to identify where research investment is required, where conversations need to be supported, and where and how to draw out the value of existing knowledge. Again, we need leadership to help us generate collaborative research agendas.

Third, while we all have our own interests, the overall picture is far more diverse, and all working in this area need to clearly define what their contributions are in relation to the existing evidence and communities. A shared space to convene and learn from one another would help us all understand the huge and exciting space within which we are working.

Finally, this is an illustrative set of topics, not an exhaustive one. We would not claim to be setting the definitive research agenda in this paper; rather, we are setting out the need to learn from one another and to work together in the future. Below, we describe some examples of the types of initial discussion which might help us move forward, using our three themes of knowledge production, knowledge mobilisation, and decision-making. We have cited relevant studies which set out research questions or provide insights. By doing so, we hope to demonstrate the breadth of disciplines and approaches being used to explore these questions, and the potential value of bringing these insights together.

Transforming knowledge production

Firstly, we must understand who is involved in shaping and producing the evidence base. Much has been written about the need to produce more robust, meaningful research which minimises research waste through improved quality and reporting (Chalmers et al., 2014; Glasziou and Chalmers, 2018; Ioannidis, 2005), and the infrastructure, funding and training which surround knowledge production and evaluation have attracted critical perspectives (Bayley and Phipps, 2017; Gonzalez Hernando and Williams, 2018; Smith and Stewart, 2017). Current discourses around ‘improving’ research focus on making evidence more rigorous, certain, and relevant; but how are these terms interpreted locally in different policy and practice contexts? How are different forms of knowledge and evidence assessed, and how do these criteria shape the activities of researchers?

Enabling researchers to reflect on their own role in the ‘knowledge economy’—that is, the production of, and services attached to, knowledge-intensive activities, usually but not exclusively referring to technological innovation (Powell and Snellman, 2004)—requires engagement with this history.

This might mean asking questions about who is able to participate in the practice and evaluation of research. Who is able to ask and answer questions? What questions are asked, and why? Who gets to influence research agendas? We know that there are barriers to participation in research for minority groups, and for many research users (Chrisler, 2015; Duncan and Oliver, 2017; Scott et al., 2009). At a global level, how are research priorities set by, for example, international funders and philanthropists? How can we ensure that local and indigenous interests and priorities are not ignored by predominantly Western research practices? How are knowledge ‘gaps’ or areas of ‘non-knowledge’ constructed, and what are the power relationships underpinning that process (Nielsen and Sørensen, 2017)? There are important questions about what it means to do ethical research in a global society, with honesty about normative stances and values (Callard and Fitzgerald, 2015); these questions apply to the practices we engage in as much as to the substantive topics we focus on (Prainsack et al., 2010; Shefner et al., 2014).

It might also mean asking about how we do research. Many argue that research (particularly that funded through responsive-mode arrangements) progresses incrementally, with questions often driven by ease rather than public need (Parkhurst, 2017). Is this the most efficient way to generate new knowledge? How does it compare with, for example, random allocation of research funding (Shepherd et al., 2018)? Stakeholder engagement is said to be required for impact, yet we know it is costly and time-consuming (Oliver et al., 2019, 2019a). How can universities and funders support researchers and users to work together long-term, with career progression and performance management untethered from simplistic (or perhaps any) metrics of impact? Is coproduced research truly more holistic, useful, and relevant? Or does inviting different interests in to deliberate on research findings, or even processes, distort agendas and politicise research (Parkhurst and Abeysinghe, 2016)? What are the costs and benefits of these different systems and practices? We know little about whether (and if so, how well) each of these modes of evidence production leads to novel, useful, meaningful knowledge, nor how these modes influence the practice or outputs of research.

Transforming evidence translation and mobilisation

Significant resources are put into increasing the ‘use’ of evidence, through interventions (Boaz et al., 2011) or research partnerships (Farrell et al., 2019; Tseng et al., 2018). Yet ‘use’ is not a straightforward concept. Using research well implies the existence of a diverse and robust evidence base; a range of pathways for evidence to reach decision-makers; users and producers of knowledge who have the capacity and willingness to engage in relationship-building and deliberation about policy and practice issues; and research systems which support individuals and teams to develop and share expertise.

More attention should be paid to how evidence is discussed, made sense of, negotiated and communicated—and to the consequences of different approaches. This includes examining the roles of people involved in the funding of research, through to the ways in which decision-makers access and discuss evidence of different kinds. How can funders and universities create infrastructure and incentives which support researchers to do impactful research, and to inhabit the boundary spaces between knowledge production and use? We know that potential users of research may sit within or outside government, with different levels and types of agency, making different types of decisions in different contexts (Cairney, 2018; Sanderson, 2000). Yet beyond ‘tailoring your messages’, existing advice does not help academics navigate this complex system (Cairney and Oliver, 2018). To take this lesson seriously, we might want to think about the emergence of boundary-spanning organisations and individuals which help to interface between research producers (primarily universities, but also civil society) and users (Bednarek et al., 2016; Cvitanovic et al., 2016; Stevenson, 2019). What types of interfacing are effective, and how—and how do interactions between evidence producers and users shape both evidence and policy? How might policies on data sharing and open science influence innovation and knowledge mobilisation practices?

Should individual academics engage in advocacy for policy issues (Cairney, 2016a; Smith et al., 2015), using emotive stories or messaging to communicate most effectively (Jones and Crow, 2017; Yanovitzky and Weber, 2018), or should they rather be ‘honest brokers’, representing a body of work without favour (Pielke, 2007)? Or should this type of dissemination work be undertaken by boundary organisations or individuals who develop specific skills and networks? There is little empirical evidence about how best to make these choices (Oliver and Cairney, 2019), or about how they affect the impact or credibility of evidence (Smith and Stewart, 2017); nor is there good-quality evidence about the most effective strategies and interventions to increase engagement or research uptake by decision-makers, or between researchers and their audiences (Boaz et al., 2011). It seems likely that some researchers will get involved while others stay in the hinterlands (Locock and Boaz, 2004), depending on skills and preference. However, it is not clear how existing studies can help individuals navigate these complex and normative choices.

Communities (of practice, within policy, amongst diverse networks) develop their own languages and rationalities, which affect how evidence is perceived and discussed (Smallman, 2018). Russell and Greenhalgh have shown how competing rationalities affect the reasoning and argumentation deployed in decision-making contexts (Greenhalgh and Russell, 2006; Russell and Greenhalgh, 2014); how can we interpret local meanings and sense-making in order to communicate better about evidence? Much has been written about the different formats and tailored outputs which can be used to ‘increase uptake’ by decision-makers (Lavis et al., 2003; Makkar et al., 2016; Traynor et al., 2014)—although not with conclusive findings—yet we know little about how these messages are received. Researchers may be communicating particular messages, but how can we be sure that decision-makers are comprehending and interpreting those messages in the same way? Theories of communication (e.g., Levinson, 2000; Neale, 1992) must be applied to this problem.

Similarly, drawing on psychological theories of behaviour change, commentators have argued for greater use of emotion, narrative and story-telling by researchers in an attempt to influence decision-making (Cairney, 2016b; Davidson, 2017; Jones and Crow, 2017). Are these effective at persuading people and, if so, how do they work? What are the ethical questions surrounding such activities, and how do they affect researcher identity? Should researchers be aiming to communicate simple messages about which there is broad consensus?

Discussions of consensus often ask whether agreement is a laudable aim for researchers, or how far consensus is achievable (De Kerckhove et al., 2015; Lidskog and Sundqvist, 2004; Rescher, 1993). We are also interested in the tension between scientific and political consensus, and in how differences in interpretations of knowledge can be leveraged to influence political consensus (Beem, 2012; Montana, 2017; Pearce et al., 2017). What tools can be used to generate credibility? Is evidence persuasive of itself, and can it survive the translation process? Is it reasonable to expect individual researchers to broadcast simple messages about which there is broad consensus, if that is in tension with their own ethical practices and knowledge (even if it is the most effective way to influence policy)? Is consensus required for the credibility of science and scientists, or can an emphasis on similarity in fact reduce the value of research and the esteem of the sector? Is it the task of scientists to surface conflicts and disagreements, and how far does this duty extend into the political sphere (Smith and Stewart, 2017)?

Transforming decision-making, and the role of evidence within it

Finally, we need to understand how research and researchers can support decision-making, given what we know about the decision-making context or culture and how this influences evidence use (Lin, 2008). This means better understanding the roles of professional and local cultures of evidence use, governance arrangements, and public dialogues, so that we can start to investigate empirically-informed strategies to increase impact (Locock and Boaz, 2004; Oliver et al., 2014). This would include empirical examination of individual strategies to influence decision-making, as well as of more institutional infrastructures and roles; case studies of different types of policymaking and the evidence diets consumed in these contexts; and studies of how different people embody different imperatives of the evidence/policy nexus. We need to bring together examples of policy and practice lifecycles, and examine the roles of different types of evidence throughout those processes (Boaz et al., 2011, 2016).

We want to know what shapes the credibility afforded to different experts and forms of expertise, and how to cultivate credibility to enable better decision-making (Grundmann, 2017; Jacobson and Goering, 2006; Mullen, 2016; Williams, 2018). What does credibility enable (greater attention or influence; greater participation by researchers in policy processes; a more diverse debate)? What is the purpose of increasing credibility, and what is the ultimate aim of attempting to become credible actors in policy spaces? How far should universities and researchers go—should we always be aiming for more influence? Or should we recognise and explore the diversity of roles which research and researchers can play in decision-making spaces?

Ultimately, methods must be found to evaluate the impact of evidence on policy and practice change, and on populations—including unintended or unwanted consequences (Lorenc and Oliver, 2013; Oliver et al., 2019, 2019a). Some have argued that the primary role for researchers is to demonstrate the consequences of decisions and to enable debate. This requires developing and applying methods to evaluate changes, understand mechanisms and build theory; engaging in substantive and normative debates; and engaging in the translation and mobilisation of evidence. It also requires increased transparency, to enable researchers to understand evidence use (Nesta, 2012) while also allowing others, like Sense about Science, to check the validity of evidence claims on behalf of the public (Sense about Science, 2016).

Next steps and concrete outputs

These illustrative examples demonstrate the vast range of discussions which are happening, and which need to happen, to help us transform how we produce and use evidence. We are not the first to identify the problem of research wastage (Glasziou and Chalmers, 2018) or to emphasise the need to maximise the value of research for society (Duncan and Oliver, 2017). Nor are we the first to note that all parts of the research system play a role in achieving this, from funding (Geuna and Martin, 2003), to research practices (Bishop, 2019; Fransman, 2018), to translational activities (Boaz et al., 2019; Nutley and Tseng, 2014), professional science advice (Doubleday and Wilsdon, 2012) and public and professional engagement (Holliman and Warren, 2017). There have been sustained attempts to build communities and networks to find ways to improve parts of this system. However, most of these initiatives are rooted in particular disciplines or professional activities. We see a need for a network which bridges these initiatives, helping each to articulate its key lessons for the others, and progressing our conversations about how to do better research about evidence production and use.

Researchers, funders, decision-makers and publics will approach and inhabit this space from different, sometimes very different, directions. We do not claim to be writing the definitive account. But we would like to open the door to more critical accounts of evidence production and use which are aimed specifically at multi-disciplinary and multi-sectoral audiences. Our aim is to welcome and support debate, to introduce parts of our diverse community to each other, and to enable our individual perspectives and knowledge to be more widely valued.

We anticipate disagreement and discussion, and support a multitude of ways of approaching the issues we identify above. Some may feel that our energies should be directed towards democratising knowledge for all and ensuring that it is mobilised to maximise equality and fairness (Stewart et al., 2018). Others may feel that our task is to observe, problematise and critique these processes, rather than engage in them directly (Fuller, 1997). Our view is that both normative and critical approaches are vital, as are empirical and theoretical contributions to our understanding of everything from high-level research systems down to micro-interactions in evidence production and use. Our contention is that we must keep this space vibrant and busy, producing new knowledge together and learning from each other. This requires investment in research on evidence production and use, in virtual and literal spaces to hold conversations, and in capacity and capability. There are significant and important gaps in what we know about evidence production and use, but identifying the particular and specific research agendas for each of these gaps must be a collaborative process.

We also see a need to support those who are new to this space. Many come to the problem of evidence use without any training in the history of research in this space. We see a need to provide an accessible route into these debates, and welcome opportunities to collaborate on textbooks or learning resources to support new students, non-academics and those new to the field.

The Nuffield Foundation meeting which led to this paper demonstrated how valuable such opportunities are for enabling learning and relationship-building through face-to-face interaction. We will continue to create opportunities for transdisciplinary and academic-partner conversations, to share learning across spheres of activity, to build capacity, and to use these new perspectives to generate fresh avenues of enquiry, through the new Transforming Evidence collaboration.

Finally, we argue for increased investment to maximise the learning we already have, and to support more effective knowledge production and use. Too much money and expertise have been wasted, and too many opportunities to build on existing expertise have been squandered. We must find better ways to make this learning accessible, and to identify true knowledge gaps. Indeed, we believe that collaboration across disciplinary and sectoral boundaries is the only way in which this space will both progress and demonstrate its true value. We must stop wasting the limited resources available for understanding how to transform evidence production and use for the benefit of society. Putting what we already know into practice would be an excellent place to start.