Introduction

Deep within the less glamorous pages of the Apple web site, in the small print of the “legal” section, sits a table of data relating to the electromagnetic fields (EMF) generated by the latest iPhone. This tells me that the maximum Specific Absorption Rate (SAR) at which my body takes in electromagnetic energy from this phone is 0.98 watts per kilogram, averaged over 10 grams of body tissue, regardless of whether I’m making a call holding it to my head or sending an email with it resting on my lap (see https://www.apple.com/legal/rfexposure/iphone7,2/en/, accessed 1 June 2015). When the thing is transmitting at full power on Bluetooth, cellular and wireless networks, the exposure could in theory get close to the threshold of 2 watts per kilogram established by the International Commission on Non-Ionizing Radiation Protection (ICNIRP), but I can rest assured that the iPhone, like all other phones sold internationally, complies with this standard. If for some reason I am still worried, the Apple web site advises:

To reduce exposure to RF [radio frequency] energy, use a hands-free option, such as the built-in speakerphone, the supplied headphones, or other similar accessories

In this statement is captured a paradox that characterizes the now almost dormant controversy about the health effects of mobile phones. I should, I am told, have no reason for any concerns about the phone’s impact on my physiology. And yet the provision of information, at the behest of the UK Government, legitimates concerns or questions that I might have. I am permitted a modicum of uncertainty, as is the company selling me this product and the regulator deciding on guidelines. If technology is a social experiment (Krohn and Weyer, 1994) then I can, if I wish, regard myself as an experimental subject.

Concerns about the health risks of mobile phones have largely faded from public consciousness, at least in the United Kingdom. Some people may recall widespread public attention near the turn of the millennium to questions about our phones microwaving our heads or giving us cancer but, in the years since, the uptake of the technology has advanced seemingly without hesitation.

A 2013 survey conducted by polling company YouGov on behalf of a British mobile phone trade body found that less than a tenth of the population thought that mobile phone handsets or masts were a problem for their health (Figure 1). In 2000, the figure was 27%. Since 2000, mobile phone ownership among survey respondents has risen from 50% to 100%. Over the same period, the uses of mobile technology have proliferated as phones have got “smart”, without generating substantially greater electromagnetic fields. (Part of the reason for this is that mobile phone base stations have become more numerous and can therefore connect with handsets at lower power.) Concerns that we have about mobile phones are now more likely to relate to the privacy of our personal data or being deprived of a technology on which we have become dependent.

Figure 1

Percentage mentioning handsets/masts as a concern.

Source: YouGov survey 2013 http://www.mobilemastinfo.com/opinion-research/opinion-research.html.

Base: All GB adults (2,164).

P10Q1: What, if any, health-related dangers concern you most nowadays?

P15Q1: And which other health-related dangers are you also seriously concerned about?

The straightforward explanation for the decline of general public concern about this now-ubiquitous technology is that sensible users are calculating that perceived benefits clearly outweigh perceived risks. There is of course some truth to this. But its banality obscures what I believe is a far more interesting and far more important discussion about scientific advice, uncertainty and the politics of regulation. The science of mobile phone risk is far from settled. Uncertainties remain, if one chooses to look for them (Boschen, 2010), relating to the effects of pulsed rather than continuous EMF radiation and the vulnerability of particular subgroups, some of whom regard themselves as “electrosensitive”. The voices of concerned scientists can still be heard (for example, Sage and Carpenter, 2009) and some advisory bodies continue to draw attention to troubling epidemiological data, criticizing industry bodies for their “inertia” (for example, EEA, 2013). In 2011, the International Agency for Research on Cancer classified radiofrequency EMFs as a “possible human carcinogen” (Baan et al., 2011). (Extremely low frequency EMFs, such as those produced by overhead power cables, were categorized in the same way a decade earlier.) But these uncertainties seem, at least for now, at least in the United Kingdom and at least for most of the population, under control.

In this article, I argue that the British experience with mobile phone risks is an instructive example of the practice of expert advice in which success might be described in terms of public experimentation. Rather than regarding the issue as a static one, characterized by settled scientific evidence and reactionary public opinion, advisory scientists, recognizing the instability of a previous consensus, sought not to restabilise the scientific evidence but to treat it as a work in progress, an experimental exploration of uncertainties that involved publics as well as experts. This offers another way of viewing the twenty-first century trend in UK science governance towards greater “openness”.

This article revisits, updates and develops qualitative research conducted at the time of the original controversy, which included more than 30 interviews with scientists, stakeholders and members of relevant expert committees. Reflecting on how this issue became an issue and then, at least in the United Kingdom, a non-issue for all but a small subset of interested publics, I conclude that a fixation on “closure” in some social studies of expertise and controversy, even if closure is considered to happen socially rather than scientifically (for example, Latour, 1987; Collins and Evans, 2002; Sismondo, 2011), misrepresents the practice of expert advice, in particular by exaggerating the importance of scientific certainty. The case of mobile phones and health and its reconstruction in part through the actions of an expert committee challenges this teleology of closure. Here is a story of intentional opening-up, coming at a time in which openness was being pressed upon British political culture, in which uncertainty was publicly acknowledged and the mobile phones health issue was recognized as open-ended. This case offers lessons not just for our understandings of expertise in society, but also for advisory practices and structures.

Expertise as dynamic and relational

The paradox of expertise, as described by Bijker et al. (2009), is that as we rely on experts more and more, we trust them less and less. To further complicate this paradox, we can also observe that it is not at all clear what it means to be an expert. Lentsch and Weingart (2011) rightly point to a lacuna in thinking about the design of advisory processes and institutions; nor do we have any good way of talking about the quality of expert advice. Expert advice is as old as the institutions of science themselves and yet there is little agreement on what it is, what it is for, when it is good and when it is bad.

The grand narrative of expert advice, discursively bolstered from both the scientific and policy sides, is of science “speaking truth to power” (see Wildavsky, 1979). This linear model, in which science is presumed to be autonomous and value-free, has been challenged by Science and Technology Studies (STS) accounts of expertise. The constructivist critique of the “truth to power” model of expertise begins with the realization that expertise does not neutrally inform policy-making. It can instead be used to add weight to a predetermined position (Nelkin, 1975), giving the impression that a decision is technocratic. The linear model “requires politics to masquerade as science, with scientists either making covert political judgements or having their judgements politically misrepresented as sufficient and decisive” (Millstone and van Zwanenberg, 2001: 100). This has led some to observe that science is largely irrelevant in decision-making. It is either accepted if it supports a policy consensus, or obfuscated in technical debate if it proves problematic (Collingridge and Reeve, 1986). The higher the stakes, the more likely expertise is to be twisted in this way.

However, if we refuse to accept the political neutrality of experts, we can, as STS has been able to do over the last three decades, develop a richer account of expertise that opens the space for an appreciation of the constructive role that experts clearly do play in the making of policy. As Jasanoff (1990: 229) argued so powerfully, when experts are brought to bear on decision-making, “what they are doing is not ‘science’ in any ordinary sense, but a hybrid activity that combines elements of scientific evidence with large doses of social and political judgement”. Policy issues are chronically underdetermined by scientific evidence. Without democratic scrutiny, the space of judgement that is opened up creates a risk of “scientism”, whereby “scientific advice and authority [are] systematically exaggerated in regulatory control and public debate” (Doubleday and Wynne, 2011).

The relevance of experts and their advice is determined not by scientific criteria but by the demands of policy. The science that might inform a particular policy issue does not come pre-packaged. It must be assembled for particular purposes. The incompleteness of this reassembly inevitably means that important decisions turn on questions of uncertainty. STS accounts of science in public have explained the efforts of experts to tame uncertainty, epistemologically and socially (Jasanoff and Wynne, 1998). Within a linear model, scientific uncertainty tends to be seen as undermining the authority of experts, and is therefore an embarrassment to be rationalized, reduced or hidden.

The deconstruction of simplistic models of experts’ relationships with decision makers has generated a minor intellectual backlash. Harry Collins and Robert Evans (2002) have put forward an alternative theory of expertise. For many of those involved in empirical studies of expert-public encounters, their contribution, which attempts a reconstruction of the separation between science and politics that has been challenged by STS research, would seem to subtract from the sum of human knowledge. However, their provocation has elicited numerous responses that have helped to clarify the value of STS in closely researching expertise in action, explaining its dynamics and offering constructive suggestions for improved models.

Collins and Evans (2008), extending their argument in a subsequent book, reduce the challenge of expert advice to one question: “who knows what they are talking about?” They focus, as Wynne puts it in a response to the original paper, on “propositional decision-questions such as whether nuclear power, anti-misting kerosene or UK beef is safe”. For Wynne (2003), this represents “a seriously impoverished account of what is involved when we address science in public arenas” (401–402). Collins and Evans harbour a suspicion, for which there is little evidence, that debates about expertise are a form of battle for knowledge between experts and non-experts. There are cases where policymakers will urgently need technical answers during an acute challenge: Could volcanic ash clog the engines of aeroplanes? Is this storm likely to hit a city? Might this damaged nuclear power plant generate a radiation leak? But anyone who has been involved in the construction, provision or reception of expert advice knows that the challenges are rarely just epistemic. Even the institutions of scientific advice themselves recognize that this does not describe their contribution. Bodies such as the National Academies see scientific advice as more to do with processes than with a body of knowledge (Fears and Ter Meulen, 2011: 346). The job of experts is more often one of sense-making than fact-making. In addition, to presume that expertise is about who knows best is to ignore the wealth of evidence (for example, Jasanoff and Wynne, 1998) that scientific certainty does not translate to policy consensus. Indeed, as Sarewitz (2004) has argued, in cases where stakes are high and the politics are polarized, the injection of science may worsen the controversy. Rather than being the cause of political controversies, scientific uncertainty tends to be their product (Campbell, 1985; Yearley, 2000). Given that policy concerns are not defined by their scientific content, Marres (2007) calls for greater attention to the public reframing of issues, following in the tradition of pragmatist philosophers like John Dewey. Dewey advocated expanding scientific experimentalism beyond science. For Dewey, and the sociologists he would influence (particularly those of the early twentieth century “Chicago school”), society and democracy were best understood, and conducted, as grand experiments (Haworth, 1960; Gross and Krohn, 2005). While we can recognize, and even highlight, the place of science and technology in public issues, to say that they are “scientific controversies” or even “science-based controversies” (Brante et al., 1993) is misleading. These experiments are more than scientific. Indeed, for Gross (2016), laboratory experiments are merely part of the broader set of intentional and accidental experiments that constitute public life. For experts, “knowing what they are talking about” is as much a challenge of understanding the politics of the issues under question (such as, in Pielke’s (2007) terms, the distinction between “abortion politics” and “tornado politics”, and its implications for expert roles) as it is of grasping technical details.

Science can at times offer closure, but non-scientists will still ask relevant questions. In real-time public science, policy and the public, rather than forming neat rings around science (as in the Collins and Evans model), are endogenous to the construction of relevant expertise. Jasanoff’s (2004) more convincing and more widely accepted explanation is that scientific and social orders are coproduced in public, and experts may be at the heart of this process of coproduction. Susan Owens, also responding to Collins et al. (2011), describes her experience with the Royal Commission on Environmental Pollution, of which she was a member as well as an observer and historian, in these terms. The Royal Commission, now disbanded, was a prominent “committee of experts” (to be distinguished from an “expert committee”). Using examples from its history, Owens reiterates that public controversies involving science are never merely about technical questions (see also Fischer, 2009). The task of both experts and those who study expertise in action is not to purify and separate science, but rather to “learn to live with co-production and nurture it as a positive force” (Owens, 2011: 330). In British political history, the Royal Commission stands out for its ability to do this, to reframe policy issues as a challenge to policy—asking new questions rather than merely providing the best available answers to questions set by policymakers.

Collins and Evans presume that the relevant experts for a particular policy challenge can be identified ex ante, and their worth assessed according to their scientific merit. If that were so, it would be hard if not impossible to explain the selection of experts to appear on the Royal Commission, the committee that took control of the mobile phones issue, or indeed many other advisory committees in UK policy. Such committees typically recruit not just from a pool of experts with prior attachments to the relevant scientific questions, but also from a cadre of senior individuals with track records in managing complex issues. Expertise is, in Nowotny’s (2000) words, “transgressive competence”. As former President of the Royal Society Martin Rees (2002) has put it, experts are “depressingly ‘lay’ outside their specialisms”. Experts are valuable not just for what they know, but also for how they are able to operate outside their own immediate domains. Expert roles differ from scientific roles. It is here that the risk of scientism arises, as expert meanings are imposed onto public issues; this risk can and should be mitigated. However, we should not pretend that experts stick only to their “expertise”, just as we should not pretend that they only speak about facts. We should therefore understand expertise not as a mere resource, but as relational and dynamic. Issues wax and wane, public concerns change and evidence moves in and out of relevance. Uncertainties can shrink, but they can also multiply with the reframing of issues. Experts are actively involved not just in the marshalling of science, but also in the management of science/policy hybrids (Miller, 2001) and the configuration of boundaries (van Egmond and Bal, 2011). Studying expert advice in practice, we can therefore ask how reflexive experts are about such activities. This reflexivity is a crucial dimension of “openness”, if that term is to be anything other than vacuous.

Following Werner Heisenberg’s definition of an expert as “someone who knows some of the worst mistakes that can be made in their subject and who manages to avoid them” (quoted in Keane, 2009), we can equate expert wisdom with the ability to navigate and make sense of uncertainties (Stilgoe et al., 2006). Expertise, therefore, is an act—performative as well as epistemological. Stephen Hilgartner (2000) develops this point in his analysis of expert advice as drama. For Hilgartner, as for others (Shapin, 1995; Brown and Michael, 2002), expert advice is a project of credibility. Where once scientists could rely on a degree of public authority, they now need to earn their right to talk credibly in public on any particular issue. In his earlier work, Collins (1988) considers the staging of public “experiments” in the search for credibility. Such experiments—he gives the examples of a simulated aeroplane disaster and a televised crash involving a train and a nuclear fuel storage container—are, for Collins, a form of “pathological science”. Rather than engaging with uncertainty, they keep it hidden from public view. The possibility of surprise, one of the key characteristics of real experimentation (Rheinberger, 1997), is virtually nil. These “experiments” are mere demonstrations, displays or performances. However, the theatrical metaphor only gets us so far. If we are to take seriously the possibility of opening up and democratizing expert advice, we should revisit the idea of public experiments and ask what it would take to make expert advice genuinely experimental.

The public image of science can resemble the Janus described by Bruno Latour (1987: 4). One face—“science in the making”—points towards the internal processes of science, experimentation, uncertainty and open controversies while the other looks back at the “ready made science” of certainty, closure and facts. Expertise is conventionally seen as the speech of the latter face, exuding authority and confidence. In terms of policy, this is the domain of “science for policy”, rather than the separate set of considerations concerning “policy for science” (a distinction drawn by Brooks (1964), though he recognized some of its problems).

Expert advisors need not and should not accept such dichotomies. Pielke and Betsill have described how the reality of science policy is often closer to “policy-for-science-for-policy… a recursive process of defining societal goals, using those goals to identify questions to be addressed by science, then relating the findings of science back to the original goals, and if necessary, revisiting the goals themselves” (Pielke and Betsill, 1997: 158). Turning attention to expert advice provides a new starting point: we can explore the potential of “science-for-policy-for-science” in expert practice. Part of the novelty of the approach taken by British scientific advisers with respect to mobile phone risk was their willingness to entertain not just existing evidence but also new research agendas and experimental goals. To the extent that these goals were reframed through public engagement, this was therefore a case of public experimentation.

The politics of SAR

Guidelines for exposure to EMFs have existed globally since the late 1950s, although there was originally little agreement between countries as to what they should be. In the United Kingdom, the first scientific advice regarding exposure to EMFs was an exposure restriction addressed to workers in the Post Office, which, at the time, also managed radio transmissions (Home Office, 1960). In 1989, the National Radiological Protection Board (NRPB) published the first UK guidelines on exposure to EMFs based on a thorough review of the available science. These guidelines, as well as recommending restrictions on field strength exposure, set out basic restrictions based on SAR (which measures absorption), following the lead of American bodies, which had been the first to provide a dosimetric set of guidelines (measuring the dose absorbed by the body) in the early 1980s (Kuster and Balzano, 1997).

The 1989 guidance drew on scientific responses to two consultation documents (NRPB, 1982 and NRPB, 1986). The first of these noted that “the public has become increasingly aroused to the possibility of hazards to health from exposure to non-ionising electromagnetic sources of radiation such as microwave ovens, radar and radio equipment, lasers and overhead power lines” (NRPB, 1982, Foreword). All of these technologies had previously been associated with public doubts about safety, despite generating non-ionizing radiation (less energetic than ultraviolet light and therefore conventionally seen as incapable of causing permanent tissue damage).

SAR is a measure of absorption, more precisely of the rate of absorption, and it is specific to the tissue that is absorbing the RF energy. The effect it measures is one of heating, the same thermal effect that powers microwave cooking. Knowledge of this effect and its mechanism is at the centre of “what science knows” (Epstein, 1996) about the health effects of microwaves. The power of a mobile phone (less than one watt) is less than a thousandth of the power of a microwave oven, but mobile phones are usually held next to a headful of wet, sensitive tissue. The assumptions that underlie the calculation of SAR, and its selection as the relevant metric for setting guidelines, are based on a scientific consensus as to the known effects of non-ionizing radiation. SAR is intended as an authoritative representation of certainty, of a known effect. The reassurance offered was that if there was no significant thermal effect there was no reason to worry (Stilgoe, 2005). As one NRPB spokesman put it in 1999, “If it doesn’t heat you, then it doesn’t harm you”. For critics, SAR guidelines were seen as a solidification of regulatory assumptions. They therefore became the battleground for controversy over the health risks of mobile phones (see Stilgoe, 2005).
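For readers unfamiliar with dosimetry, the standard textbook definition of SAR may help to fix ideas; it is included here only as an illustrative sketch and is not drawn from the guidance documents discussed in this article. For a small volume of tissue, SAR expresses the RF power absorbed per unit mass:

\[
\mathrm{SAR} \;=\; \frac{\sigma\, E^{2}}{\rho} \qquad \left[\mathrm{W\,kg^{-1}}\right]
\]

where σ is the electrical conductivity of the tissue (in siemens per metre), E is the root-mean-square strength of the electric field induced in the tissue (in volts per metre) and ρ is the tissue mass density (in kilograms per cubic metre). Regulatory values such as the ICNIRP threshold of 2 watts per kilogram, or the 0.98 watts per kilogram quoted for the iPhone in the introduction, are averages of this quantity over a specified mass of tissue (10 grams in that example) and over a specified time interval.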

Just as it is hard to record the time and place at which public science controversies die, so it is hard to know precisely when they begin. For mobile phones we can point to some well-cited events, such as the case of a man in America who in 1992 sued the manufacturer of his wife’s cellphone after she was found to have a brain tumour. By the time that a federal judge had ruled that the submitted evidence was not “scientifically valid”, news media in Europe had begun to report similar stories (Burgess, 2004). Campaigners and investigative journalists (following the lead of Brodeur (1989)) began to unpick the technical basis for EMF regulation, drawing attention to uncertainties relating to possible non-thermal effects and epidemiological studies that appeared to point to dangers from long-term, cumulative exposure. Suggestions of these effects had circulated in the scientific literature for decades, but were not considered sufficiently troublesome to figure in British or US regulation (Stilgoe, 2005). As well as drawing attention to paradigmatic differences between physicists and biologists in the uneasy portmanteau discipline of “bioelectromagnetics” (Miller, 2005), campaigners highlighted international regulatory disagreements.

For many years the countries of Eastern Europe and the former Soviet Union had suggested very different standards, orders of magnitude more stringent than those adopted in the United Kingdom. Activists have drawn attention to these disagreements (Maisch, 2000), reporting that, at one meeting in 1999, a Russian regulatory body claimed that non-thermal effects, cumulative exposures and subjective symptoms experienced by users should be taken into account, while ICNIRP insisted that the only effects from which conclusions could be drawn were thermal (Soneryd, 2007).

Mobile phone technologies are tangible and, in the case of large mobile phone masts, all too visible for some local residents, but the source of concern is invisible. The public experience of electromagnetic fields is therefore unavoidably mediated through expertise and technologies of measurement. For all but those people who claim to be “electrosensitive” (detecting and suffering from surrounding fields) (Soneryd, 2007; De Graaff and Bröer, 2012) or those who, as electrosensitive people often do, arm themselves with personal dosimeters (see Mitchell and Cambrosio, 1997), EMFs remain uncanny (or, as Nordmann (2005) puts it, “noumenal”). In the main, to use the distinction adopted by Soneryd (2007; following Michael, 2002), our comprehension of EMFs is detached from our embodied prehension. The trust relationship that this entails made the NRPB’s approach to public reassurance particularly brittle in the face of public challenge (Stilgoe, 2005).

Opening up

Under the auspices of the NRPB, which managed the issue until the end of the 1990s, the uncertainties surrounding mobile phone safety were cordoned off from wider scrutiny. Public questioning was met with reassurance that all currently available technology complied with regulations (Stilgoe, 2005). The science was hidden behind public statement and restatement of SAR guidelines.

As controversy over mobile phones grew, alongside their rapid uptake during the 1990s, the NRPB’s stock response drew criticism. During a House of Commons Science and Technology select committee enquiry in 1999, the NRPB’s director, Roger Clarke, said that “all marketed telephones meet our exposure guidelines and as such there is no need for any further consideration” (quoted in Stilgoe, 2005). The mobile phones health controversy can be seen as a public rejection of this statement. Relevant publics saw many reasons to consider issues beyond compliance, and, as described elsewhere (Stilgoe, 2007), the NRPB’s intransigence led to it losing control of the definition of relevant uncertainty (cf. Jasanoff and Wynne, 1998). Towards the end of the 1990s, mobile phone ownership was taking off and network expansion necessitated the building of thousands of new base stations. Licences for third-generation mobile phone spectrum had been auctioned, giving the Government a £22 billion windfall. (The economists involved in its design trumpeted it as “the biggest auction ever” (Binmore and Klemperer, 2002)). As the stakes rose, so the NRPB’s lack of control became clearer.

The NRPB’s inability to respond to public questioning led to a collapse in its credibility, prompting the Government to create a new body, the Independent Expert Group on Mobile Phones (IEGMP), chaired by former UK Government Chief Scientific Adviser (GCSA) Sir William Stewart. The IEGMP was formed in 1999, with a remit not just to review the science but also to consider “present concerns about the possible health effects from the use of mobile phones, base stations and transmitters”. Its membership blended international researchers on EMF risk with senior biological scientists who carried experience of policy engagement, and two lay members.

The life of the IEGMP overlapped with two important reports for UK science policy at the turn of the millennium. The IEGMP published its conclusions (known as the Stewart Report) in May 2000. Three months earlier, a report had been published by the House of Lords Science and Technology Committee on “Science and Society” that captured and communicated the changing dynamics of public engagement with science. Its pragmatic conclusion was that:

Policy makers will find it hard to win public support on any issue with a science component, unless the public’s attitudes and values are recognised, respected and weighed along with the scientific and other factors. (House of Lords, 2000: 6)

The Committee’s recommendation that “direct dialogue with the public should move from being an optional add-on to science-based policy-making and to the activities of research organisations and learned institutions, and should become a normal and integral part of the process” (House of Lords, 2000: paragraph 5.48) moved the British scientific elite away, at least rhetorically, from a “deficit model” of public engagement (Wynne, 1993). The House of Lords report would come to represent a landmark for institutional openness towards public dialogue on public science issues.

In October of the same year, the report of the Phillips Inquiry into the Government’s handling of Bovine Spongiform Encephalopathy (BSE) was released. This narrated policy failures in the commissioning and use of expert advice as the Government systematically overlooked uncertainties about the public safety of beef as part of a rhetoric of reassurance. It laid bare the politics of expert advice, offering an exhaustive, multi-volume critique of the limits of a technocratic, linear model of expert advice (see Millstone and van Zwanenberg, 2001). Within this model, which was not unique to BSE but also characterized the British approach to nuclear power and chemicals regulation, The Public were imagined solely as receivers of an expert consensus based on “sound science”. Evidence from non-experts in these cases, including farmers and factory workers, was routinely dismissed, and often labelled merely “anecdotal” (Wynne, 1989; Irwin, 1995; also Stilgoe et al., 2006).

The remedy offered by Phillips was one of openness and the Government was quick to endorse his mantras:

  • ‘Trust can only be generated by openness’;

  • ‘Openness requires recognition of uncertainty, where it exists’;

  • ‘The public should be trusted to respond rationally to openness’;

  • ‘Scientific investigation of risk should be open and transparent’;

  • ‘The advice and reasoning of advisory committees should be made public’.

Sir William Stewart had told the Phillips Inquiry that, even though he was GCSA from 1990 to 1995, preceding the Government’s admission in 1996 that BSE had caused variant Creutzfeldt-Jakob disease in humans, his involvement in the issue had been “negligible”. Nevertheless, he admitted the importance of BSE to the IEGMP’s thinking when asked by the House of Commons Trade and Industry Select Committee in 2001:

The BSE inquiry impacted upon us. Never again will any scientific committee say that there is no risk.

In Stewart’s reframing of the mobile phone risk issue, a number of policy assumptions were deliberately destabilized. The first of these was the question of labelling. A year before the IEGMP issued their report, an episode of the BBC’s flagship investigative TV journalism programme Panorama had “discovered” large variations in the Specific Absorption Rates of different mobile phones. Even though all complied with current regulations, the implication was that some were safer than others and that concerned consumers should be able to choose. The pre-Stewart regulatory presumption was that such distinctions were illegitimate and that labelling phones with their SAR levels would confuse consumers; compliance was all (Stilgoe, 2005). Stewart, however, recommended that mobile phones should be labelled with their SAR levels, determined by an internationally standardized procedure. Labels should appear on the handset’s box, in leaflets at stores, on a national web site and as one of the phone’s menu options (IEGMP, 2000, paragraph 1.52).

The endorsement of SAR labelling began to segment an imagined public that had until that point been presumed to be homogeneous and uninterested in the science behind mobile phone risk assessment (Stilgoe, 2007). The IEGMP also identified and made explicit some other lines of segmentation. Their report recognized the concerns of local communities who experienced the imposition of mobile phone masts during the rapid rollout of second-generation mobile phone networks in the 1990s. A policy of “permitted development rights” had eased the planning application process for all mobile phone masts less than 15 m high, and many of the masts erected in closest proximity to people’s homes appeared on local authority properties, welcomed by councils that were compensated by phone network companies. A rash of protests among these communities blended messages of alienation from decision-making with concern about the uncertainties of constant, long-term exposure to electromagnetic radiation. And while health concerns were not deemed to be a “material consideration” in planning decisions, the IEGMP, whose remit was health rather than planning, brought this question into its purview. Their report concluded that current planning rules were unacceptable, that base stations impacted upon people’s well-being and that permitted development rights should be revoked (IEGMP, 2000, paragraphs 1.30–1.40).

The IEGMP recommendation that received most media attention was that children should be discouraged from using mobile phones. The reasoning was that, “If there are currently unrecognised adverse health effects from the use of mobile phones, children may be more vulnerable because of their developing nervous system, the greater absorption of energy in the tissues of the head” (IEGMP, 2000, paragraph 1.53). Most mainstream scientists felt that there was little scientific justification for such a recommendation. One told me during an interview, “this idea that children are more vulnerable is complete politics” (“politics” here was clearly intended to denote a distortion of science).

Reaching further still outside the previous scientific consensus, the IEGMP also recommended that “the totality of the information available, including non-peer-reviewed data and anecdotal evidence, be taken into account when advice is proffered” (paragraph 1.70). This recommendation, with its endorsement of a notion of “anecdotal evidence” that had previously been used by advisory scientists in a pejorative sense, is symbolically important in the context of this and other public science controversies (Moore and Stilgoe, 2009). If we view the practice of advice as a public experiment, the additional insight is that this recommendation was discursively and pragmatically linked with the establishment of a new research programme:

We recommend that a substantial research programme should operate under the aegis of a demonstrably independent panel. The aim should be to develop a programme of research related to health aspects of mobile phones and associated technologies. This should complement work sponsored by the EU and in other countries. In developing a research agenda the peer-reviewed scientific literature, non-peer reviewed papers and anecdotal evidence should be taken into account. (IEGMP, 2000, Paragraph 1.58)

The Mobile Telecommunications and Health Research (MTHR) programme was set up to fill some of the space opened up and then vacated by the temporary IEGMP. Its budget was modest relative to its aims, which included large epidemiological studies: one case-control study swallowed almost £1 million of the £7.4 million available over 8 years. Stewart claimed that the aim of this programme was to plug scientific holes, but it is better explained as a procedural innovation. The programme was co-funded by industry and government, but agendas and allocations were tightly controlled by a group whose membership overlapped with the IEGMP, drawing on their renewed credibility. STS (for example, Collins, 1985) would predict that the outputs of research into topics such as the external detectability of reported electrosensitivity (the target of one MTHR study) would not settle the controversy over the physiological basis of these symptoms. More important than the answers generated by this new research are the new questions that it chose to ask, which reflected a legitimation of the public reframing of experimentation. So research into long-term exposure and vulnerable sub-populations, previously deemed unimportant under a consensus about acute thermal effects, was brought to the fore, at least discursively.

The Stewart report’s reconstruction of the mobile phones health issue (with a dual reconstruction of both scientific uncertainties and legitimate public concerns (Stilgoe, 2007)) was a clear departure from a pre-BSE technocratic mode, but it did not comfortably land upon a coherent alternative. In this way, the Stewart report rested on what Irwin (2006) has called an “uneasy blend of ‘old’ and ‘new’ assumptions”. We can imagine two models of expertise (see Table 1; also Millstone and van Zwanenberg, 2001). The mode of openness prescribed by Phillips was more disruptive than the transparency that was being advocated and grudgingly adopted in the early 2000s. Phillips’s openness involved opening doors to new policy actors as well as striving to open the minds of experts to new perspectives. Since BSE, we have seen British advisory bodies attempting to bridge the gap between the two models of expertise, with sporadic experiments in open governance, institutional redesign and occasional nervous retreats.

Table 1 Two models of expertise (from Stilgoe et al., 2006)

Irwin (2006: 300) argues that “the new assumption appears to be that greater public consultation over scientific and technological developments can eliminate (or at least reduce) subsequent opposition to technical change and achieve broad social consensus. Transparency and openness are intended to win back members of the public who have grown sceptical of governmental risk-handling”. He goes on to say that researchers in STS should adopt a reasonable degree of scepticism about the new rhetoric of openness. So how should we view the Stewart report’s apparent openness (and that of subsequent bodies in Sweden (see Soneryd, 2007))? As formal expert advice attempts to incorporate processes of public dialogue, lay membership and transparency, it is tempting to see such governance experiments as a lever for the opening up of issues. However, using the lens of coproduction (Jasanoff, 2004), we can see that such initiatives are as much a symptom of openness as its cause. These micro-experiments are part of the larger public experiment of democratic governance.

Mobilizing expert advice

As I discussed in this article’s introduction, the apparent fading of the mobile phones risk issue might simply be explained according to a risk/benefit calculus. However, this framing neither explains the nuances of the controversy nor provides useful insights for the future practice of scientific advice (see also Hom et al., 2011). I instead read this case in terms of the social control of uncertainty. First, the broader issue of EMF health effects has multiple objects of concern, whose uncertainties are unevenly distributed. Public controversies over mobile phone masts, for example, may adopt the language of health risk, but they are always about more than health, encompassing the politics of planning and the imposition of infrastructure (Drake, 2010; Hermans, 2014).

Since 2000, while general public concern about mobile phone risks has died down in the United Kingdom, it has sporadically bubbled up in other countries. Hermans considers the politics of mobile phone infrastructure siting in the Netherlands, highlighting the limits of “risk” language in dialogue between experts and publics, which typically also touches upon considerations of democracy, fairness, aesthetics and property prices. A loss of control by Dutch experts has seen citizens attempt to reclaim experimentation with the generation of alternative knowledge through the use of personal dosimeters (Hermans, 2014). The challenge is not just to expert practice, but also to scholarly analysis, which still privileges explanations that pretend “the solution is to mind the gap between laypersons’ and experts’ views on the risks” (Hermans, 2014: 26). Similarly, Borraz (2011) has described how in France, just as with the NRPB, the attempt to govern this issue as one of risk has led to an expansion and loss of control of uncertainty.

Concerns over the rollout of subsequent technologies, whose EMF exposures may be similar but whose distributions of risk, benefit and ethical concerns may vary, suggest that the issue has morphed rather than died. Wireless smart meters (Hess and Coley, 2014) and Wi-Fi in schools (Bale, 2006) have revived some of the same questions that characterized the mobile phones controversy.

It is of course impossible to say definitively whether expert advice has been successful, not least because of disagreements on the purposes of expert advice and the multiple and conflicting interests with which it must necessarily engage. We can acknowledge, however, instances where experts appear to recognize and engage with the coproductions (Jasanoff, 2004) of which they are a part. The IEGMP did not attempt the separation of risk from politics that had characterized previous engagements. Its precautionary approach, I have argued, reframed the issue as one of ongoing experimentation. For those working with a linear model of expert advice, this approach, which involved explicitly widening the bounds of legitimate uncertainty, seemed risky. And some social scientists have criticized the Stewart report in these terms. Burgess (2004) read the controversy as a straightforward “health panic”, while Durodié saw the IEGMP “elevating public opinion over professional expertise and subordinating science to prejudice” (Durodié, 2009: 112). One experimental study claims to demonstrate that precautionary recommendations amplify risk perceptions (Wiedemann and Schütz, 2005). Not only are such analyses analytically flawed, in that they fail to engage with the technical or political specificities of issues, but they are also counterproductive for expert practice. Focus group research finds no clear evidence of increased public concern in response to precautionary advice on EMFs (Timotijevic and Barnett, 2006). However, we should keep hold of the insight that public issues are in part constructed by their governance, rather than being a product either of fixed risk perception or technological essence. As Barry (2001) argues, attempts by experts to aggressively depoliticize issues will lead to politics bubbling up in new and surprising places.

The trouble with interpreting the issue as one of risk, whether in scholarly research or advisory practice, is that it becomes static: scientific opinion and public opinion are both imagined as immutable. If we look instead at coproduced technical uncertainties and politics, we can better understand the potential for mobility of both science and publics, and account for the success of the IEGMP in regaining control of a high-stakes issue.

The science of EMF risk, while anchored to a well-known thermal mechanism and represented by a dosimetric unit (SAR), was shown to have far more flexibility than had been acknowledged by the NRPB. Crucially, as the IEGMP realised, “science” here was not just about answers; it was also about relevant research questions. The IEGMP reframed science in terms of experimentation as well as evidence and, in demanding the construction of a reframed research programme, invited non-experts into the experiment. Similarly, in its public engagement, the IEGMP did not presume a static view of public opinion. With the move from deficit to dialogue in public engagement, public opinion has acquired new significance in expert advisory processes. In her study of public dialogue practice around mobile phone risks in Sweden, Soneryd (2007) talks in terms of “articulations” of public concern rather than fixed public opinion, with the recognition that articulations can change, and can be a way of navigating around things that may be “unsayable” in certain circumstances. (Elsewhere, Lezaun and Soneryd (2007) have described institutional nervousness about stakeholder interests in dialogue exercises precisely because their attitudes are seen as fixed, even if the stakeholders themselves describe the possibility of rearticulation through dialogue.)

The recognition that publics and science are both mobile challenges dominant metaphors of ideal policy as “evidence-based”. As research in policy studies has argued, following Lindblom (1959), policymakers cannot expect a synoptic sense of the relevant evidence before making decisions. Policy might better be understood as a process of “puzzling together” (see Porteous, 2016 for a review). The more empirically and normatively satisfying view of expert advice in practice might be one of “collective experimentation” (Latour, 1998), in which experts and publics recognize together that the process is an open-ended one. Treating expert advice as a public experiment democratizes the possibility of surprise and treats uncertainty as inevitable rather than intrinsically problematic. When mobile phones were first available and very few people owned one, they were officially safe. Now, the jury is officially out on the risks of mobile phones, and we seem to be OK with it. The question “are mobile phones safe?” will not be settled until questions of trust, credibility and the validity of ongoing research are resolved, and such things are largely unresolvable. The safety of mobile phones therefore represents a work-in-progress.

Data availability

Data sharing not applicable to this article as no datasets were generated during the current study. Any analysed data, and their sources, are indicated in the text.

Additional information

How to cite this article: Stilgoe J (2016) Scientific advice on the move: the UK mobile phone risk issue as a public experiment. Palgrave Communications. 2:16028 doi: 10.1057/palcomms.2016.28.