
Studying media events in the European Social Surveys across research designs, countries, time, issues, and outcomes

European Political Science

Abstract

Scholars often study isolated media effects using one method at one time point in one country. We seek to generalise the research in this area by examining hundreds of press-worthy events across dozens of countries at various points in time with an array of techniques and outcome measures. In particular, we merge a database containing thousands of events with five waves of the European Social Survey to conduct analyses across countries and individuals as well as within countries and for specific respondents. The results suggest that there is an impressive degree of heterogeneity when it comes to how citizens react to political developments. Some events generate significant opinion changes when groups of individuals who are ‘treated’ are compared with ‘control’ cases. However, other events produce modest or even null findings with methods that employ different counterfactuals. Thus, findings of both strong and weak media effects that scholars have uncovered over the years could be a function of methodological choices as well as context-specific factors such as institutional arrangements, media systems, eras, or event characteristics. Data limitations also make some research designs possible while they preclude others. We conclude with advice for others who wish to study political events in this manner as well as discussion of media effects, broadly construed.


Figure 1
Figure 2

Notes

  1. While word limits preclude a more extensive review of the literature, see Chandler and Munday (2011), DeFleur (2009), or Jennings et al (2002) for more on media effects research.

  2. Events have been used to study the effects of media on public opinion. Semetko et al (2003) focused on the June 1997 Amsterdam Summit, while Statham and Tumber (2013) examined linkages between events related to gay rights in Ireland and public support for those issues. While media events are sometimes staged public spectacles (e.g., the wedding of Prince Charles and Lady Diana featured in Dayan and Katz 1994), the events we study are press-worthy developments that were featured in news coverage across many different countries (see footnote 8 for more details).

  3. For earlier within-subjects panel designs on media effects, see Lazarsfeld and Fiske (1938) or Lazarsfeld et al (1944).

  4. The paradox concerns the twin possibilities of pre-treatment effects (e.g., Druckman and Leeper, 2012) in which the communication effects are already taking place before the analysis starts or alternative paths to influence that exist outside the mass media, such as when individuals communicate with each other (e.g., McClurg, 2006; Ryan, 2010). Again, design choices may help contend with these possibilities.

  5. Although we adopt a macroscopic perspective, it is worth commenting briefly upon some of the potential microfoundations, since editorial and journal limits preclude going into depth. As Hetherington and Rudolph (2008) argue, outcomes we study, such as trust in government, are the product of a multi-stage process whereby agenda-setting via the media and other events affects importance judgements. Primed to think some issues are more important than others, citizens ultimately alter their views toward the government (also see Miller and Krosnick, 2000).

  6. The average number of respondents was 1,923 with roughly the same number in each round (round 1 average=1,925, round 2=1,887, round 3=1,891, round 4=1,968, and round 5=1,943). Likewise, in most rounds the ESS approached the target response rate of 70 per cent with averages in the low 60s for each round (61, 62, 63, 62, and 60 for each of the rounds respectively).

  7. See http://www.europeansocialsurvey.org/ for more details on the surveys and methodology.

  8. Most of the events are in round 5 (n=2,153), while the fewest are in round 1 (n=717). Most events are single-day events (80 per cent). Not all countries have events recorded, but of those that do, 11 per cent occur within 30 days of the survey start and 77 per cent take place within the interview period. As discussed earlier, the ESS reporting guidelines state that ‘Events should be reported once they get “prominent attention” in national newspapers. For the purposes of monitoring, prominent attention means “making front page news” or “appearing regularly in larger articles on later pages”’ (ESS 5 Event Reporting Guidelines, available at: http://www.europeansocialsurvey.org/docs/round5/methods/ESS5_event_reporting_guidelines.pdf).

  9. Specifically, the ESS characterises this event as follows: ‘The collapse of the government was directly caused by the dismissal of deputy prime minister Andrzej Lepper on the grounds of his suspected involvement in a land purchase scandal (Lepper was a controversial leader of the Self-Defence party, often described as populist). Self-Defence decided to leave the coalition, and the other coalition partner, League of Polish Families, followed suit. Another politician who was dismissed in connection with the scandal was Janusz Kaczmarek, then minister of interior. A suspicion was voiced that he had warned Lepper of actions planned against the latter by the Central Anticorruption Bureau (CBA). Also some other well-known individuals were suspected of obstructing justice, among them the head of the police and one of the richest Polish businessmen. After his dismissal Janusz Kaczmarek gave many interviews where he implied that public security services were being used for political purposes. He painted a particularly unfavourable picture of the then justice minister Zbigniew Ziobro. As it turned out, many of Kaczmarek’s comments were departures from the truth, which generally undermined the credibility of his accusations.’

  10. In a randomly selected sample of fifty events, the two research assistants achieved relatively high intercoder reliability for the domestic distinction (Krippendorff’s α=0.92) and more modest reliability for the major versus minor distinction (Krippendorff’s α=0.60).

  11. We created dummy variables for each of these relative to the omitted baseline of non-economic national events.

  12. The appendix contains details on the countries and events by ESS round as well as other coding decisions.

  13. The trust in politicians question was, ‘Using this card, please tell me on a score of 0-10 how much you personally trust each of the institutions I read out. 0 means you do not trust an institution at all, and 10 means you have complete trust. Firstly … trust in politicians’. The economic satisfaction question was an 11-point scale (recoded in 0.1 increments from 0=extremely dissatisfied to 1=extremely satisfied) in response to ‘On the whole how satisfied are you with the present state of the economy in [country]?’ Finally, the government satisfaction item used the same 11-point scale in response to ‘Now thinking about the [country] government, how satisfied are you with the way it is doing its job?’ These variables have the ESS mnemonics TRSTPLT, STFECO, and STFGOV.

  14. The media index was an additive scale built from responses to 8-point measures of ‘on an average weekday, how much of your time watching television is spent watching news or programmes about politics and current affairs?’ for television and similar items for radio and newspapers. The answer choices were time-based increments ranging from ‘no time at all’ to ‘more than three hours’. We examined the media exposure scale and found it to be a fairly reliable measure across all five waves of the ESS (average Cronbach’s α=0.53). However, there was considerable variability, with reliabilities for the top two countries averaging closer to 0.71 in each ESS wave. We trichotomise the media measure due to design considerations (i.e., to separate high exposure from medium and low), but we also do so to counteract differential scale usage by respondents across countries (see King et al, 2004). Doing so is advantageous; the correlation between our measure and the first dimension in a confirmatory factor analysis is 0.82 across all waves. Other work by Jowell et al (2007), Herda (2010), and Sides and Citrin (2007) confirms the reliability of scales built upon ESS data.

  15. The education item was a seven-point measure ranging from less than lower secondary education to higher tertiary education above an MA degree. Race was a binary indicator of whether the respondent belonged to ‘a minority ethnic group’ in the country. Income was a twelve-point measure of household net total income ranging from less than €1,800 to €120,000 or more. All independent and dependent variables were rescaled to the 0–1 interval.

  16. The media freedom measure is a continuous rating of countries based on government interference in their media sectors. In its original form, it is scaled from 0 (most free) to 100 (least free) and is constructed from 23 items subdivided into three equally weighted subcategories: legal environment, political environment, and economic environment. See Schoonvelde (2014) for a detailed description of the subcategories; broadly, they cover laws and the legal regulatory environment (legal), political control over media content (political), and ownership structures (economic). The variable was inverted and rescaled to the 0–1 interval so that higher values convey more freedom. Becker et al (2007) find that the Freedom House measures were reliable across time and that they reflected variations in the media environment linked to the collapse of communism in the late twentieth century. Furthermore, Becker and Vlad (2009) report high correlations (i.e., Pearson’s r values of 0.80 or better) between Freedom House scores and Reporters Without Borders measures of press freedom from 2002 to 2008.

  17. We make use of 741 unique events for which models could be estimated given the data requirements (i.e., the events occur at the proper moment relative to the survey interview period). Some of these events are repeated in the dataset when analysed with different designs. Of the 741 events analysed by any of the three types of models, 640 were used in the DID analyses, which make use of observations before and after the event (see Panel C of Figure 1). For the DID design, events must occur during the survey field period for the country in question, and there must be survey data from another country (or countries) that did not experience the event to serve as the counterfactual. The other two designs are post-test designs (see Panels A and B of Figure 1), each of which makes use of nearly 100 events. For the designs shown in Panels A and B of Figure 1, we look at respondents within a single country. For the DID design shown in Panel C, we pool all available respondents in the country with the event as well as comparison observations in other countries, when available.

  18. We used the log of the number of observations instead of the count to produce more meaningful results, but we obtain the same finding with the unlogged counts for all three outcome variables; the coefficients are negatively signed and significant at p<0.01, two-tailed.

  19. Interactions between the design dummy variables and the number of observations (logged) reveal negative and significant coefficients for the DID design interacted with the number of observations (p<0.05 for trust in politicians but p>0.10 for the satisfaction outcomes). For the trust in politicians model, where we are able to contrast the WS/WS technique, the interaction term between the WS/WS design and the number of observations is also negative and significant (p<0.05); in the remaining models the term is positive but insignificant (p<0.20).

  20. For trust in politicians, a round 5 ESS dummy variable has a coefficient of −0.765 with a standard error of 0.383, p<0.05 (the baseline is round 1). For economic satisfaction, the coefficient is −1.001 with a standard error of 0.477, p<0.05. The dummies for rounds 2–4 are also negatively signed, but most are p>0.05.

  21. Once again, there is some evidence that the effect is specific to the DID design based on interactions between the design and media freedom (all three interaction term coefficients are negative, but the p-values range from 0.01 to 0.192).

  22. For trust in politicians, 33 per cent of the models produced media effect t-ratios of 1.96 or greater (mean=0.332, sd=0.471). For economic satisfaction, the mean was similar (mean=0.328, sd=0.469), and for government satisfaction there were a few more significant effects (mean=0.362, sd=0.481).

  23. In addition to the same set of variables we considered earlier, we include the standard error of the coefficient as a precaution, on the grounds that a large coefficient may not be meaningful except in relation to the size of its standard error.

  24. We were able to include a term on the right-hand side, which captured whether the coefficient was negative or positive. Those ‘negative coefficient’ terms are themselves negative and significant (p<0.05), and their inclusion did not change the patterns reported earlier.

  25. In November of 2013, the ESS was awarded ERIC (European Research Infrastructure Consortium) status. According to the news release (http://www.europeansocialsurvey.org/about/news.html), ‘ERICs are facilities for the scientific community, allowing researchers access to archives and tools to conduct top-level research. Member States, Associated and Third Countries and intergovernmental organisations may become members of an ERIC’.

  26. Other questions concerning the events arise too, such as the relationship of the events to actual news coverage. Others who study events (e.g., Semetko et al, 2003; Ladd and Lenz, 2009; Stevens et al, 2011) show that they do generate coverage.
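The intercoder reliability statistic reported in note 10 can be reproduced for the simple case of two coders, nominal codes, and no missing data. The following is an illustrative sketch with made-up codes, not the authors' replication code:

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, nominal codes, no missing data."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed disagreement: share of units on which the two coders differ.
    d_o = sum(a != b for a, b in zip(coder1, coder2)) / n
    # Expected disagreement from the pooled distribution of codes.
    pooled = Counter(coder1) + Counter(coder2)
    total = 2 * n
    d_e = sum(pooled[c] * pooled[k]
              for c in pooled for k in pooled if c != k) / (total * (total - 1))
    return 1.0 if d_e == 0 else 1 - d_o / d_e
```

By convention, values of α above roughly 0.80 are treated as reliable and values below about 0.67 with caution, which is one reason the 0.60 for the major/minor distinction is the weaker of the two figures in note 10.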
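The additive media exposure index and the trichotomisation described in note 14 can be sketched as follows. The item scoring (three 0–7 items) and the within-sample tercile cuts are assumptions for illustration; the authors' exact coding is in their replication materials:

```python
def media_index(tv, radio, paper, max_val=7):
    """Additive media exposure index from three 0-7 news-exposure items,
    rescaled to the 0-1 interval."""
    return (tv + radio + paper) / (3 * max_val)

def trichotomise(scores):
    """Split respondents into low / medium / high exposure terciles.

    Cutting within the sample (rather than at fixed scale points) is one
    way to counteract differential scale usage across countries."""
    ranked = sorted(scores)
    lo_cut = ranked[len(ranked) // 3]
    hi_cut = ranked[2 * len(ranked) // 3]
    def label(s):
        if s < lo_cut:
            return "low"
        if s < hi_cut:
            return "medium"
        return "high"
    return [label(s) for s in scores]
```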
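The inversion and rescaling of the Freedom House scores described in note 16 amounts to a one-line transformation; a minimal sketch:

```python
def rescale_media_freedom(raw_score):
    """Invert and rescale a Freedom House press-freedom score.

    Raw scores run from 0 (most free) to 100 (least free); the
    transformed measure runs from 0 to 1 with higher values conveying
    more freedom, matching the coding described in note 16.
    """
    if not 0 <= raw_score <= 100:
        raise ValueError("score must lie in [0, 100]")
    return (100 - raw_score) / 100
```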
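The DID counterfactual logic in note 17 compares the over-time change in the country that experienced the event with the change in comparison countries. A minimal two-group, two-period sketch with hypothetical outcome values (the paper's models also include covariates and standard errors):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 difference-in-differences estimate.

    Each argument is a list of outcome values (e.g., 0-10 trust scores)
    for respondents interviewed before/after an event in the country
    that experienced it ('treated') and in comparison countries that
    did not ('control'). The control countries supply the
    counterfactual trend.
    """
    def mean(xs):
        return sum(xs) / len(xs)
    change_in_treated_country = mean(treated_post) - mean(treated_pre)
    change_in_control_countries = mean(control_post) - mean(control_pre)
    return change_in_treated_country - change_in_control_countries
```

In a regression framework this quantity corresponds to the coefficient on a treated-by-post interaction term; the sketch above strips that down to differences of group means.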

References

  • Abadie, A., Diamond, A. and Hainmueller, J. (2010) ‘Synthetic control methods for comparative case studies: Estimating the effect of California’s tobacco control program’, Journal of the American Statistical Association 105 (490): 493–505.

  • Barabas, J. and Jerit, J. (2009) ‘Estimating the causal effects of media coverage on policy-specific knowledge’, American Journal of Political Science 53 (1): 73–89.

  • Barabas, J. and Jerit, J. (2010) ‘Are survey experiments externally valid?’ American Political Science Review 104 (2): 226–42.

  • Bartels, L.M. (1993) ‘Messages received: The political impact of media exposure’, American Political Science Review 87 (2): 267–85.

  • Becker, L.B. and Vlad, T. (2009) ‘Validating country-level measures of media freedom with survey data’. Paper presented at the Midwest Association for Public Opinion Research, Chicago, IL, November 20–21.

  • Becker, L.B., Vlad, T. and Nusser, N. (2007) ‘An evaluation of press freedom indicators’, International Communication Gazette 69 (1): 5–28.

  • Bennett, W.L. and Iyengar, S. (2008) ‘A new era of minimal effects? The changing foundations of political communication’, Journal of Communication 58 (4): 707–31.

  • Berinsky, A.J. and Kinder, D.R. (2006) ‘Making sense of issues through media frames: Understanding the Kosovo crisis’, Journal of Politics 68 (3): 640–56.

  • Campbell, D.T. and Stanley, J.C. (1963) Experimental and Quasi-Experimental Designs for Research, Boston: Houghton Mifflin.

  • Chandler, D. and Munday, R. (2011) A Dictionary of Media and Communication, New York: Oxford University Press.

  • Chong, D. and Druckman, J.N. (2007) ‘A theory of framing and opinion formation in competitive elite environments’, Journal of Communication 57 (1): 99–118.

  • Curran, J., Iyengar, S., Lund, A.B. and Salovaara-Moring, I. (2009) ‘Media system, public knowledge, and democracy: A comparative study’, European Journal of Communication 24 (1): 5–26.

  • Dalton, R.J., Beck, P.A. and Huckfeldt, R. (1998) ‘Partisan cues and the media: Information flows in the 1992 presidential election’, American Political Science Review 92 (1): 111–26.

  • Dayan, D. and Katz, E. (1994) Media Events: The Live Broadcast of History, Cambridge, MA: Harvard University Press.

  • DeFleur, M.L. (2009) Mass Communication Theories: Explaining Origins, Processes, and Effects, New York: Pearson.

  • de Vreese, C.H. and Boomgaarden, H.G. (2006) ‘Media effects on public opinion about the enlargement of the European Union’, Journal of Common Market Studies 44 (2): 419–36.

  • Dilliplane, S., Goldman, S.K. and Mutz, D.C. (2013) ‘Televised exposure to politics: New measures for a fragmented media environment’, American Journal of Political Science 57 (1): 236–48.

  • Dunaway, J. (2008) ‘Markets, ownership, and the quality of campaign news coverage’, Journal of Politics 70 (4): 1193–1202.

  • Druckman, J.N. and Leeper, T.J. (2012) ‘Learning more from political communication experiments: Pretreatment and its effects’, American Journal of Political Science 56 (4): 875–96.

  • Druckman, J.N. and Parkin, M. (2005) ‘The impact of media bias: How editorial slant affects voters’, Journal of Politics 67 (4): 1030–49.

  • Fair, C., Malhotra, N. and Shapiro, J.N. (2012) ‘Faith or doctrine? Religion and support for political violence in Pakistan’, Public Opinion Quarterly 76 (4): 688–720.

  • Finseraas, H., Jakobsson, N. and Kotsadam, A. (2011) ‘Did the murder of Theo van Gogh change Europeans’ immigration preferences?’ Kyklos 64 (3): 396–409.

  • Finseraas, H. and Listhaug, O. (2013) ‘It can happen here: The impact of the Mumbai terror attacks on public opinion in Western Europe’, Public Choice 156 (1): 213–28.

  • Fraile, M. (2013) ‘Do information-rich contexts reduce knowledge inequalities? The contextual determinants of political knowledge in Europe’, Acta Politica 48 (2): 119–43.

  • Gerber, A. and Malhotra, N. (2008) ‘Do statistical reporting standards affect what is published? Publication bias in two leading political science journals’, Quarterly Journal of Political Science 3 (3): 313–26.

  • Gerber, A.S., Green, D.P. and Nickerson, D. (2000) ‘Testing for publication bias in political science’, Political Analysis 9 (4): 385–92.

  • Gomez, B.T. and Wilson, J.M. (2008) ‘Political sophistication and attributions of blame in the wake of Hurricane Katrina’, Publius 38 (4): 633–50.

  • Hallin, D.C. and Mancini, P. (2004) Comparing Media Systems: Three Models of Media and Politics, New York: Cambridge University Press.

  • Herda, D. (2010) ‘How many immigrants? Foreign-born population innumeracy in Europe’, Public Opinion Quarterly 74 (4): 674–95.

  • Hetherington, M.J. (1996) ‘The media’s effect on voters’ national retrospective economic evaluations in 1992’, American Journal of Political Science 40 (2): 372–95.

  • Hetherington, M.J. and Rudolph, T.J. (2008) ‘Priming, performance, and the dynamics of political trust’, Journal of Politics 70 (2): 498–512.

  • Hetherington, M.J. and Rudolph, T.J. (2015) Why Washington Won’t Work: Polarization, Political Trust, and the Governing Crisis, Chicago: University of Chicago Press.

  • Hutchings, V.L. (2001) ‘Political context, issue salience, and selective attentiveness: Constituent knowledge of the Clarence Thomas confirmation vote’, Journal of Politics 63 (3): 846–68.

  • Iyengar, S., Peters, M.D. and Kinder, D.R. (1982) ‘Experimental demonstrations of the “not-so-minimal” consequences of television news programs’, American Political Science Review 76 (4): 848–58.

  • Iyengar, S., Hahn, K.S., Bonfadelli, H. and Marr, M. (2009) ‘“Dark areas of ignorance” revisited: Comparing international affairs knowledge in Switzerland and the United States’, Communication Research 36 (3): 341–58.

  • Iyengar, S., Curran, J., Lund, A.B., Salovaara-Moring, I., Hahn, K.S. and Coen, S. (2010) ‘Cross-national versus individual-level differences in political information: A media systems perspective’, Journal of Elections, Public Opinion & Parties 20 (3): 291–309.

  • Jerit, J., Barabas, J. and Bolsen, T. (2006) ‘Citizens, knowledge, and the information environment’, American Journal of Political Science 50 (2): 266–82.

  • Jowell, R., Roberts, C., Fitzgerald, R. and Eva, G. (2007) Measuring Attitudes Cross-Nationally: Lessons from the European Social Survey, Thousand Oaks, CA: Sage.

  • Jusko, K.L. and Shively, W.P. (2005) ‘Applying a two-step strategy to the analysis of cross-national public opinion data’, Political Analysis 13 (4): 327–44.

  • Kahn, K.F. and Kenney, P.J. (2002) ‘The slant of the news: How editorial endorsements influence campaign coverage and citizens’ views of candidates’, American Political Science Review 96 (2): 381–94.

  • Keele, L., Malhotra, N. and McCubbins, C.H. (2013) ‘Do term limits restrain state fiscal policy? Approaches for causal inference in assessing the effects of legislative institutions’, Legislative Studies Quarterly 38 (3): 291–326.

  • Kellstedt, P.M. (2000) ‘Media framing and the dynamics of racial policy preferences’, American Journal of Political Science 44 (2): 245–60.

  • Keeter, S., Kennedy, C., Dimock, M., Best, J. and Craighill, P. (2006) ‘Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey’, Public Opinion Quarterly 70 (5): 759–79.

  • Keeter, S., Kennedy, C., Clark, A., Tompson, T. and Mokrzycki, M. (2007) ‘What’s missing from national landline RDD surveys? The impact of the growing cell-only population’, Public Opinion Quarterly 71 (5): 772–92.

  • King, G., Murray, C.J.L., Salomon, J.A. and Tandon, A. (2004) ‘Enhancing the validity and cross-cultural comparability of measurement in survey research’, American Political Science Review 98 (1): 191–207.

  • Klapper, J.T. (1960) The Effects of Mass Communication, New York: Free Press.

  • Krosnick, J.A. (1990) ‘Government policy and citizen passion: A study of issue publics in contemporary America’, Political Behavior 12 (1): 59–92.

  • Ladd, J.M. and Lenz, G.S. (2009) ‘Exploiting a rare communication shift to document the persuasive power of the news media’, American Journal of Political Science 53 (2): 394–410.

  • Lassen, D.D. (2005) ‘The effect of information on voter turnout: Evidence from a natural experiment’, American Journal of Political Science 49 (1): 103–18.

  • Lazarsfeld, P. and Fiske, M. (1938) ‘The “panel” as a new tool for measuring opinion’, Public Opinion Quarterly 2 (4): 596–612.

  • Lazarsfeld, P.F., Berelson, B. and Gaudet, H. (1944) The People’s Choice: How the Voter Makes Up His Mind in a Presidential Campaign, 2nd ed. New York: Duell, Sloan and Pearce.

  • Leamer, E.E. (1978) Specification Searches: Ad Hoc Inference with Nonexperimental Data, New York: Wiley.

  • Lewis, J.B. and Linzer, D.A. (2005) ‘Estimating models in which the dependent variable is based on estimates’, Political Analysis 13 (4): 345–64.

  • Liu, Y., Shen, F., Eveland, W.P. and Dylko, I. (2013) ‘The impact of news use and news content characteristics on political knowledge and participation’, Mass Communication and Society 16 (5): 713–73.

  • Maestas, C., Atkeson, L., Croom, T. and Bryant, L. (2008) ‘Shifting the blame: Federalism, media and public assignment of blame following Hurricane Katrina’, Publius: The Journal of Federalism 38 (4): 609–32.

  • McClurg, S.D. (2006) ‘The electoral relevance of political talk: Examining the effect of disagreement and expertise in social networks on political participation’, American Journal of Political Science 50 (3): 737–54.

  • McGuire, W.J. (1986) ‘The myth of massive media impact: Savagings and salvaging’, in G. Comstock (ed.) Public Communication and Behavior, Orlando: Academic Press, pp. 173–257.

  • Miller, B. (2010) ‘The effects of scandalous information on recall of policy-related information’, Political Psychology 31 (6): 887.

  • Miller, J.N. and Krosnick, J.A. (2000) ‘News media impact on the ingredients of presidential evaluations: Politically knowledgeable citizens are guided by a trusted source’, American Journal of Political Science 44 (2): 301–15.

  • Morgan, S.L. and Winship, C. (2007) Counterfactuals and Causal Inference: Methods and Principles for Social Research, New York: Cambridge University Press.

  • National Research Council (2013) Nonresponse in Social Science Surveys: A Research Agenda, Panel on a Research Agenda for the Future of Social Science Data Collection, Committee on National Statistics, Division of Behavioral and Social Sciences and Education, Washington, DC: The National Academies Press.

  • Neuman, W.R., Just, M.R. and Crigler, A.N. (1992) Common Knowledge: News and the Construction of Political Meaning, Chicago: University of Chicago Press.

  • Patterson, T. and McClure, R.D. (1976) The Unseeing Eye: The Myth of Television Power in National Elections, New York: Putnam.

  • Prior, M. (2007) Post-Broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections, New York: Cambridge University Press.

  • Prior, M. (2009) ‘The immensely inflated news audience: Assessing bias in self-reported news exposure’, Public Opinion Quarterly 73 (1): 130–43.

  • Rosen, J. (2001) What Are Journalists For?, New Haven: Yale University Press.

  • Rudolph, T.J. and Popp, E. (2009) ‘Bridging the ideological divide: Trust and support for social security privatization’, Political Behavior 31 (3): 331–51.

  • Ryan, J.B. (2010) ‘The effects of network expertise and biases on vote choice’, Political Communication 27 (1): 44–58.

  • Schoonvelde, M. (2014) ‘Media freedom and the institutional underpinnings of political knowledge’, Political Science Research and Methods 2 (2): 163–78.

  • Semetko, H.A., Van Der Brug, W. and Valkenburg, P.M. (2003) ‘The influence of political events on attitudes toward the European Union’, British Journal of Political Science 33 (4): 621–34.

  • Shadish, W.R., Cook, T.D. and Campbell, D.T. (2002) Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston: Houghton Mifflin.

  • Shields, T.G., Goidel, R.K. and Tadlock, B. (1995) ‘The net impact of media exposure on individual voting decisions in U.S. elections’, Legislative Studies Quarterly 20 (3): 415–30.

  • Sides, J. and Citrin, J. (2007) ‘European opinion about immigration: The role of identities, interests and information’, British Journal of Political Science 37 (3): 477–504.

  • Soroka, S. (2006) ‘Good news and bad news: Asymmetric responses to economic information’, Journal of Politics 68 (2): 372–85.

  • Soroka, S., Andrew, B., Aalberg, T., Iyengar, S., Curran, J., Coen, S., Hayashi, K., Jones, P., Mazzoleni, G., Rhee, J.W., Rowe, D. and Tiffen, R. (2013) ‘Auntie knows best? Public broadcasters and current affairs knowledge’, British Journal of Political Science 43 (4): 719–39.

  • Statham, P. and Tumber, H. (2013) ‘Relating news analysis and public opinion: Applying a communications method as a “tool” to aid interpretation of survey results’, Journalism 14 (6): 737–53.

  • Stevens, D. and Banducci, S. (2013) ‘One voter and two choices: The impact of electoral context on the 2011 UK referendum’, Electoral Studies 32 (2): 274–84.

  • Stevens, D., Banducci, S., Karp, J. and Vowles, J. (2011) ‘Prime time for Blair? Media priming, Iraq, and leadership evaluations in Britain’, Electoral Studies 30 (3): 546–60.

  • Stevens, D. and Karp, J.A. (2012) ‘Leadership traits and media influence’, Political Studies 60 (4): 787–808.

  • Stoop, I., Billiet, J., Koch, A. and Fitzgerald, R. (2010) Improving Survey Response: Lessons Learned from the European Social Survey, West Sussex, UK: John Wiley and Sons.

  • Wei, L. and Hindman, D.B. (2011) ‘Does the digital divide matter more? Comparing the effects of new media and old media use on the education-based knowledge gap’, Mass Communication and Society 14 (2): 216–35.

  • Wooldridge, J. (2013) Introductory Econometrics: A Modern Approach, 5th ed. Mason, OH: South-Western Cengage Learning.

  • Zaller, J.R. (2002) ‘The statistical power of election studies to detect media exposure effects in political campaigns’, Electoral Studies 21 (2): 297–329.

  • Zaller, J.R. (1996) ‘The myth of massive media impact revived: New support for a discredited idea’, in D. Mutz, R. Brody and P. Sniderman (eds.) Political Persuasion and Attitude Change, Ann Arbor: University of Michigan Press, pp. 17–79.

  • Zaller, J.R. and Hunt, M. (1995) ‘The rise and fall of Candidate Perot: The outsider versus the political system – Part II’, Political Communication 12 (1): 97–123.


Acknowledgements

The authors of this paper wish to acknowledge the receipt of generous research support from the Economic and Social Research Council (ESRC). John Barry Ryan provided valuable comments as did panelists at the 2014 London Media Effects Research workshop. David Martin and Matt Harris provided helpful research assistance.


Additional information

Supplementary information accompanies this article on the European Political Science website (www.palgrave-journals.com/eps)


APPENDIX

This appendix provides a description of the events data as well as details on the data processing and coding steps that were necessary before our analysis. Replication data and code will be available on the authors’ website (http://www.jasonbarabas.com) after publication.

DESCRIPTION OF ESS EVENTS DATA

The ESS is a cross-national study that has been conducted every 2 years since 2001 in various countries across Europe. In conjunction with the individual-level data sets for each round, the ESS team has also released data designed to capture the political context within the participating countries. The political structure of Europe is such that there are likely to be shared environmental factors affecting sets of countries, as well as domestic factors specific to individual nations. The ESS event file offers an expansive, publicly available data source for researchers looking to integrate these factors into their analyses.

Each event report typically provides several pieces of information: a substantive description (e.g., ‘UK house prices have fallen for an 11th consecutive month’), a categorisation (e.g., ‘[e]vents concerning the national economy, labour market’), start and end dates, and any potentially connected items from the survey instrument. Responsibility for collecting these data appears decentralised, falling to separate research teams in each country involved in the broader study. Each group follows a common set of instructions on how to collect and record media-reported events. Delegating collection to the numerous local teams has advantages for accurately capturing events occurring in many locales at once. One likely drawback of this approach, however, is variation in the nature of reporting: some events are sourced while others are not, and there are also practical differences in formatting and structure between countries.

Standardisation is an obvious imperative for the construction and employment of the events data in a modelling capacity. We transformed the data set in several ways to improve its usefulness in our analyses. The issues we identify may deter users upon first opening the unprocessed events file, but our corrections are broadly applicable. The corrected events data set and underlying code are available in the replication materials for this paper.

Appendix Table A1 summarises the cumulative events data file for all countries participating in any of the first five ESS rounds. The table shows which survey rounds each country participated in, along with counts of events in the data set. We first show the total number of events reported by a country, and then subdivide this number into (a) events reported in the 30 days before the start of one of a given country’s survey rounds, and (b) events reported within the duration of one of a given country’s survey rounds. The table also displays separate counts for the subset of events coded as domestic/major. Setting aside these final three columns for a moment, several features of the data are worth noting. First, the pattern of inclusion in the five rounds varies considerably across countries. Fewer than half of the participating nations were present for all rounds (e.g., Denmark, the UK). Others are included for only a single round (Austria), while the rest participate in a continuous (Ukraine) or non-continuous (Netherlands) subset of rounds.
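The windowing logic behind these counts can be sketched as follows. This is a minimal illustration in Python; the function and variable names are our own, not drawn from the ESS files.

```python
from datetime import date, timedelta

def classify_event(event_start, survey_start, survey_end, window_days=30):
    """Classify an event relative to a country-round's fieldwork period.

    Returns 'pre' if the event falls in the `window_days` before the
    survey began, 'during' if it falls within the fieldwork period,
    and None otherwise.
    """
    if survey_start - timedelta(days=window_days) <= event_start < survey_start:
        return "pre"
    if survey_start <= event_start <= survey_end:
        return "during"
    return None

# An event on 1 September, for a survey fielded 15 September - 1 December,
# falls in the 30-day pre-survey window.
classify_event(date(2008, 9, 1), date(2008, 9, 15), date(2008, 12, 1))
```

Tallying the 'pre' and 'during' classifications within each country-round, separately for all events and for the domestic/major subset, yields counts of the kind reported in Table A1.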

Table A1 Summary of European Social Survey (ESS) Events

Similarly, there is a large degree of variance in the overall number of stories reported in each country. This may be due in part to substantive differences in political context between the nations under study, but there are also systematic differences in reporting frequencies that seem unexpected on substantive grounds. For example, Spain and the United Kingdom (UK) are both large countries that participated in all five waves, yet the former reported nearly three times as many events (1,441) as the latter (484). Such an extreme discrepancy likely reflects differences between the reporting practices of the separate ESS teams rather than real variance in the political environment between the countries. Caution is advised in using these data for any application that requires comparable between-country counts of events.

Figures A1a and A1b graphically illustrate a few of the ways in which event reporting differed between countries, again using Spain and the UK as examples. The two countries first vary in terms of the time frame and length of their survey interview periods, as depicted by the horizontal lines within the chart space. There are also substantial differences in the timing of event reports, denoted by the rug plots (i.e., the black vertical lines) positioned above the x-axis. Spain reported more events than the UK overall (see Appendix Table A1), and its reporting closely coincides with the timing of the five ESS rounds. The UK team, on the other hand, reported many events in the intervening periods between rounds.

Figure A1a: ESS interview and event dates (Spain).

Figure A1b: ESS interview and event dates (United Kingdom).

The data set includes media-reported events occurring both internationally and domestically. An election in the United States, for instance, might be reported if it receives significant coverage. While international events could be utilised in other settings, the most useful reports for our analyses were those reflecting unique qualities of the political environment within a single country. To identify this category of events, coders read through every entry in the cumulative file, and judged whether each operated at the international (i.e., an election in the United States reported by the UK team) or domestic level (an election in the UK reported by the UK team).

Finally, events within the file vary considerably in terms of their magnitude of importance. Perceptions of importance are, of course, subjective to a degree, but some events clearly stood out as more likely to have perceptible effects on ESS survey responses than others. Our coders made entries denoting which events appeared to be ‘major’ compared with the others reported. To illustrate, we judged an attempted car bombing at Glasgow airport to be major, while a report about an isolated factory closing was judged to be minor. The intersection of events that were both domestic and major was of greatest interest. As shown in the rightmost columns of Appendix Table A1, these events comprise a small subset of the overall reporting.

In addition to our new coding, we also corrected numerous existing issues within the data:

CREATING CONSISTENT DATE FORMATS

Maintaining a uniform date format for each record is necessary to use the events file effectively with statistical software. Unfortunately, the date entries in the unprocessed file fluctuate between four primary formats: mm/dd/yyyy for single dates, and either dd-dd/mm/yyyy, dd/mm/yyyy-dd/mm/yyyy, or dd/mm-dd/mm/yyyy in cases where an event spanned multiple days. There are also dozens of entries with idiosyncratic formatting that matches no pattern. We first made edits where necessary to ensure that all entries conformed to one of the four main formats. Next, we identified the format of each entry in order to parse the day, month, and year of each event. Finally, this information allowed us to construct new, uniformly formatted variables marking the start and end date of each event (these are identical for single-day events). The new dates take a single, common format (mm/dd/yyyy) that is easily read by modern software.
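A parser for the four formats can be sketched as below. This is an illustration rather than our actual replication code; it follows the stated convention that single dates are month-first while ranges are day-first, and it assumes both ends of a dd/mm-dd/mm/yyyy range share the listed year.

```python
import re
from datetime import date

def parse_event_dates(raw):
    """Parse one raw date entry into (start, end) date objects.

    Handles the four formats described in the text; returns None for
    idiosyncratic entries that match no pattern (these required
    hand-editing before parsing).
    """
    raw = raw.strip()
    m = re.fullmatch(r"(\d{1,2})/(\d{1,2})/(\d{4})", raw)  # mm/dd/yyyy
    if m:
        mm, dd, yyyy = map(int, m.groups())
        d = date(yyyy, mm, dd)
        return d, d
    m = re.fullmatch(r"(\d{1,2})-(\d{1,2})/(\d{1,2})/(\d{4})", raw)  # dd-dd/mm/yyyy
    if m:
        d1, d2, mm, yyyy = map(int, m.groups())
        return date(yyyy, mm, d1), date(yyyy, mm, d2)
    m = re.fullmatch(
        r"(\d{1,2})/(\d{1,2})/(\d{4})-(\d{1,2})/(\d{1,2})/(\d{4})",
        raw)  # dd/mm/yyyy-dd/mm/yyyy
    if m:
        d1, m1, y1, d2, m2, y2 = map(int, m.groups())
        return date(y1, m1, d1), date(y2, m2, d2)
    m = re.fullmatch(
        r"(\d{1,2})/(\d{1,2})-(\d{1,2})/(\d{1,2})/(\d{4})",
        raw)  # dd/mm-dd/mm/yyyy (both dates assumed in the same year)
    if m:
        d1, m1, d2, m2, yyyy = map(int, m.groups())
        return date(yyyy, m1, d1), date(yyyy, m2, d2)
    return None
```

Once parsed into date objects, the start and end can be written back out in the common mm/dd/yyyy format with `.strftime("%m/%d/%Y")`.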

SURVEY FIELD PERIOD VERSUS INTERVIEW DATES

Another problem related to dates involved the published beginning and end of the ESS survey periods for each country. Documentation on the ESS website provides a set of ‘fieldwork period’ dates corresponding to each country for each round. However, these dates often fail to correspond with the earliest and/or latest interview dates recorded in the survey data. Having an accurate sense of when events occurred relative to the beginning of each country-round was important for many of our analyses. Thus, we constructed our own survey start and end variables from the dates of the actual interviews in the survey data.
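The construction of these variables amounts to taking the earliest and latest interview date within each country-round. A minimal sketch, with invented records in place of the actual ESS interview data:

```python
from collections import defaultdict
from datetime import date

# Hypothetical interview records: (country, round, interview date).
interviews = [
    ("GB", 1, date(2002, 9, 24)),
    ("GB", 1, date(2003, 2, 4)),
    ("ES", 1, date(2002, 11, 29)),
    ("ES", 1, date(2003, 2, 15)),
]

# Group interview dates by country-round.
dates_by_round = defaultdict(list)
for cntry, rnd, inwdate in interviews:
    dates_by_round[(cntry, rnd)].append(inwdate)

# Survey start/end for each country-round = earliest/latest interview.
field_periods = {key: (min(ds), max(ds)) for key, ds in dates_by_round.items()}
```

These derived start and end dates then replace the published 'fieldwork period' dates in any analysis that positions events relative to the survey window.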

REMOVAL OF DUPLICATE EVENTS

We deleted a total of 207 duplicate entries in the events file. Many of these were the result of multiple reports of a single, on-going event. Such duplication appeared throughout the data, so we adopted the convention of keeping only a single report of each event in all cases. Other duplicates had no obvious reason for being repeated and were also removed.
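The keep-first convention can be sketched as follows, with invented event records for illustration. Matching here is on exact description text within a country; the actual review also caught near-duplicate wordings that simple matching would miss.

```python
# Hypothetical event records: (country, description, report date).
events = [
    ("GB", "Rail strike continues", "2008-06-01"),
    ("GB", "Rail strike continues", "2008-06-08"),  # repeat of an on-going event
    ("ES", "General election held", "2008-03-09"),
]

# Keep only the first report of each event within a country.
seen = set()
deduped = []
for cntry, desc, start in events:
    if (cntry, desc) not in seen:
        seen.add((cntry, desc))
        deduped.append((cntry, desc, start))
```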

Table A2 Generalized Media Effects: Predicting Significant Effects (t>|1.96|)

Table A3 Generalized Media Effects: Predicting (Absolute Value) Coefficient Size


Cite this article

Pollock, W., Barabas, J., Jerit, J. et al. Studying media events in the European Social Surveys across research designs, countries, time, issues, and outcomes. Eur Polit Sci 14, 394–421 (2015). https://doi.org/10.1057/eps.2015.67
