Abstract
Career grants are an important instrument for selecting and stimulating the next generation of leading researchers. Earlier research has mainly focused on the relation between past performance and success. In this study we investigate how the selection process itself takes place. More specifically, we examine which quality dimensions (of the proposal, of the researcher, and of societal relevance) dominate, and which phases of the process (peer review, committee review, interview) dominate the evaluation. Finally, we investigate whether differences between disciplines are visible. The analysis of our data set, consisting of the reviews of 898 grant applications, shows that talent has multiple dimensions and is therefore not self-evident. The evaluation of talent was found to be contextual, although the differences between disciplines were small. Unlike the interviews with the applicants, the external peer reviews were found to hardly influence decision-making on grant allocation. The notion of talent was least evident in the social sciences and humanities.
Notes
For a more elaborate literature review on the process of grant reviewing and group decision-making, see van Arensbergen et al. (2012) and Olbrecht and Bornmann (2010). For an elaborate review of peer review, including the reviewing of scientific articles, see Bornmann (2011).
More precisely, this is the quality, innovative nature and academic impact of the proposed research.
This is the intended societal, technological, economic, cultural or policy-related use of the knowledge to be developed over a period of 5–10 years.
From 2011 onwards, the Research Impact score will no longer be left out when it is detrimental to the total committee score, but will always be included in the calculation of the total committee score.
These are the following divisions: (1) Earth and life sciences; (2) Chemistry; (3) Mathematics, computer science and astronomy; (4) Physics; (5) Technical sciences; (6) Medical sciences; (7) Social sciences; (8) Humanities. About 7% of the applications are cross-divisional.
We aggregated the scientific dimensions to domain level as follows: SSH=social sciences and humanities; STE=chemistry, mathematics, computer science and astronomy, physics, and technical sciences; LMS=earth and life sciences, and medical sciences.
Note that the applications are sent to different external reviewers, so each reviewer is generally involved in the evaluation of only a single application.
Since the data set is characterized by a large number of tied ranks, we use Kendall's tau instead of Spearman's rho.
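The choice described in this note can be illustrated with a minimal sketch (not based on the paper's data): SciPy's `kendalltau` computes the tau-b variant, which explicitly corrects for ties in both rankings, whereas Spearman's rho only averages tied ranks. The two score vectors below are hypothetical review scores with heavy ties, invented for illustration.

```python
# Hypothetical review scores with many tied ranks (not the paper's data).
from scipy.stats import kendalltau, spearmanr

reviewer_scores = [2, 2, 3, 3, 3, 4, 5, 5, 6, 7]   # many ties
committee_scores = [2, 3, 3, 3, 4, 4, 5, 6, 6, 7]  # many ties

# kendalltau returns the tau-b statistic, which corrects for ties
# in both rankings; spearmanr only assigns tied ranks their average.
tau, tau_p = kendalltau(reviewer_scores, committee_scores)
rho, rho_p = spearmanr(reviewer_scores, committee_scores)

print(f"Kendall tau-b: {tau:.3f}")
print(f"Spearman rho:  {rho:.3f}")
```

Because every pair in this toy data is either concordant or tied, both statistics are positive, but the tie correction built into tau-b is what makes it the safer choice for heavily tied ordinal scores.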
Note that here, too, lower scores correspond with higher numbers.
As the following results show, the stepwise method eliminates two variables since they do not contribute significantly to the model.
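Backward stepwise elimination of the kind this note describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual model: the predictor names, the data, and the 0.05 threshold are all invented, and the outcome is constructed so that only two of four predictors carry signal.

```python
# Hypothetical sketch of backward stepwise elimination by p-value.
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Fit OLS and return two-sided t-test p-values per coefficient."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # coefficient covariance
    t = beta / np.sqrt(np.diag(cov))
    return 2 * stats.t.sf(np.abs(t), df=n - k)

rng = np.random.default_rng(42)
n = 300
data = {name: rng.normal(size=n)
        for name in ["proposal", "researcher", "impact", "interview"]}
# Outcome driven by only two of the four (invented) predictors.
y = 2.0 * data["researcher"] + 1.5 * data["interview"] + rng.normal(size=n)

remaining = list(data)
while remaining:
    X = np.column_stack([np.ones(n)] + [data[c] for c in remaining])
    pvals = ols_pvalues(X, y)[1:]             # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] <= 0.05:
        break                                 # all predictors significant
    remaining.pop(worst)                      # drop the weakest predictor

print("retained:", remaining)
```

Run on this synthetic data, the loop discards the noise-only predictors one at a time and stops once every remaining variable contributes significantly, which mirrors how a stepwise procedure can end up eliminating variables from the model.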
This is in line with the findings by Hodgson (1995), and contrasts with the findings of Bornmann et al. (2008).
References
Addis, E. and Brouns, M. (eds) (2004) Gender and Excellence in the Making, Bruxelles: European Commission.
Auriol, L. (2010) ‘Careers of doctorate holders: employment and mobility patterns’, OECD Science, Technology and Industry Working Papers, Paris: OECD.
Baron-Cohen, S. (1998) ‘Superiority on the embedded figures test in autism and in normal males: evidence of an “innate talent”?’ Behavioral and Brain Sciences 21 (3): 408–409.
Bazeley, P. (1998) ‘Peer review and panel decisions in the assessment of Australian research council project grant applicants: what counts in a highly competitive context?’ Higher Education 35 (4): 435–452.
Bornmann, L. (2008) ‘Scientific peer review. An analysis of the peer review process from the perspective of sociology of science theories’, Human Architecture: Journal of Sociology of Self-Knowledge 6 (2): 23–38.
Bornmann, L. (2011) ‘Scientific peer review’, Annual Review of Information Science and Technology 45: 199–245.
Bornmann, L., Leydesdorff, L. and van den Besselaar, P. (2010) ‘A meta-evaluation of scientific research proposals: different ways of comparing rejected to awarded applications’, Journal of Informetrics 4 (3): 211–220.
Bornmann, L., Mutz, R. and Daniel, H.D. (2008) ‘Latent Markov modeling applied to grant peer review’, Journal of Informetrics 2 (3): 217–228.
Busse, T.V. and Mansfield, R.S. (1984) ‘Selected personality-traits and achievement in male scientists’, Journal of Psychology 116 (1): 117–131.
Cicchetti, D.V. (1991) ‘The reliability of peer-review for manuscript and grant submissions — a cross-disciplinary investigation’, Behavioral and Brain Sciences 14 (1): 119–134.
Cole, S., Cole, J.R. and Simon, G.A. (1981) ‘Chance and consensus in peer review’, Science 214 (4523): 881–886.
de Grande, H., de Boyser, K. and van Rossem, R. (2010) Carrièrepaden van doctoraathouders in België. Loopbaanpatronen naar wetenschapsgebied, Gent: University of Gent.
Eisenhart, M. (2002) ‘The paradox of peer review: admitting too much or allowing too little?’ Research in Science Education 32 (2): 241–255.
Ericsson, K.A., Roring, R.W. and Nandagopal, K. (2007) ‘Giftedness and evidence for reproducibly superior performance: an account based on the expert performance framework’, High Ability Studies 18 (1): 3–56.
European Commission. (2010) Europe 2020 strategy, http://ec.europa.eu/europe2020, accessed 25 May 2012.
Feist, G.J. (1998) ‘A meta-analysis of personality in scientific and artistic creativity’, Personality and Social Psychology Review 2 (4): 290–309.
Feist, G.J. and Barron, F.X. (2003) ‘Predicting creativity from early to late adulthood: intellect, potential, and personality’, Journal of Research in Personality 37 (2): 62–88.
Gross, M.U.M. (1993) ‘Nurturing the Talents of Exceptionally Gifted Individuals’, in K.A. Heller, F.J. Mönks and A.H. Passow (eds.) International Handbook of Research and Development of Giftedness and Talent, London: Routledge, pp. 473–490.
Hemlin, S. (1993) ‘Scientific quality in the eyes of the scientist — a questionnaire study’, Scientometrics 27 (1): 3–18.
Hemlin, S. (2009) ‘Peer review agreement or peer review disagreement. Which is better?’ Journal of Psychology of Science and Technology 2 (1): 5–12.
Hodgson, C. (1995) ‘Evaluation of cardiovascular grant-in-aid applications by peer-review — influence of internal and external reviewers and committees’, Canadian Journal of Cardiology 11 (10): 864–868.
Hornbostel, S., Bohmer, S., Klingsporn, B., Neufeld, J. and von Ins, M. (2009) ‘Funding of young scientist and scientific excellence’, Scientometrics 79 (1): 171–190.
Howe, M.J.A., Davidson, J.W. and Sloboda, J.A. (1998) ‘Innate talents: reality or myth?’ Behavioral and Brain Sciences 21 (3): 399–407.
Huisman, J., de Weert, E. and Bartelse, J. (2002) ‘Academic careers from a European perspective — the declining desirability of the faculty position’, Journal of Higher Education 73 (1): 141–160.
Jayasinghe, U.W., Marsh, H.W. and Bond, N. (2003) ‘A multilevel cross-classified modelling approach to peer review of grant proposals: the effects of assessor and researcher attributes on assessor ratings’, Journal of the Royal Statistical Society Series A (Statistics in Society) 166 (3): 279–300.
Knorr-Cetina, K. (1981) The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science, Oxford: Pergamon Press.
Lamont, M. (2009) How Professors Think. Inside the Curious World of Academic Judgment, Cambridge: Harvard University Press.
Langfeldt, L. (2001) ‘The decision-making constraints and processes of grant peer review, and their effects on the review outcome’, Social Studies of Science 31 (6): 820–841.
Langfeldt, L. and Kyvik, S. (2011) ‘Researchers as evaluators: tasks, tensions and politics’, Higher Education 62 (2): 199–212.
Langfeldt, L., Stensaker, B., Harvey, L., Huisman, J. and Westerheijden, D.F. (2010) ‘The role of peer review in Norwegian quality assurance: potential consequences for excellence and diversity’, Higher Education 59 (4): 391–405.
Laudel, G. (2006) ‘The “quality myth”: promoting and hindering conditions for acquiring research funds’, Higher Education 52 (3): 375–403.
Lepori, B., van den Besselaar, P., Dinges, M., Potì, B., Reale, E., Slipersaeter, S., Theves, J. and van der Meulen, B. (2007) ‘Indicators for comparative analysis of public project funding. Concepts, implementation and evaluation’, Research Evaluation 16 (4): 243–255.
Marsh, H.W., Jayasinghe, U.W. and Bond, N.W. (2008) ‘Improving the peer-review process for grant applications. Reliability, validity, bias, and generalizability’, American Psychologist 63 (3): 160–168.
Melin, G. and Danell, R. (2006) ‘The top eight percent: development of approved and rejected applicants for a prestigious grant in Sweden’, Science and Public Policy 33 (10): 702–712.
Merton, R.K. (1973 [1942]) ‘The Normative Structure of Science’, in R.K. Merton (ed.) The Sociology of Science: Theoretical and Empirical Investigations, Chicago: University of Chicago Press.
Olbrecht, M. and Bornmann, L. (2010) ‘Panel peer review of grant applications: what do we know from research in social psychology on judgment and decision-making in groups?’ Research Evaluation 19 (4): 293–304.
Porter, A.L. and Rossini, F.A. (1985) ‘Peer review of interdisciplinary research proposals’, Science, Technology & Human Values 10 (3): 33–38.
Sandstrom, U. and Hallsten, M. (2008) ‘Persistent nepotism in peer-review’, Scientometrics 74 (2): 175–189.
Simonton, D.K. (2006) ‘Scientific status of disciplines, individuals, and ideas: empirical analyses of the potential impact of theory’, Review of General Psychology 10 (2): 98–112.
Simonton, D.K. (2008) ‘Scientific talent, training, and performance: intellect, personality, and genetic endowment’, Review of General Psychology 12 (1): 28–46.
Smith, S.R. (2001) ‘The social construction of talent: a defence of justice as reciprocity’, Journal of Political Philosophy 9 (1): 19–37.
van Arensbergen, P., van den Besselaar, P. and van der Weijden, I. (2012) The Selection of Talent as a Group Process, Den Haag: Rathenau Instituut.
van Balen, B. (2010) Op het juiste moment op de juiste plaats. Waarom wetenschappelijk talent een wetenschappelijke carrière volgt, Den Haag: Rathenau Instituut.
van den Besselaar, P. and Leydesdorff, L. (2007) Past Performance as Predictor of Successful Grant Applications, Den Haag: Rathenau Instituut.
van den Besselaar, P. and Leydesdorff, L. (2009) ‘Past performance, peer review, and project selection: a case study in the social and behavioral sciences’, Research Evaluation 18 (4): 273–288.
van Steen, J. (2011) Feiten en cijfers: Overzicht totale onderzoek financiering 2009–2015, Den Haag: Rathenau Instituut.
Viner, N., Powell, P. and Green, R. (2004) ‘Institutionalized biases in the award of research grants: a preliminary analysis revisiting the principle of accumulative advantage’, Research Policy 33 (3): 443–454.
Whitley, R. (2000) The Intellectual and Social Organization of the Sciences, Oxford: Oxford University Press.
Additional information
The study was conducted in collaboration with the Netherlands Organisation for Scientific Research NWO. Cas Maessen, Mariken Elsen and two anonymous reviewers provided useful suggestions for improvement.
Cite this article
van Arensbergen, P., van den Besselaar, P. The Selection of Scientific Talent in the Allocation of Research Grants. High Educ Policy 25, 381–405 (2012). https://doi.org/10.1057/hep.2012.15