“What rules do we play by?” is the question we’ve pursued in our bibliometric study, which has just been published in the renowned journal Research Policy. Given the growing importance of journal rankings in academic performance management, it matters to researchers and managers alike whether certain characteristics of publications become more prevalent the higher a journal is ranked. Our paper examines how tangible and adaptable characteristics of papers vary across journal rating categories and what drives publication in journals at the top of rankings. We build on a bibliometric analysis of more than 85,000 papers published in 168 management and business journals, as rated in 18 popular journal rankings. The results refute some often repeated but rarely substantiated criticisms of journal rankings: contrary to many voices, we find that interdisciplinarity and innovativeness are positively associated with publication in highly ranked journals. In other respects, our results support more critical assumptions, such as a widespread preference for quantitative methods. By providing more evidence on the implicit standards of journal rankings, this study expands our understanding of the intended and unintended incentives they provide and of how to use them responsibly.
The results of an international web-based survey of journal editors in four disciplines were published in Research Evaluation. The article is titled “How innovative are editors? Evidence across journals and disciplines”. Journal editors play a crucial role in the scientific publication system, as they make the final decision on the acceptance or rejection of manuscripts. Some critics, however, suspect that the more innovative a manuscript is, the less likely it is to be accepted for publication. Top-tier journals in particular are accused of rejecting innovative research. As the evidence is only anecdotal, this article empirically examines the demand side for innovative research manuscripts. I assess journal editors’ innovativeness, i.e. their general predisposition toward innovative research manuscripts, and take personal and contextual factors into account as antecedents. I differentiate the concept of innovativeness in research by distinguishing three dimensions: innovativeness in terms of research problems, theoretical approaches, and methodological approaches. Drawing on an international web-based survey, this study is based on responses from 866 journal editors. The article sheds light on editors’ inclination toward accepting different forms of innovative research for publication. Overall, the findings indicate that individual characteristics, such as editorial risk-taking or long-term orientation, are more decisive for innovativeness than journal-related characteristics. However, editors of older journals turn out to be less open toward new research problems, and there is a U-shaped relationship between a journal’s rating score and editors’ willingness to adopt new theoretical approaches. Most surprisingly, editors’ consensus orientation regarding reviewer recommendations is positively associated with methodological innovativeness.
Scientometrics published our article “Editorial governance and journal impact: a study of management and business journals”. It examines how characteristics of editors, in particular the diversity of editorial teams, are related to journal impact. Our sample comprises 2,244 editors who were affiliated with 645 volumes of 138 business and management journals. Results show that multiple editorships and editors’ affiliation with institutions of high reputation are positively related to journal impact, while the length of editors’ terms is negatively associated with impact scores. Surprisingly, we find that the diversity of editorial teams in terms of gender and nationality is largely unrelated to journal impact. Our study extends the scarce knowledge on editorial teams and their relevance to journal impact by integrating different strands of literature and studying several demographic factors simultaneously.
The closing conference of the program “Performances de la recherche en sciences humaines et sociales” (Research Performances in the Humanities and Social Sciences) takes place at the University of Bern from 3 to 4 November 2016.
The conference addresses topics such as the impact and quality of research in the humanities and the social sciences, research excellence and societal usefulness, conveying the quality of research in political arenas, and preserving the diversity of research in times of excellence categories and rankings. The closing conference presents the results of the program and opens the discussion. The conference aims to bring together researchers, project leaders, and important stakeholders of the Swiss higher education landscape.
- Prof. Dr. Shalini Randeria, Rector of the Institute for Human Sciences (IWM) in Vienna and Research Director and Professor of Social Anthropology and Sociology at the Graduate Institute of International and Development Studies (IHEID) in Geneva
- Prof. Dr. Peter Dahler-Larsen, Department of Political Science, University of Copenhagen, former President of European Evaluation Society
A recent study found that students’ evaluations of teaching quality at universities drop by about half a standard deviation when the teacher’s effectiveness in improving students’ performance increases by one standard deviation. In other words: the better the teacher, the worse the evaluation. This is mainly due to the extra effort that good teachers require from their students. The weather also proves to be a determinant of evaluation results. Overall, these findings cast doubt on the evaluation practices of universities, with regard to both current and prospective teachers.
On the 15th and 16th of July, Alfred Kieser, Jetta Frost, Rick Vogel, Fabian Hattke, Jessica Petersen, and I attended the “Governance, Performance & Leadership of Research and Public Organizations” symposium at the Bavarian Academy of Sciences in Munich.
We heard several keynote addresses, inter alia by Margit Osterloh, who lectured on the shortcomings of current assessment procedures in academia and initiated a passionate discussion of the topic, and by John P. A. Ioannidis, who illustrated paradoxes in, and room for improvement of, research evaluations.
The parallel tracks addressed the central topics of the symposium (governance, performance, and leadership) and contained several interesting contributions. Rick Vogel, Alfred Kieser, Fabian Hattke, and Jessica Petersen introduced the audience to the drivers of journals’ ranking success. Jetta Frost, Fabian Hattke, and I showed the audience how the effects of performance measurement in academia can be captured through a multi-dimensional framework of scholars’ organizational identification.
On the whole, it was a really inspiring symposium with a fine lineup of speakers!
As our IndiKon project proceeds, it is splitting up into several sub-projects, one of which is concerned with the hidden drivers of journal rankings. Such rankings are increasingly important elements of performance management systems in higher education. We are now in the midst of gathering data on the composition of editorial boards, and we are truly amazed by the great variety of roles, functions, and bodies in the editorial governance of journals. Here is a selection (making no claim to be exhaustive):
Area Editor, Associate Editor, Associate Editor Board, Associate Editor ex officio, Associate Editor for Reviews, Board of Professionals, Book Review Board, Book Review Editor, Co-Editor, Consulting Editor, Contributing Editor, Coordinating Editor, Copy Editor, Cross-National Studies Editor, Department Editor, Deputy Editor, Editor, Editor Elect, Editor Emeritus, Editor-at-Large, Editorial Advisor, Editorial Advisory Board, Editorial Assistant, Editorial Board, Editorial Coordinator, Editorial Manager, Editorial Review Board, Editor-in-Chief, Editor-in-Chief Elect, Executive Board, Executive Director, Executive Editor, Executive Editorial Board, Feature Editor, Former Editor, Former Editor-in-Chief, Founding Editor, General Editor, Graphics Editor, Honorary Editor, Incubator, Joint Editor, Managing Editor, Managing Editor Emeritus, Manuscript Editor, Past Editor, Past Editor-in-Chief, Point-Counterpoint Editor, Policy Board, Product Editor, Production Coordinator, Production Editor, Production Manager, Regional Assistant Editors, Regional Editor, Reviewing Editor, Section Editor, Senior Advisory Board, Senior Associate Editor, Senior Editor, Special Adviser, Special Editor, Special Projects Manager, Subject Area Associate Editor, Technical Editor, Web Editor.
With some delay after Ferdinand’s report, here they are: our impressions from the EGOS colloquium in Rotterdam, Netherlands. IndiKon’s Alfred Kieser (Zeppelin University) co-chaired the track “Universities in Unsettled Times: Effects of Evaluations, Accreditations and Rankings” together with Richard Whitley (University of Manchester) and Lars Engwall (Uppsala University). The sub-theme’s program covered a broad range of topics relating to current reforms in higher education systems around the world. We presented our paper “What makes journals highly ranked?” and received valuable comments from the other participants. The positive feedback encourages us to continue this sub-project with a broader database and new bibliometric indicators. Jessica Petersen (Zeppelin University) and Fabian Hattke (University of Hamburg) will join us in our efforts.
One of the most striking insights I had in the course of the three days was the following: Much is written about the impact of performance management systems in higher education on the professional identities of academics, and many authors argue that the new managerialism in universities is a serious threat to academic identities. However, as Richard Whitley and others argued, performance management systems may also strengthen the commitment of scholars to their professional communities. If performance measurement is based on rankings, elements of peer control within scientific communities become incorporated into output-based control systems of universities. Since rankings reflect the stated or revealed preferences of scholarly groups (i.e., invisible colleges), their professional standards gain in importance for, and are appreciated by, performance appraisals in universities (i.e., visible colleges). This, in turn, may even elevate the professional identities of scholars. For me, this is a nice thought worth examining empirically.
As you might have noticed, our blog has become somewhat quiet recently, but the hot summer is not the (only) reason. During the past weeks, we’ve been busy crafting our papers for this year’s EGOS conference.
As you’ll know from your own experience, the title of a short paper may change significantly as it is developed into a full paper. The story gets a new twist, is expanded or further condensed, theoretical constructs may change, and conceptual frameworks are specified. And sometimes empirical observations just won’t tell the story you expected in the first place.
So here’s the list of our short and full paper titles:
Short: Organizational responses to evaluations, rankings and performance indicators – Evidence from French and German universities. Full: Organizational Responses to Institutional Complexity – Evidence from French and German Universities.
Short: How Rankings Impede Scientific Progress. Full: What Makes Journals Highly Ranked? A Bibliometric Analysis of Management and Organization Studies.
Short: University commons: An empirical analysis of collective resources in German universities. Full: Organizing Collective Action? An Empirical Analysis of Common Goods in German Universities.
Steven Ward has published a great comment on the trend to evaluate, rank, audit, and assess every inch of scientific life: “Academic assessment gone mad.” Hilarious!
Our joint projects are represented at this year’s EGOS conference in Rotterdam. Here’s a short summary of our activities:
Alfred Kieser is chairing a track together with Lars Engwall and Richard Whitley on “Universities in Unsettled Times: Effects of Evaluations, Accreditations and Rankings” (see this post for the cfp).
A research collaboration that started at last year’s EURAM conference has led to its first results. Markus, Ferdinand, and I are co-authoring a paper with Anne Riviere and Marie Boitier (both Toulouse Business School) entitled “Organizational responses to evaluations, rankings and performance indicators – evidence from French and German universities”.
Rick and Alfred will examine “How rankings impede scientific progress” – a major concern of both scientists and policy makers.
Together with Jetta and Steffen, I’ll elaborate our theoretical reasoning on university commons using empirical data: “University commons: An empirical analysis of collective resources in German universities”.
Last but not least, Markus and Ferdinand will present their paper “From institutional contradictions to organizational transformation: The case of a university merger” in Sub-theme 23: Public Sector Reforms and Organizational Responses: Comparing Universities and Hospitals.
So, plenty of opportunities to meet, discuss, work, laugh & chat @ Rotterdam! Hope to see you there.
This year’s annual VHB WK Organisation workshop was held at the historic Friedrich Schiller University in Jena. Around 100 organization researchers and management scholars from Germany, Switzerland, and Austria gathered in the city where Alexander von Humboldt and Friedrich Schiller had their disputes on the natural sciences and the humanities in the late 18th century. In this blog entry, I’ll report on two aspects of the overall program, with its many great presentations, interesting talks, and inspiring discussions.
First, in two panel discussions, the WK Organisation harshly criticized the (mis)use of rankings for tenure decisions and the allocation of resources. Important rankings, such as the VHB journal ranking Jourqual, fail to meet scientific standards of validity and reliability. It was claimed that the very attempt to measure research activity does nothing but standardize thinking and thereby hinders the emergence of radical innovations – which is a core purpose of research. Besides, rankings are widely deemed inappropriate for a multidisciplinary field like organization research: they compare apples and oranges where multiple paradigms and methods are developed and applied to a variety of contexts and problems. It remains to be seen how the VHB will react to these strong arguments and whether the discussion spreads to other commissions in a similar vein.
Second, our article on a “micro-foundation of leadership, governance and management in universities” was well received. The shift away from the analysis of legal frameworks toward actual (or, in our case, documented) governing behavior sheds light on how macro-orders of new public management translate into communicative relations between governing bodies at the micro level. Leadership, governance, and management shed their conflicting logics and form complementary patterns as they unfold in the linguistic practices of agenda building, critical reflection, devising, and debriefing to bring forth organizational change. Both the current political debate and future research might benefit from this analysis.