The results of an international web-based survey of journal editors in four disciplines were published in Research Evaluation. The article is titled “How innovative are editors?: evidence across journals and disciplines”. Journal editors play a crucial role in the scientific publication system, as they make the final decision on whether a manuscript is accepted or rejected. Some critics, however, suspect that the more innovative a manuscript is, the less likely it is to be accepted for publication. Top-tier journals in particular are accused of rejecting innovative research. As the evidence so far is only anecdotal, this article empirically examines the demand side for innovative research manuscripts. I assess journal editors’ innovativeness, i.e. their general predisposition toward innovative research manuscripts. As antecedents of innovativeness, personal and contextual factors are taken into account. I differentiate the concept of innovativeness in research by distinguishing three dimensions: innovativeness in terms of research problems, theoretical approaches, and methodological approaches. Drawing on an international web-based survey, this study is based on the responses of 866 journal editors. The article sheds light on editors’ inclination toward accepting different forms of innovative research for publication. Overall, findings indicate that individual characteristics, such as editorial risk-taking or long-term orientation, are more decisive for innovativeness than journal-related characteristics. However, editors of older journals turn out to be less open toward new research problems, and there is a U-shaped relationship between a journal’s rating score and an editor’s willingness to adopt new theoretical approaches. Most surprisingly, editors’ consensus orientation regarding reviewer recommendations is positively associated with methodological innovativeness.
I just came across this beautiful website on which Northeastern University’s Barabasi Lab created an interactive visualization of Roberta Sinatra and colleagues’ paper “Quantifying the Evolution of Individual Scientific Impact”. The paper argues that scientists’ most impactful publications (as measured by citations) could occur at any point in their career. The authors base their argument on a large-scale bibliographic dataset containing publications of more than 10,000 scientists in different disciplines.
The visualization looks like a life-line for an individual scientist’s work (picture on the left) or an ocean with wave peaks and valleys for whole disciplines (picture on the right). It is striking how the overall disciplinary patterns look very similar – at least when they are corrected for absolute citation counts. The authors found that impact is randomly distributed within a scientist’s career. So, if you haven’t published an impactful paper yet, don’t give up – it might just be the next one.
Pictures are screenshots from the website: http://scienceofsuccess.barabasilab.com/
There’s also a short clip about the project on YouTube:
I know, publication bias is not a new topic, but it is still highly relevant. I found some very interesting results in a study by Annie Franco, Neil Malhotra, and Gabor Simonovits published in Science (19 Sep 2014, Vol. 345, Issue 6203, pp. 1502-1505): Publication bias in the social sciences: Unlocking the file drawer. According to the authors, “only 10 out of 48 null results were published, whereas 56 out of 91 studies with strongly significant results made it into a journal.” The following figure summarizes the results:
The pattern is quite remarkable. The majority of evidence that does not support a hypothesized relationship is not even written up in the first place. So there is reason to doubt that special platforms or journals that publish papers with contrary findings – a remedy regularly discussed for overcoming publication bias – will significantly increase the number of published null results.
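To make the asymmetry in those counts explicit, here is a quick back-of-the-envelope calculation, using only the figures quoted above:

```python
# Counts reported by Franco, Malhotra & Simonovits (2014)
null_published, null_total = 10, 48
strong_published, strong_total = 56, 91

null_rate = null_published / null_total        # share of null results published
strong_rate = strong_published / strong_total  # share of strong results published

print(f"Null results published:   {null_rate:.1%}")
print(f"Strong results published: {strong_rate:.1%}")
print(f"Ratio: {strong_rate / null_rate:.1f}x")
```

Roughly one in five null results makes it into a journal, compared to about three in five strongly significant results.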
The publication bias in science, and the underappreciation of replication studies, has made it into late night entertainment on TV. Watch the episode of Last Week Tonight with John Oliver here.
As reported by the Korea Herald, South Korea is facing an academic scandal. Prosecutors suspect that 200 professors and several employees of academic publishers are involved in a huge copyright-violation scheme. Professors allegedly changed the covers of existing books that were authored by other scholars and published them under their own names. Most of the original authors seem to have had no idea what was going on; others are accused of having participated in the fraud in return for financial compensation. Investigations suggest that the scholars tried to boost their academic profiles ahead of rehiring assessments.
If the accusations turn out to be true, this would raise serious concerns about the quality-control mechanisms in scholarly publishing and about HRM practices. It also seems to provide a rich case for studying the dynamics of corruption in academia.
In his post, Rick already reported on the problem of fake science in predatory journals. After being contacted by a dubious publisher, I came across a blog on scholarly open access that provides a list of predatory publishers. The blog seems pretty reliable to me, so it may be worth checking the comprehensive list in case you have doubts about the trustworthiness of a publication opportunity. It also provides excellent information about open access in general.
Reproducibility is a core value of research. An open collaboration of more than 100 authors has recently conducted replications of empirical studies published in psychology journals and found that the replication effects are considerably weaker than the original effects. While 97% of the original studies had significant results, only 36% of the replications did. The authors conclude that journal reviewers and editors may reject replication studies as unoriginal and prefer innovative studies instead. However, “innovation points out paths that are possible; replication points out paths that are likely; progress relies on both.” Read the full paper in Science here.
Although transparency, openness, and reproducibility are core values of science, the academic reward system does not sufficiently incentivize such practices. In the present reward system, excessive emphasis on innovation and the neglect of negative and null findings may undermine practices that support verification and replication. The Transparency and Openness Promotion (TOP) Committee has now released eight standards to make journals’ publication procedures and policies more open. You can download these guidelines here and read the full article in Science Magazine here.
Research collaborations are becoming increasingly important in contemporary science. An article from the CERN research institute, recently published in Nature, probably sets a record: it is titled “Observation of the rare Bs0 →µ+µ− decay from the combined analysis of CMS and LHCb data” and lists over 3,000 authors (yes, in words: three thousand).
According to the disclaimer: “All authors have contributed to the publication, being variously involved in the design and the construction of the detectors, in writing software, calibrating sub-systems, operating the detectors and acquiring data and finally analysing the processed data.” Well, congratulations to this piece of work!
As reported in my last post, we’re currently analyzing the keywords of articles in organization and management journals. I used Wordle to provide a first visual summary for our blog. The “top 500” wordcloud revealed hardly any methods. The new wordcloud below is based exclusively on keywords with methodological classifications and shows the “top 250” terms. It seems stochastic models and regression analyses are the most frequently indexed methods.
Our current bibliometric study includes over 85,000 articles from approx. 170 management journals published between 2000 and 2013. The wordcloud below shows the top 500 keywords used in these articles. In the end, it looks like studying organization and management is all about performance. Related concepts, e.g. firm performance or competitive advantage, are also among the top 500. Another important subject comprises keywords such as knowledge, information, technology, and innovation. Behavioral constructs, e.g. trust, satisfaction, or commitment, are also among the most visible attributes of articles, although of less significance. Methods are hardly indexed in the top 500 wordcloud.
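For the technically curious: the tally behind such a wordcloud is essentially a keyword frequency count. A minimal sketch of that step (the record layout and the semicolon separator are assumptions for illustration; our actual data preparation differs in detail):

```python
from collections import Counter

# Hypothetical sample records; the real dataset holds ~85,000 articles
articles = [
    {"keywords": "firm performance; competitive advantage; innovation"},
    {"keywords": "knowledge; innovation; trust"},
    {"keywords": "firm performance; commitment; satisfaction"},
]

counts = Counter()
for article in articles:
    # split the semicolon-separated keyword field and normalize case
    for kw in article["keywords"].split(";"):
        counts[kw.strip().lower()] += 1

# the most common terms feed the wordcloud, scaled by frequency
for term, n in counts.most_common(3):
    print(term, n)
```

The wordcloud software then simply scales each term’s font size by its count.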
Peer reviewing is a widespread procedure for evaluating the quality of scholarly manuscripts before publication. As such, it is at the heart of academia. Yet, little is known about how scholars perceive the peer-review process.
Today, we initiated our survey into the peer-review system from the perspective of those who submit papers to academic journals. The survey asks questions about general attitudes towards your job, your personal experiences with peer reviews, and possible alternatives to the common pre-publication peer review process. We’ll keep you posted on the results!
While the future of scholarly communication is still written in the stars, the present publication system reveals its shady sides: In “predatory” journals, activists who pursue political or commercial goals circumvent peer review and publish fake evidence on scientific or pseudo-scientific issues (e.g., reporting alien sightings, denying global warming, or promoting untested medicines) in return for payment of a publication fee. The problem: Once published, the articles get indexed in Google Scholar and thus flow freely into the communication process in science and beyond. Tom Spears reports on this problem in the Ottawa Citizen.
The body of available management literature has grown considerably over the past years. A search in Thomson Reuters’ ISI Web of Knowledge shows 19,143 articles published in 2013 in the journals listed in the Journal Citation Reports for Business and Management. That means 52.4 new articles were published per day – or one every 27.5 minutes. These numbers have almost doubled since 2003, when “only” 30.9 articles were published per day, and they are more than five times as many as in 1993 (9.7 articles per day).
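The per-day and per-minute figures follow directly from the annual count; a quick sketch of the arithmetic:

```python
articles_2013 = 19_143  # articles found in the 2013 search

per_day = articles_2013 / 365            # articles per day
minutes_per_article = 24 * 60 / per_day  # minutes between two new articles
growth_vs_1993 = per_day / 9.7           # 1993 baseline from the text

print(f"{per_day:.1f} articles per day")
print(f"one every {minutes_per_article:.1f} minutes")
print(f"{growth_vs_1993:.1f}x the 1993 rate")
```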
But how should we interpret these numbers? Is this increase a sign of scientific advancement, as most rankings and performance-management systems assume? Or does it indicate a dysfunctional “inflation” of publications that does not further enlarge the scientific knowledge base?
Either way, the vast majority of publications are rarely recognized (i.e. cited) by the scientific community at all. Of course, it’s not possible for scholars to keep up with the development of management science by reading all published articles. But considering this trend, we should ask ourselves whether there are more effective ways to communicate our scholarly results and to contribute to the development of new knowledge.
Let’s take a closer look at the scholars behind the publications. For that purpose, I’ve inverted the network from the last post. Now, vertices represent authors, and a tie indicates a journal in which both authors have published. Ties are stronger if the two authors wrote articles for several of the same journals. The size of a node represents the number of articles in our database by that particular author.
Again, we can identify our four clusters:
1) Upper left: sociological studies.
2) Upper right: organization and management studies.
3) Lower left: research on higher education management.
4) Lower right: studies on technology transfer and science communication.
Now, a high betweenness centrality of an author indicates boundary-spanning research, i.e. contributions to journals in different discourses. The authors with the highest betweenness are: 1 Cynthia Hardy (1350), 2 Loet Leydesdorff (723), 3 Ase Gornitzka (643), 4 Georg Krücken (627), and 5 Karl Weick (610). If an author has a high degree centrality (i.e. the node has many direct ties), he or she is well connected in terms of journal diversity. In contrast to the betweenness measure, diversity in this case most likely refers to journal diversity within a certain discourse. The top five authors by degree centrality are: 1 Barbara Sporn (27), 2 Royston Greenwood (26), 3 Georg Krücken (24), 4 Dennis Gioia (22), 5 Christopher Hinnings (22).
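For readers who want to compute these measures themselves, the snippet below sketches both centralities on a small toy network using networkx (the graph is made up for illustration; it is not our data). The nodes bridging the two clusters get the highest betweenness, just as boundary-spanning authors do above:

```python
import networkx as nx

# Toy author network: a tie means two authors published in the same journal
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),  # one dense discourse
    ("D", "E"), ("D", "F"), ("E", "F"),  # another discourse
    ("C", "D"),                          # boundary-spanning tie
])

# raw counts of shortest paths passing through each node
betweenness = nx.betweenness_centrality(G, normalized=False)
degree = dict(G.degree())  # number of direct ties per node

# C and D bridge the two clusters, so they score highest on betweenness
for node in sorted(G.nodes):
    print(node, betweenness[node], degree[node])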
Of course, the results shown here are limited in terms of generalizability. The measures reflect only a part of the authors’ work and should not be interpreted as an indicator of performance. Besides, the network is based on only 68.6% (more like 40%, since I’ve removed quite a few isolates) of all the publications that happened to find their way into our database, which, obviously, is pre-selected by subjective preferences for specific theories, contexts, and methods. However, the networks provide useful information about the foundations of our work at a glance. I wonder what the actual collaboration network looks like…
Over the past three years, we have gathered and reviewed literature related to higher education governance and the organization of knowledge work. Now it is time to open the black box and take a look at a good part of the research that our work is based on. In the RePort project, we’ve used Mendeley as a collaborative tool, which helped us consolidate publications from the Leuphana and Hamburg teams.
Almost 1,400 authors contributed to 930 studies (avg. 1.5 authors per publication). The most common form of publication is journal articles (68.6%), followed by chapters in edited volumes (14.5%). Monographs (7.2%), working papers (4.8%), project reports (1.8%), dissertations (1.7%), and conference proceedings (1.4%) are of less importance.
The figure displays a network of related journals in our database. The journals are shown as nodes (a node’s size depends on the number of articles in the respective journal). The authors are displayed as ties between the nodes: two journals are connected if at least one author contributed an article to both. The tie strength reflects how many authors have published in both journals. The resulting network shows denser clusters of strongly interrelated journals as well as structural holes where no author connects the journals. We identify four clusters (three large and one small) in the network. They can be interpreted as outlets for four distinct groups of scholars.
1) Upper right: studies on technology transfer and science communication.
2) Upper left: research on higher education management.
3) Lower right: organization and management studies.
4) Lower left: sociological studies.
The most important journals in terms of betweenness centrality (i.e. the number of shortest paths from all nodes to all others that pass through that node) are: Academy of Management Review (401), Higher Education (327), Research Policy (227). These journals attract scholars from different discourses.
Note that the network covers only a small part of the database, since it contains only journal publications. Besides, a threshold of two connecting authors was set: journals linked to another journal in the database by only one author, and journals without any connecting author (“isolates”), were eliminated for better visualization.
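For those interested in reproducing this kind of figure: the journal network can be built as a bipartite author-journal graph and then projected onto the journals, e.g. with networkx. The records below are hypothetical stand-ins; the real database differs:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical (author, journal) contribution records
contributions = [
    ("Author1", "Higher Education"), ("Author1", "Research Policy"),
    ("Author2", "Research Policy"), ("Author2", "Academy of Management Review"),
    ("Author3", "Higher Education"), ("Author3", "Research Policy"),
    ("Author4", "Minerva"),  # shares no journal -> isolate after projection
]

# Build the bipartite author-journal graph
B = nx.Graph()
authors = {a for a, _ in contributions}
journals = {j for _, j in contributions}
B.add_nodes_from(authors, bipartite=0)
B.add_nodes_from(journals, bipartite=1)
B.add_edges_from(contributions)

# Project onto journals: edge weight = number of shared authors
J = bipartite.weighted_projected_graph(B, journals)

# Drop isolates, as in the figure
J.remove_nodes_from(list(nx.isolates(J)))
for u, v, data in J.edges(data=True):
    print(u, "--", v, "shared authors:", data["weight"])
```

The edge weights of the projection correspond to the tie strengths in the figure, and `nx.betweenness_centrality(J)` would yield the kind of scores reported for Academy of Management Review, Higher Education, and Research Policy above.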