Publication Productivity in Research in Higher Education and The Journal of Higher Education, 1995-2005
By John R. Rachal, Kyna Shelley, and William W. V. David
Journal publication productivity has for several decades been a commonly used index of program quality within numerous fields. Following this ample precedent, articles in each issue of two of the premier journals within the academic discipline of higher education were examined for the 11-year period 1995-2005 inclusive. Utilizing a point system based on authorship order and affiliation, institutions and individuals were ranked by overall publication productivity in Research in Higher Education and The Journal of Higher Education. Institutions were also ranked by productivity in each of the two journals separately. Additionally, data were collected concerning gender as well as academic rank or position of authors in relation to productivity. Findings include UCLA's first position in the overall rankings, considerable variation in the likelihood of collaboration among high-producing faculty, and the greater productivity of full professors as compared to other positions and ranks.

Examining institutional and individual publication productivity in scholarly journals is a commonly used index of institutional quality, influence, and prestige within a discipline. Research in this area has spanned several decades and numerous disciplines: psychology (Cox & Catt, 1977; Howard, Cole, & Maxwell, 1987; Webster, Hall, & Bolen, 1993), reading (Hopkins, 1979; Johns, 1983), computer science (Tinian, 2002), social work (Rothman, Kirk, & Knapp, 2003), law (Ellman, 1983), sociology (Lewis, 1968), gerontology (Rachal, Hemby, & Grubb, 1996), marketing (Clark, 1995; Clark & Hanna, 1986), finance (Heck & Cooley, 1988; Heck, Cooley, & Hubbard, 1986), advertising (Barry, 1990; Henthorne, LaTour, & Loraas, 1998), political science (McCormick & Rice, 2001), journalism (Cole & Bowers, 1973), and criminal justice (Fabianic, 2002). Some of these studies have examined single journals (Clark, 1995; Clark & Hanna, 1986; Heck, Cooley, & Hubbard, 1986; Johns, 1983).
The general field of education has also investigated institutional publication productivity (Rachal, Bromfield-Day, & Gorman, 2000; West, 1978), as has the specific field of adult education (Rachal & Sargent, 1995). Other authors have examined issues related to publication productivity such as the influence of feminism (Hayes, 1992), productivity’s relation to graduate training and graduate students (Blunt & Lee, 1994; McCormick & Rice, 2001), teaching vs. research productivity (Fairweather, 2002), productivity as a function of “intimate academic partnerships” (Creamer, 1999, p. 261), and correlates of productivity (Teodorescu, 2000).
As an index of program quality, publication in premier research journals has the advantage of being quantifiable based on clear criteria that are independent of institutionally unique criteria concerning, for example, teaching, criteria that may vary from institution to institution. Productivity studies allow comparisons among programs within institutions that are not subject to impressionistic assessments based on general institutional reputation. Certainly such general institutional reputation assessments are made and have value, based on subjective criteria such as teaching reputation, general environment, and student friendliness, as well as such measurable criteria as selectivity, endowments, grant procurement, and faculty-student ratios. But general institutional reputation assessments do not necessarily reflect individual program quality, nor do they focus on research. The rationale for the long-standing and cross-disciplinary interest specifically in journal publication productivity studies is that they are a particularly useful and quantifiable measure of an academic program's engagement in research. While books, presentations, and grants are also useful measures of research activity, the refereed academic journal is perhaps the sine qua non of scholarship, representing a discipline's most current thought, its newest findings, and critique of its established paradigms. This is not to say that journal publication productivity studies are above criticism. Indeed, caution in reviewing their results is not only wise but necessary, in view of the fact that by definition they ignore excluded journals as well as all books and grants, not to mention more subjective factors in the research process such as the quality of collaborative mentoring or collegiality vs. competitiveness.
Add to such provisos these studies’ tendency to reward programs with larger numbers of faculty as well as programs with one or two research “stars.” With such caveats in mind, however, journal publication productivity studies provide a very useful barometer of a program’s research excellence, and thus one important factor of its overall excellence.
The methodology of the present study followed the typical approach of many previous studies: to examine each issue of the selected journals during a selected time period, to record various information about each major article within each issue, and to award points to institutions based on authorship order and author's institutional affiliation for each article. Thus we examined two of the premier journals in higher education, The Journal of Higher Education and Research in Higher Education, for the 11-year period 1995-2005 inclusive. We recorded in Excel format for each full article the following: the journal name, year, volume, and issue number; the author's name, institution, gender, rank, and unit affiliation; and country of origin. Only full-length articles were included; shorter works such as book reviews, editorials, replies, and responses were excluded. Points were allocated using the formula utilized by Rachal and Sargent (1995): (a) one institution, 1.0 point; (b) two institutions, .6 point for first, .4 for second; (c) three institutions, .5, .3, .2; (d) four institutions, .40, .26, .19, .15; (e) five institutions, .37, .22, .18, .14, .09. (For individual author productivity, the same point values are allocated to individual authors instead of institutions.) Following Rachal and Sargent, articles having six or more institutions had the point equally divided among them, on the assumption that with large numbers of authors the labor of authorship is sufficiently divided that distinctions are minimized. Similarly, for the rare articles in which authorship was stated to be alphabetical without priority, the point was also divided equally. With these two exceptions, any particular author position receives a smaller fraction of the point the more authors there are: a second author in a three-author article receives less (.3 of a point) than a second author in a two-author work (.4 of a point).
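The allocation rules above can be sketched in code. The following is a minimal illustration, not the authors' actual instrument; the function and variable names are our own, but the point values follow the Rachal and Sargent (1995) schedule exactly as described.

```python
# Fractional shares of one point by authorship position, for articles
# with one to five contributing institutions (Rachal & Sargent, 1995).
POINT_SCHEDULE = {
    1: [1.0],
    2: [0.6, 0.4],
    3: [0.5, 0.3, 0.2],
    4: [0.40, 0.26, 0.19, 0.15],
    5: [0.37, 0.22, 0.18, 0.14, 0.09],
}

def allocate_points(n_institutions, alphabetical=False):
    """Return the share of one point awarded to each authorship position.

    Articles with six or more institutions, and articles whose authorship
    is stated to be alphabetical without priority, split the point equally.
    """
    if alphabetical or n_institutions >= 6:
        return [1.0 / n_institutions] * n_institutions
    return POINT_SCHEDULE[n_institutions]
```

For example, `allocate_points(3)` yields [0.5, 0.3, 0.2], so a second author in a three-author article receives .3 of a point, versus .4 for a second author in a two-author work. Each schedule sums to exactly 1.0, so every article distributes one full point.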
We define a “full equivalent article” as synonymous with the number of points earned, such that 5.0 points would be five full equivalent articles, regardless of the total number of discrete articles that an institution or individual may have contributed to. Thus a school might have, for example, 5 full equivalent articles, but due to authorship order reflected in the above allocation system, that school might have 13 total discrete articles.
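The distinction between full equivalent articles and discrete articles can be made concrete with a small aggregation sketch. This is our own illustration, with invented sample records, of how per-article point shares could roll up into the two totals reported in Table 1.

```python
from collections import defaultdict

# Each record is (institution, point share earned on that article),
# e.g. 0.4 for the second of two contributing institutions.
records = [
    ("School A", 1.0),   # sole-institution article
    ("School A", 0.4),   # second of two institutions
    ("School B", 0.6),   # first of two institutions
    ("School A", 0.3),   # second of three institutions
]

points = defaultdict(float)   # full equivalent articles (summed shares)
discrete = defaultdict(int)   # discrete articles contributed to

for school, share in records:
    points[school] += share
    discrete[school] += 1

# School A earns 1.7 full equivalent articles across 3 discrete articles;
# School B earns 0.6 across 1.
```

The gap between the two measures is exactly the phenomenon described above: heavy participation as a junior co-author inflates the discrete count far faster than the full-equivalent total.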
The present authors made the difficult decision to record institutional affiliation precisely as it was presented in the journal rather than try to interpret what city or campus a given article was from when that information was not provided. Since there was no objective way to discern with certainty the article's specific provenance in such cases, campuses and potentially entire universities that are part of a state system might be collapsed into one; for example, when no city identification was provided for The State University of New York, of necessity we treated that as one institution; but when the information provided in the journal indicated SUNY-Buffalo, we treated that as a separate institution. This also occurred with Indiana University at Bloomington, which was fourth in the overall ranking, and Indiana University (with no city affiliation given), which tied for 39th position. Consideration was given to searching the author and trying to establish a city identification, but the potential for author mobility within the 11-year period created the possibility of overt errors on our part. Thus our data reflect the provenance information exactly as it was presented in the two journals.
Data concerning institutional productivity were examined collectively and broken out by journal. Data concerning individual author productivity are combined for both journals. The method of data collection also allowed examination of trends concerning single-authorship vs. collaborative authorship, unit or program affiliation distribution, seniority and rank distribution, and gender distribution.
For the 11-year period, Research in Higher Education published 226 full articles written by 318 unduplicated authors (i.e., the author's name is included only once, despite that author possibly having contributed to numerous articles), while The Journal of Higher Education published 304 with 426 unduplicated authors. Table 1 displays the overall rank order of institutions based on the points derived from both journals, the total number of discrete articles identified with each institution, the total number of authors contributing to the points, and the total number of first authors contributing to the institution's points. In both of these latter author columns, a single individual might have contributed to three articles, and thus his or her name would count three times; i.e., a single individual's name might be duplicated multiple times. Table 1 displays all institutions that received three full points or higher based on authorship in both The Journal of Higher Education and Research in Higher Education, with the University of California-Los Angeles, Pennsylvania State, University of Michigan, Indiana University-Bloomington, Michigan State, University of Arizona, University of Southern California, Stanford, University of Kansas, and University of Maryland-College Park in the first 10 positions respectively. At one (1.0) point per full equivalent article, UCLA earned 20.73 points, producing a mean of slightly under two full equivalent articles per year in the two journals over the 11-year period; and 42 authors (including probable duplications) participated in the production of UCLA's 28 total discrete articles, 19 of which had a UCLA author as lead author. Maryland, in 10th position, produced a total of 8.3 points and a mean of three-fourths of a full equivalent article for each of the 11 years, and 14 authors participated in the production of 10 total discrete articles, all 10 of which had lead authors from Maryland.
Tables 2 and 3 reflect the same information broken out by journal, with Table 2 displaying productivity in The Journal of Higher Education and Table 3 displaying productivity in Research in Higher Education. The two tables include all institutions scoring a total of at least 2 points in the journal covered by that table. Six institutions (UCLA, Indiana University-Bloomington, University of Michigan, Michigan State, Penn State, and University of Arizona) published in both journals sufficiently to place in the top ten of both. North Carolina State University as well as the Universities of Kansas, Minnesota, and Massachusetts placed in the top ten of Research in Higher Education; and the University of Southern California, Vanderbilt, Stanford, and Ohio State were among the top 10 in The Journal of Higher Education.
Table 4 displays publication productivity by individual author and includes all authors with 2 full points or more. Thus Adriana Kezar earned 6.8 total points from 8 separate publications, and Amaury Nora earned 2.032 from 9 publications. Table 5 reports gender and position of the authors. Combining the two journals, 549 or 56.7% of the total authors (including duplicated names who contributed to more than one article, such that Kezar is listed 8 times) are male, 409 or 42.3% are female, and 1% are of unknown gender. As for position, 33.7% of the authors are full professors, 11.8% are graduate students, 10.6% are administrators, and the remainder are distributed among other positions.
Examining publication productivity in premier journals has been a practice of many disciplines over several decades, yet the field of higher education as an academic discipline has not heretofore undertaken such an examination of two of its premier American journals, The Journal of Higher Education and Research in Higher Education. Such productivity studies are by no means immune to criticism: most notably, they are only one measure of program quality, ignoring such other measures as teaching, grant procurement, resources, and collegiality, as well as other forms of research productivity, for example, book production. Nevertheless, publication productivity studies should not be dismissed as mere counting exercises; journal publication is often considered the primary criterion in such academic processes as tenure and promotion deliberations. Indeed, frequency of publication sometimes seems to surpass examination of the quality of the publications in such deliberations. Nevertheless, quantity and quality are inextricably related when publications are in the best journals: quality is assessed in the initial peer review process leading to acceptance and publication, and intuitively the more an individual or institution has of such publications (quantity), the higher the overall prestige (quality). Who, for example, would choose to have two high-quality works if, all else being equal, she could have 20?
Our findings reveal that 41 institutions (three tied at rank 39) published three or more articles in the two journals in the 11-year period and that nine published over 10. Given the fact that both journals are American and deal with American higher education themes, it is not surprising that all the top 41 institutions are in the United States. Some caveats to the rankings are in order, however. Clearly program size should not be ignored when reviewing the rankings. Since faculty cannot be assumed to stay at one institution, and since the number of faculty can be assumed to rise or fall within a given program over the course of the 11 years, a ranking based on publications per faculty member in each program was not feasible. Nevertheless, a per-faculty-member ranking would surely have altered the rankings; the methodology used here of necessity tends to lower the rankings of smaller programs. All of the top 10 institutions are large universities that likely have fairly large education programs, as well as significant pressure to publish. Another potential caveat relative to the rankings is the potential of a “star” publisher to improve a program's rank. Lastly, as noted in the Method, for a few institutions, program identification was not always clear; for example, the University of Maryland-College Park was in 10th place overall, and the University of Maryland (without further identification) was 18th. Were these the same? If so, that would have moved Maryland into sixth position. But due to the inconsistency in whether authors identified their institutions as main campuses or satellite campuses, we followed the identifying information precisely as it was given. For three or four institutions in our top 42, this could have altered the rankings.
Higher education, as an academic field of study, seems to be a field in which collaboration is more the norm than in a field such as English, but less the norm than in, for example, psychology. Rachal and Sargent (1995) found that over 80% of the articles in several major psychology journals were collaborations, as contrasted with a recent scan of three years of PMLA, a highly esteemed American journal in English literature, which showed that only about 15% of its articles were collaborations. Our results found 49% of articles were single authored overall (52% in Research in Higher Education and 47% in Journal of Higher Education) and 51% were collaborations, with the majority of those being two-author works. One possible explanation is that the soft sciences, including education, may tend to distribute the workload among those who do data collection, literature review, and statistical analysis, whereas liberal arts authors tend to explore a topic from beginning to end on their own. This pattern certainly applies to one of the authors of the present study, who never co-authored in his liberal arts career, but frequently co-authored in his education career. Individual authors within our study also varied among each other concerning collaboration: the individual receiving the highest number of points (6.8) contributed to eight articles, suggesting very little collaboration; but the individual at the 42nd position with 2.032 points contributed to nine, suggesting considerable collaboration. Out of the top 42 individual producers, only three people did not collaborate at least once; by contrast, two people collaborated enough to contribute to 13 different articles.
Unlike other academic disciplines, whose major research journals are filled with articles from within that discipline, higher education is unusual, if not quite unique, in publishing on topics of interest across the academic landscape. Whereas chemistry journals publish articles from chemistry researchers, higher education is the common thread for all kinds of academic disciplines, resulting in 37% of Research in Higher Education articles coming from authors not in the academic field of education, and a full majority (52%) of The Journal of Higher Education authors not identifying education as their professional association. Such diversity no doubt reflects, in part, the fact that most themes covered in the journals transcend academic department borders: themes such as pedagogy, financial matters, policy, faculty issues, and administration.
Although the data in Table 5 are reasonably self-explanatory, the issue of “position” deserves some comment. Almost 70% of all authors (duplicated) identified themselves as either assistant, associate, or full professors, possibly a little lower than expected. A full third (33.7%) of all the authors identified themselves as full professors, somewhat dispelling the notion that tenured full professors might rest on their laurels, while concomitantly reinforcing the idea that full professors deserve their rank in view of their higher publication rates. As a corollary, assistant professors (20.6%) out-produced associate professors (15.1%), reinforcing the notion of the former as hungry seekers of tenure. Indeed, graduate students comprised 11.8% of the authors, not far behind associate professors, likely suggesting publication of their dissertation work and possibly their pursuit of faculty positions in academe, not to mention that graduate students outnumber professors. Nor do those identifying themselves as administrators fail to publish: 10.6% did so in addition to their administrative duties, closely following graduate students. No doubt many of these administrators hold academic rank and continue to work in their original academic role, that of professor in a particular academic discipline. Of course it should be noted, in looking at all of the position percentages, that it is impossible to know how many submissions are made relative to any given position. For example, is the fact that full professors have a higher percentage of the total articles than other groups the result of their simply submitting more manuscripts than other groups? Similarly, it is impossible to know whether the number of actual publications is proportional to the number of submissions for any given position; for example, do full professors have a higher acceptance rate than other groups?

Conclusion
Research remains to be done in the area of research productivity and its correlates, including such areas as other forms of productivity (e.g., grant procurement, book authorship, conference presentations); potential connections between research productivity and teaching effectiveness; more in-depth work on collaboration patterns and rationales; publication productivity's connections to reward and dismissal systems; connections to program continuation, termination, and funding decisions; and even decision-making of graduate students in program selection. While journal publication productivity may well interest administrators and even graduate students, almost certainly the most interested constituency is the researchers themselves. Barry (1990, p. 53), summarizing the values that others have placed on such studies, cites Heck and Cooley's (1988) observation that “productivity analyses of academic journals help to codify the contributions of a discipline, to illustrate that discipline's maturation, and to provide for the evaluation and setting of standards for scholarly output”; further, they “can sharpen the output of both the scholars and the discipline as a whole” (Cole & Bowers, 1973, as cited in Barry) and finally, they can serve as “an appropriate surrogate for the ‘currentness’ of a department's faculty” (Clark, 1986, as cited in Barry). As the academic discipline of higher education has evolved since its first courses in 1893, publication productivity studies may serve not only such purposes as these, but they may also allow reflection on the field's maturation.
Barry, T. E. (1990). Publication productivity in the three leading U.S. advertising journals: Inaugural issues through 1988. Journal of Advertising, 19, 52-60.
Blunt, A. & Lee, J. (1994). The contribution of graduate student research to Adult Education/Adult Education Quarterly, 1969-1988. Adult Education Quarterly, 44, 125-144.
Clark, G. L. (1986). Leading marketing departments in the United States: Who is publishing where, and how much are they publishing? American Marketing Association, Chicago Conference, AMA Educators’ Proceedings, 149-153.
Clark, G. L. (1995). An analysis of the sources of articles in the Journal of Marketing Education: An update. Journal of Marketing Education, 17, 25-33.
Clark, G. L. & Hanna, N. (1986). An analysis of the source of articles appearing in the Journal of Marketing Education since its founding in 1979. Journal of Marketing Education, 8, 71-74.
Cole, R. R. & Bowers, T. A. (1973). Research article productivity of U. S. journalism faculties. Journalism Quarterly, 50, 246-254.
Cox, W. M. & Catt, V. (1977). Productivity ratings of graduate programs in psychology based on publication in the journals of the American Psychological Association. American Psychologist, 32, 793-813.
Creamer, E. G. (1999). Knowledge production, publication productivity, and intimate academic partnerships. The Journal of Higher Education, 70, 261-277.
Ellman, I. M. (1983). A comparison of law faculty production in leading law reviews. Journal of Legal Education, 33, 681-692.
Fabianic, D. (2002). Publication productivity of criminal justice faculty in criminal justice journals. Journal of Criminal Justice, 30, 549-559.
Fairweather, J. S. (2002). The mythologies of faculty productivity: Implications for institutional policy and decision making. The Faculty in the New Millennium [Special issue]. The Journal of Higher Education 73, 26-48.
Hayes, E. (1992). The impact of feminism on adult education publications: An analysis of British and American journals. International Journal of Lifelong Education, 11, 125-138.
Heck, L. J., & Cooley, P. L. (1988). Most frequent contributors to the finance literature. Financial Management, 17, 100-108.
Heck, L. J., Cooley, P. L., & Hubbard, C. M. (1986). Contributing authors and institutions to the Journal of Finance: 1946-1985. Journal of Finance, 41, 1129-1140.
Henthorne, T. L., LaTour, M. S., & Loraas, T. (1998). Publication productivity in the three leading U.S. advertising journals: 1989-1996. Journal of Advertising, 27, 53-64.
Hopkins, C. J. (1979). Productivity ratings of institutions based on publication in reading journals: 1972-1978. Journal of Reading Behavior, 11, 171-181.
Howard, G. S., Cole, D. A., & Maxwell, S. E. (1987). Research productivity in psychology based on publication in the journals of the American Psychological Association. American Psychologist, 42, 975-986.
Johns, J. L. (1983). A study of institutional productivity in “Reading World”: 1978-1983. DeKalb: Northern Illinois University. (ERIC Document Reproduction Service No. ED 248 490)
Lewis, L. S. (1968). On subjective and objective rankings of sociology departments. American Sociologist, 3, 129-131.
McCormick, J. M. & Rice, T. W. (2001). Graduate training and research productivity in the 1990s: A look at who publishes. PS: Political Science and Politics, 4, 675-680.
Rachal, J. R., Bromfield-Day, D., & Gorman, C. L. (2000). Institutional publication productivity in selected educational research journals, 1988-1997. Educational Research Quarterly, 24, 3-19.
Rachal, J. R., Hemby, K. V., & Grubb, R. E. (1996). Institutional publication productivity in selected gerontology journals, 1984-1993. Educational Gerontology, 22, 281-291.
Rachal, J. R. & Sargent, S. F. (1995). Publication productivity in North American institutions in selected adult education journals, 1983-1992. Adult Education Quarterly, 45, 63-78.
Rothman, J., Kirk, S. A., & Knapp, H. (2003). Reputation and publication productivity among social work researchers. Social Work Research, 27, 105-116.
Teodorescu, D. (2000). Correlates of faculty publication productivity: A cross-national analysis. Higher Education, 39, 201-223.
Tinian, G. (2002). An exploratory study of Malaysian publication productivity in computer science and information technology. Journal of the American Society for Information Science and Technology, 53, 974-986.
Webster, R. E., Hall, C. W., & Bolen, L. M. (1993). Publication productivity in selected school psychology journals: 1985-1991. Psychology in the Schools, 30, 136-142.
West, C. K. (1978). Productivity ratings of institutions based on publication in the journals of the American Educational Research Association: 1970-1976. Educational Researcher, 7(2), 13-14.
John R. Rachal Kyna Shelley William W. V. David
The University of Southern Mississippi
Copyright Educational Research Quarterly Jun 2008