Trust the Process? The Trials of Academic Publishing
And why Substack is oh so much more fun.
Note: As a special treat to ring in the new year, this post is longer than usual and indulges my cranky side. If you’re not interested in a long, salty piece about the trials and tribulations of getting work published in an academic journal, best to skip this one. Also, it starts with a brief but irrelevant discussion of Philadelphia sports. You’ve been warned. Having said that, if you do read it, don’t skip the footnotes. A few of them are funny.
And, curtain…
The expression “trust the process” has a special meaning to Philadelphians. It began with the 76ers, who around 2013 adopted an extreme strategy of deliberately fielding weak teams to accumulate high draft picks.1 This approach meant enduring painful losing seasons in hopes of building a championship contender in the long run. "Trust the Process" encouraged 76ers fans to stay patient through the downturn, having faith that the strategy would eventually lead to success. After many difficult seasons, the 76ers did, in fact, turn things around and became one of the NBA's top teams. Beyond basketball, the mantra spread to other Philadelphia sports franchises that were also focused on rebuilding. It became a city-wide motto for persistence and optimism in the face of short-term setbacks.
Working on this Substack has given me a fresh perspective on “the process.” By that I mean the process of academic publishing. This contrast came to mind because a reader asked me what it would take to turn an idea from this Substack—the function of ticklishness, for example, or awe—into a peer-reviewed article. I know a fair bit about that process, having been an academic for a quarter century, including many years as the Editor-in-Chief of a major scholarly journal.
There’s an enormous difference between the process of writing a Substack post and the process of publishing in an academic journal. Now, I used to blog and publish in scholarly outlets at the same time, but back in those days, I didn’t write about ideas that I thought would be publishable because I didn’t want to risk being “scooped” (i.e., someone else publishing the idea before me). In the academic rat race, priority is important so that one is properly credited with a new idea.2
While some of our readers are academics, not all are, so this post is designed to shed light on the process of publication in academia, especially in psychology. It’s an answer to the question: what is the process of publishing an idea in a reputable journal, as opposed to letting it loose here on Substack?
Ahem
I suppose I should start with some throat-clearing. Yes, it depends. The effort to publish a paper is not always like cleaning out the Augean Stables. I’m sure there are people who had an idea, gathered some data, and published a paper in a span of months.3 I mean, the process must have been pretty quick to get the so-called Proximal Origins paper into Nature Medicine in March of 2020, just months after the pandemic began. My goal here is to sketch how it all works, generally. I’m sure there are exceptions and I’m sure there are people who are just better at it. Having said that, the process is… excruciating.
The Process
When I was in graduate school, I collected data investigating social categorization. I gathered the data in my second and third years of graduate school, 1993–1995. The paper was eventually published in 2001, six years or so after I had the idea. My advisors sat on the manuscript for half a decade or so, and then the journal it was submitted to accepted it within a few months. This was my introduction to the world of academic publishing.4 (By way of contrast, I had the idea about ticklishness about three weeks before we posted that one, on a very nice hike with llamas in Montana.)
So, how does it work?
Suppose I wanted to write a peer-reviewed paper proposing a new function for one of the sensations I’ve discussed up to this point, awe, ticklishness, the warm fuzzies, contentedness, sadness, or what have you, he said, brazenly advertising some of his prior posts.
The first crucial distinction to bear in mind is between empirical and theoretical papers. A theoretical paper is just what it sounds like and, most importantly, does not require—or sometimes even allow—any new data to be reported.
Now, in economics and in fields such as physics, people are divided into two groups: theorists and empiricists. Both theorists and empiricists are valued, and they publish in their respective outlets.
This is not, however, the case in psychology. In psychology, you can’t be a theorist. You need to have a program of empirical research, though what counts as empirical research is very broad, ranging from, say, single cell recordings to clinical interviews to cross-cultural differences in norms. But you’re not going to get a job or tenure by being a theorist. It simply isn’t done.
Probably for this reason, there are very few outlets that take theory pieces. There are some,5 but, perhaps because there aren’t many, the editors of these theory-driven journals tend to want to publish papers that address the Big Questions in psychology. Now, when a scholar submits a manuscript to a journal, it gets routed to an action editor, who will be the person who generally has the final say about whether the piece is accepted or rejected. The action editor initially takes one of three routes: 1) rejects the paper outright, often referred to as a “desk rejection,” 2) sends the paper out to reviewers for their comments, usually between two and four, though this varies, or 3) accepts the paper as is or with slight revisions. Please note that (3) almost never happens; I had exactly one such case in my career and that paper was tight.
My guess—and it’s only a guess—is that a paper on awe or ticklishness would be seen as too narrow. If the action editor even sent it out for review, and if the reviewers thought it was the best paper on awe of all time (a very big “if” given that academics love to trash their rivals’ work), my bet would still be that the editor would end their note with, “So, while the reviewers found much to like about your manuscript, in the end I have decided not to pursue this manuscript for publication in [MY JOURNAL] and suggest you seek a home for it at a more narrow, specialty journal.” (This is just the way academics talk.)
Basically, yeah, it’s a good paper but it’s not important enough for this journal. And how is “important” defined, you may be wondering? Well, it’s whatever the editor thinks is important. It’s usually the stuff that the editor and the editor’s allies are doing.
I mean, that’s actually fair enough. I’m not saying the editor would be wrong. It’s not that. The issue is that psychology has so few (prestigious) theory journals because the field is obsessed with empirical work. Or, to put it the other way, the field is careless and somewhat indifferent when it comes to theory. The few theory journals that do exist tend to limit what they publish to only the ideas they view as Very Important.
Aside About Theory
Why is there such a thing as a “theorist” in economics but not in psychology?
Good question.
First, I should admit I don’t really know. One possibility is that psychology is consumed with empirical evidence—especially experiments, when possible—because the field is self-conscious about being a real science and craves the ornamentation. Having developed out of philosophy, it worries about being seen as just so much armchair speculation.
Second, my guess is that it has to do with the following facts:
Fact 1: Economics has overarching theoretical frameworks from which most scholars operate. I have in mind here ideas such as scarcity, tradeoffs, supply & demand, rational expectations, the efficient market hypothesis, etc. Of course, that’s a very coarse characterization, but most economists work from one of a small number of frameworks. They all learn the same basic foundational ideas, usually in microeconomics and macroeconomics classes as undergraduates.
Fact 2: Psychology has no such framework.
I mean, name it.6 Sure, there are things that sort of look like “theory” in psychology. Some people call themselves behaviorists. Some people follow the psychoanalytic tradition. Some people say they do social learning theory. But there’s no real edifice in psychology the way there is in economics.7 If you wanted to ensure a psychologist had “the basics” of the field, what would you teach them? How a neuron works?8
I think my former colleague Paul Rozin nailed it when he told me his view of how psychologists, in practice, view theories. He said—I’m paraphrasing from memory, but this is close—“Psychologists think of a theory the same way they think about a toothbrush: everyone has one, but no one wants to use anyone else’s.” There are theories in psychology, but they aren’t widely shared, and they tend to be narrow. The organizing framework that I think is obvious—evolution by natural selection—is eschewed by many psychologists for reasons that, by and large, continue to baffle me. Josh has some ideas.
I think these two facts help to explain why there are no theorists in psychology. The field relies on people planting a flag in some empirical enterprise, tethered to some personal pet theory or no theory at all.
The Road to Empirical Publishing
Ok, so you have an idea and you want to get it out there to publish it, but you know that your odds of getting it into a theory journal are slim. As the theory door closes, the empirical door opens. While there are precious few good theory journals, there is an embarrassment of riches when it comes to journals that will publish empirical work. What you do is gather data, write an empirical piece, and frame the empirics around your theory. Easy peasy, right?
Not so fast. The first thing you’re going to need to gather data—unless you’re someone like Aella, i.e., blissfully and gloriously independent of any institution—is… permission. There are international, federal, and local rules governing data-gathering—which, in principle, is a good thing—and research must be approved by your institutional governing body, usually called an Institutional Review Board, or IRB.
Now, the most important thing to bear in mind about IRBs is that the people on these boards desperately need to justify their salaries. Ok, that’s a bit cynical. Their first duty is, of course, to protect research subjects. However, in most cases, people submitting protocols have done a pretty good job on their proposal since it’s not their first rodeo. A lot of the language is boilerplate—subjects will receive full documentation of the study protocol blah blah blah—and often there is little for the IRB to do. For this reason, people on the IRB are worried that there will be too little work to justify their salaries, so they reliably find something, anything, to object to in the protocol. This hurdle can be particularly steep if your research encroaches into politically fraught territory and might serve to support a narrative that is unpalatable (to the IRB staff).9 In any case, usually, they ask for revisions, the Principal Investigator (PI) revises and resubmits, and this process continues until the IRB feels their job is safe.10
Ok, you’ve burned a semester haggling with the IRB about the font size on the Informed Consent Form.
Now, with some exceptions, if you want to gather data, you’re going to need money. The amount of money varies a lot. If you are a neuroscientist and you use a machine that goes ping, then you need many thousands of dollars for each subject. If you do cross-cultural psychology, you might need tickets to far-off foreign lands and enough money to fund your three months of “work” (on, say, romance tourism) at your field site. If you just gather questionnaire data, you might only need a small number of dollars per subject. If you have chosen your field carefully, you might be able to do “research” using publicly available datasets, in which case you might not need money at all.11
A great deal of grant-writing is directed at federal agencies, and writing these grants takes significant time and effort. There are rules, regulations, specifications, certifications, documentation, and so on. Again, don’t get me wrong. In some respects, this is a good thing. When people receive taxpayer money, they should have to spend it responsibly in a way that comports with the public interest.12
A federal grant is usually a vast amount of work, taking months to assemble. This work is sharply reduced if, as many do, your research is really just an extension of work you have done before. I recall sitting on a grant panel once—millions of dollars were at stake—and the researcher was proposing to do some physiological research with mules. The work was identical to work they had already conducted on horses but, you know… with mules. So, most of the material in the grant was just recycled from the horse work with the word “mule” inserted.13
If you’re proposing a genuinely new line of work, it’s going to take you some time to get the grant written. Usually it’s a process of several months.14
Ok, you’ve had your idea, spent six months wrestling with the IRB, six months writing the grant proposal, and then another six months waiting for the inevitable grant rejection.
Somehow you persevere, secure some funding, and spend a year gathering data.15 You’re now two and a half years down from having your idea. You spend three months doing the statistics and then nine months writing it up.
None of these estimates are crazy. Doing statistics can be very time-consuming, depending on the details of your dataset. Sure, you might be using a simple behavioral method—reaction time or some such—but you might have gathered fMRI data, requiring very sophisticated technical knowledge—and significant time—to analyze correctly. (Or, perhaps, incorrectly.)
And when academics write up papers, it’s a whole thing and can take a while.16 In the best scientific writing, individual words—to say nothing of numbers—matter, so every co-author has to be on board with every syllable, punctuation mark, and subscript. So even if you’re a superfast writer and bang out a draft in a month, you have to send the draft to collaborators who might sit on it for weeks or even months before they return it because they are “busy” with things like teaching, other projects, committee work, or kids.
Now you can submit the manuscript to a prestigious journal. This emphatically is not simply attaching the paper to an email and shooting it off to an editor. Academic publishing companies have created “portals” that will challenge the patience of a saint. Sign in using the password that we changed three months ago and you forgot. The title page, abstract, tables, figures all must be separate files, uploaded using a file uploader from the 90s. Add coauthor information one at a time, manually.17
Now that three or four years have passed since you had your idea, you are happily ensconced in the living hell of manuscript review. Maybe the editor will desk reject your paper. You can then send it out for review to another journal—but remember, it is unethical to submit to multiple journals at the same time. There are apparently “good reasons” for this, which somehow do not apply when it comes to legal scholarship because you can submit to basically all the law journals at the same time.
Eventually, some editor will probably send your work out for review. If they do, the editor needs to find academics who are experts in the area who are willing to review—for free—your manuscript and provide feedback. Sometimes this process is fast and easy, and takes a week. I have seen it take many months.
The editor has to actually get the reviews from the volunteer reviewers. At my journal, we aimed for six weeks. I have seen review cycles of one year or longer. A large review of thousands of papers in 201318 estimated the average time from submission to publication in the social sciences to be about 14 months, and it has probably gotten worse, not better, since then. But peer review is important because it results in changes to the manuscript that improve it, right? Well, a recent review concluded that “peer review appears to have a relatively small influence on the content of manuscripts.”19
As if this weren’t bad enough, the incentives of editors are often… odd. For example, let’s say a prestigious person—who could benefit the editor’s career—submits a crappy paper to their journal. Is the action editor going to risk damage to their career by rejecting it?20 What about a paper from the editor’s rival? Why wouldn’t the editor deliberately choose hostile reviewers to torpedo the piece?21 What about the reverse, when the paper comes from a friend of the action editor? Who wants to hurt their friend’s career?
Yes, it’s all bad. Bad papers from famous people are published all the time. Good papers from unknowns get the axe. And then there are other kinds of politics. Do papers that advance the currently popular political narrative have a better chance than papers that seem to cut against it? At least in some fields they do. My sense is that the publication bar for narrative-consistent papers is set way lower than the bar for narrative-inconsistent papers. Could it get even worse than that? Even after publication you’re not safe. Papers can be “retracted” by authors or journals, sort of like flashing the device from Men In Black and saying, this article was never published and you know nothing about it… If your paper irritates certain parties, it can be memory holed.
Anyway, if you get the result you hope for—pray for—then the editor will take the reviewers’ comments… and ask you to revise the paper. If the revisions are minor, this could be a few weeks or months. If they are major… another year or more might pass.
As a weird aside, after that whole ordeal, when the publisher sends you the proofs for your corrections and approval… they want the work done in 72 hours, “or the paper will be published in its present state.” (This tight deadline can result in awkward errors.)
In any case, all in, it’s not crazy for a theoretical insight to take 5 or more years to make it to the point at which the paper is finally accepted for publication.
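For the spreadsheet-minded, here is a back-of-the-envelope tally of the stages described above. To be clear, this is only a sketch: the durations are the illustrative figures used in this post (plus the 14-month submission-to-publication average from Björk & Solomon, 2013), not measured data, and the stage names are mine.

```python
# A back-of-the-envelope timeline for one empirical paper, in months.
# Assumption: durations are the illustrative figures from this post,
# not measured data; real projects vary widely.
stages = {
    "haggling with the IRB": 6,
    "writing the grant": 6,
    "waiting for the inevitable grant rejection": 6,
    "gathering data (once funding arrives)": 12,
    "doing the statistics": 3,
    "writing it up with collaborators": 9,
    "submission to publication (Björk & Solomon, 2013 average)": 14,
}

total_months = sum(stages.values())
for stage, months in stages.items():
    print(f"{stage:>58}: {months:>2} months")
print(f"{'total':>58}: {total_months} months (~{total_months / 12:.1f} years)")
```

That comes to 56 months, roughly four and two-thirds years, and that is the happy path: one grant rejection, no desk rejections, no major revisions. Any of those adds months or years, which is how you get to the five-plus figure above.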
You had to get permission, you had to get money, you had to gather data, you had to get interesting results, you had to coordinate with collaborators, you had to get past the gatekeepers… Even then, it is possible that you will be told by the editor that they want to publish the data, but they are skeptical of the “theoretical framing” you are proposing—the whole reason you went through all this in the first place—so please cut all that theoretical stuff out and just include your empirical results.
Or you could just propose the idea in a Substack post.
CODA
So if I’m saying that the publication process is broken, how would I fix it?
I have no earthly clue. My sense is that the process can’t be adjusted with a tweak here and a patch there. My sense is that the process, much like the rest of the academy, has to be burned to the ground and rebuilt. There are some processes that are better than they used to be. Preregistration seems to be a good idea. I could see a world in which scholars post their ideas to a public place where others can critique their theory, methods, and analysis strategy. Then the authors can proceed based on these public discussions. Everything in the open. I mean, are we sure we need journals at this point? We used to need them because scholars didn’t own printing presses to distribute their ideas, and because traveling by carriage and barge from Rome to London to hear about the latest advances in physics was a lengthy and possibly fatal endeavor. Now we have intertubes and such. Maybe scholars post their work, get feedback, and post their results, including the raw data for careful auditing.
Even more promising is adversarial collaboration. This process—see Cory Clark and Phil Tetlock’s work for more on this—asks scholars to articulate their own and others’ arguments about an issue and agree on the process by which the dispute can be arbitrated. My sense is that answering almost any research question would benefit from this approach, but especially those that are relevant to political issues. If nothing else, it would be refreshing to see scholars articulate what pattern of data would cause them to lose confidence in their toothbrush. Too often, my fear is that the answer to this is “none,” making “science” more of a faith-based enterprise.
Another thing I think would help is to make scientific publishing more like legal publishing. I am lucky to have had papers go through the review process at Law Reviews, so I got to see this up close. As I indicated above, one difference is that all the Law Reviews see the paper at the same time because you submit through a shared portal. But more importantly, these journals check every citation in every footnote—which must include a page number—with exquisite care. (In psychology, you can just cite a book and if the reader wants to know where the book says the thing it’s being cited for, well, tough nuggies. Authors only have to cite the book, not the page.) This meticulous process used in legal writing (usually) ensures that the cited work actually supports the claim in the text, something not true in the (social) sciences. Yes, checking citations in this way would add significant time and labor but… isn’t getting things right sort of what science (and each individual scientist) is supposed to care about?
At heart, the reason that this can’t happen and won’t happen is that academics crave status. Established scholars win status for publishing in particular outlets, which are owned by the big publishers, such as Elsevier, and status-craving “scholars” don’t want to give that up. As long as that’s the case, the “status” is going to stay pretty quo.
REFERENCES
Björk, B.-C., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7(4), 914–923.
Ceci, S. J., Peters, D., & Plotkin, J. (1985). Human subjects review, personal values, and the regulation of social science research. American Psychologist, 40(9), 994–1002.
Clark, C. J., Costello, T., Mitchell, G., & Tetlock, P. E. (2022). Keep your enemies close: Adversarial collaborations will improve behavioral science. Journal of Applied Research in Memory and Cognition, 11(1), 1.
Clark, C. J., & Tetlock, P. E. (2022). Adversarial collaboration: The next science reform. In Political bias in psychology: Nature, scope, and solutions. New York: Springer.
Stephen, D. (2022). Peer reviewers equally critique theory, method, and writing, with limited effect on the final content of accepted manuscripts. Scientometrics, 127(6), 3413–3435.
Translation for those who don’t know the rules of U.S. basketball: Teams with worse records in a given season are given better choices among the new players coming into the league in the next season. It’s a way to even teams out.
It doesn’t always work. Everyone has their complaint that “they didn’t cite me!” and here is mine. I published a paper in 2005 with my friend and collaborator Jason Weeden. We had this original idea to use data from speed dating to measure people’s preferences in romantic partners in a context that might not be real life, but was at least outside the academic laboratory, where it’s challenging to study dating and mating. If you enter the search terms “speed dating” into Google Scholar, you get a different paper, from 2008, as the first result. In that work, the authors had the original idea to use data from speed dating to measure people’s preferences in romantic partners in a context that might not be real life, but was at least outside the academic laboratory, where it’s challenging to study dating and mating. Candidly, this is a case where you should hate the game, not the player. The incentive to appear first is big and the punishment for failing to cite work you should credit is basically zero. The rules of the game need to be changed to address this because self-interest pulls strongly in the wrong direction.
Actually, another paper with Weeden was like this. To this day I look back in amazement at the timeline to publish this paper, in 2010.
This was not, in fact, my introduction to academic publishing. I had previously sent another paper, co-authored with Mark Leary, to Psychological Bulletin. I received—by U.S. mail, back in those days—the “good” result, a “revise and resubmit.” (See below on this.) I, however, had never been advised regarding the publishing process and I moped around thinking – damn, they didn’t accept my paper! – and stumbled into a colleague’s office at the University of Arizona, where I was doing a post-doc with economist Vernon Smith. I sulked about the result and my colleague looked at me like I had two heads. He patiently explained to me that the letter I held in my hand was good news—excellent news, given that it was a very good journal—rather than bad news. To this day I think about how confused he must have been by my reaction. As a complete aside, this paper made sort of an appearance in the first episode in the series Hannibal, where the title was rendered “Evolutionary Origins of Social Exclusion.” Thus my 15 indirect seconds of fame.
These include Behavioral and Brain Sciences, Psychological Bulletin, Psychological Review, and Perspectives on Psychological Science.
Readers might guess that my view is that evolutionary theory ought to provide this foundation. In 1998, I thought this would be the predominant view within ten years. In this, it turns out I was mistaken. When I was in college in 1991, my friend Dan showed me something called “Netscape” running on a terminal, a VT100. We could see the text created on something called a “web page” by people at CERN. I told him I didn’t think this was going to take. In this, it turns out I was very slightly mistaken. I think the “internet” and the “web” might just be with us for a while. I wrote a bit about the state of theory in psychology previously and still think that Gerd Gigerenzer had a good perspective on this.
This is frequently one of the first units in an Introductory Psychology class.
Ceci, Peters, & Plotkin, 1985.
I’m addressing this with levity but of course the role of the IRB is important. If you have any friends who have done research at an institution with an IRB, ask them for stories about their frustrations. Everyone has them. I myself have many. When I was at UCLA, the IRB asked me to make some changes to the punctuation in the form I was using for Informed Consent. At Penn, I was asked to change the statistical test I planned to use from the correct one to an incorrect one. I didn’t budge on the latter, but on the former I agreed to make the change despite the fact that it was grammatically incorrect. Choose your battles.
You will probably still ask for some. A dirty little secret in academia is that scholars can ask organizations—especially federal agencies such as the National Institutes of Health and the National Science Foundation—for money to “buy their time.” This is often, but not always, in the form of summer salary. (Many top institutions hire academics on a 9-month salary.) Faculty are at liberty—and strongly encouraged, if not practically forced—to seek summer funding for the other three months. Because of the miracle of “overhead,” when an academic gets, say, $10,000 for their salary, the granting organization pays an additional $5,000 or more of “overhead” to the institution. This is why Deans encourage faculty to write grants: naked avarice. If the overhead the institution received really were equal to the costs the institution bore in doing the work on the grant, the institution would be indifferent to getting additional grants. They are, emphatically, not. So, if you want to dig into a dataset during the summer, you ask the taxpayer to pay for your time to do so, even though you would do it without the money, since you need to publish to avoid perishing and all that.
No amount of administrative checks can really do this. Staggering amounts of grant money are wasted on ill-considered research that will ultimately prove to be of no value to the country or humanity. See, for instance, Jesse Singal’s book, The Quick Fix, for a small sampling of how the field has gone wrong, especially the eye-popping amount spent on building—or not building—resilience in the military. More recently, my former colleague Phil Tetlock explained what a waste the work on implicit bias has proven to be: “We have squandered a tremendous amount of time, money, and opportunities on implicit bias work … The supply of future opportunities may likewise be affected, for retraction of the unconscious bias story may have far-reaching reputational effects on social psychology...” The federal apparatus to fund science is not designed to purchase excellent science.
The cut and paste became obvious because in a few places the applicant had forgotten to replace the word “horse” with “mule.” The grant was recommended for funding by the panel.
Many academics use their time on airplanes to write grants. I’ve never been exactly sure why this is. It might be because 1) planes are—or at least used to be—one of the few places where you can’t be disturbed, making the time precious, and 2) academics use this precious time to do the only thing that can increase their income: writing grants.
I mean, it’s almost certainly your post-docs and graduate students doing the actual data collection.
Most of the problems in academic publishing derive from incentives. The publishing companies have no incentive to make the portals friendly for users. What are you going to do, not submit to the journal whose prestige you need? They know this. (Also, to be honest, these portals have improved. But slowly.)
Björk & Solomon, 2013.
Stephen, 2022.
I once rejected a paper submitted by one of my all-time academic heroes, Robert Trivers. Rejecting the piece was difficult but, in my view, the correct decision. We subsequently saw one another and he gave me a big hug, so I guess all was forgiven. It helped that I subsequently wrote a favorable review of his book, which I genuinely thought was excellent. For the record, Trivers remains one of my all-time academic heroes. Ricardo Lopes recently interviewed him.
Another trick is to send a paper out to more than the usual number of reviewers. Editors can hide behind just one negative review if they want to reject a paper for these sorts of political reasons. Four reviewers is considered a lot in my field. I once submitted a paper and got eight reviews back. The piece was criticizing the work of a prominent—and therefore powerful—person in the field. The reviews were overall positive, but the editor rejected it anyway. I sent the paper elsewhere, where it was accepted, and it has since attracted more than 400 citations, which is quite good. The modal number of citations of papers in psychology? Zero.