Crowdsourcing Best Practices for Experimental Journals: Transparency

Updated September 16: I am grateful to JDH for publishing these posts along with various responses. Soon after my posts were published, JDH approached Roopika and me to ask if we would approve publishing all the correspondence, with identifying information redacted. We asked our contributors for their opinions, and as a group came to the decision that making all the correspondence public would fracture our field further.

We offered to co-publish a post with JDH on best practices and lessons learned from experimental forms of publication, but the offer was declined. However, we remain hopeful that the option to work with JDH on a co-authored piece may become available later. Thank you all for weighing in.

On Thursday, I published a controversial blog post in which I described some problems Roopika Risam (@roopikarisam) and I experienced with the Journal of Digital Humanities, which practices a form of “post-publication” peer review, in which material that has been curated on the Internet is selected for more formal release. In this post, I focus on the difficult issue of transparency in new forms of peer review and offer several suggestions for best practices.

The problems with the customary academic peer review process are well known. The actual “blindness” of the review is often in question, especially in small fields; peer review adds an unnecessary layer that delays the release of research to the public; reviewing serves a gatekeeping function that often replicates conservative ideas rather than encouraging new paradigms; and editorial decisions made by peer-reviewed journals are often less than transparent.

For these reasons, transparency is important for any editorial process to build trust between editors and writers. Transparency means making the process of working with the journal clear and open, which includes policies such as:

  1. the conditions for publication,

  2. the time frame for submissions and developmental editing,

  3. feedback and expectations for revision.

By being transparent about policies, editors and writers begin with a shared understanding of expectations that helps manage conflicts which may arise later. Transparency is especially important for experimental journals because they may employ different review processes for different issues. However, if the new process is not clarified and agreed upon between editor and writer (or editor and special issue editors), these journals run the risk of creating conflict between editors and their community, as Ernesto Priego (@ernestopriego) has pointed out.

Improving a journal’s transparency involves many difficult and interconnected questions. What defines “merit” in a field, particularly in a growing one with new paradigms competing to displace the old? How do editors function as gatekeepers, and what is their responsibility to be cognizant of their power? What are the politics involved in final editorial decisions? What are the advantages and disadvantages to allowing these decisions to rest with a contingent junior faculty member or a tenured senior faculty member? How do we account for the emotionally charged issues of racial, gendered, sexual and able-bodied structural privilege in editorial decisions about “merit”?

I discuss several ideas for creating more openness and transparency in experimental review processes below. Some of these are my own and some have already been offered by Roopika Risam (@roopikarisam) and Scott Weingart (@scott_bot).

  1. Self-Moderation Models for a DH Journal

    1. Digital Humanities Now currently draws its content from groups of volunteer editors who bookmark current work in DH, but the final decision about what ends up on DHNow lies with an editor at the journal. What if a digital humanities journal or curation service borrowed from aggregator models such as Slashdot or Reddit? Both services take content nominated by individuals, which other users then vote up or down. Slashdot grants users moderator access through a points system: individual moderators receive influence points and spend one each time they moderate a comment. Moderators are not allowed to participate in a discussion which they are moderating. Slashdot also has an interesting karma system, in which users earn points when their contributions are rated intelligent, informative, helpful, or even amusing.

    2. Given the inclination of the digital humanities to be open, how could the field borrow from these systems to create an online digital humanities journal that is both crowdsourced and moderated by the crowd? How could it deal with efforts to “game” these systems of moderation to get posts upvoted or downvoted? Is popularity a measure of quality? How can we make this distinction? (A minimal code sketch of such a points-and-karma system appears after this list.)

  2. Making the Experimental Process of a Journal Transparent (by Roopika Risam)

    1. Step 1: Tell editors the journal has an experimental approach to publication.

    2. Step 2: Propose a reasonable timeline for the experiment to play out.

    3. Step 3: Ask editors if they are interested in being involved with an experimental process and get their consent in advance.

    4. In Roopika Risam’s words: “Peer review is, at its heart, based on some degree of trust – trust in the review process, trust in our peers, trust that review will improve our work, trust in the feedback our reviewers provide.”

  3. Suggestions to Retain Flexibility but Improve Transparency (by Scott Weingart)

    1. Create a definitive set of guidelines/mission statement, with options for yearly amendments; guidelines should not be revised more frequently than that.

    2. Hold discussions with guest editors at the start about the process, which will then serve as a binding contract.

    3. Improve openness of journal and article selection.
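
To make the aggregator model in the first suggestion concrete, here is a minimal sketch of a Slashdot-style points-and-karma system. It is an illustration under stated assumptions: the names (User, Submission, grant_mod_points, moderate) and the specific point values are mine, not Slashdot’s actual mechanics or the API of any existing journal platform.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    karma: int = 0       # rises and falls as the community rates this user's work
    mod_points: int = 0  # influence points, spent one at a time when moderating

@dataclass
class Submission:
    author: User
    title: str
    score: int = 0
    participants: set = field(default_factory=set)

    def comment(self, user: User) -> None:
        """Record that a user has joined this discussion."""
        self.participants.add(user.name)

def grant_mod_points(user: User, points: int = 5) -> None:
    """Periodically hand users in good standing a small pool of influence points."""
    if user.karma >= 0:
        user.mod_points += points

def moderate(moderator: User, submission: Submission, up: bool) -> bool:
    """Spend one influence point to vote a submission up or down."""
    if moderator.mod_points <= 0:
        return False  # no influence points left to spend
    if moderator.name in submission.participants:
        return False  # rule from the post: no moderating a discussion you joined
    if moderator.name == submission.author.name:
        return False  # no moderating your own work
    moderator.mod_points -= 1
    submission.score += 1 if up else -1
    submission.author.karma += 1 if up else -1  # karma tracks community judgment
    return True

# Usage: a reader with points upvotes a nominated post.
alice, bob = User("alice", karma=3), User("bob")
post = Submission(author=bob, title="A nominated blog post")
grant_mod_points(alice)
print(moderate(alice, post, up=True), post.score, bob.karma)  # True 1 1
```

The sketch keeps the two constraints described above: moderation costs an influence point, and a moderator cannot weigh in on a discussion they have participated in.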

As we have seen, finding viable ways to experiment with new forms of peer review is important and much needed. But so are greater transparency and opening up gatekeeping to a larger community. What suggestions would you offer to experimental journals?


10 Responses

  1. Adeline, this is great. Can I point you toward the MediaCommons/NYU Press study of open review practices? Our “conclusion,” such as it was, is precisely about this kind of transparency: not that any one set of practices will be “best,” but that a publication, and a community of practice, needs to ask itself a lot of questions in order to determine what its preferred way of working will be, and why — and then to communicate that way of working, and stick with it. I hope that report might be of some use here.

    Kathleen Fitzpatrick August 31, 2013 at 10:03 am #
  2. Thanks for this. I believe transparency should be embraced by all journals, not only so-called “experimental” ones. It seems to me (I say this once again) that the problem is not the experimental nature of various forms of peer review, but the lack of updated, clear guidelines. If I know a journal does not take unsolicited manuscripts, I will not complain when they reject my unsolicited submission. I may disagree with it as an editorial practice, but then I have the freedom to choose to submit to a different journal. Clear public guidelines detailing the editorial workflow of the journal or platform allow authors to make educated decisions and to be aware of what they can and cannot expect. It is also essential for publishers, reviewers and editors to maintain an editorial vision and to ensure the journal is the best it can be by sticking to those guidelines.

    A glance at the Committee on Publication Ethics’ available e-learning modules suggests that most are focused on the author (the author should not falsify, should not plagiarise, should not fabricate, should declare conflicts of interest, etc.). I’d like to see COPE focus on the editor and publisher side of things: how to solve conflicts between authors, editors and publishers, how to engage in various forms of peer review, best practices in author-journal communications, how to offer feedback, etc.

    “Transparency” is easier said than done. It takes time and requires a lot of patient work behind the scenes, amongst the members of the editorial board and the publishers, even before transparency itself becomes publicly noticeable, so to speak.

    It is, indeed, one of the key best practices for any kind of journal. Clear, publicly available editorial guidelines (and this means updating them whenever there is a change, with enough time for people to catch up with these changes) allow authors, reviewers, editors, publishers and readers to know what to expect.

    This documentation (that can exist in both role-specific internal form and general scope for public consumption) should be the key reference to go to whenever something strays from the declared workflow. If there is no clear documentation detailing these guidelines and editorial workflow, how can anyone ever know what to expect, or when the guidelines were not followed?

    Ernesto Priego August 31, 2013 at 10:24 am #
  3. To piggyback on Ernesto’s comment as well as Scott’s suggestion for fixed guidelines as quoted by Adeline above, I’d like to get Marxist(ish) here for a moment and just point out that in almost any context, and despite how much openness to new ideas or “freedom of speech” a community or society professes, the bottom line is that the words or ideas that get “published” are the ones that come from people who have access to the means of production. And in most normal circumstances, that’s also where the power is found.

    Over the last century we’ve seen a dramatic rise in the availability of affordable means of publishing production, from typewriters to photocopy machines to desktop publishing to the Internet. But a ’zine was never The New York Times, and a blog is neither a published monograph with an ISBN, an LC call number, and a page at Amazon.com, nor a peer-reviewed journal indexed by major aggregators, scholarly research libraries, and Web of Science. As long as a larger set of institutions, practices, and expectations surrounds the sharing of scholarly work through publication, journals that successfully vet, select, edit and publish scholarly work will, if they are validated by acceptance within their community of practice, have power.

    I’m not even saying power is necessarily a bad thing (if we didn’t share language and basic expectations we couldn’t communicate!), just that it is always there and that we, as a community, need to recognize it and work with it in appropriate and ethical ways.

    In an emerging and sometimes contested field like digital humanities I’d want to think long and hard (to use one of Ernesto’s examples) about creating a scholarly journal that does not take unsolicited submissions: that often has the effect of closing out alternate or dissenting voices as well as first looks at cutting-edge scholarly advancement. I *would* like to see structures that are inherently welcoming to a diversity of people and ideas, and that work to nurture and protect the status and reputation of every person involved. Should we consciously try not to always or only publish the usual suspects? Is there a fair process in place for resolving editorial issues? Are potentially vulnerable authors, and their ideas, treated with the utmost respect by an inherently difficult process? Do editors have the support of an editorial board and/or a strong institutional presence to advise and protect them? This means open and clear policies, but also policies and structures that minimize reliance on the good will or good judgment of individuals.

    Yes, I do trust my colleagues to do the best job they possibly can, as authors, editors, or publishers. What I don’t fully trust are the inherent power structures that adhere in our culture(s), in our language(s), in academia, in institutions, and all the rest. We need to work carefully with and within the structures of power that surround us.

    So, I thank Adeline and Roopika for raising the issues that they have, and look forward to the positive results that will follow from this and related conversations.

    Susan Garfinkel August 31, 2013 at 2:44 pm #
    • I would say, from my position on the boards of some conventional journals, that the Marxist critique of the editorial process may have some validity. But from another position, as one of the editors of one of the oldest fully Gold open-access online journals in the social sciences, it clearly does not. Our journal takes papers from anybody, does not enforce the theme of the journal rigorously, and does not charge authors or readers. We use conventional reviewing; you have to work in the world of the big journals to attract decent papers. But in terms of protecting ‘potentially vulnerable authors’ we go pretty far out there, way beyond what journals usually do, in terms of language and structure assistance. If there is little or no content, however, there is little to be done and there will be rejection.
      On reviewing strategies, Social Geography http://www.social-geography.net/ used post-publication peer review. Unfortunately it failed and has been absorbed/closed as an independent journal. The site is an archive, but you can still see the commentary process. I am not sure the world is ready. Also, as an editor, I am not sure I could handle the complexity of such a process. It is hard enough to sleep at night when production is falling behind for conventionally reviewed articles…
      My golden rule is to stay ten years behind technological innovation in this and other fields. That’s the lag time for the best and worst innovations to work themselves out. Hence my cathode-ray TV, dumb phone, and conventional review process.

      SP September 2, 2013 at 10:44 am #
  4. Thanks for this, Adeline, Roopika, and Scott. I thought your first post was a reasonable and well-articulated critique of a process that went wrong.
    This post goes even a step further in suggesting ways to engender fairness and embrace the conversations that ought rightly to arise when new ideas like #dhpoco come about, especially when those ideas or methods are treated as somewhat “controversial” in one area while they are common practice in others. I think the issue that struck me most forcefully in your first post was the addition of a reviewer who would be “blind” on only one side of the author-reviewer relationship, as the #dhpoco work was already widely available and commented upon online. Whatever the motivations of the editors, be they intentionally discriminatory or not (as someone else pointed out, this is not really the issue, as bias and discrimination often work unconsciously), the relationship set up by a review that is blind on only one side reads as a lack of investment or honesty in the post-publication model at best, and as a discriminatory and biased move to squelch trenchant critique of an ostensibly “open” and “nice” community at worst.

    Whatever the reasoning, which is no doubt complex, we must have these conversations if DH is going to become a community that is radically open, decentralized, and a model for shifting the way we do work in the humanities, values I think many people in this conversation share. This piece and your previous one add an important thread to the conversation, as does #dhpoco itself. I hope we can have an honest conversation about power and gatekeeping in DH, and you and Roopika are to be commended for speaking up. Thank you, again.

    Rebecca Harris (@HRH_QueenB) September 1, 2013 at 10:26 pm #
    Excellent ideas & topic, thanks for writing this. I like the Slashdot/Reddit angle. You may know there’ve been several interesting Reddit + scholarly projects already, like r/scholar for requesting papers, and the Arxaliv Reddit front-end to the arXiv repository (now defunct, I think).

    There’s a good study, or series of them, to be done examining such new moderation structures across popular & academic peer-review systems, especially as new ones keep emerging all the time (like the just-announced Libre platform, new features in third-party commenting networks, etc.).

    But more specifically on the experimental/transparency point, and without intending any judgment about any part of the JDH case, I was thinking about what general patterns appear here, or in the projects/orgs I’ve worked on, which have been all over the map in their processes.

    It seems to me, and seems to be observed by others here, that ‘experimental’ and ‘transparent’ are really separate axes, and it’s helpful not to assume they will coincide, or that one will lead to the other.

    For example, well-established & stable processes can be highly transparent (legislative process in Denmark or Finland, say) or highly opaque (how to get a city contract in Naples?).

    Experimental (or, relatedly, in engineering, ‘agile’) processes can be transparent (open participation, all docs public, etc.), or they might be opaque, or opaque to some points of view, such as an agile team or experimental/skunkworks lab that deliberately operates under less supervision, doesn’t work to pre-approved specifications, etc. In the latter type of case, experimentalism may well be opposed to transparency.

    There are valid reasons why various combinations of experimental or not, transparent or not, might be chosen; but it is better that they be chosen deliberately, and that one knows the arrangement.

    Perhaps among the best ways to combine experimental & transparent is an iterative process. So a journal issue, or a software development cycle, or a year, might be set up with well-understood procedures and goals (as suggested by Scott et al.) but be very revisable for the next iteration. This is, in a sense, transparent within the cycle, and possibly opaque/negotiable across cycles.

    Another idea one hears in software/project land is that a mixture of formal and informal guidance is needed, the latter being based more on factors like shared values, social behaviors, joint experience, & shared environment. These can’t all be spelled out, but evolve organically, and are in a sense non-transparent.

    That’s a reminder of perhaps the first rule of process: do not talk too much about process.


    Tim McCormick
    @tmccormick tjm.org Palo Alto

    Tim McCormick (@tmccormick) September 4, 2013 at 1:34 am #
  6. I think the more conservative preference for ‘blind review’ over any model of ‘open review’ derives, potentially, from the fear of failing to maintain the ‘quality’ of the writing. Obviously, we want credit for our scholarly writing, and we cannot, at least for now, deny that an article published in a ‘blind review’ journal is considered to have more credibility than one that did not go through a ‘blind review’ process. And this is where conventional journals tend to exercise some degree of ‘power’ at the expense of some degree of transparency in the editorial/review process.

    While rethinking the notion of transparency in the world of scholarly ‘peer reviewing’, we also need to keep in mind the general philosophy behind anonymous reviewing, which presumes that anonymity, of both the author and the reviewer, prevents judgments based on an author’s background. Any bias in this process therefore arises, potentially, from the administrative process. So, as emphasized by Ernesto Priego, ensuring transparency in administrative handling is important both for ‘experimental journals’ and for traditional ones.

    While completely agreeing with the points made in the proposed model of the ‘experimental review process’, I feel the urge to integrate the vital strengths of the conventional review process into the more flexible ‘open review’ process. People often hesitate to credit open review because it does not ensure that ‘only the experts’ are involved in the review process (which the ‘blind review’ process is believed to always ensure). Therefore, the affiliation of the reviewer/editor matters.

    We need to think of a model for open review which would tend to ensure this very idea of ‘quality’. The point-based system seems a nice idea; however, the weight of a reviewer’s observations should be higher if they have a stronger affiliation with (i.e., expertise in) the subject concerned. We can also think of an intra-community model, a limited version of the ‘fully open-to-the-crowd’ review: scholarly writings submitted to a journal might be open for review on a platform accessible only to the registered members (people with affiliation) of the community, ensuring a collaborative atmosphere where reviewers can comment on other reviewers’ comments. Though this might not satisfy everyone, models of this kind can minimize the limitations of the traditional review process (time-consuming, less encouraging, feedback from only one expert, etc.) and maximize its benefits (like credibility), so that an ‘open review’ journal does not feel the need for ‘blind reviewing’ any of its articles.
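
    A minimal sketch of the expertise-weighted scoring this comment proposes; the -1 to +1 rating scale, the 0–1 expertise weights, and the function name are hypothetical choices, not features of any existing review platform:

```python
# Expertise-weighted open review: each review is a (rating, weight) pair,
# where the weight reflects the reviewer's affiliation with the subject.
# The scale and weights are illustrative assumptions.

def weighted_score(reviews: list[tuple[int, float]]) -> float:
    """Combine (rating, expertise_weight) pairs into a single score.

    rating: -1 (reject) to +1 (accept)
    expertise_weight: 0.0 (no affiliation) to 1.0 (recognized expert)
    """
    total_weight = sum(w for _, w in reviews)
    if total_weight == 0:
        return 0.0  # no weighted opinions to aggregate
    return sum(r * w for r, w in reviews) / total_weight

# Two subject experts in favor outweigh one low-affiliation rejection:
print(weighted_score([(1, 0.9), (1, 0.8), (-1, 0.2)]))  # ~0.79
```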

    Jahurul Islam November 29, 2013 at 11:33 am #
Trackbacks/Pingbacks
  1. The Journal of Digital Humanities: Post-Publication Review or the Worst of Peer Review? | Adeline Koh - August 31, 2013

    […] those just jumping into the discussion, I’ve published a follow-up blog post here, “Crowdsourcing Best Practices for Experimental Practices: Transparency.” I hope you will join in and offer […]

  2. Can the Digital Humanities Be Decolonized? | Indigenous New England Literature - September 2, 2013

    […] (curiously) only a handful of people posted comments, despite Koh’s express desire, in this and a follow-up post, to stimulate discussion. Unfortunately, LOTS of people weighed in, instead, on Twitter. Most of […]

  3. Bend Until It Breaks: Digital Humanities and Resistance - Hybrid Pedagogy - February 19, 2014

    […] of thing in academic work, but to open up the field of possibilities. Further, we must be open to critique that points out unintended consequences, and be wary of the “old wine in new bottles” problem in which forms that seem innovative at […]