More Hack, Less Yack?: Modularity, Theory and Habitus in the Digital Humanities

One of the most prevalent debates within the Digital Humanities (DH) is the idea that practitioners should just go about doing rather than talking, or should practice “more hack, less yack.” In other words, instead of pontificating and problematizing, DH scholars should be more concerned with making stuff, and making stuff happen. The “more hack, less yack” mantra has been around for a while now, and has brushed up against some challenges: notably in Natalia Cecire’s (@ncecire) argument for a THATCamp Theory to uncover the theoretical leanings of the digital humanities, in Alan Liu’s call to integrate cultural studies into DH approaches, and in the recent #TransformDH collective, set up by Anne Cong-Huyen (@anitaconchita), Moya Bailey (@moyazb) and M. Rivera Monclova (@phdeviate) to bring race/gender/class/disability criticism to the digital humanities.** In many of these debates, the “theoretification” of DH seems to be viewed with suspicion because it disturbs the implicit good nature of much of the DH community. Roger Whitson (@rogerwhitson), for example, mused on whether the digital humanities “really needs to be transformed,” arguing: “It seems to me that the word ‘guerilla’ reappropriates the collaborative good will of the digital humanities, making it safe for traditional academic consumption and inserting it into the scheme Stanley Fish and William Pannapacker (@pannapacker) highlight.”

I’ve been musing on the “more hack, less yack” issue recently, and it seems that Tara McPherson’s (@tmcphers) essay “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX,” in Lisa Nakamura (@lnakamur) and Peter Chow-White’s (@pachowwhite) recent collection Race After the Internet, may offer some important insights into this ideological impasse. In her essay, McPherson argues that in the mid-twentieth century a common structural logic developed due to computerization, one that privileged “modular thinking,” “common sense,” and disciplinary hyperspecialization. By focusing on processes that work via the modular form—simple blocks into which a complex system is broken down so that individual groups can analyze them—the rationale of this system appears “common-sensical,” thereby obscuring the actual political and social moment from which it emerges.

McPherson sees this modular logic at work both in the development of UNIX and in racial formations in the United States. She expands this to argue that modularity may be a hallmark of the Fordist moment of capitalist production in the United States, one that finds its later manifestation in the hyperspecialization of late capitalism, extending to the specialization of disciplines in higher education, such as Area Studies departments. This mode of modular thinking, she argues, is a type of “lenticular logic” which undergirds both the structures of UNIX and the covert racism of color blindness:

“A lenticular logic is a covert racial logic, a logic for the post-Civil Rights era. We might contrast the lenticular postcard to that wildly popular artifact of the industrial era, the stereoscope card. The stereoscope melds two different images into an imagined whole, privileging the whole; the lenticular image partitions and divides, privileging fragmentation. A lenticular logic is a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context. As such, the lenticular also manages and controls complexity.” (25)

Reading McPherson makes me think: to what degree does lenticular logic underlie the DH imperative for “more hack, less yack”? How much does digital humanities work, processed and organized through computational models, actually follow the Fordist logic of modularity? In the same way that UNIX engineers appealed to “common sense and notions about simplicity to justify their innovations in code” (28), neglecting how this common sense was itself constituted by their historical specificity as a class of workers in the 1970s, how has this sentiment provided the language behind “more hack, less yack”?

In other words, common sense is never simply “common sense.” What is “common sense” comes out of a particular socio-historical moment, just as “hacking” has derived from a very specific social context. And, just as UNIX programmers relied, in McPherson’s argument, on a common-sense modular “lenticular logic” to avoid speaking about the socio-political origins and conditions that allowed for their “common sense” to come into being, perhaps the same logic has underwritten our resistance to theory within the digital humanities. Where does our “common sense” in the digital humanities come from? How is it implicated in structures of privilege which remain invisible to us? Why are we so resistant to speaking about it, and how does the language of modularity aid us in this silence?

It appears to me that much of the “more hack, less yack” issue circles around the problem of modularity and common-sensical “form” that McPherson outlines in this essay. I see this in Bethany Nowviskie’s (@nowviskie) recent post, Don’t Circle the Wagons:

“Software development functions in relative silence within the larger conversation of the digital humanities, in a sub-culture suffused — in my experience — not with locker-room towel-snaps and construction-worker catcalls, but with lovely stuff that’s un-voiced: what Bill Turkel and Devon Elliott have called tacit understanding, and with journeyman learning experiences. And that’s no surprise. To my mind, coding itself has more in common with traditional arts-and-crafts practice than with academic discourse. Too often, the things developers know — know and value, because they enact them every day — go entirely unspoken. These include the philosophy and ethos of software craftsmanship and, by extension, the intellectual underpinnings of digital humanities practice. (If you use their tools, that goes for your DH practice, too.)”

Nowviskie’s elaboration of a “tacit understanding” that derives from “journeyman learning experiences” makes me wonder how much these learning experiences dovetail with McPherson’s notion of the modular, lenticular logic that structures UNIX and other mid-century structuralist Fordist systems. This “tacit understanding” creates a common-sense notion of simplicity, but one whose structure and “common-sensical” nature similarly allows for a significant amnesia towards its own socio-political origins and context. Again, if UNIX engineers appealed to “common sense and notions about simplicity to justify their innovations in code” (McPherson 28) while neglecting how that common sense was constituted by their historical specificity as a class of workers in the 1970s, how has this mode of thought provided the language for the “more hack, less yack” sentiment?

McPherson’s argument recalls Paul de Man’s Blindness and Insight, where de Man asserted that all critical readings are ultimately predicated upon a “negative movement that animates the critic’s thought, an unstated principle that leads his language away from its asserted stand… as if the very possibility of assertion had been put into question.” De Man argued that we needed to return to engaging with how a certain type of form made certain readings possible. At the same time, he asserted that blindness to that very form was critical to structuring our insights. While de Man’s metaphor is problematically ableist***, it still makes a critical point: that we need to interrogate how the logic of form tends to erase the perspective of its own creation. Just as literary theory has given critics insights that hide their own foundations, the logics of computation have given us a certain type of structure, a type of tacit understanding, a sort of visible logic and way of knowing that simultaneously obscure their own foundational assumptions.

I do not mean to suggest that tacit understanding equates to a certain type of blindness. That said, I do mean to recognize that all forms of shared, cultural understandings, whether they come under the umbrella terms “common sense,” “tradition” or “ritual,” are founded upon an important obscuring of their own particular socio-political specificity, and that to ignore this specificity is troubling. As Pierre Bourdieu observed, all cultural practices exist as habitus, a set of learned dispositions, skills and ways of acting that appear simply natural, but which are rooted in specific social-cultural contexts. My call, then, is for us to interrogate the habitus that makes up the Digital Humanities community.

Let me be clear. I get annoyed by jargon and obfuscation as much as the next person, which is why I am so attracted to the digital humanities community. But I do think that we need to invest in the creation of a metalanguage that will allow us to see the ideological foundations that undergird our “common sense.” And sometimes that comes hand in hand with theory. Theory doesn’t always need to be needlessly grating, especially if it allows us to understand how our implicit systems invisibly privilege and disenfranchise certain groups of people. We need to question the forms that make things appear “common-sense,” and to see value in the converse “less hack, more yack” proposition.

If computation is, as Cathy N. Davidson (@cathyndavidson) and Dan Rowinski have been arguing, the fourth “R” of 21st-century literacy, we very much need to treat it the way we already treat existing human languages: as a mode of knowledge which unwittingly creates cultural valences and systems of privilege and oppression. Frantz Fanon wrote in Black Skin, White Masks: “To speak a language is to take on a world, a civilization.” As Digital Humanists, we have the responsibility to interrogate and to understand what kind of world, and what kind of civilization, our computational languages and forms create for us. Critical Code Studies is an important step in this direction. But we should not stop there; we must continue to expand upon how computation and the digital humanities are underwritten by theoretical suppositions which still remain invisible.

** Alexis Lothian’s article “Marked Bodies, Transformative Scholarship, and the Question of Theory in the Digital Humanities,” Journal of Digital Humanities 1.1 (November 4, 2011), gives an excellent history of the #TransformDH group and the call for theory within the Digital Humanities. Thanks, Alexis (@alothian), for pointing me to this!

***Thanks to Natalia Cecire (@ncecire) for reminding me of this.


58 Responses

  1. Interesting post! My sense is that modularity is also at work in identity politics and much of the guerilla language I criticized in my blog post. McPherson also discusses the problem of modularity in the responses to protest movements of the 60s and the 70s — that a politics of separation and isolation replays this same modular logic.
    Interrogation is important, and you probably already know that I incorporate interrogation in my own DH work. But I also feel that the focus on making can add new dimensions to older questions. Besides, to me this whole binary assumes a zero-sum game between hacking and yacking, which, considering the MLA panels and THATCamps planned since then, clearly doesn’t have to be the case.

    Roger Whitson May 21, 2012 at 4:23 pm #
    • Hi Roger, thanks for your comment! I think the difference is that movements like #TransformDH don’t think of themselves as ‘common-sense,’ but rather try to interrogate some invisible assumptions within what just appears to be “obvious.” The problem of modularity that McPherson points out manifests more when disciplines become ‘silo-ed,’ e.g. Area Studies, rather than in a critical, comparative type of study. I’m not quite sure #TransformDH has those issues, particularly because they seem to emphasize the need to talk about the intersectionality of various positions.
      I agree, though, that there doesn’t need to be a stark division between hacking and yacking. But as Bethany Nowviskie points out in her post, this division does occur between “digital humanists” and software programmers, and between some within DH. I’m interested to see how this division is going to evolve in the next few years.

      admin May 21, 2012 at 6:06 pm #
  2. awesome writeup…thank you for putting into words the thangs that been goin on in my head.

    DocDre May 21, 2012 at 4:25 pm #
    • thanks for your comment :)

      admin May 21, 2012 at 6:08 pm #
  3. This is a marvelous essay. And, to really mix it up, I spent the weekend in NY with digital pals hand sewing reverse applique (I blogged about it on hastac and on my author blog). We literally sat around a large table on one day and in a smaller room at the back of a fabric shop another day stitching and talking about lots of things, but mostly technology. We were hacking with handsewing–repairing worn objects and talking about what NYU prof and politics of fashion theorist @jessamynhatcher so wisely and wittily calls human-object love. One thing we discussed was the more hack/less yack conversation and, good psychoanalytic folks all, wondered why those two had to be binaries and, if hacking was belligerently denying the yack, what about itself did it want to hide? In other words, what is the counterforce of hacking-making culture that “yack” makes visible? Conversely, what in “yacker culture” is made visible by actually trying to put that practice into play? In other words, stitch by handmade stitch, we were talking about the interplay of macro and micro, hacking and yacking. Reverse App is the metaphor I came away with: cutting away the surface of one thing can reveal the beautiful (or not!) underside of the other and, as the Japanese proverb goes, “the reverse side itself always has a reverse side.” Thank you again for such a wonderful discussion.

    CAthy May 21, 2012 at 4:48 pm #
    • Thanks so much Cathy! I’d love to read the post you were thinking about writing about more yacking…

      admin May 21, 2012 at 6:09 pm #
  4. Adeline, thank you for this excellent post! I think there’s much more fruitful discussion to be had on the role of tacit understanding and knowledge transfer in DH (both what it enables and what it can obfuscate), and I also find Tara McPherson’s work on the mid-century development of operating systems to be really relevant and helpful. (I haven’t read the one you reference yet, but her piece in Debates in DH is great, too.)

    I’ll just make a couple of quick comments. It’s interesting to me how often I hear variations on the “less yack, more hack” theme voiced as if that were a slogan representing a fundamental stance of the digital humanities community, somehow in opposition to deeper engagement with the theoretical basis of our work. I think in many ways “more hack, less yack” got away from its originators, the founders of THATCamp, who offered it not as a commentary on DH and the broader humanities generally, but on the let-me-read-a-paper-at-you nature of typical humanities conferences. They were trying to push against this, by driving the community more toward unconference formats that encouraged teaching, problem-solving, and exchanges that felt more productive and inclusive to a broader segment of the academic community (including the #alt-ac crowd). In other words, I always took “more hack, less yack” as a rallying call for a different kind of conversation-having — not a desire to shut down conversation.

    And I hope it’s clear enough in my Don’t Circle the Wagons post that my goal in foregrounding unspoken or (as Cathy mentions above) craft-culture understandings in DH software development is not to perpetuate or even necessarily to celebrate them, but instead to raise awareness that they exist. I do worry that it’s difficult for humanities scholars who haven’t had much concrete experience collaborating with developers to grasp some of the alternate modes and sites in which this side of DH discourse takes place. I also raise the issue because — even as we’ve been focusing on grad student training in programs like Praxis — there has been much conversation in the Scholars’ Lab about creating more professional development opportunities for DH developers to (as I say in the post) “grow as programming practitioners, to interrogate and articulate their craft, and to build and sustain a thoughtful and engaged culture of code-work in the humanities.”

    We think this is critically important for bridging what you frame here as a hack-vs-yack gap — and believe it has to do with a broader and more pragmatic set of concerns than are necessarily the topic of critical code studies or most other sides of the current DH conversation online. But again, there are many useful ways to approach this set of problems! For our part, at the SLab, we have been bringing some terrific people together to formulate a new program called “Speaking in Code?”, which we hope will make a positive contribution in this arena — so stay tuned. I don’t often make any kind of pronouncement before securing funding, but in this case, our partners and the Scholars’ Lab crew feel so strongly about the value of the program that we’re committed to going forward with “Speaking in Code?” no matter what — and are now just working to see how much travel support we can garner for bringing in a diverse group of participants.

    Bethany Nowviskie May 21, 2012 at 5:39 pm #
    • Thanks so much for this thoughtful response Bethany. And for your contextualization of the ‘hack/yack’ debate via the founders of THATCamp. Despite the origins of this debate, though, I have seen ways in which the ‘hack/yack’ issue has been used to shut off conversations about critical theory. I hope that this discussion will help contribute to furthering more discussion about this division.

      I’d love to hear more about what you’ll be doing with SLab, particularly in how you will be bringing together a community to bridge the ‘hack/yack’ divide through pragmatic concerns. It sounds really exciting! Do you envision pragmatic concerns which will also be self-aware about their social positions?

      admin May 21, 2012 at 5:57 pm #
    • Hi, Adeline — I should probably gloss “pragmatic concerns” a little: I mean to say that there’s a need, too, in these conversations, to be thinking about how day-to-day collaborative practice in DH software development (methods, tools, frameworks, assumptions, aesthetics, pedagogy, etc.) intersects with interpretive practices in scholarship. So the answer to your question (“Do you envision pragmatic concerns which will also be self-aware about their social positions?”) is yes! Of course! And I hope the conversations we’ll be fostering will also help people to see the degree to which some of this self-awareness is already part and parcel with DH practice — even if it’s not conveyed in ways that have typically been accessible to the broader community of humanities scholars.

      Bethany Nowviskie May 21, 2012 at 6:11 pm #
  5. In reading through your response to Roger above, Adeline, I find myself wondering to what degree theory itself is lenticular. Does it make things modular in order to see/read them better?

    In other words, if efforts like “#TransformDH don’t think of themselves as ‘common-sense,’ but rather try and interrogate some invisible assumptions within what just appears to be ‘obvious,'” I sometimes wonder to what degree the approach to finding invisible assumptions is itself “common-sense” for academics, the obvious approach. Not that it shouldn’t be done, but that it needs to consider itself open to its own charges against other epistemologies.

    In the end, I suspect that theory has always thought of itself as similar to hacking. It’s a way to do things with words, much like coding itself is.

    Brian Croxall May 21, 2012 at 9:26 pm #
    • That’s a fabulous comment Brian. And I think De Man would agree with you as he argues that all insights are predicated on a certain blindness due to the theory’s own limiting foundation. However, I think the issue is that some strands of theory are more amenable to uncovering their own intellectual and socio-economic baggage than others.

      But your point is well taken, and very intellectually honest–it’s incredibly important to unpack why all theories make certain ideas seem more “common sense,” even theories of the most progressive or radical variety.

      And I completely agree with your last point–theory is very analogous to hacking! It’s about using a metalanguage to describe/deal with/do things. Maybe that’s a good way of conceptualizing a bridge between the hack/yack debate–theory as another form of hacking!

      admin May 21, 2012 at 9:39 pm #
    • Language as hacking…it’s like we should go read some Neal Stephenson, right?

      Brian Croxall May 21, 2012 at 9:43 pm #
    • I’m sorry, but I do not think constructing theory — especially humanities theory — is like hacking. There is in my mind a crucial difference: the constraints on what constitutes “acceptable” code are much stricter and much more clearly defined than the limits on acceptable “theory.” Code is written in a particular language and executes, ultimately, on a particular machine. If it’s not syntactically well formed, it won’t run at all. If it has an error in logic, it will exhibit various kinds of unexpected and mostly undesirable behavior. There is a “physics” here, because we are dealing, ultimately, with physical machines. You run the code, and it works (in the sense of functioning as intended) or it does not, and it’s usually easy to tell which.

      Contrast this with the question of whether a “theory” is “good.” Maybe. Maybe not. It depends heavily on, among other things, the opinions of one’s colleagues, and in general the entire social context. A “bad” theory can be rendered “good” if enough people change their perspectives/worldviews/understandings. There is a social reality, but social reality is mutable in a way that physics — which ultimately underlies the execution of code — is not.

      It is of course essential to ask about the aims and effects of code. I think this is a major area of “critical” digital humanities studies. But “more hack, less yack” captures the fact that theory and code are fundamentally different types of objects. No amount of theorizing will fix a bug.
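      A minimal sketch of that distinction, if it helps (a hypothetical Python example, not drawn from any real project):

          # Syntax error: the interpreter rejects this outright; it never runs.
          #   def half(x) return x / 2   <- SyntaxError before execution begins

          # Logic error: this runs, but quietly does the wrong thing.
          def average(values):
              return sum(values) / (len(values) - 1)  # bug: should be len(values)

          print(average([2, 4, 6]))  # prints 6.0, not the intended 4.0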

      Jonathan Stray May 25, 2012 at 5:38 pm #
  6. @briancroxall a #transformdh version of snow crash would be awesome!

    admin May 21, 2012 at 9:45 pm #
  7. Love the post, and even more so for prompting such a wonderful conversation in the comments!

    I agree that we must not postpone a robust stock-taking on the products of our hacking and the social spaces that enable them. I see plenty of goodwill around me to know a beneficent reckoning is in the offing, and I look forward, like the rest of us, to the results of the Scholars’ Lab initiative. Brian’s point is also well taken. Words are tools in a way that tools are not, and many among us have already built wonderful and intricate machines without a lick of Python.

    Before returning to my half-code, half-argument dissertation, I wanted to share 4 brisk tunes for/with Adeline, wearing 4 different hats:

    [puts on the po-co hat]
    My beef with the social spaces of humanities computing, and its new marketing makeover as DH, comes from its erstwhile tendency to focus on the low-hanging fruit of the canon, ironically concurrent with the progress made by those who can’t code to save their lives to provincialize Europe — reason and heritage. The alibis for that insularity are perfectly understandable when you consider the uphill institutional battles that the digital has had to fight to hitch the humanities. Until I see evidence to the contrary, though, I will hold that race and gender have played a channeling role in the development of our existing digital archives and tools. A theory-inflected, volume-length historical account of the development of DH along racial and gendered lines will surely make it to the top tier of my reading list, but I fear I will not be overly surprised by the insights of the brave writer. In the interest of sharing (lenticulate?) our labor to good effect, I call dibs on creating a digital wonderland around the anti-colonial work of Aimé Césaire, which brings me to,

    [puts on the philologist hat]
    the intersection between words-as-tools and tools-as-tools. The emergent philology I inherit from Jerome McGann — a Euro-American phenomenon thus far — places us at the intersection of yack and hack always-already and forever more. Text-image-sound, that “massively addressable at different levels of scale” quid, encompasses all in a catastrophic feedback-loop. The machines we build — arguments, editions, visualizations — come at the beast from many angles, and we err most when we fail to locate ourselves in the playing field, especially when that field is our collected memory. I subscribe to the idea that our methods and ideas can be refined, as if by fire, through an encounter with the ambiguities of the material and the punctilious obsession of algorithms! Which reminds me,

    [puts on the newbie coder hat]
    I owe a massive debt of gratitude to the digital humanists at the Scholars’ Lab, and I can see clearly now the fragility and drudgework attending the orbits closer to the center of code. As my tío Paco would say, not all of us are called to do that work — and that’s a good thing! The digital humanities can be either lenticular or a circus, depending on how you look at it. I prefer the latter, as long as I get to play clown. We also have room for lion-tamers, jugglers, and dare-devils!

    The ones who worry me the most are those who are called to play ring-masters for our growing publics, and I’m betting a solid $20 that they will continue to be those who orbit closer to the yacking sun. I hope you can agree with me that they/we should build such arguments, and design such spaces, where those who have been called to the pleasant solitudes of the watch-maker can shine to the benefit of the party-formation. I, of course, would like to see a bit more color in those spaces, but in order to do that,

    [puts on the teacher hat]
    We must nourish a generation of bodies of color who can come join us, and in most cases, it is all the more difficult when we wait until college to recruit. Many of the white digital humanists I know who have taken leadership roles are working hard to make spaces for the rising generation to which you and I belong, but they wade in molasses, some inherited, some Kafkaesque (i.e., of their own making). We must beef up our own efforts to pass the baton to the party-formations of the future.

    I just taught a class of 16 young women and 2 young men. 3 of my best 5 were women of color. One took to code, another one thrived in design, but the third brought them all together. The power to theorize social spaces is our best tool. How are we going to use it?

    elotroalex May 22, 2012 at 12:18 am #
    • Hi Alex! Thanks for your comment. Your post makes me think of Peter Kerry Powers’ blog post on the conference Rethinking Success, which argues that simultaneous writing/coding literacies are going to be extremely desirable skills in tomorrow’s workforce.
      And I agree, part of empowering people of color is getting them involved in the infrastructure of coding, and not simply a matter of representation. One of my favorite novelists, J. Nozipo Maraire, writes in her novel “Zenzele”: “Before the lion learns to write, tales of hunting will always glorify the hunter.” So it is in code too! There was also a really interesting article by Trevor Owens about Sid Meier’s Colonization on this point recently.

      admin May 22, 2012 at 10:07 am #
    • Alex, I’m interested in your assessment of past DH work as having focused on the “low-hanging fruit of the canon.” When I think of the most long-standing, resource-intensive, and respected digitization projects in humanities computing, I think of Orlando and Brown Women Writers. And I know that even as we were working on a project like Rossetti in the mid- to late-1990s, it was with the sense that a digital archive could bring renewed attention to a non-canonical (though admittedly male and British) writer/artist who had been deeply neglected. Having just read Michelle Moravec’s response to Adeline’s post, I’d be interested to hear you or others reflect on what Julia Flanders has to say about canonicity and the Productive Unease of 21st-century Digital Scholarship in DHQ.

      Bethany Nowviskie May 22, 2012 at 11:44 am #
    • well obviously I’m not Alex, but as I was invoked and Adeline tweeted a link to me I’ll jump in here. I do think there is an “unease” for me, due in part to my age (almost 44) and the ways in which methodological transitions in history and the humanities have created difficulties around identity. Obviously I don’t see the “digital” in contrast to the “humanities” or I’d simply do what the majority of historians are doing and ignore it. My recent foray into Google Books has left me with some unease re: reliability, but I realize that there are many fine digitized projects with far higher reliability. I still question, even though I admire it, how digitizing premodern women’s writing from the archives takes precedence over, say, modern stuff from the archives in terms of what gets done first. I think it is all about scholarly legitimacy of subject matter (same re: Rossetti, Woolf). I do see the unease as productive (and not simply as a residual effect of academic humanities people doing what we do, which is question everything, which is quite true as well).

      Michelle Moravec May 22, 2012 at 12:07 pm #
    • Loved the essay by Julia Flanders, thanks for that!

      I also share Michelle’s use of the word ‘unease,’ which I don’t think departs from Flanders’. I almost included a line or two acknowledging Orlando and the Women Writers Project. I have always admired those projects, and I think they are a model to follow in many respects, but a model also to avoid in others.

      I feel I have to make a couple of clarifications about my (unfair) summary of DH history in one line. First, about gender and race: to me they always intersect in important ways. When I read ‘Women Writers Project,’ I feel uneasy about the universal ‘Women’ in the title. When I look at the list of the women covered in the project, I find a few scattered colored women, Norma Elizabeth Boyd, Phillis Wheatley and a few others. The selection criteria of the WWP reflect a vision of the archive (even pre-copyright) that is limited to a provincial (Euro-American) vision of ‘women writers.’ This is the ‘common sense’ that Adeline questions, the vision of history that is taken for granted and universalized. If Julia Flanders were a Jamaican woman, the WWP would look very different, despite the obstacles of copyright. I’m not trying to detract from a project and a team I admire. I’m calling attention to the need to supplement that work with different ‘common senses’ and ‘tacit practices’ that for complex historical reasons have not become part and parcel of DH… yet. If we all bring the discipline, ethos of collaboration, intelligence and awareness of labor issues that Julia Flanders exemplifies into these new spaces, we will be all the better for it! If my vocabulary and my thinking are alienating in any way to anyone of us, I encourage you to tell me, because we do this together or fail.

      The other issue I feel I need to clarify is my use of the word canon. While I think you are right to consider Rossetti extra-canonical, he is only so when we imagine the canon to be a restricted version of Harold Bloom’s already hyper-restricted canon, or perhaps Mortimer J. Adler’s vision of the great books. To consider Rossetti extra-canonical keeps the continuum within a normalized version of British and Anglo-American cultural history. I use the word canon to mean overlapping and contending visions of our heritage. Using the word this way, Rossetti, Woolf and Shakespeare belong to the same canonical sphere—which I admit has formed me and continues to inform much of my own thinking—but which is, as all other ones, provincial and not universal. To America and la otrAmerica, we come from all directions of the wind; to do that fact justice, our collected vision should reflect an all-encompassing, multi-provincial memory. To locate ourselves in a social topology, as Adeline suggests, sounds like a promising, if avowedly risky, way to keep us honest and get us going. I, of course, will only participate in reckonings and revolutions as long as dancing is involved!

      Ted Underwood recently pointed out (in a comment section I can’t recall right now), in response to similar language coming from me, that most data is useless right now anyway, not just that of marginal archives. As Michelle, Julia Flanders, and most of us have noticed already, the quality of most large scale digitization projects—how shall I say it?—oh yes, sucks. Nonetheless, we should not underestimate the impact of this massive digitization on scholarship and pedagogy, no matter how crappy it is from a philological point of view or how forbidding to data-mining. I have heard arguments citing Google’s digitization of a small library in Costa Rica as evidence that the problem is not as bad as I make it out to be. (I’ve never even seen these digitizations). The Google-Rican defense is the best argument for why we need to double-step into a more auto-critical future.

      In that spirit, I confess I speak from a very troubled and precise locus within the habitus. I have had my fair share of accusations of machismo, and I have many real blind-spots and alibis. That Aimé Césaire himself, to whom I dedicate most of my current work, has a problematic relationship to gender does not escape my attention either.

      I am also aware of the continuum between content-agnostic tools and content-tool combos.

      [image: a spectrum running from content-agnostic tools at one end to content-tool combos at the other]

      For all my talk of provinciality (which I borrow from Dipesh Chakrabarty) I believe that tools CAN have universal application—Neatline, Juxta, Zotero are promising in this regard—but only insofar as we can recognize their historical grounding in provincial practices. When I do an admittedly anecdotal survey of the DH land, most of the tools I see, though, are closer to the right end of the spectrum above. Together with mass digitization, here’s where I locate much of the “low-hanging fruit.”

      Copyright alibi acknowledged as a true obstacle, but… The colored archive—image, text and sound—predates the 1920s. Oftentimes it’s not even a matter of geography. One of the most important collections in the world, for example, lies here in the US, at Brown University of all places, containing among other things much of the Haitian thinking in the 19th century on democracy and freedom. Considering Haitian Independence represents the first time in history when the phrase “all men are created equal” meant something close to the letter, I think we can agree we have work to do. Again, here’s a bit of auto-critique: I interviewed for a DH position at Brown and did not get it. Do I say what I say because I secretly resent it? I wouldn’t put it past me. Should we ignore the example because of it? No.

      I’m aware that I have much to learn about the history of DH and the state of the union, and anyone who knows me can vouch that I can be corrected and swayed. I also realize now that I owe some of my research time to learning that history and surveying the land more rigorously. But first… that digital wonderland on Césaire!

      elotroalex May 22, 2012 at 6:56 pm #
  8. Interesting post, Adeline. Thanks. You indicate in one of your comments that certain kinds of theory can give us a purchase on our hidden assumptions. I wonder which forms of theory you think can do this, and why? I admit that coming out of Duke back in 1991, I was filled with theory hope, to use Fish’s term, but I am less and less hopeful one might say :-). I still love “doing theory,” but I think I’ve started to think of it increasingly in those terms–doing something rather than theorizing about something: theory is its own kind of practice or language. To my mind it doesn’t give me a unique access to my assumptions about literature and culture, but it gives me a different language. Thus, I am not critical about my practice from the standpoint of theory as something that stands outside practice, but I am able to get purchase on one linguistic practice by virtue of inhabiting more than one linguistic space at once. In some sense I think this amounts to an internal (I hope not schizophrenic) conversation, whereby the different languages I speak–theory-speak, litcrit-speak, fiction writer-speak–engage one another critically. I don’t know how this applies to things like coding since I don’t do it, but I take it that coding is treated like practice in the theory/practice divide.

    Peter Kerry Powers May 22, 2012 at 8:45 am #
    • Hi Peter! Thanks for your comment. I think that various strands of postcolonial theory and new historicism have tried to reveal our hidden assumptions. One of the most compelling examples has been Ranajit Guha of the Subaltern Studies Group. In his earlier work, Guha criticized dominant Indian historiography because it did not consider religious peasant uprisings to be serious revolutions. Guha argued that the Anglicized educations of many of the powerful Indian historians led them to think that serious revolutions had to follow a more Western model. Religious peasant uprisings did not fit into this model, hence they were left out of this historiography. By calling attention to this invisible intellectual bias, Guha returned agency to the religious peasant revolt, the “subaltern” within Indian history.

      I believe that the most convincing strands of theory do not simply focus on formalist abstraction, but use these abstractions to uncover how we conceal our own assumptions from ourselves. I think Foucault’s notion of genealogy is another great instance of this–locating the intellectual movement of ideas in their various transmutations.

      admin May 22, 2012 at 10:19 am #
  9. Very thoughtful post, Adeline. The comments demonstrate that you have identified a key issue in digital humanities.

    I just thought I would add a very Stockton-centric tangent about “lenticular.” Although I of course saw lots of lenticular products growing up, I didn’t know what they were called, and I first encountered the word “lenticular” when our colleague Hannah Ueno displayed her lenticular prints at the faculty art exhibit in our new gallery.

    john theibault May 22, 2012 at 9:49 am #
    • Thanks John! And thanks for the reference to Hannah’s work–I wasn’t aware of it!

      admin May 22, 2012 at 10:59 am #
  10. The contextualization of “more hack/less yack” from Bethany is important, and I’ll add that different THATCamps themselves take on varying ratios of hack/yack — after all, unconferences are self-organized, and so will follow different courses based on who is there.

    But I can also see some of the principles echoing beyond the unconference setting as the conversation moves more into the interaction of DHers and coders (no binary intended there! lots of grey area!). I think the role of modularity there will be interesting to look at — I clearly need to catch up on McPherson — as will the idea of ‘common sense’. From my limited knowledge, one essential aspect of the modularity of Unix is the principle that each module should “do one thing, and do it well”, and that principle is widely applied to programming in general.
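    A minimal sketch of that principle (hypothetical Python functions, standing in for a Unix-style pipeline):

        # Each function does one thing; power comes from composing them,
        # much like a shell pipeline (cat | grep | sort | uniq).
        def read_lines(path):
            with open(path) as f:
                return f.read().splitlines()

        def matching(lines, needle):
            return [line for line in lines if needle in line]

        def unique_sorted(lines):
            return sorted(set(lines))

        # Composition: each stage is separately testable and replaceable.
        # result = unique_sorted(matching(read_lines("notes.txt"), "hack"))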

    Thus, people with more of a hacking/coding bent are often turned off by the more yack-oriented sessions at THATCamps, because they so often range over a wide array of interrelated topics and ideas (doing many things), and so are less likely to produce something actionable or working in the end (not doing it well). I’ll wager that there are many bridges to make between valuations of both “doing one thing” and “doing it well” among people involved in DH in various capacities.

    To take one example — I think a really important one because it rankles against good humanities work — there are objective measures of “doing it well” in computer science. Sure, there is room for personal coding style and style guides for projects. But in general, a 5-line code snippet that does the same thing as a 10-line code snippet is often, to coders, objectively better. Or, measures of speed and efficiency also exist as objective measures. There is an investment in objectively better code that I think is hard to reconcile with the humanities’ discomfort with phrases like “this is objectively better than that.” Those objective measures make up part of the “common sense” of programming.
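    To make that concrete, here is a small hypothetical Python illustration of the same task written two ways:

        # Task: collect the squares of the even numbers in a list.

        # Version 1: explicit loop (more lines, more state to track).
        def even_squares_loop(numbers):
            result = []
            for n in numbers:
                if n % 2 == 0:
                    result.append(n * n)
            return result

        # Version 2: a comprehension doing exactly the same thing.
        def even_squares(numbers):
            return [n * n for n in numbers if n % 2 == 0]

        # Both return [4, 16] for [1, 2, 3, 4]; many coders would call the
        # shorter one better: same behavior, less code to read and maintain.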

    Just one more connection, this one to the similarities of coding and theory. Each programming language might have slightly different ways of accomplishing the same task. Thus, when snarky fights break out about which programming language is “better” than another, the first examples to come out are often along the lines of “Ruby can do in 3 lines what takes 8 lines in PHP! It’s obviously superior!” But there are more factors at work in the choice of a language for a project, and they often go unexamined. Is there an implicit theory embedded in programming languages to be unpacked from the perspective of a humanist, rather than a computer scientist?

    Patrick Murray-John May 22, 2012 at 10:54 am #
    • Thank you for such a thoughtful response, Patrick. Your comment makes me think of how the hack/yack debate parallels many conversations in my own home. My husband is a network engineer, and he often likes to tease me about how humanists simply sit around, pontificate and “feel,” whereas in his field results and progress are distinctly measurable.

      I am hopeful, though, that as you suggest, there is much fertile ground to develop here, especially in relation to the similarities of coding and theory. If the value of coding languages is evaluated on the basis of efficiency, how do we evaluate theory or “human” languages? Mandarin Chinese, for example, is more efficient than English (more meaning is conveyed quickly in fewer words/syllables), making it (to my mind) a better vehicle for poetry; but it is often less precise than English.

      McPherson also points out in her work that as much as we need to historicize computerization, there is important work to do with theory, as theory is insufficient as it currently stands. In her words: “Just as the relational database works by normalizing data–that is by stripping it of meaningful context and the idiosyncratic, creating a system of interchangeable equivalences–our own scholarly practices tend to exist in relatively hermetically sealed boxes or nodes. Critical theory and post-structuralism have been powerful operating systems that have served us well; they were as hard to learn as the complex structures of C++, and we have dutifully learned them. They are also software systems in desperate need of updating and patching. They are lovely, and they are not enough. They cannot be all we do.” (35)

      admin May 22, 2012 at 11:09 am #
    • “just as the relational database works by normalizing data–that is by stripping it of meaningful context and the idiosyncratic, creating a system of interchangeable equivalences…”

      I think the focus on “relational database” is a red herring here. “Normalizing” data is not a property of relational databases, though it does occur there. It’s a property of mathematical logic in general, starting with the propositional calculus developed by the Greeks. All such schemes are “abstraction” processes that map reality to symbols, which, yes, discards context and information. That’s the point; that’s why abstraction is powerful. This type of abstraction both far predates the relational database and is found just about everywhere else in technology. And while scholars are focusing on the relational database, practice is already moving rapidly past it, as key-value stores and other NoSQL data stores become increasingly common.

      In other words, I feel there is a weakness here in discussing a particular historical embodiment of a system for storing abstracted symbols (“the relational database”) rather than the underlying process of formal symbolic representation and reasoning. The former will disappear; the latter is fundamental, and has thousands of years of development behind it, especially the strand of work starting in the 19th century as developed by Frege, Peirce, Russell, Gödel, Turing, etc.
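      A tiny sketch of the trade-off (hypothetical Python, with an invented record, standing in for any storage scheme, relational or not):

          # A record rich with idiosyncratic context...
          note = "Aunt Mae's recipe card, coffee-stained, tucked in a 1943 diary"

          # ...normalized into interchangeable fields. The abstraction is what
          # makes querying and counting possible; the stain and the diary are
          # exactly what it discards.
          record = {"type": "recipe", "owner": "Mae", "year": 1943}

          # The same trade-off holds in a key-value store, not just a relational
          # table: the storage scheme changes, the abstraction remains.
          store = {}
          store["recipe:mae:1943"] = record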

      Jonathan Stray May 26, 2012 at 5:52 pm #
  11. I have had some similar thoughts lately about the tools digital humanists use to do their work. The structures used in programming (iteration, sequence, objects) favor categorization and the ability to break down tasks into discrete steps. Even polymorphism assumes that objects that aren’t exactly alike, but are close enough, can be categorized together. This applies not only to software but to hardware as well, where switches are either off or on, with no room for nuance.
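    A minimal sketch of that assumption (hypothetical Python classes, purely for illustration):

        # Polymorphism treats close-enough objects as one category.
        class Document:
            def render(self):
                raise NotImplementedError

        class Essay(Document):
            def render(self):
                return "prose"

        class Poem(Document):
            def render(self):
                return "verse"

        # The loop works only because Essay and Poem, though not identical,
        # have been categorized together under Document.
        for doc in [Essay(), Poem()]:
            print(doc.render())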

    I think that it is important to remember that computing, if you take it back to Babbage, emerged from and supported liberal capitalism and empire. Not that other tools that humanists use did not (professionalization, the monograph, etc.), but there does seem to be an awareness surrounding these other tools that the computing tool lacks.

    Michelle Davison May 23, 2012 at 11:38 am #
    • You are not wrong, but it might be fairer to say that it is engineering, not “programming” per se, that favors categorization and the ability to break down tasks into discrete steps — this provides the clarity and structure necessary to know for sure that a machine, once built, will actually function (and of course, writing software is the act of directing a physical machine.)

      However, while I appreciate that a single bit is either on or off, and perhaps this encourages black-and-white true/false thinking, there’s plenty of room for nuance in the digital world. I think you may be fascinated to investigate the “statistical turn” that artificial intelligence and related fields such as natural language processing have taken in the past two decades. There are actually quite a lot of ways to represent ambiguity, uncertainty, polysemy, etc. in a computational system. Examples include fuzzy logic, interval arithmetic, non-monotonic and modal logics, neural networks, non-deterministic algorithms, emergent systems including cellular automata and fractals, and my personal favorite, Bayesian belief networks.
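      One small worked example of that graded, non-binary style (a hypothetical Bayes-rule update in Python, with made-up numbers):

          # Bayesian updating: belief is a probability, not an on/off switch.
          prior = 0.5            # initial belief that a text is, say, satire
          p_e_if_true = 0.8      # chance of this evidence if it is satire
          p_e_if_false = 0.2     # chance of this evidence if it is not

          # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
          posterior = (p_e_if_true * prior) / (
              p_e_if_true * prior + p_e_if_false * (1 - prior)
          )
          print(posterior)  # 0.8: strengthened, but still not a binary verdict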

      Jonathan Stray May 26, 2012 at 6:07 pm #
  12. This discussion alternately delights and enrages me. Delights, because as both a practicing journalist and a practicing computer scientist, I love to see the humanities mixing it up with CS. (See also Dan Cohen’s observations on the strong similarities between DH and digital journalism.) But I am also hugely frustrated by what feels like, to me, a failure to grasp some important technical basics by some of the more humanities-oriented practitioners — leading to some very strange sounding arguments.

    First, all of this analysis of “modularity” and such misses some key points. Modularity is a fundamental response to complexity, so fundamental that it is widely found in nature — this is more or less why your body evolved to be made of individual organs, which are in turn made of individual cells. The same type of logic, which is fundamentally about counting the number of interactions between parts, applies to human-constructed systems. There is a brilliant 1962 paper by Herbert Simon called The Architecture of Complexity which elucidates this point in detail. It is worth reading, because this is an essentially mathematical argument about how anything with millions of separate parts can possibly function as a whole. The points he makes are not negotiable; they are “physical” truths as opposed to “social” truths.

    To wit: without modularity, the number of possible interactions between parts increases quadratically, while the probability that all are individually working properly decreases exponentially (and here I actually mean the mathematical definition of exponentially, “being multiplied by a constant factor at each interval,” rather than the metaphorical/colloquial “very fast.” Communication between science and humanities scholars is forever clouded by such misunderstandings.)
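    The arithmetic is easy to check (a quick Python illustration, with made-up numbers):

        n = 1000     # number of parts in a system
        p = 0.999    # probability that any single part works correctly

        pairwise_interactions = n * (n - 1) // 2  # quadratic growth: 499,500
        all_parts_work = p ** n                   # exponential decay: ~0.368

        print(pairwise_interactions, round(all_parts_work, 3))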

    The upshot of this is that the designers of Unix were forced by fundamental physical constraints to build their system in modular form. We simply do not know any better way to build a large system than to break it up into pieces with limited interaction, and neither, apparently, has billions of years of evolution surfaced an alternate solution in nature. McPherson is right that this can obscure larger patterns, and thereby perpetuate or justify oppression. But you cannot simply choose not to think and design in modular parts; your mind and mine are not capable of pondering that kind of complexity. Human working memory is simply not up to the task of considering how the line of code I just added interacts with the ten million lines already present (a typical number for a large piece of software), and so the coder must restrict their gaze.

    But that doesn’t mean that the coder’s gaze must be focused solely on individual modules. Simon’s paper also notes that hierarchy is one solution to complexity. The word “hierarchy” has a dirty flavor in the humanities, but really this is just the concept of “levels of description.” It is possible to focus our attention on a single cell, an organ, a human, a family, an institution, or an entire civilization — and the ideal inquiry moves between these levels as needed. There are strong parallels here to ideas from systems theory, a 20th-century discipline which has, from the start, endeavored to understand the whole as something more than its parts. Ideas such as “emergent phenomena” come from systems theory.

    Another commenter noted that there is a notion of “objectively better” in computer science. This is another word that makes humanities scholars nervous, but I claim that within computer science it is very often totally unproblematic. The major constraints in software engineering are the computer’s time, the programmer’s time, and the amount of data storage required. If you can minimize any of these without compromising the others, that solution is better. Similarly, there are many problems for which there is a very clear “objective function.” If we are designing a GPS application, then it should direct us on the shortest path and not one that is longer. The ambiguities arise in the higher-order goals of software.
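    For instance (a hypothetical Python timing sketch): two functionally identical membership tests, one of which is measurably, and in that narrow sense objectively, better:

        import timeit

        data_list = list(range(100_000))
        data_set = set(data_list)

        # Same question, two data structures. The set answers in (amortized)
        # constant time, the list in linear time; by the "computer's time"
        # criterion the set version is simply better, no interpretation needed.
        print(timeit.timeit(lambda: 99_999 in data_list, number=100))
        print(timeit.timeit(lambda: 99_999 in data_set, number=100))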

    The GPS example is simple. But what about a search engine? How do we evaluate whether or not search results are “good”? Now this is a complex question that humanities scholars can get their teeth into! It is social, normative, and involves a powerful institution (e.g. Google) that recreates patterns of power relations. But there is a catch. Humanities scholars can, with appropriate theoretical tools, describe what a search engine does in society, who it benefits, etc. But without an understanding of the engineering constraints involved in building a search engine, they will not be able to envision or prescribe a better solution! These technical constraints are subtle, and like the mathematical constraints on complex systems they are non-negotiable because they are grounded in physical realities. As an example of first-rate sociological analysis that is also technically informed, I give Seth Finkelstein’s “Google, Links, and Popularity versus Authority.”

    The fundamental point I am making is this: the craft of coding interacts very strongly with physical constraints and laws. Critiquing software without understanding those constraints can get scholars into situations not unlike critiquing the moon for the inconvenient tides it generates. In many cases, there is simply no choice, because that is how the universe appears to work. Separating what must be from what is contingent is of course a key task of the sociologist, but I fear that many scholars are insufficiently familiar with the “hard” empirical disciplines to appreciate how much of computer science is beyond human choice. Although the two interact hugely, there is a fundamental difference between the social and the physical.

    Jonathan Stray May 25, 2012 at 6:38 pm #
  13. Hi Jonathan, thanks so much for a long and thoughtful post. I am happy that you took the time to engage with this dialogue. I agree that some of this has to do with how we may not be accurately understanding each other’s terms.

    To help our discussion, I would like to clarify a few areas of my argument:

    1. I don’t think that humanists have an issue with something being ‘objectively better,’ but with the idea that a single theory can produce something which is more universally ‘true.’ You write that there is a key difference between universal, “scientific” truths and social ones, and that when describing the physical world, modularity is simply a more appropriate tool.
    But McPherson also notes in her essay that modularity was used to structure both UNIX and systems of social division such as race in mid-20th-century America. In other words, she shows that modular systems of thought have been used to interpret the social world, not simply the physical one, as you claim. This, she points out, is dangerous–and from your comments, it appears that you agree with this.
    McPherson also argues that the philosophy of modularity has structured higher education, and sees this in the hyperspecialization of disciplines. And what exactly is so bad about hyperspecialization? As you say, the world is simply too large and too complex to be understood, which is why computer scientists tried to break it up into smaller, more understandable units.
    The answer: when we hyperspecialize, people studying a problem are prone to missing what is going on within the larger issue because they are so focused on one small part. Cathy Davidson brings this up at the start of her book “Now You See It,” where she discusses the “Invisible Gorilla Experiment.”
    By focusing so hard on counting the passes between the basketball players, the bulk of the observers miss the man in the gorilla suit who weaves through the players.
    This shows us something: if we are trained to focus only on one thing, no matter how well we do it, we will miss some important points in the whole picture. For these reasons, thinkers such as Davidson and McPherson have critiqued theories like modularity. To summarize: one of the limitations of modular thought is that it discourages interrelatedness and connections.

    2. However, my main argument is not against modularity itself, but rather the culture that stems from modular thought and its unspoken assumptions and ethos. By the “culture” of modularity, I am referring to the belief that this is the best and most descriptive way of understanding the world around us, because it appears to be “transparent,” or common sense.
    Now, this tension that you and I are uncovering actually has a much older genealogy, one which stems from the humanistic and philosophical rejection of scientific positivism as the “best” way of understanding natural and social phenomena.

    3. Finally, what I was trying to get at in my post is this: we need to understand that all theories emerge out of a particular socio-historical context–as does the philosophy of modularity. I am not saying that modularity should not be used as a tool, but rather that we should recognize that, like all tools, it has its limitations and biases. But the culture of modularity discourages this historicization of itself.

    admin May 25, 2012 at 8:57 pm #
    • All right then. A few comments before I try to pinpoint where I think we disagree.

      1) I wonder if I come off as a die-hard positivist, because I certainly don’t believe I am. The various flavors of post-positivism, relativism, social constructionism, etc. seem to me to be an important strand of thought because they acknowledge that the universe and the universe of experience cannot be fully understood by “science” as classically conceived. For example, positivism can be a poor theoretical and methodological choice for the study of anthropology. (For an excellent discussion of the limitations of a scientific program in social endeavors, see Rittel and Webber’s marvelous paper “Dilemmas in a General Theory of Planning“.)

      However, the positivist program is still yielding tremendous results in essentially every technological field. That means scholars are going to need to appreciate how to successfully apply it if they wish to engage with modern technical capability as it is currently developed. I say “capability” intentionally here; one of the key embedded concepts that I am trying to ensure we understand similarly is the idea that, at any given point in time, certain things are (technically) possible and others are not.

      Do we agree on that much, just in the way of foundational epistemology?

      2) I think we also agree that it is possible to focus on the parts and neglect the whole. Admittedly this is a hazard of the positivist tradition, but I don’t think you’re giving the lab-coated crew enough philosophical credit here. By bringing up systems theory, I am trying to show that this problem was well appreciated by the mid-20th century; Wiener’s “Cybernetics” was published in 1948.
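
      To make that point concrete, here is a minimal sketch (my own illustration, not Wiener’s) of the canonical cybernetic example, a negative feedback loop. The interesting behavior, stable regulation, belongs to the loop as a whole rather than to any single part:

      ```python
      # A thermostat as a feedback loop: sensor, controller, actuator, environment.
      # No single part "contains" the stability; it emerges from the whole loop.
      def run_thermostat(setpoint=20.0, outside=5.0, steps=50):
          temp = outside                      # the room starts at the outside temperature
          for _ in range(steps):
              heater_on = temp < setpoint     # sense and compare against the goal
              if heater_on:
                  temp += 1.5                 # act: the heater warms the room
              temp -= 0.1 * (temp - outside)  # disturb: heat leaks back outside
          return temp

      print(run_thermostat())  # hovers near the setpoint despite the cold outside
      ```

      Analyze the heater alone, or the leaky room alone, and the regulation is invisible; it only appears when you look at the circuit of relations.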

      3) And now the crux of my objection to what I understand as McPherson’s argument (I haven’t been able to find the original paper; is there a link?). She claims that “a common structural logic developed due to computerization.” She sees parallels between modular software design and a specialized society. She indicates how this perpetuates “common sense.”

      I agree on the common sense thing. Yes, operating always at the level of parts will discourage questions about the whole. No argument.

      But I am doubtful on the other points. First, does she really show that “a common logic” existed between computer scientists and other elements in society? And how are we to assess who influenced whom here, or take into account the fact that modularity is a basic property of the physics of complex systems, including living systems? A beehive and a pig are complex modular systems too, so could we argue that this idea actually arose with the 19th-century biologists? (I have no idea; I’m just sketching why I believe hers is an extraordinarily difficult claim to establish.)

      It also looks to me like she defines the essential content of “modular thinking” as something like: looking at the parts and not seeing the whole, a strictly reductionistic gaze. Ok, sure, but do modular software engineers actually think this way? Computer science is steeped in the vocabulary of levels of abstraction, emergent systemic effects, and cross-cutting concerns.
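
      As a concrete illustration (a minimal sketch of my own, not drawn from UNIX or any real system), modular code is typically written against interfaces precisely so that the designer can reason about the whole at a higher level of abstraction:

      ```python
      # Modularity in practice: each module hides its internals behind an interface,
      # while the composition layer reasons about the system as a whole.
      from abc import ABC, abstractmethod

      class Storage(ABC):
          """The contract callers depend on, independent of any one implementation."""
          @abstractmethod
          def read(self, key: str) -> str: ...
          @abstractmethod
          def write(self, key: str, value: str) -> None: ...

      class InMemoryStorage(Storage):
          def __init__(self):
              self._data = {}
          def read(self, key):
              return self._data[key]
          def write(self, key, value):
              self._data[key] = value

      class CachingStorage(Storage):
          """A whole-system concern (performance) handled by composing modules."""
          def __init__(self, backend: Storage):
              self._backend, self._cache = backend, {}
          def read(self, key):
              if key not in self._cache:
                  self._cache[key] = self._backend.read(key)
              return self._cache[key]
          def write(self, key, value):
              self._cache[key] = value
              self._backend.write(key, value)

      store = CachingStorage(InMemoryStorage())
      store.write("greeting", "hello")
      print(store.read("greeting"))  # the caller never sees which part did the work
      ```

      Designing the interface forces the engineer to think about how the parts will interact before any part is written, which is rather the opposite of a strictly reductionistic gaze.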

      In short, it seems to me that her argument really goes like this:

      a) computerization produced a mindset in society
      b) the mindset of modular software design is focusing on the parts to the exclusion of the whole
      c) focusing on the parts to the exclusion of the whole perpetuates systems of oppression

      I take c), but I would need a strong historical argument to believe a), and b) seems to me a mischaracterization of what modular software design entails. It’s not possible to engineer something big (like an operating system) without imagining systemic effects.

      Jonathan Stray May 26, 2012 at 2:41 am #
    • @jonathanstray:

      1. Your reading of where we disagree is correct, in that you disagree with McPherson’s main thesis, that is, with your points (a) and (b). Unfortunately her essay isn’t online yet, but you can read part of it via Amazon here (it’s chapter one). You might also try to request a copy from McPherson herself (@tmcphers on Twitter); Ben Brumfield (@benwbrum) just asked her this morning if she could put it online.

      2. My point in my blog post is about (c) and its applications. I use McPherson’s explanation of (c) to try and cast some light on a current debate within the digital humanities. On that level, I think you agree.

      3. Finally, I’d like to respond to a point in your earlier comment where you argue that coding is not theory. I see where you are coming from, but I think that coding and theory do have some similarities, because both are methods we use to describe (or create) “real” or virtual worlds. Both can be seen as metalanguages which describe or create the assumptions of these worlds. Trevor Owens (@tjowens) has an interesting post on Play the Past where he goes into the code of Sid Meier’s “Colonization” to unpack the cultural assumptions written into the gaming code:
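
      To give a flavor of that kind of reading, here is a purely hypothetical fragment I’ve written for illustration (it is not Owens’ example, and not the actual Colonization source); the point is how a worldview can be compiled into an apparently neutral game rule:

      ```python
      # Hypothetical game-rule fragment (illustrative only; not real Colonization
      # code). The categories and numbers are design choices, yet the player
      # experiences them as neutral "mechanics."
      PRODUCTION_RATE = {
          "colonist": 1.0,   # baseline labor value
          "convert": 0.66,   # indigenous "converts" produce less, by fiat
      }

      def goods_produced(worker_type: str, base_yield: int) -> int:
          """The asymmetry lives in a lookup table, not in any visible argument."""
          return int(base_yield * PRODUCTION_RATE[worker_type])

      print(goods_produced("colonist", 3))  # 3
      print(goods_produced("convert", 3))   # 1: the assumption, executed
      ```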

      admin May 26, 2012 at 8:14 am #
    • Thanks for the link to Trevor Owens’ analysis of the code for the game Colonization. I read it with great interest, but in general I have to say I was not impressed by his arguments. My response to his critique is here:

      And thanks for letting me write all over your blog with my many comments!

      Jonathan Stray May 26, 2012 at 7:23 pm #
    • Tara McPherson has kindly offered to check the terms of her contract and post the paper if possible.

      There’s an interesting issue here about Open Access and the nature of this conversation. Saturday morning I was trying to explain the discussion to my wife, a software engineer who worked on IBM’s UNIX development team in the 1990s. She thought the race-modularity-UNIX-1970s argument was ridiculous — or rather she thought my half-remembered summary of Adeline Koh’s paraphrase of McPherson’s argument was ridiculous, which is not the same thing. Obviously we can’t evaluate whether McPherson’s argument is subtle or silly without being able to read it.

      That said, what kind of conversation does McPherson’s article drive in its current location, and how is that conversation different from the one that might follow open publication? Her essay occurs as part of a deep intellectual tradition, one as challenging and laborious to acquire as software engineering, and she is able to assume her readers’ mastery of the references, allusions, and terminology she uses. It is unlikely that her readers will make wild misreadings of her arguments, so long as those readers are drawn from people for whom the time and expense of acquiring this particular $32 book are justifiable.

      But what if this filter is dropped, and the article becomes readable beyond the self-selecting sample that makes up the current readership? What happens when readers have extensive knowledge of the history of operating systems and can easily compare the “modularity” of UNIX with that of CP/CMS, TOPS-10, VMS, and MVS? Will those readers apply the same standards to McPherson’s argument that her reviewers did, or will they look first for technical accuracy? Perhaps more important: will readers from the broader public, for whom the specialized meanings the tradition has given to “common sense” terms are unknown and misreadings are likely, be able to contribute meaningfully to the conversation?

      I feel like this question goes beyond Open Access (and the universal desire not to be the subject of a Slashdot comment thread or similar criticism). In some of the Hack vs. Yack/Code vs. Theory discussions in the DH blogosphere, what starts as a dialog between theorists and coders turns into a monologue by theorists about coders. However, if we require the theory folks to give up their rhetorical tools and require the programmers to write without reliance on their technical knowledge, are we left talking about the weather? Adeline Koh is right that without Open Access, cross-pollination can never happen. This morning I’m wondering whether Open Access is enough.

      Ben Brumfield May 29, 2012 at 11:31 am #
  14. “U.S. Operating Systems at Midcentury: The Intertwining of Race and UNIX” is now online in the Open Access version of Debates in the Digital Humanities and can be read at

    Ben Brumfield April 5, 2013 at 10:32 am #
  1. Digital Humanities as Culture Difference: Adeline Koh on Hacking and Yacking | Read, Write, Now - May 22, 2012

    […] Adeline Koh over at Richard Stockton College (and this fall at Duke, I think), has a sharp post on these kinds of issues, focusing more on the divide between theory and practice or yacking and hacking in Digital Humanities.  Adeline has more theory hope than I do, but I like what she’s probing in her piece and I especially like where she ends up: If computation is, as Cathy N. Davidson (@cathyndavidson) and Dan Rowinski have been arguing, the fourth “R” of 21st century literacy, we very much need to treat it the way we already do existing human languages: as modes of knowledge which unwittingly create cultural valences and systems of privilege and oppression. Frantz Fanon wrote in Black Skin, White Masks: “To speak a language is to take on a world, a civilization.”  As Digital Humanists, we have the responsibility to interrogate and to understand what kind of world, and what kind of civilization, our computational languages and forms create for us. Critical Code Studies is an important step in this direction. But it’s important not to stop there, but to continue to try to expand upon how computation and the digital humanities are underwritten by theoretical suppositions which still remain invisible. […]

  2. TheoryFan » Roger T. Whitson, Ph.D - May 22, 2012

    […] this is a preface to the, in my mind, important conversation surrounding hacking and yacking that Adeline Koh has recently resurrected. I can’t help but echo both what Bethany Nowviskie, Brian Croxall, and Patrick Murray-John […]

  3. Theory, WisCon, chronicling: loosely linked observations « queer geek theory - May 23, 2012

    […] writings about modularity, race, and knowledge production, Adeline Koh recently posted about habitus in the digital humanities. She’s making an argument similar to mine in “Marked Bodies, Transformative […]

  4. Digital humanities, tacit knowledge, and (re)making the world | White Heat - August 20, 2012

    […] first looks at the implications of tacit knowledge and the “commonsense” divisions thrown up between being, thinking, doing, and […]

  5. BABEL 2012: doingness, interdisciplinarity, and the digital humanities (as an afterthought) « Jen Boyle - October 2, 2012

    […] the true digital humanities was about doing things and making things (for a thoughtful overview, “More Hack, Less Yack?: Modularity, Theory and Habitus in the Digital Humanities” and Debates in the Digital Humanities).  Yet, for me, the digital humanities has yet to embrace […]

  6. Developing a Common Language across Race Studies and the Digital Humanities » THATCamp Theory 2012 - October 12, 2012

    […] and the digital humanists. What are some common terms that we use that we think in different ways? (Modularity comes up as one.) What are some of the assumptions that we share/do not share about how cultural […]

  8. Every tool is weapon: Why the digital humanities movement needs public history | History@Work - November 1, 2012

    […] and theorizing about that work (for a great take on this debate see Adeline Koh’s blog post “More Hack, Less Yack? Modularity, Theory and Habitus in the Digital Humanities”). While others have surely said it better, I walked away from THATCamp Theory understanding that […]

  9. resistance in the materials « Bethany Nowviskie - January 4, 2013

    […] inaccessible to most scholars or so coded as “unscholarly” as to be ignored by them. We’re doing what we can, from our end, to fix that. But will it matter? Maybe not to this discipline. Literary critics and […]

  10. Learning by Doing: Labs and Pedagogy in the Digital Humanities | historying - February 5, 2013

    […] based on immediate feedback. As much as I’ve soured on the term “hacking” and all the privileged baggage it can carry, it is a useful term to describe the type of learning I want my students to engage in. […]

  11. Postcolonial Digital Humanities | test - March 19, 2013

    […] class, gender, disability, and for building tools with this in mind (Earhart, McPherson, Cecire, Koh), Amy Earhart has also noted that the 1990s saw a wave of DIY “recovery” digital projects by […]

  13. Definitions That Matter (Of “Digital Humanities”) - uncomputing - March 21, 2013

    […] politics, interpretation, analysis, and close reading are at best playing second fiddle–where people seriously say “more hack, less yack” as if they are watchwords for the discursive humanities, and declare that building a database is so […]

  14. Rethinking Digital Badges - The Blue Review | The Blue Review - April 24, 2013

    […] intellectual environments. (Witness the exhortation that everyone should learn to code, including humanities scholars like myself.) Eric Landrum […]

  15. Toward a Student-Centered Collaborative Approach to DH Scholarship | EarlyAmericanistDHer - May 23, 2013

    […] of v who’s apart from the count of the field — e.g., #transformDH, #DHPoco, Adeline Koh (“More Hack, Less Yack?: Modularity, Theory and Habitus in the Digital Humanities”), Alan Liu (“Where Is Cultural Criticism in the Digital Humanities?”), Amy E. Earhart […]

  16. Hyacking, Intersectionality, and Other DHPocoSS Musings | Public Feminism - July 10, 2013

    […] we’re having awesome discussions about how to change our digital practices and further challenge the hack/yack binary, then, I feel inspired to remember that the issues within the digital humanities stem not just from […]

  17. Together-Alone in Digital ?Humanities? | Digital Modernisms - October 14, 2013

    […] leads us to the hack and yack long-lasting debate in the field of DH. Koh considers herself a ProfHacker. Although they are not […]

  18. Hybrid Pedagogy | Digital Humanities Made Me a Better Pedagogue: a Crowdsourced Article - January 2, 2014

    […] collective, notably Adeline Koh, have critiqued “more hack, less yack,” as a form of tacit understanding. According to Koh, “we need to invest in the creation of a metalanguage that will allow us to see […]

  19. Bend Until It Breaks: Digital Humanities and Resistance - Hybrid Pedagogy - February 19, 2014

    […] take with their scholarship are every bit as important to the project of resistance as are theoretical and institutional advocacy that help to justify such work as scholarship. If the digital humanities […]

  20. reading list: refracting DH - July 7, 2014

    […] Koh, Adeline. Less yack more hack: Modularity Theory & Habitus in the Digital Humanities… […]

  22. A Close Reading of The DHThis Cat: Policing/Disrupting the Boundaries of the Digital Humanities and Strategic Uses for Cat GIFs | - January 9, 2015

    […] what a digital humanist is and is not (for critiques of this post, see Miriam Posner’s post and mine; for Ramsay’s followup which expanded definitions of coding/building, see here). For many digital […]

  23. Working Toward a more Global Digital Humanities | An Aca-Geek Girl Speaks - May 28, 2015

    […] should be tempered with some more cultural criticism such as suggestions offered by Alan Liu, Adeline Koh, or Roopika Risam, for example. The problem is not simply a case of humanities versus sciences. […]
