Interview with Miriam Posner

© Miriam Posner and Figure/Ground
Dr. Posner was interviewed by Justin Dowdall. October 8th, 2012.

Dr. Miriam Posner coordinates and teaches in the Digital Humanities program at the University of California, Los Angeles. When she is not teaching, strategizing, or working with students, she’s writing a book on medical filmmaking, examining the way doctors have used film to make sense of the human body. Her Ph.D. from Yale University is in Film Studies and American Studies. She lives in Los Angeles with her partner and their dog.

How did you decide to become a university professor? Was it a conscious choice?

No, I never really set out to be a university professor. I went to grad school for reasons that now seem to me really naïve (but which I suspect are actually not that uncommon): I liked college, I was good at it, I had nothing else in particular to do, and it seemed glamorous to me to get paid to read and write. I imagined that being a professor would be one possible outcome, but that never seemed like the inevitable, or even most desirable, choice. I was 23; what can I say?

It didn’t take long before I saw that many of my colleagues at Yale had their sights set firmly on the tenure track, and I came to understand that my priorities weren’t the same as everyone else’s. I loved the work I did in grad school, and I learned about beautiful, life-changing things, but I still didn’t feel that being a professor was necessarily the right choice for me. I often felt isolated. Academic protocol sometimes frustrated and confused me. I took little pleasure in the performance of scholarship at conferences and lectures. Much as I loved the ideas, I was exhausted by the jockeying for position and obsessive credential-measuring. I held a number of jobs and internships in grad school — most notably at a museum and at an instructional technology group — and was delighted to discover that I could find intellectual fulfillment in something other than traditional academic work. I wanted, then and now, to do something challenging, intellectual, and fun that made use of my Ph.D. and was basically good for the world. I never felt like “professor” was the only job that matched that description.

Nevertheless, it was personally important to me to go on the academic job market, so that I never felt I was consigned to a different path out of necessity. I ended up with a choice between two postdocs: one a more conventional academic job, and the other a sort of hybrid job, in which I’d help to build a new digital humanities center. This was a great outcome for me, because it gave me a chance to really think through what, in the long run, would make me happy. After a lot of soul-searching, I took the digital humanities postdoc, and ended up very glad that I did.

About a year ago, this job at UCLA surfaced, and it seemed right for me for a lot of reasons. I teach a course per quarter, do my own research, and am considered a member of the core faculty of UCLA’s Digital Humanities program. But I also do a lot of other stuff I find interesting: building the program, counseling students, collaborating on projects, planning events, wrangling for funding. This kind of job wouldn’t be for everyone, but I really love the combination of hands-on and intellectual work.

Who were some of your mentors in university and what were some of the most important lessons you learned from them? 

I was fortunate to have two wonderful dissertation advisers: Charlie Musser, in the Film department, and John Harley Warner, in History of Medicine. They’re very different people, but they worked together beautifully, and I always felt well-supported. Charlie taught me a lot about doing one’s homework and avoiding shortcuts in historical work. The insights Charlie’s known for — about the intermediality of early film, for example — he gleaned from reading trade magazines and newspapers obsessively. It wasn’t magic; it was diligence. John Warner is really wonderful about seeing (and encouraging me to see) both the details and the larger historiographic picture. He also takes risks with sources and disciplines, and inspired me to do so as well. They both taught me a lot, too, about being both an academic and a real, decent human being.

In your experience, how has the role of the university professor “evolved” since you were an undergraduate student?

That’s a difficult question to answer, because I wasn’t really privy to what was going on in my professors’ lives when I was an undergrad. Clearly, different kinds of technology have become important within the classroom, and I’ve become very involved in some of these technologies. But I don’t actually see the arrival of new technology as itself constituting a fundamental change in the role of the professor. My best professors always did what I hope technology helps us do in the classroom; that is, engage students deeply, empower them to take charge of the conversation, honor different students’ various entry points, and give them tools to continue discussion and thought outside of the classroom. There are many ways to do this, with and without technology.

In my mind, the more drastic change has been to the working conditions of many of the professors students come into contact with. (This change was underway while I was an undergrad, but I was shielded from awareness of it.) Remember that about 70% of the people who teach at universities do so off the tenure track. And many of them teach under conditions that would shock their students. This is really astounding, a sea change in the composition of academic labor. We can opine at length about the place of intellectual labor in today’s society, about the decline in popular respect for expert knowledge — but the fact is that the people who are most deeply affected by these changes often don’t have the luxury of participating in these discussions, because they’re cobbling together a livelihood from four different teaching gigs.

What makes a good teacher today? How do you manage to command attention in an age of interruption characterized by attention deficit and information overload?

You know, when I hear arguments that young people today are drastically different from their forebears, or that millennials just can’t relate to olden-days technologies like writing and reading, I frankly tend to tune out. I think that these kinds of generalizations elide all kinds of complexity and nuance in the ways that young people think about and use technology. These students are, after all, complex human beings, with backgrounds, agendas, priorities, and feelings that are as diverse and complicated as anyone else’s. Scorched-earth proclamations about “kids today” are for TED talks, not for interacting with actual human beings.

Of course, cultures change, and we now see students who interact with technology in ways that might be unfamiliar to older people. But my guiding principle in the classroom is pretty simple: people are interested in other people; they always have been, and they always will be. A classroom is a naturally charged atmosphere. Students are interested in and want to engage with each other. They want to try out ideas on, impress, even flirt with the student sitting across from them — or the student on the message board. They enjoy working things out together. I like to be attuned to and mobilize these very human characteristics, whether it’s in person or remotely. Little things like a laugh, a gesture, a moment of grandstanding: these are absolutely key dynamics in a classroom. They create drama and excitement, the same things that interest anyone in anything.

Finally, I assume my students are curious, intelligent, empathetic people who can demonstrate respect for each other, appreciate ideas, and carry on a high-level conversation, and I treat them that way. I’ve never been disappointed.

What advice would you give to aspiring university professors and what are some of the texts young scholars should be reading today?

If you’re an aspiring university professor, you need to understand the way the academy works, and particularly the ways that it’s changing right now. We all need to be aware that what we persist in calling the “job market” is really a game of musical chairs masquerading as a meritocracy.

When you’re a grad student (or at least when I was a grad student), it can be easy to absorb the value system that prevails in much of the academy: that we mustn’t share our scholarship openly, that publishing is more important than anything else in the world, that the academy has a monopoly on wisdom, that a tenure-track professor is the only thing to aspire to be, that those of our friends who don’t hold these jobs have somehow failed. Adhering too closely to these values makes us risk-averse, short-sighted, limited. We become paranoid, anxious, and, I would argue, too fearful to speak out against the ways that the academy is failing its mission.

On the one hand, it seems cruel to tell a junior scholar to do the risky thing — to share your sources, write a crazy dissertation, do a digital project, publish your stuff somewhere nontraditional — because the onus for change should not fall upon the most professionally vulnerable scholars. On the other hand, I also worry when I encounter grad students who study job-advice manuals like scripture, because I’ve seen too many smart, hardworking, passionate people, who’ve dutifully followed all the rules, get washed up on the rocks.

So I say, do what you’re passionate about; know that you are more, and more important, than your CV; and give the Chronicle advice columns a wide berth. We should take joy in our ideas and intellect, not limit them to please a short-sighted notion of what the academy might be.

In 1964, Marshall McLuhan declared, in reference to the university environment, that “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim could be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information?

Well, the university is, and has always been, in crisis. “The university in crisis” has long been a way to justify profound, far-reaching, often damaging alterations in the configuration of the academy, such as the casualization of academic labor, the shuttering of academic departments, and the rising cost of tuition.

I do not lose sleep at night worrying about the effect of the “age of information” on the university. We exist to try to make sense of the world; a surfeit of information just makes our job more important. Will Wikipedia or “massive open online courses” like Udacity and Coursera put the university out of business? I don’t think so. Universities are, among other things, prestige-machines. For better or worse, they have a crucial function in endowing their graduates with cultural capital.

I do lose sleep when I think about some of the economic forces shaping the university today, particularly the drive toward privatization, the increasing unaffordability of college education, and skyrocketing student-loan debt. These factors constitute a crisis, not for the university, which is, after all, an abstract entity, but for the people to whom the university should be accountable.

In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. What do you make of Fukuyama’s assertion and, in a nutshell, what is your own position on the academic tenure system?

I agree with Fukuyama that scholars are often too risk-averse, but I disagree that the tenure system is to blame. On the contrary, I’d blame the lack of tenure-track jobs, or at least the lack of secure, decently paid academic jobs. Scholars of my generation often live in situations of extreme financial precarity. Many of us are saddled with debt, living paycheck to paycheck, and keenly aware that achieving tenure is a rarity. In this situation, who could blame an untenured scholar for not taking audacious risks?

I confess to frustration that tenured scholars don’t speak up for their more vulnerable colleagues more often. Many do, of course, but not as many as I’d like. My guess, though, is that this reticence is a symptom of the same problem that afflicts untenured academics: people are tired, they’re strung out, they’re doing the best they can, and they’re fighting battles on a lot of different fronts. Moreover, as I’ve said, a lot of us have absorbed a value system that condemns the less fortunate, or people who’ve made different choices, as incompetent.

Nevertheless, if you look at someone like Siva Vaidhyanathan, who spoke out again and again when Teresa Sullivan was pushed out of the University of Virginia — that’s why tenure exists, and you could argue that it’s more important in this period of academic retrenchment than ever. I just wish we saw more of that.

I’ve heard many people say that the death knell for tenure has already sounded. I don’t claim to know, but if that’s the case, it’s more important than ever that we look at the way that the majority of professors are actually working, in these poorly paid, contingent jobs, and recognize that these conditions are unconscionable.

In an age when information is coming at us from all directions (e.g., wikis, blogs, and prosumer content), it seems that the line between truth and opinion is often blurred. How do you see the study and practice of digital humanities functioning in helping us better understand collective intelligence, and furthermore, do you believe that we are moving toward a future society that has a better ability to validate and define truth?

Well, I’m a digital humanist, it’s true, but I’m also a historian, and like any historian, I’m suspicious of anything that sounds like teleology, or any straightforward notion of historical progress. Our culture is changing, our relationships with each other and with technology are changing, but we’re not evolving toward anything in particular; we’re doing our best to make sense of a complicated world, just as we always have.

So why engage in digital humanities if I don’t think it’s going to lead us to any kind of higher plane? For the same reason anyone studies anything: because it’s interesting, because there’s beauty in it, because it helps me understand the world. One mistake that people often make about digital humanities (one that I made myself when I first encountered it) is to assume that the field has a more naïve understanding of truth than, say, history or literature. When you see people getting excited about big numbers, computer analytics, and datasets, it can be easy to assume that they think they’re uncovering objective truth. Not so. The excitement that accompanies a digital humanities “discovery” comes from finding a new angle on a humanities question, a new way to think about a body of work. The seduction of digital humanities is not about ceding authority to a computer; it’s about the productive friction of a humanities question — which is inherently irresolvable — against a different kind of logic entirely. Given that both these epistemologies govern our lives, how do we begin to resolve their differences?

Ian Foster, in his article “How Computation Changes Research,” states: “Every field of research may be changed by computation in two distinct ways: first, computation enables broader access to both raw material and research products, and second, computation enables new research approaches based on computer modeling, data analysis, and massive collaboration.” He further suggests that, “in the case of the humanities, computation also has the potential to transform research in a third way, namely changing how humans communicate, work, and play, and thus, to some extent, what makes us human.” Do you agree or disagree with Foster’s analysis?

As you can probably tell, I’m not generally one for wide-ranging forecasts about the human condition. I don’t believe in technological determinism, although I do believe that technology can have unforeseen implications for the way we live our lives. If we’re changing the way we communicate with each other, if we’re becoming different kinds of human beings, I’d ask, “What’s driving this?” Technology might be enabling these changes, but it’s not magical; it’s a tool. We’re at a cultural moment where things like iPhones and Facebook have prominence; why is this? Could it have to do with the kinds of behavior our economy rewards? With the increased value we ascribe to certain kinds of information? These kinds of explanations are more persuasive to me than imbuing technology with a mind of its own.

I was wondering if you could talk a bit about some new projects/research coming out of the nexus of technology, medicine and the arts that have recently excited you?

You know, the area you’re pointing to is, I think, still developing, but it has a lot of potential. For example, I’ve been closely following the development of FemBot, a network of feminist scholars in technology, media, art, and sciences. Feminism is what initially drew me to the history of medicine — I was fascinated to learn about all the different ways the body could be interpreted — and I’m excited to see what happens as FemBot develops. I also noted with interest that the National Endowment for the Humanities Office of Digital Humanities has begun a partnership with the National Library of Medicine, and I’m very curious to see what emerges from that. I think the Medical Museion, at the University of Copenhagen, has been doing some really interesting work that highlights and interrogates medicine’s natural kinship with technologies of display. And Barbara Maria Stafford, whose work I really admire, is up to something intriguing, looking at the connections between neuroscience, cognitive science, and the humanities.

What are you currently working on?

I’ve got a few things on my plate. I’m writing a book on medical filmmaking, in which I look at the ways that physicians have used film to make the human body legible. It turns out that filming the anatomical body is incredibly hard to do — things are too bloody, too messy, too dark — plus, how do you convey a process like circulation on a live body? Physicians have turned to this incredible array of assistive devices, like special effects, animation, and dissection, to capture what “should” be self-evident. I talk about how film reveals the key role narrative plays in helping us understand our bodies as coherent and manageable.

I’m also very interested in medical ways of knowing, a topic I grew interested in while I was researching the lobotomist Walter Freeman. I was fascinated to see how his clinical logic (which relied heavily on visual evidence) proceeded step by step, until it reached its logical conclusion, lobotomy. I think technology might help us explore questions of epistemology in interesting ways, and so I’ve been thinking about what might happen if we used unusual, nonlinear interfaces to explore the history of medicine. Working with photos of Freeman’s lobotomized patients also got me thinking a lot about the ethics of medical images, and about how to reclaim the individual histories of these people. So I’ve been thinking a lot about what kinds of interfaces and projects might help us to do that.

I’m writing three different articles (sort of) at the same time. One is on the place of feminism in digital humanities, which is an area that digital humanities is struggling with right now. Another is on the World War I anti-prostitution film The End of the Road, and the way that it posits various kinds of sexual commodification. And I’m also working on a piece about how libraries can support digital humanities scholarship.

And then, of course, I’m working on a lot of things here at UCLA aside from my own scholarship. We’re developing and refining our curriculum for teaching digital humanities here, and it’s been really fun to experiment with pedagogy. We have really wonderful, adventurous students, and we’ve had great success running the classroom as a studio: assigning them novel problems and watching them solve them together. I also spend a lot of time on administrative questions that might seem boring, but which I find really interesting: How do we sustain and scale a program like this in the long term? How do we indulge our students’ excitement and desire to learn, given our limited resources? Can we build a program in which students feel truly supported? How do we ensure that our students are truly grappling with humanities questions in their digital work?

 —

© Excerpts and links may be used, provided that full and clear credit is given to Miriam Posner
and Figure/Ground with appropriate and specific direction to the original content.


Suggested citation:

Dowdall, J. (2012). “Interview with Miriam Posner,” Figure/Ground. October 8th.
<http://figureground.org/interview-with-miriam-posner/>


Questions? Contact Laureano Ralón at ralonlaureano@gmail.com
