Interview with Ian Bogost

© Ian Bogost and Figure/Ground
Dr. Bogost was interviewed by Laureano Ralón. June 25th, 2011.

Ian Bogost is a video game designer, critic and researcher. He is a professor at the Georgia Institute of Technology and a founding partner at Persuasive Games. His research and writing consider video games as an expressive medium, and his creative practice focuses on games about social and political issues, including airport security, consumer debt, disaffected workers, the petroleum industry, suburban errands, pandemic flu and tort reform. He is the author of Unit Operations: An Approach to Videogame Criticism and Persuasive Games: The Expressive Power of Videogames as well as the co-author of Racing the Beam: The Atari Video Computer System and Newsgames: Journalism at Play. Bogost also recently released Cow Clicker, a satire and critique of the influx of social network games. He holds a bachelor’s degree in philosophy and comparative literature from the University of Southern California and a master’s degree and Ph.D. in comparative literature from UCLA. He lives in Atlanta.

How did you decide to become a university professor? Was it a conscious choice?

I became a professor by accident. This probably requires a bit of explanation.

I was a student during the very start of the 1990s technology boom. When I started undergraduate school, I knew I wanted to study either computer science or philosophy. At the time it was clear that this was a choice—one or the other—although I now realize that unseen institutional and disciplinary structures somewhat artificially amplified the separation. Eventually, somewhat dramatically, I chose philosophy over computing, at least for my scholarly pursuits.

See, this was precisely the moment when the World Wide Web had transitioned from the curious invention of a physicist to a fledgling global communications infrastructure. It wasn’t very hard to get involved with it, and the unique access to servers and workstations afforded by a university made it even easier. So I decided to pursue my interest in computing professionally, working for a variety of interactive development and advertising firms in the Los Angeles area through most of undergrad and all of grad school.

Despite the incredibly trendy and lucrative nature of that moment, I went to graduate school anyway. I don’t remember why—it seemed like the right thing to do, the best way to realize the aspiration of intellectualism. I’d be lying if I didn’t admit that pride was also involved. I’ll probably earn no sympathy for saying this, but working in the tech industry was easy, whereas rising to the level of my scholarly mentors was hard.

Clinging too tightly to my background in philosophy and comparative literature, I spent an abortive year as a student at Cornell, where the oppressive greyness of Ithaca along with the disconnectedness of the scholarly climate made me realize that it wasn’t an intellectual life I was after so much as one that was of the world, not disconnected from it. Cornell became a symbol of this detachment for me, even more than it deserved: an Ivy on a hill, looming over a hopeless, failed rust-belt town. It lorded its aristocratic status like a medieval sovereign with one hand while stroking intractable tomes about populism and revolution in the other.

I returned to Los Angeles with the intention of completing my PhD at UCLA, but I quickly became embroiled in the technology and entertainment industries again. Part of that draw came from my earnest interest and commitment to computing as a medium—that had always been a part of me. But another part was pragmatic. I had a young family and my graduate stipend seemed unworkable in an expensive city like LA. I also feared I was missing a moment, choosing to run from rather than face my then-contemporary reality. It was too easy just to read poetry. So I decided to try to have it both ways—to do a humanities PhD and to be a creator of computational media.

That may seem very ordinary now, but it wasn’t then, even if then wasn’t so long ago. There were all manner of practical obstacles, not the least of which was my own rising level of truculence, a virtue necessary to contend with the realities of business, especially in Hollywood. This only got worse as I became more professionally successful. But more complex was the question of what it would mean to complete a PhD in my unique circumstance.

Eventually I concluded that it wouldn’t be possible, and I tried to punt, submitting a wonky dissertation prospectus about European modernism. Mercifully, my committee called bullshit and failed me—the best thing they could have done for me at the time. My mentors Emily Apter and Ken Reinhard in particular told me they’d support my pursuit of the degree only if I’d take seriously the connection between computation and media that they somehow knew I was capable of. Thanks to the two of them as well as Kate Hayles, I managed to eke out what felt like a novel comparative media take on computing, much of which became my first book Unit Operations.

But the most important factor in my becoming a professor was plain dumb luck. Even as I was finishing the PhD I was convinced that finding a place I’d be happy as a scholar was unlikely. I wasn’t willing to take any old job no matter the location just so I could call myself Professor, and I was fortunate enough to have other options. That’s not so uncommon in the sciences and engineering, but it’s pretty rare in the humanities. Up to the end I suspected I’d go back into industry.

The fact that Georgia Tech happened to be hiring that year, and that its program was so weird and unique, so unlike anything else at the time (but not anymore, mercifully), made me think there was hope. Again I’m going to sound churlish and ungrateful for saying this, but I think I was a viable candidate to be a professor partly because I didn’t have a fixation on whether or not I pursued such a life.

In retrospect, I’m immensely grateful—and very, very lucky—that I’ve been able to find a way to blend my interests in philosophy, media, and computing in a truly synthetic way. But I also want to resist claiming that “I can’t imagine doing anything else” or that I was “meant to be” a professor. There are so many contingencies in our lives; who are we to say that we are their masters?

In your experience, how did the role of university professor evolve since you were an undergraduate student?

Even in the short time I’ve been in the academy as a student and a professor—less than two decades—the undergraduate degree has become an assumed asset of the middle class, even more than it already was when I entered school. At the same time, primary and secondary education in the United States has been contorted to focus on simplistic outcome-based education on the one hand, while productive vocational programs have been all but obliterated on the other. All of these factors make it increasingly unlikely that college freshmen have considered the question of whether or not they even ought to go to college, and, if they ought to, why. More recently, Silicon Valley’s incredible power over contemporary culture has resulted in a perverse techno-libertarian overcorrection that suggests that college is useless and that young people should just invent pointless technologies for financial leverage—or, alternatively, work as servants for the wealthy, presumably.

Unfortunately, many professors in the humanities have responded to these circumstances by decrying the “neoliberalism” of the contemporary university, particularly after the global economic crisis of 2008 only further accentuated the extent to which it seemed that free-market capitalism risked driving all decisions within our institutions.

But that’s a particularly facile answer. It’s also an answer that allows its respondents to resort to the same old, tired critiques of contemporary culture that have festered since the 1960s. It’s an answer that justifies endless glimpses back over our shoulders, revealing our jealous desperation to restore “a respect for the liberal arts,” a notion I’ve come to believe is as mythical as it is seductive.

Overcoming this obsession with the endless idea of the ending of a particular historical moment—surely that must be the most dramatic role-shift in the job of the university professor. We have been forced to strip the tweed from our elbows and descend from our keeps, and to re-enter the world we have for so long criticized from our safe havens. We are now, more than we have been in a very long time, of the world rather than sheltered from it by the shield of the “low faculties.” This is a cold and soggy and terrifying feeling.

There are dirty aspects of being forced into the outdoors, having to root for sustenance in its wilds and adopt practices that seem simultaneously unbroken and beguiling. Few are ready to face it, and few realize that facing it doesn’t mean accepting censure or reverting to old ways. But it is time to stop “critiquing” and “interrogating” and to begin inventing something new.

What makes a good teacher today? How do you manage to command attention in an “age of interruption” characterized by fractured attention and information overload?

I teach computational media, which is a hard task, because expertise demands mastery of so many different skills: close reading, critical writing, technical proficiency, and background knowledge in a diversity of areas. Unlike in more traditional disciplines, students enter our field with widely varying preparation and interests. It is tempting to allow each to work to his or her own skills and interests, but this is a wrong-headed approach. For example, game studies and design require both breadth and depth, and the only way to inspire desire for such knowledge is to set the bar very high, to incorporate earnest questions into every classroom, and to see one’s students as colleagues rather than as disciples. Three values, then: expectations, problems, and apprenticeship.

Expectations: Good teachers demand extremely high performance. Every field is different, but for me, that involves critical writing, history, technical adeptness, creative lucidity, and public speaking. It’s easy for faculty to gripe about grade inflation, attention deficits, and privilege, but it’s fairly straightforward to combat these challenges just by setting high expectations. When you do, students respond by reaching beyond their abilities, learning to seek help when they require it, and iterating on ideas rather than settling for the first one. I offer students a clear path to “doing fine”—a perfectly reasonable goal—and give them a strong incentive to strive for excellence. I have a whole lecture about this that I use on the first day of my lower-division courses, in which I explain why I will award the grade of “C” to students who do exactly what I ask of them.

Problems: Universities are stupidly structured, suffering still under the disciplinary separateness of departments and colleges that compete under nested shrouds of complex institutional politics. Kate Hayles once suggested an alternative in which students might declare “problems” instead of majors, and in fact I suggested a related approach to “networked research” in the final chapter of my book Unit Operations. In the absence of a solution like the ones Hayles and I (and others) envision, I’ve tried to put this practice to work as best I can within the existing infrastructure of the institution. At a place like Georgia Tech there’s a lot more flexibility to do this—one of the benefits of working at a technical institute over a traditional university.

Teaching has to center on problems rather than material, and good teachers invite students to ask a question along with them. Whenever possible, I purposefully design my courses around questions for which I myself do not have definitive answers, but about which I have earnest curiosity. The approach not only helps me learn from my students, but also allows my students to learn how scholars approach research—and how professionals approach creative and technical problems.

Apprenticeship: The best teachers strive to pursue an apprenticeship model for teaching and advisement. In addition to problem-based instruction, I work closely on research with my graduate students, including extensive collaborative writing and publishing. While it is common to collaborate with the students one is bankrolling in the lab environments of the science and engineering disciplines, it is less common to work productively with students in the humanities and social sciences. And even in the sciences, graduate students are treated more like employees in a research lab than like equals.

I’ve tried to develop close working relationships with my graduate students, collaborating with them on published research as much as possible. This is a serious pursuit and not merely an occasional sideline; for example, I have recently co-authored a book on games and journalism (Newsgames: Journalism at Play, MIT Press 2010) with two of my doctoral students, Simon Ferrari and Bobby Schweizer. Such efforts pay off particularly well early in a graduate student’s career, since the student takes away concrete lessons on producing extensive, professional scholarship. Teaching becomes a process of developing colleagues, not of training up underlings.

What advice would you give to young graduate students and aspiring university professors?

Be contemporary. Have impact. Strive for it. Be of the world. Move it. Be bold, don’t hold back. Then the moment you think you’ve been bold, be bolder.

We are all alive today, ever so briefly here now, not then, not ago, not in some dreamworld of a hypothetical future. Whatever you do, you must make it contemporary. Make it matter now. You must give us a new path to tread, even if it carries the footfalls of old soles. You must not be immune to the weird urgency of today. This lesson applies no matter the subject of one’s interest and expertise, whether it be videogames or Hittite or chansons de geste or whatever else.

Some will object that to respond to current trends assures instrumentalism, a foolish desire to remain ever more current at the expense of true values and virtues. But why must it be an all-or-nothing gambit? I often wonder why scholars in the liberal arts seem so repelled by the tiny slice of the universe fate has cut for them that they want so desperately to escape back into a favorite yore or up into a notional abstraction.

A piece of specific advice in this regard for graduate students in particular: if you’re not experiencing tension with your advisor, you’re doing it wrong. To succeed, you have to scrape off some tiny sliver of novelty and whittle it into treasure. Paying homage to a committee’s collective comfort may appease them, and it may even ease your burden in completing the degree. But it won’t lead to success. Success comes from breaking free of the past—even the very last sunset—and forging on elsewhere. While some considerable measure of pragmatism must be mustered in order to get through the whole ordeal, don’t shy away from the discomfort of disapproval. Embrace it.

You are a Professor at The Georgia Institute of Technology, where you work in the School of Literature, Communication and Culture. Is video game theory typically housed within Literature and/or Communication Departments across North America? Should video games be regarded exclusively as a “text”?

Perhaps the weirdest and loveliest thing about game studies is that we really have no disciplinary home. Unlike film or literature or art history, there are very few departments of game studies, and those that do exist are usually agglutinations of a variety of semi-related areas under the overall rubric of interactive media or computation or something similar. In my case, the Graduate Program in Digital Media, of which I am the director, is housed in the School of Literature, Communication and Culture, which is already a pretty unusual academic unit.

On the one hand, this produces a tremendous anxiety of place and of method. But on the other hand, it prevents game studies from collapsing into a singular practice. We come from philosophy, literary studies, sociology, media studies, computer science, anthropology, electrical engineering, film, performance studies, and many more backgrounds. We’re always sort of scraping by, trying to make room for ourselves, to establish ourselves, to stave off accusations of prurience, to chase that eternal dream of institutional legitimacy. But maybe the beautiful thing about games comes partly from their prurience, their illegitimacy. They force us to focus on specific problems instead of on fields or objects or disciplines.

What theories and methodologies are commonly used in video game research generally and in your own research specifically?

Within game studies, our failure to agree upon methods has produced a number of historical quarrels. The most famous involved a dispute over whether games ought to be treated as a form of narrative, so as to accommodate literary or filmic methods of analysis, or treated “ludologically,” as their own subject. Another involved the question of whether the game artifact or the players’ behavior ought to be the subject of inquiry. As I write this, a number of scholars find themselves disagreeing about whether the experience of the game is of greater concern, or if its material structure is of greater import. Yet another, ongoing concern asks what the relationship is between game theory and game design and development. On the technical side of things, there’s a question about whether new methods of design arise from computational invention, or whether the reverse is true. Within computer science specifically, there’s a tendency to use games to “sexy up” more mundane research in AI or graphics or other areas, an approach the computer scientists focused explicitly on games find instrumental and insincere. And since we’re dealing with a computational medium, there’s the ongoing question of what (if any) level of technical expertise is required to study these works effectively.

I’m sure an outsider will quickly realize that none of these disputes is really an either/or proposition, but of course scholars need something to dispute, and conflict is a sign of a healthy, productive topic of inquiry. But sometimes we do get stuck in discipline-on-discipline brawls, and it can seem silly and frustrating.

As for my own methods, I’m particularly interested in video games, which is just to say, a particular kind of media that runs on computers rather than on cards or boards. There’s much for the game scholar to learn from non-digital and folk games, just as there is from the plastic arts and architecture and literature. But I tend to focus on the ways the computer has facilitated the particular features of this medium. In that respect, I’m not only interested in the representational aspects of games and how those resemble and differ from other media like television or novels (the subject of my first two books, Unit Operations and Persuasive Games), but also in the unique material construction of videogames—including the differences between particular hardware and software platforms.

The latter subject is the topic of a book series I co-edit at MIT Press with Nick Montfort, called Platform Studies. It’s not limited to games, either; it’s open to any computational platform. The series invites books that look at the relationship between the design of hardware and software platforms and the kinds of creativity that those platforms make possible. Nick and I wrote the first book in the series about the Atari Video Computer System (Racing the Beam, 2009). We’ve got forthcoming books in the series on the Wii and the Amiga, with even more coming soon.

I am aware that you are currently working with Levi R. Bryant on a book about Marshall McLuhan. What can you tell us about this project? How does your interpretation of McLuhan differ from that of Media Ecologists?

Levi and I are still really planning this project, but we’re both very excited about it. The gist of the book is simple: we’re offering a perspective on McLuhan as a first principles metaphysician rather than “just” as a media theorist.

It’s well known that McLuhan thought of media in a very general way, as anything that extends the human senses. We’re making both an interpretation of and a revision to this premise. First, we understand a “medium” simply to name any thing that exists—an object. Then, we understand extension not only to refer to humans, but to any other object whatsoever. In other words, a medium is just a thing, and a thing can extend and influence other things in numerous ways. We frame this theory primarily through a new reading of the tetrad.

Writing about this makes me look forward to really digging into this project with Levi. He and I both have so many irons in the fire, it might be tempting to see us as bricoleurs who are promising more than we can deliver. But really we just have a lot of work in us yet to come out.

Did you read McLuhan systematically as a graduate student? What attracted you to his work in the first place?

Not systematically at all. McLuhan was certainly not on the syllabi of the theory and methodology courses I took in philosophy and comparative literature. I do sometimes wonder how different the world would be if all the critical theory types read McLuhan instead of, say, Deleuze and Guattari. Maybe better?

I think I only came to McLuhan through Neil Postman, and only by accident at that. I’m not sure when I finally returned to McLuhan in earnest, but I suspect it was only after I’d transitioned my trajectory from philosophy and literature to media theory, where McLuhan is held more dearly.

But anyone who studies popular culture eventually discovers that they owe a debt to McLuhan. He’s got to be the most underrated thinker of the twentieth century, even with the enormous following his work has deservedly earned. I suspect some day we’ll look back on the period between the 1960s and 2000s and wonder why we spent so much of it reading French theory.

Well, actually, McLuhan wasn’t too impressed with computers, although the so-called “computers” of his time were quite different from what they are today. Still, we know that the man was concerned primarily with the TV medium/environment. How do you think McLuhan’s general media theory bears upon computer science research generally and video game research specifically?

McLuhan made some bad bets about which media forms would be most influential in the twentieth century, but he got a lot right too, even if in a slightly transmuted way. In any case, it doesn’t really matter because he left us with such a useful general theory of media: that the properties of a medium are important objects of interest. That’s a premise that’s very easily extensible to computers, as I think Nick and I show well in Racing the Beam. In fact, we go to great lengths in that book to show how closely the computer was designed around the operation of the television.

The problem has been that very few scholarly or popular writers have taken that charge seriously enough, either because they don’t possess the technical expertise to really understand the properties of computing writ large or specific computational apparatuses in particular, or because it’s so easy to get positive attention for celebrating or decrying a medium for its effects.

In fact, I begin my next book (How to Do Things with Videogames, forthcoming in September from University of Minnesota Press) with a discussion of just how ubiquitous the media ecological approach has become, whether or not those who deploy it draw inspiration from McLuhan specifically. While that might seem like the greatest possible victory for media ecology, it’s also a liability: we have become so focused on looking for the macroscopic effects of media on society that we don’t bother to look at the more particular effects of specific media on specific practices. I offer an alternative in the book, which I call “media microecology.”

In one of your books, you talk about the “expressive” power of video games. What exactly is that power, and where do you see future generations of video games heading?

That’s the subtitle of Persuasive Games, which is a book about how video games (and computational media more broadly) make arguments and express ideas. In that book I offer a theory I call procedural rhetoric, which argues that video games and other computational media express in the unique and powerful form of the computational model: games make claims about how things work by building computer models that depict the operation of simple or complex systems. Given that our world is full of complex systems—from climate to economics to health—and given that we’re so inclined to ask for simple answers to the complex problems those systems generate, I argue that games offer themselves up as a medium uniquely positioned to increase rather than decrease our understanding of and tolerance for complexity.

In How to Do Things with Videogames, I expand on this theory, suggesting that in addition to procedural modeling, video games have the secondary properties of roles—putting ourselves in someone else’s shoes, someone subject to the rules the model constructs—and of worlds—providing meaningful context for the operation of a model and the experience of it in a particular role.

As for the question of where games are going, a great deal of attention has been paid to the “pro-social” uses of games, for education, for activism, for “changing the world.” I’m myself responsible for some of those arguments, and they’re all well and good. It’s certainly easier to get good press and good grants by waving those flags.

But there are far weirder futures for games. In How to Do Things with Videogames I argue that the breadth of use of a medium is one way to measure its impact and maturity, and the book is composed of short accounts of those various uses. I’m hopeful that this perspective may yet win out. Video games aren’t just entertainment, nor are they just education. They’re capable of so many different things—from education to art to pornography to tools to marketing to music—just like any other medium.

What are you currently working on?

I’ve already mentioned How to Do Things with Videogames, which I hope will hit the streets by late August. The book offers a tiny McLuhan-influenced media theory accompanied by twenty short pieces that characterize the many different uses of video games.

I’ve also recently finished a more unusual work, A Slow Year. It’s a set of “game poems” for the Atari VCS, which also includes a set of essays about games and poetry and a series of computer-generated haiku that relate to the themes of the games. I was fortunate to have it selected for both of the major independent gaming festivals, and it even won two awards at last year’s IndieCade festival. The work has been available in a paperback edition with CD-ROM since late 2010. I’ve just recently finished a hand-made, signed, numbered limited edition that comes with an Atari cartridge and a hardbound book, packaged in a fancy leather box. This is my go at creating an “art game,” a topic far too charged to say much more about this late in the interview.

I have another book coming out late this year or early next, my contribution to the corner of Speculative Realism known as Object-Oriented Ontology. That book is called Alien Phenomenology, and it’s about the ways things perceive and encounter one another. The book also offers a theory of philosophical “carpentry” (a term I borrow from Alphonso Lingis via Graham Harman), which proposes a new approach to philosophical creativity beyond (but not excluding) written texts. Alien Phenomenology is certainly the most traditionally philosophical book I’ve written, but it also includes a great many interpretations of specific objects, including media objects.

As a follow-up to the Newsgames book, I’m working with my UC Santa Cruz colleague Michael Mateas on an authoring system for small-scale current event games, which are like the videogame equivalent of editorial cartoons. That work is supported by the John S. and James L. Knight Foundation, and we should start testing the system with news organizations later this year. It will be released as an open-source platform in 2012.

Apart from those, I’ve got a number of other projects in their early stages. One is a collection of video game criticism I’ve been squirreling away over the years. Another is a video game about arbitrariness and choice that features computer-generated characters and gameplay. Another is a book on sports video games. And yet another is my attempt at a popular non-fiction trade book, the details of which I’m going to leave mysterious for now.

Oh, and then there are the cows.

© Excerpts and links may be used, provided that full and clear credit is given to Ian Bogost
and Figure/Ground with appropriate and specific direction to the original content.


Suggested citation:

Ralón, L. (2011). “Interview with Ian Bogost,” Figure/Ground. June 25th.
<http://figureground.org/interview-with-ian-bogost/>


Questions? Contact Laureano Ralón at ralonlaureano@gmail.com




Interview with Steven Galt Crowell

© Steven Galt Crowell and Figure/Ground
Dr. Crowell was interviewed by Laureano Ralón. June 22nd, 2011.

Steven Galt Crowell (PhD, Yale) is Joseph and Joanna Nazro Mullen Professor of Philosophy at Rice University, where he has taught since 1983. He is the author of numerous articles on issues and figures in what is called “Continental” philosophy, but his primary fields are transcendental philosophy and phenomenology. Areas of particular interest include philosophy of mind, meta-ethics, philosophy of art, and philosophy of history. He is the author of Husserl, Heidegger, and the Space of Meaning: Paths Toward Transcendental Phenomenology (2001), which lays out the neo-Kantian background of the phenomenological project of investigating intentionality and argues for a closer relation between Husserl and Heidegger than is usually recognized. Subsequent work has explored the dependence of meaning or intelligibility on our responsiveness to the normative, or “measure.” He is co-editor, with Jeff Malpas, of Transcendental Heidegger (2007) and editor of the forthcoming Cambridge Companion to Existentialism. Crowell also serves as co-editor of the journal Husserl Studies.

How did you decide to become a university professor? Was it a conscious choice?

As a child I always planned to follow in the footsteps of my maternal grandfather and become a doctor. I had a keen interest in (though not much aptitude for) science. When I went to UC Santa Cruz as an undergraduate I pursued those studies for a couple of years but I also fell, so to speak, “under the spell” of some excellent philosophy teachers there: Paul Lee, first, who taught the Greeks and also Kierkegaard, and then definitively Maurice Natanson and Albert Hofstadter, who got me going on phenomenology. I also studied (if that is the right word) with Norman O. Brown and put all of these things together into an independent major, which I called “Ephemeral Studies.” At some point I had let the science courses lapse, but I never really thought about what I would do when I graduated. I didn’t really think of my major as “philosophy” since it was quite interdisciplinary. But as the date for graduation neared I realized I did not want to stop doing what I was doing – reading, thinking, and writing about philosophy and related issues – so I thought it might be good to go to graduate school in philosophy, where I could do just that. I was very late in applying, but Natanson pointed me in the direction of the Masters program at Northern Illinois University, where they offered me a fellowship. I thought, “well, that is a pretty traditional program, not really like UC Santa Cruz. If I still enjoy philosophy while I’m working there, then perhaps it might have longer-term potential as a career.” And so it did. The “decision” to become a university professor, in other words, was just more or less a consequence of the fact that I had found something I very much enjoyed doing; it was just a natural extension of that.

Joshua Meyrowitz’ thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how did the role of university professor evolve since you were an undergraduate student? 

Interpreted quite broadly, Meyrowitz’s thesis seems right (though I have not read the book): situations and roles change when media change, if for no other reason than that one has to deal in some way or other with those changes. And while it seems that “situations” change essentially when media change (since situations are just defined by what elements are in them), it is not clear that “roles” change essentially, as opposed to changing some of their contingent features. Thus, for instance, it is certainly true that the role of a university professor now involves a lot more interfacing with technology than it did when I was a student, but has this changed the role essentially? Maybe it’s just me (and I started my career as a professor just as the first Macintosh was hitting campuses), but I don’t see my role as all that different from that of my teachers. Sure, I have to navigate the latest “interfaces” – a lot of course administration is done online now, and email has made me more accessible to students than my teachers ever were – but my role is the same, at least as I see it: provide a context in which students can learn something about philosophy and grasp its importance to their lives and concerns. If I were to say where the role has threatened to change essentially, it would have less to do with new media than with larger economic forces: there is extraordinary pressure on professors to conceive their role as that of a “content provider,” where students are positioned (and are encouraged to think of themselves) as “consumers” or, indeed, “customers.” And we know that the customer is always right in a market economy. I reject this model. What the professor has to offer is not a product and students are not consumers. But I certainly understand why such ideas have taken root in the university.

What makes a good teacher today? How do you manage to command attention in an “age of interruption” characterized by fractured attention and information overload? 

I may not be the best person to answer these questions. The good teachers that I had in college had a kind of integrity (in the sense of being obviously passionate about their subjects and able to communicate it with depth and mastery) that drew me to them, made me want to be where they were, inhabit the world of what they were teaching. I have tried to exemplify those virtues, but this is a very indirect thing. In the Platonic model, this would be called the “erotic” character of pedagogy: as the student participates in thinking about what is being said by the teacher, in questioning and discussing it, he or she is confronted not only with a cluster of teachable “information” but also with what it means for him or her to be. To “learn” in this sense is to trust one’s partner (the teacher) to care about one’s own involvement in the learning process, to care about one’s “self” – not in the sense of being all concerned with a student’s personal problems, or being friends in the usual sense, but in believing in the importance of the common enterprise. Thus, in my own experience as a student, there was never a question of how my teachers (who all had very different styles) could “command my attention.” It was not a strategy with them; they were just who they were, and they were not teachers to everyone. Beyond trying to exemplify the kind of integrity I mentioned above, I don’t know what more it might take to be a good teacher. Certainly, this model is a far cry from the demand that one try to overcome the sorts of distractions that you mention: “fractured attention and information overload.” I don’t think there is any recipe for this. One thing I do, perhaps, is to take a stand on some point that is diametrically opposed to what passes for obvious these days, and argue it as though the opposite were obvious. This can sometimes dislodge students from a certain complacence, as can “irony” (as Socrates knew well). Some of my colleagues are great entertainers – and I don’t mean this in a pejorative sense (though sometimes mere entertainment substitutes for teaching). In the best case, such teachers are able to cut through attention deficit and get students to focus during the class period. Whether any real teaching goes on after that, it’s not for me to say. I’m not an entertainer, and it has been my great good fortune not to have to capture the attention of students in large lecture courses filled with those who would rather not be there. I’m not sure I would be able to function well in such an environment, but my hat is off to those who succeed there. It is by far the hardest thing a teacher can be asked to do.

What advice would you give to young graduate students and aspiring university professors?

Follow your bliss, and don’t try to over-strategize about a “career”. These days – when graduate school is far more “professionalizing” than it was when I went – this can be hard to do. Students are asked to publish earlier and earlier, and there is more demand on them to teach their own classes and give papers at conferences. All of this is important, and is certainly part of the profession. And most students enjoy doing it. However, it can detract from time spent really digging deeply into the subject and also from time that would otherwise be available for random exploring, reading, and thinking about stuff that seems not directly related to your subject, or thesis, or whatever. But unless you devote time to such explorations in grad school – to really reading widely and deeply in ways not immediately tied to some paper you are writing for a class or a conference – you may never learn what the “freedom” of the mind is. In philosophy, at least, the most important thing is to find your own voice, your own stance, an ability to deal with whatever comes your way from a flexible, yet coherent, point of view. And this is a skill that is not necessarily well developed by writing for publication and thinking always in terms of the next line on the vita.

In 1964, Marshall McLuhan declared, in reference to the university environment that, “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information?

This is a big question. One might be tempted to say that the university is always in some crisis or another, but I do think that the question of “departmental sovereignties” (as McLuhan put it) is particularly perplexing today. The modern research university has, roughly, the disciplinary structure bequeathed to it from the 19th century model developed in Germany by Wilhelm von Humboldt, in which there was something like an ordering of the sciences, with philosophy providing both the principle of order and an overall account, in terms of the “Idea,” of the various scientific domains. Today this is all but gone. As Lyotard pointed out (in The Postmodern Condition), the “grand narratives” that traditionally served to organize knowledge no longer command acceptance, and there is nothing that has replaced them. Lyotard himself suggested that a new conception of knowledge – roughly, one that values making “connections” between various far-flung bits of information rather than going deeply into traditional problems in traditional fields: a “network” model that dispenses with ideas like progress, growth of knowledge, etc. – is emerging, and maybe he is right. But if that is so, it has not stabilized itself into a form that is easily managed in the university as an institution. Without reverting to the cliché that before you can be interdisciplinary you need to know something about the disciplines, it does seem to me that an awful lot of work done in the university (or at least in the humanities, which I know best) now is done without any clear standards (or understanding) of what would constitute success. There are countless interdisciplinary formations which change all the time, but the sense of what needs to be mastered in order to do such work in a rigorous way seems elusive, at least to me. Such studies often seem to borrow very selectively from traditional fields and methods – history, sociology, philosophy, and so on – and produce ideas that are “suggestive” and perhaps empowering or “critical,” but in the end quite transient. It all reminds me of things like Wikipedia: lots of people adding their two cents, which is all fine and good until it becomes a battle over which version of Paul Revere’s ride is the right one. In the 1980s culture wars, people were urged to “teach the conflict,” which is fine as far as it goes. But without the idea that there is something like a “right” version of the story (or at least a much better one), I don’t really see the point of having an institution – the university – in which to focus on the conflict. The real world provides plenty of space for that. And at least so far, the turn to anthropology or history to replace philosophy as a kind of organizing discourse for university studies has not provided a clear alternative.

Let’s get technical. In one of his books, Guerrilla Metaphysics, Graham Harman, one of the co-founders of the philosophical movement known as Speculative Realism, makes a powerful critique of phenomenology. First, he identifies some inherent contradictions: “The cumulative lesson of this book so far is that phenomenology is caught at the midpoint of two intersections: (1) On the one hand, we deal only with objects, since sheer formless sense data are never encountered; on the other hand, an “objects-only” world could not be tangible or experienceable in any way, since objects always elude us. (2) On the one hand, phenomena are united with our consciousness in a single intentional act, while on the other hand they are clearly separate, since they fascinate us as end points of awareness rather than melting indistinguishably into us.” Second, he accuses phenomenology of remaining a “philosophy of access” and neglecting to recognize what his colleague Levi R. Bryant has called a “Democracy of Objects.” Harman writes: “Of any philosophy we encounter, it can be asked whether it has anything at all to tell us about the impact of inanimate objects upon one another, apart from any human awareness of this fact. If the answer is “yes,” then we have a philosophy of objects. This does not require a model of solid cinder blocks existing in a vacuum without context, but only a standpoint equally capable of treating human and inhuman entities on an equal footing. If the answer is “no,” then we have the philosophy of access, which for all practical purposes is idealism, even if no explicit denial is made of a world outside of human cognition.” What do you make of Harman’s critique of phenomenology and his new brand of realism?

Having not read this book (though a very good grad student in the English department who was taking my phenomenology seminar introduced me to some of its ideas), I don’t think I can comment responsibly on it; but the characterization of phenomenology seems insensitive to the crucial distinction between transcendental-phenomenological idealism and metaphysical or subjective idealism. In simplest terms: I reject the idea that phenomenology does not give us the world as it is. It is indeed a “philosophy of access,” but it is access to the world as it is. And I would also argue that it is a standpoint “equally capable of treating human and inhuman entities on an equal footing,” if by “equal footing” one means: attending to the things themselves, not setting up one entity as the measure of all the others, but letting entities show themselves as they are. However, I find the idea that one could do this without any concern for “access,” in a broad sense, very naive. For instance, it seems plausible to say that physics tells us about “the impact of inanimate objects upon one another, apart from any human awareness of this fact,” but presumably this is not what the author means. There are the standard examples from quantum mechanics about the influence of the observer, and the like. But beyond that, there is the fact that physics is a theory and a set of practices which provide normative conditions that allow for distinctions to be made between genuine interactions and mere “artefacts” of one’s standpoint, etc. Do these theories and practices count as a mode of “awareness”? If so, then physics must still be too idealistic. But I doubt that any scientific or philosophical position is conceivable that does not involve theories and practices that establish such normative conditions, and if that is so, then Speculative Realism will also involve some reference to conditions of our “awareness” of the objects it references. Transcendental phenomenology strives to do justice to this fact, and if that is a kind of “idealism,” it is one I can live with. As Husserl pointed out, the “transcendental subject” is not the “human being” as this is envisioned in the question, and I would argue that the same holds for Heidegger’s position. I am not impressed by positions that try to circumvent this point by appeal to primordial “events” or to a kind of post-humanism that most often merely borrows – very selectively – from biology and the like to answer philosophical questions. One does not need to make a fetish out of method to believe that certain questions need to be approached differently than others; in particular, philosophical questions have a reference to access built into them, and there’s nothing wrong with that. As for a “democracy of objects,” where does the “subject” fit in? If it is just another object, then we have lost our grip on the distinction.

Toward the end of his life, McLuhan declared: “Phenomenology [is] that which I have been presenting for many years in non-technical terms.” Do you think phenomenology is still relevant in this age of information and digital interactive media?

I actually think it is more relevant than ever. I think a lot of work in philosophy is phenomenological even if it doesn’t fly under that banner – work in moral psychology, philosophy of mind, and epistemology, to give just a few examples. Phenomenology is committed to the analysis of first-person experience, and while this is not the only approach possible to the problems of “this age of information and digital interactive media,” of course, there are certain questions that it is best in a position to address. For instance, what is meant by “information”? There are a great many theories out there that appeal to this notion, but can it really do the work it is expected to do? There are theories that try to account for our awareness of a world of meaningful things – that is, intentionality as consciousness of something as something – in terms of information processing, but phenomenology has developed some trenchant criticisms of this project: information is not intrinsically norm-governed, whereas meaning is. To study the conditions of meaning, then (which are also the conditions that make us able to recognize something as “information” or as a “digitally interactive medium” and to appreciate their essential relations to one another, such as they are), is to stumble, inevitably, into phenomenology at some point. And at that point, everything depends on whether one does it well or badly. Of course, one might want to be reductive about the concept of meaning, but phenomenology has also laid out some pretty good reasons why such a project must fail.

The following question was drafted by Iain Thomson: “What do you see as the future of phenomenology in a world that seems to be increasingly dominated by naturalism?”

Here again, I see some very promising convergences. The kind of naturalism that most concerned Husserl and Heidegger, and against which phenomenology distinguished itself, was both reductive and scientistic. As Husserl put it, the “naturalization of consciousness” – by which he understood the attempt to conceive consciousness as entirely embedded in the nexus of nature understood as a system closed under causal law – entailed the naturalization of all “norms and ideals,” and therewith their reduction to mere matters of fact. Husserl held this to entail both scepticism and relativism – that is, such naturalism, as a philosophical theory, undermined the very conditions for the validity of the scientific theories upon which it depended for its premises. Under such circumstances, one must either argue that such epistemological paradoxes are of no importance (this was something like Rorty’s conclusion and, in a somewhat different way, informs movements like the Social Studies of Science program) or else find a way around them. That is what Husserl tried to do in developing a “phenomenology” of consciousness, and I believe that Heidegger followed him in this in distinguishing the inquiry into Dasein from all “psychology, biology, and anthropology.” But the situation is somewhat different today, partly thanks to developments in phenomenology itself. For one thing, the idea that nature can be identified with a system closed under causal law (the basis for what McDowell calls “bald naturalism”) is no longer as prevalent as it once was. Whereas in Husserl’s time physics was the paradigm of scientific rationality and the positivistic model of unified science based ultimately on physics held sway, post-Kuhnian philosophy and history of science has challenged this view of things. Our concept of nature is being shaped more by work in the biological or life sciences, and our sense of scientific rationality is itself more complicated, more informed by phenomenological accounts of the interplay between experience, language, practices, and so on. I’m thinking here of the work of Joe Rouse and of my colleague in the history department, John Zammito, but this is only the tip of the iceberg. In such works, a concept of nature is projected that attempts to overcome the gap – between the normative and the natural, fact and value – that constituted the basis for the phenomenological critique of earlier naturalism. Whether this can be accomplished without taking seriously the claims of transcendental phenomenology to a certain priority – one based on the fact that all meaning and normativity, including that involved in the practices and discourses that go to constitute the new approach to “nature,” must in the last analysis be “owned” – remains to be seen. There is still a tendency in post-positivistic philosophy of science to privilege third-person points of view and to believe that the first-person perspective is little more than a function of such practices and discourses, whereas I (following John Haugeland) believe that the phenomenon of commitment is normatively (and so philosophically) primary. But this sort of challenge is not specifically a naturalistic one. In any case, once the concept of nature has grown so capacious as to include what McDowell calls “second nature” and all the (quasi?) teleological processes and (quasi?) rational bootstrapping that the post-positivist philosophers of science find there, there is little point in phenomenology defining itself in opposition to “naturalism.” The word has at this point pretty much lost whatever clear contours it had.

Your current research centers on the relation between intentionality and normativity, i.e., the relation between phenomenological experience of a meaningful world and the ability to respond to norms (standards, ideals, measures, rules, etc.). Well, Slavoj Žižek once claimed that ideology is the “unknown-knowns” – those things we don’t know that we know. I wonder if you think there is a difference and/or similarity between Žižek’s definition of ideology and the notions of “skilful coping” and “mindless everyday coping,” which are often invoked to illustrate the state of being playfully absorbed in the task at hand, of being solicited by the world to make use of its affordances, following Heidegger… 

The ideas of “skillful coping” and “mindless everyday coping” derive from Hubert Dreyfus’s work, and there the point is, among other things, to show how our perceptual (and cognitive) awareness of objects is made possible by a kind of practical engagement in the world that does not have a propositional, conceptual structure. Dreyfus distinguished between Searle’s idea of “satisfaction conditions” (which he construed as conceptual and thus as, at least in principle, expressible as rules) and “conditions of improvement,” thanks to which we are able to adapt our behaviour to the circumstances in “better and worse” ways without being in possession of any expressible standard of what would count as the best. I find this distinction elusive, but quite important. But I think that Dreyfus (and I) would resist the equation of such conditions with ideology in Žižek’s sense. To call them “unknown-knowns” seems to bring it back into a cognitivist model: “rules” I follow without knowing what they are. On the other hand, if by “ideology” one means certain prevailing assumptions, in a given social formation, about the way things are, assumptions that reflect and reinforce structural power-imbalances within that social formation, then I do think that much of what we say and think – and so also, much of how things appear to us as this or that in an everyday way – is “ideological.” This, I think, is the import of Heidegger’s concept of the One (das Man). The implication is that there is no “ideology-free” standpoint, but this does not entail that rational criticism is not possible.

Your most recent book is provocatively entitled Transcendental Heidegger – a title which suggests a position somewhat contrarian to that of, say, Dreyfus, and his attempts to distinguish Heidegger from Husserl and Sartre. Recently, however, that divide has been called into question by a number of critics. In a recent interview with Figure/Ground, for example, Andrew Feenberg declared: “There is a tendency to construct an idealist straw man out of Husserl in order to make Heidegger seem more original than he was. I don’t buy that.” Should Heidegger be read as a “realist” corrective to Husserl’s idealism, in your view?

I agree with Feenberg, though I’m not sure that the “realism/idealism” debate is the best context for locating what Dreyfus is doing in his attempt to distinguish Heidegger from Husserl and Sartre. The latter two represent what might be called a “philosophy of consciousness,” where individual consciousness has a certain priority in the constitution of meaning (or disclosure of a meaningful world). Dreyfus rejects that appeal to consciousness, but he also acknowledges the dependence of a meaningful world on Dasein’s capacity for disclosing worlds on the basis of its practices. To my mind, this is not all that different from the kind of “idealism” that I think is both defensible and indispensable (see my answer to question 6), and Heidegger too claims, in Being and Time, that (traditional) idealism has the advantage over (traditional) realism in that it at least recognizes that Being (intelligibility, meaning) “cannot be explained through entities.” But that’s a long story. Where your question gets a grip on this matter is, however, precisely in relation to the transcendental. The book you mention is one that Jeff Malpas and I edited in order to explore the various connections between Heidegger and the tradition of transcendental philosophy. This is not something that interests Dreyfus, for whom the very notion of the transcendental carries an essential reference to a kind of foundationalism which he rejects in favour of a more “hermeneutic” or “existential” conception of phenomenology. I don’t think that transcendental philosophy as developed by Husserl involves a dubious foundationalism, though it does preserve strong essentialist and a-prioristic elements. But that’s another long story. And I do think that Heidegger continues the kind of transcendental phenomenology inaugurated by Husserl. From the point of view of that project, the differences between them are quite subtle and require careful explication. They cannot be captured by standard oppositions like “internalist/externalist,” “idealist/realist,” and so on.

What are you currently working on?

The main project on the horizon is a book on Heidegger and reason. Heidegger’s criticisms of rationalism in all its forms, and of the over-valuation of reason in philosophy and culture generally, are well known. But what might be called the “place” of reason – positively considered, and not merely in connection with Heidegger’s negative stance toward how reason has been understood in the tradition – in Heidegger’s account of Dasein (and then also in the later works) has not been adequately explored or developed. I think that much can be learned by following up the question, in detail, of how ‘care’ is supposed to be ‘prior’ to reason, how the ‘rational animal’ is a function of that being in whose being that very being is an issue for it.

© Excerpts and links may be used, provided that full and clear credit is given to
Steven Crowell and Figure/Ground with appropriate and specific direction to the original content.


Suggested citation:

Ralon, L. (2011). “Interview with Steven Crowell,” Figure/Ground. June 22nd.
< http://figureground.org/interview-with-steven-galt-crowell/ >


Questions? Contact Laureano Ralón at ralonlaureano@gmail.com




Interview with Gordon Gow

© Gordon Gow and Figure/Ground
Dr. Gow was interviewed by Laureano Ralón. June 21st, 2011.

Gordon Gow is Associate Professor of Communication and Director of the Graduate Program in Communication and Technology (MACT) in the Faculty of Extension at the University of Alberta. From 2003 to 2006 he was a lecturer in the Department of Media and Communications at the London School of Economics, where he was Director of the Graduate Programme in Media and Communications Regulation and Policy. Dr. Gow is also affiliated with the Centre for Policy Research on Science and Technology (CPROST) at Simon Fraser University. Dr. Gow’s research interests revolve around the impact of social media and other new communication technologies in the areas of public safety, public health, and community engagement. His current projects include a SSHRC-funded study on emergency alerting at Canadian post-secondary institutions, as well as a KIAS-funded study on the use of information technology to support sustainable farming practices in developing countries; Dr. Gow has also been involved with an IDRC-funded study on the use of mobile phones for health surveillance in Sri Lanka and India. In 2009 he received a grant to develop a facility at the University of Alberta in order to examine the potential for mobile phones and other wireless devices to support scholarly as well as community-engaged research projects. His research projects typically involve close collaboration with community stakeholders, and he has organized several workshops around the theme of communications technology and public safety. Participation at these events has included representatives from community and industry organizations, as well as municipal, provincial, and federal agencies. Dr. Gow is the author of two books and numerous journal publications. He currently teaches a graduate level introduction to social media and supervises graduate students in the MACT program.

How did you decide to become a university professor? Was it a conscious choice? 

I decided after completing my MA at the University of Calgary that I wanted to do a PhD and continue working in academia.  I enjoy the intellectual autonomy that comes with the position, as well as the opportunity to work with students in areas that otherwise would not come to my attention.

In your experience, how did the role of university professor evolve since you were an undergraduate student?

The Internet was not widely known when I was an undergraduate. One big change I have seen is that the university and the professoriate really were the arbiters of knowledge prior to the Internet. That role has not diminished per se, but it has changed significantly. It used to be that one had to go to the library or go to class simply to gain access to knowledge. Now, of course, the situation has reversed, and students with laptops and wireless can be fact-checking and challenging claims in the middle of a lecture. That is a game changer. I think the professor now has to be more accountable to the student in terms of helping them negotiate and navigate the knowledge that is available to everyone. The other aspect is that the professoriate now has to be willing to engage in that debate about the social construction of knowledge itself. It is no longer acceptable to simply delegate it to a closed group of peer reviewers and wash your hands of it.

What makes a good teacher today? How do you manage to command attention in an “age of interruption” characterized by social media and information overload?

It is, and always has been, about how one tells the story. Good storytellers will always command attention. How that story is told may vary depending on the technology of the day, but it is still about telling a compelling story. The professor has to make the student want to learn if they are to be effective. Humour always helps too.

What advice would you give to young graduate students and aspiring university professors?

Learn the art of storytelling. Listen to your students, don’t lecture them (Socrates, of course, is the model here). Realize and accept that you can’t know everything in your field, but that your experience and guidance are what students are seeking out.

From 2003 to 2006 you were a lecturer in the Department of Media and Communications at the London School of Economics, where you were also Director of the Graduate Program in Media and Communications Regulation and Policy. How was your experience in London compared to previous experiences in Canada?

Good question. Well, I have to say that London is an extremely cosmopolitan city, and there is a richness that comes out of that which, just as a general context, is extremely stimulating. The place being what it is, and with its history, really brings in a diverse range of people and intellectuals, and particularly when working at LSE, the opportunities for “brushes with greatness” abounded. So I found that environment extremely exciting to be in: the combination of the kind of intellectuals who came through the LSE to give presentations, the quality of the faculty there, the diversity of their research interests, and then the students as well. There were graduate students from a diverse range of countries: North American students, Chinese students, students from Africa and other parts of Europe, and it made for a really diverse set of perspectives and engaging discussions in the classroom.

I remember, for example, with regard to these moments with great intellectual figures, having Lawrence Lessig drop in and give the graduate students in the Law class a seminar at lunch, and then coming by later that year and giving a talk on copyright reform in the UK. These kinds of opportunities make that place really vibrant from an intellectual standpoint. I did find that it was a very competitive environment, and balancing teaching, publishing and supervising was an ongoing challenge. As I understand from others who are still there, it continues to be a challenge balancing that load.

You just spoke of the quality of the faculty at LSE. Now, going back for a moment to our previous question about what makes a good teacher today – and you already mentioned the art of storytelling as a useful pedagogic technique – how did your experience in a place like LSE contribute to your own growth as a university professor?

Well, I think the idea is that in a place like LSE, faculty members are immersed in the latest thinking in an area; being at the cutting edge and being exposed in casual ways to different perspectives is great. In other words, the beauty of being at LSE is that, by virtue of your position, different opportunities often show up at your desk: invitations to seminars, events in London, conferences, workshops, these kinds of things, which means that you find yourself in a really interesting space with interesting people doing interesting things at the leading edge of their field – whether that is information technology, intellectual property, and so on. The key, of course, is to be able to impart that to the students, as a teacher; and that’s where storytelling comes in – threading it together as an interesting and relevant account. Good teachers have that ability: they make it easy to follow the story and get excited about it.

In what respects is the European conception of communication studies as a discipline similar or different from the way in which we think of communication studies in Canada and the USA?

This is interesting. Coming at it from my background – the social impact of technology as a branch of communication studies – will of course impart a certain perspective, as compared to other people who have different scholarly interests. From my research perspective – that is, the social impact of technology, technology policy and, to some extent, the philosophy of technology – there are some interesting differences. One is a different emphasis on the role of the state in public broadcasting. I noticed this because in the UK, for example, the BBC is such a strong presence both domestically and internationally that a lot of students are drawn into the regulation and policy areas with an emphasis on broadcasting and public broadcasting. I don’t think we see that to such a great extent here in North America, because of the predominance of the US in the field, where public broadcasting isn’t as prominent. And certainly, from a policy perspective, there seems to be a little more openness in the European context to ideas about state intervention and the role of the state in communications policy and public policy – an openness you don’t typically find in North America, because we are driven by a free-market conception.

The other point, I suppose, and this relates probably to Canada, is conceptualizations of space, which are also quite different. In Europe, you deal with large populations, so the public sphere and audiences are different; but in North America, and particularly in Canada, physical space plays out in our policy debates and thinking about communications policy in a much more prominent way than it does in Europe. This makes sense given Canada’s geography and the fact that we are always dealing with huge tracts of space and a relatively small population – this plays out continuously in our discourse around communication studies in Canada. Also in Canada, of course, there is the issue of our proximity to the US. We see a little bit of this in the EU discourse, but certainly in Canada this notion of cultural policy and the need to carve out a unique niche for Canadian culture within North American culture is still there, even today with the Internet; we still see that debate being played out in broadcasting and telecom. I suppose in some ways there are similarities to the European perspective, because within the EU there is a discourse about the importance of culture. But I don’t think we see the kind of concerns that we see in North America, because the US is such a strong influence on Canada; in Europe it is a little more balanced, but clearly there is an underlying discourse around cultural policy as well.

In the case of Canada, you spoke of there being a distinguishing emphasis on space: the challenge of keeping the country together and the role of communication technologies in that unifying process. Would it be fair to say that we have a conception of communication studies here in Canada that is much more ‘ontological’ than in other countries? I may be awfully wrong here, but when I was in Europe, given the importance of mass media and state propaganda in their tradition, I got the impression at times that when they spoke of communication studies they were really thinking about information studies…

How would you define the difference between information studies and communication studies?

Well, I’d say that communication studies – Canadian communication theory at least – amounts to a “transformational theory,” as opposed to information studies which, to paraphrase McLuhan, is really a “theory of transportation” with epistemological underpinnings, concerned with moving information from point A to point B with minimal interference. I’d say that information theory has a stronger emphasis on the dissemination (coding and decoding) of messages, whereas in the work of somebody like McLuhan at least, there is a lot more being said about embodiment, the senses, and mediation in the broadest sense…

Well, it’s an interesting distinction. I would not necessarily characterize it as more ontological, but the ontological basis of the two may be different. I guess my observations would be similar in some ways. Definitely in Canada, looking to Innis as the forefather, there is a greater sense of awareness of the physicality of communications than there may be in other places, and that is in part because of the distances. So in setting up a telecommunications network, for example, you can’t ignore the “physicality” of the Canadian landscape – and that is a particularly ontological focus. Also, geographically, because the space is so huge – we cover about five time zones in Canada – that again reinforces the physicality. You see all this, of course, played out in Innis’ work. If you trace communication studies back to Innis, you realize that it comes out of a physical manifestation of communication – the fur trade and staples theory – and then it gets gradually translated by Innis into the world of information. In the Canadian tradition, I think, that is always grounded in this great sense of awareness of the physicality of how these symbols and signs move around.

McLuhan, of course, takes this and embodies it; he is interesting because he takes the idea of physicality and locates it in our sensorium. But also, when he talks about The Medium is the Message, he sees media as environmental; and that plays out in his dimensions of visual and acoustic space. So here is that lineage right back to Innis – space, physicality and how we relate to information in spatial terms – and I think that connection can be traced back to the dimensions of North America being huge, with a relatively small population in Canada. One of the experiences that one has in Europe as opposed to other countries is the intensity of the information environment, because of the relative density of the population, particularly in large cities: Paris, London, Shanghai, etc. It is a very intense information environment, and because spaces are smaller, you de-emphasize the spatial dimensions; they are not obstacles in the way they are in North America.

This leads nicely into our next question. One of your areas of expertise is precisely Canadian communication theory. What attracted you to the works of Innis, McLuhan et al?

Coming out of Simon Fraser University, at least at the time I was there, McLuhan and Innis had a prominent place in the curriculum. In part because the faculty legacy was there, you were presented with Innis and McLuhan early as an undergrad. I was intrigued by this idea of moving beyond the content and looking at the structural dimensions of media as being influential. And while content is obviously important, I got a sense of excitement by thinking that we could look at media and go a little deeper to try and find more persistent influences that they might have; I find that continuously fascinating.

Once I got hooked on McLuhan – his approach, his way of writing, his way of characterizing history, his way of conveying ideas – for me it all generated a lot of excitement and enthusiasm. I found him unconventional; I found him irreverent; and because he was Canadian, I felt I could connect on those terms as well. But certainly, as a student I found his mode of delivery – McLuhan as a medium – to be one that really inspired questioning and learning.

Somebody to think with perhaps?

Absolutely!

Now, how would you say your background in media ecology and Canadian communication theory informs your empirical research?

Well, my work has always exhibited an awareness of the role of the medium, so I carry that forward into all of my work – a strong sense of awareness that the medium itself has a structuring effect on social relations. I find that, even though this principle sometimes operates in the background on certain research projects, it remains a set of ground rules.

Canadian communication theory, media ecology, the Toronto school of communication – is there a difference between these terms? 

Conceptually, in terms of their lexicon and their concepts, they blur together. If I had to separate media ecology and the Toronto school, I would distinguish them historically. In my view, to speak of the Toronto school of communication is to speak of Explorations, and the group that came together around Explorations: McLuhan, Carpenter, Innis, that gang. And then, if you want to push it and include the generation that followed, you can include Postman in there, Ong, that generation of scholars that emerged under the strong influence of McLuhan and the Toronto school.

Media ecology, in my view, is a more diffuse body of thought that the Toronto school gave birth to.

How so? 

Well, you can look at McLuhan as the forefather or grandfather. Postman picks it up in N.Y., and media ecology is more centered in N.Y. – NYU was the home of it for a while – so physically it is not in Toronto. Furthermore, even though the group of scholars may be rooted in McLuhan, it has become a more ambitious and diffuse project. So I think media ecology is an outgrowth of the Toronto school, an evolution of the thinking that first crystallized in the Toronto school. I characterize it in that way because I see it as a growing field, and as such, the future for it is quite bright; in fact, there are a lot of opportunities to connect it as an interdisciplinary field.

Speaking of this interdisciplinary emphasis – and I believe I mentioned off the record the McLuhan/Heidegger/Merleau-Ponty interface that I am currently working on – what other possibilities do you see in terms of advancing or moving beyond McLuhan?

I think that with the four laws of media, as a kind of encapsulation of his thinking, there is a lot more work that can be done – taking those four laws and seeing where they go in terms of their interpretive approach. And this ties back to media ecology: I think there is a lot of opportunity in that contribution for us to begin to explore in a more relational way the impact of digital and social media, and to understand the social impact of technology. One of the things I have always thought, and have been meaning to put down on paper, is this notion of McLuhan studies and media ecology as a branch of constructive technology assessment. So I can see some very interesting opportunities to bring together the work that has been done around constructive technology assessment in Europe, such as Johan Schot’s work from the early 1990s. I like this notion of trying to disrupt path dependencies by early engagement with technology, which is McLuhan’s project writ large. Another element I would include is the study of metaphor. I think there is some rich ore to be mined in McLuhan’s work around technologies as active metaphors. I don’t think that has all been played out yet. Lakoff and Johnson did some early work in the book Metaphors We Live By, and I wrote something about that in a piece on McLuhan and spatial metaphor in the Canadian Journal of Communication. I think there is something deep and significant about technology and metaphor that has yet to be fully appreciated. And I think those two come together with this notion of technology assessment as an applied outcome of this intellectual pursuit.

Let’s move on. This coming June, the University of Alberta will be hosting the Twelfth Annual Convention of the MEA. This will be a very special occasion because Marshall McLuhan was born in Edmonton, and this year he would have turned 100. Do you think McLuhan has finally received the recognition he deserves?

I suspect the centenary is going to reinvigorate scholarly awareness, and to some extent public awareness, of McLuhan’s unique and prescient insights into the impact of media on society. To the extent that it inspires students to pick up Understanding Media and scholars to go back and revisit McLuhan, or at the very least re-inject those questions into the discussion around the impact of technology today, I think that would please McLuhan. And to give you another perspective, my summer reading includes two books: one is Nicholas Carr’s book The Shallows, and the other is James Gleick’s The Information, which is a fascinating account of the history of the concept of information. Both books are prominent publications in the field, and both authors mention and draw on McLuhan as an intellectual framework for their studies. That’s an indication that McLuhan retains a certain relevance, and although his insights are often mischaracterized, he remains a key figure in terms of trying to understand the impact of ICTs on society. He remains a touchstone for many scholars.

It is interesting how you think of McLuhan himself as a medium, as somebody to think with. My impression of a book like Understanding Media, for example, is that it is such a classic because it is essentially inexhaustible, interactive, engaging and inviting…

Absolutely, and on a couple of levels: When I re-read it now and read parts about the telephone, for example, there is some interesting historical work that he did in writing that, which comes out as your awareness of the history grows; but at the same time, you can bounce it off things like Twitter, and like you say, it brings new life into his observations. And I think his style of writing, being proto- or pseudo-poetic, lends itself to that kind of reading.

Definitely, and the feeling when I read it is that it’s never complete, always in the making, a stretching-and-awaiting sort of experience…

Yes, there is a fascinating study of McLuhan as a media pundit (Marchand calls him an intellectual comet – and notice that he uses the word comet, because a comet returns, whereas a meteorite burns out). It’s interesting how he was picked up in the 60s by Madison Avenue and turned into this figure at a very interesting time historically, right around 1968. If you think about what was going on, it shows parallels in some ways with where we are today: there is this new medium making a new impact on society – at the time it was television – everybody is trying to make sense of it, and here comes McLuhan, who can capture in a mythic way these complex dynamics and give expression to them; his insights may have been perplexing but also very stimulating and somehow compelling, and that is an interesting study unto itself. Then you have McLuhan as the poet, which is a more enduring reading of him, around how he picks up these lines of thinking and how he draws together these influences from Joyce and Thomas Nashe and the early modernists – it’s fascinating there. So I see him as a figure with those two different qualities, and he can be explored equally on both accounts.

What are you currently working on?

I am currently working on a number of research projects, and what I am really interested in now is the impact of mobile communications technologies on communities of practice. One of the questions I have been pursuing with my colleagues is the sustainability of technological interventions in areas of public safety and public health, with a focus on the developing world. I have been involved in a couple of projects where we looked at introducing mobile phones as a way of addressing a need, but one of the persisting concerns is how you sustain and advance the use of those technologies in settings where there are structural challenges, economic or political, that inhibit the sustained use of new technology. So along those lines I am quite interested in understanding how peer production and social networks themselves can be forces for sustaining these kinds of projects; rather than simply throwing money at them or mandating them politically, is there a way to draw on social and peer influences to maintain an interest in the technology?

© Excerpts and links may be used, provided that full and clear credit is given to Gordon Gow
and Figure/Ground with appropriate and specific direction to the original content.


Suggested citation:

Ralón, L. (2011). “Interview with Gordon Gow,” Figure/Ground. June 21st.
< http://figureground.org/interview-with-gordon-gow/ >


Questions? Contact Laureano Ralón at ralonlaureano@gmail.com




Interview with Julian Young

© Julian Young and Figure/Ground
Dr. Young was interviewed by Laureano Ralón. June 19th, 2011.

Julian Young is the Kenan Professor of Humanities at Wake Forest University, where he specializes in Continental (nineteenth- and twentieth-century German and French) philosophy, philosophy of art, environmental philosophy, and philosophy of religion. Prior to moving to the USA, Professor Young taught at all levels at the universities of Auckland, Pittsburgh, Calgary and Tasmania; his courses have included Introduction to Ethics, Introduction to Metaphysics and Theory of Knowledge, Introduction to Theories of Human Nature, British Empiricism, Quine and Sellars, Wittgenstein, Plato, Kant, Hegel, Nietzsche, Schopenhauer, Heidegger, Sartre and Camus. He has supervised and examined numerous MA and PhD theses at Auckland and throughout Australasia. He is the author of ten books, mostly on nineteenth- and twentieth-century German philosophy. He has also written The Death of God and the Meaning of Life. His most recent work is Friedrich Nietzsche: A Philosophical Biography. He has appeared on radio and television in Ireland, New Zealand and the USA, and has written for the Guardian, the New York Times and Harper’s Magazine. He is currently completing The Philosophy of Richard Wagner, and plans a book on tragedy.

How did you decide to become a university professor? Was it a conscious choice?

Well it was certainly conscious, but I have to confess it was not a very positive affirmation of the academic life. The main problem, as I peered cautiously over the walls of academia at the world of work, was that anything I could possibly do looked like drudgery. And so I decided that, if I could, I would stay where I was. For all the multiple disappointments I am about to detail, being an academic still strikes me as the best job in the world.

Who were your mentors in graduate school and what did you learn from them?

My first, and almost only, intellectual mentor was my medieval history teacher at boarding school, Michael Cherniavsky. (A brilliant man, he was the son of the court cellist to the last Tsar and the Canadian heiress to the Benjamin Tingley Rogers sugar fortune.) Michael had studied at Oxford during the excitement of A. J. Ayer’s introduction of Logical Positivism to Britain, and always taught us historians that philosophy was the Everest of the intellect. (The political philosopher Alan Ryan was another of his pupils.) And so, after two years of history at Cambridge, I changed to philosophy (‘moral science’). Although it provided a great university experience in other respects, Cambridge taught me nothing about philosophy. Wittgenstein had died only a few years previously and most of the professors were still wandering about in a shell-shocked condition. They offered vaguely to show ‘the fly [me] the way out of the fly-bottle’ whereas what I wanted to do was to get into the fly-bottle. From my present vantage-point I would say that I learnt nothing in graduate school either, apart from the ‘technology’ of thinking – logic and the application of logic to philosophical argumentation – which still seems to me essential to philosophy of any sort. What I was trained in at Wayne State and Pittsburgh was ‘analytic’ (Anglo-American) philosophy. A couple of years into my first job I realized that I did not want to spend the rest of my life wondering why, if Tom knows that Ortcutt is a spy, and the man in the brown hat is Ortcutt, Tom could still not know that the man in the brown hat is a spy. And so, to the ire of most of my Auckland colleagues, I jumped ship and crossed over to the ‘continental’ (Franco-German) side of the great cultural divide. What continental philosophy discusses – sex, death, and boredom, I tell people when I am feeling facetious – did seem to me something I could spend the rest of my life reflecting upon. That being said, I was, at Pittsburgh, greatly impressed by Wilfrid Sellars. Though Sellars had little to say about sex, death or boredom he did have a system – a ‘continental’ characteristic.

In your experience, how did the role of university professor evolve since you were an undergraduate student?

At Cambridge, I caught some of the tail-end of the nineteenth-century university. The nineteenth-century ideal – in Germany it was known as the ‘Humboldt’ model – called for the integration of teaching and scholarship. It was necessarily ‘elitist’ since only highly talented students could profit from such a high intellectual level. The fundamental aim, however, was not merely to communicate knowledge and develop the intellect. It was to nurture the flourishing of excellent human beings who would come to occupy leading positions in society. The fundamental aim, that is, was to inculcate moral as much as intellectual excellence. Partly this was to be done by the teacher himself being an inspirational model of excellence. But more importantly, it was to be achieved through the teaching of the humanities. History was especially important. It was to be taught in an unashamedly selective manner in order to identify certain historical movements as progressive – the ‘Whig view of history’, for example – and certain figures as heroes, positive role models. The most disastrous intellectual wave to hit the humanities in the latter part of the twentieth century was so-called ‘deconstruction’. Under its almost totalitarian dominion throughout the humanities the aim became to destroy the entire idea of the life and literature of the past as a repository of positive value. (To be fair to analytic philosophy, it must be said that, alone among the humanities, it resisted this wave of cultural vandalism.) There are other reasons which make it almost comical to think of the aim of the modern university as the production of excellent human beings – the transformation of the university into a factory for the production of economically viable ‘human resources’ is a major factor – but deconstruction is the way in which academics have themselves contributed to the destruction of the university, the way in which they have shot themselves in the foot.

What makes a good teacher today? How do you manage to command attention in an “age of interruption” characterized by fractured attention and information overload?

It is very difficult to give a general answer to this question, for teaching, like love, is an intuitive business that cannot properly be articulated in rules and procedures. (That is why one should never go to a ‘teaching-improvement workshop’.) One thing to do is to stop complaining about students. Sure, they suffer from ADD, but one needs to get into the habit of liking them, of not regarding them as the enemy, patients, cannon fodder, or a necessary evil. Students tend to respond well to someone they sense wishes them well. Never let students think that your real life is research-work that happens out of the classroom – try to make it the case, so far as possible, that (as in the nineteenth century) your research and teaching are one and the same. Do not pander too much to the demand for ‘visual aids’. Do not teach in a darkened classroom and, especially, do not structure your lecture around a set of ‘bullet points’ projected onto a screen. Remember that bullet points are discrete while thought is continuous, so that what bullet points represent is, in fact, the death of thought. Address what interests students – sex, death, boredom, technology and the meaning of life.

What advice would you give to young graduate students and aspiring university professors?

Don’t. Not in the current job-market. Not unless you are extremely good and can get into a top graduate school. They still have reasonable employment records. But if you can’t get into a top school, forget it.

The following question was drafted by Professor Jeff Malpas: “Academic work in the Humanities, especially in countries like Australia and New Zealand, seems to be increasingly under threat. Having recently moved from a Professorship in New Zealand to one in the US, do you think the situation is any better than that in New Zealand? What would you say are the major differences, if any, regarding support for Humanities research in the two countries?”

Modern Western universities have, to varying degrees, been taken over by the ‘business model’. They construct a ‘strategic plan’ which aims to satisfy ‘key performance indicators’ (KPIs) that are set by either important business interests or, which comes to the same thing in the end, the state. The strategic plan is then administered via ‘line management’: requirements reach the workers at the coal-face – the professors – via the president, the provost, deans and then chairs of departments. To make sure the workers do what the strategic plan (i.e. the business community) demands, they are required to complete an on-line ‘annual performance review’. The categories into which the professors are required to slot their life and thought are determined, ultimately, by the KPIs. This general management scheme was invented in the Harvard and Yale business schools in the 1980s. Among universities, however, it was first implemented in Britain, followed by New Zealand and then Australia. In my experience the scheme is still in its infancy in the US – which is paradoxical since it was American universities that invented it in the first place. But it is starting, and soon the circle will be completed. So in terms of the integration of the university into the business world the US is perhaps six or seven years behind Australasia. The effect of this is that there is a time lag in the process of downgrading the humanities in favor of disciplines that make a more visible contribution to GNP. At Wake Forest, unlike Auckland, the business school is not the biggest building. And of course there is a great deal more money in the US, so that even though the humanities’ slice of the pie is shrinking it is still a lot larger than in New Zealand. Basically, however, all trends are, like Starbucks, global.

Your main area of expertise is nineteenth- and twentieth-century German philosophy. What attracted you to Continental philosophy?

‘Philosophy’ comes from philo-sophia not from philo-theoria. It means ‘love of wisdom’ not ‘love of theory’. Philosophy is about living wisely, that is, well. Although theory can make an important contribution to living wisely, philosophy is not about theory for theory’s sake. Theory is a means not an end. The ancient Greeks understood this as did the Hellenistic and Roman philosophers – the Stoics, Epicureans, Sceptics, Cynics, Neo-Platonists, and so on. Theory was important but only to the extent it was relevant to the art of living. In the Middle Ages, however, the question of how to live well became the exclusive province of religion, and so philosophy was reduced to scholastic logic-chopping by monks. The ‘analytic’ tradition inherited the scholastic conception of philosophy whereas nineteenth-century German and twentieth-century French philosophers returned, in the main, to the original idea of philo-sophia. I was never interested in theory for theory’s sake though it took me some time to realise this.

Toward the end of his life, Marshall McLuhan declared that “Phenomenology [is] that which I have been presenting for many years in non-technical terms.” Are phenomenology and existential philosophy still relevant in this age of digital interactive media?

Yes, of course. Nietzsche called the philosopher the ‘physician of culture’. That we suffer from information overload and the challenges of digital media just tells us what philosophy should be addressing at the moment.

The following question was prepared by Professor Lee Braver: “Why do you think Heidegger has had such lasting value? What is it about his work that continues to stimulate questions?”

Heidegger observed that in the age of electronic media the principal existential issue is ‘homelessness’, lack of ‘dwelling’. One dwells when there are things that are ‘near’ to one. But if some things are ‘near’ others have to be ‘far’. In electronic modernity, however, the ‘near’-‘far’ distinction is disappearing, things are assuming a ‘uniform distancelessness’. So the idea of a dwelling place is under threat. But there is more to dwelling than the idea of a special geographical region. Dwelling also depends on what Heidegger variously calls ‘the holy’ and ‘the poetic’. If you possess a dwelling place then it has, for you, a dimension that does not show up in a photograph – unless you are a very great photographer. One of the things Heidegger tries to do in his own writing is to convey the sense of this hidden, poetic, dimension. At the end of perhaps the greatest of his later works, ‘Building Dwelling Thinking’, he writes that ‘as soon as man gives thought to his homelessness, it is a misery no longer’. This is my experience of reading and thinking with Heidegger, which is why I return to him again and again. It is, I guess, a kind of spiritual therapy.

What are you currently working on?

I have just completed a long essay on ‘the turn’ in Heidegger’s ‘path of thinking’. But my main current project is a book on tragedy, which looks at what philosophers have said about tragedy starting with Plato and ending with Žižek. I am interested in the question of whether a great tragic artwork is still possible or whether we live in a post-tragic age. I am also writing on Richard Wagner who sought to bring about, in the form of his own artworks, ‘the rebirth of tragedy’, the rebirth of the great tragic artwork of fifth-century Greece.

©  Excerpts and links may be used provided that full and clear credit is given to Julian Young
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralón, L. (2011). “A Conversation with Julian Young,” Figure/Ground. June 19th.
< http://figureground.org/interview-with-julian-young/ >

Questions? Contact Laureano Ralón at ralonlaureano@gmail.com




Interview with Robert D. Stolorow

© Robert D. Stolorow and Figure/Ground
Dr. Stolorow was interviewed by Laureano Ralón. June 13th, 2011.

Robert D. Stolorow, Ph.D., is a Founding Faculty Member and Training and Supervising Analyst at the Institute of Contemporary Psychoanalysis, Los Angeles; a Founding Faculty Member at the Institute for the Psychoanalytic Study of Subjectivity, New York City; and a Clinical Professor of Psychiatry at the UCLA School of Medicine. He is the author of World, Affectivity, Trauma: Heidegger and Post-Cartesian Psychoanalysis (2011) and Trauma and Human Existence: Autobiographical, Psychoanalytic, and Philosophical Reflections (2007), and co-author of Worlds of Experience: Interweaving Philosophical and Clinical Dimensions in Psychoanalysis (2002), Working Intersubjectively: Contextualism in Psychoanalytic Practice (1997), Contexts of Being: The Intersubjective Foundations of Psychological Life (1992), Psychoanalytic Treatment: An Intersubjective Approach (1987), Structures of Subjectivity: Explorations in Psychoanalytic Phenomenology (1984), Psychoanalysis of Developmental Arrests: Theory and Treatment (1980), and Faces in a Cloud: Intersubjectivity in Personality Theory (1993 [1979], 2nd. ed.). He is also coeditor of The Intersubjective Perspective (1994) and has authored or coauthored more than two hundred articles on aspects of psychoanalytic theory and practice. He received his Ph.D. in Clinical Psychology from Harvard University in 1970 and his Certificate in Psychoanalysis and Psychotherapy from the Psychoanalytic Institute of the Postgraduate Center for Mental Health, New York City, in 1974. He also received a Ph.D. in Philosophy from the University of California at Riverside in 2007. He holds diplomas both in Clinical Psychology and in Psychoanalysis from the American Board of Professional Psychology (ABPP). In 1995 he received the Distinguished Scientific Award from the Division of Psychoanalysis of the American Psychological Association, in which he is a Fellow.

What would you define yourself as – an author, a thinker, a public intellectual?

Being a weird interdisciplinary creature, I have to define myself somewhat complexly. I definitely think of myself as a psychoanalytic and philosophical thinker and author. Additionally, I am a practitioner of psychoanalysis and a teacher of both philosophy and psychoanalysis. In recent times, I have also been publishing articles and blogs applying my ideas about collective trauma and defensive ideologies to the socio-political sphere, so I guess that might make me a public intellectual too.

Who were some of your mentors in university and what were some of the most important lessons you learned from them?

I earned a doctorate in clinical psychology at Harvard in 1970 and a doctorate in philosophy at the University of California, Riverside, in 2007, and I reaped rich benefits from mentors during both periods of graduate study. My principal mentor at Harvard was Robert White, from whom I acquired an abiding interest in and respect for the uniqueness of each individual’s world of experience. My principal mentor at Riverside was my dissertation chair, William Bracken, who, although relatively unpublished, is perhaps the most brilliant Heidegger scholar I have encountered. I owe him an enormous debt of gratitude for his contributions to my development as a Heideggerian philosopher. Other important mentors at Riverside from whom I learned a great deal were Kantian philosopher Andrews Reath, phenomenologist Charles Siewert, and another brilliant Heidegger scholar, Mark Wrathall.

You trained both as a philosopher and a psychoanalyst. How did the two careers reinforce each other?

Wow, I would have to write an intellectual memoir to address this question adequately! I’ll try to hit the highlights. I first became interested in the interface of psychoanalysis and philosophy as an undergraduate in the early 1960s when I encountered the writings of Ludwig Binswanger, Medard Boss, and Rollo May, early pioneers who recognized the relevance of Heidegger’s existential philosophy for psychotherapy and psychoanalysis. While a graduate student in clinical psychology, I became disillusioned with empirical psychological research, feeling that it stripped psychology of everything humanly meaningful, and toyed with the idea of doing a second doctorate in philosophy (an ambition that had to await several decades before coming to fruition), which at the time I thought could provide tools for cleaning up the mess that was psychoanalytic theory. However, during my clinical internship I found that I really enjoyed psychoanalytic work and, after completing my doctorate, decided to go to New York to pursue psychoanalytic training instead.

A nodal point in my intellectual career occurred in 1972 when, still in psychoanalytic training, I took a job as an assistant professor of psychology at Rutgers where I met George Atwood, who became my closest collaborator. George (an autodidact with an encyclopaedic knowledge of Continental philosophy) and I embarked upon a series of psycho-biographical studies of the personal, subjective origins of the theoretical systems of Freud, Jung, Rank, and Reich, studies that formed the basis of our first book, Faces in a Cloud: Subjectivity in Personality Theory (Aronson, 1979). From these studies, we concluded that since psychological theories derive to a significant degree from the subjective concerns of their creators, what psychoanalysis and personality psychology needed was a theory of subjectivity itself: a unifying framework capable of accounting not only for the psychological phenomena that other theories address, but also for the theories themselves. In the last chapter of Faces, we outlined a set of proposals for the creation of such a framework, which we called psychoanalytic phenomenology. We envisioned this framework as a depth psychology of personal experience, purified of the mechanistic reifications of Freudian meta-psychology. Our framework took the experiential world of the individual as its central theoretical construct. We assumed no impersonal psychical agencies or motivational prime movers in order to explain the experiential world. Instead, we assumed that this world evolves organically from the person’s encounter with the critical formative experiences that constitute his or her unique life history. Once established, it becomes discernible in the distinctive, recurrent patterns, themes, and invariant meanings that pre-reflectively organize the person’s experiences. Psychoanalytic phenomenology entailed a set of interpretative principles for investigating the nature, origins, purposes, and transformations of the configurations of self and other pervading a person’s experiential world. Importantly, our dedication to illuminating personal phenomenology had led us from Cartesian minds to emotional worlds and, thus, from intra-psychic mental contents to relational contexts. Phenomenology had led us inexorably to contextualism.

Once we had rethought psychoanalysis as a form of phenomenological inquiry, a focus on the mutually-enriching interface of psychoanalysis and Continental phenomenology became inescapable, and I began reading phenomenological philosophy voraciously. In 2000, I formed a leaderless philosophical study group in which we devoted a year to a close reading of Heidegger’s Being and Time and another year to Gadamer’s Truth and Method. Philosopher-psychoanalyst Donna Orange had joined the collaboration with Atwood, and she brought to our phenomenological contextualism a perspectivalist hermeneutic sensibility and a view of psychoanalytic practice as a form of phronesis rather than techne.

A second nodal point for me occurred when I turned my attention to the phenomenology of emotional trauma in the wake of the death of my late wife, Dede, in 1991—a massive trauma that shattered my world. The close study of Being and Time in 2000 proved to be critical. On one hand, Heidegger’s ontological contextualism (In-der-Welt-sein) seemed to provide a solid philosophical grounding for our psychoanalytic phenomenological contextualism. Even more important to me at the time, Heidegger’s phenomenological analysis of Angst, world-collapse, uncanniness, and thrownness into being-toward-death provided me with extraordinary philosophical tools for grasping the existential significance of emotional trauma. It was this latter discovery that motivated me to begin doctoral studies in philosophy and write a dissertation on trauma and Heidegger, which eventuated in my two most recent books, Trauma and Human Existence: Autobiographical, Psychoanalytic, and Philosophical Reflections (Routledge, 2007) and World, Affectivity, Trauma: Heidegger and Post-Cartesian Psychoanalysis (Routledge, 2011). In the last book, I showed both how Heidegger’s existential philosophy can ground and enrich post-Cartesian psychoanalysis and how post-Cartesian psychoanalysis, by relationalizing Heidegger’s conception of finitude and expanding Heidegger’s conception of relationality, can enrich his existential philosophy. I feel that in this book I have, in my sunset years, come into my own as a philosopher.

In your experience, how do you think the role of university professor might have evolved since you were an undergraduate student?

Perhaps partly because I have not been a university professor (in psychology) since 1984 when I moved to California, I have not noticed significant changes in the role of university professor. I was very struck by the enormous devotion to teaching, guiding, and mentoring shown by my philosophy professors at Riverside. Perhaps the biggest change for me as a graduate student was the current importance of the internet and the need for me to become computer-literate fast!

How do you manage to command attention during your talks and lectures in this “age of interruption” characterized by fractured attention and information overload?

When I first began lecturing and then presenting in the early 1970s, I learned to bring my affect into my speaking. This has served me well ever since. I have found that the affect-laden quality of my recent work has been especially appealing to young philosophers.

The following guest question was drafted by Professor Iain Thomson: “Do you think all resurrective ideologies necessarily deny human finitude?  What about the later Heidegger’s postmodern idea that truly acknowledging human finitude can give us insight into the inexhaustible nature of being?”

This is a great question. There have been two contexts in which I have written about “resurrective ideology.” One has been my effort to extend my ideas about trauma to the socio-political sphere.  In my 2007 book on trauma, I contended that the essence of emotional trauma lies in the shattering of what I called the “absolutisms of everyday life,” the system of illusory beliefs that allow us to function in the world, experienced as stable, predictable, and safe. Such shattering is a massive loss of innocence exposing the inescapable contingency of existence on a universe that is chaotic and unpredictable and in which no safety or continuity of being can be assured. Emotional trauma brings us face to face with our finitude and existential vulnerability and with death and loss as possibilities that define our existence and that loom as constant threats. Often traumatized people try to restore the lost illusions shattered by trauma through some form of resurrective ideology.

Consider, for example, the impact on Americans of the terrorist attack of September 11, 2001, a devastating collective trauma that inflicted a rip in the fabric of the American psyche. In horrifyingly demonstrating that even America can be assaulted on its native soil, the attack of 9/11 shattered Americans’ collective illusions of safety, inviolability, and grandiose invincibility, illusions that had long been mainstays of the American historical identity. In the wake of such shattering, Americans became much more susceptible to resurrective ideologies—e.g., that offered by the Bush administration—that promised to restore the grandiose illusions that have been lost.

The other context, actually the original one, was a psycho-biographical account of Heidegger’s fall into Nazism, which I wrote in collaboration with Atwood and Orange and incorporated into my 2011 book.  There we contended that Heidegger’s enthusiastic embrace of his version of Nazism, whose grandiose quality was chillingly manifested in his Rector’s Address, “The Self-Assertion of the German University” (1933), represented his effort to resurrect his sense of agentic selfhood, which had been crushed by the combined emotionally annihilating impact of three circumstances: His muse and lover Hannah Arendt’s withdrawal from him; his magnum opus Being and Time’s being met by the academic world “by hopeless incomprehension”; and his mother’s essentially disowning him on her deathbed for his having broken with the Catholic Church.

After resigning as rector of Freiburg University in 1934 and disengaging from political involvement, Heidegger largely withdrew into a life of solitary philosophical and spiritual reflection, wherein the “turn” in his thinking gained momentum. I think Iain Thomson is right when he claims that the later Heidegger’s acknowledgment and acceptance of an aspect of human finitude—namely, the historically and temporally embedded limitedness of any understanding of being—gave him insight into “being as such,” the inexhaustible source of all intelligibility that resists any attempt to conceptualize it. And yet, do we not glimpse a trace of the old restorative grandiosity in Heidegger’s self-designation as the agent of a new “other beginning,” the initiator of a new epoch in the history of being?

Other emotional themes in Heidegger’s later philosophy are apparent to a psychoanalytic eye. Heidegger is often rightly criticized for never having openly expressed remorse about his Nazi involvement. Yet the whole tenor of his later philosophizing—wherein the grandiose, aggressive, goose-stepping self-assertiveness of the Rector’s Address is replaced by a view of the human being as the “constant receiver,” the “shepherd” and the protector, of the “gift” of being—can be seen to reflect his recognition of his dreadful, deplorable mistake.

Moreover, there is another dimension of human finitude—the finitude of human connectedness, of our “being-with-one-another”—that goes largely unnamed throughout Heidegger’s philosophizing. In my 2011 book, I claimed controversially (with Critchley and Derrida) that human finitude is relational, that being-toward-death always includes a being-toward-loss of loved others, and that death and loss are existentially equiprimordial. In the chapter on Heidegger’s Nazism, we contended that for Heidegger the threat of loss of connectedness with others was built into the quest for authentic individualized selfhood, as was shown vividly in his wrenching struggles to separate himself from the Catholic Church of his family and in his mother’s deathbed renunciation of him for doing just that. In the poetry of Hölderlin, Heidegger found the powerful theme of returning—returning to being-at-home and to the lost god that had disappeared—imagery in which we discerned his longing to restore connections lost in his pursuit of individualized selfhood, such as those with his mother and the Catholic family of his childhood. The later Heidegger returned home.

Returning for a moment to your dual training as a philosopher/psychoanalyst, do you think any insights from the social sciences might help transform the philosophical profession for the better and vice versa? Should fields like philosophy and psychology/sociology remain separate, or are there advantages to bridging the existential and existentielle dimensions of human reality in the spirit of interdisciplinary studies and methodological pragmatism?

Clearly, as an interdisciplinary creature myself, I am an advocate of interdisciplinary cross-fertilization (of which my 2011 book is a clear instance), rather than disciplinary insularity. Heidegger’s Being and Time is filled with examples of the advantages of bridging the existential and the existentielle, the ontological and the ontical dimensions of human reality. It is my view that academic psychology made a big historical mistake when, caught in the grip of modern scientism, it separated itself from philosophy in order to become a “hard science.” I regard psychoanalysis, or at least my brand of it, as being neither a branch of medicine nor of psychology, but as applied philosophy.

You have defined your intersubjective-systems theory as a “phenomenological contextualism.” How is your own brand of contextualism similar and/or different from the relational model put forth by social constructionist thought?

There are of course many similarities, but I think there are subtle differences—differences in sensibility—as well. I would say that my brand of contextualism embraces a hermeneutic rather than a constructivist sensibility. Following Gadamer, I would say that all understanding involves interpretation, and this seems different to me from saying that all understanding is constructed. Interpreting something—i.e., understanding it from a particular perspective—seems different to me from constructing a narrative about it.

I assume you are familiar with Speculative Realism and Object-Oriented Ontology. Since your approach to psychiatry is both phenomenological and contextual, I will quote a passage from Graham Harman’s Guerrilla Metaphysics and ask you to reflect on it: “What I am advocating is a reversal of the familiar social pattern in which everyone proves their adequate philosophical training by jabbing a few more daggers into the corpse of realism. From the flintiest analytic philosopher to the most dashing Francophone icon, philosophy today is united through a shared contempt for any probing of a real world in itself. Like all broad fashions of any era, this disdain begins to take on the character of an automatic reflex, and like all mental reflexes soon decays into compulsion. Given this atmosphere, it is widely supposed that substances are championed only by reactionaries living in an irrelevant past, while innovation seems to be on the side of relations and contexts, not individual things. On a related front, it is supposed to be the reactionaries who believe in substances independent of our perceptions, while the self-proclaimed avant-garde delights in bursting this final bubble of the true believers – a tedious drama of canned iconoclasm playing out across the decades. The champions of wholes over parts and the doubters of independent realities can continue to mock the conservatism of their foes if they wish, but the fact is that they have now largely defeated those foes. Holism and antirealism, their days of novelty long past, have become the new philosophical dogmas of our time. The sole difference is that the old orthodoxies viewed their opponents as dangerous cutting-edge transgressors, while the new ones have so exhausted the field of critique and transgression that they are likely to view their challengers only as conservative throwbacks.”  Is metaphysics a thing of the past in your view, or do you tend to agree more with Harman?

I don’t really know whether metaphysics is a thing of the past. Heidegger certainly thought that it was, or wished it to be so. What I would say is that metaphysical questions, like the debate between realism and anti-realism, fall outside the domain of phenomenological inquiry (except insofar as metaphysical systems can be historically contextualized and deconstructed, as Heidegger attempted to do). I think Husserl got it right when he characterized the intentional structure of consciousness phenomenologically as always as if directed toward an object, where the “as if” indicates that the metaphysical question about the reality of the intentional object is not to be asked by the phenomenological inquirer.

In agreement with Nietzsche, Heidegger, and Gadamer, my own phenomenological-contextualist viewpoint holds that all understandings of the “real world” are deeply perspectival. A passage from my 2011 book makes this claim very strongly: “Corresponding to its Cartesianism is traditional psychoanalysis’s objectivist epistemology. One isolated mind, the analyst, is claimed to make objective observations and interpretations of another isolated mind, the patient. A phenomenological contextualism … reunites the Cartesian isolated mind with its world…. Correspondingly, intersubjective-systems theory embraces a perspectivalist epistemology, insisting that analytic understanding is always from a perspective shaped by the organizing principles of the inquirer. Accordingly, there are no objective or neutral analysts, no immaculate perceptions (Nietzsche), no God’s-eye view (Putnam) of anyone or anything” (p.20).

What are you currently working on?

I’m planning a paper elaborating on Heidegger’s use of mood as a bridge between the ontical or psychological and the ontological, a bridge to the “truth of being.” In this paper, I want to counter two criticisms of Heidegger: (1) that he fails to distinguish sufficiently the phenomena of mood, emotion, and feeling, and (2) that he neglects the ontological significance of the body.


©  Excerpts and links may be used provided that full and clear credit is given to Robert D. Stolorow
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralón, L. (2011). “A Conversation with Robert Stolorow,” Figure/Ground. June 13th.
< http://figureground.org/a-conversation-with-robert-stolorow >

Questions? Contact Laureano Ralón at ralonlaureano@gmail.com