Interview with Len Shyles

© Leonard Shyles and Figure/Ground
Dr. Shyles was interviewed by Laureano Ralón. March 31st, 2012.

Dr. Leonard Shyles has been a Professor in the Department of Communication at Villanova University since 1989. He served as the director of the video production lab until 2000. He is also a member of the speech faculty, teaching Business and Professional Speaking to the University’s undergraduate majors, mainly from the Business School. His research focuses on the transition from analog to digital platforms across media and telecommunications institutions. His principal recent works include Deciphering Cyberspace: Making the Most of Digital Communication Technology (2003) and The Art of Video Production (2007), both published by Sage. Dr. Shyles is also an expert in both qualitative and quantitative research methods. He has taught research methodology in the Communication Department continuously since 1989, and has published widely in the fields of political communication, public communication campaigns, and military propaganda analysis.

How did you decide to become a university professor? Was it a conscious choice?

I consciously decided to become a university professor based on my experience at home and in the workplace as I moved toward independence, guided by my parents, other family members, and important figures (several of them teachers) during my teens and twenties. Both my parents taught me by example to do things for myself from an early age. My dad learned a trade under the GI Bill after serving in World War II; he became a cutter, grader, pattern-maker, and designer of women’s sportswear in midtown Manhattan. He was an artisan at heart. His ethical example taught me that the most important asset you can have is to earn the trust of others based on keeping your word. My mom was a bookkeeper and, later, a union organizer; she helped unskilled blue-collar workers improve their health benefits and overall working conditions. Both stressed the importance of education as the way to a better life. When they were at work, it was up to my sister (four years my senior) and me to take care of our household chores, do our homework, prepare our music lessons, and at times get dinner ready.

My mom’s brother was a huge influence; he was the only family member to have attended college before me. He became a lawyer, and later, a politician. His career high-point came in the early 1960s when he successfully represented a mixed-race couple who were arrested for being married; that case wound up before the US Supreme Court as the Loving case, and it ended the miscegenation laws that banned inter-racial marriage. He later served as a legislator in the Virginia statehouse for about twenty years. He taught me (also by example) that strong argument based on sound reasoning could bring significant social change. In other words, one can achieve desired goals without resorting to violent tactics; the implications of that for my chosen career should be self-evident.

In high school and college, several teachers motivated me to consider the value of teaching as a career option. In high school I had some language teachers, a phys ed teacher, and a few math teachers who exuded joy at seeing my progress in their classes. In college, I had several philosophy professors, excellent teachers, who spurred my thinking about aesthetics, linguistics, rhetoric, epistemology, and logic, among other things. All of these people influenced my career decision.

The workplace also played a role. Specifically, I had acquired a teaching certificate in my last year of college that enabled me to work as a NYC public school teacher. I taught remedial reading and served as a per diem substitute in several inner-city middle and junior high schools over five years in Brooklyn, New York. During that period, I went to grad school at night to get my MS degree in Broadcasting with a specialization in Television Production. It was during this period that I learned I did not want to cope with the classroom management issues presented on a daily basis in that setting; at the same time I gained tremendous respect for the dedicated teachers in the K-12 work-world who take on that career choice; those who do it well have my undying admiration.

Ultimately I completed my Ph.D. at The Ohio State University (OSU) so that I could learn how to do sound empirical research, gain mastery in statistical thinking and methods, and qualify for a university teaching position. I have been at it since 1981.

Who were some of your mentors in graduate school and what were some of the most important lessons you learned from them?

My mentors in grad school included Gary Cronkhite, a teacher/scholar who started out planning to become a minister but who moved toward university teaching in the communication field. I’m glad he did. I met him when he came to OSU as a visiting professor in the late 1970s. He was a clear, deep thinker, with a vast command of the research on credibility and ethos. His training and competency in Aristotelian rhetoric and speech theory were impressive, nearly peerless. Surprisingly, his understanding of statistical methods was equally strong. We became friends and he welcomed our intellectual association. I solved some knotty problems with his help.

My association with Gary reemphasized what I already believed: that qualitative and quantitative ways of thinking about the world should not be mutually exclusive; in fact, separately tracking majors along such lines actually harms the entire enterprise. By contrast, when properly employed, each approach can enlighten the other. Unfortunately, in cases where institutions have decided to structure their communication programs into exclusive, separate qualitative and quantitative course requirements, severe problems are bound to develop. I think offering degrees that enable students to study one or the other approach without requiring both conveys wrongheaded views of the entire process of inquiry, and that is too bad.

Another mentor was my advisor. His most lasting lesson to me as a developing scholar was to understand that the research enterprise is ontologically a social process from start to finish, and he was a social genius at making that part of his own research practice. He also led by example. The way he did research was in every way social, from the development of questions of interest to performing literature reviews, gathering data, and conducting analysis. He also conveyed the idea that if you engage in the research process and are not having fun, you are doing it wrong. His teaching was infectious as a result.

Joshua Meyrowitz’ thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how did the role of university professor evolve since you were an undergraduate student?

I agree that media change things, but unfortunately it is not always clear how. For example, even the greatest linguist cannot predict my next utterance. However, in general terms, I think it is obvious that the authoritarian role of the university professor has softened in the wake of the digital transition; things have shifted toward being more collaborative with students. The ability to bring students into the research process is more feasible through the help of speedier and more complete distance learning resources provided by the internet, including greater access to many once-unavailable databases. At the same time, having such tools and resources in no way releases teachers from doing their jobs: namely demonstrating clear thinking, mastery of their field, teaching students how to develop questions that are truly worthy of contemplation, and doing all of that with an ethical spirit in how one treats others in the process.

Inside the classroom, how do you manage to command attention in an age of interruption characterized by fractured attention and information overload?

I hold up my cell-phone on the way into the classroom and request that the students do what I do: I turn mine off and exclaim that if they can power down when they go to the movies, they can do so in class. I also request that students close their laptops when I see them becoming distracted. I mention in the syllabus that a final course grade may be significantly reduced for lack of participation. Inattention can be identified by calling on a student to answer a question, seeing that they cannot, and then asking them to repeat the question that has been asked and seeing again that they cannot. In such cases, I have resorted to the dreaded florid swirl of my arm (pen in hand) as I make a mark in the roll book. Usually that is enough to solve the problem.

As for my personal ability to command attention, I do that by asking questions, and by using appropriate presentation skills (e.g., proper use of vocal range, gesture, facial expressions) when I speak. If done correctly, such techniques increase audience involvement, which helps in meaning acquisition. I also make every effort to keep vocal pauses (e.g., “um,” “uh”) to a minimum, and to keep lectures brief and clear.

In 1964, Marshall McLuhan declared, in reference to the university environment that, “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?

Much is under threat as the digital transition alters the way information is created, stored, edited, retrieved, transmitted, shared, and assessed for its veracity. In some cases, the identity of end-users must be authenticated, as when examinations are taken online. Biometric methods of authentication (e.g., iris-scan technology) will be used more in the future as universities realize they must provide ways to validate student achievement in order to maintain accreditation. The integrity of the educational process is at risk when bad actors take advantage of weaknesses in the system to cheat or misrepresent information or plagiarize. Everyone loses when the system is compromised. Outside the university setting, there are increased opportunities for piracy, copyright infringement, corruption of information, and a host of other issues that challenge the integrity of information, content, and authorship of works in the online world.

In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. Do you agree with the author? What are your arguments for and/or against academic tenure?

I am tenured and support retaining tenure for my future colleagues. Why? Because it is one of the ways to safeguard academic freedom. It releases controversial professors from a significant part of the worry associated with losing a job for taking positions that may be socially unpopular or critical of the institution for which they work. Of course, lazy professors may take advantage of tenure by becoming non-performers, which detracts from the scholarly enterprise, including teaching. Fortunately, there are a number of ways to deal with slackers: I have seen such nonperformers get undesirable teaching schedules, unattractive committee assignments, low or no merit raises, etc. Such responses encourage deadwood profs to “shape up or ship out.” The folkways and practices of the shop one works for have many ways of conveying displeasure.

What advice would you give to young graduate students and aspiring university professors, and who are the thinkers today that you believe young scholars should be reading?

My advice to young grad students and aspiring university professors is to keep reading and writing, engage in dialog at conferences with respected experts, keep asking questions, and then develop methods for answering them; talk to students and colleagues from other disciplines. Watch other professors teach. Audit classes in which you have an interest. Stay away from overly burdensome service assignments before tenure. Hone your teaching and presentation skills. Pay close attention to the classic authors and writings of your field.

In my field, knowing the history of ideas is in my view essential. The classics include the works of Aristotle (especially The Rhetoric and The Poetics), and rhetorical works by several Greek and Roman classical orators and scholars (e.g., Plato, Cicero, Quintilian). In the modern period, the history of ideas continues with the British Empiricists (Hume, Mill, Locke, Russell), and the major Continental philosophers (Descartes, Kant, Nietzsche, Sartre, Wittgenstein, Husserl). In the philosophy of science, I recommend the edited collections of May Brodbeck, and works by Karl Popper, A. J. Ayer, Arthur Pap, Abraham Kaplan and Israel Scheffler; a fine, brief treatise on experimental and quasi-experimental design by Campbell and Stanley will repay close attention; a good research methods text is Fred Kerlinger’s book. In digital media, try to gain some understanding of the physical nature of the infrastructures that make digital media and messages possible (computers, telecommunication, and broadcasting systems, all of which are now interwoven).

On the social science side of things, students should be familiar with the theories of Skinner, Freud, Piaget, Durkheim, Mead and Parsons; critical theorists worth one’s time in my view include works by Susanne Langer, Cassirer, Habermas, and Gregory Bateson; to learn content analysis methods, I like Holsti, Krippendorff and Berelson. Carl Hovland and Clark Hull are worth some time for seeing how others have conducted true experiments involving learning and persuasion.

Your research focuses on the transition from analog to digital platforms across media and telecommunications institutions. What are some of the authors, theories and schools of thought that inform your research?

I believe that to fully understand digital media, it helps to have a basic understanding of the physical nature of radio energy, telecommunications systems (e.g., how the telephone works), and computers. One needn’t become a physicist or electronics engineer, but it helps to have some appreciation for the technical aspects of the devices that make it possible to encode and send messages around the globe at the speed of light.

Once messages are encoded and transmitted, they must be consumed by an audience to have an impact—it is therefore essential to have some understanding of how symbolic content is apprehended by audience members. At the symbolic level, we transcend the realm of the physical and we are propelled into a cultural context.

Authors who can help in understanding the technical (physical and mathematical) aspects of media systems include Shannon and Weaver, George Boole, and Charles Peirce. My book Deciphering Cyberspace mentions many others who have contributed to our understanding of this part of the field.

As for the authors whose writings inform the symbolic aspects of messages, and audience analysis concepts, I recommend speech act theorists like Austin and Searle, linguistics scholars like Korzybski and Hayakawa, and meaning theorists like Ogden and Richards, Osgood, Suci, and Tannenbaum, and Charles Morris to name a few.

For the history of American broadcasting, students should read Erik Barnouw, and similar works by Gleason Archer. I have also found the ideas presented in American Pragmatism to be helpful, so I’d recommend the works of John Dewey, and Charles Peirce, once again. If students want to learn more about the persuasion process, I’d recommend the historical overviews of James L. Golden, in which many other scholars are recommended.

You are also actively involved in the assessment of distance learning and the use of digital media in distance education. How have student-professor relations changed with the advent of this new media ecology we dwell in?

As I stated earlier, the education process and the relationship between teacher and student are more collaborative and symmetrical, less authoritarian, more democratic, more a shared act of co-discovery, but with guidance from the professor-as-advisor. I find my most useful work lies nowadays in helping students invent well-formed questions, those amenable to yielding answers that can make the world a better place, questions that are worthy of contemplation.

In his 1969 interview with Playboy Magazine, McLuhan made the following statement with regards to modern education: “Because education, which should be helping youth to understand and adapt to their revolutionary new environments, is instead being used merely as an instrument of cultural aggression, imposing upon retribalized youth the obsolescent visual values of the dying literate age. Our entire educational system is reactionary, oriented to past values and past technologies, and will likely continue so until the old generation relinquishes power. The generation gap is actually a chasm, separating not two age groups but two vastly divergent cultures. I can understand the ferment in our schools, because our educational system is totally rearview mirror. It’s a dying and outdated system founded on literate values and fragmented and classified data totally unsuited to the needs of the first television generation.” Obviously, much has changed in terms of educational technologies since McLuhan’s time, but it seems as though the main claim behind this argument (his notion of a “rearview mirror”) remains valid even today. Would it be fair to say that the educational establishment is reactionary in that it always seems to lag behind technological innovation?

Technology development, it seems to me, often brings with it the need to adjust our legal and policy considerations, and to revise our cognitive understanding of how old and new technologies can and should be used to serve culture and society; we often adopt new uses for old technologies. Technology often brings unforeseen developments, even to the very institutions and systems that invented it in the first place. In short, the entire process is much more virulent, if you will, than anything McLuhan said in the quote you have selected. The problem with the McLuhan quote is that he characterized the developments he was writing about as constituting only “two vastly divergent cultures.” What he should have said, as the history of technology has shown, is that there have been a number of paradigm shifts, many more than two. For example, we are now experiencing internet 2.0 or 3.0; such a locution leaves room for a next iteration, namely 4.0, 5.0, etc. Of course, through it all, the one thing that has not changed is the pre-eminence of the idea that “content is still king.” From oral tradition to print, from analog to digital, from presses to cold-type methods of reproducing printed words and pictures, from photography on celluloid film stock (in dozens of stocks and speeds developed over decades, in both black and white and color) to purely digital image capture made possible by marrying cameras to computers, from ribbons of tape to hard-drive recording, the storage media and the delivery systems have changed, and those changes have wrought innumerable effects on the businesses that have either successfully or unsuccessfully adapted to the demands of new infrastructures. But people will not want to attend to media devices if there is no message worth attending to; without the creative spirit of artists presenting ideas that are worthy of contemplation, you have no audience.
Today’s audiences have many choices; it is up to those with creative talent to capture them with important, worthwhile, engaging messages, meaning those that actually get an audience’s attention.

In an article written in 2001, Whither Educational Technology?, Andrew Feenberg writes: “There is something about dialogue, and the active involvement of the teacher, that is fundamental to the educational process and that must be woven into the design of any new instructional tool. Educational technologies that lack an interactive component, such as televised courses and computer-aided instruction, have never succeeded in displacing teachers from the front of the classroom.” Do you agree with Feenberg?

Yes! Piaget’s work on child psychology underscores the importance of thinking, discovering, and hypothesis-testing about the world in social settings in the physical world where open-ended possibilities abound, and human agents get to use all of their senses and bodies-as-brains to expand their understanding and share new ideas. Let me illustrate what I mean with a concrete example. In a fancy computer game that engages players with goals like saving a princess, or escaping bad guys in a dangerous place etc., there are relatively limited possibilities for grasping the universals of human nature, or learning about relationships between antecedent conditions and consequent circumstances in human affairs. Possibilities for action in a computer game don’t really measure up when compared to the open possibilities for action that occur every day in the real social world. In short, chess programs can’t teach tennis; a program that computes taxes can’t make a tuna fish sandwich. And whereas success in a computer game can be had with a limited array of physical motions using a controller or mouse while staring at a screen, the real world may suddenly demand unforeseen shifts in the set of expectations and the actions required for solving problems that arise with respect to a desired project (e.g., a flat tire on the way to a wedding may require replacing the flat, calling for help, walking or running to the event location, or being delivered there by a good Samaritan etc.). That is, a great number of solutions not originally conceived of by the human agents involved in the project may emerge through a flight of inventive adaptation. Currently, there does not seem to be a viable substitute in the virtual world for actual social activity that teaches how to adapt to changing circumstances. That is the advantage of actual human dialog between teacher and student.

What do you see distance education evolving into in the next 10 years?

Distance learning (DL) is destined to be with us in ever-expanding roles over the next decade—that is my belief. I say this because there are good things DL can do for many who, for one reason or another, are unable to get to a live classroom. Some folks live too far away; some lack funds to afford tuition. Some welcome DL infrastructures being made available to them because they have physical limitations that make it difficult or impossible to attend traditional college classes. As DL advances, there will be growth in the quality and quantity of the offerings. Material will be vetted for accuracy and value, and be updated more often; hybrid classes (where some meetings will take place in more traditional settings) will supplement the curriculum. DL should be embraced by teachers because it opens up possibilities for reaching people who would otherwise be unable to get any instruction. Having said that, I do not buy the idea that DL is a perfect functional alternative or replacement for the traditional model of live instruction; it will be a long time before that will happen with any consistency. So I would continue to view DL as a supplementary alternative, but not a replacement for traditional instruction.

What are you currently working on?

My latest project is a prospectus for a new book; the working title is Social Media: Digital Politics. I look at the ways in which social media infrastructures that give us such things as Facebook, Twitter, and YouTube have altered the way political action unfolds. The inspiration for the book idea is the similarity I have noticed among the recent political events of the Arab Spring, the Occupy Wall St. Movement, and the eruption of flash mobs for both good and ill. I think it is a subject old scholars from the pre-digital world (like Innis and McLuhan) would spend some time trying to understand in light of things they have said about how the spatio-temporal aspects of communication systems change the way nations, cultures, groups, and individuals use the opportunities and changes wrought by the digital transition.

© Excerpts and links may be used, provided that full and clear credit is given to Len Shyles
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2012). “Interview with Len Shyles,” Figure/Ground. March 31st.
< >

Questions? Contact Laureano Ralón at

Interview with James C. Morrison

© James C. Morrison and Figure/Ground
Dr. Morrison was interviewed by Laureano Ralón. March 29th, 2012.

James C. Morrison is a Lecturer in Business Communication in the F. W. Olin Graduate School of Business at Babson College in Wellesley, Massachusetts. His research interests include the societal impacts of new communications media, development of hypertext and hypermedia systems in higher education and research, information management in high-tech environments, and national media policy. As a consultant, Morrison has conducted writing workshops for Darling Consulting Group, in Newburyport, Massachusetts; Logistics Management Institute, a federally funded research and development center in Alexandria, Virginia; Brown University’s Writing Center; Simpson Gumpertz & Heger, Inc., consulting engineers in Lexington, Massachusetts; and CuraGen Corporation, in Branford, Connecticut. He has also worked as a developmental editor for innovations proposals entered in the Better Government Competition of the Pioneer Institute in Boston. He is the newly elected president of the Media Ecology Association.

How did you decide to become a university professor? Was it a conscious choice?

As a teenager I thought about a number of careers—journalist, musicologist, high school teacher—but once in college I decided that I wanted to be like my English professors, some of whom I almost idolized. I loved the whole ambiance of college, which to me was epitomized by Sanborn Library at Dartmouth, seat of the English Department. Lots of tweed and hiking boots, a professor’s dog or two lolling around the offices, cozy armchairs in book-lined niches, and tea and sherry on Thursday afternoons, talking with anyone and everyone in the department. I affected pipe smoking and dreamed of joining similar ranks after graduate school. For a blue-collar, full-scholarship kid who was the first on either side of his family to attend college, it was a choice both conscious and subliminal, as it involved the search for a sense of self and identity. I had visions of becoming the next Edmund Wilson. Fond dreams of youth.

Who were some of your mentors in graduate school and what were some of the most important lessons you learned from them?

Because my graduate school experiences were divided among three institutions, at none of which I did a dissertation, I never really had a Mentor. Although I’m sure Neil Postman didn’t consider me his Telemachus, his influence had the greatest impact on my scholarly career. As a beginning instructor in 1969, I read Teaching as a Subversive Activity, and it profoundly transformed my thinking about teaching and learning, as well as my understanding of both McLuhan and Plato, whom I had started reading in high school (extramurally, of course).

I didn’t study in the Media Ecology program at NYU, simply because I wasn’t aware of it when I was looking to complete my graduate studies after teaching at the University of Hawai’i with an M.A. Had I picked up the NYU catalogue and found the program at the Steinhardt School I would assuredly have applied, as I was looking at interdisciplinary programs, and seeing Neil’s name would have been practically all I needed. Instead, I went to Princeton in comparative literature and studied with Robert Fagles, who shared with us his early translations of the Oresteia and the Aeneid. His ground-breaking translations of the Iliad and the Odyssey, and his final version of the Aeneid were to come only years later, but his most memorable and influential course for me was that featuring parallel readings of the Odyssey and Ulysses. Fifteen years later, when I was teaching at Harvard’s Kennedy School of Government and simultaneously pursuing a mid-career Master’s in Public Administration, Neil went there as a Visiting Adjunct Professor, and I leapt at the chance to study with him.

He was invited there by Marvin Kalb, who was then Director of what is now the Joan Shorenstein Center on the Press, Politics and Public Policy, and for whom I would do research on debates in American presidential elections. Marvin had been impressed by Amusing Ourselves to Death, and Neil taught a course titled “Television and Political Change.” At the time, Neil was writing Technopoly, and the course involved all of the former book and elements of the latter.

While Neil was very clear about his perspective on communication and culture, he always stressed the process by which he arrived at the understandings he was sharing, and he invited his students, collegially, to critique, revise, and even contradict his assertions. That is, he involved his students in his process of discovery, and in that process they learned not only about the subject, but also about themselves and their own process of thinking.

In the Prologue to his later book Teaching as a Conserving Activity, Neil wrote that during his work with Charles Weingartner, “Charlie suggested early in our collaboration that the last sentence of each of our books should be ‘Or vice versa’.” Although Neil ultimately rejected the suggestion lest it might falsely imply a lack of seriousness, his even entertaining it represents, to me, an important principle: to be sure of your own vision, but to accept that, as fallible human beings, none of us has a franchise on the absolute truth. It is our role as educators to lead students to their own earned truths.

I owe the direction of my teaching and learning career to Neil, and I have had the privilege since then to work closely with many others who had completed their graduate work with him at New York University. Neil helped me especially to understand Marshall McLuhan, whose ideas are all too easy to discount and dismiss, but whose insights into the integrating power of the ancient rhetorical tradition and its application to the contemporary media landscape are fundamental.

But just as important, if not more so, was the inclusion of many other parts of what I like to term media ecology’s “loose canon”: the work of Walter Ong, Eric Havelock, Elizabeth Eisenstein, and Jacques Ellul, as well as pointers to Daniel Boorstin, Edward Hall, Harold Innis, Julian Jaynes, Lewis Mumford, Susan Sontag, Norbert Wiener, Joseph Weizenbaum, Benjamin Lee Whorf, and Henry Perkinson. It was the media ecology program in a nutshell. Except for the work of Wiener and Mumford, I had not been exposed to any of this before, and to me it was a revelation. I have been a media ecologist ever since. To this day, I mentally kick myself for not exploring NYU’s catalogue back in 1973, but I remain continually grateful for Marvin Kalb’s overture to Neil, which made the rest of my academic career an adventure with only broadening horizons.

Joshua Meyrowitz’s thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how has the role of university professor evolved since you were an undergraduate student?

I think that the role of university professor changed from a calling to a career, starting in the 1970s. That’s when the leading edge of the baby boom wave started breaking over the academy, and all of a sudden both the demand for and supply of young academics increased, seemingly exponentially, though probably not literally. Accompanied by the knowledge explosion, this demographic tsunami seemed to turn what was once a gentlemanly and ladylike dedication to the liberating arts into an urgent need to stake one’s claim in the competition for academic placement and preferment. When I was an undergraduate, it was not uncommon, and went unremarked, for one’s tenured professor to have an M.A. as his or her highest degree, though this was not true of the younger faculty, for whom the Ph.D. was starting to be de rigueur. Still, everyone, both teacher and student, was addressed as Mr., the affectation of “Dr.” among nonmedical degree holders not yet having taken root.

But the difference was not in the level of degree so much as in the change to more of a careerist emphasis. One younger English Renaissance faculty member at Dartmouth observed to me about one of his coevals in the French department, “If I were writing an allegorical play, [the person in question] would serve as my model for the character of Ambition.” Not that such proclivities were unknown before this, but such would soon become the rule rather than the exception as the ‘70s “progressed” under the influence of deconstructionism and continental critical theory, with the consequent need for finding safe ecological niches in the increasingly Spenserian (Herbert, not Edmund) academic environment. In the process, the implicit humanism that preceded this paradigm shift was replaced by a zero sum game, in which I saw many able people squeezed out of chances for positions for which they were perfectly qualified. And I’m not talking about only top-flight liberal arts colleges and Research I universities, but those at all places in the academic spectrum.

This was one of the main reasons for my leaving a Ph.D. program and gaining experience in business to see if that path offered fresher air. After pursuing that direction for 10 years I was able to return to academia with practitioner credentials instead of a doctorate—though not to take its place. I believe I was enriched by that experience, for it gave me a pragmatic basis for teaching that no number of years staying sheltered in the academy could have provided. I actually had some basis for teaching people about communication, as opposed to just trying to lead them through an appreciation for words on paper. Of course, having made that choice, I’m sure you wouldn’t be surprised that the poem best reflecting my feelings about the matter is Robert Frost’s “The Road Not Taken.”

Inside the classroom, how do you manage to command attention in an age of interruption characterized by fractured attention and information overload?

I teach all my courses using the case method, with the emphasis on student-to-teacher and student-to-student interaction. As a result, my students not only don’t have the chance to be distracted, but they don’t have a reason, either. I got into this mode of teaching at Harvard Business School, where I returned to academia to teach management communication, and have used it ever since, even when not using purpose-written cases. The 1940 Harvard Alumni Bulletin article by Charles I. Gragg titled “Because Wisdom Can’t Be Told” describes the instructional use of the case method at HBS. My exposure to this article while there was serendipitous, for its first sentence articulated a philosophy of teaching almost identical to that of Postman and Weingartner in Teaching as a Subversive Activity: “Students must be accepted as the important part of the academic picture.”

While students are obviously the focus of one’s energies, saying that students are the important element in education is a qualitatively different assertion. What Gragg promotes, and what I have consistently tried to achieve in the classroom, is “true intercommunication” among teacher and students alike. That is, I see my role as empowering the student to engage intellectual issues and problems actively and, under guidance, to learn the suasive arts of critical thinking and expression. In sum, true learning results when one teaches oneself and others how to learn. This is the measure of one’s success.

To be sure, the case method needs to be rightly managed to be effective, and poorly managed it can lead to confusion and uncertainty. And, as Gragg points out, if it becomes merely a tool for the teacher to indoctrinate students by making them guess what the teacher is thinking, it is a perversion not only of the case method but of education itself. But whatever methods are used in class, their proper aim is to foster self-reliance, independent thinking, and learning how to learn for a lifetime. It is a truism that the aim of education is not to teach students what to think, but how to think, but one that is all too easy to forget in the fruitless pursuit of the absolute right answer. I’m more interested in the right question.

My students are free to have laptops open during class and use them for whatever purposes they find best, whether it’s taking notes, checking on material, finding alternative perspectives, or even finding that great bargain on an iPad on eBay. But none of that absolves them from being engaged in the conversation, and everyone is graded on class participation. If they want, or try, to zone out, they will be found out, and graduate students do quite a bit of self-policing. In one of my classes at Babson College, Corporate Communication in the Digital Age, I require students to use their computers and other interactive devices during specific class times to complete class work. Once you start assigning people to engage in distractions, they turn out to be distractions no longer. It also helps that I’m teaching graduate students, the vast majority of whom are adult evening MBA program students whose maturity is well established. But this strategy can work as well with undergraduates, especially if you designate open-shell and closed-shell periods, with group work, impromptu reports, role plays, and so forth, and only mini-lectures as needed. I don’t contend I’m doing anything new or original. The important thing is to keep things moving and keep students thinking and participating, whether with a partner, a group, the teacher, or the class as a whole.

Also, I make digital distraction one of the topics of my courses, so that we have opportunity to explore the issues, share experiences, exchange strategies, and propose solutions. There is a growing literature about this, and I draw from such sources as Nicholas Carr’s The Shallows, William Powers’s Hamlet’s BlackBerry, Jaron Lanier’s You Are Not a Gadget, and a variety of articles.

In 1964, Marshall McLuhan declared, in reference to the university environment that, “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?

It is assuredly under threat—it can’t help but be, just as the medieval university and Church (they were one and the same) were threatened in the age of the printing press. But universities adapted and survived, only to forge ever stronger bonds with the societies they served. I think the modern university is at least as resilient, particularly because our economic fabric is an interweave of threads comprising business, government, academia, and nongovernmental entities promoting research and social investment. For one of those strands to shred would entail the shredding of some part of all the others, and I don’t think that media alone can do that. I’m not a technological determinist, at least not in the reductionist sense that would say that media change is the only determinant of societal change.

To look back a bit, the modern university—that is, the research institution founded on the compartmentalization of knowledge reflected in separate departments and “disciplines”—was formed only in the late 1800s on the basis of the Wissenschaft model of German education. All this is splendidly laid out by Alvin Kernan in The Death of Literature. The printing press was certainly a major agent of change (to borrow from Elizabeth Eisenstein) in this process, but it was a process that only confirmed the nature of the university. This was because the new model, or paradigm, was overlaid on a pre-existing paradigm of the medieval craft guild—a self-perpetuating Gemeinschaft of scholars operating under the rules of apprenticeship, with the authority to choose its members and to exclude from its body any who did not abide by its rules. This has had its pluses and its minuses, but at least it allowed the medieval institution to adapt to changing conditions.

When printed books were introduced into universities, they didn’t undermine the educational processes going on there, but accelerated them. While, as Eisenstein points out, printed books made it possible for students to outpace their teachers, and created autodidacts, these trends only promoted new markets for higher education. Rather than replacing or displacing traditional education, these new channels of learning created new opportunities for access to institutions of higher learning. But the printing press helped to foster a burgeoning middle class, and it was this group that benefited most from the interest on the part of the state in widening opportunities for entry into the growing bureaucracies.

New information technologies don’t destroy or displace the institutions that have fostered or thrived as a consequence of the older learning technologies—they only transform them and force them to adapt. The fact that electronic means of communication are recapitulating modes of learning typical of medieval education (digital rhetoric as a recursion of the trivium; light through, as with stained glass windows and illuminated manuscripts, rather than light on, as with print) should mean that the medieval structure of institutions like the university will be reinforced, rather than undermined. It is the hierarchical corporation, the Gesellschaft, which was the product of fragmentary, mechanical culture, that is under siege, along with its corollary, the nation state, at both the macro and micro levels.

The fact that tuitions have risen faster than the cost of living as of late is a valid concern, but there are steps that can be taken to adjust the effective cost of education. We now assume that everyone has the right to a higher education, whereas sixty years ago only a minority of the population had a high school education. My mother’s father, who graduated only from high school in the early twentieth century, was an insurance underwriter and an autodidact who taught me the foundations of much that I know today. When I entered college, my father was earning $5000 a year; I got there on scholarship, and 60% of my classmates, in a so-called “elite” institution, were on scholarship as well. That situation continues to this day. Unfortunately, the political situation is such that state governments have had to renege on their once-solid commitment to making public higher education affordable, owing to the power of intransigent taxophobes, calling into question our self-image as a meritocracy. And at least some of the online diploma mills have been suspected of being mere pretexts for siphoning federal loan funds from unsuspecting students who are offered no realistic chance to graduate within a reasonable time frame. If the university is seriously threatened, it is from these quarters, and not from impersonal media.

Applications to residential colleges, especially the most “elite” institutions (who see it as part of their mission to diversify their student bodies as much as possible), are rising, not falling. The modern university is not being undermined, but is learning how to amortize its fixed assets more completely by branching out into electronic ventures that complement its mission, and by broadening its reach to wider audiences. Brick-and-mortar institutions are not going away; rather, they are learning to adapt to a new environment, much as the medieval universities learned to adapt to the new environment of the printed book. While fly-by-night all-electronic vultures may get the press, it is the established “brand names” in education that will prevail, because they carry the established reputations necessary for recognition in the ephemeral electronic landscape.

I’m confident that, to borrow from Mark Twain, the news of the university’s demise has been greatly exaggerated. We may well have, now and in the future, universities without walls, but that simply means that we’ll have more-inclusive, well-established universities.

In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. Do you agree with the author? What are your arguments for or against academic tenure?

Unfortunately, Francis Fukuyama jumped the gun in declaring the end of history twenty years ago, and in this article he shows lack of perspective once again. While he may be the prime exhibit for his own argument, that doesn’t make him right. All in all, I would have to borrow from Sir Winston Churchill’s judgment that democracy is the worst system of government, except for all the others. Unless there is some way of keeping educational administrators from instituting a downward wage spiral, the tenure system is all we have right now to keep them from creating a Hobbesian system in which the life of most educators at most colleges and universities would be nasty, brutish, and short. We may even be at that point now, but eliminating the tenure system would, I believe, only accelerate that process and put more unaccountable power in the hands of administrators, many of whom have never graced a classroom. And many of those who have done so are prime exhibits for the Peter Principle.

However, the main problem isn’t so much that people are promoted until they reach their level of incompetence, but that the administrative ranks are increasingly dominated by “professional” administrators who may have fancy degrees in academic administration but who couldn’t teach their way out of a paper bag. The explosion of cost in higher education is not due to excessive compensation given to master teachers and scholars, but to the proliferation of non-teaching and non-scholarly administrators, who burden higher education not just with their inflated salaries, but also with expensive schemes for “marketing” their institutions.

Such schemes involve not only wildly inflated administrative costs, but also building programs that put great burdens on endowment and tuition in the effort to attract students with meretricious amenities and “facilities” that facilitate little else than non-academic, and even anti-academic, distractions. I admire those institutions that resist this trend, but they are becoming fewer and farther between now that many colleges in the middle-to-lower academic ranks are seeing their route to the promised land in beefing up not their academics, but their football team. And if a food court ends up beefing up everyone else on campus, who’s going to complain at Homecoming and Reunion Week, when the administrative minions are ready and eager to solicit and accept flurries of checks? After all, that’s how their “productivity” is determined. And who cares about the graduation rate, as long as you can make it someday into the “Sweet Sixteen”? In academia these days, madness isn’t confined to March.

Now, I say this as someone who has never been on a tenure track and probably never will be (especially now!), so I have no skin in this game. But if I were to proffer an improvement in the tenure system, it would be along the following lines.

First, keep the current tenure system, but also keep the administration’s noses out of it. No administrator, even the president, gets to overrule the decision of the department and the college-wide promotion and tenure committee. If it’s an extraordinary situation, then perhaps a process to include a faculty senate would be in order. But the general principle should be complete faculty governance in academic matters. The administration takes care of the finances, while the faculty takes care of the curriculum.

Second, everyone comes up for review every six years. It would take a super-majority of votes by the department, the P&T committee, and the faculty senate to deny renewal of tenure to a senior faculty member. That would weed out any free-riders and provide sufficient protection for anyone’s free speech rights. Those rehired would get a sabbatical, and those rejected would be given a one-year extension.

Third, student evaluations would be neither ignored nor the primary or sole criterion for tenure or rehiring consideration. A menu of criteria including teaching, scholarship, pedagogical creativity, and service both on and off campus would be taken into consideration in any promotion or tenure decision. Faculty would be given three-year reviews, with the opportunity to show improvement in the following three years, if necessary. That’s the way it is where I currently teach, and I think it’s a model to emulate. One provision I would add would be that no one could be terminated before six years without due cause. But good luck with having that enforced.

Who would monitor such a process? That’s a very vexed question. Accrediting bodies visit only every so often, and if a college or university were to deviate, there’s no “tenure police” to keep them on the strait and narrow. But that’s the way it is today, so at least we wouldn’t be any worse off.

What advice would you give to graduate students and aspiring university professors, and who are the thinkers today that you believe young scholars should be reading?

My general advice to those entering the academic communication ranks is first to get as broad an education in the liberal arts and sciences as possible. By that I mean particularly languages, literature, classics, philosophy, history, Eastern and Western religion, anthropology, archaeology, psychology, economics (a branch of psychology), mathematics, physics, and biology. In the major, get as firm a grasp of real communication theory as you can, especially rhetoric and semantics. By “real” theory, I mean pragmatic theory, not tendentious, Cloud Cuckooland ideologies whose aim is to indoctrinate rather than illuminate. And don’t starve your education with courses in what I call “digitalia”: digital stuff you’re already immersed in. If you want to be a social media wizard and Web 2.0 guru, knock yourself out, but don’t pretend that that in itself will qualify you to teach communication. Any place that will hire you on that basis alone is a training ground, not an educational institution worthy of the name. Be a well-rounded person, not just a one-trick pony.

Some contemporary thinkers young scholars should be reading: Robert Albrecht, Corey Anton, Susan B. Barnes, Nicholas Carr, Peter K. Fallon, Thomas J. Farrell, Thomas Friedman, Raymond Gozzi, Jr., Paul Grosswiler, Jane Healy, Jaron Lanier, Paul Levinson, Lawrence Lessig, Robert Logan, Casey Man Kong Lum, Robert MacDougall, Joshua Meyrowitz, Steven Pinker, William Powers, Douglas Rushkoff, Paul A. Soukup, SJ, Lance Strate, Sherry Turkle, Kathleen Welch, Maryanne Wolf, Jonathan Zittrain . . . with apologies to anyone who thinks they belong here but whom I might have left out.

What attracted you to media ecology generally and the work of Marshall McLuhan specifically?

I remember when Understanding Media came out when I was a freshman, what a splash it made, and what arguments it engendered. I didn’t pick it up immediately, but in a sophomore English class I remember Chauncey Loomis, whose eighteenth-century English literature course included Ian Watt’s The Rise of the Novel, recommending The Gutenberg Galaxy as a work whose greater value was being overshadowed by UM’s notoriety. As a result, I picked up GG first and found myself totally at sea.

I was completely unprepared for encountering McLuhan’s idiosyncratic treatment of King Lear at the beginning, often befuddled in the mid-portion, and absolutely bamboozled by the discussion towards the end of The Dunciad, which I had not yet read. Not only had I not been exposed to much of the historical material he discussed and alluded to, but also I was a callow youth who possessed only an inkling of the literary sophistication that McLuhan had achieved in his studies at Cambridge University. As a result, my marginal comments in the popular paperback copy I acquired and struggled through were remarkable only for their ordinariness. Still, those parts of the discussion relating to contemporary media and contrasts between literacy and orality resonated somewhat, and I decided to hang in there to see if I could catch up.

The following summer, I picked up Understanding Media and found much more to sink my teeth into. Looking at the notes in my popular format paperback copy, I am struck by my obtuseness. The problem wasn’t so much my callowness as my sophomoric certitude. The literal definition of “sophomore” is “wise fool”: i.e., one who thinks he is wise but is alone in this opinion. My objections to McLuhan’s probes were based not so much on my wisdom as on my imperception of his method: that he was probing and seeking to elicit original perceptions on seemingly familiar topics, rather than presenting a smoothly linear discourse aimed at definitive proof. My understanding of discourse analysis would not come until some years later, so my reactions can be placed in that context. I was both strangely attracted and wildly repelled.

As I said previously, reading Postman and Weingartner’s Teaching as a Subversive Activity later on gave me a greater appreciation for McLuhan’s method over and above the references to pop culture, to which I could of course “relate,” to borrow a cliché of the period. But as my education advanced in breadth and, one hopes, depth, I think I gradually came closer to becoming the ideal type of reader for McLuhan; the Joycean elements in his work served as useful anchors, since Joyce has always been a major focus for me. When I took Neil’s course at the Kennedy School I was finally provided with the intellectual framework within which to really appreciate McLuhan and his contribution to media ecology. For I could then see the intellectual tradition informing his thinking and from which he emerged for me as a synthesizer.

What I have discovered in media ecology is a coherent narrative for the transformation of cultures that none of my prior education had provided. I had been searching for an interdisciplinary way of connecting the strands of knowledge in a coherent pattern—something the Wissenschaft model of education has made difficult, indeed. What made this possible for me was essentially McLuhan’s retrieval of the unified curriculum of the ancient trivium, which had been the basis for Western education up to the heights of the industrial age. And it was McLuhan’s mosaic method of tessellating the pattern of culture into fractals that made it possible for me to reconnect it on my own. Perhaps this is a prime example of one of McLuhan’s laws of media: a reversal achieved by pushing something to its extreme. This clearly was the method in his mod-ness.

In a recent lecture, Paul Levinson said that Fordham University is a much better place to study McLuhan and Media Ecology than Ivy League Universities such as Columbia University. McLuhan’s brief appearance in the Movie Annie Hall comes to mind. In your experience, what has been the reception of Media Ecology at places like Harvard, Columbia or MIT throughout the years?

Paul Levinson is absolutely right in saying that the last places you will see appreciation for media ecology are the elite institutions, which are so heavily invested in the status quo. I discovered this personally in 2006 when I co-coordinated the MEA convention at Boston College with Don Fishman. All of my overtures to the powerhouse institutions in the area fell on deaf ears. This shouldn’t be a mystery to anyone who has read Technopoly, or knows of the obstacles thrown in Milman Parry’s path to completing his doctorate at Harvard, or of the resistance to Eric Havelock’s work at Yale. A program director at MIT told me that he would never support an invitation to Neil Postman to speak there, because he despised much of his work.

Media ecology lives more strongly not only at Fordham but also at such places as Curry College, in Milton, Massachusetts, where Rob MacDougall is developing a new graduate program steeped in media ecology; and Manhattan College, in Riverdale, New York, where Thom Gencarelli is establishing an environment of appreciation for media ecological approaches and is hosting our upcoming convention in June. There are many other places where individual media ecologists are staking out plots of their own and working to establish fertile ground.

What these initiatives have in common is that they are taking root in places dedicated to the educational pragmatism exemplified in the work of Charles Sanders Peirce, William James, John Dewey, and George Herbert Mead. These are some of the intellectual foundations of media ecology, and it is ironic that those very institutions that fostered these thinkers have abandoned their principles for the aridities of logical positivism, scientism, critical “theory,” or the blandishments of direct corporate sponsorship and, hence, ownership of research.

You’ve been recently elected President of the Media Ecology Association. What goals would you like to accomplish during your mandate and where do you see the organization heading in future years?

My first goal is to guide the organization through the process of reforming its organizational structure to be more in line with those of the major communication associations with whom it is affiliated. In the MEA’s first decade, the founders developed a model of association that suited a special interest group closely resembling a family or clan. However, as the organization has grown in scope and reach, we have recognized that we are achieving, as was the intention from the beginning, an international presence and size requiring an organizational structure that both encourages and requires a wider degree of participation by its regular membership.

We can’t continue to depend upon a relatively small circle of founders and their immediate associates to keep the organization running. Quite frankly, advancing age and personal responsibilities are a factor as well. While we have been gratified by our ability to infuse the Board of Directors with new and younger blood, some of us are feeling the effects of years of dedication and assuming a variety of roles demanding time away from our professional commitments and families.

Our affiliate organizations have structures that provide for a regular and automatic line of succession from Vice President-Elect through President, and that is the model we have appointed a committee of the Board of Directors to work towards. We will be developing a proposal along these lines to present to the General Business meeting at our upcoming convention, for comment and suggested refinement. Once we have developed the final proposal, we will be putting it on the election ballot in the fall. If the proposal passes, we will have a transition year under something similar to our current constitution, to prepare for elections under a new dispensation.

Beyond this, we are continuing our efforts to broaden the reach and scope of the association, both domestically and internationally. We have made great progress along these lines, with 255 active members and associates in 21 countries, over 700 subscribers to our electronic mailing list, and over a thousand records in our database of people we consider friends. Interest in hosting our convention has been expressed from a number of institutions in not only the United States, but also Brazil, Spain, Italy, and Israel. Our last convention was in Canada, at the University of Alberta, and several years ago we were hosted by Tecnológico de Monterrey, Estado de México, near Mexico City. Next year we will be at Grand Valley State University in Grand Rapids, Michigan.

This year’s convention will have as its featured speakers Jaron Lanier, author of You Are Not a Gadget; Sherry Turkle, author of The Second Self, Life on the Screen, and Alone Together; Douglas Rushkoff, author of ten books, including the recent Program or Be Programmed: Ten Commands for a Digital Age, and producer of three Frontline documentaries, including “Merchants of Cool,” “The Persuaders,” and “Digital Nation”; and Terence P. Moran, Professor of Media, Culture, and Communication at New York University, one of the three founding members of NYU’s media ecology doctoral program, and author of Selling War to America: From the Spanish American War to the Global War on Terror, and Introduction to the History of Communication: Evolutions and Revolutions.

What are you currently working on?

I am developing a presentation on James Joyce and media ecology for a panel at the upcoming meeting of the Eastern Communication Association in Cambridge, Massachusetts, near the end of April; and a review of Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains, for Explorations in Media Ecology (EME), our journal. I’ll also be making a presentation on Carr’s book at our upcoming convention; I believe it is the most significant volume in media ecology since Postman’s Amusing Ourselves to Death. It points to empirical research on functional magnetic resonance imaging (fMRI) that confirms McLuhan’s claims that experience with electronic media has effects upon brain structures and functions that significantly differ from those established by other communication media, including face-to-face communication and print.

© Excerpts and links may be used, provided that full and clear credit is given to James C. Morrison
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2012). “Interview with James C. Morrison,” Figure/Ground. March 21st.
< >

Questions? Contact Laureano Ralón at

Interview with John Caputo

© John Caputo and Figure/Ground
Dr. Caputo was interviewed by Laureano Ralón. March 27th, 2012.

Dr. John Caputo is Professor and Chair of the Master’s Program in Communication and Leadership Studies at Gonzaga University and the Walter Ong S.J. Scholar. He founded the MA Program in 2004. Dr. Caputo earned his Ph.D. from the Claremont Graduate School and University Center. He has been teaching communication courses for more than 30 years and has appeared on radio and television news and discussion programs. His areas of expertise include communication theory, intercultural and interpersonal communication, and media and social values. He is the author of seven books: Effective Communication Handbook; Communicating Effectively: Linking Thought with Expression; Dimensions of Communication; Interpersonal Communication: Competency Through Critical Reasoning, co-authored with Bud Hazel and Colleen McMahon; Public Speaking Handbook: A Liberal Arts Perspective, with Bud Hazel; McDonaldization Revisited: Critical Essays on Consumer Culture, which he co-edited with Mark Alfino and Robin Wynyard for Praeger Press; and his newest book, Effective Communication. John Caputo has written more than 25 articles in professional journals and has been honored as a Visiting Scholar In-Residence at the University of Kent at Canterbury, England. Dr. Caputo directs the Gonzaga-in-Cagli Project, a cultural immersion multi-media program in Italy each summer. He has been honored with Master Teacher Awards by the Western States Communication Association and the University of Texas at Austin and most recently received an Exemplary Faculty Award from Gonzaga University.

How did you decide to become a university professor? Was it a conscious choice?

I wanted to be a teacher from very early days.  Once I realized I was not destined for the priesthood, teaching seemed like the closest thing to it, so in 7th grade I joined Future Teachers of America.  I meandered throughout college on several ventures in rock and roll, etc., but in my heart I still wanted to be a teacher.  I taught high school for four years, earned my first MA and thought I would teach community college.  It was while teaching community college that I really thought I could teach at a University.  That led me into further graduate education earning a second Master’s and then a PhD.

Who were some of your mentors and what lessons did you learn from them? 

I have been very fortunate to have great mentors who allowed me to stand on their shoulders and guided me to places I really didn’t think I could get to.  As a first-generation college student in my family, I was clearly going into uncharted waters.  Two mentors who really stood out were Ellis Hays, at CSU-Long Beach, and John O. Regan, at Claremont Graduate School.

Ellis Hays held a PhD from Denver and Purdue. He came to Long Beach and opened up the world of communication theory and interpersonal communication for me.  My undergraduate work had been primarily in rhetorical studies, so the shifts from speech to speech communication to communication were major, and the resulting rifts created significant tensions within departments of communication.  Ellis Hays introduced me to General Semantics, Neil Postman, interpersonal communication, and communication theory. Along the way he helped me to develop my voice, improve my writing, and take abstract theory and make it practical.  Much of this happened outside the classroom, as he nurtured my development in his spare hours. He helped me get small articles published and allowed me to make contributions to several of his own writing projects.  He really helped me to think about being a college teacher. Perhaps the most important lesson was to keep working, not to worry about how many revisions you need to make, and to enjoy the journey.

John Regan came to Claremont from the University of Alberta. He hails from Melbourne, Australia, and came to Canada to earn his PhD.  I went to Claremont to study language and communication with John.  He calls himself an anthropological linguist and studied with McLuhan.  At Claremont he helped guide the Blaisdell Institute of World Cultures and Religions, and he also founded Claremont Graduate School’s Communication Project and its central public focus, the “Issues in Communication” series.  The history of the seminar series, with over 40 symposia and workshops held on topics of verbal and nonverbal creativity, is well known.  Scholars in the fields of linguistics, anthropology, communication, and education would come to Claremont from all over the world.  As John’s research assistant, I was normally the person who would collect these visitors at the airport, bring them to events, and be invited along to special dinners.  It was these informal meetings that allowed me to build relationships with colleagues around the world and find resources to make such journeys happen.  It also helped me to be invited to share my work at other universities at the national and international level.  John is a clear thinker who taught me how to pay close attention to small details, especially for observational ethnographic work.  He lent me his library on Innis, McLuhan, and Ong.  He got me engaged with scholars like M.A.K. Halliday, Sydney Lamb, George Trager, Henry Lee Smith, Thomas Sebeok, E.T. Hall, and of course Walter Ong, S.J.  John instituted a set of weekly dialogues with me called the Marco-John Dialogues (he got to be John while I was Marco – I think he liked it from Marco Polo and searching) in which we probed many questions that became my curriculum at Claremont.   We would meet for about two hours a week in some out-of-the-way place on the Claremont campuses, share a can of Melbourne Bitter, and carry on our Socratic conversations.
I then transcribed the dialogues, and they became the core of my study into the seamless webs (Edward Sapir’s words) of user and use, figure and ground, language and communication.  The work was placed on reserve in the library and made available for others to read.  I don’t really know whatever happened to them.  They were, of course, utilized in many ways in my PhD dissertation.  The most important lessons I learned from John are about being a gentleman, being a good husband and father, and how to steal away time to work on scholarship when there are other demands all around.  He is a model of civility, and much of my academic life has been modeled on Mr. Regan (Claremont didn’t use titles).

How has the role of university professor evolved since you were an undergraduate student?

The role of university professor has changed significantly, but good teachers are good teachers.  My undergraduate classes were lecture classes (sage on the stage) with a small quantity of discussion, papers, and exams.  By the end of my undergraduate days, classrooms began to move more toward interactive discussions and experiential learning with simulations, case studies, and occasional films.  By the ’70s and ’80s, electronic technology was entering the classroom, and scholarship was becoming more public rather than confined to professional journals. Distance education had moved from “correspondence school” to flying into remote places, then to compressed video being shipped places, and then to listservs being used to open up 24-hour classrooms or “classrooms without walls.”  Many faculty began to create webpages for students to access class content and participate in discussions. Professors no longer gave their work to secretaries or typing pools; it became their own work to do. The computer freed them and created much more work simultaneously.  University professors moved from “sages” to “facilitators” in many cases.  Classroom content management systems like Blackboard became common and expensive, while a large share of university budgets became dedicated to computer technologies.  On any given day, a professor could be bogged down in tech difficulties, and whatever time there was for reflection seemed to evaporate. Currently, I teach students utilizing technologies that have evolved over the past 20 years.  I teach face to face, online, hybrid, and overseas.  I use doc cameras, smart boards, conference calls, video Skype, and GoToMeeting.  I still give public lectures and write and publish in books and journals.  Some of the journals are online only.  My university is starting a virtual university. I can be away at conferences or even teaching overseas and still be doing work at my campus.
I can take a Skype video call from a colleague in Vietnam, walk into a classroom, and by the time I get back to my office have a conference call to Rome.  I am reading papers on computer screens and posting commentary and grades right on the same documents.  I am also co-writing papers with colleagues that we share and send electronically to each other. Does it sound tiring? It can be.  My academic classroom is unlimited, and I seem to be the only one who can set any limit on it, if I want to. It’s challenging and rewarding.

What makes a good teacher today? How do you manage to command attention in an age of interruption characterized by fractured attention and information overload?

This is an extension of my last answer.  There are no surefire ways to get attention, and some would say we don’t need all that attention, that we are effective at multi-tasking.  I remember a theory from process-oriented communication that held that humans are basically single-channel receivers.  I can’t remember if that came from McLuhan, but I know it had to do with the dominance of the senses: humans have to focus on one sense at a time but can rapidly change focus.  I just started a new course on the college teaching of communication entitled “Communication Teaching and Pedagogy.” In that course I have mentioned that good teachers still have to develop their relational communication with their students and that there is a high need for authenticity.  Good teachers also have to weave interesting narratives and tell stories. Knowledge is a crucial part of preparation. Both in face-to-face and online education, multiple forms of delivery help different kinds of learners.  At the same time, the advertiser’s slogan of “cut through the clutter” rings true.  When some of my students say they have not been successful reaching a particular professor, I ask them whether they have ever considered “changing the medium.”  That is the same question a good teacher needs to ask.

What advice would you give to young graduate students and aspiring university professors?

The advice I give to young graduate students and aspiring university professors is that advanced degree work takes endurance and that teaching is a calling.  It builds over time, sooner for some (like myself) than for others.  If you don’t have the calling, it is probably not a place to go.  When addressing new students, I tell them that the only reason for gaining new knowledge is to give it away and hopefully make the world a better place.  I have, when in a flippant mood, described my job as getting paid for “thinking out loud.”  If someone is willing to pay me for that, great, but that is not the reason I do it.  I do it because I care about it.  My university is very mission-driven, and a central principle of that mission is “men and women for and with others.” So learning, experience, reflection, and action are all important steps toward solidarity.  These are not slogans but a way of being.

In 1964, Marshall McLuhan declared, in reference to the university environment, that “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?

I am not so sure that McLuhan’s view coincides with my experience.  Departments are very strong silos, and interdisciplinary work is still rare even when encouraged.  My former graduate school always talks about transdisciplinary work.  A prospective graduate student wrote to me about wanting to do a joint MA with my program and philosophy.  The philosophy department said no, because they thought the disciplinary methods of philosophy were so crucial for their MA students that the prospective student should choose our degree or theirs.  I have actually heard colleagues say they don’t know how to talk to other departments, so how could they be interdisciplinary? Of course, communication departments like my own have always been interdisciplinary.  Regarding whether institutions are in crisis or under threat, it is certainly a topic of discussion, but it seems to have more to do with the economics of higher education and the commodification of culture, learning, and education.   So the perceived threat is not in the technology itself, but in the ends of education.

In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow sub-discipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. What are your arguments for or against academic tenure?

I cannot agree with Fukuyama, and when my younger colleagues express something like that to me, I think they have not seen the need yet.  Certainly the sociology of knowledge would hold that the keepers of knowledge – academic departments, journal editors, and professional academic associations – have the ability to be the agenda setters for what gets published, what gets presented, and what gets taught.  At the same time, almost every junior university professor knows that it is unwise to take a position in shared governance, because you will feel muted from expressing contrary points of view to deans, vice presidents, etc. Junior faculty know that it is their tenured colleagues who can help fight the battles over key decisions.  Tenure allows faculty not only to fight the political battles in the university but also gives them the academic freedom to pursue new lines of inquiry.  I happen to work in an institution that has a religious foundation, but I did not take a vow of obedience, although I clearly support the mission.   My job is to be an educator, and tenure allows me to bring truth to the endeavor.  There is no less need for that now than when tenure was first put into the academic enterprise.

Your areas of expertise include communication theory, intercultural and interpersonal communication. What attracted you to communication studies in the first place? Do you think communication should be a discipline in the first place, concerned as it is with mediation, the invisible effects of technological environments, and so on?

I was attracted to communication studies firstly because I loved public speaking and eventually entered high-school speech contests.  I was attracted to all things in rhetoric and communication and spent time in television, radio, music, and other practical endeavours.  In college I was a debater.  When I was required to take my first theory classes (both rhetoric and the precursor to communication theory), I went with great trepidation and fear of boredom.  It was these classes, however, that changed everything about the field for me, and they came at the dawn of new sub-specialties like interpersonal and intercultural communication. Theory and theorizing opened up so much for me, and the expanding field of communication helped me to see all that I did not know, and part of that was that so many disciplines study communication.  Departments like biology, anthropology, engineering, sociology, psychology, broadcasting, philosophy, speech, and communication studies were all in pursuit. I was a Visiting Scholar at the University of Kent in their Social Psychology Unit because my PhD dissertation was on nonverbal communication and its connection to language learning, and they were studying “motherese” and language development. At that point there were no departments in the UK for communication or media studies.  This is when my work started to expand into semiotics and media and culture. So although I personally see the roots of communication in the rhetorical foundations of ancient Greece, I do see the emerging departments of communication, communication studies, and media studies as a healthy trajectory and, yes, a new discipline.  I recently read that more students are studying communication than at any other time in history, and that the information age, or communication age, is the reason for this. Homes for this work will continue to flourish.

You are currently Walter Ong, S.J. Scholar at Gonzaga University. What attracted you to the work of Father Ong and in what ways does your interpretation of his oeuvre inform your research on the above-mentioned fields?

I was originally attracted to Ong’s work because of my interest in rhetorical theory. I was also attracted to the teaching of rhetoric (in the oral sense) in Greece and Rome and the place of the Trivium as the core of education. It felt right. When I read the work of Ignatius Loyola and came to Gonzaga, I realized how the Society of Jesus (the Jesuits) reformed teaching around the Trivium again with the (re)birth of humanism. My own university has a core of classes that first-year students must take called “Thought and Expression,” and it consists of logic, grammar, and rhetoric.  The Trivium is thriving.  Ong’s ideas of orality and visualism helped me to extend the parameters of communication and realize that interpersonal communication (relational messages of immediacy, responsiveness, and power), intercultural communication (cultural differences in thinking, logic, language), and communication theory (user and use, seamless webs of meaning, the social construction of reality in language) were all parts of these extensions.   Eventually my interest in Innis, McLuhan, and Ong led me to look more closely at the media of communication and the influence of media in the construction and transmission of culture.

As you know, there has been much debate lately amongst media ecologists about the differences between McLuhan and Ong. In your view, what place does Ong occupy within the media ecology tradition and what are your expectations about his centennial anniversary?

Clearly McLuhan and Ong shared perspectives but made different kinds of contributions.  Ong profoundly affected the study of communication and, indirectly, media ecology. Ong paid close attention to communication technology, modes of expression, and the cognitive and cultural impact of media. His work in Orality and Literacy explored how forms of communication interact with culture.  In his talk of June 2000, “The Humanism of Media Ecology,” Neil Postman describes the beginning days of media ecology.  He said, “We put the word ‘media’ in the front of the word ‘ecology’ to suggest that we were not simply interested in media, but in the ways in which the interaction between media and human beings give a culture its character and, one might say, help a culture to maintain symbolic balance. If we wish to connect the ancient meaning with the modern, we might say that the word suggests that we need to keep our planetary household in order.” Postman goes on to say, “Let me conclude, then, by saying that as I understand the whole point of media ecology, it exists to further our insights into how we stand as human beings, how we are doing morally in the journey we are taking. There may be some of you who think of yourselves as media ecologists who disagree with what I have just said. If that is the case, you are wrong.” Ong clearly is part of media ecology as Postman, and others like myself, describe it.  My expectations for his centennial anniversary are modest, in light of Walter Ong being a modest man.  My hope is that his work will continue to inform and help us to understand more fully the role that language and communication play in the development of consciousness and human growth.

What are you currently working on?

My current work is in media and social values, media and culture, and education.  I am working on a manuscript on communication and cultural dissonance, and I direct the Northwest Alliance for Responsible Media, a community-based organization that teaches media literacy and looks to understand the effect of media on our community and in the creation of culture; works with media to enhance the vitality and development of our community; empowers youth and adults to become critical consumers of media; and encourages media to act as responsible, effective stewards of this critical public trust.

© Excerpts and links may be used, provided that full and clear credit is given to John Caputo
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralón, L. (2012). “Interview with John Caputo,” Figure/Ground. March 27th.
<  >

Questions? Contact Laureano Ralón at

Interview with Dylan Trigg

© Dylan Trigg and Figure/Ground
Dr. Trigg was interviewed by Laureano Ralón. March 20th, 2012.

Dylan Trigg is currently a CNRS/Volkswagen Stiftung post-doctoral researcher at the Centre de Recherche en Épistémologie Appliquée. He previously taught philosophy at the University of Sussex and continues to teach philosophy privately in Sussex. He earned his PhD at the same university, submitting a thesis on the materiality of memory. His thesis was supervised by Tanja Staehler and Paul Davies, and examined by Edward S. Casey (Stony Brook) and Céline Surprenant (University of Sussex). He has been a visiting scholar at Duquesne University, USA, a guest lecturer at the University of Montana, USA, and an invited speaker at several conferences. His research includes: phenomenology (especially Merleau-Ponty, Bachelard, Husserl, and Heidegger); the phenomenology of place (especially spatial phobias, memory and materiality, and the aesthetics of space); and various aspects of bodily existence (especially body memory, body horror, anxiety, eroticism, disease, and the prehistory of the body). He is currently writing a book on agoraphobia. In addition to many articles, Trigg is the author of two books: The Memory of Place: A Phenomenology of the Uncanny (Athens: Ohio University Press, 2012) and The Aesthetics of Decay: Nothingness, Nostalgia and the Absence of Reason (New York: Peter Lang, 2006).

How did you decide to become a university professor? Was it a conscious choice?

As for many people working in academia, accidents and errors have been more valuable to me than conscious choices. My introduction to philosophy came via psychotherapy, which itself came via criminal psychology. Before philosophy, I was studying existential psychoanalysis in London. This style of therapy is rooted in phenomenology, and the grand themes of death, freedom, anxiety, and meaning inspired an interest in Kierkegaard, Nietzsche, Schopenhauer, Sartre, Heidegger, Levinas, and so forth. I was introduced to this through Irvin Yalom’s textbook, “Existential Psychotherapy,” which I read as a teenager and still hold in great regard, though perhaps with some uncritical nostalgia now. Later on, works by R.D. Laing, Karl Jaspers, and Ludwig Binswanger drew me closer to the phenomenological tradition more broadly.  Because of this background, the Wittgensteinian idea of philosophy as therapy retains a relevance for me both academically and personally; as Wittgenstein would have it: “The work of the philosopher consists in assembling reminders for a particular purpose.”  So academia for me is not a conscious choice, as such. I did not harbour childhood fantasies of becoming a professor. It was instead an expression of something that began in the context of studying psychotherapy, which I then became seduced by.

Who were some of your mentors in graduate school and what were some of the most important lessons you learned from them?

After finishing my undergraduate studies at Birkbeck College, I was keen to move to a department that would be able to accommodate my interests in the phenomenology of place. At the time, this was not possible through the University of London’s philosophy departments. Because of this, I had been repressing a longstanding desire to write about the importance of place within human experience (as it would turn out, this act of repression was sublimated into my first book, “The Aesthetics of Decay,” which I began writing while still an undergraduate). Coming to the University of Sussex, therefore, was liberating. Being there allowed me to write openly about issues that had been expressed only indirectly before. For example, my MA dissertation on Gaston Bachelard and personal identity contained a long section on the phenomenology of Starbucks, exploring its spatio-temporal qualities from a philosophical rather than cultural perspective. This kind of marginal research was only possible thanks to Sussex’s intellectually open spirit.  To this end, I was fortunate to have Tanja Staehler and Paul Davies as supervisors at Sussex. Both of them opened me up to a particular way of reading texts and engaging with ideas, which has left a strong impression on me. In terms of mentors, I was lucky enough to have been taught by the late musicologist David Osmond-Smith (I believe I was his last student, in fact). Professor Osmond-Smith embodied a kind of visceral mental and bodily intensity, which gave not only his tutorials on Wagner and Nietzsche but his entire presence a feeling of passionate urgency. He remains, for me, an inspiration.

In what ways would you say the role of the European university professor, particularly in France, varies from that of its American counterpart?

The question is complicated by the different usages of the word “professor,” a term that varies broadly from Europe to the US. I should also say I am part of a small research centre in Paris, which is in a state of transition, so it probably doesn’t reflect the general state of French academia. That said, in terms of sweeping generalizations and personal observations as an outsider, it seems to me French academia is rooted in a complex set of boundaries marking professors off from non-professors. The hierarchical order is evident in some university practices, from the hiring procedure to the organisation of seminars. There is a further complication, given that the title of professor in France carries with it several different ranks, each of which entails a different status. Here, professors are held in very high regard culturally speaking, especially in Paris. Because of this, more reserved students can feel as though the professor is unattainable owing to this status. In the UK, hierarchical boundaries between graduate student and professor are more porous. As such, things tend to be more informal and slightly less rigid than in France. This is partly because of the peculiar relationship we Brits have to institutions and national culture: a mixture of self-abusing irreverence and self-aggrandizing sentimentality. It is also because of the way teaching is structured. Instead of teaching being limited to large lectures, UK teaching tends to consist of both large lectures and smaller tutorials with groups of students. A similar teaching style occurs in France but, to my knowledge, is quite different in the US. These tutorials are immensely helpful in forming a rapport with students instead of seeing them as a homogenous mass of units. It is usual for a professor to be involved in both of these formats, though as in the American system, the equivalent of teaching assistants play a key role.
The American situation seems to be placed somewhere between the French and British style, though I have only experienced it as a visitor, so my impressions may be one-sided.

Inside the classroom, how do you manage to command attention in an age of interruption characterized by fractured attention and information overload?

I think the key to commanding any kind of attention in the classroom is developing an honest rapport with your students. My approach to teaching is very simple: treat students as individual human beings and encourage them to recognise the importance of philosophical issues in their own lives. Ultimately, if the professor is enthusiastic about what is being taught, then it is up to him or her to transform that enthusiasm into a space of thinking. Students—and sensitive human beings generally—intuitively pick up on a lack of enthusiasm or a distracted mind. If you’re not engaging with the materials as a teacher in a committed fashion, then it’s unlikely the students will respond with attentiveness and enthusiasm. Of course, technology has altered modes of learning, problems of attention are pervasive, and interruptions in the classroom in the form of a student’s ringtone have been known to happen. But interruptions will always be present in one form or another. I don’t think the response to these interruptions should be to wage war on technology or to somehow compete with it in the form of elaborate PowerPoint presentations. My response to these problems of distraction and information overload is to return to the basics: conversation. On the whole, though, most of the students I’ve taught have been philosophically and politically engaged in one way or another, and they tend to want to engage with the material to understand their intuitive convictions about life.

What advice would you give to young graduate students and aspiring university professors?

I’m not sure how qualified I am to confer advice upon aspiring university professors or young graduate students. I can only speak on behalf of those who have recently graduated, are on temporary contracts, are unsure of what the future holds, and yet still retain a love for their discipline. To graduate students, I would say: find an area of research which appeals to you not only scholastically or because it’s academically topical, but also because it bears relevance to your experience of the world. The disillusioned graduate students I have known have mostly lost their way after losing a rapport with their research topic. For them, it becomes an obstacle in the external world, an inert practice, rather than something that imparts meaning upon their lives in a dynamic way. Scholarly exegesis is essential for understanding the history of any discipline, especially philosophy. But it is not an end in itself, and in order to ward off the ever-looming threat of graduate depression, there must be some kind of deeply held personal engagement in the research.  For doctoral students unable to finish their thesis: don’t get too hung up on saying everything you’ve ever wanted to say in the thesis. There will be time for that afterwards. Nor should you fixate too much on saying everything perfectly.  This will only stigmatise the thesis as an insurmountable obstacle, against which one is reduced to nothingness.

As for aspiring university professors: I think, given the current state of things, it would be misguided to undergo doctoral studies with a view to securing a job at the end of one’s research, at least not without a lot of tribulation. Unless the employment situation changes within the next few years—which looks unlikely—doing a PhD will invariably be an act of passion rather than a viable career move. This in itself might not be such a bad thing, as it forces the question of whether research has a value in and of itself. It’s important to have some humility in this respect.

In 1964, Marshall McLuhan declared, in reference to the university environment, that “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?

The university is certainly in crisis, and the partial dissolution of departmental sovereignties may well play a role in this, though I think this is more a political and economic issue than an issue of digital media. It doesn’t seem to me that the age of information poses a considerable threat to the integrity of a department: teaching still takes place in a physical environment, after all. Of course, the internet and email have made it possible to establish correspondences and intellectual partnerships in a way that was not possible 20 years ago. But all of this is compatible with the traditional working of the university environment. What is new, at least in the UK, is the level of apathy of the current government in its relationship to the humanities. This is not a new phenomenon, but the form it is presently taking is striking and nauseating. One of the trends in British universities is to consolidate the humanities departments into one collective department. In principle this might not be a bad thing, as it encourages dialogue among different disciplines. The real danger, I think, is for philosophy. Philosophy is already a marginalised discipline and has tended to occupy an ambiguous position between the humanities and the sciences. Its absorption into the humanities as a whole poses some risk of philosophy’s identity being assimilated by an “interdisciplinary identity,” with all the vagueness and problematic tensions that term implies.

In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. Do you agree with the author? Does the academic tenure debate in Europe center on the same challenges as in North America?

If trying to establish a career in academia means calculating one’s intellectual risks in a mode of fear and adhering to an insular and jargonistic discourse, then there is a fundamental problem in the tenure system. It is regrettable, I suppose, that this discussion of intellectual and political risk is posed in terms of tenure at all. Probably for this reason it becomes all the more striking when young untenured professors do take a stand, not only intellectually but also politically, against evident injustices in the system (I am thinking here of the assistant professor Nathan Brown’s letter of protest to the chancellor of UC Davis). In the UK—especially in the humanities—there is less talk about tenure track. Indeed, there is no system of tenure as such; it does not play the cultural role it does in the US. This is especially the case given the current state of things in the humanities, where the equivalent of tenured positions—permanent positions—are becoming rarer. Things are somewhat different in France, where funding bodies such as the CNRS fund the researcher directly rather than the researcher being funded by a department. This gives far greater freedom in research, in terms of being affiliated with a department that suits the interests of the researcher.

Let’s move on. You are a CNRS/Volkswagen Stiftung post-doctoral researcher at the Centre de Recherche en Épistémologie Appliquée, where you specialize in phenomenology. Is phenomenology still relevant in this age of information and digital interactive media?

Phenomenology is especially relevant in an age of information and digital media. Despite the current post-humanist “turn” in the humanities, we remain, for better or worse, bodily subjects. This does not mean that we cannot think beyond the body, or that the body goes unchallenged in phenomenology. Phenomenology does not set a limit on our field of experience, nor is it incompatible with the age of information, much less with speculative thinking about non-bodily entities and worlds. Instead, phenomenology reminds us of what we already know, though perhaps unconsciously: that our philosophical voyages begin with and are shaped by our bodily subjectivity.

It’s important to note here that phenomenology’s treatment of the body is varied and complex. It can refer to the physical materiality of the body, to the lived experience of the body, or to the enigmatic way in which the body is simultaneously both personal and anonymous. In each case, the body provides the basis for how digital media, information, and post-humanity are experienced in the first place. Phenomenology’s heightened relevance, I’d say, is grounded in the sense that these contemporary artefacts of human life tend to take for granted our bodily constitution.

But phenomenology’s relevance goes beyond its privileging of the body. It has become quite fashionable to critique phenomenology as providing a solely human-centric access to the world. This, I think, is wrong. One of the reasons why I’m passionately committed to phenomenology is that it can reveal to us the fundamentally weird and strange facets of a world that we ordinarily take to be clothed in a familiar and human light. Phenomenology’s gesture of returning to things, of attending to things in their brute facticity, is an extremely powerful move. Merleau-Ponty speaks of a “hostile and alien…resolutely silent Other” lurking within the non-human appearance of things. For me, the lure of this non-human Other is a motivational force in my own work. It reminds us that no matter how much we affiliate ourselves with the familiar human world, in the act of returning to the things themselves, those same things stand ready to alienate us.

One of your areas of research interest is the phenomenology of place. In fact, your most recent book, entitled The Memory of Place: A Phenomenology of the Uncanny, has been characterized as a “lively and original intervention into contemporary debates within ‘place studies,’ an interdisciplinary field at the intersection of philosophy, geography, architecture, urban design, and environmental studies.” I was hoping you could give us a sneak peek of the book. In a nutshell, what is the place of memory in The Memory of Place?

There are three places of memory I describe in the book. The first is the episodic memories of places that we have from our past. These are the memories to which we become attached, whether through positive or negative experiences. They are the places in which memorable events occur, events that transform the place in question from the background context of our memories into the formative focus of those memories. Typically, one thinks of such places as any place we have developed a relationship with, such that the place becomes a part of our sense of self. The philosophical question surrounding these memories concerns the extent to which the memories of places we’ve inhabited contribute to our sense of self. In the book, I argue that the memory of place is a privileged memory, as it allows a heightened interplay between the bodily self and the material world. Put another way, the memory of place attests to our bodily entwinement with materiality. In this way, it presents a critique of the Lockean idea that personal identity is secured by the continuity of an immaterial memory. So, spatiality is not an extension of memory, much less a mnemonic to cue specific memories. I am not, for instance, concerned with how particular places nudge dormant memories into consciousness, as though memory occupied an incidental relationship to the environment. Rather, what concerns me is the necessary relationship between memory and materiality, and how this relationship can pose a source of alienation as well as a source of continuity for the remembering subject.

The second place of memory refers less to the individual experience of places from one’s past, and more to the construction of memories through the natural and built environment. Here, my concern is with monuments, sites of trauma, and ruins that point to events outside the memory of the living subject. This transition from the memory of place to the place of memory mirrors a shift from a phenomenological focus on lived experience to a hermeneutic analysis of the environment. So, for example, in my discussion of monumentality and space, the key question is how a material artefact can stand apart from the surrounding world, embody a commemorative silence, and stop us in our tracks. This is a complex process, which carries with it the ethical responsibility of how materiality ought to respond to the past. And there are no clear answers here, as any monument has to negotiate between the obligations of the past and the uncertainties of the future.

The final, and perhaps most important, place of memory is the human body. The body’s memory of place is implicated in both forms of memory above, but it is also independent of them. The phrase “body memory” needs to be clarified, as it can refer to many things. The way that I use it in the book is less in the manner of Proustian recollection, as an invitation to lost time (though, of course, such memories are vitally important to our understanding of the embodiment of the past). It refers even less to a mechanical retrieval of applied motor memories, such as being able to hold a pen. Instead, the phrase refers to the relation between how we cognitively recall the past and the way in which our bodies act as anonymous organisms for manifesting a history different to that cognitive impression of the past. This emphasis on “difference” is because body memory carries with it a fundamental ambiguity: the body’s memory of places belongs to us as personal subjects yet can simultaneously remain at odds with our personal recollection of the past. One clear way in which the body can manifest a past different to the past we’ve ordinarily remembered is in cases of traumatic recollection. Traumatic memory is one especially visceral way that the body can become a host for a living history from which the traumatised subject is alienated despite being constituted by that past.

But this sense of body memory as the site of a different past is not limited to trauma. As I argue in the book, the role of body memory can help explain phenomena such as hauntings. Both trauma and hauntings call upon the idea that the body has a hidden teleology that strives toward the preservation of self, even if that self is now a materialization of self-estrangement, ill-at-home in its own flesh. This principle is also evident in more innocuous environments such as airports, waiting rooms, and modern offices. The transition from sites of trauma to airports may seem flippant. But in fact one of the things I argue in the book is that certain places can be so cognitively overwhelming or disorientating that bodily intentionality takes a more focal role in guiding us through the world. Of course, thematically, there is a huge difference between being lost in an airport and being imprisoned in a solitary cell. Yet in both cases, the structure of bodily experience retains a parallel role. In turn, this can lead to a nullification of memory in our conscious lives. All along, the body is in the midst of establishing its own history of the world, which may return to us long after the place has receded from our waking lives.

All of this points to the importance of the uncanny in the book. As I mentioned above, phenomenology has a special relationship to the uncanny, insofar as returning to the things themselves can, in Merleau-Ponty’s words, encourage us to view the world “as if viewed by a creature of another species.” In the book, I am interested in how this other species joins Freud’s account of the uncanny as involving a “species of the frightening.” This encounter of weird species is the backdrop against which my study of memory and materiality takes place. Memory fits especially well into this uncanny landscape, as it involves a twilight zone between presence and absence, between past and present, and between the familiarity of visual memory and the unfamiliarity of a memory anchored in the body’s cryptic experience of things.

What can you tell us about “place studies” as a contemporary field of inquiry? What can you say about its origins and antecedents?

Place studies is the field of inquiry that dedicates itself to the study of how we experience places, how places intersect with political life, and how places shape our understanding of identity, individual and collective. My own work in this field has tended to veer toward the phenomenological study of place, even though I take a critical stance toward some tendencies in the phenomenological tradition. The study of place, as it features in philosophy, tends to take its point of departure from Heidegger’s account of dwelling, Merleau-Ponty’s emphasis on bodily spatiality, and, perhaps more critically, Gaston Bachelard’s important book, “The Poetics of Space.” Bachelard is especially crucial here for thematizing the role of place in our remembering lives. For him, the childhood home becomes a sort of ontological centre around which much of our subsequent life revolves. This is because, for Bachelard, the memory of places is oneiric in nature: memories of childhood homes drift into our daydreams and imaginations, creating an overlapping duration in our history. Of course, there is much that is problematic in this idealization of childhood memories, and Bachelard remains mute on the house as a site of hostile memories. Nevertheless, he has exerted a tremendous influence on philosophical studies of place. Both Bachelard and Heidegger were formative influences on the discipline of human geography, which is closely aligned with phenomenological studies of place. So, in the 1970s and 1980s there was a steady output of research on place by pioneering thinkers such as David Seamon, Yi-Fu Tuan, and Edward Relph. These thinkers were among the first to explicitly bring a phenomenological perspective to our experience of the natural and urban environment.
At the heart of much of this research is an ethical assumption about what constitutes a “sense of place.” As such, in its earliest stages, the phenomenological treatment of place tended to be slightly one-sided in its criticism of the “placelessness” of the urban environment, at times gesturing toward a vaguely Bachelardian nostalgia. Later on, thinkers such as Karsten Harries, Robert Mugerauer, and the architect Juhani Pallasmaa pushed these origins in a more diverse direction, while still retaining a broadly Heideggerian foundation. Today, the contemporary field of place studies is in a healthy state: thinkers such as Jeff Malpas, Ted Toadvine, and Doreen Massey are all pushing the field in exciting ways. For me, though, the most significant thinker for my own work is Edward Casey. Casey has written prolifically on place, remembering, and imagination. His books “Getting Back Into Place” and “The Fate of Place” are exemplary in their thematic richness, scholarly breadth, and attention to phenomenological detail. This last point was especially compelling for me when I first discovered his work as an undergraduate. It seems to me that one strength of phenomenology is its ability to remind us of things that we already know but have overlooked through habit and over-familiarity. Reading Casey was a breakthrough for me, as his work calls attention to the richness of everyday life with such clarity and precision that one has exactly this sense that his thinking is also an act of recollecting what we already know but were blind to.

As you probably know, there has been much debate lately about the “right to forget”. France, in particular, is considering new legislation that would give net users the option to have old data about themselves deleted. What is at stake is the length of time that personal information should remain available in the public arena. What is your take on this debate and, from a philosophical standpoint, what is so uncanny about “remembering forever”?  

Nietzsche famously told us that the capacity to forget is as important as the ability to remember. The point he was making is that we are ethically obliged to be critical of the past, lest it occupy a monolithic and antiquated relationship to the present. The erosion of memory, virtual or otherwise, is as important to good health as water and clean air are. In a more banal context, I have been blogging since 2004, which is a fact I’m very ambivalent about. Part of me is compelled to delete the blog, to begin again, freeing me of a connection to archived materials. At the same time, this compulsion is outweighed by a hoarding mentality, where my written past becomes an external object, which I treat in forensic terms, as though it were connected to me only by a trace. This too reappears in the new Facebook timeline feature, where it now becomes easy to chronicle the neuroses and idiosyncrasies of one’s online life in an archival fashion. Indeed, the internet in a way promotes this kind of pre-emptive nostalgia toward the immediacy of our near past in a way that is producing a sickly, uncritical fascination with the contrived creation of memories. The ghost towns and graveyards of the internet are fascinating because they still hold remnants of a life that is traceable in the future. And this uncanny affectivity of the internet as a memorial site for the archive of an individual’s life is compelling, I think. Urgent research is likely needed on the relationship between the internet and people’s sense of their own (im)mortality. Is the ambiguous temporality of the internet—with its anonymous storage units and sinister databases—a source of comfort for those who wish to have their existence carved in virtual stone long after they’re physically dead? I don’t know. But net users should certainly have the option to delete data about themselves which is stored online. I’m just not sure all internet users would wish this upon themselves.

What are you currently working on?

I’m working on two projects. The first and most pressing is a phenomenological study of agoraphobia. After writing “The Memory of Place,” it occurred to me that many of my descriptions of the uncanny were rooted in a phobic experience of the world. There is a phobic dimension to the uncanny that is manifest, for example, in the experience of homesickness. In homesickness, the world is explicitly structured into homely and unhomely territories. This kind of division can set in place a phobic relation to unhomely and unfamiliar places and bodily sensations. That tension was implicit in “The Memory of Place,” but I’m now pursuing it in a more focused way through attending to agoraphobia. This work, which is being carried out at the Centre de Recherche en Épistémologie Appliquée in Paris, is situated in a broader analysis of intersubjectivity and embodiment, and I’m especially concerned with questions such as: how does the look of the other affect our bodily experience of the world; what role do other people play in shaping the materiality of the world; and what can anxiety tell us about the ontology of the body? Increasingly, I am coming to think that these questions will require some kind of dialogue between phenomenology and psychoanalysis. How this is possible, I do not yet know.

The second project is more speculative in flavour, and broadly concerns phenomenology and the origins of life. Philosophers sometimes talk in terms of “thought experiments.” The problem with thought experiments is that the speculative dimension of the experiment is presented in a fictional sense in order to get at a supposedly “deeper” problem. I’m interested in pursuing an experiment in thought that takes the speculative aspect on its own terms. All of this is a polite way of explaining that I’m exploring the human body’s relationship to theories of panspermia (the idea that life is transported through space by microbes inhabiting asteroids and meteoroids). Much of this work takes its inspiration from Merleau-Ponty’s cryptic notes on nature and his last unfinished manuscript. For example, Merleau-Ponty’s use of the term “ineinander” to refer to the “strange kinship” of human and non-human animals can provide a model for how Earthly and non-Earthly bodies relate to one another. Here, I am especially interested in exploring the prehistory of the subject as it figures in Merleau-Ponty’s notion of the prepersonal body. The speculative dimension of this research, therefore, will be situating this prehistory both in the origins of the Earth and in the materiality of cosmic space – if, indeed, one can draw such a distinction between Earth and cosmic space in the first place.


© Excerpts and links may be used, provided that full and clear credit is given to Dylan Trigg
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralón, L. (2012). “Interview with Dylan Trigg,” Figure/Ground. March 20th.
< >

Questions? Contact Laureano Ralón at

Interview with Dermot Moran

© Dermot Moran and Figure/Ground
Dr. Moran was interviewed by Laureano Ralón. March 12th, 2011.

Dr. Dermot Moran is Professor of Philosophy (Logic and Metaphysics) at University College Dublin. He previously taught at St. Patrick’s College, Maynooth, Queen’s University of Belfast, and Yale University. He has served as a visiting professor of philosophy at many universities around the world, including Rice University, the Sorbonne, University at Albany, SUNY, Catholic University of Leuven, Trinity College Dublin, Connecticut College and Ludwig Maximilian University of Munich. He has been an elected member of the Royal Irish Academy since March 2003 and has been involved in the Fédération Internationale des Sociétés de Philosophie, the highest non-governmental world organization for philosophy, since the 1980s. He is the founding editor of the International Journal of Philosophical Studies, published by Routledge, and co-editor of the Contributions to Phenomenology book series, published by Springer. His book Introduction to Phenomenology was awarded the Edward Goodwin Ballard Prize in Phenomenology (2001) and was translated into Chinese. A Turkish translation of the book is in preparation. Moran has also been elected President of the Programme Committee for the 23rd World Congress of Philosophy, which is scheduled to take place in Athens in 2013.

How did you decide to become a university professor? Was it a conscious choice?

Well, I always wanted to be a teacher, and even in my high school I was already asked to teach some of the other classes – especially mathematics, because I was actually very good at it. So I wanted to go on with math in university, but I was also interested in literature and I was writing and publishing poetry at the time. I was really torn between the two, so I actually enrolled in both English and mathematics; that was in the 1970s, which was prior to computers, and they allowed me to do so – but three weeks into the course I discovered that all the English lectures were at the same time as the math lectures. I had to make a choice, so I picked English, and from there I chose philosophy to replace the mathematics, moving gradually into philosophy.

When I went to Yale, it was originally to do a one-year master’s degree. It was only when I got there that I discovered that I could turn my master’s into a PhD, and that really made me think that I wanted to be a university professor. So it was not originally a conscious choice. I might also add that when I left Yale, which was in 1978, there were no university jobs available, so I spent a year working in a bookshop and so on until I got a temporary university position back in Belfast. Even if you wanted to be a university professor at the time, that did not mean that you got to be one.

It sounds like the job market back in the late 70s was somewhat comparable to today’s market…

It was exactly like right now; we are going through a cycle. What happened was that there was a large expansion in the 60s, and then in the 70s – especially after the oil crisis in the US in ‘73/74 – many of my colleagues at Yale who finished PhDs were moving either to Business school or Law school because they didn’t see any future in academia. And we are back at that situation again, unfortunately.

In your experience, how has the role of the university professor evolved since you were an undergraduate student?

My own sense of this is that the media did not change the university radically until extremely recently. I don’t think that radio, television, phone calls and faxes changed the university at all; where the change has come, very recently, is in the provision of electronic ways of teaching, for example, Blackboard. These commercial packages have led to a kind of ‘infantilization’ of students and professors alike. Students especially now expect everything to be on this electronic blackboard; they expect you to upload everything so they can just click on it. I always tell them that they should still go to the library and search for books, because often, right where you find the book you are looking for, you will find other books that are even more interesting. You have to actually do the physical work of walking around in the library, and very few people want to do that; as a result, experience is impoverished. That is ironic, because you would have thought that with hyper-textuality and the Internet the range of reference would be broader; but I have found that actually the opposite has happened. It’s all surface too: I just read a study which said that the research for most undergraduate essays nowadays is based on the first page of a Google search, whereas, even if the relevant information were all on the Internet, the really interesting stuff might be on page 13 of the results. The students either do not have the time or the patience, or lack the scholarly sense, to conduct proper research. So that’s from the university side.

On the professor’s side, what really worries me is that we are regarded more and more as service providers. I was discussing this yesterday evening in France, where I was, and we were all saying that – in the US especially, but now also in Europe – students think nothing of emailing you at any time of the day or night to say “I can’t find this on Blackboard” or “I’m writing my essay, is it okay to cite this book?” There is just a complete lack of recognition of what used to be, back in our day, the more senior role of the professor as having more serious duties to perform than answering rather trivial emails from students about materials that they could find themselves. So my view is that these Blackboard-type platforms are changing things very radically, but I don’t think in a good way.

You seem to be suggesting that there is a great deal of babysitting taking place in these new electronic environments we dwell in, but isn’t your position regarding the role of teachers and educators somewhat elitist?

Actually, I think you are right; of course, I would sound elitist in saying that. Even yesterday, talking with my colleagues – a French person, an American, and myself – we were all wondering whether it would be a terrible mistake not to answer emails at all. That sounds elitist; but on the other hand, we are in a constant situation where we get cold calls, spam emails, etc., and we are overwhelmed. I think we ought to have some sort of limit. We have office hours, and people rarely come physically to your office, but the night the paper is due they think it is fine to just send emails. I think they miss the point of independent study.

What makes a good teacher today? How do you manage to command attention in the classroom in an “age of interruption” characterized by fractured attention and information overload?

I think it is easy to command attention if your material is interesting. Although I took up PowerPoint very early on, I now really hate it because it sends the students to sleep and freezes their capacity to engage with the material…

Which makes you wonder what we really mean by “interactivity”…

Yes, this so-called interactive technology is not at all interactive. Look, I have a colleague who runs a Facebook page, and she has admitted to me that she does not even read it anymore; it is a requirement for the course that people engage with this page, but she just checks whether people have uploaded stuff for the course. So what we are doing here is rewarding people for pretending to participate; just because a person asks a question in class, it does not mean that they are awake. So I do think that a good teacher senses this immediately, and will respond with the right material. There is an intuitive way of being a teacher; there is a very human way of engaging with people, and all of this information overflow you describe is actually causing the problem. Right now I have a problem in class, and I am talking with other professors because we are considering removing the provision of WiFi in the lecture theatres. People are spending their time either updating Facebook, or chatting, or texting each other. They are all multitasking. Or else – and this I find just as bad – they check on what you say by Googling every term that they do not understand. This causes a sort of deflection of attention. During one of my lectures, I was talking about Meinong, the Austrian philosopher, and someone Googled Meinung, which in German means “opinion.” The student put their hand up because they had mixed the two terms up, though it was clear that I was talking about a human being and not the German word. And that kind of thing happens because they were only half listening.

What advice would you give to young graduate students and aspiring university professors?

I believe graduate students should be TAs. What is happening right now in Europe – I don’t know if it’s happening so much in America – is that a lot of the graduate Fellowships that are available specify “no teaching”; the funding says that you are not allowed to teach. I think that’s a mistake: graduate students only know they have learned something when they are able to teach it to somebody else.

As for aspiring university professors, I think it is really crucial to build a course thinking of the best courses that you yourself have taken. I like to build my courses around classical texts, so in my intro to philosophy we read Descartes’ Meditations. And when students ask me, “what else do you want me to read?”, I say “nothing else; I just want you to read Descartes’ Meditations.” Students find that it is surprisingly hard. By week three, they are struggling to read one paragraph. So I think we need to restore the art of reading – how to read slowly and carefully – because all of the major books in philosophy require that level of attention; and that’s something you cannot get any other way except by carefully reading, stopping and trying to figure things out.

I know that somebody like Dreyfus, for example, when he teaches Heidegger, only assigns up to 20 pages of Being and Time, because it’s such a condensed text…

Yes, absolutely. I have heard him lecture – I never actually sat in on a class of his – but I think that’s very good. Another thing that is very good for people to do at the graduate level is translation, because again, you can only really translate something if you understand it. So the challenge is really trying to get to understanding. Someone who was a terrific teacher was Gadamer, of course. I took part in seminars with Gadamer: one of the things he would do is announce in advance the text that he wanted to talk about, and when he came into the room he gave everyone five minutes to write down, on a slip of paper, the questions that they were most interested in discussing concerning that text. Then he collected all the slips of paper, went through them and grouped them. And then he would talk about the five most prominent themes that emerged out of those questions. So this was a really good way of connecting to what students wanted to know, while at the same time not letting it become a free-for-all where maybe one person dominates the discussion by asking questions that are not very intelligent – or questions beginning with the turn of phrase “by the way,” which are not directly connected. I think the discipline of listening is terribly important. Seamus Heaney once said that it is very hard listening to poetry, and he made a distinction between really listening to a poem and what he called “daydreaming along in sympathy.” I think it is the same in philosophy. It’s easier to daydream along in sympathy, but when you really go “wait a minute, why is Heidegger saying this?” it is much more difficult…

Let’s talk about your book, Introduction to Phenomenology, which was awarded the Edward Goodwin Ballard Prize in Phenomenology (2001) and has been translated into Chinese and Turkish…

Well, several people have said to me that, although the book is called Introduction to Phenomenology, it is actually quite an advanced text. This was deliberate. I called it “Introduction to Phenomenology” because I thought of it as leading you into phenomenology, along the lines of Heidegger calling his book “Introduction to Metaphysics” rather than “Lectures on Metaphysics,” even though those were actually very difficult lectures. I don’t mean by “Introduction to Phenomenology” that this is a kind of lazy man’s handbook that will clearly explain everything about phenomenology at an undergraduate level. On the other hand, it is being used like that, so curiously it has succeeded both as a first introduction to phenomenology and as an advanced one. I got an email from a person who was an expert in technical writing and taught it at university – and by “technical” I mean writing the handbook for the latest Lotus racing car, or the handbook for flying an airplane; these are technical works of great sophistication that nevertheless have to be done in a relatively clear and straightforward manner. He wrote to me saying that he used my book Introduction to Phenomenology as an example of good technical writing, and I’m really proud of that. He was not a philosopher or a phenomenologist, yet part of his job was to scan bookshelves and try to identify good introductions to various subjects. I think McLuhan would have been interested in that too, actually.

It’s interesting how some books can be difficult to read yet remain quite accessible. Difficult does not have to mean obscure, I guess. For example, I read both Being and Time and Being and Nothingness, and I thought both of them were extremely difficult, yet the former was infinitely more inviting than the latter. Do you agree with this appreciation?

I think, in the case of the two books you mention, that in Being and Time you get a sense of great intellectual rigor and discipline. Heidegger was struggling to articulate what he wanted to say, but he really does try in each of the chapters to get to the core issues. Take the chapter on being-in-the-world: he is really struggling to get a sense of what worldliness really means. It is a very difficult chapter, but it is all focused; there is nothing in there that is irrelevant. Sartre, by contrast, was a very difficult writer – wonderfully imaginative, but sprawling – so you never have a sense of where he is going to go next. I think that is why most people get lost, myself included when I teach parts of Being and Nothingness.

Interesting. To paraphrase McLuhan, what I feel when I read Being and Time is that I am right there sitting in a seminar with Heidegger, whereas when I read Being and Nothingness, it feels more like a lecture…it’s a much more distant experience, less inviting.

I think that is right, and I think that is deliberate. Heidegger is always challenging you and forcing language, whereas Sartre was a great literary writer, so he often lulls you into thinking that he is not saying much, when in fact he is saying something quite considerable. Also, just from a technical point of view, whereas Husserl and Heidegger, and later on Wittgenstein, number their sections and use headings and subheadings, Sartre has very little of that, so it is very hard to find anything. Actually, I will say something else having to do with the form of the book – and again McLuhan would be interested in this: the French are very poor at producing analytical indexes of books, so there is no really good index for Being and Nothingness, and I never understood the French tradition of putting the table of contents at the end of the book. It is just a formal matter, but surely the table of contents – intellectually speaking – should be at the beginning.

That’s an interesting observation…

Yes, and I have been saying this because I have huge difficulty finding passages in Sartre’s Being and Nothingness when I want to cite them. And that is something that has been changed by technology; for example, I have a PDF version of Merleau-Ponty’s Phenomenology of Perception, and now you can search for individual words, which means that you can follow a thread in ways that you never could before. So there is some advantage now to these PDFs – not to mention the fact that you can carry around a whole library on a stick.

Toward the end of his life, Marshall McLuhan declared that, “Phenomenology [is] that which I have been presenting for many years in non-technical terms.” Is phenomenology still useful and relevant in this age of information and digital interactive media?

Absolutely! In short, phenomenology really involves careful attention to, and description of, the way things present themselves to us, and surely this is very important in an age of digital interactive media. McLuhan was of course famous for the phrase “The Medium is the Message,” and it is in a way true, but the phrase is often misinterpreted, I think. There was a very interesting article in the New York Times quite recently by a critical commentator complaining about the way the Egyptian and Tunisian revolutions were presented in the US media as if they were victories for Facebook. The fact of the matter was that the Internet was down in those countries, as was the mobile phone system, for five or six days. So people were not mobilizing via Facebook or the Internet. There were people outside the countries doing that, raising consciousness, but within Egypt and Tunisia it was not done that way; they did it the traditional way, by meeting each other and passing on messages in person – like having a shared sense of going to the square at the same time. The point is that we are told that “The Medium is the Message” in that simple-minded sense, and I think that is deeply distorted. Phenomenology looks at the way things appear; and in looking at the manner in which knowledge appears, we have to be very careful, because that appearance is often quite distorting. We must have a sort of “hermeneutics of suspicion,” to paraphrase Ricoeur – the equivalent of Husserl’s epoché: the non-endorsement of the initial presumption. The discipline of phenomenology means the discipline of stopping yourself from being carried along any particular avenue of meaning until you have really allowed the phenomenon to show itself. And I think this is more necessary than ever, especially when information is being packaged and distorted in various ways.

One of your areas of research interest is the relationship between analytic and continental philosophy. Is the division between analytic and continental philosophy an insurmountable dualism?

I think it is a distinction that has had its day. What happened was that, for various reasons, including political ones, two different versions of German philosophy settled in America after WWII: broadly speaking, the students of Husserl and Heidegger on the one hand, and the students of the Vienna Circle on the other – both groups exiled by the Nazis, some because they were Jewish, some because they were Marxists. These two groupings had very different visions of what philosophy was, depending on how they understood the nature of the a priori and so on. So on the one hand you had logical positivism, and on the other you had the Husserlian tradition, which went on into the New School and led to various different emphases. I have written about this, but I think that once you step back and see them as two tendencies within an overall tradition – as you do when you look at the Husserl/Frege correspondence, for example – you realize that that particular opposition made sense in a particular time and place and does not make sense anymore. In particular, it does not make sense when you attempt to impose this distinction upon the entire history of philosophy. I mean, you get the bizarre idea that there are some texts of Plato where he is an analytic philosopher, and other texts where he is a continental philosopher…

What are you currently working on and when is your next book coming out?

I am currently working on the phenomenology of embodiment. I gave a doctoral seminar this year and ran a number of workshops during the course of the last academic year. I am really interested in revisiting the whole area of our embodied subjectivity. In particular, I am interested in overcoming the cognitive-science approach to human beings that tends to overemphasize things like cognition, perception, and memory – the sort of intellectualist paradigm that Dreyfus criticizes.

Thinking of the human being as an information-processing machine…

Absolutely. I was actually just talking about this yesterday in Paris. We were talking about Dreyfus, who has this idea of absorbed coping – the expert basketball player who does not have to think while playing a game. I think there is something right about that, but the contrast Dreyfus draws is with a sort of Cartesian/Husserlian picture of the mind. My argument was that Dreyfus probably has Husserl wrong here; I think Husserl is probably more on Dreyfus’ side than Dreyfus realizes. The picture Dreyfus has of that Cartesian intellectualist model of the mind is – I don’t know if you have ever seen the movie Robocop – the main character, who was half man, half machine, with all these calculations showing on his visor. You are supposed to see that on jet fighters as well, where the information comes up on the screen in front of the pilot; it is almost as if we were calculating in some kind of mathematical way every potential for action. I think that is the model cognitive science definitely has, in that they are trying to track all the routines and have a line of code for everything that a human being does. Dreyfus is right to attack that model, but I think that between it and the sort of absorbed coping there is an intermediate model of the embodied person who is conscious of his or her body but not acutely self-conscious unless something goes wrong. I am sitting on a chair now, and I am really thinking more about talking to you than being conscious of sitting on a chair, but I can bring my attention to bear on where my feet are, and I can move my feet and adjust my position to make myself more comfortable – and I do all of that while I am doing everything else, because I am embodied all the time. I think that level of embodied consciousness has to be brought to bear, and frankly I do not think Dreyfus quite gets it.
There is a kind of aware body that is neither the mindless robot in the zone nor the calculating robot doing everything like Robocop. There has to be something in between, which is the human person.

I am also working on this: phenomenology was always characterized as a philosophy of consciousness, especially in Husserl. But in Ideas II, he talks about the personalistic attitude, that is, living first and foremost in a personal world with others; and “persons,” of course, means that we respect each other as sources of meaning and value that are in some respect irreplaceable. You find that personalistic philosophy also in Wittgenstein and Scheler, and it is something I want to bring back to the phenomenological debate.

© Excerpts and links may be used, provided that full and clear credit is given to Dermot Moran
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralón, L. (2011). “Interview with Dermot Moran,” Figure/Ground. March 12th.
<  >

Questions? Contact Laureano Ralón at