Interview with Peter K. Fallon

© Peter K. Fallon and Figure/Ground
Dr. Fallon was interviewed by Laureano Ralón. December 30th, 2010.

Peter K. Fallon is an Assistant Professor of Journalism at Roosevelt University in Chicago. He has more than twenty years of professional experience in television production, video editing, and electronic journalism; over fifteen years of teaching experience at the university level; a significant record of scholarship and publication (his most recent book is The Metaphysics of Media); and a terminal degree (Ph.D.) in Media Ecology from New York University. He is the current editor of Explorations in Media Ecology, the peer-reviewed journal of the Media Ecology Association.

How did you decide to become a university professor? Was it a conscious choice?

Of course it was, at some point. The problem was not so much one of decision as of discernment, of seeing the various choices that were, in fact, before me. We do not live in a culture that much values either education or educators, and many young people will choose a pay check (especially a nice fat one) over a life that offers rewards more spiritual and intellectual than pecuniary. Thirty-five or forty years ago I had no idea I’d be doing what I’m doing today. And I probably would have laughed at the prospect. But the cumulative effect of my life experiences over the years pointed me in a direction and provided me with evidence that it was the right direction. I once thought I’d be a musician (my first college major was music theory) and later I wanted to be a recording engineer. My first real job was making educational and training videotapes for the New York State Office of Mental Health, and then I spent nearly two decades with NBC News. But all during this time I was in and out of school (mostly in, in retrospect), working first on a Master’s degree in Communication, and then on my Doctorate in Media Ecology. And I loved every moment I spent in the classroom as a student.

I had some opportunities to teach as an adjunct along the way and I loved sharing what I was learning, especially in my work at NYU, with my own students. My own teaching, I hope, is influenced by some of my best teachers who were able not only to give me objective information but were also able to convey (or, perhaps, unable not to convey) their fascination with a subject. And unlike my work with NBC, which was seen on a daily basis by several million people to whom I remained completely anonymous, and whose effects on those millions of people I shudder to think about, my work with a handful of students was immediately gratifying, immensely rewarding, and unambiguously positive: I help them to see things deeply embedded in their culture that were previously invisible to them, and I help them to think about those things through systems of critical thought they had never encountered before.

One last point about this particular conscious choice: in order to accept the offer of my first full-time teaching position, I had to leave NBC News and take an enormous cut in pay. To be precise, I took a $60,000.00 cut in pay. I can assure you I didn’t make this decision lightly or without deep, deep reflection. So I had a whole series of conscious choices to make along the way. Eventually and inevitably, I believe, they were all leading me to teach. It would have been a terrible personal tragedy for me to have ignored all the experiences I had in the course of this journey.

In your experience, how did the role of university professor evolve since you were an undergraduate student?

Well, that’s the funny thing about “the social construction of reality,” isn’t it? Reality never changes. It is our attitudes about reality that change. Technologies play an enormous role in our socially-constructed reality because they are the media that constitute the message of our society. New media restructure human relationships and therefore human priorities. They restructure both the things we think about (or not) and how we think about them. Media, as Neil Postman said, are epistemologies. But they are epistemologies with firm ontological foundations – they are extensions of our senses (or, in the case of most of our media, only two of our five senses). We tend to know what is real to us and what does not appear to be real we ignore. And so roles change because situations appear to have changed because our media have changed.

But what has changed? The truth is that nothing changes. As it says in Ecclesiastes (1:9), “What has been will be again, what has been done will be done again; there is nothing new under the sun.” Or as Shakespeare put it (Sonnet 59), “If there be nothing new, but that which is hath been before, how are our brains beguiled…” It is we who have changed, we who are beguiled by technological change, we who have ceased to believe that a certain situation exists while beginning to believe a new one has replaced it. We still love and hate, suffer and feel joy, resent and admire, covet and sacrifice. We still allow some with power to exploit and marginalize others without power, and we still look on quietly, feeling bad about it all but doing nothing. Nothing at all changes when new technologies are introduced into a culture. Nothing changes but our attitudes about what is and is not “real,” what is and is not “important,” what is and is not worth knowing. And we change because we choose to change, because media, as McLuhan tells us, are nothing more than extensions of us.

It could not be any other way. One of the things I have grappled with over the years (as do most media ecologists) is the problem of technological determinism. The term is frequently tossed around quite carelessly, I think, and many times is so tossed in lieu of a reasoned argument against theories of technology and social change. But still it is there – the accusation: determinist! – and the serious media scholar will consider the possibility that a certain idea is, after all, deterministic. I have always been a believer in two truths, one immanent and one transcendent: the virtually limitless potential of human intelligence and the possibility of free will. Anyone who believes in either of those truths – but especially anyone who believes in them both – will seek other explanations for phenomena than deterministic ones. Nothing has to be. Whatever is is because we’ve either actively made it so, or allowed it to exist without our resistance.

All of this is to say that many of the role changes we’ve seen and experienced in academia – and in “the real world” – are entirely unnecessary, a reaction to nothing more than an appearance of change, and a more or less unquestioned assumption of the reality of that change, that has become part of a socially-constructed (as opposed to objective) reality. What I thought was important as an undergraduate student three and a half decades ago is still important today. The basics never change. It’s really as simple as that.

When I was an undergrad, those “best teachers” I referred to in the previous question were concerned with making me think, and making me think critically, and they provided a framework of theoretical knowledge and principles derived from that knowledge to help me do so.

They demanded that I read. It didn’t matter whether I thought the text they were making me read was “relevant” to my life or not. They knew that I was in no real position to make such a determination. They knew that the relevance of a text would not necessarily be apparent to a young person of eighteen, nineteen, or twenty years. Relevance can be judged only in the light of experience, never decided beforehand. Such a decision reflects culturally manufactured desires, not a mature acknowledgement of human needs.

They demanded that I not only have an opinion (for who on this earth has no opinion?) but that I be able to support that opinion with reasoned argument, logic, and evidence, and they modeled behavior that gave me the confidence to be tolerant of other opinions, but unafraid to question and challenge them.

They demanded meticulous care in my use of words. Words, I learned, were the brick and mortar of reason. Every rational judgment I made would be made in words and expressed in words and I’d better strive for precision or be prepared to live with sloppy thought and poor communication skills.

But equally significant in my experience, my best teachers also showed me that they cared about human communication, that it is fundamental to our humanity, and that it is important enough to study it so that we can not only understand human interaction (for all human behavior is symbolic) but also improve it when it falls short.

That was what my best professors did four decades ago, and that is what I believe the best professors do today. None of this has changed. This is objective reality. But all of this kind of begs your question, which is essentially that in our socially-constructed reality, new media have presented to us new roles and redefined relationships requiring new “skill sets” and ways of learning. And I believe – and there is an awful lot of objective data that supports this belief – that it is our acceptance of these presumed new roles and relationships that accounts for at least some of the failures in American education in the last few decades.

As I write this, I have learned that one of the greatest teachers I’ve ever known, my mentor and dissertation chair at NYU, Christine Nystrom, died yesterday (December 22). I am saddened by this news almost beyond words. It is difficult to explain just what Chris meant to me as a young scholar full of questions and uncertainty. She embodied the qualities I’m talking about and inspires me still to provide my students with the lessons she gave me: observe, question, think critically, think clearly, write clearly, and above all care. If we ever allow the role of university professor to “evolve” beyond these simple ideas, then God help us.

What makes a good teacher today? What advice would you give to young graduate students and aspiring university professors?

I believe I’ve already answered that, although I certainly admit that most people today would be inclined to argue against my point. What makes a good teacher today is what has always made a good teacher: command of a subject, a critical mind, a demanding nature, and an ability to inspire students to pursue knowledge for some end beyond mere financial rewards. A good teacher might be entertaining and funny, but shouldn’t set out to be. A good teacher may have broad experience with and skills using technology, but the mere possession of such experience and skills doesn’t make one a good teacher.

My advice to people who want to teach is pretty simple and very likely to be ridiculed: don’t believe the bullshit. You’re not there to help students get skills for a workplace. You’re not there to make them more marketable. You’re not there to provide them with answers to petty, superficial questions. You’re not there to impress them – or yourself – with the latest technological wonder that promises to make something “better” but will probably only shorten some algorithmic process and benefit an employer. You’re not there to mass produce replaceable parts for the machinery of the global economy. You’re there for one reason and one reason only: to make them better people than they were when they came in.

In order to do this, you’ll have to push them, prod them, cajole them, anger them, question them, and make them question themselves and their own previously unquestioned assumptions about the world. You’ll have to butt heads with your colleagues, your school, your administrators. You’ll have to be prepared to explain yourself to others who will want to know why you appear so out of sync with your culture. It’s not easy, so you’d better get used to it.

Or, like far too many university professors today, you can aspire to nothing more than merely cranking out more cogged wheels for the machine, being a servant of the technological society.

As usual, the choice is ours to make.

I know you are a practicing Catholic, as was Marshall McLuhan. How did your Christian beliefs influence your career trajectory, your research interests, and your perception of communication studies as a discipline?

I’ll go you one further: I went to Catholic grammar school too, Cure of Ars School in Merrick, New York. I was educated by Dominican sisters in both grammar school and high school, and eventually spent ten years teaching alongside them at Molloy College in Rockville Centre, New York. My wife Mary Pat and I are both Dominican Associates – lay members of the “Ordinis Praedicatorum” – the Order of Preachers. But I think it is dangerous to look at someone’s religion or their religious background or their upbringing in general and assume, “Ah-HAH! That’s why they came out the way they did,” as though two children of the same parents raised in the same household can be expected to think and act the same. On the contrary, it took me years – decades – to “find my faith” and I’m still working on a day-to-day basis to figure out what that faith means. Indeed, your question misquotes me. I usually describe myself as “a practicing Catholic, and I’m going to keep practicing until I get it right.”

I’m actually a very bad Catholic, or so I have been told by people who define themselves as “good Catholics.” I come into conflict with my Church in a number of areas (it is not necessary to detail them here). While I don’t agree at all with the category or the label attached to it, I am considered by some a “cafeteria Catholic,” which is to say I have real problems with (among other things) the principle of Papal infallibility and patriarchal structure and tend to choose which teachings sound to me to be authentic reflections of Divine will and which ones seem little more than the whims of fallible humans (men, to be clear). I’m not proud of this tendency and I constantly question my own motivations for thinking as I do, but I am in no way ashamed of it either. It is part of the person I am and if there is a God (and I believe that a God exists) then I would be doing an injustice to that God’s creative power to be anything or anyone other than what and who I am. For better or for worse – and may God forgive me for my arrogance – I will not be swayed by the teachings, the traditions, or “the magisterium” of a faith simply because the culture that faith engendered and continues to support says I must be.

So, no, I don’t think my faith influenced my career trajectory, my research interests, or my attraction to the ideas constituting the meta-discipline of Media Ecology. I think some part of me that I can’t describe, that I’m not sure exists, that doesn’t reside in any organ or system of my body, but that still makes me who I am influenced my interests, made me see Media Ecology as a system of investigation uniquely suited to examining interactions of technology and culture, and eventually led me to a place where I felt compelled to embrace a particular faith. Again, I don’t see anything – not a technology, not a culture, not a language, not a religion – as having that sort of power. That, to me, smacks of determinism. I’m not quite as convinced that we are not genetically predisposed toward or against certain attitudes, stances, strengths and weaknesses, etc., but that’s another story. What I’m saying is that I chose Media Ecology, I chose what subjects interest me, and I chose my faith – and all of these choices came after decades of questioning, longing for answers, uncertainty, confusion, etc. And I continue to choose them on a daily basis.

In 1999, Eric McLuhan edited an interesting book with a provocative title – The Medium and the Light: Reflections on Religion. Is God the Light?

Well, that question just packs metaphor upon metaphor, doesn’t it? Both the word and the idea of “God” are metaphors for something we can’t begin either to understand or to even imagine. “God” is the purest act of the human imagination, arguably the earliest of our inventions. Pure metaphor. And “light” or, even better, “the light” is a metaphor perhaps even more primal than God. So you’re asking me, in essence, if one metaphor constitutes another metaphor. My answer is yes, metaphorically speaking.

What is the use of “God” if not to stand as a metaphor for the goodness that human beings find in the core of their own beings? “God” is goodness and right. “God” is creation and, by extension, the creative urge. “God” is the organizing principle of the universe, the “logos,” the Divine, Cosmic wisdom, negative entropy: perfect and unbounded love. “The light” is the good, the opposite of darkness (and, therefore, by extension, of ignorance), the source of life, provider of warmth and security. Light is truth. And the source of light (metaphorically speaking) is love.

So of course there are multiple layers of overlapping meanings between these two metaphors and it should come as no surprise that we’ll use one metaphor to explain another. I have no problem saying, within this context, that “God is the light” any more than I would saying “God is love.”

Indeed, the electric light has always been a privileged medium to media ecologists. For instance, at one point in the documentary Picnic in Space, McLuhan turns on a flashlight and remarks that light does not have a point of view; that it radiates in all directions at once, having a spherical, auditory character. He believed that the electric light was the only medium that had no content – the only medium whereby medium and message were the same. This is very phenomenological, I think. Unlike the empiricist notion of consciousness as a passive absorption of sensory impressions bombarding us from the external world, phenomenologists regard consciousness as transcendental, i.e., as pointing outward into the world. In a sense, both the electric light and consciousness could be viewed as a sort of nothingness. Consciousness, by way of intentionality, emerges attracted by something other than itself, while the electric light becomes transparent and withdraws from our conscious awareness to create an environment that permits us to focus on specific entities other than itself. Aren’t our invisible environments then, much like consciousness, a sort of room-making nothingness which pierces through the heart of being?

Neil Postman talked at great length about technology answering the question “How do we do something?” but that it was up to philosophy – particularly the field of ethics – to answer the question “Why do we do something?” The history of science, in general, is the history of our growing understanding of the physical world, and empiricism has played a central role in this process. The history of technology is the history of applying that understanding of the material world to solve some sort of problem. The history of philosophy is the history of human groping for meaning in the raw data of material reality. But raw data presents itself to us as more or less objective, and meaning can appear very, very personal.

All of this is to say that I’m not sure I can answer your question satisfactorily in the way you asked it. McLuhan’s observations about electric light (like all of McLuhan’s observations) are not terribly objective and don’t lend themselves to empirical investigation. That is both their weakness and their strength, as Lance Strate has reminded me time and time again by emphasizing the heuristic playfulness of “the probe.” The fact that electric light is pure content (rather than having no content) is an illustration of McLuhan’s aphorism “the medium is the message” – the significance of electric light is its restructuring of the day and of the traditional human understanding of time and its utility.

But no artificial light is omni-directional; it is in the nature of artificial light to exist in artificial space, and there is always something or someone holding on to it or hanging it. There is no three-hundred-sixty degrees (cubed) with artificial light. We might also consider the laser, which is focused electric light, a single point rather than an infinite number of points (cubed), and is nothing if not subjective. Point of view is part and parcel of artificial light, and let’s not fool ourselves into thinking otherwise regardless of McLuhan’s heuristic and playful (if sometimes dangerous) eloquence.

Nor does light resemble consciousness, I believe, in the way you suggest. To paraphrase the words of my friends Michael Quirk and Stephan Mayo, we don’t “radiate” consciousness. I honestly wish we did. If human consciousness were like radar – another form of light – and radiated out from us and brought back to us impressions of things of which we would not otherwise be aware, we would be, as a species, a lot better off than we are now. Consciousness is indeed intentional and that is both the strength and the weakness of human intelligence. For our intention can be to use our technologies to build a comprehensive understanding of our world and to address its problems, or our intention can be to use our technologies to create our own comfortable, self-sufficient, solipsistic worlds and to ignore the objective reality that surrounds us. I write at some length about this in my book The Metaphysics of Media.

The point of all of this is merely to note that McLuhan was engaging in metaphor to illustrate something that was difficult to express in objective, literal terms. And so was Plato, and Aristotle, and Aquinas, and Descartes, and Kant, and Hegel, and Husserl, and Sartre. Or if they weren’t intentionally engaging in metaphor then we must understand that they ought to have realized that their ideas were, in fact, models – metaphors – that attempted to explain the interplay of matter, imagination, and mind but were doomed to be pale, inadequate reflections of a reality that we are not, at this moment, fully able to comprehend. Each of these metaphors focuses our attention on a specific dimension of human experience. They are neither entirely wrong nor entirely right.

What can you tell us about your most recent book, The Metaphysics of Media? Other than the invisible environments and effects that media ecologists strive to raise to awareness, what ‘exactly’ is metaphysical about media?

Again, media are really nothing more than extensions of us. It is we who are metaphysical. We believe and refuse to believe. We believe in things that have no physical nature, no material reality, and we refuse to believe in them. We believe in things that not only have a physical, material nature but are also empirically measurable, and we refuse to believe in them. And our media play a role in this.

The Metaphysics of Media is predicated on the observation that different media throughout human history have engendered and supported different conceptions of reality and, consequently, un-reality.

It is difficult, in a few short paragraphs, to go into the argument in great depth or to cite the evidence I provide in some detail in the book. But I’ll give a thumbnail sketch: the era of primary orality is marked by animism and monistic pantheism; a single realm of reality imbued with the supernatural. Magic, science and religion (to cite Malinowski’s title) intermingle as immanent reality and transcendent reality appear inseparable.

The development of writing systems – especially the alphabetic – and the onset of literacy create a rift between immanent and transcendent experiences. Oral tales of transcendent experience become sacred scripture and orthodoxies are formed, while at the very same time the fixity of speech in space lends objective distance to thought (as Walter Ong pointed out). The rift deepens and expands as literacy itself deepens and expands following the development of movable-type printing. Three “camps” appear, each one championing its preferred metaphysical orientation: those who believe only in an immanent reality (natural philosophers, scientists, etc.), those who believe in a dualistic reality with the transcendent trumping the immanent (theologians), and those who desperately try to bridge the gap between faith and reason (Thomists).

In the technologically-developed west, the breach between transcendence and immanence becomes irrelevant in the era of electricity as transcendence itself all but disappears. Propositional structures of thought fall to the presentational, reason gives way to emotion, fixed, objective point of view cedes to subjective personal experience, a “secondary orality” arises. We fine-tune our technologies to bring us only the information we want, and ignore much of the objective reality of those parts of the world (the vast majority, in fact) who do not enjoy the same level of development as we do.

At every point, I must emphasize, our technologies act as instruments of our will. There is no determinism involved. If Americans are largely ignorant of the world, it is not the fault of the media that saturate our lives. It is because we choose our ignorance, and use those media to facilitate it.

You are an active member of the Media Ecology Association. How did media ecology as a subfield within the larger discipline of communication studies evolve since you were a doctoral student at NYU?

I am at the moment an active member of the MEA and certainly wish to remain so. I was not always so active and can’t really predict what the future holds in store for me. The “vicissitudes of life” and all that… Others have been far more active than I have and are largely responsible for shaping this organization into what I believe is an incredibly important “clearinghouse” for scholarship and collaboration. Lance Strate, Thom Gencarelli, Janet Sternberg, Jim Morrison and so many others have been there since the beginning and we all owe them a great debt. They can describe in far greater detail the evolution of the MEA as well as the meta-discipline of media ecology.

For my part, I’m not sure that media ecology, as a meta-discipline, has evolved at all. There has always been a fluidity, a flexibility, in both perspectives and methodologies, that allows for a very high degree of creative and critical thought. McLuhan’s probes are both philosophy and literature, Ellul blends sociology and theology, Innis looks at communication as an economic activity and technologies as media of exchange, Maryanne Wolf (although I’m not sure anyone has yet told her she is a media ecologist) gave us our first neurological study of literacy that is, at the same time, a deeply philosophical work. Elizabeth Eisenstein’s historical approach influenced the work I did on my first book, Why the Irish Speak English, and even The Metaphysics of Media, while it attempts (and I emphasize the word attempts) the sort of cultural criticism so masterfully achieved by Jacques Ellul and Neil Postman, is very historical in its approach.

So I see no real evolution in media ecology beyond the “shape shifting” nature that seems to have been deliberately embedded in its fabric. The one thing that must, I think, always define a study we recognize as media ecological is its acknowledgement of the interactions of cultures – and the people who constitute those cultures – and their technologies.

Do you think that media ecology will ever attain the status of a discipline, concerned as it is with invisible media effects and environments – again, a sort of nothingness?

I sincerely hope not. It was only after oral tales became written orthodoxies that some people were labelled “pagans,” and “heretics” were burned at the stake for unorthodox views. The greatest strength media ecology possesses is its ability to generate unorthodox views. Media ecology makes a better “Trojan horse” than a golden bull.

What are you currently working on?

I am currently testing the limits of my inadequacy as editor of EME: Explorations in Media Ecology, the journal of the Media Ecology Association. I am working hard on it, but have the unenviable fate of following in the wake of the last editor, Corey Anton, a frighteningly intelligent guy and a fierce workaholic who did a fabulous job with EME over the last few years.

If I manage to survive this experience, I am very close to having prepared for publication what will be either a very lengthy biographical essay or a very brief book about a late-19th/early-20th century Dublin barrister and amateur bibliographer, Ernest Reginald McClintock Dix. Dix represents, I believe, one of the last of the archetypal “men of letters” that McLuhan insisted were a natural by-product of alphabetic literacy.

I also claim to be working on a book, very much indebted to the thought of Christine Nystrom, about the possibility of human extinction as a result of our short-sighted and self-centered technological choices. I started it nearly two years ago and have made very little progress on it in the last year. No, that’s not true. I have made absolutely no progress on it in the last year. But I continue to claim that I am working on it. Perhaps you might ask me again this time next year?

© Excerpts and links may be used, provided that full and clear credit is given to Peter Fallon
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2010). “Interview with Peter Fallon,” Figure/Ground. December 30th.
<  >

Questions? Contact Laureano Ralón at

Interview with Marshall Soules

© Marshall Soules and Figure/Ground
Dr. Soules was interviewed by Laureano Ralón. December 26th, 2010.

In 2009, Dr. Marshall Soules retired from Vancouver Island University (formerly Malaspina U-C), where he taught in the English Department before founding the Media Studies program. At VIU, he was Chair of Media Studies from 1998 to 2008, and directed the Media Research Lab from 2004-2009. He developed the curriculum for two Media Studies degrees; both feature a strong emphasis on Canadian media theorists and the media ecology movement. As an honorary research associate at VIU, he continues his writing and photography projects from his home in Ladysmith, B.C. His doctoral work at Rutgers University focused on the early plays of Sam Shepard to explore the “protocols of improvisation” across the performative arts. His interest in improvisation led to a series of presentations at the Guelph Jazz Festival and to the Society for Digital Humanities. In 2007, SSHRC funding provided support to document social and political messages in the public sphere as part of the Canada/Cuba Image Dialogue project. He has exhibited his photographs of distressed posters, graffiti, and street art in a number of solo and group shows. He is currently working on a textbook for Edinburgh UP (Media, Persuasion, and Propaganda) and a book of documentary photographs and commentary on Cuban propaganda called Cuba’s Revolutionary Landscape.

How did you decide to become a university professor? Was it a conscious choice?

When I started my university education at University of Toronto in 1967, I thought I wanted to study psychology but quickly discovered I was more interested in literature than the clinical psychology being taught at the time. I had always enjoyed the study of literature and writing, and found additional inspiration from literature professors such as David Godfrey at Trinity College, then some excellent teachers at Middlebury College when I transferred there to take advantage of scholarships. Great literature provides ample opportunity to study and discuss human psychology, and my BA Honors thesis at Middlebury focused on the early work of Vladimir Nabokov, with its intense and self-reflexive exploration of human motivation.

As a graduate student at Rutgers University, I was a teaching assistant from 1970 to 1973 and discovered the challenges and satisfactions of university teaching. In those days, when raising consciousness was as much on our minds as teaching academic skills, our group of TAs met regularly to discuss how to improve our teaching, open up the practice of education, and try new things in the classroom. My instructional manual in those days was Teaching as a Subversive Activity by Neil Postman and Charles Weingartner. I had also discovered Marshall McLuhan and used Understanding Media as a text in one of my first-year English courses.

As a TA, I made the conscious decision to become a university teacher. I loved the interaction with students, the politics of education, the opportunities provided by the study of literature, and the research and preparation required to offer stimulating content. Despite a tendency towards bookish introversion, I enjoyed being the centre of attention in classroom discussions.

After I left Rutgers as an ABD, I taught at University of Guelph (1973-74), Niagara College (1975-77), and then Capilano College (1977-78). At Capilano, I had a crisis of confidence as a teacher: except for summer and part-time employment, I had never worked outside a university or college and realized I was biased and ill-informed about commerce and the world of work outside the institutional setting of the university. I feared I might be a charlatan, a clever bluffer preaching my ignorance and narcissism, without adequate discipline in my approach to teaching. As I’ve joked with colleagues since then, I would argue that black was white and think the exercise was good for students.

I turned down an offer to teach a full course load at Capilano College in 1978 and started a construction company with a friend, Raincoast Construction Ltd.  I had some talent as a woodworker, and enjoyed the challenge of learning a trade (carpentry) and building a successful business. My wife and I also started a company to promote the music and dance of West Africa. By 1986, however, I started to feel that carpentry and construction would not satisfy my love of learning and teaching, and did not relish the thought of moving tools and construction materials in and out of buildings for the rest of my working life. Following a tip from a colleague at Capilano, I was lucky enough to find a full-time position at Malaspina College (now Vancouver Island University) beginning in 1987.

Leaving teaching for almost ten years was probably the best thing I could have done to make me a better professor, though I felt my academic career had been “set back.” At least I had learned to be more practical, focused, motivated, and responsible as a teacher—no more black is white (unless it was a class on phenomenology!)—and I rediscovered everything I loved about university teaching.

By 1994, I had completed my long-delayed PhD (in English/Performance Studies at Rutgers) and set about building a new department of Media Studies from my base in the English Department. Still a fan of McLuhan and media ecology, I was able to establish an independent department by 1998, and finally a BA Major degree by 2008. In 2009, after 22 years at VIU, I took early retirement to pursue my own writing and photographic art.

Joshua Meyrowitz’s thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how has the role of university professor evolved since you were an undergraduate student?

In addition to the internal changes I describe above—professor as medium!—the most obvious and pronounced change in my teaching experience came with the internet, especially after 1994 and the emergence of the graphical user interface and worldwide web. I had been using computers as a much-improved typewriter for years, and was a relatively early adopter of the internet when I returned to teaching in the late 1980s. In 1995, I participated in a pilot project that delivered course material on the web and through video conferencing (using switched-56 technology). I routinely used the internet to offer fully online courses or to supplement traditional classroom instruction until my retirement in 2009.

At educational technology conferences throughout the 1990s, a cliché summed up how we thought about the transformational impact of the internet on education: “The sage on the stage becomes the guide by the side.” This ideal was buttressed by the vogue of constructivist pedagogy: briefly, the idea that students should be supported in taking a more active role in the direction of their own education. Luckily for me, the emerging technologies and the attendant philosophy of constructivism suited my temperament, and I took a leadership role in promoting both in a series of conference papers and in the curriculum I developed for VIU’s BA Major in Digital Media Studies. For this degree, we combined media and communications theory with courses in digital media technology (digital film, digital audio, web design, interactivity, etc.). Our idea was to integrate media theory and praxis, while crossing as many disciplinary boundaries as possible.

The internet is a powerful tool for learning: as a means of delivery, it supplements the traditional classroom, and opens up new patterns of interaction; as an encyclopaedic resource, it transforms how we research; it is a powerful publishing medium for students and professors; it builds social networks useful in education; and it is a fascinating subject of study for a student of the media. I hazard the generalizations that the internet contributes to a levelling of hierarchies found in universities, and contributes to a welcome erosion of disciplinary boundaries.

More specifically, I used the internet to publish student writing in my courses and argued that this practise could transform the teaching of writing: students no longer wrote for one person (the professor) but for an audience of their peers. Student writing is often more engaging, thoughtful, and careful when students know their work will be read by their classmates. For fully online courses, their writing is the only means to communicate with the rest of the class.

In my work developing curriculum for two separate degrees, applying for research funding, running a department and a media research centre, and organizing a number of conferences, the internet was invaluable, but likely contributed to the feeling of being overworked and spread too thin.

While not specifically related to the introduction of a new medium, university teaching during my time became increasingly concerned with what faculty and students alike called “political correctness.” While accommodating differences is critical in the university setting, the fear of being politically incorrect had a chilling effect on open and honest dialogue. In the university context, hearing the expression of bias and prejudice can be immensely instructive, an opportunity for meaningful discussion if managed appropriately. Greater tolerance for unpopular opinions should be the hallmark of universities, and professors should be in the vanguard to encourage wide latitude of expression. Doing this well requires technique worth learning.

What makes a good teacher today?

There are many styles of “good” teaching, and different teachers can make different styles work effectively. That said, I think a good teacher is a person who loves to learn, and continues to learn. This person is curious, listens well, and asks open questions to draw a motivated response from the learner(s). A good teacher has the ability to frame concepts and information to encourage critical thinking and curiosity. In the end, these good teachers get out of the way of their students’ learning, providing guidance and support as required. Because they love to learn themselves, their knowledge is deep and broad, and they communicate their learning with authority and authenticity.

Hierarchies can be useful for a university professor—for credibility, classroom discipline and management, and for institutional interactions—but should be called on judiciously. The de facto imposition of hierarchies is anathema to true education. While they are often hired for being knowledgeable and confident, good teachers are humble with their knowledge, share it freely, and admit when they are wrong. They should be “professionally paranoid” about their own learning, and recognize that their role as expert makes them susceptible to dogmatism and propaganda.

Good teachers do not feel overly constrained by disciplinary boundaries and are willing to search outside their discipline for new answers to old questions. I would go further—following the influence of Innis, McLuhan, Grant, Carpenter and other media ecologists—in suggesting that good teachers question the efficacy of the very institutions that employ them. In my experience, many university teachers defend the status quo, deny their reactionary support of government by elites, and do not adequately contribute to the progressive reforms required of their respective societies. It took me too long to recognize these biases in my teaching, and I thank former students for pointing them out.

A recurring theme in department meetings and gatherings of professional associations is the purported “decline of literacy.” I dispute this assumption and suggest that new media of communication require new skills. While there may be a demonstrable decline in print literacy in some cultures, perhaps there is a concomitant increase in visual or auditory receptivity. Good teachers, in my view, do not need to bemoan the inadequacies of their students based on outmoded measures of achievement, but engage students in their own time and with respect to their own terms.

While it is a commonplace among university teachers, fear as a motivator should be used with caution, and strategically. Fear can stifle motivation and compromise the desired outcome. Good teachers assume that students want to learn what they need to know, and help them find ways to do so, acting not with the heavy hand of authority but as co-creators in the learning process.

Another thing I learned from personal experience: one can be an introvert and still feel at home in the university setting. Teaching is both a subversive activity and a performance. If you’re an introvert, you might have greater challenges performing as a teacher, but you also have insights to offer that extroverts may not have at hand.

As a performer, learn how to abandon your prepared script–extemporize and improvise. Do not give yourself permission to be boring, and expect the same commitment from your students. Teaching is performance. Learn to be a pro.

At the beginning of a course or program of studies, many teachers make assumptions about what their students know or should know. I advise teachers to ask students what they know about a given subject before launching into lectures or discussions based on untested assumptions.

Personally, I love humour and playfulness in university instruction, and am a big fan of story-telling in the communication of ideas—something Malcolm Gladwell, Oliver Sacks, and Douglas Coupland, among others, practise with aplomb as writers. On a related and final note, I love teachers who establish rapport with their audiences rather than lecture to them from prepared notes.

What advice would you give to young graduate students and aspiring university professors?

I have hired both experienced and inexperienced university teachers, and know the profession does not suit everyone. It takes some experience in the university classroom to discover if the profession will suit you. (I am grateful for the opportunity I was given as a teaching assistant while in graduate school.)

Aspiring university professors should ask what it is that attracts them to the profession and do a realistic assessment of the workload and lifestyle. Ask as many of your professors as you can what they do. For example, I was always changing my readings and this involved more work than for a person who teaches the same material over and over again. Vancouver Island University was known as a “teaching institution,” and we all had heavy instructional loads. For those of us engaged in research, we had to find that time, as we said, “off the sides of our desks.” The workload for the conscientious researcher/instructor can be heavy and hectic, and may require those long summer holidays to accomplish with satisfaction.

As noted above, I believe university professors should be humble about their knowledge, willing to share it freely, and be enthusiastic to learn more. Ask yourself if you are likely to be a “lifelong learner”—this is the primary responsibility of the university professor as a model for students to emulate.

If you don’t like people, or if you think humanity is by and large stupid, please don’t become a university teacher. You will do more harm than good. Arrogance in a university professor is a sure sign of insecurity, something I learned from personal experience.

Universities are institutions: though often incredibly dynamic and stimulating, they can also be hierarchical, claustrophobic, reactionary, and bureaucratic. University politics are not trivial, and often require significant diplomatic skills.  This is especially true if you want to develop new curriculum for senate approval.

If you have issues with the abuse of authority, as I do, be prepared to do a lot of emotional management during your university tenure. Even though you may feel vulnerable for speaking your mind about academic affairs, your colleagues will value your courage and commitment if your comments and questions are on point.

While they are institutions, universities are also communities. If you want to work in a university, be prepared to work in a network of relations and contribute to it in meaningful ways. This community contributes to your strength, and any self-imposed isolation from that community will compromise your effectiveness as a teacher and colleague. If you want to be a maverick, go into business for yourself. I tried it, and returned to the university community with a renewed sense of appreciation.

Be prepared to challenge the sacred cows of your discipline and to look for answers beyond its bounds. One of the perennial deficiencies of universities is their over-reliance on knowledge specialization. Yes, we need to specialize to become experts in our field, but we won’t become true experts until we explore beyond the bounds of its conventional wisdom.

Let’s talk about your research interests, both past and present. I know one of your areas of expertise is Canadian Communication Studies. What attracted you to the works of Innis, McLuhan et al?

As I noted previously, I discovered McLuhan—The Mechanical Bride, The Gutenberg Galaxy, Understanding Media, and The Medium is the Massage—in the late 1960s when McLuhan was still being promoted as a media guru. As I recall, the route to McLuhan was through Edward Hall’s work on time (The Silent Language, 1959) and space/proxemics (The Hidden Dimension, 1966) as media of communication. It was Hall’s insight, adopted by McLuhan, that tools are extensions of the body. McLuhan’s notion that electronic media extend the human nervous system remains for me an intriguing and rich vein of exploration, and forms one of the core tenets of media ecology.

I was aware that McLuhan had collaborated with Edmund Carpenter at the University of Toronto in the 1950s and when I first read Eskimo Realities (1973), I was impressed by the intersection of anthropology and the study of communication. Insights from that book—especially about Inuit mapmaking and sculpture—continue to inspire me. I remain an enthusiastic fan of Ted Carpenter’s work, especially They Became What They Beheld (1970), Oh, What a Blow that Phantom Gave Me! (1972), and his largely unknown but monumental collaboration with Carl Schuster on Patterns that Connect (1986-88, 1998).  Carpenter’s unorthodox anthropology seemed to complement McLuhan’s inspired humanities approach to create a fertile mental environment for thinking about the impact of media on culture.

The communications writings of Harold Adams Innis—Empire and Communications (1950) and The Bias of Communication (1951)—are difficult in style, but absolutely essential in establishing the political economy of communications studies. Innis’ suspicion of colonial influence, his pacifism, and his notion that change comes from the margins of empire contributed to a unique approach to communications studies that continues to resonate deeply with me. When we add the philosopher George Grant to this mix—as Arthur Kroker does in Technology and the Canadian Mind (1984)—we see an approach to communications variously informed by political economy, arts and humanities, anthropology, sociology, and philosophy. The inter-disciplinarity of this approach and its insistence on the interdependence of humanity with its lived environment is perennially provocative.

For me, Canadian communications theory is a machine to think with. As a brief illustration: Innis did not experience the internet, but his notions of media bias and empire building provide useful insights for assessing the impact of this technology on global affairs. While it may be a nationalist conceit, I still think Canadians communicate from the margins of empire and we have something unique to contribute to global dialogue as a result.

My doctoral work at Rutgers in the 1990s explored what I call the “protocols of improvisation,” how writers, actors, artists, and musicians improvise in their chosen medium. Ultimately, I was (am) interested in the notion of the improvised character, and I detect a strain of improvisation in the work of these Canadian media theorists.

Canadian Communication studies, Media Ecology, The Toronto School of Communication – do these labels signify the same thing?

I don’t think so. The Toronto School—Havelock, Innis, McLuhan, Carpenter, Frye, then de Kerckhove, Logan, and Wellman—was situated in a specific time and place, and certainly exerted tremendous influence on Canadian communication studies and the media ecology movement. But Canadian communications studies should also recognize George Grant and the Krokers (associated with McMaster), and the foundational writers identified by Robert Babe in Canadian Communication Thought, those not generally associated with the Toronto School: Graham Spry, John Grierson, Dallas Smythe, Gertrude Robinson and others.

The media ecology movement has its core representation of Canadians—namely Innis and McLuhan—and those influenced by them—Carpenter, Ong, Postman, Schwartz and others—but also includes many who were either not Canadians or did not attend the University of Toronto. A visit to the Media Ecology Association website will clearly show that the Toronto School is a small but influential subset of that association. Canadians might be able to claim some inspiration for the movement, but the promotion of the media ecology brand has fallen to others such as Lance Strate, a New Yorker.

I have also argued in a conference presentation in 2006 at Ryerson University that there is a “new wave” of Canadian media theorists who carry on the pioneering work of the Toronto School and might well be considered media ecologists. Foremost among these media theorists are B.W. Powe, Paul Rutherford, the Krokers, Heather Menzies, Ursula Franklin, Murray Schafer, Barry Truax, David Rokeby, Paul Heyer and others.  I also included the anthropologist Wilson Duff in that list, but his inclusion might be considered idiosyncratic in some circles.

While the conflation of Canadian communication studies, media ecology, and the Toronto School is too imprecise to be helpful, one could argue that all share considerable common ground on the broad study of media’s impact on culture. However much they may share mutual influences and concerns, these theorists have many differences worth exploring by the new wave.

Speaking of mutual influences and concerns, your work on the “protocols of improvisation” sounds fascinating. I wonder if you found the insights of Heidegger and Merleau-Ponty – with their emphases on the non-rational aspects of existence, skilful coping, and the pre-reflective, playfully absorptive engagement with the world – of any use during your research. I am interested in the connection between McLuhan’s general media theory and existential phenomenology…

I feel somewhat sheepish admitting that neither Heidegger’s nor Merleau-Ponty’s work resonated with me, even though I knew it should at the time. I found both to be overly abstract and removed from questions of embodied performance. The work that did influence me included Jacques Copeau, Victor Turner, Richard Schechner, Henry Louis Gates, Joseph Chaikin, Keith Johnstone, Antonin Artaud, Marvin Carlson, Robert Farris Thompson, John Miller Chernoff, Augusto Boal—mainly people who explored improvisation as it is practised in the arts.

It would seem useful to examine McLuhan’s theories of the media through the lens of existential phenomenology: if media extend the senses, if electronic media extend the nervous system, then the individual’s subjective apprehension of the world will also be extended and perhaps amplified. The condition of narcissus narcosis could be intensified because the extended senses return more self-reflective data, with the attendant danger of sensory overload and loss of sensory equilibrium. Judgment will be compromised.

McLuhan’s brilliant use of Poe’s “A Descent into the Maelström” as an analogy for surviving in the whirlpool of electronic media encapsulates the phenomenological dilemma: we need something buoyant to keep us afloat, and we need the right attitude to keep us calm enough to recognize the solution. I fancy that McLuhan’s buoyant container in the maelstrom is filled with his insights about media, especially the four laws of media. Poe’s hapless mariner is forced to improvise with the tools at hand, and must also overcome fear and incapacity to act.

I suspect this doesn’t do justice to Heidegger, Merleau-Ponty, or phenomenology. You’ve given me some reading to do!

I guess the beauty of eclecticism is that it leaves ample room for multiple interpretations. In fact, McLuhan’s oeuvre demands a high level of engagement on the part of an active reader who is responsible for closing the circuit; he was after all a “cool” thinker whose work has been characterized as providing a “Do-It-Yourself-Creativity-Kit” requiring a “U-Think approach to moving ideas.” In a sense, it is quite possible that McLuhan will never be fully understood – if by understanding we mean reducing his work to a mere accumulation of mummified knowledge. Perhaps he should be thought of as somebody to think with rather than about. I think there is something contradictory yet magical about the fact that over the years McLuhan has been classified as an instrumentalist, a determinist, a critical theorist, and so on, but in the end many of these labels didn’t quite hold. This is very phenomenological, in a way: the fact that McLuhan resists clear-cut categorizations; that his multifaceted oeuvre appears inexhaustible, and that he is simultaneously all and none of the above (it seems there is a McLuhan for everyone!) gives him a sense of transcendence which is partly responsible for a legacy that lives on decades after his death. What do you make of all of this and where would you place your own McLuhan?

Your observation that McLuhan is “somebody to think with rather than about” is an excellent insight for both the best and worst of McLuhan criticism. McLuhan himself, though writing about Innis in his introduction to The Bias of Communication provides one of the more helpful observations for approaching the work of both theorists. McLuhan praises Innis for his “pattern recognition”—a phrase William Gibson may have borrowed for one of his novels—and describes a method of building up a mosaic of insights organized by interface: “[Innis] changed his procedure from working with a ‘point of view’ to that of generating insights by the method of ‘interface,’ as it is named in chemistry. ‘Interface’ refers to the interaction of substances in a kind of mutual irritation.” McLuhan observes how juxtaposition without connectives is the method of symbolism in art and poetry, and more characteristic of dialogue than of writing. The “interplay of aspects” found in dialogue can “generate insights or discovery.”   Good reading directions, I think, for both Innis and McLuhan with their shared bias towards orality as being more inclusive than print.

In many respects McLuhan was successful in turning his audience into the workforce, something he claimed for television, a cool medium with low intensity of data saturation in a single sense. Reading and thinking about McLuhan requires our full participation, and willy-nilly engages our subjectivity; as you phrase it, “very phenomenological.”

This mosaic / interface technique of presentation makes demands on the reader who is asked to fill in the gaps in productive ways. Unfortunately, some of McLuhan’s critics seem to believe that providing their own idiosyncratic connections is sufficient to the task of understanding McLuhan’s take on the media. I’ve done this myself, and have seen students and critics do it.

As a person reading McLuhan since the late 60s, I’ve seen my reading of him mature to the point where I am enthusiastic about some of his insights and cool towards others. The “medium is the message” is a good insight despite the hyperbole of the phrase (which confuses many people). “Narcissus narcosis” and a medium’s effect on the equilibrium of the senses are important insights worth further research. (If electronic media extend the central nervous system, what effect does that have on human health? Do electronic media—in delivering the shocks of the global village to their audience—contribute to stress and heart disease?) The laws of media are an excellent starting point for evaluating the impact of a new medium on culture, though many of the examples cited by the McLuhans are unconvincing to me. For example, radio was not made obsolete by television any more than computers replaced paper. Again, the tendency toward hyperbole has caused misunderstanding. Some probes hit the mark, others don’t. The reader’s labour discovers the difference.

Despite the potential pitfalls of reading McLuhan (and Innis), the insights they generate with their interface method are worth the effort, and are improved by years of experience with the media. I’m still working on my own private McLuhan and recommend the journey to those who can tolerate large dollops of paradox and ambiguity in their thinking. I think of McLuhan as a charter member of the association of tricksters, founded to challenge the status quo and reinvigorate culture by creating, in Barbara Babcock’s phrase, “a tolerated margin of mess.” Tricksters, straddling the boundaries of culture, act as mediators between what we think we know, and what we need to know to survive as a culture. McLuhan provides that function, still.

I personally enjoy the fact that many of McLuhan’s insights are encapsulated in catchy phrases – granular knowledge, as it were. This is very convenient, isn’t it? These aphorisms provide a good point of entry into the complexity of his thought and can even be said to function as “cool” structures that encourage the reader to think with McLuhan, if not through him. However, I also find it alarming that twenty-eight years have elapsed since McLuhan’s death and what remains most alive about his extensive oeuvre – especially in mainstream discourse – is a simplified take on some of his probes and aphorisms. To this day people continue to encounter McLuhan through these and other metaphors without fully understanding the significance of his entire system. To advance McLuhan, I think it is necessary to think McLuhanistically. This, I believe, requires dealing with McLuhan on his own terms by approaching his eclectic oeuvre from a different standpoint – that is, a playfully absorptive stance which accepts his work as being in “constant flux” rather than an accumulation of mummified theoretical insights disguised as clichés. In a word, focusing on the most obvious clichés, the area of attention, may bar us from making discoveries at the level of the ground, the area of inattention. The points of contact between McLuhan and phenomenology, for example, are well at the periphery of his oeuvre – but they are very much there; and the same applies to the connection with critical theory, which was brilliantly articulated by Grosswiler et al. My question to you is: what is your suggested approach for engaging McLuhan without just uncritically worshipping him, and how do we move beyond McLuhan without abandoning him?

Most commentary on McLuhan focuses on Understanding Media (especially the first seven chapters), The Medium is the Massage (fun, quick, provocative), possibly with some acknowledgement of the laws of media, and ideas expressed in the Playboy interview–what you call the area of attention. The main area of inattention is The Gutenberg Galaxy and The Mechanical Bride.  In The Mechanical Bride, one can see McLuhan fully engaging with popular culture in the form of advertising, and honing his characteristically playful, ironic, punning, and referential writing style. Many critics consider The Gutenberg Galaxy one of his best works, and I agree. Taken together with his introduction to The Bias of Communication, we discover McLuhan under the profound influence of Harold Innis and, as Ted Carpenter observes in “That Not-So-Silent Sea,” of Dorothy Lee.

My first suggestion for engaging McLuhan with some critical distance and appreciation of his influences would be to read the early work (The Mechanical Bride, The Gutenberg Galaxy, the “Introduction” to Bias of Communication), take a look at the essays in Explorations in Communications, and read Carpenter’s useful essay. Understanding Media and the books that follow can then be perceived as figures against the ground of McLuhan’s perennial interest—the rhetorics of media. (The publication in 2006 of McLuhan’s PhD thesis, The Classical Trivium: The Place of Thomas Nashe in the Learning of His Time, reinforces the sense that he was dedicated to compiling a kind of rhetoric of media effects.)

While the Laws of Media is far from satisfying as a book, the student of McLuhan should work with the tetrad of laws, using them as a machine for thinking about media. For example, try running social networking through the tetrad.

Many of McLuhan’s insights were based on his eclectic reading and his intuitions about how media function. For a person who stressed that the medium is the message, some of his observations about the mechanics of a given medium could be truly baffling, and others would surely benefit from further research. Recent developments in the cognitive sciences, in conjunction with magnetic resonance imaging and brainwave monitors could be used to test the impact of a medium on the human sensorium. To what extent does a medium alter sensory equilibrium? This kind of empirical testing is not trivial since isolating the stimuli would be difficult and tend to decontextualize the effect(s). (Analogously, Jacques Ellul argues that propaganda cannot be analysed in isolation because it functions within a network of influences.)

Let’s take McLuhan’s ideas about hot and cool media, sensory equilibrium, extension of the senses, electro-acoustic space, the tribal echoland, Narcissus narcosis, reversal of the over-heated medium and test them, if possible, with some degree of rigour. Reading McLuhan is stimulating, and he often seems to stimulate a desire in his critics—me included—to free-form, probe, and prognosticate.

If we study McLuhan in the company of Innis and Carpenter, we will see the continuing need to further our study of the political economy and anthropology of media. In this fashion, we would retrieve the best of McLuhan and make genuine contributions to his pioneering approach to media analysis.

Finally, McLuhan considered artists to be the early warning systems of their respective cultures, and we should look to artists like David Rokeby, B.W. Powe, Edward Burtynsky and many others to see how his ideas are flourishing.

Finally, what are you currently working on and when is your next book/article coming out?

In the coming year (2011), I’ll be writing a textbook for Edinburgh University Press, in their Media Topics series edited by Valerie Alia, called Media, Persuasion, and Propaganda.  This text will review oral, written, and visual rhetorics of persuasion for an undergraduate audience, with an additional emphasis on the performance of propaganda.  It is scheduled for publication in March 2012.

My book Cuba’s Revolutionary Landscape is almost complete, and I’ll be trying to find a publisher while writing the propaganda textbook. This project features my photographs of Cuban visual propaganda taken from 2005 to 2009 (partially funded by SSHRC), and provides commentary on the billboards (murales) and social murals arrayed across the Cuban landscape to further the goals of the socialist revolution. The most interesting of the murales challenge the ongoing attempts of the U.S. government to isolate Cuba behind an economic embargo, and U.S. support of anti-Cuban terrorists such as Luis Posada Carriles and Orlando Bosch. I’m hoping this project will see publication in 2012 as well.

Next on the agenda is a book on street art – collaged and distressed posters, graffiti, and stencils. The commentary will present these images as a collective improvisation demonstrating a class war between those with limited financial means and those with the resources to saturate the urban environment with images promoting consumption and prosperity. I have completed extensive research on municipal legislation, the legal issues associated with management and control of the urban visual landscape, and social attitudes towards the proliferation of unauthorized images in the public sphere. The agents of property – both public and private – are at war with vandals and nomads, the folk devils of our time. Again, this will be a heavily illustrated book that will be a challenge to publish; perhaps this and the Cuba project will appear as illustrated articles instead of books.

© Excerpts and links may be used, provided that full and clear credit is given to Marshall Soules
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2010). “Interview with Marshall Soules,” Figure/Ground. December 26th.
<  >

Questions? Contact Laureano Ralón at

Interview with Roman Onufrijchuk

© Roman Onufrijchuk and Figure/Ground
Dr. Onufrijchuk was interviewed by Laureano Ralón. December 23rd, 2010.

Donations can be made to the Roman O. Undergraduate Memorial Bursary or to the For Roman and Rita, in a time of need GoGetFunding campaign.

Roman Onufrijchuk is a former senior lecturer in the School of Communication at Simon Fraser University. He was born in Winnipeg, Manitoba, spent his early youth in Yorkton, Saskatchewan, and later lived and worked in Saskatoon, SK and Edmonton, AB. Following over 10 years in print, TV and radio, he arrived in Vancouver in 1982 to attend graduate school at SFU’s School of Communication, where he began teaching in 1985. Since then – with the exceptions of a one-year stint as a visiting professor at the University of Peter Mohyla Academy in Kiev, Ukraine, and two years as supervisor of the Communication Arts program at Dubai Women’s College in the UAE – he has taught at Simon Fraser. Roman has been Program Director of two radio stations, as well as Director of Television Programming for British Columbia’s provincial educational broadcaster, the Knowledge Network. In addition to his work in broadcasting, Dr. Onufrijchuk has been director of programming for Arts and Design with Continuing Studies at SFU and, for over ten years, Chair of the Pacific Cinémathèque’s Board of Trustees. His research focuses on media theory, the history of media, the social and cultural implications of new media and robotics, and the history of the field of communication studies.

How did you decide to become a professor? Was it a conscious choice?

I started out in broadcasting, following cultural animation and articulation work in a diaspora community in Canada. The latter required moving between cultural idioms – understanding, explaining and sometimes interpreting them between cultures, individuals, social groups, governments. Born into a family of post-WWII “DP’s” who’d found themselves torn violently from their homeland, and in the midst of a very different culture, I learned early on to straddle understanding and dismay – home was one culture, compellingly sung and written in memory and regret; on the street and in school was another, promising in its intimations of “progress,” participation and pleasures. The older I got, the more explaining I found myself doing. I guess teaching emerged as something of an avocation, though I was in my 30s by the time I started doing it in the classroom.

Parents had hoped I’d go law or medicine, but I went TV, then radio, and after a reasonably good start to a career, decided that the 3-minute “item” (a rendition of a matter of any complexity, often enough short-changing the audience, the reported exigence, and the reporter) was limiting and limited. At any rate, by the time the 03:00 minute item began to drive me up a wall, I had moved into the programming side of broadcasting – usually directorships of programming for radio stations and then again, full circle, back to TV. That sort of work – “editorial” in the press world – suited me better, and kept me in the media a while longer. But this too became increasingly shallow. So, I “dropped out” of that, and tuned in at university. I’ve continued working in broadcasting and community articulation off and on, and still do bits here or there. By the time I got to SFU to complete my undergraduate degree I was some 10 years older than the average undergrad, so, once I’d finished that and been accepted into graduate studies, the department chair offered me a sessional posting. To be sure, the media had always been a fascination, and the literature and ideas I was working with added a great deal of depth to that fascination. In the classroom, I think I must have transmitted something of the fascination and passion I felt; the result was, and continues to be, positive student response.

Teaching, I discovered, straddles knowing and the state of unknowing – a kind of reverse engineering from knowledge back to ignorance in order to repeat the path while helping the interested along the trail. It didn’t hurt that I’d grown up in an expressive culture intensely rooted in residual orality, storytelling, proverbs, poetry and song. Getting in front of folks wasn’t all that scary. And, I discovered, my own efforts to understand this state of straddling cultures, and how communication shaped the cultures I inhabited, benefitted from teaching. Crossing and re-crossing the field, reverse engineering understanding, rendered enriching insights, new intellectual temptations and distractions, and a still-growing ability to recognize one’s inability to follow all those tributaries while still being reinforced and advised by their background presence. I still learn a lot from questions my students ask.

What attracted you to the PhD at Simon Fraser University, who were some of your mentors and what did you learn from them?

I’d done my Master’s degree at SFU under Dr. William Leiss. On defending it, I decided that the supervision I’d received was the sort I wanted to continue working under. More to the point, my supervisor seemed to have no qualms about my interest in Innis and McLuhan, and was encouraging. When I proposed to do a study of the two of them, he suggested the study would be too large to tackle at the doctoral level, so I chose McLuhan. There was a bit of a ploy involved. You can work on Innis and completely ignore McLuhan, or give him a cursory footnote or two. Perhaps take a shot at his introduction to the ’50s edition of The Bias of Communication (which, like much else in McLuhan’s corpus, deserves a careful re-reading now that we live in “internetworked” media eco-systems). But you cannot do McLuhan without having done Innis. Oh, sure you can, mainly because there’s “so much to misunderstand” in McLuhan, to quote Northrop Frye. But, in addition to the Trivium-inspired thinking, there’s a deep socio-historical media awareness in McLuhan, and that comes from Innis. You have a question lined up around rhetoric below, so I’ll elaborate more on this in a bit. But the point was, I wanted not the whiz-bang, “for your information let me ask you a question” McLuhan, nor the brilliant and short-lived arc of his celebrity and reincarnation as the patron saint of the digital age, but the serious and thoughtful side, and the aspect of his work that pointed to the future. For that I would need Innis. Leiss didn’t object, and on I plodded.

In addition to William Leiss, I studied under two of your previous interviewees – Ian Angus and Paul Heyer. I think the most important thing I learned from them was not to be afraid of working in a field that was viewed with some suspicion by many of the other schools and faculties, and that, on any seasoned reflection, also seemed to be impossible. That said, the more I read, the more communication and media/mediation began to appear everywhere I turned. Can there be an anthropology, a philosophy, a quantum mechanics or biology without communication? Can there be a history or literature without some kind of media use, without extrasomatic memory resources, access, and the capacity to use them? Without media and communication, can anyone stand on the shoulders of giants to see greater perspectives? The School of Communication provided an environment where I could struggle with these larger questions through the prism of working on McLuhan’s thought. Mind you, some of the faculty advised me against working on McLuhan – “dead end,” they said. No one would want to read anything else about “the Sage of Wychwood Park.” ’Course that was said as the two “posthumous” books were about to appear, then the two excellent biographies, followed by a whole raft of re-workings and re-readings, and well before, as if almost out of spite, WIRED went and named McLuhan the “patron saint” of the Net (another demonstration par excellence of how much there is to misunderstand in McLuhan’s writings and the attitudes and values behind them).

What got me to SFU?  Well, when I decided to return to studies I wanted to “get serious” about the communication field.  I wanted to read this guy Innis and to really get into what McLuhan had actually said and meant.  There were few schools in Canada in those days where I could do so – most of them out East.  Then there was SFU in the West.  I’d heard of the school, and knew something of its history – a university on a mountain top.  Hell, that sounded way better than any ivory tower, this was a mountain top!

What are some of your areas of research interest and what courses do you normally teach at Simon Fraser University?

Research is focused on the histories of communication and media, and their study.  I normally teach an introductory communication course, the introductory history of communication, as well as courses on design, affective communication, new media, and material culture as media.

I’m mainly interested in the evolution of the study of communication and media, and the ways we’ve understood the affective and effective aspects of the media ecosystems we inhabit and share with, perhaps suffer or impose on, others. From my POV, the human condition is deeply intermeshed in relational ecologies, enabled by media eco-systems – the media aggregates accessible to individuals and groups and used at any given point in any given social formation – and I cannot imagine any instance of single media use. This involves questions of access as much as issues involving barriers, boundaries, restrictions, monopolization, negotiation and resistance. Communication is ontogenic, ontophanic, ontomorphic and ontotropic – creating, revealing, structuring and changing human realities. So, how do media enable these processes of reality making, and how are they used in communication practices and processes? I suppose there’s a sense in which I think of myself as an historian of ideas – in this case, of what we’ve meant by terms such as communication and media, and how we’ve used these to create and inhabit our personal and social realities. This is theoretical inquiry, but it has a praxis dimension, as was the case with the work of both Innis and McLuhan.

What makes a good teacher these days? How do you manage to command attention in the classroom in an “age of interruption” characterized by fractured attention and information overload?

There’s an innate problem with the question, eh? The word “teacher.” I wonder what sort of teacher we’re talking about. There are “professional teachers,” you know, the sort we all had (and sometimes had to survive) through K-12 and/or college. At the K-12 level these folks are usually stuck in roles not unlike those of wardens in some well-appointed lock-up. Then there are “gurus,” and the teachers one stumbles onto in life – people who know something that’s valuable and are willing to share it. And then there’s us in the universities. The university is an odd sort of teaching-learning beast. Deep down, it thinks itself more a craft guild than a school. The craft is research, of course. The undergrads are a relatively tedious pool of future apprentices, with the capable/promising going on to make up the graduate community.

While “education is emancipation,” it’s also a performance art. That doesn’t mean bells and whistles, presentation software and sophisticated audio-visual aids, gesticulation, pyrotechnics or theatrics. Most times that stuff is better left to those with much thinner content than ours. I find many faculty members do not understand the media they try to implement in classes, PowerPoint being a fine example. We’re trained in discursive reading and writing, so anything with text is a “page.” This does not obtain with posters, which should be intelligible in five seconds. If you’re going to write out your entire lecture on slides and read it off from them, why bother doing the lecture? The lecture hall is rarely dark enough to hide you, so you might as well e-mail the slides to the class and be done with it. In our case, performance is tied to content and our individual, particular relationship to, and with, it. Oratory doesn’t hurt, and not everyone is an orator, but anyone who feels the material and has a genuine interest in explaining it and helping others think about it can do that – every one of us. If you care, it shows. Otherwise, why be in the academe? Business pays better, as does plumbing.

I saw a sign on campus recently reading “I do Facebook on my mobile phone in class to keep awake.” Who, for example, in the academe has ever been taught how to teach? And, more to the point, who could teach us? So in come the platitudes – and, as George Grant suggested, just because they’re platitudes doesn’t mean they’re any less true. Passion for the subject matter; a willingness, perhaps a burning desire, to share knowledge; a respect and, dare I say, some fondness for young minds. Teaching is a lot like being a port (in the old nautical sense of that word). Students, like ships, come in, stay a while, and then leave for other ports, and we’re left with our own research work and reflection – renewed by these, I’d think, for the next flotilla to arrive. While not every animator is a teacher, every good teacher is an animator. The task? To animate minds, to encourage and foster “mobility of thought.”

We should also, I suspect, engage the resources made available to us today by the Internet. It’s made my teaching life easier. There are wonderful lectures to be had off YouTube, to mention only one of many, many sources. These, or excerpts from them, can either be screened for a class or assigned as “homework.” Some of it, of course, is shallow and mistaken, but then I take it to be our job to catch that and identify it for students even when we still choose to use it in teaching practice. I encourage students to use Wikipedia, search engines, video resources and the lot. I caution them as well. But, to be frank, I’d rather have a student look up something in Wikipedia than gloss over it on a book page and carry on with no clue as to what the author had in mind. Outside of the many peer-reviewed articles now available through the Net, I discourage use of such sources in citations or bibliographies – we’re still people of the book, eh! But that also makes us people of the printed word, and there’s a profusion of that available on the screen. True, the book is still very much with us, so on a surface reading of the statement “the end of the book,” McLuhan appears to have been wrong. Indeed, we know that more books are being published today than were, say, a generation ago. There’s a discipline, and a convenience, to books that the Net will never reproduce. For one thing, as an artist friend of mine used to say, “you don’t have to plug in books or switch them on.”

I read your doctoral dissertation as an MA student. I thought it was a courageous move on your part to write about McLuhan at a time when he was just beginning to be rediscovered in light of globalization and the Internet revolution. What did you try to show in your doctoral thesis, and why do you think McLuhan never got the respect he deserved in academic circles?

I think Philip Marchand, McLuhan’s first biographer, answered this quite well. Most of the response was a mixture of sour grapes and envy over McLuhan’s celebrity. I suspect that part of the rejection of McLuhan in North America had to do with his politics and religious convictions. He was, after all, an adult convert to Roman Catholicism and not just a bit conservative in his views. In many ways a “Red Tory,” McLuhan nonetheless thought the Protestant Reformation was a major historical catastrophe. This kind of thinking makes you few friends among the progressively minded, for whom the Reformation is an initial step toward emancipation from superstition and oppressive, obscure ritual – an inching toward the bracingly fresh air of scientism, historical materialism, realism and all the rest of it.

There was a buffoon quality to McLuhan – a flippancy, a cavalierness much loved and avidly reported by the Media – but his humour was meant to bite. Remarks like “a specialist is someone who makes no small mistakes on the way to making a big one” were aspects of McLuhan’s Menippean humour, something rarely funny to its target – and target the academe McLuhan did. For all that he was a celebrity for a while, and notorious, he fell from Media grace very quickly. One of his colleagues opined that the media celebrity and notoriety seriously harmed him as a published thinker – he often spoke without working all the connections out, and then got into published tussles with usually misunderstanding critics. Too much energy went into that for him to really polish up his ideas and insights and get it all straight. That notoriety spawned jealousies as well as serious criticism; some of his stuff was hairy, un-thought-through, and downright kooky. And he was aggressive and ambitious, and while very well read, he wasn’t an encyclopaedia. There were gaps in his persona that could be attacked, and were – by those who chose “to dignify” him with response and critique. Others dismissed him. He is by no means easy to teach, which is why there are few serious courses on his writings. The Laws of Media stuff sort of saved him (as did WIRED), so now anyone can go on about tetrads, but to really get into the material, you really have to work.

Two things come to mind about McLuhan’s style. The first is his observation that if one writes difficult texts one is more assured of longevity – and his texts are not easy, often repetitive, nearly all of them dictated rather than actually written (the only exceptions being War and Peace in the Global Village and perhaps the still much-ignored Mechanical Bride, which predated Barthes’ Mythologies by six years). And that leads to the second – the style of his writings! Gnomic, to be sure, sometimes obscure. No tables of contents, no indexes, no effort to make it easier for the reader, much less the student. Like many writers who had the good fortune of getting an excellent education, he sprinkled his writing with allusions, oblique references, unexplained connections. Why explain what everyone knows? Well, more correctly, not many know anymore, and not that many did back in the 60s and 70s. For the ones who did not, he seemed profound but obscure; for the ones who did know (and were more thoughtful and perhaps generous), he appeared to be a man who had much to say but for some obscure reason chose to “pass himself off as a charlatan,” as one commentator put it.

That said, it’s interesting to watch the academic world to see how many of his ideas still come up, sometimes with no reference to him or his writings.  Among those aware of his contributions, I think of the late Leonard Shlain’s work on alphabetization, the recent debates about the technological origins of the development of the human brain and language sparked by archaeologist Timothy Taylor’s work, neurophysiologist and philosopher Merlin Donald’s account of the evolution of the modern mind in the context of the history of human consciousness, Benedict Anderson’s account of the nature and spread of nationalisms, Mark Smith’s study of the historicity of the senses, among others.   But there was a weirdness to some of McLuhan’s thinking.  Marchand, his biographer, notes that at one point McLuhan thought the space program was flawed from the get-go because it tried to overcome gravity.  McLuhan felt that science should be focused on turning gravity off rather than punching through it.  Who knows, maybe he was right?

As for my work in the dissertation, that was informed by Leiss’ work on advertising and McLuhan’s on the rhetorical properties of media. Leiss, in collaboration with Steve Kline and Sut Jhally, had shown how advertising was meant to inject meaning into the new arrays of goods and experiences being made available by the expanding consumer markets in North America. They’d also shown how the messages of advertising media had developed in conjunction with social mores and attitudes through the 20th century. At that stage, I wanted to know if there had been parallel kinds of thinking and practice in the realm of industrial design, in which the composition of products was imagined and articulated.

The questions I worked on in the dissertation came from wondering why we in communication pay so little attention to the material culture that makes up our lifeworlds – the “stuff” of daily life as a medium/media of communication. To get at the question I had to differentiate between two kinds of media, although the distinction was heuristic. On the one hand there were explicit media – forms we think of as having no value in and of themselves other than to carry information: newspapers, TVs, radios, telephones, pictures and the like. And on the other, in some sense as a ground for these, a whole domain of implicit media – things we think of as useful and available to tactility, sight, sometimes olfaction and taste, and having a concrete presence which obtrudes into the physical realities of life. An exploration of Heidegger, of whom my supervisor little approved, aided the thinking about “stuff”-to-hand, as did Albert Borgmann’s distinction between focal goods and devices.

Nearly all explicit media content originates in writing. With explicit media, the things we make, that we understand to be “media,” are mainly conveyors of linguistic, verbal or audio-visual information – one or two senses at work in this case. What about touch, smell and taste? For that you need tactility – the sense of touch; embodied perception of thermal conditions, comfort or discomfort; foodstuffs; the smell of fresh oil on a highway or of electrical connections frying, and the like. What makes us human, in addition to the things we say to one another, is the way we encrust ourselves and articulate who we are or want to be through the stuff we make and surround ourselves with. The experience of the Ringstrasse in Vienna is affectively very different from Broadway Avenue in Yorkton, Saskatchewan; macaroni and cheese is a very different experience from pasta with a vongole or Bolognese sauce. So, my question became: “In which ways is material culture a medium of communication?” McLuhan had alluded to this throughout his media writings, but never really grappled with it – not in the sense of its psychodynamics, as Ong had with orality and literacy. I laid out the groundwork for such a project in the dissertation by drawing on the theories, anthropology and archaeology of stuff.

Media are central to this thinking, no doubt, but the definition of media is neither parsimonious nor easy to pin down. In fact, in a sense, the field is impossible. If everything we do, make, choose, set up and apart – if all these are expressions of some inner impulse and a relation to some aspect of human reality (always social) – then what isn’t a medium? We might even say that a medium is anything whatsoever that affords a certain conductivity of attention and consciousness. You know, there’s an interesting debate in the field about whether or not a gesture has to be consciously directed to communicate in order to be considered a medium in a communication process. How dumb is that? If that were the case, wouldn’t being observant be for naught? What would be the point of studying body language? A “mere” human presence, as Sartre observed, is enough to kick off associative and cognitive chains – “a haemorrhage of consciousness.”

I walk into a room and see you sitting there. I say, “Gee, Laureano, you look tired – are you OK?” Now, it’s possible you have no intention of “giving off” any information about your state, but your behaviour and appearance “speak” or “signify” on your behalf (even if treasonously). Your inner state is expressed by your “thingly” aspect: your body, the detailing of your face, the state of your hair, posture, colouration, expression, demeanour. Now, you can’t ask a thing the same question; you need mechanics or technicians for that. But we still do see things – things associated with others and ourselves, things we’ve made or obtained, some that we cherish, and some we abhor or experience as irritations. A great deal of what goes on in the market and daily life is tied in with this stuff, and when seen in the context of consumer societies, it is in a sense on the cutting edge of where our species is going – and I’m thinking here of environmental degradation as well as advances in the health sciences and improvements in education, care for the young, and living conditions. And, what is more, in a sotto voce, staged whisper, as anthropologist Grant McCracken puts it, it transmits all sorts of information to and about ourselves – values, commitments, priorities, repudiations, resolve, creativity. In the larger environmental setting it transmits information about what community and citizenship mean, or don’t, as the case may be; it says “We.” That can include, and it can exclude – for example, high curbs at crosswalks that say “don’t try it” to anyone in a wheelchair or not fully bipedal.

I don’t know about courageous. It just made sense to pursue these questions through McLuhan’s work. On a personal level, here was a guy who grew up in my home town (not that you’d see any indication of this in either Winnipeg or the University of Manitoba, where he completed his first degree), who spent his life trying to understand what turned my crank too – how our actuation and articulation of relation shaped us and the realities we co-created and cohabited. His Mechanical Bride was every bit as resonant as Barthes’ Mythologies (which, in a humourless and un-annotated translation, was hammered into all of us back in the 80s). He was influential in Europe (on Baudrillard, for example), and we in Canadian schools weren’t reading or teaching him – if anything, we seemed to have forgotten he had even existed (and I’m sure that was a happy thing for many). By the time I made my decision to tackle material culture as media through McLuhan’s work, both Marchand’s biography and the “posthumous” Laws of Media were in print, as were a number of reappraisals in journal articles. Arthur Kroker’s Technology and the Canadian Mind was also available. So, I wasn’t out there all by myself. I could see this had the makings of a reincarnation; I thought I’d attend.

Do you consider yourself an Innisian or a McLuhanite?

Neither, or both. Culturally, I always felt a chasm between my experience and background and theirs; the more I read them and about them, the wider the chasm became. It’s not about the persons – although they are certainly “there” in the texts – but about what they saw and understood. Reading McLuhan, especially his stuff on acoustic space and orality, sparked my interest in communication and got me into the Media; a late-70s CBC Radio Ideas series on Harold Innis got me out. I’d recorded that series and listened to it over and over again, realizing slowly that there was an impossible but profoundly compelling account of human history here, and the scope of that historical dimension appealed to me immensely. Innis showed me the continuing relevance of history and the dangers of what he’d called “present-mindedness,” and confirmed my suspicion of the inability to think outside of one culture or agenda. Here was a discourse simultaneously systematic and “global,” full of what seemed like contradictions and leaps in time, space and logic, that in a fundamental way made absolute sense. There was such a common-sensical point to it all – a new kind of “golden rule”: gold gives you power over the means of communication, and the means of communication enable you to establish your take on reality more firmly than the next guy’s. Relations of power were central, but so were the concrete and affective properties of media – the spoken and then written word, and their management as key means to social organization and relations of dominance and submission. And the real beauty of Innisian thought was that it taught that monopolies can’t and don’t last, and that new forms of media and technique, developed usually at the social margins, can shake up the apple cart and change those relations and the conditions they’d spawned. True, not always for the better, nor always for everyone, but they did mean change, and change brought new opportunities for re-inventing a world and the social realities it afforded its inhabitants/architects.

Is there a difference between Media Ecology and Canadian Communication Studies in your view?

Canadian thought has a distinctive flavour to it; I think Robert Babe has already argued that very effectively. The Canadian Paradigm, as I’ve dubbed it, is a messy thing – “fuzzy,” as they say nowadays. But that’s why it’s distinct and interesting. Babe’s study of communication thought in Canada assimilated a number of names to it that Arthur Kroker’s pioneering study in the late 80s hadn’t. Kroker had added George Grant to Innis and McLuhan; Babe added Northrop Frye and Grierson, among others. There are more people we could now assimilate to this paradigm, and that’s waiting for further explorations along these lines. That noted, it’s also the case that books on McLuhan’s work continue to appear, and they are not all repeats and rehashings. Some very fine new scholarly work has been done over the last 20 years or so, and there is very fine work appearing still. And the Media still keep misquoting and happily misunderstanding: “the Global Village is/will be a nifty place” – the exact opposite of that assertion being what McLuhan had in mind!

As I understand our southern neighbours, you come to the USA to be an American.  That “sacred” flag of theirs is the totem that replaces the past, old languages, homelands, traditions, and the rest.  It doesn’t work that way in Canada.  In the 21st century, we still have a queen, to whom all new Canadians swear allegiance.  Gads!  But we’re also a multicultural society from before it became fashionable world-wide to admit that societies are multicultural (even though the Germans recently backed out of that fashionable discourse – very sobering, that!).  Here, we’re a population made up of English and French, and the rest of us – native peoples, European, Asian, African, South and Central American, even American, immigrants, and generations descended from all these.  We, all of us, arrived with our cultural baggage, without jettisoning all of it on arrival, and many of us – sometimes by accident or for commemoration – still root around in that baggage, finding occasional treasures challenging mainstream discursive realities.  But challenges are burdens.  The contradictions between the worlds whence we came and the capitalist, semi-royalist, post-colonial, neo-imperial world we now share, and memories of worlds other, elsewhen and elsewise, make no contribution to ataraxia – peace of mind.

History – as memory – is always a burden; “love’s protest,” as Barthes once put it.  It is easy to fall into phantasmatic traps of “my people’s (tribe’s, gender’s, class’s) history” – the romances, not the realities.  Patriarchy, exploitation of the young, stupidity, greed, cruelty of class relations, atrocities, wars, neighbourly vendettas, ancestors one would rather not meet, the whole range of possibilities blessing and cursing the human predicament – these are all part of the architecture of the past, just as much as labours of heroism, self-sacrifice, goodness, emancipation from oppression, and all the enriching arts, cuisine and culture of “my/your people.”  We are all members of separate – and often, to us, very significant – consociate “tribes,” knowing we are only some amongst many others who, in some ways, are like us in this very sense, and yet different.  To be sure, not irreducibly different.  Mass culture is a great leveller, but by no means a perfect one.  Cultural fads and fashions come and go, cultures intermarry, can be assembled and reassembled from bits into new syntheses.  That being as it may, there is no “One” here, no single all-encompassing identity of/with.  If anything, this political reality called Canada is a work in progress.  Without technology, transportation, education, and communication, there could never be one (nor could any other larger polity).  Back in the 80s, the philosopher Leslie Armour wrote of a country made up of communities, and of senate reform to turn the Senate into a House of Communities.  So the trouble with tradition is that it unites and separates us, but leaves the question of its value open; it prevents the “rigidification” of thought, as Innis would have said.  It doesn’t let the myopia of “present-mindedness,” monopolization of social meaning, and the will to hegemony take complete command.

Not only by cultures are we different among differences, but also by regions.  We are spread far apart – the Maritime provinces, Central Canada, then the enormity of forest and lake called the Shield, the agricultural and mineral-rich prairies, the Rocky Mountains and their hospitable long deep valleys, the West Coast, and the Great White North.  Yet nearly three quarters of all of us live in the Saint Lawrence River area – a continent away from us here on the Pacific coast.  Communication, media access, and transportation were and remain the very essence of what brings these regions, ethnicities, tribes, subcultures, and just plain folks into a polity.  So we have a straddling between that which makes us Canadian and all the memory this empowers, effaces, and/or enforces.  Regions, and the history and traditions flowing out of them, are anti-environments, to use McLuhan’s formulation.

An anti-environment is what makes the artifice, the constructedness and habituations of the “normal” inhabited environment visible.  Media ecology, it seems to me, is less interested in questions of tradition and anti-environmental effects than in the structuration of relational ecologies by the rules, roles, and resources imposed on/by, or enabled by, media arrangements.  I think that the Canadian Paradigm, a founding instance of media ecology, emerges from the straddling between community, the greater polity, and then the polity in relation to traditions and empires.  What I’ve called straddling was actually an invitation to nimbleness for Innis and McLuhan.  What’s distinctive in the Canadian tradition is the trouble with tradition.  In some sense, we’re a nation made up of losers; well, beautiful losers, but losers nonetheless.  Children of remittance men, servants, those vanquished in shooting and economic wars, the dispossessed, the displaced, the disappointed with their point of origin, adventurers.  Who, in the name of whatever, would want to live in Winnipeg in January?  The country is beautiful and bounteous, but it can be bleak, biting-insect-infested, and of a scale that boggles the imagination.  Second largest nation state in the world after the Russian Federation, even if the USA looks bigger on most maps.  To live here – across such spaces, under such climates – and participate, effective media are essential.  They are so essential that they beg the question.  Innis and McLuhan rose to the occasion.

J.F. Striegel’s sadly neglected doctoral dissertation, “McLuhan on Media” (1978), was the piece that radically changed my views about McLuhan: it became evident there and then that the charges of technological determinism were simplistic and unfounded. But I guess we kind of knew this all along, didn’t we? We’ve had more than one conversation, you and I, about the affinity between media ecology and phenomenology, and the points of contact between McLuhan, Heidegger, and Merleau-Ponty. Do you think this connection is worth exploring further?

Striegel, it seems to me, effectively mined the phenomenological connection that was apparent in McLuhan’s teaching.  For one thing, McLuhan’s emphasis on embodiment, his “incarnation,” as it were – the concern with perception and the sensorium – runs as a theme through the corpus.  This kind of awareness cannot be “conceptual” or Cartesian.  The percept is the basis of McLuhan’s thought, and the senses the royal road thereto.  If anything, percepts are experiential – constituted by perceptions and interpreted through emotion and mind, to be sure, but experiential as a point of departure.  Since media shape perceptions, some sort of sensory cleansing is in order if one is to check on the validity of the percept.  Phenomenologically informed methodology can serve this purpose – attending to experience, variations, and then reflection.  Then there’s McLuhan’s adoption of figure/ground distinctions, borrowed from the Danish phenomenologist of art Edgar Rubin.  In his journals McLuhan was dismissive of Heidegger, trashing him for obscurantism.  This, coming from the gnomic McLuhan, is pretty funny, but there it is.  McLuhan also felt a stronger affinity to the Structuralist movement than to the Phenomenological or Existential.  Yet the use of figure/ground distinctions and the percept suggests that he was toggling between the two – seeing the value of attending to experience while simultaneously concerned with the larger implications of changes of dominant media forms on society.  He might never have got there had it not been for his debt to Innis.

As for technological determinism: well, if anyone actually read McLuhan, and not even all that carefully, his insistence that “all this is inevitable so long as everyone is asleep” must have jumped out at them.  If his work, and that of Innis, is technologically deterministic, it is only so in a therapeutic sense.  Anthony Wilden used to say that “reality is what trips you up when you’re not paying attention.”  Although Wilden was no fan of McLuhan’s, the sensibility was exactly the same.  Much like a doctor telling an alcoholic that booze will kill them, Innis and McLuhan were saying that if you let these things happen and pay no attention, they’ll mess up your world.  That’s what all the talk of “present-mindedness,” “Narcissus narcosis,” and “rear-view mirrorism” was all about.  Both were saying: “Wake up!”

Both Robert Babe’s analysis of the dialectic in McLuhan’s method and Watson’s magisterial biography of Innis suggest that both men were deeply aware of a process wherein media shaped us socially, politically, philosophically, and, in McLuhan’s case, biologically.  “We change our tools and then our tools change us,” said McLuhan.  What most folks miss here is that this is a partial phrase, which has no ending, as it describes a dialectical relationship between us and our media – our technologies, writ large.  His observation that it is invention that breeds necessity, and not the other way around, can be understood as describing a driver, but by no means a determinant.  At least, not as long as someone remained “awake,” or “maladjusted” – perhaps a beautiful loser.

The following question was drafted by Professor Ian Angus: “How do you reconcile media theory with your emphasis on rhetoric in communication studies? Isn’t rhetoric content-oriented rather than oriented to media form?”

Rhetoric has always been about the most effective way to arrange content relative to intention, exigence and audience.  Now, in antiquity, the medium was the spoken word, and then the written word.  Of course, there was more – tone of voice, posture and gestures, emphasis and so forth.  I don’t see much of a leap from this to questions of broadcasting, media ownership and access, media selection, formatting, production, and the rest.  My understanding of rhetoric is informed by its relationship to design.  No surprise that when the teaching of rhetoric fell out of favour, the art re-emerged under the rubric “composition.”  One of my issues with the whole of semiotics is the claim that the sign is the smallest unit of meaning; I sooner follow Ricoeur’s insistence that it’s the sentence that is the smallest unit of meaning.  And when you say sentence, you imply not only grammar but also composition.  I think the point here is that composition is for naught without content – all you get is composition’s self-exposure.  But beyond that, you get no meaning effect.  The two, then – form (rhetoric, if you will, or composition) and content (meaning effect, what you’ve associated with rhetoric above) – are always interconnected.  In the question, you say content equals rhetoric; yes, but content needs a vehicle for transmission, and that’s assembled in the sentence through composition – composed, paced in relation to one another.  Here too is rhetoric.  So I cannot accept the distinction between form and content as the basis for differentiating rhetoric from . . . from what?  From the tone of a sentence, from the use of the right word in the right place at the right time?  An isolated, abstracted word or sign may not transfer meaning, but it sure can and does in the context of other words and signs.  Composition requires rules, signs, tropes, topics; rhetoric requires much the same, if under different rubrics, perhaps.

Rhetoric has the odd privilege of being both a science and an art.  As a science it is a long tradition of rules, taxonomies of figures of speech, and a large body of theoretical writing, from the most hands-on how-to right across to some of the great minds of the previous century.  There’s lots to know – covering verbal expression in living performance and written forms.  And, simultaneously, like management and leadership or any kind of community animation, it’s an art, an ability often learned through the human mimetic ability.  It helps if you have no fear of speaking in public, but there are arts of memory and for overcoming the fear of crowds, and they are all grounded in our ability to imitate each other, to imagine something elsewise, to rehearse, then do, and review the act in imagination.  To imitate, and appropriate something of that neurophysiological process in and for ourselves.  That link to imagination – being able to imagine one’s audience’s involvement in what’s spoken, the imagination to compose something that enlightens, exhorts or exalts, a case or point or insight that’s relevant, and to conduct attention to consciousness through it – that is imaginative, an art.

Now, each medium is a trade-off of some kind.  My interactions with textual material on my laptop, on my smartphone, and on paper are completely different.  A relationship break-up communication is very different in person, face to face, than sending or receiving an e-mail to the effect that “it’s all over between us/was fun/see ya!”  A film on a big screen is a different perceptual event than an old tube TV or a contemporary plasma screen.  Architecture programming on TV was always a bummer because the image was too small; now, with larger screens, that’s changing.  The dying art of handwritten letters gave off more than just the meaning of the words in the text.  There was also the imprint and mood, the ability and personality of the hand that wrote – perhaps perfume, perhaps a coffee stain.  The paper spoke as well: a sheet ripped out of a notebook, printed letterhead, some cheap or perhaps imported stationery set?  Even on the way out – a plain box made of planks or an ornate bronze coffin with angelic fittings?  This is all rhetoric of media – electronic, screen, paper, typing, chirographic – with each affording amplification of some aspects of human interaction while constraining others.  I can’t smell your breath when we’re on the phone, but I can phone almost instantaneously almost anywhere, anytime, in the world.  Like a word in the right place, or a gesture that emphasizes a point, each medium is a means for shaping messages.

We can recall Aristotle’s formula for the pisteis – the “artistic proofs,” elements of discourse or demonstration that are capable of instilling judgment or belief in an audience: logos, ethos, and pathos – reasoning, credibility, and feeling.  Of course, in classical rhetoric, these refer to points raised or arguments enunciated; all verbal.  But has it ever been “all verbal”?  If we have an orator in mind, then bearing, gesture, and appearance, the urgency of exigence, time of day and location, and audience mood would also have to be factored in.  Now, if we move to the written word or the graphical and electronic media eco-systems, the rhetoric of media form plays as much a role as appearance, demeanour, gesture and the rest.  On TV you have lighting, the set, the talent, the camera angles and movement, the quality and nature of sound, the produced head and tail introducing and closing off the show, the direction and switching of angles and shots.  All these elements provide the “infrastructure” supporting the verbal content and can express and support, or decline and subvert, the intended messages.  Imagine a network prime-time news hour anchored and hosted out of a gardening shed or porn parlour.  All the elements are rhetorical devices and strategies to induce a certain response from an audience – hence the pristine, often hi-tech sets and groomed anchors on news shows; the punchy, urgent, fast-tempo introductory sound tracks at heads and tails; the three-minute items and pacing of news broadcasts.  Another rhetorical layer supplements the anchors’ reading: the reports and video clips “from the field.”  These may contain fewer elements than the news show itself, but often involve interviews (who was chosen, why, what do they look and sound like?), location shots (why these, and in these camera angles?), and an announcer or journalist setting up the “B” roll illustrating, demonstrating and either supporting or subverting the script being delivered.
These are all rhetorical elements; they may not be linguistic, but each of them has the capacity to afford or condition a certain conductivity of attention and consciousness, each contributing to pistis, the state of mind that judges or believes.

Finally, if we need patrimony here, Innis thought of himself as a social scientist, McLuhan as a literary scholar.  For Innis it was the concrete properties of media and their affordances that shaped social organization, knowledge transmission and implementation, and relations of power.  McLuhan took this idea and wed it to the rhetorical tradition.  If one could study the relation between concrete properties of language use and the production of meaning, why not the material outerances/utterances also produced by people to activate and articulate relations and knowledge?  McLuhan also thought of himself as a grammarian, not a rhetorician, but it was his studies in rhetoric that grounded his media thinking.

What are you reading these days and what are you currently working on?

History of communication, techniques and technologies, media and the like.  I try to keep abreast of recent developments in archaeology and cognitive science.  When there’s a free moment, novels by Iain M. Banks, particularly his “Culture” series.  I’m fascinated, if in no small amount sobered, by the recent acceleration of the human impulse to animate the inanimate.  I also avidly follow developments in robotics, tele-presence and digital technologies, as I think that’s where the next “action” will be.  We may be getting close to having figured out the Internet – social media, instantaneousness, ubiquity, interactivity, granularity, immensity of extrasomatic memory, blogs, walled gardens and all the rest of it.  Well, my friend, information alone can’t change your diapers on the way in or out of this world.  For that you need hands, and the intelligence not to bruise, fold, spindle or mutilate the changee.  We abhor the idea of slavery, and few want to be in service (though such is the fate of more and more), but always at a cost – somebody’s gotta be paid.  So, if there were some way to get the benefit without the cost or the peskiness of unhappy humans, why not machines?  I think this is where we’re going next.  Servomechanisms, indeed!

From smartphones to smartstuff.  Paint changing colours on demand, doors recognizing and automatically opening for owners only, refrigerators sending orders to be filled by grocers, lavatories reporting on the user’s state of health immediately after the business is done, rooms reading occupants’ moods and adjusting temperature and lighting accordingly, domestic robots enabling tele-presence and care-giving.  A world of ease and convenience – our very deeply seated social addictions.  Sobering too: military and putative uses in surveillance of populations, war-making, and reductions in users’ casualty rates while increasing those of the enemy.  As we move more of our intelligence into the inanimate world, we’re also transferring some of our demons, weaknesses and paranoias, as well as our will to power.  This may sound like science fiction, but then so did Jules Verne (and McLuhan, occasionally).  There’s a widespread global push in these directions, with expensive, sometimes silly, and often scary prototypes quietly beginning to colonize the lifeworld.  The generalization of smart stuff will have enormous effects on our societies, perhaps on the history of the entire species.  Married to developments in evolutionary psychology and cognitive science, these new-new media may have far deeper implications than we can begin to imagine today.  Science fiction writers have made an industry of such imaginings, but they get dismissed, ’cause it’s fiction.  Neither robotics nor the Internet is fiction – drones kill, and we have a couple of remote-controlled, very sophisticated, almost-dinky toys getting intimate with rocks on Mars.  Perhaps most sobering is the involvement of military establishments in this push.  The Japanese, we’re told, are trying very hard to develop technologies capable of elderly care; the Americans lead the way in battlefield and theatre-of-conflict technologies – both remote-controllable and “autonomous” surveillance and killing machines.

In light of McLuhan’s comment on the origin of necessity, his wry suggestion that we are the sex organs of our machines, and the sheer fact of the acceleration of technological development over the past three generations, we have to wonder whether or not this kind of development is sustainable.  Do we have the wisdom to manage the raft of new-new media, applications, and technologies that leap out at us month by month?  What effects on consciousness, desire and expectation, perhaps neurophysiology, will living through avatars in virtual worlds, the increasing time-urgency prompted by being always on and watched, telepresence – being in more than one place at a time – multitasking, and interactions with/through anthropomorphic, bipedal, animate, autonomous, chiromatic technologies – robots – have?

I don’t think this is a Terminator scenario, nor do I think it’ll be without a whole lot of trauma.  The superpowers and most of their allies and clients survived the Cold War and the potential nuclear holocaust it implied, although not the folks who fought their proxy wars in Vietnam and Afghanistan.  Nuclear holocaust still remains a possibility, though its apparent urgency has attenuated a bit.  I find myself wondering how survivable an animate material culture will be.  For whom?  In whose interest and at whose expense?  And what will the ratio look like between benefits and costs?

– – –

© Excerpts and links may be used, provided that full and clear credit is given to Roman Onufrijchuk
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2010). “Interview with Roman Onufrijchuk,” Figure/Ground. December 23rd.
< >

Questions? Contact Laureano Ralón at

Interview with Noam Chomsky

© Noam Chomsky and Figure/Ground
Dr. Chomsky was interviewed by Laureano Ralón and Axel Eljatib. December 17th, 2010.*

Noam Chomsky is an internationally renowned scholar, author, and activist. He has taught at the Massachusetts Institute of Technology since 1955, where he developed a theory of transformational grammar that revolutionized the scientific study of language. He is a prolific author whose principal linguistic works after Syntactic Structures include Current Issues in Linguistic Theory (1964), The Sound Pattern of English (with Morris Halle, 1968), Language and Mind (1972), Studies on Semantics in Generative Grammar (1972), Some Concepts and Consequences of the Theory of Government and Binding (1982), and Knowledge of Language (1986). In addition, he has wide-ranging political interests. He was an early and outspoken critic of U.S. involvement in the Vietnam War and has written extensively on many political issues from a generally left-wing point of view. Among his political writings are American Power and the New Mandarins (1969), Peace in the Middle East? (1974), Manufacturing Consent (with E. S. Herman, 1988), Profit over People (1998), and Rogue States (2000). Chomsky’s controversial bestseller 9-11 (2002) is an analysis of the World Trade Center attack that, while denouncing the atrocity of the event, traces its origins to the actions and power of the United States, which he calls “a leading terrorist state.”

As you probably know, South America has for some time now been undergoing an intense process of democratization in the broadest sense. Most governments in the region are now popular governments, and Argentina, in particular, has adopted a number of progressive measures: the country no longer follows the neo-liberal recipes of the 1990s; its economy has been growing at an unprecedented rate; human rights became a top priority; same-sex marriage was legalized, and supplements to help low-income families with children are now in effect. Why is the United States unable to be on the same page with South America on these fundamental issues, and is the American media partly to blame for this disconnect between north and south?

Well, the United States has a long-standing policy towards Latin America. It goes back almost 200 years, to the Monroe Doctrine of the 1820s, which could not be implemented at the time because we were not powerful enough. But over the years it came to be implemented, and that policy is that Latin America must be under our control. If you go back a little earlier, let’s say, to Thomas Jefferson – the most libertarian founding father – his view was that Latin America must be settled by the United States to eliminate the inferior races (the red, the black, the Latin) and replace them by Anglo-Saxons. Well, that was restricted in the Monroe Doctrine to the conception that the US would control Latin America, which of course we could not do because the British were much too powerful; but by the end of the century that came to be pretty much the case. Then in the 1970s, during the Nixon administration, when there were concerns over the control of Latin America because things were getting out of hand, the National Security Council took the position that if we cannot control Latin America, how are we going to control the world? That was considered a necessity. I will not run through the whole history, but that has essentially been the history. Now, essentially in the last 10 years, Latin America has moved towards integration, which is a pre-requisite for independence – an independence which is what you describe: Argentina’s rejection of the IMF rules, and much else happening in the continent. The US government is concerned that its traditional control over Latin America is eroding, and many other factors are entering into it too; for example, the growing foreign trade with China, which is eroding Washington’s position as the leading economic partner of Latin America in terms of investment, resources, and so on. And the American press pretty much follows along whatever the proper line is. So yes, there is a disconnect.

The role of the media is also being re-assessed in South America. For example, in Argentina, a new media bill aimed at democratizing the broadcasting system was recently passed into law. What do you make of this scenario?

Traditionally in Latin America, the media have been almost entirely under the control of very small sectors of extreme wealth and concentrated power. That has been the nature of the societies and it has been the nature of the media. Yes, to an extent that is changing, but exactly what the new Argentine law will do I could not say without looking at it more carefully.

Do you think that regulation can democratize the media in a capitalist framework? What would your “utopian” model of media ownership consist of: worker ownership, community oversight, a combination of both?

Well, I do not really have Utopian visions. I think that a better system – and one that we ought to strive towards – is a democratic media, which is under the control of the workforce and the community. In fact, there were approaches to that in the late 19th century, which was a period in the United States of extreme proliferation of media of all kinds (labour, ethnic, community, national) with very substantial participation; they were very widely read, and in the case of the labor papers, they were often written by the workforce. That great variety of media was a major contribution to the functioning of the democratic system, but it collapsed under the pressure of concentration of capital, which enabled media to be owned by a few wealthy families and later by conglomerates and corporations. There was also reliance on advertisers: advertising reliance changed the nature of the media quite radically, because the content and choices and so on naturally catered to the market, which is advertisers. So regulations that limit media concentration and provide for community participation and participation of the workforce are almost necessary. That can democratize the media; it depends very much on how it is done.

What makes a good responsible citizen in this day and age?

In every day and age, a good responsible citizen is one who participates in the management of public affairs. That is what citizenship is in a democratic society. It is an ideal that is never reached but can be approached, and it holds in all institutions – from the workplace, to community, to media, to commerce, everywhere. The more opportunity there is for direct and meaningful citizen participation in making the decisions that affect our lives, the closer we reach the ideal of a functioning democracy.

What do you think about the notion of “populism”, which is often used to criticize Latin American governments from a neo-liberal perspective in Europe and the United States?

Well, populism is a term that has a great many meanings, but if populism is understood to mean participation of citizens in decision making, it is the same as democracy.

What is the importance of social media as a gateway for dissident voices, and what do you make of the contradiction that many of these outlets for “self-expression” are supported by one of the most powerful corporations on earth?

Well, it does not matter who supports them if the supporters play no role in how they function. Of course, that is very unlikely to be the case; we have just seen it in the WikiLeaks case, where Amazon for example refused access to it. So if there is control by a sector of power, state or private, then you can be pretty confident that it is going to be misused. In fact, it should be under popular control; but in the existing society – which has very high concentrations of power – access to social media can still be a positive force. It has negative aspects too in my opinion, but in general it is fairly positive.

What are some of those negative aspects?

Well, let’s take, say, Twitter. It requires a very brief, concise form of thought and so on that tends toward superficiality and draws people away from real serious communication – which requires knowing the other person, knowing what the other person is thinking about, thinking yourself of what you want to talk about, etc. It is not a medium of a serious interchange.

Do you think your propaganda model is in need of a revision in this age of digital interactive media?

Well, the latest edition of our book on this came out in 2002, and at that point the electronic media was pretty much beginning to flourish. I did not see any changes and I do not see any dramatic changes to date. Of course there are differences: the Internet has opened up many new possibilities that did not exist before, but as far as I can see the general picture is approximately the same.

*This interview has been reprinted in Spanish by La Nave (Argentina)

© Excerpts and links may be used, provided that full and clear credit is given to Noam Chomsky
and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. & Eljatib, A. (2010). “Interview with Noam Chomsky,” Figure/Ground. December 17th.
<  >

Questions? Contact Laureano Ralón at

Interview with David Cerbone

© David Cerbone and Figure/Ground
Dr. Cerbone was interviewed by Laureano Ralón. December 5th, 2010.

David Cerbone is a Professor of Philosophy at West Virginia University, where he specializes in 20th century continental philosophy. His ongoing research focuses primarily on the phenomenological tradition (with an emphasis on the work of Martin Heidegger and Edmund Husserl), Wittgenstein, and early analytic philosophy. His research in these two domains overlaps considerably: he looks to both areas for resources for understanding and criticizing traditional philosophical problems (e.g. problems oriented around skepticism, realism, and idealism), as well as currently dominant philosophical views, most notably naturalism in various forms (scientism, physicalism, materialism). His most recent work has primarily been concerned with the latter, with the often antagonistic relation between phenomenology and scientific naturalism. He is interested in documenting both the attractions and dissatisfactions of naturalistic accounts of human beings and the world, so as to ascertain more fully just what phenomenology has to contribute to our self-understanding. Professor Cerbone is the author of Understanding Phenomenology and Heidegger: A Guide for the Perplexed.

How did you decide to become a university professor? Was it a conscious choice?

I decided very early on (first year of college), at least indirectly.  What I mean here is that I remember consciously deciding at that time to become a graduate student in philosophy, fully realizing that this would lead more than likely to becoming (or trying to become) a professor.  I’m pretty confident I can blame nearly all of it on the section leader I had for my introductory philosophy course, Randall Havas.  I thought that he was the smartest, coolest person I had ever met, and he became my exemplar and mentor.  (We subsequently became – and still are – very close friends.  He stood up for me at my wedding, and he’s become a mentor to me on dog training as well.)  Randall encouraged me to continue with graduate work in philosophy.  He had also been an undergraduate at Berkeley, so it was through him that I learned more about the faculty there, especially Hubert Dreyfus.  There was at that time a lot of Harvard-Berkeley traffic (Berkeley undergraduates becoming Harvard grad students, and Harvard undergraduates heading off to Berkeley), and I tuned into that pattern pretty quickly. I remember visiting Berkeley one summer when I was out visiting my brother, who was living near there.  I sat in on lectures by Donald Davidson and John Searle and also explored Berkeley quite a bit.  I think that cemented my decision.

How did the role of university professor evolve since you were an undergraduate student?

I don’t have a real sense of any dramatic change, in that professors still teach most classes in the usual way, publish papers and books, go to conferences, and so on.  Things have changed at the level of detail, for example in the way that technology is changing the structure of classrooms:  many professors use PowerPoint and other gizmotronics in their teaching, more things are done online through university-sponsored websites, that sort of thing.  For my part, I still prefer chalk-and-chalkboard, so I probably teach in pretty much the same way as my professors did when I was an undergraduate.

What makes a good teacher today? How do you manage to command attention in the classroom in this age of interruption characterized by attention deficit and information overflow?

This is actually a pretty sensitive subject for me right now, as the second question especially is something I’ve been actively pondering.  I teach large introductory lecture courses, among other things, and over the past couple of years I’ve started to notice students here and there – especially in the nosebleed seats – texting in the middle of class.  I very rarely interrupt class to call them on it, but it does bother me that a student considers that acceptable behaviour during class.  (It also makes me less surprised when students do poorly on the exam.)  Then again, you hear so much about texting and driving that doing it during class should hardly seem shocking.  People seem to be increasingly wedded to these powerful little devices and it’s hard (for me) to figure out how best to combat their influence.  A lot of professors in my department decided to put a “no device” statement in their syllabi this semester.  I opted not to, primarily because of questions concerning enforcement.  For one thing, I don’t want to devote any energy to watching for the clever student who keeps his or her iPhone or Blackberry on his or her lap, but also many students use laptops to take notes.  Do I really want to go around and check to make sure they are taking notes, rather than updating their Facebook pages or booking their spring break trip?  There’s also a part of me that feels that if students want to text rather than pay attention, then it’s their loss; it’s not like talking on the phone or noisily rustling the newspaper, so it doesn’t really affect the students around them.  College students are legally adults in most respects, so they need to figure things like that out.

I’ve actually been more worried of late about my smaller, upper-division courses, not because of texting and the like but because I’ve noticed a sharp decline in students reading the assigned material.  I’ve been amazed over the last few semesters especially at how many of the students seem to read little to none of what is assigned to them.  Instead, they try to pick up what’s what from what I say in class (which ends up being more lecturing than I’d like, because they do not come in ready to discuss the material).  I should note that I do not lay it on thick in terms of reading assignments.  The stuff I assign – phenomenology, existentialism, deconstruction, and so on – is hard, but I give it out in small doses, usually less than twenty pages for an entire week.  But no matter how small the assignment, it still does not get read.  I had started to take it personally, thinking it had to do with how I taught or the kind of thing I’m trying to get students to read, but in talking to colleagues, it seems to be a more pervasive problem.  Even in the most advanced undergraduate seminars in my department, students very frequently do not read the assigned material.  The deep worry I have in this direction is whether students are losing – or simply lack – the necessary reading skills really to do the reading, in the sense of having the patience and attentiveness to wrestle with “difficult” writing.  I have yet to hit on a winning formula for overcoming this problem.  I’ve been loath to resort to pop quizzes and the like, but it may come to that.

Are you of the opinion that ICTs and social media make us stupid? Do you think today’s generation is the “dumbest generation”?

I would never say anything that sweeping (although when you hear about texting and driving, it is tempting).  And besides, I think there are many, many positive dimensions to these new technologies.  I live in rural West Virginia.  My house is fairly isolated and even with respect to university life, I’m still relatively isolated in the sense that very few members of faculty are interested in things like Wittgenstein, Heidegger, phenomenology, or continental philosophy more generally.  It is hard to imagine what my life would be like without such things as e-mail and the Internet.  I can stay in touch with people, find things out more quickly and easily, share my work with others, and so on.  All of this would have been much more difficult in the days of only the telephone and the stamp.  (I also spend a lot of time doing photography and for this, the web has been invaluable:  there are some really great online communities where people who are passionate about photography can share ideas, techniques, and work.  I’m pretty sure that being a part of these communities has made me a better photographer.) Having said that, I do nonetheless worry about the level of stimulation and distraction these technologies seem to create.  It is very hard for people to ignore them – to turn them off, leave them at home, and so on – and so people seem to be constantly distracted (or on the verge of being distracted).  I have on occasion walked by the student lounge and noticed students talking to one another, while also having their laptops open and cell phones out.  So, there’s conversation, texting, and web surfing (usually Facebook updates) going on all at once, and it’s hard for me to see how any of those things are getting sufficient attention, especially the conversation that’s actually happening there in the room.  Can you really converse with any depth if you’re simultaneously attending to your laptop and your phone?
And to go back to something I mentioned above, I wonder about the effect of these things on reading.  Reading philosophy requires effort and patience, a willingness to read slowly and attentively.  Texting, web surfing, and the like do not really cultivate those skills; indeed they seem to make people far less patient and careful (consider how hazardous e-mail communication can be). I have seen some studies about brain development that underscore the importance of reading for deep cognitive development, so it worries me that students are less and less willing to read because, to press the worry, I wonder if it will make them less and less able ultimately to read at a deep level.

What advice would you give to young graduate students and aspiring university professors?

My dissertation advisor – Barry Stroud – gave me some really excellent advice that I do not think I can improve upon.  For those who are still in graduate school, always remember that the dissertation is the last piece of writing you’ll do as a graduate student.  I think what Barry meant was that although the dissertation is important and should be taken seriously, it is not – and should not become – your life’s work, something to try endlessly to perfect.  When I first got to Berkeley, a lot of people seemed to be taking forever to finish graduate school (the national average in Philosophy was over eight years at the time), and I think Barry’s advice is good for helping to avoid that.  For those in grad school or already professing, the second thing Barry told me was always to distinguish between the discipline and the profession.  What matters philosophically is not the same as what matters for advancing one’s career.  I think this distinction is hard to keep sight of, given the pressures and difficulties of the job-market and tenure-track (if one is lucky enough to get on tenure-track).  I don’t know if I’ve always succeeded in following it.

Who were some of your mentors in graduate school, and what did you learn from them?

I feel like I got a lot of support during graduate school.  The faculty at Berkeley were generally very supportive and encouraging.  My primary mentors were Barry Stroud, Hubert Dreyfus, Hans Sluga, and David Stern, but I also got a lot of help from Alan Code, Daniel Warren, Hannah Ginsborg, Elizabeth Lloyd, and Janet Broughton (among others).  Bert was especially influential in terms of my interest in Heidegger, as well as phenomenology more generally, while Hans seems to know something about nearly everything (Frege, Wittgenstein, Heidegger, Foucault, logic, art, politics, and so on).  I think Barry had the biggest influence on my philosophical sensibilities.  I continue to admire his incredibly meticulous approach to philosophical questions and problems.  I think Barry is the best philosophical worrier I’ve ever come across (in more than one sense of “worry”).  I have tried, over the years, to follow his example, although it most often feels like an unrealizable ideal for me.  Occasionally, people respond to something I’ve written by remarking that it is reminiscent of Stroud (even when I hadn’t been thinking especially about Barry or anything he’s written).  For me, this is tremendously flattering.

As you probably know, Hubert Dreyfus was recently featured in the movie Being in the World, directed by Tao Ruspoli; the film centers around the notion of “ongoing skilful coping” or “mindless everyday coping.” Do these terms capture the pre-reflective state of playful absorptive engagement with the world driven by operative intentionality? 

I have not yet seen Tao’s film, though I certainly know the cast of characters quite well, at least the academic end of it.  The phrase “mindless coping” strikes me – has always struck me – as particularly problematic, as the qualifier “mindless” suggests something unintelligent, maybe even zombie-like about the activity.  But it is misleading in a more fundamental way:  the whole point of, say, Heidegger’s appeal to “circumspective concern” or Merleau-Ponty’s notion of “motor intentionality” is to name and develop a phenomenologically accurate and so less philosophically distorted account precisely of mindedness.  Merleau-Ponty says in Phenomenology of Perception that “consciousness is in the first place not a matter of ‘I think that’ but of ‘I can’.”  It seems to me to be crucially important not to lose sight of the fact that Merleau-Ponty is here characterizing consciousness, and so nothing at all “mindless.”  Rather, I read him as trying to get clearer about what it comes to for us to be minded as we are, what it means to have experience, and I don’t think you’re going to get very far in that endeavor if you simply delete the notion of experience from the start.  (Dreyfus on occasion cites approvingly Merleau-Ponty’s characterization of us as “empty heads open on a world,” but if you check the context, he is actually talking about children’s conception of other people rather than any kind of optimal state.)

That said, I don’t balk particularly at some of the variants on the phrase:  “absorbed coping” and “ongoing skillful coping” strike me as less problematic.  There are some unwanted connotations that come along with “coping,” which suggests managing to deal with some adversity, but those seem easier to avoid than the ones attaching to “mindless.”  I think there is a genuine phenomenon that Heidegger and Merleau-Ponty are onto here and I think they’re right that getting clear about this phenomenon reorients how one thinks about the philosophical tradition they’re emerging from, but also how one thinks about current trends in philosophy, such as various forms of scientific naturalism.  Thus, I don’t think worries about terminology are central, though I see no reason to cede concepts like “mind,” “consciousness,” and “reason” to the tradition phenomenology generally opposes.

What you call a “labour of love” certainly has nothing mindless about it, nor should it be understood in terms of the kind of impersonal anonymity Heidegger discusses under the rubric of das Man.  To become absorbed in such a labor is not to become transparent, but instead to become more attuned, more attentive, more responsive.  Dreyfus is right that this way of being skillfully attuned is not a matter of self-consciously (or even implicitly) following a set of rules; part of the freedom that comes from mastering a skill is the way one becomes open to the situation in its particularity, but that openness is something one experiences.  It is not the absence of experience.

I guess the next ‘logical’ question might be, does the body “think”?

Putting it that way does sound odd, doesn’t it?  Perhaps it would be better to say that thinking is a bodily – or embodied – activity.  While there are cases where we are thinking in ways that approximate a more disembodied stance, e.g. when we’re running through something in our heads or thinking about something very abstract, we need to be careful not to overread those kinds of cases, and in at least two ways:  first, there are other forms of thinking that are clearly embodied.  Consider cases where I’m getting the feel of a new piece of equipment (for me, it’s often a new camera or lens).  Here, the learning process is clearly bound up with my exploration of the item:  turning it about in my hands, looking at it in various ways, probing and testing the various parts, making guesses sometimes about what this or that lever or button does.  That kind of activity is a form of thought, a way of engaging mindedly with the world, which seems unintelligible in the absence of the body (even Husserl – Dreyfus’ arch-nemesis on occasion – recognized the importance of free bodily exploration for the possibility of objective thought).  Second, though, is that even the less directly bodily forms of thinking still presuppose a more explicitly bodily engagement, i.e. without more direct bodily engagement with the world abstract forms of thought would not be possible.  Phenomenology is especially good at bringing this out.  Husserl is already on to this in Ideas II, but Merleau-Ponty develops it to the greatest extent in Phenomenology of Perception with the primacy of “I can” and his ideas about motor-intentionality.

Richard Rorty once identified Heidegger, Dewey, and Wittgenstein as the most important philosophers of the 20th century. Having read both Heidegger and Dewey, I personally could not agree with him more. In fact, I began reading the post-modernists as an undergraduate, however, by the time I got to graduate school I realized that one could not understand Baudrillard without McLuhan; that one could not make full sense of Derrida without Heidegger, and so on. So I went back and took on Being and Time and Phenomenology of Perception; and the more I studied them, the more I felt as though everything the post-modernists talked about was in a way “always already” there in those classic texts. I am curious about your take on the relationship between post-modernism/post-structuralism and existential phenomenology. Some point to a “missed” existential turn and blame it on Schutz’ influence in North American theorizing – that is, its “uncritical” appropriation of Husserl (Anton, 2001); others speak of “an unspoken goal of the post-structuralist project to render Sartre history” (Martinot, 1991), and in a recent interview, Iain Thomson declared that the father of post-modernism was none other than Heidegger. What do you make of all this and where does Wittgenstein fit in?

There’s a lot going on in this question, so it is hard to know just where to start in answering it.  I’m not familiar with some of the specific references you make – to Martinot and to Anton – so I cannot comment with any confidence on their particular claims about the appropriation of Husserl or the goals of post-structuralism.  The thing about terms like post-structuralism and, even more so, postmodernism is that they are awfully hard to pin down, and so I think it is hard to adjudicate claims made about the relations certain figures may or may not have to them.  Having said that, I do think your own discovery is generally right, i.e. that many of the figures associated with postmodernism and post-structuralism are considerably more intelligible if read against the background of phenomenology (and I would include Husserl here, who’s absolutely essential for Derrida).  Even if the relation is one of opposition, that is only going to come into view by first getting clear on just what is being opposed (and one can also then view the later criticisms more critically, e.g. is Derrida right in reading Husserl as implicated in a general “metaphysics of presence”?).  But I also think there’s something to your sense of “already there,” in that a lot of what gets emphasized in postmodernism and post-structuralism is already on display in a lot of phenomenology and existentialism.

But to say much beyond this requires having a somewhat more definite conception of postmodernism or post-structuralism in mind.  So take Lyotard’s proposal that postmodernism is marked by incredulity toward any kind of metanarrative, any sort of grand overarching account that makes sense of it all.  Postmodernism is in this sense a kind of scepticism concerning the possibility of philosophy.  Now, using this definition, there’s certainly going to be some tension between postmodernism and the Heidegger of Being and Time, at least in the sense that “fundamental ontology” looks like a kind of metanarrative.  But the more existentialist strand of Heidegger’s thought – and so Sartre’s existentialism – cannot be so readily dismissed from this perspective.  One way of cashing out Sartre’s slogan that “existence precedes essence” is that one must construct and evaluate one’s narrative in the absence of any metanarrative (or, to put it in Nietzschean terms, to construct the view that best accords with the insight that every view is an interpretation).  You could argue, I suppose, that there’s still a metanarrative there – to the effect that God is dead – but it’s not much of one; indeed, it’s hardly distinguishable from Lyotard’s definition.  So is Lyotard’s postmodernism a new view or is it already there in existentialism or in Nietzsche?  It’s at least foreshadowed in some very significant ways.

But there’s also later Heidegger.  Though I don’t want to try to channel Iain’s views, I suspect he was thinking more of late Heidegger in his pronouncement that Heidegger was the father of postmodernism.  I know that Iain has a very definite idea of Heidegger’s relation to modernism that gets worked out in his views about art and aesthetics, but even without getting into those kinds of details, just consider Heidegger’s titling a collection of his essays Holzwege.  The image of his thinking as pathways through a forest again suggests a resistance to any kind of metanarrative:  there is no one true path, no final destination, not even necessarily a connection between one path and every other.  Heidegger’s refusal to systematize his later thinking marks a kind of postmodern mindset.  I don’t know if Heidegger is always true to this kind of piecemeal, fragmentary anti-systematicity.  For example, his appeal to dwelling seems to have a kind of normative force for all human beings, just insofar as they’re human.

You asked about where Wittgenstein fits in.  Consider the Preface to the Investigations, where he describes the work as “a number of sketches of landscapes,” and as traveling “over a wide field of thought criss-cross in every direction.”  That certainly resonates with Heidegger’s Holzwege imagery.  Again, in Wittgenstein we see a kind of refusal of systematicity, and in that sense, a refusal of philosophy understood as a kind of general explanatory project.  That Wittgenstein lurks in the background of postmodernism is hardly surprising, given Lyotard’s appropriation – for better or for worse – of Wittgenstein’s notion of language-games. (I should note here that I’m generally sympathetic to Rorty’s criticisms of Lyotard, some of which are entered on Wittgenstein’s behalf.)

What are you currently working on?

This feels like a hard question to answer during the second-to-last week of the semester, when my own work feels both far away and, with a break from teaching approaching, perhaps a little bit closer.  I just finished up a paper on phenomenological method, which will appear in the Routledge Guidebook to Phenomenology.  I’m also in the very early stages of thinking about Heidegger and Wittgenstein in relation to architecture (and so in relation to postmodernism in a different sense).  I’m interested especially in getting clearer about Heidegger’s appeals in his later writings to dwelling, especially in relation to Wittgenstein’s philosophy as calling us back to the “rough ground.”  Beyond their interrelation, I’m wondering about the ways specific conceptions of architecture may be understood as expressing concretely these philosophical ideas.  I’m going to need to learn a lot to make progress on this project, as I’m pretty much an outsider when it comes to architecture and architectural theory.

As for the “next book,” that’s a little murkier than I’d like it to be.  I currently have a book project – What is Continental Philosophy? – that I’m working on for Cambridge (it’s meant to be a companion of sorts to the volume on analytic philosophy that Hans-Johann Glock wrote).  However, it is going very slowly at the moment.  I find that the more I think about the term “continental philosophy” and the so-called analytic-continental divide, the more problematic I find these labels and oppositions to be, and so the less I feel I have an answer to the question the book’s title poses.  The more I think about it, the more the question comes to seem loaded, or as requiring more of a meta-level answer than anything else.

© Excerpts and links may be used, provided that full and clear credit is given to David Cerbone and Figure/Ground with appropriate and specific direction to the original content.

Suggested citation:

Ralon, L. (2010). “Interview with David Cerbone,” Figure/Ground. December 5th.
<  >

Questions? Contact Laureano Ralón at