
Transformations of Transforming Mirrors: An Interview with David Rokeby


1. Introduction

David Rokeby began exploring questions of interactivity while studying at the Ontario College of Art (OCA) in 1981. His earliest interactive pieces were constructed with text or photography and specifically designed to be completed by the audience in one manner or another. There were no technological interfaces involved. At OCA, Rokeby discovered a small group of teachers and students in the school’s tiny Photo-Electric Art Department, where it was possible in the early ’80s to take courses like “Programming for Artists” and “Cybernetics for Art” with remarkable teachers like Norman White and Doug Back. Although Rokeby had had some experience programming computers in high school, he had not seriously considered using them in his art. His encounter with the Photo-Electric Art Department at OCA led him to bring together his interests in both audience involvement and computer technology.
Most of his time at OCA was occupied with the development of what was to become Very Nervous System. Advancing from interactive sound systems involving single light cells and analog electronics, this project evolved over a decade into a sensitive interactive sound installation in which everything from the audience’s small finger movements to large leaps drew out accompanying sounds that interpreted these movements in some manner.
Alongside its life as an artwork, Very Nervous System served as a practical study of intense physical human-computer interaction. As a result of observing both himself and thousands of others in this installation, Rokeby generated ideas about the characteristics of the machine-human relationship. These ideas were first expressed in his 1989 text “Transforming Mirrors: Subjectivity and Control in Interactive Media.” In producing Very Nervous System, Rokeby not only designed and built his own specialized computers, he also wrote some simple computer languages and a great deal of other code. While he did this, he watched himself program and, as a result, became interested in programming as a cultural practice, and in the role of programmers as cultural producers.
While Very Nervous System focuses largely on the relationship between human bodies and computers, his next major work, The Giver of Names, looks at the relationship between human intelligence and machine intelligence. For this project, Rokeby spent more than ten years working along the edges of artificial intelligence research, developing software that attempted to replicate human perceptual and cognitive abilities. The Giver of Names was an artificial subjective entity that considered objects presented to it and responded with spoken sentences. The aim of this pursuit was not so much to succeed at replicating human behavior as to provide an inside view of the process of trying to do this, in order to open the pursuit to some sort of questioning. The installation was presented, in part, as a sort of public research space where anyone could explore issues of (artificial) perception and intelligence in a practical and playful but non-trivial way.
Around the time he was developing The Giver of Names, Rokeby also turned his attention to surveillance systems. His surveillance installations of the late ’90s and early 2000s, such as Watch, Taken, and Sorting Daemon, brought the real-time interaction of Very Nervous System together with the more advanced perceptual and cognitive processing of The Giver of Names to examine the social implications of the proliferation of networks of sensors and attentive intelligences.
David Rokeby has received numerous awards, including the Prix Ars Electronica Golden Nica for Interactive Art (2002), Canada’s Governor General’s Award in Visual and Media Arts (2002), and the first award for Interactive Art given by the British Academy of Film and Television Arts (BAFTA) in 2000. His major exhibitions include the Venice Biennale (1986), the Venice Biennale of Architecture (2002), the National Gallery of Canada (2002), and the Whitney Museum of American Art (2007). He currently teaches at Ryerson University and is an adjunct professor at OCAD University (formerly OCA), both in Toronto.
This interview addresses the status and development of interactive media art in network societies. Of specific interest are those societies in which developments in networked or interactive arts occur in tandem with the emergence of a third wave of computing (understood as coming after the mainframe and the personal computer), a phenomenon variously referred to as ubiquitous computing, pervasive computing, ambient intelligence, and the Internet of Things. This is the third in a series of four interviews conducted in February 2014. All four interviews took place via daily e-mail exchanges over the course of several weeks, and each was followed by a set of revisions undertaken by both the interviewee and the interviewer. These interviews had their beginnings in a presentation given by Rokeby in 2011 at Invisibility and Unawareness: Ethico-Political Implications of Embeddedness and the Culture of Surveillance, a conference held in Copenhagen, Denmark, by the Nordic research network “The Culture of Ubiquitous Information” and supported by the NordForsk research organization. The three other interviews, titled “Complexity and Reduction,” “Context-Awareness and Meaning,” and “Politics and Aesthetics of Interactive Media Art Today,” constitute key parts of Rokeby’s contribution to the final publication project in this series, the forthcoming anthology Ubiquitous Computing, Complexity and Culture (Routledge).

2. Interview

Ulrik Ekman:
In network societies, which today have entered their second phase, an intensification of network logics is underway. Interactive media art finds itself in a context that includes the interactivity of the Internet, social media, and mobile media. It also includes situations and events relating to the pursuit of the goals of other technocultural developments in a so-called third wave of computing. Parts of the major initiatives towards ubiquitous computing, pervasive computing, ambient intelligence, and the Internet of Things are being rolled out, and they are said to be human-oriented.1
The multiplication of names for third wave computing makes clear both that its history is still to be written, the history of the present, and that several potential lines of development are at stake. If these names are not synonymous, they nevertheless index an effort to realize the promise of out-of-the-box computing, which involves billions of computational units. They all imply a socio-cultural and technical thrust to integrate and/or embed computing pervasively, to have information processing thoroughly integrated with or embedded into everyday objects and activities, including those pertaining to human bodies and their bodily parts.
We now find a great many projects moving towards concretization of a heuristic idea of computation qua environmentally embodied virtuality. The diagram today for this is supposed to be an intelligently context-aware and more or less “calm” computing. In an information-intensive environment this seems to map out in practical concretizations of multitudes of wired and wireless computational infrastructures with decentralized distributions of sometimes highly specialized units, many demonstrating mobility and ad hoc networking.
I know you regard this with skepticism, seeing here sources of beneficial and/or perilous complexification of human and technical context-awareness and adaptation to context, including the production and recognition of what makes sense for humans and machines. You affirm a need to safeguard humanist concerns and you insist on a certain critical distance from developments of a machinic intelligence that may well be invisible. It would be interesting to hear your position as compared with the remarks made earlier in your text on interactivity, “Transforming Mirrors.”2
Do you think that interactivity in cultures developing in company with ubiquitous computing (ubicomp) will increasingly involve technical processes that mirror human self-reference? Or is it more likely that most such ubicomp processes will parenthesize mirrorings of human self-reference in favor of other technical feedback loops and interruptions, as in invisible computational infrastructures and networks populated by autonomous intelligent agents with their own modes of operation and reference?
David Rokeby:
It is hard to predict long-term trends in this field. We are living with technologies that were not visible on the horizon thirty years ago.
But I think that the answer rests as much in the realm of shifting cultural attitudes as in that of technological breakthroughs. The shift on the iPhone from skeuomorphism, where the familiar physical world was the reference for most interface elements, to the flat design of iOS 7, which assumes a degree of interface literacy in its users, shows how the terms of engagement continue to change. It is not clear what kinds of interfacing relationships we will feel comfortable with in the future. Are piercing and body modification unconscious preparation for the embedding of sensors and actuators in the body?
It will likely end up being a question of efficiency and convenience. The two directions you pose, dialogue on one side and augmentation on the other, hinge on the degree to which the ambient algorithmic environment is felt as an attribute of self, coherent other, or environment. The membrane separating self and environment is fairly porous if the environment is ubiquitous and homogeneous, and so in most cases it is a question of whether we sense the responsive environment to be part of us or not.
When the system becomes involuntarily internalized as a part of one’s own identity, there is a question of where to turn when something goes wrong. “This pervasive anxiety that I am feeling … should I see a therapist or the system administrator?” I think that this is more serious than it might at first appear. The ubicomp component of a workplace is an extension of its corporate culture; the cost of not fitting in is tangible both socially and in terms of career advancement. The natural response is to change oneself to fit the environment better, and much of this happens involuntarily, especially if the source of the issue is hard to put your finger on (i.e., ubiquitous and transparent).
An algorithmic environment where the engagement is dialogic is easier to critique, but distracting and inefficient.
UE:
Yes, perhaps not least because this demands a more explicit recognition of another agency?
DR:
Norbert Wiener proposes that autonomy can be determined by whether the amount of information transfer within a system is greater than the information transfer across its boundaries. At some point in the discussion of sensor spaces and ubicomp, I think this measure starts to become relevant. We can think of the relative locality of parts of the system in terms of the intensity of the connecting information flow. Is a camera observing you closer (in this informational sense) to being part of you than a mouse or track pad that you are actually touching? Or is there insufficient information flow in the opposite direction to make this stick?
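Wiener’s measure can be put in toy computational terms. In the sketch below (Python, with invented bit rates; neither the numbers nor the functions come from Wiener), autonomy is a comparison of internal against boundary flow, and the camera/trackpad question becomes a question of two-way flow:

```python
# Toy rendering of Wiener's criterion and the "relative locality"
# question. All bit rates are illustrative stand-ins, not measurements.

def is_autonomous(internal_bps: float, boundary_bps: float) -> bool:
    """Wiener-style test: more information flow inside the system
    than across its boundary."""
    return internal_bps > boundary_bps

def closeness(flow_to_you_bps: float, flow_from_you_bps: float) -> float:
    """Crude two-way locality score, limited by the weaker direction."""
    return min(flow_to_you_bps, flow_from_you_bps)

# A camera takes in a dense picture of you but returns almost nothing;
# a trackpad exchanges far less, but in both directions.
print("camera  :", closeness(flow_to_you_bps=10.0, flow_from_you_bps=3e7))
print("trackpad:", closeness(flow_to_you_bps=2e3, flow_from_you_bps=2e3))
print("sensor network autonomous?",
      is_autonomous(internal_bps=1e9, boundary_bps=1e6))
# Without return flow, the camera's dense observation may not make it
# "part of you" at all; a ubiquitous sensor grid with heavy internal
# traffic reads, by this measure, as autonomous.
```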
You interact with Siri, the iPhone voice-activated assistant, in a simple dialogic manner and it is clearly experienced as “other.” In theory this service could be delivered as a technological extension of one’s own cognitive processes, and therefore experienced as part of one’s self. Intent is an important part of this. If we must consciously engage a behavior, and that behavior includes a delay, then it is experienced very differently than if it were ubiquitous, always active, and instant.
UE:
Is this set of distinctions reflected in your works?
DR:
In 1995, I created a video installation called Watch, in which an artificial perception system parses the video signal of a camera looking onto the street corner outside the gallery, separating movement from stillness, “verbs” from “nouns.” In Watch, you effectively wear the installation as a set of real-time filters on your perceptual field; you do not look “at” this work so much as look “through” it. The processes that the computer is applying to the live video feel almost internal.
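The separation Watch performs can be approximated with a standard computer-vision technique. The following sketch uses background subtraction by frame differencing, offered only as an illustration of the general method; it is not Rokeby’s actual pipeline, whose details are not given here:

```python
# Minimal sketch of separating movement ("verbs") from stillness
# ("nouns") in a video stream via background subtraction. This
# illustrates the general technique only, not Watch's actual code.
import numpy as np

def verbs_and_nouns(frames, alpha=0.05, threshold=12.0):
    """Yield (moving, still) boolean masks for grayscale float frames."""
    background = None
    for frame in frames:
        frame = frame.astype(float)
        if background is None:
            background = frame.copy()
        moving = np.abs(frame - background) > threshold  # pixels in motion
        still = ~moving                                  # the static scene
        # Slowly absorb each frame so parked objects become "nouns".
        background = (1.0 - alpha) * background + alpha * frame
        yield moving, still

# Usage with synthetic noise frames standing in for a camera feed:
rng = np.random.default_rng(0)
feed = (rng.normal(128.0, 2.0, size=(120, 160)) for _ in range(30))
for moving, still in verbs_and_nouns(feed):
    pass  # route each mask to a separate rendering layer
```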
At the same time, I was creating installations that intentionally played with different rates of processing. Very Nervous System, which responds to people’s movements with a real-time sonic accompaniment, was as real-time as possible, and Watch attempted the same thing. The Giver of Names pushed in the opposite direction. In this work, a computer looks at objects that visitors have chosen and placed on a pedestal. Through processes of visual analysis, association and grammatical construction, the computer responds to the objects, constructing sentences, which it speaks aloud. Here the processing loop was extended to create the space for mental reflection inside the feedback loop, both because it was doing more processing and because The Giver of Names was intended to be experienced as a self-contained entity. It was also a response to my observation that people interacting with Very Nervous System were often so enthralled with the intensity of the interaction that they did not think more deeply about the experience and its implications.
But immediacy and fluidity are certainly desirable in many situations. I was making art and trying to tease out difficult questions. That is not the goal of most interactive systems. For the broader practical applications of ubiquitous intelligent sensing systems, my main concern would be our mental health.
UE:
My immediate impression here is that you answer by way of reinscribing versions of the undecidable. A remarkable series of your works are not altogether foreign to this mode of operation. One might be tempted to see here something of a more general import to many of your installations. Your work bears witness to an insistence on the differential repetition of an immanent critique of human and technical decision.
Your installations return human interactants to an undecidability interior to given decisions or decisions that are taken for granted. They also include a computational “perversion,” as you call it: algorithms turning back against and into themselves to reinvoke undecidability, typically as a ghost of the classical halting problem. I am thinking of the problem of deciding, given a program and an input, whether that program will eventually halt when run with that input, or will run forever. As you know, Alan Turing proved long ago that a general algorithm to solve this does not and cannot exist.
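Turing’s diagonal argument compresses into a few lines of necessarily hypothetical code. The halts() oracle below is an assumption made only to be refuted; no such function can be written:

```python
# Sketch of Turing's diagonal argument. The oracle halts() is
# hypothetical: assuming it exists leads to contradiction.

def halts(program, argument) -> bool:
    """Pretend this decides whether program(argument) ever halts."""
    raise NotImplementedError("no general implementation can exist")

def paradox(program):
    """Do the opposite of whatever halts() predicts for us."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt at once

# paradox(paradox) halts if and only if halts() says it does not:
# whatever halts() answers is wrong, so no such oracle exists.
```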
The near obsession in your work with mirroring feedback could be approached as the more decidable side. Mirroring feedback happens when interactions take place through solvable or semi-solvable decision problems. Your repeated use of the mirror metaphor is a particularly apt gesture, since in computability theory these two types of problems correspond to a recursive set and a recursively enumerable set, respectively.
However, your “natural” drift towards undecidability as a dynamic space of potential for complexification makes this a little less apt. Perhaps this is why you use the phrase “transforming mirrors,” which points in the direction of heterogenesis rather than autogenesis.
DR:
I suspect that feedback is necessary for emergence. Can you think of an emergent phenomenon that does not require feedback?
Mirroring feedback is only interesting to me when it is faulty or incomplete: transformative, modulated, or otherwise opened out towards the world. I prefer Echo’s transformed and delayed reflections to Narcissus’s servo-mechanical relationship to his own image (McLuhan 63). Recursion is interesting when a structurally or functionally coherent meta-phenomenon emerges that is open enough to not be purely self-referential. I think it was Daisetsu Suzuki who suggested that Heidegger, of all Western philosophers, had come closest to Zen, but that he approached it backwards, through an infinite regression that never achieves its goal.3
UE:
In this adherence to the transformative potential residing with Echo, I also hear an affirmation of a growing complexity. I have to admit my admiration for your work on this. But I also find here two kinds of reduction leaving me uncertain as to the reach of your work in relation to current ubicomp cultures. I think they stem from your ethical responsibility toward the anthropos. A certain safeguarding reduces the question concerning Echo to one of human complexification. Echo is heard as another technical irritation internal to the human – and you grant priority to the human orchestration of technology. This entails a reduction of technological agency,4 or a reduction of the autonomy of a universal technical tendency.5
DR:
Yes, I privilege the human and wish to safeguard it, but not in a purely reactionary or conservative way. There are a lot of problems in the human realm. There are a lot of ways that technological developments could lead to positive changes in this situation. But I think there is a good chance that an emergent intelligence would decide that humanity is not worth preserving. If an emergent, silicon/binary/logic/network-based intelligence is incapable of appreciating some of the factors that might bear on that decision, then perhaps this would be a tragedy. It comes down to a question of values. What values should bear on decisions that might emerge in an autonomous or semi-autonomous ubicomp scenario? Or, less loaded: what parameters should such a scenario be responsive to? As with all neural-net, machine-learning, and similar scenarios, much is determined by this choice of inputs.
Secondly, we cannot discount the value of the robustness of the natural world, and of ourselves. Today’s life forms are the products of absolutely continuous, unbroken lines of aliveness leading back to the origins of life. Millions of years of continuous “beta testing” have generated an unimaginably valuable body of information, partly held genetically, partly held in evolved and integrated biological ecosystems. While the speed of twenty-first-century computers allows them to compress evolutionary processes by orders of magnitude, they still fail the robustness test if they do not take all relevant opportunities and threats into account.
So my argument for “safeguarding” is the following: We created the technology. It is our responsibility. It is a reflection of our desires and aspirations. Before deciding to allow it autonomy, we have a responsibility to put the maximum effort into the task of preparing it for autonomy. I consider my process and line of inquiry as part of this effort. As a father, I am engaged in a somewhat analogous process with my daughter. I am consciously adopting a paternalistic relationship to the technology. There is a danger that I might choose to quash “objectively” positive behaviors that I find threatening. (And this comes full circle to the question of whether humanity is “objectively” a net positive presence on the planet.)
Even if our intention is to develop technological entities or systems that will endure past our perhaps inevitable self-destruction, and we ignore human values, we still need to make sure that we are adequately preparing our creations for long-term adaptive existence, and that we are not ignoring abilities that may be crucial to long-term adaptation but are not easy to describe or program. Most of the computed environment is produced in a rush to get products to market. And most research is now channeled towards product development. Philosophical speculation is not conducive to meeting product-shipping deadlines. This is a bad way to design the future, or to set up the conditions for the future to design itself.
It comes down to this question: Do I trust emergent autonomous entities with my fate and that of my culture? Similarly, do I trust emergent social entities like government with my fate? Trust necessarily involves a leap of faith, even if that leap is supported by promising statistics. I do not trust autonomous technologies to have my best interests in “mind.” Since it is almost inevitable that such technologies will become increasingly present and common, I need to keep asking the best questions I can, to call them (or at least those developing them) to account.
UE:
One can hardly avoid sensing the call for a balanced, coevolutionary attunement here. Your deep investment in a complex and wide-ranging “harmonics of interactivity” continuously makes itself heard (Rokeby, “Harmonics”).
If this deserves the status of an ideal, maybe a utopian one, most of the time it involves individuations qua temperings. Dynamically uncertain, these individuations are disharmonic, asymmetrically inclined, always attunements to someone and something other. Perhaps this explains the important place reserved in your installations and texts for “transformative mirrors,” with emphasis on the “transformative.”
You mentioned earlier being fond of the Narcissus and Echo myth. You grant Echo the privileged position as a patron deity of interactive art. In a sense, you have always been at work on transformative mirrorings, which return to interactants the same expressing itself in delayed and displaced ways as something or someone other. I wonder how you see the asymmetries in mirrorings and assign relative weights to self-reference, other-reference, and undecidability in human and technical individuations?
DR:
My interest in mirroring must be understood in relation to my understanding of my role as an artist, expressing myself to people who are, through no fault of their own, essentially self-centered and attached to their personal world view and life experience. I follow Varela and Maturana in thinking that we do not transmit messages into the minds of others when we communicate; rather we perturb their surface and cause a rearrangement of what is already inside.6
Transformed mirrors use the intensity of other people’s self-awareness as a “carrier” for communication, much as an FM radio station’s frequency is used as a carrier that is modulated with the sonic signal. In my installations that use transforming mirroring, your image/action/sound is modulated by my system, and that distorting signal is decoded by you as a difference between your inner sense of self and this reflected self.
This is therefore not as user-centric or mirror-obsessed as it might seem. It is a strategy that seeks to sneak past our defenses against otherness.
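The radio analogy above is precise enough to compute. A minimal frequency-modulation sketch (parameter values invented for illustration) shows both halves of the figure: the message bends the carrier, and a receiver recovers the message as the deviation between the carrier it expects and the signal it hears:

```python
# Frequency modulation in miniature: the carrier stands in for the
# viewer's self-image, the message for the artist's transformation.
# All parameter values are illustrative.
import numpy as np

fs = 44_100                                  # samples per second
t = np.arange(fs) / fs                       # one second of time
carrier_hz = 1_000.0
deviation_hz = 200.0                         # modulation depth
message = np.sin(2 * np.pi * 3.0 * t)        # slow 3 Hz "signal"

# FM: the message is integrated into the carrier's phase.
phase = 2 * np.pi * (carrier_hz * t + deviation_hz * np.cumsum(message) / fs)
modulated = np.sin(phase)

# A receiver recovers the message as the instantaneous deviation from
# the plain carrier -- the difference between expected and heard.
inst_freq = np.diff(phase) * fs / (2 * np.pi)
recovered = inst_freq - carrier_hz           # proportional to the message
```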
UE:
I wonder whether your creations of interactive installations and automata and their interaction designs could be said to reserve for themselves a certain second-order status.
If both technical and human becoming through interactivity are at stake, are the artist and the automata rather to be called “transforming transforming mirrors” whose activities may be self-generative, heterogenerative, or undecidable, if not entropic?
DR:
I have often explored second-order artistic expression. In most of my work, I have de-emphasized the “surface content” of the work. All the interesting stuff I have put in these works is at least second-order. People tend to overestimate the empowerment that most interactivity provides. The artist/technologist has given the user control over surface content, but generally reserves for him- or herself the control at one level of abstraction above. I have played with this in the past – allowing, for instance, users to change the responsive behavior of the program using a simple UI, gestures, etc. (thus giving them some second-order control, and taking the third-order control for myself). How might we describe the role of a programmer who programs a learning system that itself recalibrates its responsive character to engender a certain overall system behavior?
But a real second-order transforming mirror would need two levels of selves being mirrored. The first-order users experience a transformation of themselves. Does the second-order user (the artist creating the transforming mirror) experience a self-transformation reflected back by the authoring system and what is created with it? Certainly some of the ideas I have been expressing in this conversation are plausibly the result of such second-order transforming mirroring. I have described the development of The Giver of Names explicitly as a performance piece performed for myself, in which I dress up as an artificial-intelligence researcher and feel myself affected by the process of doing the research, watching the way my decisions are guided by the task and the limits of the tools, etc.
UE:
How do the echoings of Echo become, and what is the role of the technical and human audience of interactants?
DR:
This is a good question. It was in order to open more of the second-order experience to the users that I made the feedback loop in The Giver of Names so slow. You are allowed to be a bit of a researcher yourself. That does not entirely satisfy the idea of really passing on the second-order experience, however. It is perhaps why I wrote articles like “Transforming Mirrors,” and this is perhaps even truer for “Constructing Experience,”7 which is really a kind of guidebook for people creating experiences in this second-order manner. Writing and talking about my experiences is a way to shed a little light on the second order. But that is not really the point of your question either. Can one imagine an interactive relationship where all participants are operating on all imaginable levels of responsibility – having the base-level experience, modulating it, modifying it in permanent ways, and generating mechanisms for continued automatic modification? This sounds like an interaction between two conscious entities, capable of understanding that there is always one more step up the chain of abstraction, and along the chain of recursion, and ready to act on any of these levels, to grasp the concepts of recursion and abstraction and see them shoot off into infinity, to abstract recursion and abstraction themselves. As long as we have to open each of these doors for our synthetic intelligences, we have to consider how to describe and encode each surrounding context, and we continue to have responsibility for their resulting actions. Consciousness does not substantially increase our ability to do harm. It does increase our ability to accept responsibility for the harm we might do and to work to preempt it.
As for the question of entropy, second-order (and higher) agencies need to maintain a careful balance between entropic and anti-entropic tendencies. This would require a sort of entropy governor that prevents uncontrolled growth and allows for renewal but keeps the system from dissolving. Is this a plausible minimal definition of some sort of ethics for autonomous systems, or is this perhaps built-in – in that systems with an excessive tendency towards entropy will simply dwindle away? Perhaps the most important thing is to rein in excessively anti-entropic systems because those are the ones that will persist.
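Mechanically, a governor of this kind is just a banded feedback controller. The toy sketch below invents its dynamics purely for illustration, with the asymmetry suggested above: the anti-entropic side gets the stronger corrective hand:

```python
# Toy "entropy governor": feedback that keeps a normalized measure of
# disorder inside a band -- neither crystallizing nor dissolving.
# The dynamics are invented purely for illustration.
import random

LOW, HIGH = 0.3, 0.7          # acceptable band of disorder

def govern(disorder: float, gain: float = 0.2) -> float:
    """One feedback step nudging disorder back toward the band."""
    if disorder > HIGH:
        disorder -= gain * (disorder - HIGH)       # damp runaway entropy
    elif disorder < LOW:
        # Rein in over-ordered (anti-entropic) drift harder, since
        # such systems are the ones that persist.
        disorder += 2 * gain * (LOW - disorder)
    return disorder

random.seed(1)
disorder = 0.95               # start near dissolution
for _ in range(100):
    disorder = govern(disorder) + random.uniform(-0.02, 0.02)
print(f"disorder after 100 steps: {disorder:.2f}")
```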
UE:
I was trying to keep open a question of both technical and human interactants as an audience, given your safeguarding of the human. This openness would concern all orders of abstraction and recursion, to see how far you go towards complexity or rather introduce reductions. My focus was on second-order interactivity (“transforming transforming mirrors”) and the becoming of “echoings of Echo.”
I tried to indicate major potential directions for this second-order interactivity – the self-generative, heterogenerative, undecidable, and the entropic – not only to hear you on epistemological quandaries for interactivity (circularity, infinite regress, undecidability), and on the quasi-ontological inclinations towards energetic complexification and passing away in the play of the negative.
It was also, perhaps primarily, to hear you on the tempering tilts in the practical reason embodied in your installations. Do these installations tend towards inviting the audience to engage ethical responsibility, goal-oriented political action, and a presencing of interactive potentiality, or do they tend rather towards the points and waves of energy in interactive practice that delimit live and living systems (technical, biological, and human)?
DR:
In my text “Predicting the Weather,” I end by asking: “How does one best function within a situation one cannot hope to entirely understand???” I was explicitly talking about accepting responsibility for your actions even when the results of those actions are not predictable. I was struggling to find a model of responsibility that could work in the contemporary world. In my early vision of interactive utopia, I saw interactive installations as ways of developing and practicing this kind of responsibility. The idea was to engineer an interactive space in which one could simultaneously grasp that one had agency and that one did not always see a clear causal line from one’s actions to their results. In such a space, you could come to terms with influencing without controlling, and perhaps imagine a way to live like that, perceptive and active at all times – literally responsive, and perhaps by extension, responsible.
The dark cloud that enveloped me through the ’90s was partly a result of my growing understanding that people were excited to participate, but not so interested in bearing any responsibility – that, indeed, interactive technologies were just as good at creating a fake enfranchisement, a fake empowerment, as they were at encouraging actual engagement. This is not surprising in retrospect, but it was a surprise to my younger, utopian self.
I still present my interactive installations to the public as opportunities to play directly with issues like surveillance, in the hope that I will further the dialogue, and in large part to increase interactive literacy incrementally. It was the ease with which we can be fooled, or too quickly satisfied, by interactions that pushed me out of the interactive Eden, and so this question of interactive literacy is very important to me. This is a way to develop ethical responsibility and goal-oriented political action. But I generally shy away from strong political statements in my work because I am not interested in preaching to the converted or simply polarizing debate. My role is to churn the soil so that people can be surprised by their responses to something, perhaps enough to get under their assumptions and actually change their minds. I do not really have a firm “position” on most aspects of surveillance, ubicomp, etc. But I have a feeling that we as a society are not engaging in a sufficient discussion about the future we are inventing and allowing to settle into place.
So if my installations have a mission, it is to undermine assumptions, to destabilize familiar experiences and habits of perception and mind. This is in direct response and opposition to the fact that so much is underexamined – things (technologies and ideas and attitudes) are left to cool and crystallize too quickly, becoming hard yet brittle.
As for points and waves of energy in interaction: for me these are no less politically charged. Autonomy, feedback, permeability, transparency, the internalization of technology, and the externalization of the self are all things we need to become literate in if we are to make good decisions. This is particularly problematic given the momentous shift in the locus of policy-making (especially in the USA), where policy is now largely made through consumer choice and corporate lobbying. The deep suspicion of intellectuals in the United States means that informed top-down decision-making is regarded as elitist. So how do we come to make smart decisions about the future?
UE:
You trust in the development of a smarter interactive literacy via invitations to the audience to encounter interactivity of another order; hence a certain trust of yours in the responsiveness and responsibility of the audience. But you also trust that the audience will just participate (staying on a first-order plane, presumably for and with themselves), rather than being or becoming responsible unto the other on another plane?
DR:
It is important to distinguish between what I hoped for in the early years, and what I learned to fear later on. I had to revise my position of naive trust to one that is a bit more tempered. I learned that I had to be a better and more thoughtful artist in order to nurture a smarter interactive literacy.
One of my biggest concerns has been that people interacting with my installations often have not grasped that their interaction was clearly limited to the first order. I did make some experiments allowing interactants to change the behavior of the piece through a UI and some mouse gestures. This was interesting, but I did not continue these initial explorations of second-order interactivity with the audience because I was more interested in understanding better why people were satisfied with the first order. I think our human grasp of interactive relationships is often pretty limited. I think that we like the fact that we have some responsibility but that it is clearly circumscribed. We do not want to take it all on. This leaves the playing field pretty open for unguided emergence of entities and “evil geniuses.”
UE:
My question concerning the audience was meant as a sincere and respectful bow in the direction of these technical and human others. I am struck by a tension in your remarks. You acknowledged early that the audience is the primary medium for an interactive artist. One could see an affirmation here that your privileging of mirroring, feedback loops, recursion, and responsibility implies a rather humble recognition of the audience as the very condition of existence of your installations. This recognition seems to coexist with, but also to be less privileged than, the creative act of the artist of interactive media art installations. This remains a decisive act that opens the stage for interactive exploration of structures of possibility of a certain complexity but also always already delimits these in a reduction of complexity. This asymmetry is clearly tempered by the inclusion in your installations of experiments with co-responsible “audiences” (or “co-creators”). However, you seem to parenthesize this, granting primacy to human reductions of the complexity of interactivity to a first-order plane. This seems out of tune with your harmonics of interactivity. Here I am left wondering whether one should hear a kind of disenchantment?
DR:
Certainly there is disenchantment. It was inevitable, considering the utopian place I started from. On the other hand, if we take an enlarged view of what counts as interactive work, I have also spent years developing an interactive toolset robust enough for others to use. Sharing these tools enables a level of sharing and co-creation beyond what is normally possible in any kind of installation. The reason is “time.” Substantive meta-creation takes time. Simply creating the possibility of higher-order engagement with the audience in the code does not add up to much unless you can provide the proper conditions for its use.
Artists who have used my tools often comment that they appreciate the “character” of my tools. They feel my thinking in them. I am excited to share the potentials that these tools opened up for exploring interactivity.
As for creating interactive systems that emerge or evolve, let me note that this is easier said than done in a practical and satisfying way. I am an artist and so my motivations, while sometimes parallel to those of a researcher or academic, are also often quite different. I am not sure how to characterize the differences, but I could say that my audience is much broader. For my own pleasure I may play with algorithms in my studio that will never be of interest to my audience, but my aim is usually to find ways to share aspects of my “research” with everyone (and those “everyones” are all human). This is one great promise of interactivity, occasionally fulfilled: it allows one to make accessible things that are normally hidden behind firewalls of ultra-specialized language and slowly evolved, deeply invested mental constructs.
In “Transforming Mirrors” I talk about the experience I had in 1984 with the earliest versions of Body Language, in which I handed as much control as I could to the participant.9 I found that the experience had too many dimensions of interaction, and so the sense of interactivity was, for many users, completely lost. Reducing the dimensions of interactivity produced a greater sense of interaction. This was another facet of my loss of innocence – I was very disappointed by this apparent paradox. Of course, time is one of the important issues here. With enough exposure, perhaps people would come to have a more satisfying experience with the more complex interaction. But I need to acknowledge the duration of interaction that I can expect with my audience and work within its frame.
These may seem like bizarre limitations from a pure research perspective, but I am not a pure researcher.
Ulrik Ekman
ekman@hum.ku.dk
University of Copenhagen
Ulrik Ekman is Associate Professor at the Department of Arts and Cultural Studies, University of Copenhagen. Ekman’s main research interests are in cybernetics and ICT, the network society, new media art, critical design and aesthetics, as well as recent cultural theory. He is the head of the Nordic research network “The Culture of Ubiquitous Information,” with more than 150 participating researchers. Ekman is currently involved in the publication of Ubiquitous Computing, Complexity and Culture (Routledge, forthcoming 2015), a comprehensive anthology treating the question of whether and how the development of network societies with a third wave of computing may have brought about the emergence of a new kind of technocultural complexity. Ekman’s publications include “Of the Untouchability of Embodiment I: Rafael Lozano-Hemmer’s Relational Architectures,” in C-Theory (2012); “Irreducible Vagueness: Augmented Worldmaking in Diller & Scofidio’s Blur Building,” in Postmodern Culture 19.2; and “Of Transductive Speed – Stiegler,” in Parallax 13.4. He is also the editor of Throughout: Art and Culture Emerging with Ubiquitous Computing (MIT Press, 2013).

Footnotes

1. For the first book-length engagement with the sociocultural, aesthetic, and artistic implications of these developments, see Ekman, Throughout. At least three earlier monographs have contributed to an understanding of these developments in the contexts of interaction design, architecture, and the cultural ethics of ubiquitous computing: Dourish; McCullough; Greenfield. Interesting and technically well-informed introductions presented from the perspectives of different disciplines can be found in: Abowd and Mynatt; Beigl; Bell and Dourish; Galloway; Rogers; Symonds.
2. See Rokeby, “Transforming Mirrors.”
3. Upon reading a book by D.T. Suzuki, Heidegger is reported to have said, “If I understand this man correctly, this is what I have been trying to say in all my writings” (Barrett xi).
4. See Latour.
5. See Leroi-Gourhan.
6. See Maturana and Varela.
7. See Rokeby, “The Construction of Experience.”
8. In n-cha(n)t, seven computers form a small community of entities with significant cognitive and linguistic skills. Left alone, they share ideas amongst themselves until a consensus is reached and slowly fall into unison chanting. When gallery visitors speak words to them, the chant scatters into a jumble of independent voices, and the coherence of the group is disrupted.
9. See Rokeby, Body Language.

Works Cited

Abowd, Gregory D., and Elizabeth D. Mynatt. “Charting Past, Present, and Future Research in Ubiquitous Computing.” ACM Trans. Comput.-Hum. Interact. 7.1 (2000): 29-58. Web.
Barrett, William. “Zen for the West.” Introduction. Zen Buddhism. By D.T. Suzuki. Garden City: Doubleday, 1956. Print.
Beigl, Michael. "Ubiquitous Computing - Computation Embedded in the World." Disappearing Architecture: From Real to Virtual to Quantum. Eds. Michael Beigl and Peter Weibel. Berlin: Birkhäuser, 2005. 52-61. Print.
Bell, Genevieve, and Paul Dourish. “Yesterday's Tomorrows: Notes on Ubiquitous Computing's Dominant Vision.” Personal and Ubiquitous Computing 11.2 (2007): 133-43. Web.
Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction. Cambridge: MIT P, 2001. Print.
Ekman, Ulrik, ed. Throughout: Art and Culture Emerging with Ubiquitous Computing. Cambridge: MIT P, 2013. Print.
Galloway, Anne. “Intimations of Everyday Life: Ubiquitous Computing and the City.” Cultural Studies 18.2-3 (2004): 384-408. Web.
Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing. Berkeley: New Riders, 2006. Print.
Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” Shaping Technology/Building Society. Ed. Wiebe E. Bijker and John Law. Cambridge: MIT P, 1992. 225-58. Print.
Leroi-Gourhan, André. Évolution et techniques. Paris: Albin Michel, 1943. Print.
Maturana, Humberto R., and Francisco J. Varela. Autopoiesis and Cognition: The Realization of the Living. Boston: D. Reidel, 1980. Print.
McCullough, Malcolm. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing. Cambridge: MIT P, 2004. Print.
McLuhan, Marshall. Understanding Media: The Extensions of Man: Critical Edition. Ed. W. Terrence Gordon. Berkeley: Ginko Press, 2003. Print.
Rogers, Yvonne. “The Changing Face of Human-Computer Interaction in the Age of Ubiquitous Computing.” HCI and Usability for e-Inclusion. Eds. Andreas Holzinger and Klaus Miesenberger. Berlin: Springer, 2009. 1-19. Print.
Rokeby, David. Body Language. 1984. Sound Installation. Justina M. Barnicke Gallery, Toronto. DavidRokeby.com. Web. 28 Feb. 2015.
---. “The Construction of Experience: Interface as Content.” Digital Illusion: Entertaining the Future with High Technology. Ed. Clark Dodsworth, Jr. New York: ACM Press, 1998. Print.
---. The Giver of Names. 1990. Multimedia Installation. Inter/Access, Toronto. DavidRokeby.com. Web. 28 Feb. 2015.
---. “Predicting the Weather.” Musicworks: Starting All Observations from Scratch 33 (1985). DavidRokeby.com. Web. 28 Feb. 2015.
---. “The Harmonics of Interaction.” Musicworks: Sound and Movement 46 (1990). DavidRokeby.com. Web. 28 Feb. 2015.
---. “Transforming Mirrors: Subjectivity and Control in Interactive Media.” Critical Issues in Electronic Media. Ed. Simon Penny. Albany: State U of NY P, 1995. 133-58. DavidRokeby.com. Web. 28 Feb. 2015.
---. Very Nervous System. 1986. Multimedia Installation. Venice Biennale, Venice. DavidRokeby.com. Web. 28 Feb. 2015.
---. Watch. 1995. Video Installation. Gwangju Biennale, Gwangju. DavidRokeby.com. Web. 28 Feb. 2015.
Symonds, Judith. Ubiquitous and Pervasive Computing: Concepts, Methodologies, Tools, and Applications. 3 vols. Hershey: Information Science Reference, 2010. Print.
Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge: MIT P, 1961. Print.