Tuesday, June 8, 2010

Image Future, Lev Manovich




Uneven Development

What kinds of images will dominate visual culture a number of decades from now? Will they still be similar to the typical image that surrounds us today—photographs that are digitally manipulated and often combined with various graphical elements and type? Or will future images be completely different? Will photographic code fade away in favor of something else?

There are good reasons to assume that the future images would be photograph-like. Like a virus, a photograph turned out to be an incredibly resilient representational code: it survived waves of technological change, including computerization of all stages of cultural production and distribution. The reason for this persistence of photographic code lies in its flexibility: photographs can be easily mixed with all other visual forms—drawings, 2D and 3D designs, line diagrams, and type. As a result, while photographs truly dominate contemporary visual culture, most of them are not pure photographs but various mutations and hybrids: photographs which went through various filters and manual adjustments to achieve a more stylized look, a more flat graphic look, more saturated color, etc.; photographs mixed with design and type elements; photographs which are not limited to the part of the spectrum visible to a human eye (night vision, X-ray); simulated photographs done with 3D computer graphics; and so on. Therefore, while we can say that today we live in a "photographic culture," we also need to start reading the word "photographic" in a new way. "Photographic" today is really photo-GRAPHIC, the photo providing only an initial layer for the overall graphical mix.

One way in which change happens in nature, society, and culture is inside-out. The internal structure changes first, and this change affects the visible skin only later. For instance, according to the Marxist theory of historical development, the infrastructure (i.e., the mode of production in a given society, also called the "base") changes well before the superstructure (the ideology and culture of that society). In a different example, think of technology design in the twentieth century: typically, a new type of machine was at first fitted within an old, familiar skin (for instance, early twentieth-century cars emulated the form of horse carriages). The familiar McLuhan idea that new media first emulate old media is another example of this type of change. In this case, a new mode of media production, so to speak, is first used to support old structures of media organization, before a new structure emerges. For instance, the first typeset books were designed to emulate hand-written books; cinema first emulated theater; and so on.

This concept of uneven development can be useful in thinking about the changes in contemporary visual culture. Since its beginnings fifty years ago, computerization of photography (and cinematography) has by now completely changed the internal structure of a photographic image; yet its "skin," i.e. the way the image looks, still largely remains the same. It is therefore possible that at some point in the future the "skin" of an image will also become completely different, but this has not happened yet. So we can say that at present our visual culture is characterized by a new computer "base" and an old photographic "superstructure."

The Matrix trilogy of films provides us with a very rich set of examples perfect for thinking further about these issues. The trilogy is an allegory about how its own visual universe is constructed. That is, the films tell us about the Matrix, the virtual universe which is maintained by computers—and of course, visually the images of the Matrix which we the viewers see in the films were all indeed assembled with the help of software (the animators sometimes used Maya but mostly relied on custom-written programs). So there is a perfect symmetry between us, the viewers of a film, and the people who live inside the Matrix—except that while the computers running the Matrix are capable of doing it in real time, most scenes in each of The Matrix films took months and even years to put together. (So The Matrix can also be interpreted as a futuristic vision of computer games—of a point in the future when it becomes possible to render Matrix-style visual effects in real time.)

The key to the visual universe of The Matrix trilogy is the new set of computer graphic processes that over the years were developed by John Gaeta and his colleagues at ESC Entertainment. Gaeta coined names for these processes: "virtual cinema," "virtual human," "universal capture," "image-based rendering," and others. Together, these processes represent a true milestone in the history of computer-driven special effects. They take to their logical conclusion the developments of the 1990s, such as motion capture, and simultaneously open a new stage. [1] We can say that with The Matrix, the old "base" of photography has finally been completely replaced by a new computer-driven one. What remains to be seen is how the "superstructure" of a photographic image—what it represents and how—will change to accommodate this "base."




Reality Simulation Versus Reality Sampling

In order to better understand the significance of Gaeta's method, let us briefly run through the history of 3D photo-realistic image synthesis and its use in the film industry. In 1963, Lawrence G. Roberts (who later in the 1960s became one of the key people behind the development of ARPANET but at that time was a graduate student at MIT) published a description of a computer algorithm to construct images in linear perspective. These images represented objects through lines; in the contemporary language of computer graphics they can be called "wire frames." Approximately ten years later, computer scientists designed algorithms that allowed for the creation of shaded images (so-called Gouraud shading and Phong shading, named after the computer scientists who created the corresponding algorithms). From the middle of the 1970s to the end of the 1980s, the field of 3D computer graphics went through rapid development. Every year new fundamental techniques were invented: transparency, shadows, image mapping, bump texturing, particle systems, compositing, ray tracing, radiosity, and so on. [2] By the end of this creative and fruitful period in the history of the field, it was possible to use a combination of these techniques to synthesize images of almost any subject, images that often were not easily distinguishable from traditional cinematography.

All this research was based on one fundamental assumption: in order to re-create an image of reality identical to the one captured by a film camera, we need to systematically simulate the actual physics involved in the construction of this image. This means simulating the complex interactions between light sources, the properties of different materials (cloth, metal, glass, etc.), and the properties of physical cameras, including all their limitations such as depth of field and motion blur. Since it was obvious to computer scientists that if they exactly simulated all this physics a computer would take forever to calculate even a single image, they put their energy into inventing various shortcuts which would create sufficiently realistic images while involving fewer calculation steps. So in fact each of the techniques for image synthesis mentioned in the paragraph above is one such "hack"—a particular approximation of a particular subset of all possible interactions between light sources, materials, and cameras.
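To make the "hack" idea concrete, here is a minimal sketch of my own (not drawn from the article or from any particular renderer) of one of the simplest such approximations: Lambertian diffuse shading, which collapses the full physics of light scattering off a surface into a single dot product between the surface normal and the light direction.

```python
# A minimal sketch of one classic rendering "hack": Lambertian diffuse shading.
# All names here are illustrative; this is not code from any production renderer.
import numpy as np

def lambert_shade(normal, light_dir, surface_color, light_color):
    """Approximate the brightness of a surface point lit by a single light.

    Instead of simulating every photon, the model reduces the light/material
    interaction to one dot product between the surface normal and light direction.
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    intensity = max(float(np.dot(n, l)), 0.0)   # surfaces facing away receive no light
    return np.asarray(surface_color) * np.asarray(light_color) * intensity

# Example: a reddish surface tilted 45 degrees toward a white light overhead.
print(lambert_shade([0, 1, 1], [0, 0, 1], [0.8, 0.2, 0.2], [1.0, 1.0, 1.0]))
```

Phong shading, shadows, ray tracing, and radiosity can all be read the same way: each trades physical completeness for a calculation that finishes in reasonable time.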

This assumption also means that you are re-creating reality step-by-step, from scratch. Every time you want to make a still image or an animation of some object or scene, the story of creation from the Bible is being replayed.

(I imagine God creating the Universe by going through the numerous menus of a professional 3D modeling, animation, and rendering program such as Maya. First he has to make all the geometry: manipulating splines, extruding contours, adding bevels... Next, for every object and creature he has to choose the material properties: specular color, transparency level, image, bump, and reflection maps, and so on. He finishes one page of menus, wipes his forehead, and starts working on the next menu page. Now on to defining the lights: again, dozens of menu options need to be selected. He renders the scene, looks at the result, and admires his creation. But he is far from being done: the universe he has in mind is not a still image but an animation, which means that the water has to flow, the grass and leaves have to move in the wind, and all the creatures also have to move. He sighs and opens another set of menus where he has to define the parameters of the algorithms that simulate the physics of motion. And on, and on, and on. Finally the world itself is finished and it looks good; but now God wants to create Man so he can admire his creation. God sighs again, and takes from the shelf a set of Maya manuals...)

Of course we are in a somewhat better position than God was. He was creating everything for the first time, so he could not borrow things from anywhere; everything had to be built and defined from scratch. We, however, are not creating a new universe but instead visually simulating a universe that already exists, i.e. physical reality. Therefore computer scientists working on 3D computer graphics techniques realized early on that in addition to approximating the physics involved they could sometimes take another shortcut. Instead of defining something from scratch through algorithms, they could simply sample it from existing reality and incorporate these samples into the construction process.

Examples of the application of this idea are the techniques of texture mapping and bump mapping, which were introduced as early as the second half of the 1970s. With texture mapping, any 2D digital image—which can be a close-up of some texture such as wood grain or bricks, but which can also be anything else, for instance a logo, a photograph of a face, or clouds—is mathematically wrapped around virtual geometry. This is a very effective way to add the visual richness of the real world to a virtual scene. Bump mapping works similarly, but in this case the 2D image is used as a way to quickly add complexity to the geometry itself. For instance, instead of having to manually model all the little cracks and indentations which make up the 3D texture of a concrete wall, an artist can simply take a photograph of an existing wall, convert it into a grayscale image, and then feed this image to the rendering algorithm. The algorithm treats the grayscale image as a depth map, i.e. the value of every pixel is interpreted as the relative height of the surface. So in this example, light pixels become points on the wall that sit a little in front while dark pixels become points that sit a little behind. The result is an enormous saving in the amount of time necessary to recreate a particular but very important aspect of our physical reality: the slight and usually regular 3D texture found in most natural and many human-made surfaces, from the bark of a tree to woven cloth.
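As an illustration of the depth-map idea, the following sketch (my own hypothetical code, not any production renderer) turns a grayscale "wall" image into perturbed surface normals, which is the essence of how bump mapping fakes fine 3D texture at shading time without adding geometry.

```python
# A minimal sketch of bump mapping: pixel values act as relative heights, and
# their gradients tilt the surface normals used for shading. Illustrative only.
import numpy as np

def bump_normals(height_map, strength=1.0):
    """Convert a 2D array of gray values (0..1) into per-pixel surface normals."""
    # Finite differences approximate the slope of the surface in y and x.
    dy, dx = np.gradient(height_map.astype(float))
    normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height_map, dtype=float)))
    # Normalize so every normal has unit length.
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

# Example: a tiny 4x4 "concrete wall" with a single raised (light) pixel.
wall = np.zeros((4, 4))
wall[1, 2] = 1.0
print(bump_normals(wall)[1, 2])   # the normal near the raised point tilts away from it
```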

Other 3D computer graphics techniques based on the idea of sampling existing reality include reflection mapping and 3D digitizing. Although all these techniques have been widely used ever since they were invented, many people in the field (as far as I can see) always felt that they were cheating. Why? I think this feeling was there because the overall conceptual paradigm for creating photorealistic computer graphics was to simulate everything from scratch through algorithms. So if you had to use techniques based on directly sampling reality, you somehow felt that this was only temporary—because the appropriate algorithms were not yet developed or because the machines were too slow. You also had this feeling because once you started to manually sample reality and then tried to include these samples in your perfect algorithmically defined image, things rarely fit exactly right, and painstaking manual adjustments were required. For instance, texture mapping would work perfectly if applied to a flat surface, but if the surface was curved, inevitable distortions would occur.

(I am using "we" here and in other places in this text because I spent approximately seven years working professionally in the field of 3D computer animation between 1984 and 1992, so I still feel a certain identification with this field. At IMAGINA 2003 festival in Barcelona I met John Gaeta and Greg Juby from ESC, who were there to lecture on the making of The Matrix. Slowly it became clear that the three of use were connected by multiple threads. In 1984 I went to work for a company in New York called Digital Effects that at the time was one among seven companies in the world focused on 3D computer animation for television and film. Company president Jeff Kleiser later founded another company—Kleiser- Walczak—where Greg Juby worked for a few years in the 1990s. Juby graduated from Syracuse University where—as we discovered over dinner—he was my student in the very first university class in digital arts I ever taught [1992]. While working at Kleiser's company, Juby met John Gaeta and eventually went to work for him at ESC. Finally, it also turned out that before we turned to computer graphics both Gaeta and I were students at New York University film school.)

Throughout the 1970s and 1980s the "reality simulation" paradigm and the "reality sampling" paradigm co-existed side by side. More precisely, as I suggested above, the sampling paradigm was "embedded" within the reality simulation paradigm. It was common sense that the way to create photorealistic images of reality was to simulate its physics as precisely as one could. Sampling existing reality and then adding these samples to a virtual scene was a trick, a shortcut within the otherwise honest game of simulation.




"Total Capture": Building The Matrix

So far we have looked at the paradigms of the 3D computer graphics field without considering the uses of the simulated images. So what happens if you want to incorporate photorealistic images into a film? This introduces a new constraint. Not only does every simulated image have to be consistent internally, with cast shadows corresponding to the light sources, and so on, but it now also has to be consistent with the cinematography of the film. The simulated universe and the live-action universe have to match perfectly. (I am talking here about the "normal" use of computer graphics in narrative films and not the more graphical aesthetics of TV graphics and music videos, which often deliberately juxtapose different visual codes.) As can be seen in retrospect, this new constraint eventually changed the relationship between the two paradigms in favor of the sampling paradigm. But this is only visible now, after The Matrix films made the sampling paradigm the cornerstone of their visual universe. [3]

At first, when filmmakers started to incorporate synthetic 3D images into films, this had no effect on how people thought about 3D image synthesis. The first feature film to include 3D computer images was Looker (1981). Throughout the 1980s, a number of films were made that used computer images, but always only as a very small element within the overall film narrative. (Tron, released in 1982, was an exception: it can be compared to The Matrix, since its universe is situated inside a computer and created through computer graphics.) For instance, one of the Star Trek films contained a scene of a planet coming to life; it was created using the very first particle system. But this was a single scene, and it had no interaction with any other scenes in the film.

In the early 1990s, the situation started to change. With pioneering films such as The Abyss (James Cameron, 1989), Terminator 2: Judgment Day (James Cameron, 1991), and Jurassic Park (Steven Spielberg, 1993), computer-generated characters became key protagonists of film narratives. This meant that they would appear in dozens or even hundreds of shots throughout a film, and that in most of these shots the computer characters would have to be integrated with real environments and human actors captured via live-action photography (or what in the business is called a "live plate"). Examples are the T-1000 cyborg character in Terminator 2 and the dinosaurs in Jurassic Park. These computer-generated characters are situated inside the live-action universe (obtained by sampling physical reality via a 35mm film camera). The simulated world is located inside the captured world, and the two have to match perfectly.

As I pointed out in The Language of New Media in the discussion of compositing, perfectly aligning elements that come from different sources is one of the fundamental challenges of computer-based realism. Throughout the 1990s, filmmakers and special effects artists dealt with this challenge using a variety of techniques and methods. What Gaeta realized earlier than others is that the best way to align the two universes of live action and 3D computer graphics was to build a single new universe. [4]

Rather than treating reality sampling as just one technique to be used along with many other "proper" algorithmic techniques of image synthesis, Gaeta turned it into the key foundation of his process. The process systematically takes physical reality apart and then systematically reassembles the elements into a virtual computer-based representation. The result is a new kind of image that has photographic/cinematographic appearance and detail yet internally is structured in a completely different way.

How does the process work? The geometry of an actor's head is captured with the help of a 3D scanner. Next, the actor's performance is recorded using three high-resolution cameras. This includes everything the actor will say in the film and all possible facial expressions. [5] (During production the studio was capturing over five terabytes of data each day.) Next, special algorithms align the three images by tracking a number of points on the face in order to stitch the three images into one. This new image is then mapped onto the geometry model. The information in the image is used not only as a texture map but also as a kind of bump map that locally transforms the geometry of the model in correspondence with facial movements. The end result is a perfect reconstruction of the captured performance, now available as 3D computer graphics data—with all the advantages that come from having such a representation.
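To make the double role of the captured image concrete, here is a purely conceptual sketch; it is in no way ESC's actual pipeline, and every function and parameter name is my own invention. One frame of captured data is used both as a texture (color) and as a displacement that deforms the scanned head geometry along its normals.

```python
# Conceptual sketch only: one captured frame drives both the coloring and the
# deformation of a scanned head mesh. Not ESC's code; all names are hypothetical.
import numpy as np

def apply_capture_frame(vertices, normals, uvs, color_frame, delta_frame, scale=0.01):
    """Deform and color a scanned mesh using one frame of captured data.

    vertices, normals : (N, 3) arrays from the 3D scan
    uvs               : (N, 2) texture coordinates in [0, 1]
    color_frame       : (H, W, 3) stitched image from the capture cameras
    delta_frame       : (H, W) per-pixel displacement derived from tracked points
    """
    h, w = delta_frame.shape
    # Sample both maps at each vertex's texture coordinate.
    px = (uvs[:, 0] * (w - 1)).astype(int)
    py = (uvs[:, 1] * (h - 1)).astype(int)
    offsets = delta_frame[py, px][:, None] * normals * scale   # image acts as "bump" data
    vertex_colors = color_frame[py, px] / 255.0                # image acts as texture
    return vertices + offsets, vertex_colors

# Tiny fake inputs, just to show the call shape.
verts = np.zeros((3, 3))
norms = np.tile([0.0, 0.0, 1.0], (3, 1))
uvs = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
color = np.full((4, 4, 3), 128.0)
delta = np.random.rand(4, 4)
print(apply_capture_frame(verts, norms, uvs, color, delta)[0].shape)   # -> (3, 3)
```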

This process is significantly different from the commonly accepted methods used to create computer-based special effects, namely "keyframing" and physically-based modeling. With the first method, an animator specifies the key positions of a 3D model, and the computer calculates in-between frames. With the second method, all the animation is automatically created by software that simulates the physics underlying the movement. (This method thus represents a particular instance of the "reality simulation" paradigm I already discussed.) For instance, to create a realistic animation of a moving creature, the programmers model its skeleton, muscles, and skin, and specify the algorithms that simulate the actual physics involved. Often the two methods are combined: for instance, physically based modeling can be used to animate a running dinosaur while manual animation can be used for shots where the dinosaur interacts with human characters.
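A minimal sketch of the keyframing idea, assuming simple linear in-betweening (production systems use splines and easing curves, and the names below are my own):

```python
# Keyframing in miniature: the animator supplies a few key positions and the
# computer interpolates every frame in between. Linear interpolation only.
import numpy as np

def interpolate_keyframes(keyframes, frame):
    """keyframes: list of (frame_number, position) pairs, sorted by frame number."""
    times = np.array([t for t, _ in keyframes], dtype=float)
    values = np.array([p for _, p in keyframes], dtype=float)
    # Interpolate each coordinate independently between the surrounding keys.
    return np.array([np.interp(frame, times, values[:, i]) for i in range(values.shape[1])])

# Example: a 3D position keyed at frames 0 and 24; frame 12 lands halfway between them.
keys = [(0, (0.0, 0.0, 0.0)), (24, (10.0, 5.0, 0.0))]
print(interpolate_keyframes(keys, 12))   # -> [5.  2.5 0. ]
```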

At the time of this writing, the most impressive achievement in physically-based modeling was the battle in The Lord of the Rings: The Return of the King (Peter Jackson, 2003), which involved tens of thousands of virtual soldiers all driven by Massive software. [6] Similar to the non-human players (or bots) in computer games, each virtual soldier was given the ability to "see" the terrain and other soldiers, a set of priorities, and an independent "brain," i.e. an A.I. program which directs the character's actions based on its perceptual inputs and priorities. But because Massive software, in contrast to the A.I. of games, does not have to run in real time, it can create scenes with hundreds of thousands of realistically behaving agents (one commercial created with the help of Massive software featured 146,000 virtual characters).
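The following toy sketch bears no relation to Massive's actual engine; it only illustrates the general idea named above: agents that perceive their neighbors and act on a short priority list, so that large-scale crowd behavior emerges without any keyframing.

```python
# A toy "crowd agent" sketch (illustrative only, not Massive's software): each
# soldier sees nearby enemies and chooses an action from a fixed priority list.
import math
import random

class Soldier:
    def __init__(self, x, y, side):
        self.x, self.y, self.side = x, y, side

    def perceive(self, others, radius=5.0):
        """Return enemies within this agent's 'vision' radius."""
        return [o for o in others
                if o.side != self.side
                and math.hypot(o.x - self.x, o.y - self.y) < radius]

    def act(self, others):
        """Priorities: fight an adjacent enemy, else advance toward one, else wander."""
        enemies = self.perceive(others)
        if enemies:
            nearest = min(enemies, key=lambda o: math.hypot(o.x - self.x, o.y - self.y))
            if math.hypot(nearest.x - self.x, nearest.y - self.y) < 1.0:
                return "fight"
            self.x += 0.1 * (nearest.x - self.x)   # close the distance to the nearest enemy
            self.y += 0.1 * (nearest.y - self.y)
            return "advance"
        self.x += random.uniform(-0.1, 0.1)        # nothing in sight: wander
        return "wander"

# Two tiny opposing "armies" taking one simulation step.
army = [Soldier(random.uniform(0, 10), random.uniform(0, 10), i % 2) for i in range(20)]
print([s.act(army) for s in army[:5]])
```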

Gaeta's method uses neither manual animation nor simulation of the underlying physics. Instead, it directly captures reality, including color, texture, and movement. Short sequences of the actors' performances are encoded as 3D computer animations; these animations form a library from which the filmmakers can then draw as they compose a scene. The analogy with musical sampling is obvious here. As Gaeta pointed out, his team never used manual animation to try to tweak the motion of a character's face; however, just as a musician might, they would often "hold" a particular expression before going on to the next one. [7] This suggests another analogy—editing videotape. But this is second-degree editing, so to speak: instead of simply capturing segments of reality on video and then joining them together, Gaeta's method produces complete virtual recreations of particular phenomena—self-contained micro-worlds—which can then be further edited and embedded within a larger 3D simulated space.




"Image Rendering": Reality Re-assembled

Such a method combines the best of both worlds: physical reality as captured by lens-based cameras, and synthetic 3D computer graphics. While it is possible to recreate the richness of the visible world through manual painting and animation, as well as through various computer graphics techniques (texture mapping, bump mapping, physical modeling, etc.), this is expensive in terms of the labor involved. Even with physically based modeling techniques, countless parameters have to be tweaked before the animation looks right. In contrast, capturing visible reality through a lens onto film, tape, DVD-R, a computer hard drive, or other media is cheap: just point the camera and press the "record" button.

The disadvantage of such recordings is that they lack the flexibility demanded by contemporary remix culture. This culture demands not self-contained aesthetic objects or self-contained records of reality but smaller units—parts that can be easily changed and combined with other parts in endless combinations. However, because the lens-based recording process flattens the 3D semantic structure of reality, converting a space filled with discrete objects into a flat field of pixels, any kind of editing operation—deleting objects, adding new ones, compositing, and so on—becomes quite difficult.

In contrast, 3D computer-generated worlds have the exact flexibility one would expect from media in the Information Age. (It is therefore not accidental that 3D computer representation—along with hypertext and other new computer-based methods of data representation—was conceptualized in the same decade in which the transformation of advanced industrialized societies into information societies became visible.) In a 3D computer-generated world everything is discrete: objects are defined by points described in terms of their XYZ coordinates; other properties of objects such as color, transparency, and reflectivity are similarly described in terms of discrete numbers. To duplicate an object a hundred times requires only a few mouse clicks or typing a short command on a command line; similarly, all other properties of the world can always be easily changed. Just as a sequence of genes contains the code that is expanded into a complex organism, a compact description of a 3D world can be quickly transmitted through the network, with the client computer reconstructing the full world (this is how online multi-player computer games and simulators work).
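As an illustration of this "everything is discrete" point, here is a minimal sketch using hypothetical data structures of my own (not any real engine's scene format): the whole world is just numbers, so duplicating an object a hundred times or shipping the scene across a network is trivial.

```python
# A scene as pure numbers: geometry, material parameters, and positions.
# Illustrative data structures only, not any real engine's format.
import json
from dataclasses import dataclass, asdict, replace

@dataclass
class Object3D:
    vertices: list                       # XYZ coordinates of each point
    color: tuple = (1.0, 1.0, 1.0)       # material properties are just numbers too
    transparency: float = 0.0
    reflectivity: float = 0.0
    position: tuple = (0.0, 0.0, 0.0)

cube = Object3D(vertices=[[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])

# Duplicating the object a hundred times is a one-liner...
scene = [replace(cube, position=(i * 2.0, 0.0, 0.0)) for i in range(100)]

# ...and the compact description of the whole world can be sent over a network
# for the client computer to reconstruct, as online games and simulators do.
payload = json.dumps([asdict(obj) for obj in scene])
print(len(scene), "objects,", len(payload), "bytes")
```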

Beginning in the late 1970s, when James Blinn introduced texture mapping [8], computer scientists, designers, and animators gradually expanded the range of information that could be recorded from the real world and then incorporated into a computer model. Until the early 1990s this information mostly concerned the appearance of objects: color, texture, light effects. The next significant step was probably the development of motion capture, which during the first half of the 1990s was quickly adopted by the movie and game industries. Now computer-synthesized worlds relied not only on sampling the appearance of the real world but also on recordings of the movements of animals and humans. Building on all these techniques, Gaeta's method takes them to a new stage: capturing just about everything that at present can be captured and then reassembling the samples to create a digital (and thus completely malleable) recreation. Put in a larger context, the resulting 2D/3D hybrid representation fits perfectly with the most progressive trends in contemporary culture, which are all based on the idea of the hybrid.




The New Hybrid

It is my strong feeling that the emerging "information aesthetics" (i.e., the new cultural features specific to information society) has, or will have, a very different logic from Modernism. The latter was driven by a strong desire to erase the old—visible as much in the avant-garde artists' (particularly the Futurists') statements that museums should be burned as in the dramatic destruction of the social and spiritual realities of many people in Russia after the 1917 revolution, and in other countries after they became Soviet satellites after 1945. Culturally and ideologically, Modernists wanted to start with a tabula rasa, radically distancing themselves from the past. It was only in the 1960s that this move started to feel inappropriate, as manifested both in the loosening of ideology in Communist countries and in the beginnings of a new Postmodern sensibility in the West. To quote the title of the famous book by Robert Venturi et al. (published in 1972, it was the first systematic manifestation of the new sensibility), Learning from Las Vegas involved admitting that organically developing vernacular cultures involve bricolage and hybridity, rather than the purity seen, for instance, in the "international style" still practiced by architects worldwide at that time. Driven less by the desire to imitate vernacular cultures and more by the new availability of previous cultural artifacts stored on magnetic and, soon, digital media, commercial culture in the West in the 1980s systematically replaced purity with stylistic heterogeneity and montage. Finally, when the Soviet Empire collapsed, Postmodernism won the world over.

Today we face a very real danger of being imprisoned by a new "international style"—something which we can call "global style." Cultural globalization, of which cheap international flights and the Internet are the two most visible carriers, erases cultural specificity with an energy and speed impossible for Modernism. Yet we also witness today a different logic at work: the desire to creatively place together the old and the new, the local and the transnational, in various combinations. It is this logic, for instance, which makes a city such as Barcelona (where I talked with John Gaeta in the context of the Art Futura 2003 festival, which led to this article) such a "hip" and "in" place today. All over Barcelona, the architectural styles of many past centuries co-exist with new, "cool" spaces of bars, hotels, museums, and so on. Medieval meets multinational, Gaudí meets Dolce & Gabbana, Mediterranean time meets Internet time. The result is an incredible sense of energy which one feels physically just walking along the street. It is this hybrid energy which, in my view, characterizes the most successful cultural phenomena today. [9] The 2D/3D image of The Matrix is one such hybrid.

Historians of cinema often draw a contrast between the Lumières and Marey. Along with a number of inventors in other countries, all working independently of each other, the Lumières created what we now know as cinema—the effect of motion based on the synthesis of discrete images. Earlier, Muybridge had already developed a way to take successive photographs of a moving object such as a horse; eventually the Lumières and others figured out how to take enough samples so that, when projected, they perceptually fuse into continuous motion. Being a scientist, Marey was driven by the opposite desire: not to create a seamless illusion of the visible world but rather to understand its structure by keeping subsequent samples discrete. Since he wanted to be able to easily compare these samples, he perfected a method whereby the subsequent images of moving objects were combined within a single image, thus making the changes clearly visible.

The hybrid image of The Matrix can in some ways be understood as the synthesis of these two approaches, which for a hundred years remained in opposition. Like the Lumières, Gaeta's goal is to create a seamless illusion. At the same time, like Marey, he also wants to be able to edit and sequence the individual recordings.

At the beginning of this article I evoked the notion of uneven development, pointing out that often the inside structure (the "infrastructure") completely changes before the surface (the "superstructure") catches up. What does this idea imply for the future of images, and in particular for the 2D/3D hybrids developed by Gaeta and others? As Gaeta pointed out, while his method can be used to make all kinds of images, so far it has been used in the service of realism as it is defined in cinema—i.e., anything the viewer sees has to obey the laws of physics. [10] So in the case of The Matrix, its images still have a traditional "realistic" appearance while internally they are structured in a completely new way. In short, we see the old "superstructure" still sitting on top of the new "infrastructure." What kinds of images will we see when the superstructure finally catches up with the infrastructure?

Of course, while the special effects of Hollywood movies so far follow the constraint of realism, i.e. obeying the laws of physics, they are not exactly the same as before. In order to sell movie tickets, DVDs, and all other merchandise, each new special effects film tries to top the previous one by showing something that nobody has seen before. In The Matrix it was "bullet time"; in The Matrix Reloaded it was the Burly Brawl scene in which dozens of identical clones fight Neo. The fact that the image is constructed differently internally does allow for all kinds of new effects; listening to Gaeta, it is clear that for him the key advantage of such images is the possibility they offer for virtual cinematography. That is, if before the camera's movement was limited to a small and well-defined set of moves—pan, dolly, roll—now it can move along any trajectory imaginable for as long as the director wants. Gaeta talks about the Burly Brawl scene in terms of virtual choreography: choreographing both the intricate and long camera moves and all the bodies participating in the fight (all of them digital recreations assembled using Gaeta's method as described above).
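A minimal sketch of what "any trajectory imaginable" means in practice: since the scene exists as 3D data, a virtual camera is just a curve through space plus a look-at direction. The spiral path below is my own hypothetical example, not a move from the films.

```python
# A virtual camera as pure data: positions along an arbitrary curve plus the
# directions in which it looks. Illustrative only.
import numpy as np

def spiral_camera_path(frames, target=(0.0, 0.0, 0.0), radius=5.0, rise=2.0):
    """One camera position and look direction per frame, spiraling around a target."""
    t = np.linspace(0, 2 * np.pi, frames)
    positions = np.stack([radius * np.cos(t),
                          radius * np.sin(t),
                          rise * t / (2 * np.pi)], axis=1)
    directions = np.asarray(target) - positions   # the camera always faces the target
    return positions, directions / np.linalg.norm(directions, axis=1, keepdims=True)

pos, look = spiral_camera_path(frames=240)   # a ten-second move at 24 frames per second
print(pos[0], look[0])
```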

According to Gaeta, creating the Burly Brawl scene took about three years. So while in principle Gaeta's method represents the most flexible way yet to recreate visible reality in a computer, it will be years before this method is streamlined and standardized enough for its advantages to become obvious. But when that happens, artists will have an extremely flexible hybrid medium at their disposal: completely virtualized cinema. Rather than expecting any of the present pure forms to dominate the future of visual culture, I think this future belongs to such hybrids. In other words, future images will probably still be photographic—although only on the surface.







[1] Not all of the special effects in The Matrix rely on the new processes developed by Gaeta, and of course many other Hollywood films already use some of the same strategies. I decided to focus on his process as it was used in The Matrix because it articulates a new approach to image construction most systematically—and also because, in contrast to many others in the special effects industry, Gaeta has extensively reflected on the process he developed, coming up with a number of terms to describe its different stages, such as "universal capture" and "image rendering."

[2] Although not everybody would agree with this analysis, I feel that after the end of the 1980s the field slowed down significantly: on the one hand, all the key techniques which can be used to create photorealistic 3D images had already been discovered; on the other hand, the rapid development of computer hardware in the 1990s meant that computer scientists no longer had to develop new techniques to make rendering faster, since the already developed algorithms would now run fast enough.

[3] The terms "reality simulation" and "reality sampling" were made up for this text; the terms "virtual cinema," "virtual human," "universal capture" and "image rendering" belong to John Gaeta.

[4] Therefore, while the article in Wired that positioned Gaeta as a groundbreaking pioneer and a rebel working outside of Hollywood contained the typical journalistic exaggeration, it was not that far from the truth. Steve Silberman, "Matrix 2," Wired 11.05 (May 2003).

[5] The method captures only the geometry and images of an actor's head; body movements are recorded separately using motion capture.

[6] See www.massivesoftware.com.

[7] John Gaeta, presentation during a workshop on the making of The Matrix, Art Futura 03 festival, Barcelona, October 12, 2003.

[8] J. F. Blinn, "Simulation of Wrinkled Surfaces," Computer Graphics (August 1978): 286-92.

[9] From this perspective, my earlier book The Language of New Media can be seen as a systematic investigation of a particular slice of contemporary culture driven by this hybrid aesthetics: the slice where the logic of the digital networked computer intersects the numerous logics of already established cultural forms.

[10] John Gaeta, a workshop on the making of The Matrix.



Lev Manovich is an artist and new media theorist. He is the author of The Language of New Media (The MIT Press, 2001) as well as over fifty articles which have been published in twenty-eight countries. He is in high demand worldwide as a lecturer on new media. Manovich's awards include a Guggenheim Fellowship (2002-2003), a Mellon Fellowship from CalArts, the 2002 Digital Cultures Fellowship from UC Santa Barbara, and a 2002 Fellowship from the Zentrum für Literaturforschung, Berlin.


Above copied from: http://www.braintrustdv.com/essays/image-future.html
