Thursday, June 10, 2010

The Rise of Art World 2.0, Ted Mooney

Two years into what we seem to have agreed, in full supine position, to call the Great Recession, it is clear to almost everyone that something has indeed taken its course, and that in many respected fields of endeavor things will never be the same. As someone who has pursued with equal commitment two parallel careers throughout my life—one in the art world (as an editor, a writer, and now as an educator at Yale’s graduate School of Art), and another in the literary world (as a novelist, essayist, and short-story writer)—I am struck, if not exactly surprised, by the similarity of the changes the recent financial meltdown has wrought on both fields, changes long in development but only now openly validated. I say changes, but in fact they are paradigm shifts, since both the art and literary worlds are undergoing transformations that will prove to be game-changingly radical. This much is certain: what was before, will be no more. The sooner we realize this, the more options we will have in the future.

Talking about this paradigm shift in regard to the art world is strangely difficult, for the very reason that the term “art world,” so casually bandied about by almost everyone, is most often used to refer to something that is at best a suspiciously convenient myth. No such all-encompassing art “world” exists. Most likely it is this misuse of the word “world” that has left the term “art world” itself open to such a wide range of misunderstandings. But in fact “art world” does have a very specific meaning, one quite different from the fuzzy globalist entity most often summoned up by those who use it so indiscriminately. Simply put, the art world consists of all those involved in the commission, creation, valuation, promotion, presentation, sale, criticism, documentation, and preservation of art. And with that established, many other matters grow a good deal clearer.

While we have always had artists in the U.S., for example, the American art world—one including all the elements listed above—is a much more recent development. Borrowing from the nomenclature of the software industry, I will call the U.S. art world’s earliest incarnation Art World 1.0 and locate its emergence somewhere in the mid- to late 1930s, when a number of forces came together in New York to create it. These forces included the sudden arrival here of European artists fleeing the onset of World War II; the convergence in New York of other European émigré artists who had moved to the U.S. much earlier but now joined their fellow exiles in the nascent art capital; and a similar movement of American artists away from the heartland to the growing artistic ferment in the East. In addition to these historical migrations, the federal Works Progress Administration, at that time the largest employer in the post-Depression U.S., provided substantial support for the arts, allowing, for example, Willem de Kooning (who had reached New York in 1927) to earn more than three times the salary of a typical Macy’s employee of the same period, all while painting public art works for which he was paid with federal dollars. What’s more, several other elements of the art world as I have defined it were already in place: among them Alfred Barr’s then artist-friendly MOMA, such prescient commissioners of art work as Peggy Guggenheim, and a handful of important galleries, soon followed by art and culture periodicals like The Tiger’s Eye and View—enough of the necessary elements, anyway, to give critical mass to the first genuine U.S. art world.

Money was certainly made within this self-contained enclave, but not very much, and the institutions and collectors who acquired the art works emerging from these artists’ studios did so mainly out of their acute awareness that this was a historic moment, unprecedented in the U.S. Few if any of these collectors imagined that the works they were buying would in a very short time increase astronomically in monetary value, so speculation was a negligible factor. It was sufficient that enough money be circulated through the art scene to keep the artists alive and productive. And with only short periods of stasis, indirection or revolt, the New York art world evolved from that first iteration into what we know today, its characteristic elements continuously shifting in relative importance as the art eco-system grew in volume and self-confidence.

I will leave the reader to decide when exactly the art world’s incremental upgrades occurred—when 1.0 became 1.1, and so on—confining myself instead to a few obviously watershed moments. By the late 1940s and early ’50s, the Abstract Expressionists had become pop-cultural stars whom the average American saw in equal measure as perplexing oddities (“My kid could’ve done that”) and gratifying emblems of postwar American dominance (“Take that, Europe”). Indeed, they became such accepted emblems of a newly prosperous U.S. that the federal government (in the form of the U.S. Information Agency, now known to have functioned abroad as a propaganda arm of the C.I.A.) sent their works on extended worldwide tour as a potent psychological asset in waging the Cold War.

In the late 1950s and early ’60s another major change occurred, as the soul-baring feats of the Abstract Expressionists and their progeny gave way to the cool ironies of artists like Jasper Johns, Robert Rauschenberg, Andy Warhol, and all those other artists so hastily lumped together by the media under the rubric of Pop art. I’ll call this moment the arrival of Art World 1.5, since, in my opinion, it brought us halfway to where we are now. With it came a shift in sensibility that suggested a new detachment of artist from artwork, one that allowed the expansion into that same ironic distance of the other art-world elements, those that till now had usually played a secondary role. The valuation, promotion, presentation, and sale of art began to take on more weight, and the normative practices in these areas showed signs of changing and evolving in ways till then unforeseeable.

Simultaneously, a new breed of collector emerged, exemplified by people like Robert Scull, who made his fortune from a Manhattan taxi fleet inherited from his father-in-law, had no real background in art and simply bought what he liked—in quantity. Pricing became more aggressive, promotion took on a glamour of its own, and the social aspect of the art world veered increasingly toward spectacle. Not only did this period mark the peak of postwar prosperity in this country, but it was also a genuinely thrilling time for U.S. art, one as innovative in its way as that ushered in by the Abstract Expressionists. What’s more—and here Warhol is the obvious example—the subject of much of this art was commerce, money, glamour, and popular culture. So for the first time the economic elements of the art world were unapologetically accorded billing equal to the art and the artists themselves. And with this acceptance, the ethos that has brought us to our present pass was unequivocally established.

To be sure, there were those who worked in explicit rebellion against the overall commercial trend of the art world—the Conceptualists, the Earthwork artists, performance artists, and others—many of whom produced art that, though intended in part to be “uncollectable,” must be accounted major work by any standards. But the expansion and increasing commercialization of the art world—punctuated by the booms and busts intrinsic to any market-linked community—continued apace. Among the innovations that contributed to the art world’s rapid development from the 1960s on were the decreasing cost and consequent proliferation of color reproductions in art magazines, the rise of graduate art-school programs (which implicitly presented art as a solid career path, comparable to, say, dentistry, in the security it offered), the increasing acceptance of the nakedly commercial art fair as a legitimate forum for presenting art, the ascension of the artist super-stars of the 1980s, the exploding resale market for contemporary art at venerable auction houses, the construction boom for contemporary art museums as they became more and more widely perceived as tourist magnets that no respectable mid-sized municipality could do without, and, finally, the vastly accelerated exchange of ideas and images that the Internet allowed. By the time the Museum of Modern Art reopened its doors after its lavish renovations of 2002-04, it is safe to say that Art World 1.9 had reached its apotheosis and begun its ongoing decline. Art World 2.0, a complete reset of the original, with consequences only beginning to be known, was taking shape.

Why do I locate this changeover at that moment? It’s tempting to point to the example of the newly expanded MOMA, a grotesquely misconceived distortion of its former self. As far back as the 1970s there had been talk, both within the museum and outside it, of declaring MOMA a historical museum, a museum precisely of modern art, which is generally seen as having come to a triumphant end with Minimalism. Watching MOMA continue to show contemporary art in its ostentatiously corporate, quintessentially modernist quarters seems to many like watching a 75-year-old man (the Modern opened in 1929, the very year the Great Depression began) attempt a kickflip indy at the local skateboarding park—a sight unseemly at best, but in any case a clear indication that all self-awareness has long since departed the scene. That may sound like a cheap shot, but within it lies a kernel of ineradicable truth. Every three generations (and I’m using “generation” to mean the traditional 25 years, not the five to ten years implied by advertisements and overwrought publicists) the living memory of how the world was “back then,” what its inhabitants at that time aspired to and how they went about getting it, begins to die off. The number of eyewitnesses rapidly diminishes until we become reliant on second-hand accounts, self-serving memoirs, and conflicting rumor to summon up a vision of the original. Soon nothing can be verified with certainty, the “real” past is lost, and it can only carry on as a simulacrum of itself. Not until then, paradoxically, can the genuinely new, brought into being of its own necessity, be born and thrive on its own terms. Now, for the American art world, that time is upon us.

While it must be emphasized that Art World 2.0 remains in its earliest stage of development, it currently seems characterized by a studious rejection, quiet but steely, of the corporatization of art so enthusiastically and profitably pursued during the Art World 1.9 years. Among my grad students and many of their confreres in their mid-thirties or younger, there is a strong preference for art that in one way or another emphasizes “secrecy,” subversion, withholding, ephemerality, word-of-mouth invitation, sub-visible presence, intimate one-on-one interaction between artist and “viewer,” and strong artist-to-artist dialogue. Where there’s institutional critique (and there’s a lot of it) it’s unlikely to be cast in the flamboyantly confrontational style of, say, Hans Haacke. Instead it might take place within a chosen “major New York museum”—the default term proposed by some leading museums after the almost Talmudic deliberations of their legal departments—among a group of invited participants, the results to be documented, if at all, by the museum’s security cameras. (It seems significant in itself that “surveillance”—both inside museums and elsewhere—is a hot topic among Art World 2.0 artists, who are acutely aware of the demise of privacy and ingenious in their attempts to resuscitate it.)

In addition, widespread revelations about the recent criminal activities of respected banks and international corporations, crimes that have almost without exception gone unpunished, have made virtually all corporate practice suspect to a sizable proportion of Americans, and to many of the Art World 2.0 artists it is, at least for now, anathema. As they attempt to reclaim for artists the freedom and respect once naturally accorded the best art work, independent of the by-now customary financial indices and career-path signposts, they have begun to seek alternative ways of structuring their art world. And they have learned how to adapt for their own purposes the near-universal corporate response to the recent financial meltdown: they have developed ways to eliminate (or at least minimize) the middlemen.

This strategy is only possible because the old Art World, as well as modernism and the various morbid offshoots that followed its demise, is dead to them. They accept this as a fact, but don’t dwell on it, since the corporatization that marked Art World 1.9’s endgame is of little relevance to them, except as a warning. What most concerns these emerging Art World 2.0 artists from a practical standpoint is regaining control of how their work is presented and, given the ever-greater importance of the Internet, how the reproduced images of that work are disseminated. To those ends, many prefer to avoid long-term gallery affiliation altogether, seeking instead the most suitable venue for each new project as it is completed. By stringing together a series of one-shot appearances in places of their own choosing, they are better able to maximize their freedom and shape their work’s development. This may be seen as a direct response to the financial hysteria of the Art World 1.9 years, which led to the virtual extinction (though always with magnificent exceptions) of long-term, even life-long relationships between gallery and artist. In times not nearly as distant as they now seem, galleries saw it as their role to nurture and develop their artists, eventually to arrive at a mutually beneficial outcome through patience and hard work. But the frenetic financial pace of Art World 1.9—whereby new artists would typically be given two or three shows to demonstrate their financial viability before, if their work failed to sell impressively, being summarily dropped—did away with these nurturing relationships, and Art World 2.0 artists have quite sensibly concluded that their best option is to nurture themselves. This shift in thinking explains the primary emphasis Art World 2.0 artists place on building and maintaining their connections with other artists; they want a mutual support structure that they can depend on.
Not only does this preference mirror their widely shared interest in reconnecting with their audience personally, in many cases on an intimate one-to-one basis, but it also underscores their deep-seated resistance to any form of outside control whatsoever, from any quarter. This insistence by Art World 2.0 artists on setting their own terms is yet another bit of bad news for galleries, which have already begun to see their high-end artists’ new work bypass them altogether, going straight from studio to auction house, where a successful sale can establish an artist’s financial worth much more effectively than a sold-out gallery show, which is by nature not at all transparent. The gallery dealer can and does make use of a whole range of tricks to achieve the desired public perception of a given show’s outcome, including under-the-table discounts, non-existent waiting lists for a particular artist’s work, failure to disclose the prices at which works really sold, and many other sleights of hand. At auction, the process is far more transparent: a work is sold to the highest bidder, conferring on the artist a more certain status and financial validation. Foreign collectors, especially those from emerging markets in Asia, are far more comfortable buying the work of an artist with whom they may not be familiar if that artist has already received the imprimatur of a respected auction house.

Obviously, auction houses are unlikely to accept a consignment from an untested artist, and that fact, along with the innate predisposition of Art World 2.0 artists to avoid entangling commercial alliances, assures that there will be many more artist-organized shows in the future. This polarization of sales venues corresponds, interestingly enough, to the drastically increased concentration of U.S. wealth into far fewer hands, beginning with Ronald Reagan’s election in 1980 and grossly accelerated during the Bush years. I have heard more than a few highly informed art-world players speculate pessimistically about the continued viability of art galleries in Art World 2.0. Personally, I think this is more a form of complaint than prognostication, but the possibility cannot be dismissed out of hand. At the very least it is a recognition, however indirect, that the reset implicit in the arrival of Art World 2.0 has already occurred.

What direction Art World 2.0 will eventually take cannot, of course, be known at this early date, but its initial aspirations are clear. Precisely because the monetary valuation of art, while based on some established indices, is at base irrational and subjective to the point of arbitrariness, the artist who is alert to these matters can work with a freedom unavailable to virtually all other workers in all other professions. This freedom was highly valued in the early stages of Art World 1.0—not only by artists but also by those composing the other necessary elements of the art world eco-system. Somewhere along the way, however, that freedom was lost to many artists as the art-world “support system” arrogated to itself more and more power, which came to influence artists themselves to an equally inflated degree. Those days may not be entirely over, but Art World 2.0 has arrived with the express intention of reclaiming its freedom and resisting outside influences, especially institutional ones, with all the energy and inventiveness its members can bring to bear.

They have already dismissed any notion of a master scenario for art’s development, simply by virtue of their arrival. They have the tools and weapons to take back their freedom, with its mix of blessings. Now we will see if they have the will.

Copyright © 2010 by Ted Mooney

Author: Ted Mooney was a senior editor at Art in America magazine for more than 30 years and now teaches a graduate seminar at Yale University’s School of Art. He is also the author of three award-winning novels, as well as a number of essays and shorter works of fiction. His most recent novel, to be published by Alfred A. Knopf on May 11, 2010, is titled The Same River Twice. A video trailer for this book was created in collaboration with new-media artist John Gara and can be found at .


Tuesday, June 8, 2010

Image Future, Lev Manovich

Uneven Development

What kinds of images will dominate visual culture a number of decades from now? Will they still be similar to the typical image that surrounds us today—photographs that are digitally manipulated and often combined with various graphical elements and type? Or will future images be completely different? Will the photographic code fade away in favor of something else?

There are good reasons to assume that future images will be photograph-like. Like a virus, the photograph has turned out to be an incredibly resilient representational code: it survived waves of technological change, including the computerization of all stages of cultural production and distribution. The reason for this persistence of the photographic code lies in its flexibility: photographs can be easily mixed with all other visual forms—drawings, 2D and 3D designs, line diagrams, and type. As a result, while photographs truly dominate contemporary visual culture, most of them are not pure photographs but various mutations and hybrids: photographs which went through various filters and manual adjustments to achieve a more stylized look, a flatter graphic look, more saturated color, etc.; photographs mixed with design and type elements; photographs which are not limited to the part of the spectrum visible to the human eye (night vision, X-ray); simulated photographs done with 3D computer graphics; and so on. Therefore, while we can say that today we live in a "photographic culture," we also need to start reading the word "photographic" in a new way. "Photographic" today is really photo-GRAPHIC, the photo providing only an initial layer for the overall graphical mix.

One way in which change happens in nature, society, and culture is inside-out. The internal structure changes first, and this change affects the visible skin only later. For instance, according to the Marxist theory of historical development, infrastructure (i.e., the mode of production in a given society, also called the "base") changes well before the superstructure (the ideology and culture of this society). In a different example, think of technology design in the twentieth century: typically, a new type of machine was at first fitted within an old, familiar skin (for instance, early twentieth-century cars emulated the form of horse carriages). The familiar McLuhan idea that new media first emulate old media is another example of this type of change. In this case, a new mode of media production, so to speak, is first used to support old structures of media organization before the new structure emerges. For instance, the first typeset books were designed to emulate hand-written books; cinema first emulated theater; and so on.

This concept of uneven development can be useful in thinking about the changes in contemporary visual culture. Since its beginnings fifty years ago, the computerization of photography (and cinematography) has by now completely changed the internal structure of the photographic image; yet its "skin," i.e. the way the image looks, still largely remains the same. It is therefore possible that at some point in the future the "skin" of an image will also become completely different, but this has not happened yet. So we can say that at present our visual culture is characterized by a new computer "base" and an old photographic "superstructure."

The Matrix trilogy of films provides us with a very rich set of examples perfect for thinking further about these issues. The trilogy is an allegory about how its visual universe is constructed. That is, the films tell us about the Matrix, the virtual universe which is maintained by computers—and of course, visually the images of the Matrix which we the viewers see in the films were all indeed assembled with the help of software (the animators sometimes used Maya but mostly relied on custom-written programs). So there is a perfect symmetry between us, the viewers of a film, and the people who live inside the Matrix—except that, while the computers running the Matrix are capable of doing this in real time, most scenes in each of The Matrix films took months and even years to put together. (So The Matrix can also be interpreted as a futuristic vision of computer games, at a point in the future when it becomes possible to render The Matrix-style visual effects in real time.)

The key to the visual universe of The Matrix trilogy is the new set of computer graphic processes that over the years were developed by John Gaeta and his colleagues at ESC Entertainment. Gaeta coined names for these processes: "virtual cinema," "virtual human," "universal capture," "image-based rendering," and others. Together, these processes represent a true milestone in the history of computer-driven special effects. They take to their logical conclusion the developments of the 1990s, such as motion capture, and simultaneously open a new stage. [1] We can say that with The Matrix, the old "base" of photography has finally been completely replaced by a new computer-driven one. What remains to be seen is how the "superstructure" of a photographic image—what it represents and how—will change to accommodate this "base."

Reality Simulation Versus Reality Sampling

In order to better understand the significance of Gaeta's method, let us briefly run through the history of 3D photo-realistic image synthesis and its use in the film industry. In 1963, Lawrence G. Roberts (who later in the 1960s became one of the key people behind the development of ARPANET but at that time was a graduate student at MIT) published a description of a computer algorithm to construct images in linear perspective. These images represented objects through lines; in the contemporary language of computer graphics they would be called "wire frames." Approximately ten years later, computer scientists designed algorithms that allowed for the creation of shaded images (the so-called Gouraud and Phong shading, named after the computer scientists who created the corresponding algorithms). From the middle of the 1970s to the end of the 1980s, the field of 3D computer graphics went through rapid development. Every year new fundamental techniques were arrived at: transparency, shadows, image mapping, bump texturing, particle systems, compositing, ray tracing, radiosity, and so on. [2] By the end of this creative and fruitful period in the history of the field, it was possible to use a combination of these techniques to synthesize images of almost any subject that were often not easily distinguishable from traditional cinematography.
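
The linear-perspective construction Roberts described can be illustrated with a minimal pinhole-camera sketch (a schematic illustration of the underlying geometry, not his actual 1963 algorithm; the function name and the sample points are my own): each 3D point is divided by its depth, which is why the nearer edges of a wire-frame object project larger than the farther ones.

```python
# Minimal pinhole-camera perspective projection: a schematic sketch of the
# linear-perspective construction, not Roberts' actual 1963 algorithm.

def project(point, focal_length=1.0):
    """Project a 3D point (x, y, z) onto the image plane at z = focal_length."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    # Dividing by depth produces the foreshortening of linear perspective.
    return (focal_length * x / z, focal_length * y / z)

# Two parallel edges of a wire-frame cube, one at depth 1 and one at depth 2:
near_edge = [project(p) for p in [(-0.5, -0.5, 1.0), (0.5, -0.5, 1.0)]]
far_edge = [project(p) for p in [(-0.5, -0.5, 2.0), (0.5, -0.5, 2.0)]]
# The near edge spans 1.0 unit on the image plane, the far edge only 0.5.
```

Connecting the projected endpoints with straight lines is all that is needed to draw such a "wire frame" on screen.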

All this research was based on one fundamental assumption: in order to re-create an image of reality identical to the one captured by a film camera, we need to systematically simulate the actual physics involved in the construction of this image. This means simulating the complex interactions between light sources, the properties of different materials (cloth, metal, glass, etc.), and the properties of physical cameras, including all their limitations such as depth of field and motion blur. Since it was obvious to computer scientists that if they were to simulate all this physics exactly, a computer would take forever to calculate even a single image, they put their energy into inventing various shortcuts which would create sufficiently realistic images while involving fewer calculation steps. So in fact each of the techniques for image synthesis I mentioned in the paragraph above is one such "hack"—a particular approximation of a particular subset of all possible interactions between light sources, materials, and cameras.
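
The simplest of these shading "hacks," Lambertian diffuse shading (an ancestor of the Gouraud and Phong models mentioned earlier), reduces the whole interaction of light and material to a single dot product. The sketch below is schematic, not any production renderer's code; the albedo and intensity parameters are illustrative assumptions.

```python
import math

# Lambertian (diffuse) shading: brightness is proportional to the cosine of
# the angle between the surface normal and the light direction. It ignores
# reflections, shadows, and everything else a full simulation would include.

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, albedo=0.8, intensity=1.0):
    n = _normalize(normal)
    l = _normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))  # clamp back-facing light
    return albedo * intensity * cos_theta

head_on = lambert_shade((0, 0, 1), (0, 0, 1))  # light straight above: full brightness
grazing = lambert_shade((0, 0, 1), (1, 0, 0))  # light at 90 degrees: no contribution
```

Each richer technique in the list above (Phong highlights, ray tracing, radiosity) adds back some subset of the physics that this minimal approximation throws away.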

This assumption also means that you are re-creating reality step-by-step, from scratch. Every time you want to make a still image or an animation of some object or scene, the story of creation from the Bible is being replayed.

(I imagine God creating the Universe by going through the numerous menus of a professional 3D modeling, animation, and rendering program such as Maya. First he has to make all the geometry: manipulating splines, extruding contours, adding bevels... Next, for every object and creature he has to choose the material properties: specular color, transparency level, image, bump, and reflection maps, and so on. He finishes one page of menus, wipes his forehead, and starts working on the next menu page. Now on to defining the lights: again, dozens of menu options need to be selected. He renders the scene, looks at the result, and admires his creation. But he is far from being done: the universe he has in mind is not a still image but an animation, which means that the water has to flow, the grass and leaves have to move under the blow of the wind, and all the creatures also have to move. He sighs and opens another set of menus where he has to define the parameters of the algorithms that simulate the physics of motion. And on, and on, and on. Finally the world itself is finished and it looks good; but now God wants to create Man so he can admire his creation. God sighs again, and takes from the shelf a set of Maya manuals...)

Of course we are in a somewhat better position than God was. He was creating everything for the first time, so he could not borrow things from anywhere. Therefore everything had to be built and defined from scratch. But we are not creating a new universe; instead we are visually simulating a universe that already exists, i.e. physical reality. Therefore computer scientists working on 3D computer graphics techniques realized early on that, in addition to approximating the physics involved, they can sometimes take another shortcut. Instead of defining something from scratch through algorithms, they can simply sample it from existing reality and incorporate these samples into the construction process.

Examples of the application of this idea are the techniques of texture mapping and bump mapping, both introduced in the second half of the 1970s. With texture mapping, any 2D digital image—which can be a close-up of some texture such as wood grain or bricks, but which can also be anything else, for instance a logo, a photograph of a face or of clouds—is mathematically wrapped around virtual geometry. This is a very effective way to add the visual richness of the real world to a virtual scene. Bump mapping works similarly, but in this case the 2D image is used as a way to quickly add complexity to the geometry itself. For instance, instead of having to manually model all the little cracks and indentations which make up the 3D texture of a concrete wall, an artist can simply take a photograph of an existing wall, convert it into a grayscale image, and then feed this image to the rendering algorithm. The algorithm treats the grayscale image as a depth map, i.e. the value of every pixel is interpreted as the relative height of the surface. So in this example, light pixels become points on the wall that sit a little in front, while dark pixels become points that sit a little behind. The result is an enormous saving in the amount of time necessary to recreate a particular but very important aspect of our physical reality: the slight and usually regular 3D texture found in most natural and many human-made surfaces, from the bark of a tree to woven cloth.
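
The depth-map reading of a grayscale image described above can be sketched directly: central differences between neighboring pixel values yield a perturbed surface normal at each interior pixel, which a shader can then light as if the geometry were really bumpy. This is a simplified illustration of the idea, not a reconstruction of the original bump-mapping algorithm; the function name and the toy height field are my own.

```python
import math

# A grayscale image read as a height field: finite differences between
# neighboring pixel values give each interior pixel a perturbed surface
# normal, without changing the underlying flat geometry at all.

def bump_normals(height, scale=1.0):
    h, w = len(height), len(height[0])
    normals = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Slope of the height field in x and y (central differences).
            dx = (height[y][x + 1] - height[y][x - 1]) * 0.5 * scale
            dy = (height[y + 1][x] - height[y - 1][x]) * 0.5 * scale
            # The normal points "up" out of the surface, tilted by the slopes.
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            normals[(x, y)] = tuple(c / length for c in n)
    return normals

# A flat patch with one bright pixel: normals next to the "bump" tilt away
# from it, so a fixed light shades one side lighter and the other darker.
patch = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
normals = bump_normals(patch)
```

Feeding these per-pixel normals into any shading model then produces the illusion of cracks and indentations on what is geometrically a flat wall.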

Other 3D computer graphics techniques based on the idea of sampling existing reality include reflection mapping and 3D digitizing. Yet although all these techniques have been widely used ever since they were invented, many people in the field (as far as I can see) always felt that they were cheating. Why? I think this feeling was there because the overall conceptual paradigm for creating photorealistic computer graphics was to simulate everything from scratch through algorithms. So if you had to use techniques based on directly sampling reality, you somehow felt that this was just temporary—because the appropriate algorithms were not yet developed or because the machines were too slow. You also had this feeling because once you started to manually sample reality and then tried to include these samples in your perfect algorithmically defined image, things would rarely fit exactly right, and painstaking manual adjustments were required. For instance, texture mapping would work perfectly if applied to a flat surface, but if the surface was curved, inevitable distortion would occur.

(I am using "we" here and in other places in this text because I spent approximately seven years working professionally in the field of 3D computer animation between 1984 and 1992, so I still feel a certain identification with this field. At the IMAGINA 2003 festival in Barcelona I met John Gaeta and Greg Juby from ESC, who were there to lecture on the making of The Matrix. Slowly it became clear that the three of us were connected by multiple threads. In 1984 I went to work for a company in New York called Digital Effects, at that time one of seven companies in the world focused on 3D computer animation for television and film. Company president Jeff Kleiser later founded another company—Kleiser-Walczak—where Greg Juby worked for a few years in the 1990s. Juby graduated from Syracuse University where—as we discovered over dinner—he was my student in the very first university class in digital arts I ever taught [1992]. While working at Kleiser's company, Juby met John Gaeta and eventually went to work for him at ESC. Finally, it also turned out that before we turned to computer graphics both Gaeta and I were students at New York University film school.)

Throughout the 1970s and 1980s the "reality simulation" paradigm and the "reality sampling" paradigm co-existed side by side. More precisely, as I suggested above, the sampling paradigm was embedded within the reality simulation paradigm. It was common sense that the way to create photorealistic images of reality was to simulate its physics as precisely as one could. Sampling existing reality and then adding these samples to a virtual scene was a trick, a shortcut within the otherwise honest game of simulation.

"Total Capture": Building The Matrix

So far we have looked at the paradigms of the 3D computer graphics field without considering the uses of the simulated images. So what happens if you want to incorporate photorealistic images into a film? This introduces a new constraint. Not only does every simulated image have to be consistent internally, with cast shadows corresponding to light sources and so on, but it now also has to be consistent with the cinematography of the film. The simulated universe and the live-action universe have to match perfectly. (I am talking here about the "normal" use of computer graphics in narrative films, and not the more graphical aesthetics of TV graphics and music videos, which often deliberately juxtapose different visual codes.) As can be seen in retrospect, this new constraint eventually changed the relationship between the two paradigms in favor of the sampling paradigm. But this is visible only now, after The Matrix films made the sampling paradigm the cornerstone of their visual universe. [3]

At first, when filmmakers started to incorporate synthetic 3D images in films, this had no effect on how people thought about 3D image synthesis. The first feature film to include 3D computer images was Looker (1980). Throughout the 1980s, a number of films used computer images, but always only as a very small element within the overall film narrative. (Tron, released in 1982, was an exception; it can be compared to The Matrix, since its universe is situated inside a computer and created through computer graphics.) For instance, one of the Star Trek films (Star Trek II: The Wrath of Khan, 1982) contained a scene of a planet coming to life; it was created using the very first particle system. But it was a single scene, and it had no interaction with any other scenes in the film.

In the early 1990s the situation started to change. With pioneering films such as The Abyss (James Cameron, 1989), Terminator 2: Judgment Day (James Cameron, 1991), and Jurassic Park (Steven Spielberg, 1993), computer-generated characters became key protagonists of film narratives. This meant that they would appear in dozens or even hundreds of shots throughout a film, and that in most of these shots the computer characters would have to be integrated with real environments and human actors captured via live-action photography (or what in the business is called the "live plate"). Examples are the T-1000 cyborg character in Terminator 2 and the dinosaurs in Jurassic Park. These computer-generated characters are situated inside the live-action universe (obtained by sampling physical reality with a 35mm film camera). The simulated world is located inside the captured world, and the two have to match perfectly.

As I pointed out in The Language of New Media in the discussion of compositing, perfectly aligning elements that come from different sources is one of the fundamental challenges of computer-based realism. Throughout the 1990s, filmmakers and special effects artists dealt with this challenge using a variety of techniques and methods. What Gaeta realized earlier than others is that the best way to align the two universes of live action and 3D computer graphics was to build a single new universe. [4]

Rather than treating reality sampling as just one technique to be used alongside many other "proper" algorithmic techniques of image synthesis, Gaeta turned it into the key foundation of his process. The process systematically takes physical reality apart and then systematically reassembles the elements into a virtual, computer-based representation. The result is a new kind of image that has photographic/cinematographic appearance and detail, yet internally is structured in a completely different way.

How does the process work? The geometry of an actor's head is captured with a 3D scanner. Next, the actor's performance is recorded using three high-resolution cameras; this includes everything the actor will say in the film and all possible facial expressions. [5] (During production the studio was capturing over five terabytes of data each day.) Special algorithms are then used to align the three images by tracking a number of points on the face, in order to stitch the three images into one. This new image is then mapped onto the geometry model. The information in the image is used not only as a texture map but also as a kind of bump map that locally transforms the geometry of the model in correspondence with facial movements. The end result is a perfect reconstruction of the captured performance, now available as 3D computer graphics data, with all the advantages that come from having such a representation.
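The idea of a captured image driving geometry as well as color can be sketched in a few lines. This is purely illustrative (all names and data are hypothetical, and the actual ESC pipeline involved multi-camera stitching, tracking, and much more): each vertex of the model is pushed along its surface normal in proportion to an intensity value sampled from the image.

```python
# Illustrative sketch only: per-vertex image intensities displace the
# model's geometry along the surface normals, so the captured image acts
# as a kind of bump/displacement map as well as a texture map.

def displace(vertices, normals, intensities, scale=0.1):
    """Move each vertex along its unit normal by scale * intensity."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), i in zip(vertices, normals, intensities):
        out.append((vx + scale * i * nx,
                    vy + scale * i * ny,
                    vz + scale * i * nz))
    return out

# Two vertices of a flat patch, both facing up the z-axis.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
samples = [0.5, 1.0]   # hypothetical intensities sampled from the image
print(displace(verts, norms, samples))
```

Re-running this with a new set of per-frame intensities is what animates the geometry in correspondence with the recorded facial movements.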

This process is significantly different from the commonly accepted methods used to create computer-based special effects, namely keyframing and physically based modeling. With the first method, an animator specifies the key positions of a 3D model, and the computer calculates the in-between frames. With the second method, all the animation is created automatically by software that simulates the physics underlying the movement. (This method thus represents a particular instance of the "reality simulation" paradigm discussed above.) For instance, to create a realistic animation of a moving creature, programmers model its skeleton, muscles, and skin, and specify algorithms that simulate the actual physics involved. Often the two methods are combined: physically based modeling can be used to animate a running dinosaur, while manual animation is used for shots where the dinosaur interacts with human characters.
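The keyframing half of this contrast can be sketched minimally: the animator supplies positions at a few key frames, and the computer interpolates the frames between them. (Production systems interpolate full transforms with splines rather than linear blending; this toy version only shows the principle.)

```python
# Minimal keyframe in-betweening sketch: linear interpolation of a 3D
# position between animator-specified (frame, position) keys.

def inbetween(keyframes, frame):
    """Interpolate a 3D position for `frame` from a list of
    (frame_number, (x, y, z)) keyframes."""
    keyframes = sorted(keyframes)
    # Clamp to the first/last key outside the keyed range.
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the pair of keys bracketing the requested frame and blend.
    for (f0, p0), (f1, p1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return tuple(a + t * (b - a) for a, b in zip(p0, p1))

keys = [(0, (0.0, 0.0, 0.0)), (10, (10.0, 0.0, 0.0))]
print(inbetween(keys, 5))   # -> (5.0, 0.0, 0.0), midway between the keys
```

Physically based modeling replaces this hand-placed interpolation with a simulation that computes every frame from the underlying dynamics.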

At the time of this writing, the most impressive achievement in physically based modeling was the battle in The Lord of the Rings: The Return of the King (Peter Jackson, 2003), which involved tens of thousands of virtual soldiers, all driven by Massive software. [6] Similar to the non-human players (or bots) in computer games, each virtual soldier was given the ability to "see" the terrain and other soldiers, a set of priorities, and an independent "brain": an A.I. program that directs the character's actions based on its perceptual inputs and priorities. But because Massive, unlike the A.I. of games, does not have to run in real time, it can create scenes with hundreds of thousands of realistically behaving agents (one commercial created with the help of Massive featured 146,000 virtual characters).
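The perceive-then-decide loop of such an agent can be caricatured in a few lines. This is hypothetical logic for illustration only, not Massive's actual behavior model: each agent looks at the other agents it can "see" and picks an action according to its own simple priorities.

```python
# Toy crowd-agent sketch (hypothetical logic, not the Massive product):
# each agent perceives nearby agents and chooses an action from its
# priorities, independently of every other agent.
import math

def nearest(agent, others):
    """Return the agent in `others` closest to `agent`."""
    return min(others, key=lambda o: math.dist(agent["pos"], o["pos"]))

def decide(agent, others):
    """Attack if an enemy is within reach, otherwise keep advancing."""
    enemies = [o for o in others if o["team"] != agent["team"]]
    if not enemies:
        return "advance"
    foe = nearest(agent, enemies)
    return "attack" if math.dist(agent["pos"], foe["pos"]) < agent["reach"] else "advance"

soldiers = [
    {"team": "A", "pos": (0.0, 0.0), "reach": 2.0},
    {"team": "B", "pos": (1.0, 0.0), "reach": 2.0},
    {"team": "B", "pos": (9.0, 0.0), "reach": 2.0},
]
print(decide(soldiers[0], soldiers[1:]))   # 'attack': an enemy is 1.0 away
```

Because every decision is local, running the loop over tens of thousands of such agents produces the emergent, non-repeating crowd behavior described above; without a real-time constraint, the simulation can simply take as long as it needs.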

Gaeta's method uses neither manual animation nor simulation of the underlying physics. Instead, it directly captures reality, including color, texture, and movement. Short sequences of the actor's performances are encoded as 3D computer animations; these animations form a library from which the filmmakers can draw as they compose a scene. The analogy with musical sampling is obvious here. As Gaeta pointed out, his team never used manual animation to tweak the motion of a character's face; however, just as a musician might, they would often "hold" a particular expression before going on to the next one. [7] This suggests another analogy: editing videotape. But this is second-degree editing, so to speak: instead of simply capturing segments of reality on video and then joining them together, Gaeta's method produces complete virtual recreations of particular phenomena (self-contained micro-worlds) which can then be further edited and embedded within a larger 3D simulated space.

"Image Rendering": Reality Re-assembled

Such a method combines the best of both worlds: physical reality as captured by lens-based cameras, and synthetic 3D computer graphics. While it is possible to recreate the richness of the visible world through manual painting and animation, as well as through various computer graphics techniques (texture mapping, bump mapping, physical modeling, etc.), it is expensive in terms of the labor involved. Even with physically based modeling techniques, countless parameters have to be tweaked before the animation looks right. In contrast, capturing visible reality through a lens onto film, tape, DVD-R, a computer hard drive, or other media is cheap: just point the camera and press the "record" button.

The disadvantage of such recordings is that they lack the flexibility demanded by contemporary remix culture. This culture demands not self-contained aesthetic objects or self-contained records of reality but smaller units: parts that can be easily changed and combined with other parts in endless combinations. However, because the lens-based recording process flattens the 3D semantic structure of reality, converting a space filled with discrete objects into a flat field of pixels, any kind of editing operation (deleting objects, adding new ones, compositing, and so on) becomes quite difficult.

In contrast, 3D computer-generated worlds have exactly the flexibility one would expect from media in the Information Age. (It is therefore not accidental that 3D computer representation, along with hypertext and other new computer-based methods of data representation, was conceptualized in the same decade in which the transformation of advanced industrialized societies into information societies became visible.) In a 3D computer-generated world everything is discrete: objects are defined by points described in terms of their XYZ coordinates; other properties of objects, such as color, transparency, and reflectivity, are similarly described in terms of discrete numbers. Duplicating an object a hundred times requires only a few mouse clicks or a short command typed on a command line; similarly, all other properties of a world can easily be changed. Just as a sequence of genes contains the code that is expanded into a complex organism, a compact description of a 3D world can be quickly transmitted through a network, with the client computer reconstructing the full world (this is how online multiplayer computer games and simulators work).
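Both claims in this paragraph, that duplication is trivial and that a compact description can be shipped over a network and reconstructed, follow directly from the discreteness of the representation. A minimal sketch, using a hypothetical scene format invented for illustration:

```python
# Illustration of the discreteness claim: every object in a 3D scene is
# a compact numeric description, so duplicating it a hundred times, or
# transmitting the whole world over a network, is a trivial operation
# on that description. (The scene format here is hypothetical.)
import copy
import json

scene = [{"shape": "cube", "position": [0.0, 0.0, 0.0],
          "color": [255, 0, 0], "transparency": 0.0}]

# Duplicate the object a hundred times, offsetting each copy along x.
for i in range(1, 100):
    obj = copy.deepcopy(scene[0])
    obj["position"][0] = float(i)
    scene.append(obj)

# The whole 100-object world serializes to a short string that a client
# computer could receive and reconstruct, as in online games.
payload = json.dumps(scene)
rebuilt = json.loads(payload)
print(len(rebuilt))   # 100
```

A lens-based recording of the same hundred cubes would be a flat field of pixels; here each cube remains an addressable, editable unit after any number of round trips.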

Beginning in the late 1970s, when James Blinn introduced texture mapping [8], computer scientists, designers, and animators gradually expanded the range of information that could be recorded from the real world and then incorporated into a computer model. Until the early 1990s this information mostly concerned the appearance of objects: color, texture, light effects. The next significant step was the development of motion capture, which during the first half of the 1990s was quickly adopted in the movie and game industries. Now computer-synthesized worlds relied not only on sampling the appearance of the real world but also on recordings of the movements of animals and humans. Building on all these techniques, Gaeta's method takes them to a new stage: capturing just about everything that at present can be captured, and then reassembling the samples to create a digital (and thus completely malleable) recreation. Put in a larger context, the resulting 2D/3D hybrid representation fits perfectly with the most progressive trends in contemporary culture, which are all based on the idea of the hybrid.

The New Hybrid

It is my strong feeling that the emerging "information aesthetics" (i.e., the new cultural features specific to information society) has, or will have, a very different logic from that of Modernism. The latter was driven by a strong desire to erase the old, visible as much in the avant-garde artists' (particularly the Futurists') statements that museums should be burned as in the dramatic destruction of the social and spiritual realities of many people in Russia after the 1917 revolution, and in other countries after they became Soviet satellites in 1945. Culturally and ideologically, Modernists wanted to start from a tabula rasa, radically distancing themselves from the past. It was only in the 1960s that this move started to feel inappropriate, as manifested both in the loosening of ideology in Communist countries and in the beginnings of a new Postmodern sensibility in the West. To quote the title of a famous book by Robert Venturi et al. (published in 1972, it was the first systematic manifestation of the new sensibility), Learning from Las Vegas meant admitting that organically developing vernacular cultures involve bricolage and hybridity, rather than the purity seen, for instance, in the "international style" still practiced by architects worldwide at the time. Driven less by the desire to imitate vernacular cultures than by the new availability of previous cultural artifacts stored on magnetic and, soon, digital media, commercial culture in the West in the 1980s systematically replaced purity with stylistic heterogeneity and montage. Finally, when the Soviet Empire collapsed, Postmodernism won the world over.

Today we face a very real danger of being imprisoned by a new "international style," something we might call the "global style." Cultural globalization, of which cheap international flights and the Internet are the two most visible carriers, erases cultural specificity with an energy and speed impossible for Modernism. Yet today we also witness a different logic at work: the desire to creatively place together the old and the new, the local and the transnational, in various combinations. It is this logic, for instance, that has made a city such as Barcelona (where I talked with John Gaeta in the context of the Art Futura 2003 festival, which led to this article) such a "hip" and "in" place today. All over Barcelona, the architectural styles of many past centuries co-exist with the new, "cool" spaces of bars, hotels, museums, and so on. Medieval meets multinational, Gaudí meets Dolce & Gabbana, Mediterranean time meets Internet time. The result is an incredible sense of energy that one feels physically just walking along the street. It is this hybrid energy that, in my view, characterizes the most successful cultural phenomena today. [9] The 2D/3D image of The Matrix is one such hybrid.

Historians of cinema often draw a contrast between the Lumières and Marey. Along with a number of inventors in other countries, all working independently of each other, the Lumières created what we now know as cinema: the effect of motion based on the synthesis of discrete images. Earlier, Muybridge had already developed a way to take successive photographs of a moving object such as a horse; eventually the Lumières and others figured out how to take enough samples that, when projected, they perceptually fuse into continuous motion. Being a scientist, Marey was driven by the opposite desire: not to create a seamless illusion of the visible world but to understand its structure by keeping the successive samples discrete. Since he wanted to be able to compare these samples easily, he perfected a method whereby the successive images of moving objects were combined within a single image, thus making the changes clearly visible.

The hybrid image of The Matrix can in some ways be understood as the synthesis of these two approaches, which for a hundred years had remained in opposition. Like the Lumières, Gaeta aims to create a seamless illusion. At the same time, like Marey, he also wants to be able to edit and sequence the individual recordings.

At the beginning of this article I evoked the notion of uneven development, pointing out that often the inner structure (the "infrastructure") changes completely before the surface (the "superstructure") catches up. What does this idea imply for the future of images, and in particular the 2D/3D hybrids developed by Gaeta and others? As Gaeta pointed out, while his method can be used to make all kinds of images, so far it has been used in the service of realism as it is defined in cinema, i.e., anything the viewer sees has to obey the laws of physics. [10] So in the case of The Matrix, the images still have a traditional "realistic" appearance while internally they are structured in a completely new way. In short, we see the old superstructure still sitting on top of the new infrastructure. What kinds of images would we see if the superstructure were finally to catch up with the infrastructure?

Of course, while the special effects of Hollywood movies have so far followed the constraint of realism, i.e., obeying the laws of physics, they are not exactly the same as before. In order to sell movie tickets, DVDs, and other merchandise, each new special effects film tries to top the previous one by showing something nobody has seen before. In The Matrix it was "bullet time"; in The Matrix Reloaded it was the Burly Brawl scene, in which dozens of identical clones fight Neo. The fact that the image is constructed differently internally does allow for all kinds of new effects; listening to Gaeta, it is clear that for him the key advantage of such images is the possibility they offer for virtual cinematography. That is, where before the camera's movement was limited to a small and well-defined set of moves (pan, dolly, roll), now it can move along any trajectory imaginable for as long as the director wants. Gaeta talks about the Burly Brawl scene in terms of virtual choreography: choreographing both the intricate, long camera moves and all the bodies participating in the fight (all of them digital recreations assembled using Gaeta's method as described above).

According to Gaeta, creating this one scene took about three years. So while in principle Gaeta's method represents the most flexible way yet to recreate visible reality in a computer, it will be years before the method is streamlined and standardized enough for these advantages to become obvious. But when that happens, artists will have an extremely flexible hybrid medium at their disposal: completely virtualized cinema. Rather than expecting any of the present pure forms to dominate the future of visual culture, I think this future belongs to such hybrids. In other words, the images of the future will probably still be photographic, although only on the surface.

[1] Not all of the special effects in The Matrix rely on Gaeta's new process, and of course many other Hollywood films already use some of the same strategies. I decided to focus on his process as used in The Matrix because it articulates a new approach to image construction most systematically, and also because, in contrast to many others in the special effects industry, Gaeta has reflected extensively on the process he developed, coming up with a number of terms to describe its different stages, such as "universal capture" and "image rendering."

[2] Although not everybody would agree with this analysis, I feel that after the end of the 1980s the field slowed down significantly: on the one hand, all the key techniques that can be used to create photorealistic 3D images had already been discovered; on the other hand, the rapid development of computer hardware in the 1990s meant that computer scientists no longer had to develop new techniques to make rendering faster, since the already developed algorithms now ran fast enough.

[3] The terms "reality simulation" and "reality sampling" were made up for this text; the terms "virtual cinema," "virtual human," "universal capture" and "image rendering" belong to John Gaeta.

[4] Therefore, while the article in Wired that positioned Gaeta as a groundbreaking pioneer and a rebel working outside of Hollywood contained the typical journalistic exaggeration, it was not that far from the truth. Steve Silberman, "Matrix 2," Wired 11.05 (May 2003).

[5] The method captures only the geometry and images of an actor's head; body movements are recorded separately using motion capture.

[6] See

[7] John Gaeta, presentation during a workshop on the making of The Matrix, Art Futura 03 festival, Barcelona, October 12, 2003.

[8] J. F. Blinn, "Simulation of Wrinkled Surfaces," Computer Graphics (August 1978): 286-92.

[9] From this perspective, my earlier book The Language of New Media can be seen as a systematic investigation of a particular slice of contemporary culture driven by this hybrid aesthetics: the slice where the logic of the digital networked computer intersects the numerous logics of already established cultural forms.

[10] John Gaeta, a workshop on the making of The Matrix.

Lev Manovich is an artist and new media theorist. He is the author of The Language of New Media (The MIT Press, 2001) as well as over fifty articles which have been published in twenty-eight countries. He is in high demand worldwide as a lecturer on new media. Manovich's awards include: Guggenheim Fellowship 2002-2003, Mellon Fellowship from Cal Arts, 2002 Digital Cultures Fellowship from UC Santa Barbara, and 2002 Fellowship from The Zentrum für Literaturforschung, Berlin.


Monday, June 7, 2010


YouTube has radically changed not only the way we look at video but also the role of the audience and the language of video itself. No longer mere spectators of images created by the Author, YouTube users are increasingly involved in the making of unique and peculiar videos, unexpected "viral" trends that in turn generate new influences, all shared on the Web.

Certainly YouTube is the product of a technological revolution that allows practically anyone to record, film, edit, and perform postproduction tasks with basic, by now almost intuitive, means. But the originality and variety of the products that circulate on video portals (YouTube, but also Vimeo, or the more recent Seesmic) have exceeded all expectations: from extremely basic segments such as the "long portraits," where a person simply stares at the camera, to message chains like the one started by MadV; from "hallucinated" videos made by fans, such as the one in which a whining Chris Crocker tells the world to leave Britney Spears alone, to real music videos that sometimes carried a band to commercial success, as with OK Go…

And the lifespan of a video on the Web doesn't stop there: once it is posted and shared, it can be recast, mashed up, and mixed to create a new piece of art in which the original author's images and purposes are combined with other authors' purposes and means (which could be other images, thematic connections, or even software).

One example, among many, of the artistic and social effects of the "YouTube phenomenon" was the evening dedicated to video triptychs held at Postmasters Gallery in New York. Organized by the art collective Artists Meeting and curated by Thomas Hutchison, Maria Joao Salema, and James Andrews, the event showed a compilation of "accidental video art": video triptychs in which the most diverse products of the Internet were aggregated thanks to custom software, You3b, an Eyebeam project produced by Jeff Crouse and Andrew Mahon.

These three-channel videos, extremely diverse in features and themes, are products of the YouTube mine: "The project takes off from the freedom that you find in YouTube itself, from the eclectic nature of it. These videos are a sort of modern narrative, made not with words but with images. There's a hunger for information, for images, for stories, and there's also a library that is expanding very rapidly," explains Raphaele Shirley, one of the organizers.

"For me, what was interesting about YouTube are just the crazy, fascinating, wild videos that people publish," says James Andrews. "It's a little bit like public access TV, a kind of ad hoc personal media that can be very serious and profound, or can be very absurd and random. My favorite discovery on YouTube was the subculture of people who document their own washing machines, from an aesthetic or technical point of view (and this is a very large subculture, pretty massive at this point). In the end I made a triptych of washing machines, and I'm convinced that on a social level it is in a way connected to the American tradition of tinkering with your own car or motorcycle…"

An interesting aspect of the event is definitely the "authorial shift" of the final works: "This show mutilates the whole idea of ownership," notes James Andrews. "You really can't figure out who on Earth owns these triptychs: you have one set of people who created the triptych software; then you have three different groups of people who potentially created the three videos; you have the three different YouTube accounts that published the videos, which may or may not belong to the same people who created them; you then have the curators, then Artists Meeting, and then Postmasters Gallery. And each of the original videos could be removed from the Net at any time…"

In a sense, this project takes to the extreme not only the concept of authorship, and the fact that new media art often needs teamwork in order to be produced, but also, says James Andrews, "it could simply obliterate the human factor, which I think is even more interesting. No one in particular got credit for anything in the show; it was just something that happened…"

An event in which each viewer, the last link in this long chain, looks at the triptychs and constructs his or her own, very personal narrative: for Raphaele Shirley, "this kind of triptych really lends itself to non-linearity: you can't grasp a single image or a single story and follow it through from beginning to end. It's really about cross-connections, the cross-pollination of ideas, like unconscious thinking and the way you create a dream, or the way you connect data…"
