Friday, June 18, 2010

VITTORE BARONI: FROM MAIL ART TO WEB 2.0, Tatiana Bazzichelli


English translation: Luisa Bertolatti


Vittore Baroni was born in 1956 in Forte dei Marmi, Italy, and lives and works in Viareggio. A music critic and investigator of countercultures, he has been one of the most active operators in the planetary mail art circuit for the past three decades.

Since 1978, he has been promoting exhibitions, events, publications and collective projects on art networking and the net cultures that anticipated the Internet. He has also worked meticulously on visual and sound poetry, street art and comics. I met Vittore in Viareggio, in his house in Via Cesare Battisti, a treasury of wonders for anyone interested in the dynamics of the net and creative correspondence, since he has collected hundreds of materials, envelopes, stamps, records and works from collective projects accumulated over thirty years of postal communication.

We spent a lovely afternoon together, surrounded by the surprises hidden in his archive that often come back to life in order to pleasantly entertain the many guests who visit Vittore and his family (cat and rabbit included) and we thought about the current dynamics of social networking, relating them to the artistic experiences and practices on the net of the past few decades.

Tatiana Bazzichelli: Do you think that the networking platforms defined by the term Web 2.0 (Facebook, Myspace, Youtube etc.) are important in getting younger generations more involved with the concept of networks, or are they the mirror of an involution in net practices? What do you think the term “making the net” means today?

Vittore Baroni: First of all, I think we have to accept the fact that there's a new generation of social networks that have sprouted from a constant evolution of technological and communicative instruments. A little research makes it evident that the phenomenon extends far beyond the best-known social networks; it is so vast that it's difficult to judge its value, or to consider these platforms more or less positive than the more “traditional” networking practices. In a few words, I believe that the presence of these instruments is useful; what becomes decisive is understanding how to use them and for what purpose. Even Ray Johnson, known as the “father” of mail art, didn't just use the post in his networking activities: he also used the telephone, for example, and organized physical encounters with groups of people based on particular strategies. Networkers are people who don't shut themselves in their studios in order to make their art; their primary objective is the desire to build nets and communicate with other people. The act of communication becomes a work of art.

I find it very useful that there are millions of people who are beginning to approach the theme of networking by connecting with one another. It seems obvious to me that the social networks that have had and will have the most success are those where people find a certain amount of practical use. Before the Internet existed, I was part of the circuits of musicians and music collectors, and this network had already expanded worldwide. The Internet, with all its offshoots defined by the term Web 2.0, is an instrument with enormous potential, but, and this seems impossible, we're still not able to use it fully. Every time I come across collective creative projects online, I'm surprised to find that the results are often inferior, in networking terms, to what mail art could achieve with a simple postage stamp.

There were decades of countercultures and political battles during which the most common means of communication was the simple instrument called the “mimeograph”. The flyer handed to people on the street in 1968 (which in Italy was actually 1969), or the fanzines and punkzines of the punk era, produced an efficient and disruptive action in certain circuits. A process such as this should be even more exponential with the Internet, because it reaches a much wider audience. But I believe that we are still not capable of using the instrument efficiently. Maybe we should wait for Web 3.0? Consider that many peer-to-peer networks are still widely used to exchange pornographic material; a large part of the world's population is subject to the digital divide and to problems of basic survival; others seem to prefer an “individual” conception of life to a “social” one. I don't think we've reached an end point and a total maturity of network practices, despite the fact that many think the opposite.

The hope is, as it was in the 1960s when hundreds of thousands of copies of independent magazines were printed, that today new forms of social networks can be developed that are more elastic and really favour creative work. Usually the “inventors” of these platforms are young adults who start doing this kind of thing just for fun, and this is positive. We're still in a very immature phase: being on Facebook is more like going to a piazza and listening to people chatting, with someone who shouts a little louder once in a while. The structure of these networks is not designed to create a constructive dialogue. If tomorrow social networks with fewer restrictions and less rigid rules succeed, that will be a step forward, and perhaps a new counterculture could emerge. In the past these practices arose because “entertainment” was so restricted that subversive content would leap out and become evident more quickly. Today we are saturated with data and voices. On the one hand this is positive, but on the other the message becomes diluted and no one knows how to take part actively.

Why was Luther Blissett so right in the '90s? Because we realised that we were missing a certain mythology of attitudes and ways of thinking; what Franco Battiato described as “a permanent centre of gravity” was missing. Once upon a time you would go to a concert like Woodstock and on the stage you would see “mythological” people who acted as lightning conductors and diffusers of energy. Today in network dynamics we have an explosion of artificial democracy and equality, which creates a total opacity; the subject of individual responsibility tends to crumble into an undifferentiated mass of fragmented data. Once upon a time someone like John Lennon created a song that had meaning because it was created by people like him, but also because it was perceived by an audience that helped distribute it and spread it through people's imagination. If a song of his like “Give Peace a Chance” was sung by millions of people in a piazza during a protest, a virtuous circuit was created that contained both the artist and the public. In the frenzy of equality of Web 2.0, the most satisfied are those who hold power, because the problems are still the same but people have the illusion of full freedom and of the possibility for everyone to have their 15 minutes of fame. I think that current social networks are reality shows for everyone. Now we just have to transform the reality show into a cultural program (which is a difficult task!).

Tatiana Bazzichelli: I wonder whether the network of mail art, and the net practices inspired by it even before the Fluxus experience, can find a place in the current generations of social networks. I am thinking of the fact that mail artists and networkers themselves could use social networks as a territory for artistic criticism. Do you think these practices on the net are reversible or immeasurable?

Vittore Baroni: The creation of mail art coincided with the diffusion of the post as a means of communication in different circles, like that of the New York Correspondence School founded by Ray Johnson, but also those of the visual poets in South America, or of artists behind the Iron Curtain. The genesis coincided with a practical need: the idea of connecting in the simplest and most functional way possible. George Maciunas understood that there was a network of people who shared artistic and creative objectives but were scattered across Europe, America and Japan. It was no surprise that Fluxus came about through a festival in Germany, spinning a web via post and telephone between different people who had common interests. There was an intense activity of epistolary correspondence behind Fluxus. Ray Johnson found in the post and the telephone the most efficient and economical way to create his artistic network, but if he were still alive today he would probably use the Internet too.

In fact the true primary essence of mail art was not the fact of “using the post”, but of sharing projects. No one who practices mail art has an absolute or fetishistic attachment to the post as a medium, and no one refuses different means of communication and new technologies. On the contrary, the computer has proved very useful for keeping archives, managing large databases, laying out and printing your own pages of stamps, and much more. A lot of postal artists, like Piermario Ciani, began using the Commodore 64, the first computers and the fax machine, trying to learn the first programming languages, mostly in order to communicate. Some tried to create “alternative” postal systems for artistic correspondence, like Ulises Carrion and Peter Kuestermann.

When the use of the computer spread, many mail artists began to use email alongside traditional mail, without giving up the pleasure of “physical” manufacture, but using email to send invitations quickly, for daily communication, and so on. Mail art is a non-profit and non-commercial artistic practice, but it can become very expensive when you start communicating with hundreds of people. The natural generational replacement is always moving toward electronic ways of communicating.

To answer your question, I believe that these practices are reversible, within the limits of the fact that in some countries the computer has become the most common means of communication. But for mail artists in Africa or China the computer is still an expensive medium, so the post is still widely used. In my projects of the past few years, I have tried sending mail art invitations via post but also via the Internet, to see how many participants I could involve through these different channels of communication. In 2008 I ran a project on the artist's book, based on the theme of Utopia, sending the invitations through different art websites, like those dedicated to the continuity of Fluxus, or to the creators of stickers or street art. The invitation was open to everyone and bounced around different websites and blogs, with the risk of my receiving an unprecedented amount of work. I received 170 artists' books in total from 26 countries. Of these, about two thirds came from the mail art circuit that I had built over thirty years of correspondence. No more than 50 came from authors who saw the invitation on the Internet.

People on the net are unfamiliar with the mechanism of mail art and perhaps they believe it isn't possible that everyone can take part in a project and everyone gets free documentation. Perhaps they think there's some trick underneath it all. There's a strange discrepancy between “analogue” networking and digital networking. I took part in various creative web projects, conceived with a structure similar to that of mail art, and I realised that the number of participants was considerably reduced compared to the Internet's potential. To create digital networking, paradoxically, much more work is needed, despite the fact that there are more potential users. People are used to web projects offering rapid and immediate fruition; Internet language is quick and intuitive. Participating in a web project can be reduced to writing a single line, or sending a ready-made image. Mail art involves slower and more personal preparation times, but people seem more inclined to use their time creatively, looking forward to the work they will receive in their letterbox in exchange for their own. Mail art is communication, but it is also a gift, a potlatch. The speed of the Internet often leads to a weakening of a message's content.

Tatiana Bazzichelli: How do you explain the diffusion of net art projects in the 90's then?

Vittore Baroni: Thinking about net art, when I go to the various Biennale events or try out projects on the net, I have the impression that these sharing platforms, often conceptually innovative and well built, are actually used by very few people over the course of time. It reminds me of conceptual art: placed side by side with grand enunciations, the work often seemed a little stale. Often net art works, after an initial moment of enthusiasm, have no future on the net; “good games” are invented that remain static at the enunciation stage. They aren't used by a vast audience and (perhaps) they only make the net artist look good, or whoever created the work and exhibited it somewhere. The work remains in the history of net art, but it is poor in its social use and quickly becomes obsolete, until it can no longer be found on the net.

From my point of view net art is an elitist practice, common in the cultural industry, which must continue to produce catalogues and accumulate imaginary capital. I'm almost happy that mail art has not been historicised and studied. Maybe it's not worth much, or maybe there's something uncomfortable in its “openness” and “gratuitousness”, in its pointing a finger at the king's nakedness, which it is best not to talk about too much... The sum of art + Internet should, in theory, trigger revolutionary projects, which keep being delayed and are difficult to concretise.

If we take the field of computer-generated music, for example, it's been at least 30 years since people like Brian Eno began experimenting with robotic music, without any masterpieces having been created. We've gotten used to a world that travels at light speed, but maybe cultural paradigms have gestation periods that don't coincide with our consumerist frenzy. Ever since I was a teenager, I have been fascinated by artistic movements and cultural phenomena of opposition, from Dadaism to Fluxus, from the Beat Generation to the hippies, which brought a fertile movement of strong ideas for change in real life, capable of really giving you something. In the recent years of postmodernism, which I hope are ending, we took elements of modernity and limited ourselves to rearranging them, without really going beyond (post), and without opening ourselves to a new dimension of art thanks to the net.

Perhaps hacking or net art practices have not yet found a way to leave a real “gift” to those who use them. When, as a boy, I looked for underground magazines by sending a few dollars sealed in an envelope to New York, hoping that I would get something in return, the experience enriched me; those materials provoked cultural shocks in me. Until we can create a strong mentality through current media, the result will always be something like Facebook: lots of people brought together, but exchanging poor and superficial content.

Tatiana Bazzichelli: Perhaps what's missing in Web 2.0, and what mail art included, is the creation of a “real” exchange and the generation of a criticism of the medium itself. Perhaps we should try to create strategies to move social network users toward more “traditional” forms of networking?

Vittore Baroni: Well, it would be strange to go back to mailed fanzines instead of blogs... Actually I think the historical phase of mail art ended at least a decade ago; mail art has had its time and we cannot go back and take the same route. But mail art can be useful as historical memory, like the other preceding forms of analogue art. What should be favoured at this point is an attentive analysis and study of the past of networking, its many small and often hidden threads, which could give useful advice to those who work with computers today. Take the case of the forty or more years of the history of mail art, in whose interior humus developed phenomena that few know anything about, like Neoism, Multiple Names, Plagiarism, Impossibleism, Trax and the Luther Blissett project, which represent just the tip of the iceberg. We should have the patience and curiosity to study the origins of artistic networking, taking our example from those few critics and authors who have begun a theoretical reflection on the subject (Chuck Welch, John Held, Craig Saper, not to mention your book on Networking Art), both to avoid rediscovering what has already been discovered and to try to understand how to develop, in digital networks, less superficial sharing mechanisms that aren't so tied to commercial frames of mind.

I hope that in the not so distant future, platforms like Myspace or Second Life, where you are caged in enforced structures and behaviours – personally they feel like “concentration camps” to me – will leave room for other possibilities where you can really feel comfortable. Experiences more akin to mail art are, for example, flash mobs, which use the web to give life to physical meetings, to a common if brief “aesthetic” experience, a bit like the meetings organised by Ray Johnson. In my opinion, we must overcome the distinctions between art and non-art, between personal identity and artistic work, between worker and work, between individuality and collectivity, between virtual and physical encounters. With a few friends I'm trying to move in this direction, through a project that uses the web but requires real-life interventions, designed to involve people who know nothing about net art or hacking or art in general.

The project is called Wandering Places and promotes the collective exploration, convened by individual participants, of concrete places that mean something to those particular people, like a door that opens onto a memory, a dream, a utopia. The first meeting was on top of Mount Borla, in the Alps, in a natural valley where the forest opens up to a circular vision of the sky. These places are documented on a blog on the Ning social network (where Italian mail artists created the extremely active Dododada: http://dododada.ning.com/ ), creating a kind of shared geo-psychic map, where every participant can suggest new situations for meetings or other “wandering places” proposed by others.

“Creative” social networks should be usable by anyone, not just those with specific knowledge of the artistic environment. Luther Blissett was created in this way, as a unique “public” icon that anyone could take on, free of any specific characterisation in the art world. I think there can only be a new generation of artists/networkers when people begin to find pleasure in taking part in different projects, when they find a way to enrich themselves through them. There must be something constructive to exchange and share. It's not always necessary to hide behind nicknames for efficient results; getting together physically, without barriers, is perhaps the true objective of “making the net”. When an experience is created where we find ourselves alone with other professionals of the field, then we know that something is not working. Even though I am against the star system, I realise that today we are missing “reference figures” who act as catalysts in order to coagulate forces.

Tatiana Bazzichelli: In social networks you don't create the platform, but find yourself in someone else's home, trying to furnish it as you prefer and producing content for third parties. Why do you think there are so many users on Facebook today, despite the speed and superficiality of the exchange, whereas on the BBSs we were always so few, even though we talked about ways of thinking and about utopia?

Vittore Baroni: Communication on Facebook is very easy, and at the same time all social networks allow an integrated way of sharing that was impossible before: with your mobile you can film something and put it on Youtube, then link it to Facebook. All this, at the time of the BBSs, was unimaginable. The procedure is immediate and easy: you can promote yourself easily and reach a lot of people with one click. On Facebook everyone is curious about what happened to old classmates, or people they haven't seen for a long time, and through word of mouth the number of users has increased rapidly and enormously. People join out of curiosity, but they stay because the chatting keeps them company and it's also a kind of authorised voyeurism; still, it could turn out to be a nine days' wonder. As soon as a similar but slightly more pleasant and useful network appears, the users will move on. Look at what happened with Myspace. It once served Facebook's function, but now it has become obligatory for anyone who has a band, and often the record companies themselves create pages for their artists.

There are a lot of new ideas for social networks in incubation, waiting either to succeed or to be abandoned. For example, Arturo di Corinto created a great network with the objective of monitoring Italian politicians, to make the dialogue between voters and politicians as transparent as possible. The doubt is whether the project will really be capable of capturing people's interest and making them participate, so that the platform can be updated regularly, the truthfulness of the information verified, and so on. Perhaps a social network about a single politician would be easier to realise, where voters and adversaries alike could confront each other.

Wikipedia is a good example of a virtuous platform, which works with everyone's contribution and provides a useful service. The system works if everyone uses it correctly, starting from an open source concept, if you feel part of it. In many social networks you feel uncomfortable; you have the sensation of being a temporary guest. I believe that from the moment we really feel at home, at ease and certain of the possibility of sharing an experience with others without a Big Brother breathing down our necks, then we can really talk about social networking.

Above copied from: http://www.digicult.it/digimag/article.asp?id=1423

Thursday, June 10, 2010

The Rise of Art World 2.0, Ted Mooney



Two years into what we seem to have agreed, in full supine position, to call the Great Recession, it is clear to almost everyone that something has indeed taken its course, and that in many respected fields of endeavor things will never be the same. As someone who has pursued with equal commitment two parallel careers throughout my life─one in the art world (as an editor, a writer, and now as an educator at Yale’s graduate School of Art), and another in the literary world (as a novelist, essayist, and short-story writer)─I am struck, if not exactly surprised, by the similarity of the changes the recent financial meltdown has wrought on both fields, changes long in development but only now openly validated. I say changes, but in fact they are paradigm shifts, since both the art and literary worlds are undergoing transformations that will prove to be game-changingly radical. This much is certain: what was before, will be no more. The sooner we realize this, the more options we will have in the future.

Talking about this paradigm shift in regard to the art world is strangely difficult, for the very reason that the term “art world,” so casually bandied about by almost everyone, is most often used to refer to something that is at best a suspiciously convenient myth. No such all-encompassing art “world” exists. Most likely it is this misuse of the word “world” that has left the term “art world” itself open to such a wide range of misunderstandings. But in fact “art world” does have a very specific meaning, one quite different from the fuzzy globalist entity most often summoned up by those who use it so indiscriminately. Simply put, the art world consists of all those involved in the commission, creation, valuation, promotion, presentation, sale, criticism, documentation, and preservation of art. And with that established, many other matters grow a good deal clearer.

While we have always had artists in the U.S., for example, the American art world─one including all the elements listed above─is a much more recent development. Borrowing from the nomenclature of the software industry, I will call the U.S. art world’s earliest incarnation Art World 1.0 and locate its emergence somewhere in the mid- to late 1930s, when a number of forces came together in New York to create it. These forces included the sudden arrival here of European artists fleeing the onset of World War II; the convergence in New York of other European emigré artists who had moved to the U.S. much earlier but now joined their fellow exiles in the nascent art capital; and a similar movement of American artists away from the heartland to the growing artistic ferment in the East. In addition to these historical migrations, the federal Works Progress Administration, at that time the largest employer in the post-Depression U.S., provided substantial support for the arts, allowing, for example, Willem de Kooning (who had reached New York in 1927) to earn more than three times the salary of a typical Macy’s employee of the same period, all while painting public art works for which he was paid with federal dollars. What’s more, several other elements of the art world as I have defined it were already in place: among them Alfred Barr’s then artist-friendly MOMA, such prescient commissioners of art work as Peggy Guggenheim, a handful of important galleries, soon followed by art and culture periodicals like The Tiger’s Eye and View—enough of the necessary elements, anyway, to give critical mass to the first genuine U.S. art world.

Money was certainly made within this self-contained enclave, but not very much, and the institutions and collectors who acquired the art works emerging from these artists’ studios did so mainly out of their acute awareness that this was a historic moment, unprecedented in the U.S. Few if any of these collectors imagined that the works they were buying would in a very short time increase astronomically in monetary value, so speculation was a negligible factor. It was sufficient that enough money be circulated through the art scene to keep the artists alive and productive. And with only short periods of stasis, indirection or revolt, the New York art world evolved from that first iteration into what we know today, its characteristic elements continuously shifting in relative importance as the art eco-system grew in volume and self-confidence.

I will leave the reader to decide when exactly the art world’s incremental upgrades occurred─when 1.0 became 1.1, and so on─confining myself instead to a few obviously watershed moments. By the late 1940s and early ’50s, the Abstract Expressionists had become pop-cultural stars whom the average American saw in equal measure as perplexing oddities (“My kid could’ve done that”) and gratifying emblems of postwar American dominance (“Take that, Europe”). Indeed, they became such accepted emblems of a newly prosperous U.S. that the federal government (in the form of the U.S. Information Agency, now known to have functioned abroad as a propaganda arm of the C.I.A.) sent their works on extended worldwide tour as a potent psychological asset in waging the Cold War.

In the late 1950s and early ’60s another major change occurred, as the soul-baring feats of the Abstract Expressionists and their progeny gave way to the cool ironies of artists like Jasper Johns, Robert Rauschenberg, Andy Warhol, and all those other artists so hastily lumped together by the media under the rubric of Pop art. I’ll call this moment the arrival of Art World 1.5, since, in my opinion, it brought us halfway to where we are now. With it came a shift in sensibility that suggested a new detachment of artist from artwork, one that allowed the expansion into that same ironic distance of the other art-world elements, those that till now had usually played a secondary role. The valuation, promotion, presentation, and sale of art began to take on more weight, and the normative practices in these areas showed signs of changing and evolving in ways till then unforeseeable.

Simultaneously, a new breed of collector emerged, exemplified by people like Robert Scull, who made his fortune from a Manhattan taxi fleet inherited from his father-in-law, had no real background in art and simply bought what he liked—in quantity. Pricing became more aggressive, promotion took on a glamour of its own, and the social aspect of the art world veered increasingly toward spectacle. Not only did this period mark the peak of postwar prosperity in this country, but it was also a genuinely thrilling time for U.S. art, one as innovative in its way as that ushered in by the Abstract Expressionists. What’s more─and here Warhol is the obvious example─the subject of much of this art was commerce, money, glamour, and popular culture. So for the first time the economic elements of the art world were unapologetically accorded billing equal to the art and the artists themselves. And with this acceptance, the ethos that has brought us to our present pass was unequivocally established.

To be sure, there were those who worked in explicit rebellion against the overall commercial trend of the art world─the Conceptualists, the Earthwork artists, performance artists, and others─many of whom produced art that, though intended in part to be “uncollectable,” must be accounted major work by any standards. But the expansion and increasing commercialization of the art world─punctuated by the booms and busts intrinsic to any market-linked community─continued apace. Among the innovations that contributed to the art world’s rapid development from the 1960s on were the decreasing cost and consequent proliferation of color reproductions in art magazines, the rise of graduate art-school programs (which implicitly presented art as a solid career path, comparable to, say, dentistry, in the security it offered), the increasing acceptance of the nakedly commercial art fair as a legitimate forum for presenting art, the ascension of the artist super-stars of the 1980s, the exploding resale market for contemporary art at venerable auction houses, the construction boom for contemporary art museums as they became more and more widely perceived as tourist magnets that no respectable mid-sized municipality could do without, and, finally, the vastly accelerated exchange of ideas and images that the Internet allowed. By the time the Museum of Modern Art reopened its doors after its lavish renovations of 2002-04, it is safe to say that Art World 1.9 had reached its apotheosis and begun its ongoing decline. Art World 2.0, a complete reset of the original, with consequences only beginning to be known, was taking shape.

Why do I locate this changeover at that moment? It’s tempting to point to the example of the newly expanded MOMA, a grotesquely misconceived distortion of its former self. As far back as the 1970s there had been talk, both within the museum and outside it, of declaring MOMA a historical museum, a museum precisely of modern art, which is generally seen as having come to a triumphant end with Minimalism. Watching MOMA continue to show contemporary art in its ostentatiously corporate, quintessentially modernist quarters seems to many like watching a 75-year-old man (the Modern opened in 1929, the very year the Great Depression began) attempt a kickflip indy at the local skateboarding park─a sight unseemly at best, but in any case a clear indication that all self-awareness has long since departed the scene. That may sound like a cheap shot, but within it lies a kernel of ineradicable truth. Every three generations (and I’m using “generation” to mean the traditional 25 years, not the five to ten years implied by advertisements and over-wrought publicists) the living memory of how the world was “back then,” what its inhabitants at that time aspired to and how they went about getting it, begins to die off. The number of eyewitnesses rapidly diminishes until we become reliant on second-hand accounts, self-serving memoirs, and conflicting rumor to summon up a vision of the original. Soon nothing can be verified with certainty, the “real” past is lost, and it can only carry on as a simulacrum of itself. Not until then, paradoxically, can the genuinely new, brought into being of its own necessity, be born and thrive on its own terms. Now, for the American art world, that time is upon us.

While it must be emphasized that Art World 2.0 remains in its earliest stage of development, it currently seems characterized by a studious rejection, quiet but steely, of the corporatization of art so enthusiastically and profitably pursued during the Art World 1.9 years. Among my grad students and many of their confreres in their mid-thirties or younger, there is a strong preference for art that in one way or another emphasizes “secrecy,” subversion, withholding, ephemerality, word-of-mouth invitation, sub-visible presence, intimate one-on-one interaction between artist and “viewer,” and strong artist-to-artist dialogue. Where there’s institutional critique (and there’s a lot of it) it’s unlikely to be cast in the flamboyantly confrontational style of, say, Hans Haacke. Instead it might take place within a chosen “major New York museum”—the default term proposed by some leading museums after the almost Talmudic deliberations of their legal departments—among a group of invited participants, the results to be documented, if at all, by the museum’s security cameras. (It seems significant in itself that “surveillance”—both inside museums and elsewhere—is a hot topic among Art World 2.0 artists, who are acutely aware of the demise of privacy and ingenious in their attempts to resuscitate it.)

In addition, widespread revelations about the recent criminal activities of respected banks and international corporations, crimes that have almost without exception gone unpunished, have made virtually all corporate practice suspect to a sizable proportion of Americans, and to many of the Art World 2.0 artists it is, at least for now, anathema. As they attempt to reclaim for artists the freedom and respect once naturally accorded the best art work, independent of the by-now customary financial indices and career-path signposts, they have begun to seek alternative ways of structuring their art world. And they have learned how to adapt for their own purposes the near-universal corporate response to the recent financial meltdown; they have developed ways to eliminate (or at least minimize) the middlemen.

This strategy is only possible because the old Art World, as well as modernism and the various morbid offshoots that followed its demise, is dead to them. They accept this as a fact, but don’t dwell on it, since the corporatization that marked Art World 1.9’s endgame is of little relevance to them, except as a warning. What most concerns these emerging Art World 2.0 artists from a practical standpoint is regaining control of how their work is presented and, given the ever-greater importance of the Internet, how the reproduced images of that work are disseminated. To those ends, many prefer to avoid long-term gallery affiliation altogether, seeking instead the most suitable venue for each new project as it is completed. By stringing together a series of one-shot appearances in places of their own choosing, they are better able to maximize their freedom and shape their work’s development. This may be seen as a direct response to the financial hysteria of the Art World 1.9 years, which led to the virtual extinction (though always with magnificent exceptions) of long-term, even life-long relationships between gallery and artist. In times not nearly as distant as they now seem, galleries saw it as their role to nurture and develop their artists, eventually to arrive at a mutually beneficial outcome through patience and hard work. But the frenetic financial pace of Art World 1.9─whereby new artists would typically be given two or three shows to demonstrate their financial viability before, if their work failed to sell impressively, being summarily dropped─did away with these nurturing relationships, and Art World 2.0 artists have quite sensibly concluded that their best option is to nurture themselves. This shift in thinking explains the primary emphasis Art World 2.0 artists place on building and maintaining their connections with other artists; they want a mutual support structure that they can depend on. 
Not only does this preference mirror their widely shared interest in reconnecting with their audience personally, in many cases on an intimate one-to-one basis, but it also underscores their deep-seated resistance to any form of outside control whatsoever, from any quarter. This insistence by Art World 2.0 artists on setting their own terms is yet another bit of bad news for galleries, who have already begun to see their high-end artists’ new work bypass them altogether, going straight from studio to auction house, where a successful sale can establish an artist’s financial worth much more effectively than a sold-out gallery show, which is by nature not at all transparent. The gallery dealer can and does make use of a whole range of tricks to achieve the desired public perception of a given show’s outcome, including under-the-table discounts, non-existent waiting lists for a particular artist’s work, failure to disclose the prices at which works really sold, and many other sleights of hand. At auction, the process is far more transparent: a work is sold to the highest bidder, conferring on the artist a more certain status and financial validation. Foreign collectors, especially from emerging markets such as Asia, are far more comfortable buying the work of an artist with whom they may not be familiar if that artist has already received the imprimatur of a respected auction house.

Obviously, auction houses are unlikely to accept a consignment from an untested artist, and that fact, along with the innate predisposition of Art World 2.0 artists to avoid entangling commercial alliances, assures that there will be many more artist-organized shows in the future. This polarization of sales venues corresponds, interestingly enough, to the drastically increased concentration of U.S. wealth into far fewer hands, beginning with Ronald Reagan’s election in 1980 and grossly accelerated during the Bush years. I have heard more than a few highly informed art-world players speculate pessimistically about the continued viability of art galleries in Art World 2.0. Personally, I think this is more a form of complaint than prognostication, but the possibility cannot be dismissed out of hand. At the very least it is a recognition, however indirect, that the reset implicit in the arrival of Art World 2.0 has already occurred.

What direction Art World 2.0 will eventually take cannot, of course, be known at this early date, but its initial aspirations are clear. Precisely because the monetary valuation of art, while based on some established indices, is at base irrational and subjective to the point of arbitrariness, the artist who is alert to these matters can work with a freedom unavailable to virtually all other workers in all other professions. This freedom was highly valued in the early stages of Art World 1.0─not only by artists but also by those composing the other necessary elements of the art world eco-system. Somewhere along the way, however, that freedom was lost to many artists as the art-world “support system” arrogated to itself more and more power, which came to influence artists themselves to an equally inflated degree. Those days may not be entirely over, but Art World 2.0 has arrived with the express intention of reclaiming its freedom and resisting outside influences, especially institutional ones, with all the energy and inventiveness its members can bring to bear.

They have already dismissed any notion of a master scenario for art’s development, simply by virtue of their arrival. They have the tools and weapons to take back their freedom, with its mix of blessings. Now we will see if they have the will.

Copyright © 2010 by Ted Mooney

Author: Ted Mooney was a senior editor at Art in America magazine for more than 30 years and now teaches a graduate seminar at Yale University’s School of Art. He is also the author of three award-winning novels, as well as a number of essays and shorter works of fiction. His most recent novel, to be published by Alfred A. Knopf on May 11, 2010, is titled The Same River Twice. A video trailer for this book was created in collaboration with new-media artist John Gara and can be found at http://tiny.cc/TSRT-video.

above copied from: http://artistorganizedart.org/commons/

Tuesday, June 8, 2010

Image Future, Lev Manovich




Uneven Development

What kinds of images will dominate visual culture a number of decades from now? Will they still be similar to the typical image that surrounds us today—photographs that are digitally manipulated and often combined with various graphical elements and type? Or will future images be completely different? Will photographic code fade away in favor of something else?

There are good reasons to assume that the future images would be photograph-like. Like a virus, a photograph turned out to be an incredibly resilient representational code: it survived waves of technological change, including computerization of all stages of cultural production and distribution. The reason for this persistence of photographic code lies in its flexibility: photographs can be easily mixed with all other visual forms—drawings, 2D and 3D designs, line diagrams, and type. As a result, while photographs truly dominate contemporary visual culture, most of them are not pure photographs but various mutations and hybrids: photographs which went through various filters and manual adjustments to achieve a more stylized look, a more flat graphic look, more saturated color, etc.; photographs mixed with design and type elements; photographs which are not limited to the part of the spectrum visible to a human eye (night vision, X-ray); simulated photographs done with 3D computer graphics; and so on. Therefore, while we can say that today we live in a "photographic culture," we also need to start reading the word "photographic" in a new way. "Photographic" today is really photo-GRAPHIC, the photo providing only an initial layer for the overall graphical mix.

One way in which change happens in nature, society, and culture is inside-out. The internal structure changes first, and this change affects the visible skin only later. For instance, according to the Marxist theory of historical development, infrastructure (i.e., the mode of production in a given society, also called the "base") changes well before superstructure (ideology and culture in this society). In a different example, think of technology design in the twentieth century: typically, a new type of machine was at first fitted within an old, familiar skin (for instance, early twentieth-century cars emulated the form of horse carriages). The familiar McLuhan idea that new media first emulate old media is another example of this type of change. In this case, a new mode of media production, so to speak, is first used to support old structures of media organization, before the new structure emerges. For instance, the first typeset books were designed to emulate hand-written books; cinema first emulated theater; and so on.

This concept of uneven development can be useful in thinking about the changes in contemporary visual culture. Since its beginnings fifty years ago, the computerization of photography (and cinematography) has by now completely changed the internal structure of a photographic image; yet its "skin," i.e. the way the image looks, still largely remains the same. It is therefore possible that at some point in the future the "skin" of an image will also become completely different, but this has not happened yet. So we can say that at present our visual culture is characterized by a new computer "base" and an old photographic "superstructure."

The Matrix trilogy of films provides us with a very rich set of examples perfect for thinking further about these issues. The trilogy is an allegory about how its visual universe is constructed. That is, the films tell us about the Matrix, the virtual universe which is maintained by computers—and of course, visually the images of the Matrix which we the viewers see in the films were all indeed assembled with the help of software (the animators sometimes used Maya but mostly relied on custom-written programs). So there is a perfect symmetry between us, the viewers of a film, and the people who live inside the Matrix—except while the computers running the Matrix are capable of doing it in real time, most scenes in each of The Matrix films took months and even years to put together. (So The Matrix can be also interpreted as the futuristic vision of computer games at a point in a future when it would become possible to render The Matrix-style visual effects in real time.)

The key to the visual universe of The Matrix trilogy is the new set of computer graphic processes that over the years were developed by John Gaeta and his colleagues at ESC Entertainment. Gaeta coined names for these processes: "virtual cinema," "virtual human," "universal capture," "image-based rendering," and others. Together, these processes represent a true milestone in the history of computer-driven special effects. They take to their logical conclusion the developments of the 1990s, such as motion capture, and simultaneously open a new stage. [1] We can say that with The Matrix, the old "base" of photography has finally been completely replaced by a new computer-driven one. What remains to be seen is how the "superstructure" of a photographic image—what it represents and how—will change to accommodate this "base."




Reality Simulation Versus Reality Sampling

In order to understand better the significance of Gaeta's method, let us briefly run through the history of 3D photo-realistic image synthesis and its use in the film industry. In 1963, Lawrence G. Roberts (who later in the 1960s became one of the key people behind the development of ARPANET but at that time was a graduate student at MIT) published a description of a computer algorithm to construct images in linear perspective. These images represented objects through lines; in the contemporary language of computer graphics they can be called "wire frames." Approximately ten years later, computer scientists designed algorithms that allowed for the creation of shaded images (so-called Gouraud shading and Phong shading, named after the computer scientists who created the corresponding algorithms). From the middle of the 1970s to the end of the 1980s, the field of 3D computer graphics went through rapid development. Every year new fundamental techniques were arrived at: transparency, shadows, image mapping, bump texturing, particle systems, compositing, ray tracing, radiosity, and so on. [2] By the end of this creative and fruitful period in the history of the field, it was possible to use a combination of these techniques to synthesize images of almost every subject that often were not easily distinguishable from traditional cinematography.
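The shading models named above all reduce to the same basic move: computing a point's brightness from the angles between surface normal, light direction, and viewer. Below is a minimal sketch in the spirit of the Phong model; all function names and constants are illustrative inventions for this example, not taken from any actual renderer.

```python
# Minimal sketch of per-point shading in the spirit of the Phong model:
# a diffuse term from the angle between surface normal and light direction,
# plus a specular term from the reflection vector. All vectors are unit 3-tuples.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(light, normal):
    # Reflect the light direction about the surface normal.
    d = 2 * dot(light, normal)
    return tuple(d * n - l for l, n in zip(light, normal))

def phong_intensity(normal, to_light, to_eye, kd=0.7, ks=0.3, shininess=16):
    diffuse = kd * max(0.0, dot(normal, to_light))
    specular = ks * max(0.0, dot(reflect(to_light, normal), to_eye)) ** shininess
    return diffuse + specular

# A surface facing straight up, lit and viewed from directly above:
print(round(phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1)), 2))
```

Each published technique in the list above is, in effect, a cheaper-to-compute refinement of this kind of approximation rather than a simulation of actual optics.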

All this research was based on one fundamental assumption: in order to re-create an image of reality identical to the one captured by a film camera, we need to systematically simulate the actual physics involved in the construction of this image. This means simulating the complex interactions between light sources, the properties of different materials (cloth, metal, glass, etc.), and the properties of physical cameras, including all their limitations such as depth of field and motion blur. Since it was obvious to computer scientists that if they exactly simulated all this physics, a computer would take forever to calculate even a single image, they put their energy into inventing various shortcuts which would create sufficiently realistic images while involving fewer calculation steps. So in fact each of the techniques for image synthesis I mentioned in the above paragraph is one such "hack"—a particular approximation of a particular subset of all possible interactions between light sources, materials, and cameras.

This assumption also means that you are re-creating reality step-by-step, from scratch. Every time you want to make a still image or an animation of some object or scene, the story of creation from the Bible is being replayed.

(I imagine God creating the Universe by going through the numerous menus of a professional 3D modeling, animation, and rendering program such as Maya. First he has to make all the geometry: manipulating splines, extruding contours, adding bevels...Next for every object and creature he has to choose the material properties: specular color, transparency level, image, bump, and reflection maps, and so on. He finishes one page of menus, wipes his forehead, and starts working on the next menu page. Now on to defining the lights: again, dozens of menu options need to be selected. He renders the scene, looks at the result, and admires his creation. But he is far from being done: the universe he has in mind is not a still image but an animation, which means that the water has to flow, the grass and leaves have to move under the blow of the wind, and all the creatures also have to move. He sighs and opens another set of menus where he has to define the parameters of algorithms that simulate the physics of motion. And on, and on, and on. Finally the world itself is finished and it looks good; but now God wants to create the Man so he can admire his creation. God sighs again, and takes from the shelf a set of Maya manuals...)

Of course we are in a somewhat better position than God was. He was creating everything for the first time, so he could not borrow things from anywhere. Therefore everything had to be built and defined from scratch. But we are not creating a new universe but instead visually simulating a universe that already exists, i.e. physical reality. Therefore computer scientists working on 3D computer graphics techniques realized early on that in addition to approximating the physics involved they could sometimes take another shortcut. Instead of defining something from scratch through algorithms, they could simply sample it from existing reality and incorporate these samples into the construction process.

Examples of the application of this idea are the techniques of texture mapping and bump mapping, which were introduced already in the second part of the 1970s. With texture mapping, any 2D digital image—which can be a close-up of some texture such as wood grain or bricks, but which can also be anything else, for instance a logo, a photograph of a face or of clouds—is mathematically wrapped around virtual geometry. This is a very effective way to add the visual richness of the real world to a virtual scene. Bump texturing works similarly, but in this case the 2D image is used as a way to quickly add complexity to the geometry itself. For instance, instead of having to manually model all the little cracks and indentations which make up the 3D texture of a wall made from concrete, an artist can simply take a photograph of an existing wall, convert it into a grayscale image, and then feed this image to the rendering algorithm. The algorithm treats the grayscale image as a depth map, i.e. the value of every pixel is interpreted as the relative height of the surface. So in this example, light pixels become points on the wall that are a little in front while dark pixels become points that are a little behind. The result is an enormous saving in the time necessary to recreate a particular but very important aspect of our physical reality: the slight and usually regular 3D texture found in most natural and many human-made surfaces, from the bark of a tree to a woven cloth.
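The depth-map reading of a grayscale image described above can be sketched in a few lines. This is a toy illustration with hypothetical names: it shades a single row of height values using finite differences, whereas real renderers use the height map's gradients to perturb surface normals in full 3D.

```python
# Sketch of the bump-mapping idea: a grayscale image is read as a height map,
# and the local slope (finite differences between neighboring pixels) perturbs
# the shading, so flat geometry appears cracked and indented.

def bump_shade(heights, base=0.5, strength=0.25):
    # heights: rows of values in [0, 1]; returns a shaded image of the same size.
    shaded = [[base] * len(row) for row in heights]
    for y, row in enumerate(heights):
        for x in range(len(row) - 1):
            # Slope along x, lit as if the light comes from the left:
            # a slope rising to the right faces the light and brightens,
            # a falling slope faces away from it and darkens.
            slope = row[x + 1] - row[x]
            shaded[y][x] = min(1.0, max(0.0, base + strength * slope))
    return shaded

# A single "crack" in an otherwise flat concrete wall:
wall = [[1.0, 1.0, 0.0, 1.0, 1.0]]
print(bump_shade(wall)[0])  # → [0.5, 0.25, 0.75, 0.5, 0.5]
```

The darkened and brightened edges around the crack are exactly the visual cue that lets flat geometry pass for sculpted surface.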

Other 3D computer graphics techniques based on the idea of sampling existing reality include reflection mapping and 3D digitizing. Despite the fact that all these techniques have been widely used ever since they were invented, many people in the field (as far as I can see) always felt that they were cheating. Why? I think this feeling was there because the overall conceptual paradigm for creating photorealistic computer graphics was to simulate everything from scratch through algorithms. So if you had to use the techniques based on directly sampling reality, you somehow felt that this was just temporary—because the appropriate algorithms were not yet developed or because the machines were too slow. You also had this feeling because once you started to manually sample reality and then tried to include these samples in your perfect algorithmically defined image, things rarely would fit exactly right, and painstaking manual adjustments were required. For instance, texture mapping would work perfectly if applied to a flat surface, but if the surface was curved, inevitable distortion would occur.

(I am using "we" here and in other places in this text because I spent approximately seven years working professionally in the field of 3D computer animation between 1984 and 1992, so I still feel a certain identification with this field. At the IMAGINA 2003 festival in Barcelona I met John Gaeta and Greg Juby from ESC, who were there to lecture on the making of The Matrix. Slowly it became clear that the three of us were connected by multiple threads. In 1984 I went to work for a company in New York called Digital Effects that at the time was one among seven companies in the world focused on 3D computer animation for television and film. Company president Jeff Kleiser later founded another company—Kleiser-Walczak—where Greg Juby worked for a few years in the 1990s. Juby graduated from Syracuse University where—as we discovered over dinner—he was my student in the very first university class in digital arts I ever taught (1992). While working at Kleiser's company, Juby met John Gaeta and eventually went to work for him at ESC. Finally, it also turned out that before we turned to computer graphics, both Gaeta and I had been students at New York University film school.)

Throughout the 1970s and 1980s the "reality simulation" paradigm and the "reality sampling" paradigm co-existed side-by-side. More precisely, as I suggested above, the sampling paradigm was "embedded" within the reality simulation paradigm. It was common sense that the way to create photorealistic images of reality was to simulate its physics as precisely as one could. Sampling existing reality and then adding these samples to a virtual scene was a trick, a shortcut within the otherwise honest game of simulation.




"Total Capture": Building The Matrix

So far we have looked at the paradigms of the 3D computer graphics field without considering the uses of the simulated images. So what happens if you want to incorporate photorealistic images into a film? This introduces a new constraint. Not only does every simulated image have to be consistent internally, with the cast shadows corresponding to the light sources, and so on, but now it also has to be consistent with the cinematography of a film. The simulated universe and the live-action universe have to match perfectly (I am talking here about the "normal" use of computer graphics in narrative films and not the more graphical aesthetics of TV graphics and music videos, which often deliberately juxtapose different visual codes). As can be seen in retrospect, this new constraint eventually changed the relationship between the two paradigms in favor of the sampling paradigm. But this is only visible now, after The Matrix films made the sampling paradigm the cornerstone of their visual universe. [3]

At first, when filmmakers started to incorporate synthetic 3D images in films, this did not have any effect on how people thought about 3D image synthesis. The first feature film that had 3D computer images was Looker (1981). Throughout the 1980s, a number of films were made which used computer images but always only as a very small element within the overall film narrative (Tron, which was released in 1982 and which can be compared to The Matrix, since its universe is situated inside a computer and created through computer graphics, was an exception). For instance, one of the Star Trek films (Star Trek II: The Wrath of Khan, 1982) contained a scene of a planet coming to life; it was created using the very first particle system. But this was a single scene, and it had no interaction with any other scenes in the film.

In the early 1990s, the situation started to change. With pioneering films such as The Abyss (James Cameron, 1989), Terminator 2: Judgment Day (James Cameron, 1991), and Jurassic Park (Steven Spielberg, 1993), computer-generated characters became the key protagonists of film narratives. This meant that they would appear in dozens or even hundreds of shots throughout a film, and that in most of these shots computer characters would have to be integrated with real environments and human actors captured via live-action photography (or what in the business is called a "live plate"). Examples are the T-1000 cyborg character in Terminator 2, or the dinosaurs in Jurassic Park. These computer-generated characters are situated inside the live-action universe (obtained by sampling physical reality via a 35mm film camera). The simulated world is located inside the captured world, and the two have to match perfectly.

As I pointed out in The Language of New Media in the discussion of compositing, perfectly aligning elements that come from different sources is one of the fundamental challenges of computer-based realism. Throughout the 1990s, filmmakers and special effects artists dealt with this challenge using a variety of techniques and methods. What Gaeta realized earlier than the others is that the best way to align the two universes of live action and 3D computer graphics was to build a single new universe. [4]

Rather than treating reality sampling as just one technique to be used along with many other "proper" algorithmic techniques of image synthesis, Gaeta turned it into the key foundation of his process. The process systematically takes physical reality apart and then systematically reassembles the elements into a virtual computer-based representation. The result is a new kind of image that has photographic/cinematographic appearance and detail yet internally is structured in a completely different way.

How does the process work? The geometry of an actor's head is captured with the help of a 3D scanner. Next, an actor's performance is recorded using three high-resolution cameras. This includes everything an actor will say in a film and all possible facial expressions. [5] (During the production the studio was capturing over five terabytes of data each day.) Next, special algorithms are used to align the three images by tracking a number of points on the face in order to stitch the three images into one. This new image is then mapped onto the geometry model. The information in the image is used not only as a texture map but also as a kind of bump map to transform locally the geometry of the model, in correspondence with facial movements. The end result is a perfect reconstruction of the captured performance, now available as 3D computer graphics data—with all the advantages that come from having such a representation.

This process is significantly different from the commonly accepted methods used to create computer-based special effects, namely "keyframing" and physically-based modeling. With the first method, an animator specifies the key positions of a 3D model, and the computer calculates in-between frames. With the second method, all the animation is automatically created by software that simulates the physics underlying the movement. (This method thus represents a particular instance of the "reality simulation" paradigm I already discussed.) For instance, to create a realistic animation of a moving creature, the programmers model its skeleton, muscles, and skin, and specify the algorithms that simulate the actual physics involved. Often the two methods are combined: for instance, physically based modeling can be used to animate a running dinosaur while manual animation can be used for shots where the dinosaur interacts with human characters.
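The keyframing method described above has a simple computational core: the animator fixes values at the key frames, and the computer interpolates the in-between frames. A minimal sketch with linear interpolation (production systems use splines and easing curves; the function name is an illustrative invention):

```python
# Keyframing in miniature: given (frame_number, value) pairs set by an
# animator, compute any in-between frame by linear interpolation.

def inbetween(keyframes, frame):
    """keyframes: list of (frame_number, value) pairs, sorted by frame."""
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0.0 at the first key, 1.0 at the second
            return v0 + t * (v1 - v0)
    raise ValueError("frame outside keyframe range")

# Two keys: a value of 0.0 at frame 0 and 10.0 at frame 10.
keys = [(0, 0.0), (10, 10.0)]
print([inbetween(keys, f) for f in (0, 2, 5, 10)])  # → [0.0, 2.0, 5.0, 10.0]
```

The contrast with Gaeta's method is then easy to state: here the computer fills gaps between hand-set poses, whereas universal capture records whole performances as data.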

At the time of this writing, the most impressive achievement in physically-based modeling was the battle in The Lord of the Rings: The Return of the King (Peter Jackson, 2003), which involved tens of thousands of virtual soldiers all driven by Massive software. [6] Similar to the Non-human Players (or bots) in computer games, each virtual soldier was given the ability to "see" the terrain and other soldiers, a set of priorities, and an independent "brain," i.e. an A.I. program which directs a character's actions based on the perceptual inputs and priorities. But because Massive software, in contrast to the A.I. of games, does not have to run in real time, it can create scenes with hundreds of thousands of realistically behaving agents (one commercial created with the help of Massive software featured 146,000 virtual characters).
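The perceive-then-act loop attributed to each virtual soldier can be shown in miniature. This is purely illustrative (Massive's actual "brains" are fuzzy-logic networks far richer than this); the names and the single move-toward-nearest-enemy priority are inventions for the sketch:

```python
# Toy agent simulation in the spirit of crowd software: each agent
# "perceives" the other agents and acts on a simple priority
# (move one unit toward the nearest enemy).
import math

def step(agents):
    """agents: list of dicts with 'pos' (x, y) and 'side'. One simulation tick."""
    new_positions = []
    for a in agents:
        enemies = [b for b in agents if b["side"] != a["side"]]
        target = min(enemies, key=lambda b: math.dist(a["pos"], b["pos"]))
        dx = target["pos"][0] - a["pos"][0]
        dy = target["pos"][1] - a["pos"][1]
        d = math.hypot(dx, dy) or 1.0  # avoid division by zero when co-located
        new_positions.append((a["pos"][0] + dx / d, a["pos"][1] + dy / d))
    for a, p in zip(agents, new_positions):  # update all agents simultaneously
        a["pos"] = p

armies = [{"pos": (0.0, 0.0), "side": "A"}, {"pos": (5.0, 0.0), "side": "B"}]
step(armies)
print(armies[0]["pos"])  # → (1.0, 0.0)
```

Because every agent decides independently from its own perceptions, the loop scales to arbitrarily large crowds; when it need not run in real time, only rendering budget limits the army's size.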

Gaeta's method uses neither manual animation nor simulation of the underlying physics. Instead, it directly captures reality, including color, texture and movement. Short sequences of the actor's performances are encoded as 3D computer animations; these animations form a library from which the filmmakers can then draw as they compose a scene. The analogy with musical sampling is obvious here. As Gaeta pointed out, his team never used manual animation to try to tweak the motion of a character's face; however, just as a musician may, they would often "hold" a particular expression before going to the next one. [7] This suggests another analogy—editing videotape. But this is second-degree editing, so to speak: instead of simply capturing segments of reality on video and then joining them together, Gaeta's method produces complete virtual recreations of particular phenomena— self-contained micro-worlds—which can be then further edited and embedded within a larger 3D simulated space.




"Image Rendering": Reality Re-assembled

Such a method combines the best of both worlds: physical reality as captured by lens-based cameras, and synthetic 3D computer graphics. While it is possible to recreate the richness of the visible world through manual painting and animation, as well as through various computer graphics techniques (texture mapping, bump mapping, physical modeling, etc.), it is expensive in terms of the labor involved. Even with physically based modeling techniques, countless parameters have to be tweaked before the animation looks right. In contrast, capturing visible reality through a lens on film, tape, DVD-R, computer hard drive, or other media is cheap: just point the camera and press the "record" button.

The disadvantage of such recordings is that they lack the flexibility demanded by contemporary remix culture. This culture demands not self-contained aesthetic objects or self-contained records of reality but smaller units—parts that can be easily changed and combined with other parts in endless combinations. However, because the lens-based recording process flattens the 3-D semantic structure of reality, converting a space filled with discrete objects into a flat field of pixels, any kind of editing operation—deleting objects, adding new ones, compositing, etc.—becomes quite difficult.

In contrast, 3D computer-generated worlds have the exact flexibility one would expect from media in the Information Age. (It is therefore not accidental that 3D computer representation—along with hypertext and other new computer-based data representation methods—was conceptualized in the same decade when the transformation of advanced industrialized societies into information societies became visible.) In a 3D computer-generated world everything is discrete: objects are defined by points described in terms of their XYZ coordinates; other properties of objects such as color, transparency, and reflectivity are similarly described in terms of discrete numbers. To duplicate an object a hundred times requires only a few mouse clicks or typing a short command on a command line; similarly, all other properties of a world can always be easily changed. Just as a sequence of genes contains the code that is expanded into a complex organism, a compact description of a 3D world can be quickly transmitted through the network, with the client computer reconstructing the full world (this is how online multi-player computer games and simulators work).
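The discreteness described above can be made concrete: an object is just numbers (XYZ points plus named properties), so duplicating it a hundred times, or recoloring every copy, is a trivial operation on data. A minimal sketch (the object structure is an illustrative invention, not any particular file format):

```python
# An "object" as discrete data: eight corner points of a unit cube plus
# named properties, exactly the kind of description a 3D program stores.
cube = {
    "points": [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    "color": (200, 200, 200),
    "transparency": 0.0,
}

# A hundred copies, each shifted along the x-axis — a few lines of data
# manipulation rather than a hundred acts of modeling.
copies = [
    {**cube, "points": [(x + i * 2, y, z) for (x, y, z) in cube["points"]]}
    for i in range(100)
]
print(len(copies), len(copies[99]["points"]))  # → 100 8
```

The compactness is the point: transmitting this description takes a few hundred bytes, and the receiving machine reconstructs the scene, just as the paragraph's networked-game example describes.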

Beginning in the late 1970s, when James Blinn introduced texture mapping [8], computer scientists, designers, and animators gradually expanded the range of information that can be recorded from the real world and then incorporated into a computer model. Until the early 1990s this information mostly concerned the appearance of objects: color, texture, light effects. The next significant step was probably the development of motion capture, which during the first half of the 1990s was quickly adopted in the movie and game industries. Now computer-synthesized worlds relied not only on sampling the appearance of the real world but also on recordings of the movements of animals and humans. Building on all these techniques, Gaeta's method takes them to a new stage: capturing just about everything that can at present be captured and then reassembling the samples to create a digital (and thus completely malleable) recreation. Put in a larger context, the resulting 2D/3D hybrid representation perfectly fits the most progressive trends in contemporary culture, which are all based on the idea of a hybrid.




The New Hybrid

It is my strong feeling that the emerging "information aesthetics" (i.e., the new cultural features specific to information society) has or will have a very different logic from Modernism. The latter was driven by a strong desire to erase the old—visible as much in the avant-garde artists' (particularly the Futurists') statements that museums should be burned, as in the dramatic destruction of all social and spiritual realities of many people in Russia after the 1917 revolution, and in other countries after they became Soviet satellites after 1945. Culturally and ideologically, Modernists wanted to start with a tabula rasa, radically distancing themselves from the past. It was only in the 1960s that this move started to feel inappropriate, as manifested both in the loosening of ideology in Communist countries and in the beginnings of a new Postmodern sensibility in the West. To quote the title of a famous book by Robert Venturi et al. (published in 1972, it was the first systematic manifestation of the new sensibility), Learning from Las Vegas involved admitting that organically developing vernacular cultures involved bricolage and hybridity, rather than the purity seen, for instance, in the "international style" still practiced by architects world-wide at that time. Driven less by the desire to imitate vernacular cultures and more by the new availability of previous cultural artifacts stored on magnetic and soon digital media, in the 1980s commercial culture in the West systematically replaced purity with stylistic heterogeneity and montage. Finally, when the Soviet Empire collapsed, Postmodernism won the world over.

Today we face a very real danger of being imprisoned by a new "international style"—something which we can call "global style." Cultural globalization, of which cheap international flights and the Internet are the two most visible carriers, erases certain cultural specificities with an energy and speed impossible for Modernism. Yet we also witness today a different logic at work: the desire to creatively place together old and new, local and transnational, in various combinations. It is this logic, for instance, which has made cities such as Barcelona (where I talked with John Gaeta in the context of the Art Futura 2003 festival, which led to this article) such a "hip" and "in" place today. All over Barcelona, architectural styles of many past centuries co-exist with new, "cool" spaces of bars, hotels, museums, and so on. Medieval meets multi-national, Gaudí meets Dolce and Gabbana, Mediterranean time meets Internet time. The result is the incredible sense of energy which one feels physically just walking along the street. It is this hybrid energy which, in my view, characterizes the most successful cultural phenomena today. [9] The 2D/3D image of The Matrix is one such hybrid.

Historians of cinema often draw a contrast between the Lumières and Marey. Along with a number of inventors in other countries all working independently of each other, the Lumières created what we now know as cinema—the effect of motion based on the synthesis of discrete images. Earlier, Muybridge had already developed a way to take successive photographs of a moving object such as a horse; eventually the Lumières and others figured out how to take enough samples so that, when projected, they perceptually fuse into continuous motion. Being a scientist, Marey was driven by the opposite desire: not to create a seamless illusion of the visible world but rather to understand its structure by keeping subsequent samples discrete. Since he wanted to be able to easily compare these samples, he perfected a method whereby the subsequent images of moving objects were combined within a single image, thus making the changes clearly visible.

The hybrid image of The Matrix can in some ways be understood as the synthesis of these two approaches, which for a hundred years remained in opposition. Like the Lumières, Gaeta's goal is to create a seamless illusion. At the same time, like Marey, he also wants to be able to edit and sequence the individual recordings.

At the beginning of this article I evoked the notion of uneven development, pointing out that often the inside structure ("infrastructure") completely changes before the surface ("superstructure") catches up. What does this idea imply for the future of images, and in particular for 2D/3D hybrids as developed by Gaeta and others? As Gaeta pointed out, while his method can be used to make all kinds of images, so far it has been used in the service of realism as it is defined in cinema—i.e., anything the viewer sees has to obey the laws of physics. [10] So in the case of The Matrix, its images still have a traditional "realistic" appearance while internally they are structured in a completely new way. In short, we see the old "superstructure" still sitting on top of the new "infrastructure." What kinds of images would we see if the superstructure finally caught up with the infrastructure?

Of course, while the special effects of Hollywood movies so far follow the constraint of realism, i.e. obeying the laws of physics, they are not exactly the same as before. In order to sell movie tickets, DVDs, and all other merchandise, each new special effects film tries to top the previous one in terms of showing something that nobody has seen before. In The Matrix it was "bullet time"; in The Matrix Reloaded it was the Burly Brawl scene where dozens of identical clones fight Neo. The fact that the image is constructed differently internally does allow for all kinds of new effects; listening to Gaeta, it is clear that for him the key advantage of such images is the possibilities they offer for virtual cinematography. That is, where before camera movement was limited to a small and well-defined set of moves—pan, dolly, roll—now the camera can move along any trajectory imaginable for as long as the director wants. Gaeta talks about the Burly Brawl scene in terms of virtual choreography: choreographing both the intricate and long camera moves and all the bodies participating in the fight (all of them digital recreations assembled using Gaeta's method as described above).

According to Gaeta, creating this one scene took about three years. So while in principle Gaeta's method represents the most flexible way to recreate visible reality in a computer so far, it will be years before this method is streamlined and standardized enough for these advantages to become obvious. But when it happens, the artists will have an extremely flexible hybrid medium at their disposal: completely virtualized cinema. Rather than expecting that any of the present pure forms will dominate the future of visual culture, I think this future belongs to such hybrids. In other words, the future images would probably be still photographic—although only on the surface.







[1] Not all of the special effects in The Matrix rely on the new process by Gaeta, and of course many other Hollywood films already use some of the same strategies. I decided to focus on his process as it was used in The Matrix because it articulates a new approach to image construction most systematically—and also because, in contrast to many others in the special effects industry, Gaeta has extensively reflected on the process he developed, coming up with a number of terms to describe its different stages, such as "universal capture" and "image rendering."

[2] Although not everybody would agree with this analysis, I feel that after the end of the 1980s the field significantly slowed down: on the one hand, all the key techniques which can be used to create photorealistic 3D images had already been discovered; on the other hand, the rapid development of computer hardware in the 1990s meant that computer scientists no longer had to develop new techniques to make rendering faster, since the already developed algorithms would now run fast enough.

[3] The terms "reality simulation" and "reality sampling" were made up for this text; the terms "virtual cinema," "virtual human," "universal capture" and "image rendering" belong to John Gaeta.

[4] Therefore, while the article in Wired which positioned Gaeta as a groundbreaking pioneer and as a rebel working outside of Hollywood contained the typical journalistic exaggeration, it was not that far from the truth. Steve Silberman, "Matrix 2," Wired 11.05 (May, 2003).

[5] The method captures only the geometry and images of an actor's head; body movements are recorded separately using motion capture.

[6] See www.massivesoftware.com.

[7] John Gaeta, presentation during a workshop on the making of The Matrix, Art Futura 03 festival, Barcelona, October 12, 2003.

[8] J. F. Blinn, "Simulation of Wrinkled Surfaces," Computer Graphics (August 1978): 286-92.

[9] From this perspective, my earlier book The Language of New Media can be seen as a systematic investigation of a particular slice of contemporary culture driven by this hybrid aesthetics: the slice where the logic of the digital networked computer intersects the numerous logics of already established cultural forms.

[10] John Gaeta, a workshop on the making of The Matrix.



Lev Manovich is an artist and new media theorist. He is the author of The Language of New Media (The MIT Press, 2001) as well as over fifty articles which have been published in twenty-eight countries. He is in high demand worldwide as a lecturer on new media. Manovich's awards include: Guggenheim Fellowship 2002-2003, Mellon Fellowship from Cal Arts, 2002 Digital Cultures Fellowship from UC Santa Barbara, and 2002 Fellowship from The Zentrum für Literaturforschung, Berlin.


Above copied from: http://www.braintrustdv.com/essays/image-future.html

Monday, June 7, 2010

YOU TUBE AND THE ACCIDENTAL VIDEOART, Monica Ponzini



YouTube has radically changed not only the way we look at video, but also the role of the audience and the language of video itself. No longer mere spectators of images created by an Author, YouTube users are increasingly involved in the making of unique and peculiar videos, unexpected “viral” trends that in turn generate new influences – all shared on the Web.

Certainly YouTube is the product of a technological revolution that allows practically anyone to record, film, edit and perform postproduction tasks with basic – by now almost intuitive – means. But the originality and variety of the products that circulate on video portals (YouTube, but also Vimeo, or the more recent Seesmic) has exceeded all expectations. From extremely basic segments such as the “long portraits”, where a person stares at the camera, to message chains like the one started by MadV; from “hallucinated” fan videos, where a whining Chris Crocker tells the world to leave Britney Spears alone, to real music videos that sometimes carried a band to commercial success – as with the band OK Go…

And the lifespan of a video on the Web doesn't stop here: once it's posted and shared, it can be recast, mashed-up, and mixed to create a new piece of art where the original author's images and purposes are mixed with other authors' purposes and means (that could be other images, thematic connections or even software).

An example, among many, of the artistic and social effects of the “YouTube phenomenon” was the evening dedicated to video triptychs held at Postmasters Gallery in New York. Organized by the art collective “Artists Meeting” and curated by Thomas Hutchison, Maria Joao Salema and James Andrews, the event showed a compilation of “accidental video art”: video triptychs in which the most diverse products of the Internet were aggregated thanks to custom software, You3b, an Eyebeam project produced by Jeff Crouse and Andrew Mahon.

These three-channel videos – with extremely diverse features and themes – are products of the “YouTube” mine: “The project takes off from the freedom that you find in YouTube itself, from the eclectic nature of it. These videos are a sort of modern narrative, made not with words but with images. There's a hunger for information, for images, for stories, and there's also a library that is expanding very rapidly,” explains Raphaele Shirley, one of the organizers.

“For me, what was interesting about YouTube are just the crazy, fascinating, wild videos that people publish,” says James Andrews. “It's a little bit like public access TV, a kind of ad hoc personal media that can be very serious and profound, or can be very absurd and random. My favorite discovery on YouTube was the subculture of people who documented their own washing machines, from an aesthetic or technical point of view – and this is a very large subculture, pretty massive at this point. In the end I made a triptych of washing machines, and I'm convinced that on a social level it is in a way connected to the American tradition of tinkering with your own car or motorcycle…”

An interesting aspect of the event is definitely the “authorial shift” of the final works: “This show mutilates the whole idea of ownership,” notes James Andrews. “You really can't figure out who on Earth owns these triptychs: you have one set of people who created the triptych software, then you have three different groups of people who potentially created the three videos, you have the three different YouTube accounts that published the videos – which may or may not be the same people who created them – then you have the curators, then Artists Meeting, and then Postmasters Gallery. And each of the original videos could be removed from the Net at any time…”

In a sense, this project takes to the extreme not only the concept of authorship, and the fact that new media art often requires teamwork to be produced, but also, says James Andrews, “it could simply obliterate the human factor, which I think is even more interesting. No one in particular got credit for anything in the show – it was just something that happened…”

An event in which each viewer – the last link of this long chain – looks at the triptychs and constructs his or her own very personal narrative. For Raphaele Shirley, “this kind of triptych really lends itself to non-linearity: you can't grasp a single image or a single story and follow it through from beginning to end. It's really about cross connections, cross pollination of ideas, like unconscious thinking and the way you create a dream, or the way you connect data…”





www.postmastersart.com/

www.artistsmeeting.org

www.you3b.com

www.raphaeleshirley.com/



above copied from:

Thursday, June 3, 2010

PATHOSFORMEL FEVER, Massimiliano Schiavoni



Eng: Valeria Merlini and Monica Fontana


26 January 2009. Pathosformel received the 2008 UBU special prize with the following citation: “a young company aiming at an abstract and physical theatre, pursued with an already distinct and layered sign, in which research on material and form is the starting point for evoking a fragmented and deconstructed visionary theatricality of great charm, which has exploded onto the national and international scene, achieving a significant relationship between conceptual art and theatre.”

They have been staging works for more than four years, and their innovative style has revolutionized and turned upside down a contemporary theatrical scene in constant need of new lifeblood, at a time when Made in Italy was flourishing enormously. This is possible thanks to companies such as Pathosformel, which are not intimidated by experimentation and comparison, in an ever-growing multimedia society that nevertheless watches attentively and critiques works masked by dramatic techniques. Daniel, Francesca and Paola create woolly works of art that appear on a stage or in a room, where the scene and the audience, the artist and the user, breathe together, where the eye enters the drama and the sound enters the opera.

There is nothing to envy in the theatrical projects of Aleksei Kruchenykh or Malevich, when the timidity of the bones uncovers itself bit by bit and shows scenic and physical abilities, accepting personal imagination, suspended trainings, tunings and “dancing algebraic equations”. A relaxation we tend to love through the colour white, where the absence of the body is that little bit extra and where we can watch a sort of natural birth; a cautious performance, a palpable acrobatics, an impudent delicacy. An elegance that elbows its way in and suddenly disappears as the wax melts, this material being pure and ephemeral, malleable and spiritually erotic. The mise en scène is hardly traditional, or at least it disperses alternations, while still tasting of the typical materiality of a Joseph Beuys or a Marina Abramovic. We are in front of a new theatrical aesthetics, a new poetics that embraces sculpture and sound, performance and choreography, a cosmopolitan scenography of invention.

If the main element in a drama is the audience, in Pathosformel the essential quality is the potential, unconscious ability to bring together a generation of users from the most oddly assorted cultural and social corners of our country, as well as of Europe. It is an important centre for thousands of fans and experts who feel the call of the Venetian collective, who “peregrinate” all around Italy and fill up festivals and theatres, sometimes acting like the fans of singers crowding a stadium.

A work presented at the Arboreto – Teatro Dimora in Mondaino in summer 2008 “took off”, among other cities, in Dro and in Brussels: "The shortest distance" is the latest effort, presented as a work in progress. New materials and visions; live music plunges us into a bar at the Folies-Bergère, but we can still decipher free will and subjectivity, or simply dream, gasping behind the passing of suspended squares. “Each of these squares on the lines now corresponds to a sung note, like the signs of a Gregorian score that can be continuously modified. Everything we hear is manifested by the sliding presence of forms on the lines. Everything we see is accompanied by its corresponding voice. There is an evident, even ambiguous synchrony between song and image. Does the movement of this ever-changing music generate the melody, or is it the song driving these geometric inhabitants? Is it the behaviour of the forms that gives birth to liturgy, or is it the interweaving of voices imposing choreography on this dance opera without stature?” (Pathosformel)

On 19th April Pathosformel will be present at the Palladium Theatre in Rome. In May they will take part in two events: the Egg Festival in Milan and Another Theatre 2009 in Forlì. Meanwhile, I tried to investigate some issues related to the training, planning and artistic creation of this new collective through a short interview.

Massimo Schiavoni: Who were Daniel Blanga Gubbay, Francesca Bucciero and Paola Villani? Who are they now?

Pathosformel: Daniel Blanga Gubbay, Francesca Bucciero and Paola Villani were three students at the IUAV University of Venice who came together at the end of 2004 to build a work project called pathosformel, which could investigate through its works the limit between theatre and performance. We come from different cities (Milan, Bologna and Parma) and from different experiences, ranging from a rather more theoretical background, through a more developed dramatic practice, to more technical experience.

We shared all our specificities, and we knew we wanted to work together without any preconceived idea about drama or any clear poetic reference. We always have to build a new strategy of work, to understand the techniques and to measure ourselves against any other possible language. Each time we need to confront a new material that can be represented.

Now we are people who share a passion for this work and who try to make it a consolidated activity. We work together but at the same time we do other things, because we need it and because we are interested in it. Our differences and specificities are still there; they are probably the most important aspects that can enrich a project and that give different contributions to every new work.

Massimo Schiavoni: But what I'd like to know is: did you look around, and what did you see before creating this project? Was it already inside your minds, or did it generate from other needs, from other interests in progress?

Pathosformel: The meeting was born from our compatibility, apart from a common vision of the type of work we wanted to conduct. Surely we looked around, and we studied many things together, as we needed – and still need – to see as much as possible; we believe this happens to most people who face an artistic creation. We trained together at the university and discussed everything we found around us, even non-theatrical matters.

Meanwhile we deeply wanted to “do something”, but it was not yet very clear what; life circumstances in Venice and our own interests determined the way we began to work, and this influenced the work as well.

Massimo Schiavoni: “La timidezza delle ossa”: could you tell me why and how it was born? And does it reflect, in your view, the beginning of an alternative, or at least an original, process within contemporary drama?

Pathosformel: “The timidity of the bones” comes from a theoretical reflection: many of us think of performance and live show as immediate arts, defined by the absence of any material medium – such as the canvas for painting or the film for cinema – between the observer's eye and the actor's body. The idea was, on the one hand, to reflect on this immediacy and to deny expectations of the body's visibility on stage. We thought its presence before the public was no longer the basic element from which we could build the rest, but the heart that must be called into question.

At the same time this is bound to a theoretical approach linked to defiguration, to those artistic techniques that sketch the shape, that place signs, that give it a new power – going beyond the entirety of the shape itself: the shape will be completed by each observer in a different way, and it is as if it already possessed in itself the power to become all these forms. The idea of the show was to cover the body, not to hide it, but to let it emerge in a new form every time and to let the observer rebuild it in an anatomical or imaginative way. The project in this sense is for us the beginning of a process of delegation, of extreme confidence in the observer's imagination. It simply gives clues that can be taken for what they are, or completed within a kind of narration.

If in “La timidezza delle ossa” we offered a "dotted body" to be ideally completed and rebuilt in its entirety, later in "La più piccola distanza" we built a geometric-musical system from which the audience could find their own abstract or narrative intimate structure. They are squares flowing in two directions, yet for us they represent exactly this delegating instrument, one that allows each spectator to find, in the loneliness or in the crossings of the shapes, a different life, or a total musical abstraction in which to lose themselves.

Massimo Schiavoni: What is the meaning of the body in "Volta"? What role does metamorphosis have here?

Pathosformel: " Volta " was born at the same time than “La timidezza delle ossa ”, always within our reflection on the relationship between body image and support of the representation. If in “The timidity of the bones” the canvas was the - almost classical - support that allow the presentation of the body image, here we chose to take the problem further.

The space is completely dark, and the bodies are completely covered in black, therefore invisible, except for the limbs, which are covered in a layer of white wax. The game is ambiguous: on one side, the wax is the support that makes the body visible; on the other, the body itself is a support for the wax. They are bodies of wax, necessarily temporary and fragile, that blend and separate while creating human figures that are not always recognizable.

Metamorphosis is inevitable in this work: there is one voluntary component, and one typical of the material. Wax is an organic material, and of course it immediately starts to melt and crackle, progressively diminishing the body's visibility. It is a disappearance by fragments; in the end we find scales on the ground, like a constellation born from the body's evaporation.

Massimo Schiavoni: You've been chosen for the first part of the "Fies Factory One" project, together with four other young artistic realities: Sonia Brunelli, Teodora Castellucci, Francesca Grilli and Teatro Sotterraneo. How do you experience this, and what benefits has this creative experience brought, in close contact with different ways of working and other stylistic codes, during the residency in Dro?

Pathosformel: Fies Factory One surely represents an opportunity and a support we could not have imagined before. First of all, it gave us three-year funding not tied to a single project, which allows us to plan more completely the time at our disposal for a production: we can develop a project in an articulate way and take the time to research the next one.

Furthermore, the relationship with the other selected artists was especially interesting for us. Starting from evident analogies and differences (some of us come from theatre, some from dance or the visual arts, and at the same time none of us can be listed under only one definition), the confrontation during the shared residency periods and the rare but very important collaborations – e.g. during the event in October 2008 for the first anniversary of the Factory – have enriched our work with a vision we would otherwise hardly have found.

Massimo Schiavoni: Your works stand out in the artistic panorama also for their personal and original scenic presence, and for the use of innovative technical and spatial solutions. How did the creative passage go from the previous works to "La più piccola distanza", and why does a sort of "bodiless concert" now suggest more than squares in motion?

Pathosformel: The passage towards "La più piccola distanza" was for us quite immediate: we wanted to continue in the vein of the previous research, while distancing ourselves from the kind of image we had presented in our early works. We wanted to keep creating a system that played with the imagination of the audience, allowing them to complete in infinite ways a system of abstract signs loaded with a human imprint.

In the case of this work, though, we were not interested in an anatomical reconstruction of the body, but rather in a study of movement that could translate the intentions and relations behind human movements. For this reason, we worked on the idea of a staff in movement, a system of signs in which every element moves at its own speed and, through these springs and stops, enters into some kind of relationship with the nearby shapes.

The "Concerto per harmonium e cittĂ  " was born from this research and as a result of this first performance. We still needed to investigate the relationship between music and behavioural geometries, and in this case we chose to relate music reading to an actual geographical map of an ideal city. A cartography in movement, that is played live while it forms and dissolves: it is as if every urban path traced on the map a line, and at the same time was accompanied by a note that - together with the hundreds of paths that we cross everyday - creates music.

Choosing a geographical map was very meaningful for us, almost a manifesto, because in it we found that hybrid system, balanced between abstract sign and allusion to the real world, that we had constantly been looking for.



www.pathosformel.org
www.myspace.com/pathosformel

above copied from: http://www.digicult.it/digimag/article.asp?id=1436

Thursday, May 27, 2010

DISTRIBUTED CREATIVITY MODELS: THE CROWD-SOURCING, Giulia Baldi



It was maybe predictable and it is definitely significant: Creative Review has included crowd-sourcing among the topics covered in their 'Year in Review' December issue.


I haven't checked the magazine yet, but I have been researching digital creative crowd-sourcing models for a while, and I think they represent such an interesting opportunity for creatives that we should celebrate them here.


Known in academic terms as a "distributed problem-solving and content-production model", and in marketing ones as "the outsourcing of functions normally performed by a supplier or contractor to a crowd of people or a community", crowd-sourcing is a neologism first used by Jeff Howe, editor at Wired US, in an article dating back to 2006. There, he describes the shift that creatives (mostly copywriters, designers, photographers, illustrators, animators, videomakers, directors... or wannabes) are making in the networked society.


With the development of web technologies and communities, the global creative workforce is finally connected and can finally be active online. Thus, it can choose to move from being represented by exclusive and expensive offline agencies to accessible and inexpensive online platforms.


It all started with websites like iStockphoto.com, a marketplace developed for non-professional photographers that allows anyone to upload their own work and sell it directly to clients, at a fraction of the usual costs. In this way, talented emerging artists are now competing with the privileged establishment, cutting out all the middlemen and their top rewards. As content producers, they are earning more, and they are allowing their clients to spend less than before (for products as good as the average professional ones).


So far, so good. These new ventures are simply confirming, and realizing, the crowd-sourcing principle that a creative collective can produce better (and cheaper) results than an individual. As many times in the past, technology is slowly but steadily disrupting the status quo of an old industry and creating a new one. But then even more disruptive stories follow.


With 99designs (AU), BootB (IT), CrowdSpring (US), IdeaBounty (SA/UK), RadarMusic (UK) and Zooppa (IT/US/BR), a new crowd-sourcing model for creativity has emerged. On one side, these new online marketplaces/communities engage with brands, offering them the chance to launch contests on specific briefs and get bespoke creative proposals... for smaller fees than those of traditional agencies. On the other, they engage more deeply with professional and non-professional creatives, allowing them to participate in real global competitions based on real brands' briefs... and to participate easily and remotely, i.e. wherever they live and whatever else they are doing, without any agency to represent, and exploit, them.


Furthermore, some of these platforms even delegate the selection of the best entries to the community. Because, again, in the Web 2.0 ethos, "we" are better than "me".


Everyone happy? Not yet. There is a problem, in fact... Maybe even three or four.


1) On these platforms, all participants must work on spec and have a very limited chance of winning. In the offline advertising industry the pitch-to-brief system has always been the custom; online, this process has reached another level: on average there are loads of contestants for every project and only a small reward for the winners. So much so that a professional movement called 'no-spec' has recently made a lot of noise against this practice, on the grounds that open, cheap competitions are unprofessional and unfair. At the same time, the brands, platforms and users that have embraced this model claim freedom of choice: once the process is clearly presented, everyone should be free to participate, weighing the pros and cons from a personal perspective. And in fact, while some have probably kept doing things the old way (perhaps they live in big cities and have plenty of useful contacts), thousands are now sending their ideas and projects through these platforms, and hundreds are gaining valuable visibility and good money (which they would not have obtained otherwise).


2) The average quality of the outputs isn't that bad, but it isn't that good either;


3) Brands can rarely build meaningful relationships with creatives, or long-lasting strategies, in a purely virtual setting;


4) When the selection is not up to the community, it can be far too time-consuming for anyone in the industry...


All in all, maybe a new breed of online platforms/agencies is needed, one that maintains traditional agency project management and creative curatorship of campaigns while introducing new remote collaborative practices. And, in fact, a revised model, based on both top-down and bottom-up principles, has recently been adopted by one of the 'old' platforms, GeniusRocket (US), and more will probably follow.


In their words: 'We have always understood that crowdsourcing is a process of evolution. From that, we believe that there is still plenty of room to improve upon creative crowdsourcing. We plan to address the three biggest requests of the creative community (higher awards, more feedback, and less risk), while addressing the request of the client: more polished, selected, creative content.'


How are they planning to achieve it? By launching GRSelect. 'The three most important differences between GRSelect and other crowdsourcing sites (including GeniusRocket.com) are that artists will be required to apply in order to participate in each project. First, applicants will be vetted based upon their past work and their submitted proposal. Second, GRSelect projects will work across iterative rounds, allowing the artists to receive feedback directly from the client prior to production or final submission. Finally, while not everyone will be accepted to participate in a project, everyone that does make it into a project will earn awards based upon their efforts.'


Now, this sounds really good. And it is even becoming a trend: in the last few weeks a new platform based on the same mixed approach has been launched, called... Victors and Spoils (US), while another, named Guided By Voices (UK), will launch soon.


Creatives out there, no more excuses! Now you can submit your portfolios, be selected, participate in contests, and win your 15 minutes of glory and even some pocket money (which can now reach peaks of $10K), without risking wasting too much of your time. And, yes... Next year you could be in Creative Review.

above copied from: http://www.digicult.it/digimag/article.asp?id=1678