Feb 1 2002
Drawing from DJ culture, Star Trek, and quality assurance, Lev Manovich outlines key collaborative trends incorporated by new media. The author highlights interactive systems as an illusion of collaboration, and opens a discussion on open source collaboration and its effects on our notions of authorship.
Collaboration (over the network or in person, in real time or not) between a group of artists to create a new media work / performance / event is the most visible example of a more general phenomenon which I would like to consider here. New media culture brings with it a number of new models of authorship which all involve different forms of collaboration. Of course, collaborative authorship is not unique to new media: think of medieval cathedrals, traditional painting studios which consisted of a master and assistants, music orchestras, or contemporary film productions which, like medieval cathedrals, involve thousands of people collaborating over a substantial period of time. In fact, the Romantic model of a solitary single author occupies a very small place in the history of human culture. New media, however, offers some new variations on the previous forms of collaborative authorship. In addition to collaboration between different individuals and/or groups (1), I can single out the following models:
(2) Interactivity as collaboration between the author and the user.
In the first part of the 1990s, when interactivity was a new term, it was often claimed that an interactive artwork involves collaboration between an author and a user. Is this true? The notion of collaboration assumes some shared understanding and common goals between the collaborators, but in the case of interactive media these are often absent. After an author designs the work, s/he has no idea about the assumptions and intentions of a particular user. Such a user, therefore, can’t really be called a collaborator of the author. From the other side, a user coming to a new media artwork often does not know anything about this work either: what it is supposed to do, what its interface is, etc. For this user, therefore, the author is not really a collaborator. Instead of collaborators, the author and the user are often two total strangers, two aliens who do not share a common communication code.
While interactivity in new media art often leads to “miscommunication” between the author and the user, commercial culture employs interactive feedback to ensure that no miscommunication will take place. It is common for film producers to test a finished edit of a new film before a “focus group.” The responses of the viewers are then used to re-edit the film to improve comprehension of the narrative or to change the ending. In this practice, rather than presenting the users with multiple versions of the narrative, a single version that is considered the most successful is selected.
(3) Authorship as selection from a menu.
I discuss this type of authorship in detail in my book The Language of New Media; here I just want to note that it applies to both professional designers and users. The design process in new media involves selection from various menus of software packages, databases of media assets, etc. Similarly, a user is often made to feel like a “real artist” by being allowed to quickly create a professional-looking work by selecting from a few menus. Examples of such “authorship by selection” are the Web sites that allow users to quickly construct a postcard or even a short movie by selecting from a menu of images, clips and sounds.
Three decades ago Roland Barthes elegantly defined a cultural text as “a tissue of quotations”: “We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from innumerable centres of culture.” In a software-driven production environment, these quotations come not only from the creators’ memories of what they previously saw, read, and heard, but also directly from databases of media assets, as well as numerous other works that, in the case of the World Wide Web, are just a click away.
(4) Collaboration between a company and the users.
When it released the original Doom (1993), id Software also released detailed descriptions of the game’s file formats and a game editor, thus encouraging players to expand the game by creating new levels. Adding to the game became an essential part of it, with new levels widely available on the Internet for anybody to download. Since Doom, such practices have become commonplace in the computer game industry. Often, a company will include elements designed by the users in a new release.
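To make concrete what releasing the file formats enabled, here is a minimal sketch (in Python, with a hypothetical file name) of reading the directory of a WAD file, the openly documented container that held Doom’s levels, graphics and sounds:

```python
import struct

def read_wad_directory(path):
    """List the named 'lumps' (levels, sounds, graphics) stored in a WAD."""
    with open(path, "rb") as f:
        # 12-byte header: 4-byte tag ("IWAD" or "PWAD"), lump count,
        # and the offset of the directory, all little-endian.
        tag, num_lumps, dir_offset = struct.unpack("<4sii", f.read(12))
        f.seek(dir_offset)
        lumps = []
        for _ in range(num_lumps):
            # Each 16-byte directory entry: data offset, size, 8-byte name.
            pos, size, name = struct.unpack("<ii8s", f.read(16))
            lumps.append((name.rstrip(b"\x00").decode("ascii"), pos, size))
    return tag.decode("ascii"), lumps

# Hypothetical usage: a "PWAD" (patch WAD) is how fan-made levels were shipped.
# tag, lumps = read_wad_directory("MYLEVEL.WAD")
```

Because the format was public, fan-made editors could write new “patch” WADs that the game would load alongside its original data.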
With another widely popular game, The Sims (2000), this type of collaboration reached a new stage. The Web site for the game allows users to upload the characters, settings, and narratives they have constructed into a common library, as well as download characters, settings, and narratives constructed by others. Soon it turned out that the majority of users do not even play the game but rather use its software to create their own characters and storyboard their adventures. In contrast to earlier examples of such practices – for instance, 1980s Star Trek fans editing their own video tapes by sampling from various Star Trek episodes, or writing short stories involving the main Star Trek characters – the practice has now moved to center stage, legitimized and encouraged by the game’s producers.
Another way in which a company can be said to collaborate with the users of its software is by incorporating their suggestions about new features into the next version of the software. This is a common practice of many software companies.
(5) Collaboration between the author and the software.
Authoring using AL (artificial life) or AI (artificial intelligence) is the most obvious case of human-software collaboration. The author sets up some general rules but s/he has no control over the concrete details of the work – these emerge as a result of the interactions of the rules. More generally, we can say that all authorship that uses electronic and computer tools is a collaboration between the author and these tools, which make possible certain creative operations and certain ways of thinking while discouraging others. Of course humans have designed these tools, so it would be more precise to say that the author who uses electronic/software tools engages in a dialog with the software designers (see #4).
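A minimal sketch of this kind of rule-based authorship (assuming nothing beyond the Python standard library): the “author” chooses only a simple rule and a random starting line, and the concrete pattern emerges from the rule interacting with itself:

```python
import random

RULE = 90  # an elementary cellular automaton rule, chosen by the "author"

def step(cells):
    """Each new cell is determined by the rule applied to its neighborhood."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The author controls the rule and the seed, but not the resulting details.
cells = [random.randint(0, 1) for _ in range(64)]
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```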
(6) Remixing.
Remixing originally had a precise and narrow meaning that gradually became diffused. Although precedents of remixing can be found earlier, it was the introduction of multi-track mixers that made remixing a standard practice. With each element of a song – vocals, drums, etc. – available for separate manipulation, it became possible to “re-mix” the song: change the volume of some tracks or substitute new tracks for the old ones. Gradually the term became broader and broader, today referring to any reworking of an original musical work (or works).
In his DJ Culture, Ulf Poschardt singles out different stages in the evolution of remixing practice. In 1972 DJ Tom Moulton mixed his first disco remixes; as Poschardt points out, they “show a very chaste treatment of the original song. Moulton sought above all a different weighting of the various soundtracks, and worked the rhythmic elements of the disco songs even more clearly and powerfully… Moulton used the various elements of the sixteen or twenty-four track master tapes and remixed them.” By 1987, “DJs started to ask other DJs for remixes” and the treatment of the original material became much more aggressive. For example, “Coldcut used the vocals from Ofra Haza’s ‘Im Nin’Alu’ and contrasted Rakim’s ultra-deep bass voice with her provocatively feminine voice. To this were added techno sounds and a house-inspired remix of a rhythm section that loosened the heavy, sliding beat of the rap piece, making it sound lighter and brighter.” In another example, London DJ Tim Simenon produced a remix of his personal top ten of 1987. Simenon: “We found a common denominator between the songs we wanted to use, and settled on the speed of 114 beats per minute. The tracks of the individual songs were adapted to this beat either by speeding them up or slowing them down.”
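The two basic operations described above – re-weighting separate tracks, and adapting them to a common tempo – are easy to sketch. A toy illustration (the function names are mine, not any DJ’s actual tool chain; it assumes NumPy and tracks stored as sample arrays at one sample rate):

```python
import numpy as np

def remix(tracks, gains):
    """Re-weight and sum separately recorded tracks (vocals, drums, ...)."""
    length = max(len(t) for t in tracks)
    mix = np.zeros(length)
    for track, gain in zip(tracks, gains):
        track = np.asarray(track, dtype=float)
        mix[:len(track)] += gain * track
    return mix

def match_tempo(track, source_bpm, target_bpm=114.0):
    """Naively resample to change speed (pitch shifts along with tempo)."""
    speed = target_bpm / source_bpm            # >1 speeds up, <1 slows down
    positions = np.arange(0, len(track) - 1, speed)
    return np.interp(positions, np.arange(len(track)),
                     np.asarray(track, dtype=float))
```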
In the last few years people have started to apply the term “remix” to other media: visual productions, software, literary texts. With electronic music and software serving as the two key reservoirs of new metaphors for the rest of culture today, this expansion of the term is inevitable; one can only wonder why it did not happen earlier. Yet we are left with an interesting paradox: while in the realm of commercial music remixing is officially accepted, in other cultural areas it is seen as violating copyright and therefore as stealing. So while filmmakers, visual artists, photographers, architects and Web designers routinely remix already existing works, this is not openly admitted, and no proper terms equivalent to remixing in music exist to describe these practices.
The term that we do have is “appropriation.” However, it never left its original art world context, where it was first applied to the work of post-modern artists of the early 1980s who re-worked older photographic images. Consequently, it never achieved the same wide use as “remixing.” In any case, “remixing” is a better term because it suggests a systematic re-working of a source, a meaning which “appropriation” does not have. And indeed, the original “appropriation artists” such as Richard Prince simply copied the existing image as a whole rather than re-mixing it. As in the case of Duchamp’s famous urinal, the aesthetic effect here is the result of a transfer of a cultural sign from one sphere to another, rather than of any modification of the sign.
The only other commonly used term across media is “quoting” but I see it as describing a very different logic than remixing. If remixing implies systematically rearranging the whole text, quoting means inserting some fragments from old text(s) into the new one. Thus it is more similar to another new fundamental authorship practice that, like remixing, was made possible by electronic technology – sampling.
(7) Sampling: New Collage?
According to Ulf Poschardt, “The DJ’s domination of the world started around 1987.” This take-over is closely related to the new freedom in the use of mixing and sampling. That year M/A/R/R/S released their record “Pump Up the Volume”; as Poschardt points out, “This record, cobbled together from a crazy selection of samples, fundamentally changed the pop world. As if from nowhere, the avant-garde sound collage, unusual for the musical taste of the time, made it to the top of the charts and became the year’s highest-selling 12-inch single in Britain.”
Theorizing immediately after M/A/R/R/S, Coldcut, Bomb the Bass and S’Express made full use of sampling, the music critic Andrew Goodwin defined sampling as “the uninhibited use of digital sound recording as a central element of composition. Sampling thus becomes an aesthetic programme.” We can say that with sampling technology, the practices of montage and collage that were always central to twentieth-century culture became industrialized. Yet we should be careful in applying the old terms to new technologically driven cultural practices. While the terms “montage” and “collage” regularly pop up in the writings of music theorists from Poschardt to Kodwo Eshun to DJ Spooky, I think these terms, which come to us from the literary and visual modernism of the early twentieth century, do not adequately describe new electronic music. To note just three differences: musical samples are often arranged in loops; the nature of sound allows musicians to mix pre-existing sounds in a variety of ways, from clearly differentiating and contrasting individual samples (thus following the traditional modernist aesthetics of montage/collage) to mixing them into an organic and coherent whole; finally, electronic musicians often conceive their works beforehand as something that will be remixed, sampled, taken apart and modified. Poschardt: “house (like all other kinds of club music) has relinquished the unity of the song and its inviolability. Of course the creator of a house song thinks at first in terms of his single track, but he also thinks of it in the context of a club evening, into which his track can be inserted at a particular point.”
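The first two of these differences can be shown in a few lines. A toy sketch (again assuming NumPy arrays at a common sample rate; the function names are mine, not standard terminology):

```python
import numpy as np

def montage(a, b):
    """Modernist montage/collage: juxtapose two samples with a hard cut."""
    return np.concatenate([a, b])

def blend(a, b, overlap):
    """Club-style mix: crossfade two samples into one coherent whole."""
    fade = np.linspace(0.0, 1.0, overlap)
    middle = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], middle, b[overlap:]])

def loop(sample, times):
    """Samples arranged in loops: tile one fragment end to end."""
    return np.tile(sample, times)
```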
Last but not least, it is relevant to note here that the revolution in electronic pop music that took place in the second part of the 1980s was paralleled by similar developments in the pop visual culture of the same period. The introduction of electronic editing equipment such as the switcher, keyer, paintbox, and image store made remixing and sampling a common practice in video production towards the end of the decade; first pioneered in music videos, it later took over the whole visual culture of TV. Other software tools such as Photoshop (1989) had the same effect on the fields of graphic design, commercial illustration and photography. And, a few years later, the World Wide Web redefined an electronic document as a mix of other documents. Remix culture has arrived.
(8) Open Source Model.
The Open Source model is just one among a number of different models of authorship (and ownership) which emerged in the software community and which can be applied (or are already being applied) to cultural authorship. Examples of such models are Ted Nelson’s original Xanadu project, “freeware,” and “shareware.” In the case of Open Source, the key idea is that one person (or group) writes software code, which can then be modified by another user; the result can be subsequently modified by a new user, and so on. If we apply this model to the cultural sphere, do we get any new model of authorship? It seems to me that the models of remixing, sampling and appropriation are conceptually much richer than the Open Source idea. There are, however, two aspects of the Open Source movement that make it interesting. One is the idea of the license. There are approximately 30 different types of licenses in the Open Source movement. The licenses specify the rights and responsibilities of a person modifying the code. For instance, one license (the GNU General Public License) specifies that the programmer has to provide a copy of the new code to the community; another stipulates that the programmer can sell the new code and does not have to share it with the community, but can’t do things that damage the community.
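As a purely conceptual sketch of what such a machine-readable license might look like when applied to a creative work (the names and terms below are invented for illustration, not drawn from any actual license):

```python
# A conceptual sketch: a license as an explicit list of rights and
# responsibilities attached to a work. Names and terms are invented.
from dataclasses import dataclass, field

@dataclass
class License:
    name: str
    may: set = field(default_factory=set)    # what a re-user is allowed to do
    must: set = field(default_factory=set)   # obligations that come with re-use

# Roughly in the spirit of the first license described above: modify
# freely, but give your version back to the community.
share_alike = License(
    name="share-alike",
    may={"copy", "modify", "redistribute"},
    must={"provide a copy of the modified work to the community"},
)

# Roughly in the spirit of the second: you may sell your version,
# as long as you do nothing that damages the community.
permissive = License(
    name="permissive",
    may={"copy", "modify", "sell"},
    must={"do nothing that damages the community"},
)

def allowed(license, action):
    """Check whether a proposed use of the work is permitted."""
    return action in license.may

print(allowed(permissive, "sell"))    # True
print(allowed(share_alike, "sell"))   # False under this toy model
```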
Another idea is that of the kernel. At the “heart” of the Linux operating system is its kernel – the code essential to the functioning of the system. While users add to and modify different parts of the Linux system, they are careful not to change the kernel in fundamental ways. Thus all dialects of Linux share a common core. I think that the ideas of the license and of the kernel can be directly applied to cultural authorship. Currently appropriation, sampling, remixing and quoting are controlled by a set of heterogeneous and often outdated legal rules. These rules tell people what they are not allowed to do with the creative works of others. Imagine now a situation where an author releases her/his work into the world accompanied by a license that tells others both what they should not do with this work and also what they can do with it (i.e. the ways in which it can be modified and re-used). Similarly, we may imagine a community formed around some creative work; this community would agree on what constitutes the kernel of this work. Just as in the case of Linux, it would be assumed that while the work can be played with and endlessly modified, the users should not modify the kernel in dramatic ways.
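Continuing the same toy model, the “kernel” of a work could be represented as a frozen core that every derivative version carries unchanged (again, an invented illustration rather than an existing system):

```python
# A toy model of a cultural "kernel": the core of the work is wrapped
# read-only, while everything around it remains open to modification.
from types import MappingProxyType

class Work:
    def __init__(self, kernel, variations=None):
        # The kernel cannot be altered through this object; derivative
        # works may extend the piece but agree to keep its core intact.
        self.kernel = MappingProxyType(dict(kernel))
        self.variations = dict(variations or {})

    def derive(self, **changes):
        """Create a new version: same kernel, different variations."""
        return Work(self.kernel, {**self.variations, **changes})

song = Work(kernel={"hook": "four-bar bass line", "refrain": "..."},
            variations={"tempo_bpm": 120, "length_sec": 240})

club_mix = song.derive(tempo_bpm=114, length_sec=420)
assert club_mix.kernel == song.kernel   # every "dialect" shares the core
```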
Indeed, if music, films, books and visual art are our cultural software, why not apply ideas from software development to cultural authorship? In fact, I believe that we can already find many communities and individual works that employ the ideas of the license and the kernel, even though these terms are not explicitly used. One example is Jon Ippolito’s Variable Media Initiative. Ippolito proposed that an artist who accepts variability in how her/his work will be exhibited and/or re-created in the future (which is almost inevitable in the case of net art and other software-based work) should specify what constitutes a legitimate exhibition/re-creation; in short, s/he should provide the equivalent of a software license.
Conclusion
The commonality of menu selection / remixing / sampling / synthesis / open “sourcing” in contemporary culture calls for a whole new critical vocabulary to adequately describe these operations, their multiple variations and combinations. One way to develop such a vocabulary is to begin correlating the terms that already exist but are limited to particular media. Electronic music theory brings to the table the analysis of mixing, sampling, and synthesis; academic literary theory can also make a contribution, with its theorizations of intertext, paratext, and hyperlinking; scholars of visual culture can contribute their understanding of montage, collage and appropriation. Having a critical vocabulary that can be applied across media will help us to finally accept these operations as legitimate cases of authorship, rather than exceptions. To quote Poschardt one last time: “however much quoting, sampling and stealing is done – in the end it is the old subjects that undertake their own modernization. Even an examination of technology and the conditions of production does not rescue aesthetics from finally having to believe in the author. He just looks different.”
Above copied from: http://switch.sjsu.edu/nextswitch/switch_engine/front/front.php?artc=65