Wednesday, January 6, 2010

BETWEEN DUCHAMP AND TURING: Strategies for Linkage Between the Visual and the Auditive in Audio-Visual Art, David Sudmalis


School of Visual and Performing Arts, University of Tasmania, Launceston, Australia
Abstract

In Lev Manovich’s essay The Death of Computer Art the author concluded that ‘the convergence will not happen’ between the seemingly irreconcilable aesthetics of ‘Duchamp-land’ and ‘Turing-land’ (Manovich 1996). The battle between content and state-of-the-art technologies, suggests Manovich, is not resolvable without an appropriate strategy for extrapolating techniques and methodologies of one into the other, thereby creating a new form that transcends technology in the service of a new creative medium.

This paper examines strategies for creative linkage between image and sound within a technologically facilitated environment. This linkage exists within technological and methodological frameworks, circumventing the less satisfactory outcome in which one art form is merely a sympathetic counterpoint to the other. With the very real possibility of abandoning the focus on one type of media and moving towards a more synaesthetic arts experience (Paul 2003), a gestural model of composition (Sudmalis 2001) is employed to create meaningful linkages in the creation of original audio-visual works that employ technologies as mediator and facilitator of these linkages. The result is an audio-visual work that communicates more successfully than either one of the discrete component art forms would have alone. The discussion of parametric linkage includes references to examples created by the author which use, as their relative points of departure, aspects of the natural environment and natural world.

Duchamp v Turing

For Manovich, the terms ‘Duchamp’ and ‘Turing’ are used to identify opposed ends of the spectrum of computer art (Manovich 1996). In Duchamp-land, the art output is oriented toward the content/the rationale/the concept that may operate on a number of levels, and draw meaning from a range of fields including sociology, the arts and politics. It is sometimes self-referential, and even destructive towards its materials (for example, Nam June Paik fixed magnets to monitors, thereby distorting their output, and opened television sets in order for the technology to be laid bare). In Duchamp-land, content reigns.

In direct contrast to Duchamp-land, Turing-land operates in an environment where there exists an orientation towards new, state-of-the-art computer technologies, and where process and method are the drivers of the creative act. It is simple in its meaning, usually exploring one or another technological facet (interactivity, for example), and takes its technology very seriously, almost reverently, as opposed to the more deconstructionist attitude towards technology that
seems to exist in Duchamp-land.

Terms other than Duchamp and Turing could be invoked to describe these extreme, opposed ends of the creative spectrum. Modernist v postmodernist, concept v method, Cagian v Boulezian, holistic v parametric may be considered as substitutes at the ready. Manovich’s terms relate to object and application, and to contextualisation and evolutionary practice. The middle ground between the two exists where the concept is facilitated by technological process, where ‘… the world of culture at large and the world of computer culture meet each other’ (Manovich 1996). It is here that strategies for linkage between disparate art forms that incorporate concepts, techniques and methodologies across disciplines can be developed. Whilst this paper deals with creating linkages across audio and visual art, it will first examine the method employed in creating these linkages within the domain of sound. Strategies for the linkage of sound and image will then follow.

Gestural approach to musical composition

Despite the relatively common usage of the label ‘gesture’ in the discussion of contemporary music, a simple and brief definition is incongruously difficult to determine. This is perhaps because ‘gesture’ is the least immediately quantifiable musical parameter found in a composer’s vocabulary. Whilst it is inextricably linked with the other more measurable parameters of pitch, tessitura, rhythm, performance media, tone colour, dynamic and articulation (the application of which may be quantified independently of one another), ‘gesture’ refers to the totality of the aural element conceived through the interaction and simultaneity of these disparate events. Gestural composition, whilst incorporating the pre-composition machinations of each parameter, relies entirely upon the aural totality created through the interaction of the aforementioned elements, their development, and their context of meaning within the work as a whole. It is dependent upon its ‘organic context’ (that is, its position in a chain of events [as antecedent or consequent]; its stratification or contiguous juxtaposition), or its ‘individual context’ as an ‘autonomous, absolute’ moment (Stockhausen 1963, pp. 198–199).

‘Gesture’ is not intrinsically a musical term. As such, many definitions exist to serve its extra-musical applications: ‘significant movement’; ‘the use of such movement to convey feeling or as a rhetorical movement’; an ‘action to evoke a response or convey intuition’; ‘movement of the body, head, arms’; and ‘any action or proceeding intended for effect or as a formality or demonstration’ are but a representative sample of definitions in standard usage. As is evident from the meanings proffered above, the concept of movement as the conveyor of information or intent is of paramount importance. Gesture is a language in itself, without a verbal requisite. This implies that gesture as a communicative device is reliant upon the directness of its syntax, and further dependent upon the nature of the dialogue to arrive at contextual meaning. In other words, gesture as communication is dependent upon the context within which it operates. Far from being content with this successive or purely contextual nature of gesture, composers have expanded the concept of antecedent/consequent relationships, personifying musical gesticulation into a self-encompassing and self-fulfilling entity. Karlheinz Stockhausen argues that:

… each now is not regarded untiringly as a mere result of the immediately preceding one or as the prelude of the one which is approaching … rather [it is] something personal, autonomous … independent [and] absolute … (1963, p. 61).

Stockhausen continues, forwarding his thoughts on how the perception of material presented as being of itself, without relation to that which has transpired or that which is to follow, creates a stillness, or impression of timelessness, in the temporal flow. It follows, then, that the gesture requires examination regarding its context (as part of a larger form), itself (as a centred, autonomous entity) and the effect of its deployment upon the aural perception of the passage of time or the progression of the work in question. Hence, ‘gesture’ is an entity constituted from substantively different component parts existing on several planes simultaneously: it may be perceived in juxtaposition with the events that precede and follow it, or as a self-sufficient moment impervious to its wider context, concentrating instead on the ‘now moment’ (Hasty 1986) (or, indeed, both concurrently).

Of itself, this definition is neither difficult to understand nor particularly taxing to deduce. What makes the defining process somewhat troublesome is its application. The linear succession of related events within music (for example, the unifying nature of pulse or the recognition of an audible harmonic scheme) does not present problems regarding the perception of continuity. However, when these related events are removed and replaced with contiguous, often apparently unrelated gestures, the perception of musical form changes as a result of the lack of predictability on the listener’s part and the subsequent reinforcing and heightening of the ‘now moment’ or ‘individual context’. As Christopher Hasty argues, ‘extreme contrast or the absence of predictability can negate temporal succession and thus create an absolute discontinuity’ (1986, p. 60). The question of gesture relates, then, not only to its context, but also to its role as a facilitator of unification or as a catalyst for discontinuity (Marek 1981).

The ‘problem’ to solve, it would appear, lies in reconciling apparently disparate musical gestures without compromising development over a larger formal structure—that is, without constantly reverting to stratification and juxtaposition of different gestural types as the primary means of musical development. Edgard Varèse’s Intégrales, for example, employs a crystallisation technique which effectively fuses different gestures (or musical materials) as subsequent gestures are presented; Stockhausen’s Momente, whilst devoted to the ‘now’, uses only a small number of gestures, concentrating instead upon their proximity to each other and their metamorphosis through the influence of other moments, whilst retaining some aspect of the integrity of the initial gesture in its pure form.

Whilst the actual application of gesture is idiomatic to one’s compositional style, a reliable, working definition of gesture should be absolute: neither malleable nor vague, lest the term be rendered meaningless. Thus, gesture is the aural entity created by the interaction of the various musical parameters. These individual parameters exist simultaneously and inform different aspects of the resultant sound complex. Development of a gesture may take the form of changes in one parameter that informs the aural entity, or numerous changes and developments across numerous musical parameters. Richard Brooks defines the ‘minimum definition of gesture [as] a single sound subjected to at least one parametric permutation’ (1980, p. 55). His qualification of the ‘minimum definition’ is that it is more likely that a number of parametric specific developments occur simultaneously in order to constitute the awareness of changes to (or indeed the existence of) an individually recognisable aural entity. As he concludes, ‘any parametric alteration or permutation needs to be aurally more apparent to achieve gestural significance’ (ibid, p. 56).

Thus, the composite aural entity constituted of the various musical parameters may undergo development of itself in one parameter or numerous parameters simultaneously, or undergo a development that involves ‘fusing’ with another defined gesture type. This ‘fusing’, or hybridisation, is the method through which the gestural model may inform a musical work, not only on the level of surface detail but also on the macro level—particularly in terms of form and texture. The application of the gestural model at the pre-compositional level (that is, defining the idiosyncrasies and behaviour of the component gestures of a work at the outset of the act of composition) allows for the free and ready interchange of otherwise parametric specific transformational procedures. These may then be applied to other parameters within the same gesture, and to other gestures within the same work, constituting both gestural development through the application of the former, and hybridisation through the application of the latter. The strength of this method lies in its flexibility, as developments within a defined parameter may be applied across other parameters within the gesture, or exported to other gestures entirely. This facilitates compositional and developmental unity alongside thematic or conceptual consistency.
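
To make the mechanics of this model concrete, the sketch below renders it in code. It is a minimal, hypothetical illustration only: the parameter names, value scales and transformation functions are my own assumptions, not a specification drawn from Sudmalis (2001). A gesture is modelled as a bundle of parameter values; a single parametric permutation corresponds to Brooks’s ‘minimum definition’; development applies several permutations at once; and hybridisation exports the transformational procedures defined for one gesture onto another.

```python
# A minimal, hypothetical sketch of the gestural model described above.
# Parameter names, value scales and transformations are illustrative assumptions only.

from typing import Callable, Dict

Transform = Callable[[float], float]

class Gesture:
    """An aural entity modelled as the simultaneous state of several musical parameters."""

    def __init__(self, params: Dict[str, float]):
        self.params = dict(params)  # e.g. pitch, tessitura, dynamic, density

    def permute(self, name: str, fn: Transform) -> "Gesture":
        """Brooks's 'minimum definition': a single parameter subjected to one permutation."""
        new = dict(self.params)
        new[name] = fn(new[name])
        return Gesture(new)

    def develop(self, fns: Dict[str, Transform]) -> "Gesture":
        """Simultaneous development across several parameters of the same gesture."""
        g = self
        for name, fn in fns.items():
            g = g.permute(name, fn)
        return g

def hybridise(target: Gesture, donor_fns: Dict[str, Transform]) -> Gesture:
    """Export transformational procedures defined for one gesture onto another."""
    applicable = {k: f for k, f in donor_fns.items() if k in target.params}
    return target.develop(applicable)

# Illustrative usage: a set of transformations defined for one gesture is applied to
# that gesture (development) and then exported to a second gesture (hybridisation).
expand = {"tessitura": lambda v: v * 1.5, "dynamic": lambda v: min(v + 0.2, 1.0)}

g1 = Gesture({"pitch": 60.0, "tessitura": 1.0, "dynamic": 0.4})
g2 = Gesture({"pitch": 72.0, "tessitura": 0.5, "dynamic": 0.9, "density": 3.0})

g1_developed = g1.develop(expand)   # gestural development within g1
g2_hybrid = hybridise(g2, expand)   # the same procedures exported to g2
```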

Strategies for linkage

Perhaps the most significant concepts of strategic linkage are descended from Wagnerian Gesamtkunstwerk and the application of the leitmotif. In this form of total art work, methodological frameworks are developed within which the creative artist may develop specific gestural information, but always in the service of the concept, or meaning, of the premise. It is perhaps of little surprise that the successor to Wagnerian Gesamtkunstwerk is found in the plastic arts—most notably film (Prendergast 1977). Linkages between the visual and the auditive in film have, since film’s inception, been sought not only to augment the experience of the consumer of the arts product, but to better and more fully communicate the on-screen drama. Indeed, it was as early as 1949 that Aaron Copland codified a set of aesthetic criteria for music in film that is still used today. In order to ‘make potent through music the film’s dramatic and emotional value’ (1949, p. 28), Copland derived five broad areas in which music could operate in a film:

• music can create a more convincing atmosphere of time and place;
• music can be used to underline or create psychological refinements—the unspoken thoughts of a character, or the unseen implications of a situation;
• music can serve as neutral background filler;
• music can help build a sense of continuity in a film; and
• music can provide the underpinning for the theatrical build-up of a scene, and round it off with a sense of finality.

These five broad areas canvassed by Copland show the communicative power of music; however, it is clear that the music is subservient to the dramatic action and premises of the film. In this model the music is an emotional counterpoint to the film—albeit a highly effective and powerful one.

An earlier attempt to fuse music with image is to be found in Alexander Nevsky, the 1938 collaboration between Sergei Eisenstein and Sergei Prokofiev. Whilst the resultant audio-visual score is oft heralded as a wholly successful example of a gesturally interrelated, parametrically intertwined relationship due to the ‘same motion laying at the base of both the musical and plastic structures’ (Eisenstein 1947 in Prendergast 1977, p. 223), there is a fundamental discrepancy relating to the correspondence of the structures. This discrepancy has to do with the information that is presented on the audio-visual score. Image and music are not shown: selected key frames and musical notation are shown. There is a lack of consistency in relation to the passage of time in the visual domain, and no accounting for the difference between aural and visual perception times. What is apparent is a similarity of shape, but on paper only—not in audio-visual reception. The dangerous opposite of this is a literal translation of image into sound—otherwise known as ‘mickey-mousing’. This technique, quite obviously, takes its name from unadulterated mimicry, in the musical part, of the visual part—a technique that may be commonly found in cartoons. Whilst Prokofiev was keen for convergences and similarities between image and music, they ‘were never crudely synchronised but were intertwined in the complex texture of music and action in a much subtler way’ (Eisenstein 1948, p. 152). Unfortunately, the more subtle means of linkage between the audio and the visual proved to be temporally flawed.

If there is an absence of a generalised theoretical framework for the analysis of ‘musical multi-media’ (Cook 1998, p. vi), there is also a lack of a substantive methodology for the creation of works of this ilk outside of the more intuitive Copland model and the method of occasional convergence. It is in this relative void that a gestural method of composition comfortably sits. In addressing each of the 17 component musical parameters (Solomon 2002) and apportioning them to parameters in the visual domain, parametric leitmotifs are established, all of which are changeable yet regulated by pre-composition (a hypothetical sketch of such a mapping follows the list below). Once the technique has been established, the real, informed act of composition may commence—to give the method meaning (Boulez 1975). Employing concurrent models of gestural and parametric development, and hybridisation across component art forms in the collaborative process, results in new media that is truly intertwined at both the conceptual and the technological (or process) levels. Through this particular method, linkages across disciplines are made at the methodological level, creating a closer relationship between the discrete component art forms than would normally exist in the situation where one art form responds to the other, or is employed for supportive or colouristic purposes. ENKI for flute, digital audio and live electronics was composed for the opening of the International Digital Art Awards (IDAA) 2004 in Launceston, Tasmania, and had a very close parametric linkage to several works in the exhibition. This was achieved by selecting key works (under the guidance of the curator of the exhibition), examining their relative conceptual drivers and technical points of departure, and employing the same principles of development and transformation in musical parameters that function similarly in the visual domain. (Far from being a simplistic and didactic model, this approach leaves considerable scope for the individualist assignment of linkages in order to communicate in the desired fashion using the audio-visual medium.) Although the range of digitally-manipulated print media in the exhibition was large and diverse, there was in evidence a similarity of concept and technique throughout the works. These included:

• organic evolutionary principles;
• parametric specific developments;
• distortion;
• transformation; and
• making the familiar unfamiliar.
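
As a rough indication of how such a method of apportionment might look, the following sketch pairs a handful of musical parameters with visual-domain counterparts. The specific pairings are my own hypothetical assumptions for illustration; they are not the mapping used in ENKI, and Solomon’s seventeen parameters are not enumerated here.

```python
# Hypothetical sketch only: the pairings below are illustrative assumptions,
# not the mapping actually used in ENKI or enumerated by Solomon (2002).

# Musical parameter -> visual-domain counterpart ("parametric leitmotifs").
PARAMETER_MAP = {
    "pitch":        "hue",
    "tessitura":    "vertical placement in the frame",
    "dynamic":      "brightness / contrast",
    "tone colour":  "texture / grain",
    "rhythm":       "editing rate / rate of visual change",
    "articulation": "edge sharpness",
    "density":      "number of layered visual elements",
}

def visual_counterpart(musical_parameter: str) -> str:
    """Return the visual parameter linked to a musical one, if a linkage is defined."""
    try:
        return PARAMETER_MAP[musical_parameter]
    except KeyError:
        raise KeyError(f"No linkage defined for {musical_parameter!r}") from None

# A development applied to 'tone colour' in the music would therefore be mirrored
# as a development of texture/grain in the image, rather than the music simply
# responding intuitively to the finished image:
print(visual_counterpart("tone colour"))  # -> "texture / grain"
```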

ENKI was designed to incorporate elements of a fixed audio part, able to be broadcast, with a live flute part modified in real time by live electronics. The digital audio part comprised numerous flute samples (C flute, bamboo flute and shakuhachi, for example), modified wood sounds (tapping, clapsticks, marimba), sounds of metal (such as cymbals) and short excerpts from extant orchestral works of mine that embrace sounds of nature (Cicada Dusk, Naracoopa), assembled and modified in Pro Tools. These orchestral chunks ultimately manifested themselves in the digital audio part as spatialised repeated figures, creating a larger macro-rhythm. While the digital audio part was broadcast, the flute simultaneously performed from the score. The score had some non-traditional elements but was, overall, notated in a standard fashion. A combination of regular notation and boxes of text, the score contained an analysis of the fundamental compositional drivers that then formed the parametric basis for instrumental improvisation. Simultaneously, the flute signal was passed through an effects unit and treated as raw material for improvisation by a sound designer or mixer. Thus the flute sound heard by the audience consisted of the raw acoustic sound and the treated, manipulated and spatialised sonorities of the amplified flute. Additionally, the flautist was at times required to produce sounds somewhat unfamiliar to traditional flute performance, including tongue clicks, pops and lip kissing, as well as organic sounds (such as breathing), and the evocation of key words: ‘Enki’, ‘Annunaki’ and ‘Ti-it’.
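
The signal routing described above can be approximated offline as a simple mix of three strands: the fixed digital audio part, the raw acoustic flute, and a treated copy of the flute signal. The sketch below is a hypothetical approximation only; a plain feedback delay stands in for the unspecified effects unit, and all names, levels and processing choices are my own assumptions rather than a record of the actual ENKI setup.

```python
# Hypothetical approximation of the ENKI signal routing described above.
# The 'effects unit' is stood in for by a simple feedback delay; all names,
# levels and processing choices are illustrative assumptions.

import numpy as np

SR = 44100  # sample rate in Hz

def feedback_delay(signal: np.ndarray, delay_s: float = 0.35,
                   feedback: float = 0.4, mix: float = 0.6) -> np.ndarray:
    """Very simple delay line standing in for the live-electronics treatment."""
    d = int(delay_s * SR)
    out = np.copy(signal)
    for i in range(d, len(signal)):
        out[i] += feedback * out[i - d]
    return (1 - mix) * signal + mix * out

def mix_performance(fixed_audio: np.ndarray, live_flute: np.ndarray) -> np.ndarray:
    """Sum the fixed digital audio part, the raw flute and its treated copy."""
    n = max(len(fixed_audio), len(live_flute))
    fixed = np.pad(fixed_audio, (0, n - len(fixed_audio)))
    dry = np.pad(live_flute, (0, n - len(live_flute)))
    wet = feedback_delay(dry)                      # real-time treatment (approximated)
    mixed = 0.6 * fixed + 0.7 * dry + 0.5 * wet    # arbitrary balance
    peak = np.max(np.abs(mixed)) or 1.0
    return mixed / peak                            # normalise to avoid clipping

# Example with placeholder material (a drone and a short flute-like tone).
t = np.linspace(0, 4.0, 4 * SR, endpoint=False)
fixed_part = 0.3 * np.sin(2 * np.pi * 110 * t)
flute_part = 0.4 * np.sin(2 * np.pi * 587.33 * t[: 2 * SR])
result = mix_performance(fixed_part, flute_part)
```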

All three key words are essential components in an alternative view of the evolution of Homo sapiens. According to this view (which is fully expanded in the works of Zecharia Sitchin and Rene Boulay, amongst others), the Annunaki descended to Earth and settled in the Sumerian region in order to exploit the natural resources of the planet. The indigenous population (early humans) were used as unskilled labour and as a food source. Unsatisfied with the intelligence and strength of the early humans, the Annunaki set in process a genetic engineering experiment to develop the cognitive capacities of the species whilst simultaneously improving their strength. Charged with this responsibility was Enki, the chief engineer. Ultimately, Enki was successful in creating this stronger and more intelligent strain of human through the confluence of human and technology. Such was Enki’s love and admiration for his creation that he enabled this new human species to breed (which was at odds with his original brief). ‘Ti-it’ refers to the act of creation itself—the term, according to Boulay, is Sumerian for ‘new life’, but has historically been mistranslated as ‘rib’. The use of ‘Enki’ as the title of the work relates to the use of human performers and the technological aspect of the performing media. The cross-fertilisation of sound-type output (whereby the flute is treated electronically in real time, the digital audio consists of some ‘pure’ flute sounds, and the performer uses breath sounds and phonemes) unifies the organic sound with the processed, electronic sounds, thereby creating one aural entity.

This is reminiscent of Enki’s own experiment, and parallels the digitally-manipulated print media showcased in the exhibition. The transformational and developmental aspects of ENKI were related directly to the transformational and developmental principles that underpinned the key works of the exhibition.[1]

Additionally, important convergences occur between the composition of image within the limitation of the frame and the architectural aspect of form in the composition. The relationship between musical form and visual composition is also evident in works such as CageFor by Ruslanus Baranauskas.[2]

Here a familiar, if hybridised, humanoid form is rendered unfamiliar through the processes of juxtaposition of body parts, the employment of a limited colour palette, and dimensional modelling based on foreground, midground and background layers. These aspects of CageFor are paralleled in ENKI: the familiar timbre of the flute is placed contiguously with exotic flute samples and the transformed audio of the regular C flute, with the raw materials primarily limited to flute sounds or breath and text sounds. The spatialisation of the sound (via the broadcastable digital audio and live electronics in combination with the directionality of the acoustic flute) results in location-specific relationships between musical layers reminiscent of the interplay between foreground, midground and background layers in Baranauskas’ work.

Future research and applications

Present conditions allow for a degree of cross-disciplinary and cross-cultural collaboration that was barely imaginable (if imaginable at all) fifty years ago (Fischer 2003). However, while technological means for the production and dissemination of new arts products have fostered the emergence of new forms of creativity and media, it appears that in the case of audio-visual art, defined and discrete convergences have been generally neglected. Instead, an emphasis has been placed on the critical theory of the visual component.

Future research would seek to correct this discrepancy by analysing the parametric and symbolist synergies of key audio-visual works. Within this context, I also believe that exciting research can be undertaken in the composition of such works and their public presentation. Projects currently in progress include employing the gestural method within the short film format, and the investigation of a new form: ‘music film’ (a development of emphasis within film music). I believe it desirable and possible for a specialist arts niche to be developed using the methodologies discussed. This specialist niche would draw upon Tasmania’s expertise and character in nature and wilderness print media and moving image, and combine it with an audio component that is of equal importance in facilitating meaning. Representations in the visual domain would be transmogrified into audio reflections—not merely sympathetic counterparts or mood facilitators, but equal component art forms that communicate more to the spectator than either one would alone. This specialist niche, with nurturing, has the potential to be a truly Tasmanian form dealing with Tasmanian themes, yet able to communicate internationally.

[1] For a discussion of the key works and further musical analysis, please see my paper Music for an Exhibition: ENKI and the IDAA (Sudmalis 2004).

[2] CageFor may be viewed at http://www.internationaldigitalart.com/IDAA/2004idaaGal1/pages/e_CageFor.htm.


References
Boulay, R. A. (1990). Flying Serpents and Dragons: The Story of Mankind’s Reptilian Past. Clearwater, FL: Galaxy Books.

Boulez, P. (1971; translation by S. Bradshaw & R. R. Bennett, 1975). Boulez on Music Today. London: Faber & Faber.

Brooks, R. (1980). Structural Functions of ‘Musical Gesture’ as Heard in Selected Instrumental Compositions of the Twentieth Century: a Graphic Analytic Method. Ann Arbor, MI: University of Michigan.

Cook, N. (1998). Analysing Musical Multi-Media. Oxford: Clarendon Press.

Copland, A. (1949). Tips to moviegoers: Take off those ear-muffs. The New York Times, 6 November.

Eisenstein, S. (1948; translation by R. Griffith, 1970). Notes of a Film Director. New York: Dover.

Hasty, C. (1986). On the problem of succession and continuity in twentieth century music. Music Theory Spectrum 8, pp. 58–74.

Manovich, L. (1996). The death of computer art. The Net Net [WWW site, cited 5/2/04]. Available from Internet.

Marek, Z. (1981). The Perception of a Musical Work from the Aspect of Anticipation of the Further Course of the Succession of Sounds. Presented at the Eighth International Seminar on Research in Music Education, Dresden, German Democratic Republic, 15–22 July 1980. Published in Council for Research in Music Education 66–7, Spring 1981.

Paul, C. (2003). Digital Art. London: Thames & Hudson.

Prendergast, R. (1977). Film Music: a Neglected Art. New York: Norton.

Solomon, L. (2002). Music Parametric Analysis [WWW site, cited 15/2/04]. Available from Internet.

Stockhausen, K. (1963). Texte zur elektronischen und instrumentalen Musik, vol. 1, Aufsätze 1952–1962 zur Theorie des Komponierens. Cologne: M. DuMont Schauberg.

Sudmalis, D. (2001). Gesture in Composition: a Model of Composition Involving Gesture, Gestural and Parametric Development, and Hybridisation as Examined in Six Original Compositions. PhD thesis, Sydney Conservatorium of Music/University of Sydney.


Address for correspondence
Dr David Sudmalis
School of Visual and Performing Arts
University of Tasmania
Academy of the Arts
Locked Bag 1-362
Launceston TAS 7250
AUSTRALIA
David.Sudmalis@utas.edu.au

Above copied from: http://www.utas.edu.au/arts/imaging/sudmalis.pdf.
