Thursday, December 13, 2007

Indeterminacy and chance

From the earliest days of algorithmic art, the probabilistic approach to art generation has been very popular. Its modus operandi can be summarized as follows: (1) a space of possibilities is defined in explicit, mathematical terms; (2) a probability distribution is defined over this space; (3) an algorithm is executed which draws random samples from the space, in accordance with the probability distribution.
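As a minimal sketch of this three-step recipe (the grid space, its dimensions, and the probability used here are arbitrary illustrations, not taken from any particular artwork):

    import random

    # (1) The space of possibilities: all 8 x 8 grids of black/white cells.
    ROWS, COLS = 8, 8

    # (2) The probability distribution: each cell is independently black
    #     with probability 0.5, i.e. a uniform distribution over the space.
    P_BLACK = 0.5

    # (3) Draw a random sample from the space, according to the distribution.
    grid = [[random.random() < P_BLACK for _ in range(COLS)] for _ in range(ROWS)]

    for row in grid:
        print("".join("#" if cell else "." for cell in row))

Every run of the program yields a different point in the same space; the "style" resides entirely in steps (1) and (2), the individual piece in step (3).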

The idea of chance-based art was first broached in the early twentieth century by Lewis Carroll, Marcel Duchamp, and Tristan Tzara. Its consistent application was pioneered in the 1950s by Ellsworth Kelly, François Morellet and John Cage; it was continued by a large number of artists, including Mark Boyle, Karl-Otto Götz, Jackson Mac Low, Kenneth Martin, Manfred Mohr, Frieder Nake, Georg Nees, A. Michael Noll, Peter Struycken, Zdenek Sykora, Herman de Vries, and Ryszard Winiarski.

Different approaches to probabilistic art

Random Sampling

The computer may be used to make abstract art at a new level of abstraction. One may implement formal definitions of visual styles, and then use chance operations to sample pieces within such a style. This was done, for instance, by Martin, Mohr, Nake, Nees and Sykora; Noll's Mondriaan simulation is another obvious instance. The best representative of this approach is perhaps Harold Cohen's series of AARON programs, which was developed somewhat later and is significantly more complex.
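A toy version of this idea, hedged accordingly: the following sketch defines a rudimentary "style" of scattered horizontal and vertical strokes and samples pieces within it. It is only loosely inspired by Noll's Mondriaan simulation, not a reconstruction of his program; the canvas size, stroke count and stroke lengths are invented for the example.

    import random

    # A crude formal style: a fixed number of short horizontal and
    # vertical strokes scattered over a square canvas.
    CANVAS_SIZE, NUM_STROKES = 100.0, 40

    def sample_piece():
        """Draw one piece at random from the style's space of possibilities."""
        strokes = []
        for _ in range(NUM_STROKES):
            x = random.uniform(0, CANVAS_SIZE)
            y = random.uniform(0, CANVAS_SIZE)
            length = random.uniform(2, 10)
            orientation = random.choice(["horizontal", "vertical"])
            strokes.append((x, y, length, orientation))
        return strokes

    piece = sample_piece()   # each call samples a new piece in the same style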

Programs of this sort are sometimes viewed as models of a human artist. Because the random sampling operation creates a certain amount of unpredictability within the program's style, it may be viewed as the locus of "creativity" (Nees, 1969) or as a placeholder for "intuition" (Nake, 1974). This is misleading; real variety and unpredictability depend on the complexity of the program's style.

Celebrating Chance

Several artists made work which does not merely employ random sampling operations, but uses chance in a more prominent role: to avoid choice and to symbolize arbitrariness. Work of this sort is created by drawing random samples from very elementary spaces (all possible grids, all possible dot configurations, all possible line configurations), or from spaces which were themselves chosen at random. Examples include Duchamp's Erratum Musical and pieces by Morellet, Cage and De Vries. Technically, such work is "algorithmic art". Content-wise, it is close to the traditions of the readymade and the monochrome. All of the artists just mentioned also made non-probabilistic work which shows this: Duchamp invented the readymade, and emphasized how the choice of his readymades was "based on a reaction of visual indifference with at the same time a total absence of good or bad taste ... in fact a complete anesthesia" (Duchamp, 1961). One of Morellet's favorite themes is the empty grid. Cage wrote silent pieces (4'33", 0'00"), and lectured on "Nothing". De Vries painted white monochromes and published an empty book.

The intrinsic connection between chance art and the monochrome is well known from information theory textbooks: for human perception, strictly uniform patterns and strictly random patterns provide similarly boring, almost indistinguishable experiences; total order and total disorder are equivalent. For instance: randomly colored grids are perceptually indistinguishable from each other. If the cells of a random black-and-white grid are sufficiently small, it looks uniformly colored: grey.
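The grey-grid claim is easy to check numerically. A quick sketch (the grid size is an arbitrary choice):

    import random

    # A large random black-and-white grid: 0 = white, 1 = black.
    N = 512
    grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

    # When the cells are too small to resolve individually, the perceived
    # tone approximates the mean cell value: about 0.5, i.e. middle grey.
    mean = sum(sum(row) for row in grid) / (N * N)
    print(f"average tone: {mean:.3f}")   # close to 0.5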

Many chance-artists made very similar-looking pieces. The content of such pieces may nonetheless be radically different: Kelly's work is about perception; Morellet is ironically philosophical ("esprit"); Cage and De Vries used chance in an almost figurative way, evoking nature; Struycken is concerned with objectivity (being right).

Art after the End of Art

To be aware of art history as a meaningless process of stylistic innovation, and yet to be part of that tradition, implies the desire to make a qualitatively different step: not just to add a few more styles, but to transcend the whole process. (Constructive postmodernism.) An unusually explicit articulation of this ambition is the idea of the "arbitrary artwork" – the piece which is sampled at random from the space of all possible artworks, without any subjective aesthetic decisions.

Hard-core chance art as pioneered by Morellet and De Vries (random grids etc.) does not carry out this idea, but it symbolizes it. That was a necessary first step. To actually produce random artworks requires a long-term scientific research project: to develop an explicit analytical description of the space of all possible artworks, not in terms of pixel grids, but in terms of Gestalt structures as perceived by human observers. The IAAA project Artificial is a modest attempt in this direction. [Cf. Scha (1988), Harry (1992), Van Weelden (1994), Scha (1998), Scha (2001).]

copied from: http://radicalart.info/AlgorithmicArt/chance/index.html

Chance (What is it? Does it exist? Can we fake it?)

Is it possible to design conceptually definite processes with unpredictable outcomes? Can indeterminacy be implemented without invoking "nature", and without shifting artistic decisions to curators, performing artists, or the public itself? The obvious answer to this challenge is the use of chance procedures – a method that may be summarized as follows: (1) define a space of possibilities in explicit, mathematical terms; (2) define a probability distribution over this space; (3) draw random samples from the space, in accordance with the probability distribution.

This probabilistic art generation strategy highlights one artistic problem with relentless clarity: how to define the space of possible outcomes (and the concomitant probability distribution)? This problem is discussed in our page on chance art. The strategy also raises some slightly esoteric philosophical/physical questions: What is chance, and does it exist? For the practice of chance art, the answers to these questions are largely immaterial, but for an appreciation of its conceptual dimensions, they are indispensable.

What is chance?

The common-sense notion of chance refers to real-life unpredictability. (William Wollaston, 1722: "Chance seems to be only a term, by which we express our ignorance of the cause of any thing.") For predictions about an ongoing sequence of events that must be based on observations of an initial segment, a mathematical correlate of unpredictability can be developed: unpredictability = the absence of regularity = the impossibility of a gambling strategy. This analysis was first proposed by Richard von Mises in 1919. It was perfected by Abraham Wald (1936/1937) and Alonzo Church (1940), criticized by Ville (1939), and saved by Per Martin-Löf (1966).
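The "impossibility of a gambling strategy" can be made concrete in code. In the sketch below (the prediction rule is the simplest conceivable one, chosen purely for illustration), a fixed betting rule wins every bet against a perfectly regular sequence but does no better than chance against coin flips:

    import random

    def score(bits, predict):
        """Fraction of bits correctly guessed by a next-bit prediction rule."""
        hits = sum(predict(bits[:i]) == bits[i] for i in range(1, len(bits)))
        return hits / (len(bits) - 1)

    # A trivial gambling strategy: bet that the next bit flips the last one.
    predict = lambda prefix: 1 - prefix[-1]

    periodic = [i % 2 for i in range(10000)]             # 0101...: fully regular
    coin = [random.randint(0, 1) for _ in range(10000)]  # random bits

    print(score(periodic, predict))   # 1.0  -- the strategy wins every bet
    print(score(coin, predict))       # ~0.5 -- no better than chance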

A different perspective on this matter, based on Shannon's information theory, is due to Andrey Kolmogorov, who focussed directly on the absence of regularities in initial segments of a random sequence. Since any regularity in a sequence allows it to be encoded in a more efficient way, randomness may be equated with incompressibility. This idea was further developed by Gregory Chaitin. (Cf. Li & Vitanyi, 1993; Calude, 1994; Chaitin, 2001.)
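The incompressibility view suggests a rough experiment: a general-purpose compressor shortens regular data but not random data. A sketch using Python's zlib (a practical stand-in only; Kolmogorov complexity itself is uncomputable):

    import random
    import zlib

    regular = bytes(i % 256 for i in range(100_000))              # highly regular
    noise = bytes(random.randrange(256) for _ in range(100_000))  # random bytes

    print(len(zlib.compress(regular)))   # a few hundred bytes: very compressible
    print(len(zlib.compress(noise)))     # ~100,000 bytes: essentially incompressible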

Randomness implies various kinds of statistical uniformity – and for many practical purposes, that is all one needs from a "random" sequence. Effective criteria for statistical uniformity were first proposed by Kermack & McKendrick (1936/1937) and Kendall & Babington Smith (1938). See Meyer (1956) for a bibliography of early work in this area. The current state of the art is the Diehard test-suite (cf. Marsaglia & Tsang, 2002).
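A single test in this tradition, as an illustrative sketch (the Diehard suite is far more extensive and stringent): a chi-squared frequency test checks whether the digits 0-9 occur about equally often.

    import random
    from collections import Counter

    N = 100_000
    digits = [random.randrange(10) for _ in range(N)]
    counts = Counter(digits)
    expected = N / 10

    # Chi-squared statistic over the ten digit frequencies.
    chi2 = sum((counts[d] - expected) ** 2 / expected for d in range(10))
    print(f"chi-squared: {chi2:.2f}")
    # With 9 degrees of freedom, values above about 21.7 would be
    # suspicious at the 1% significance level.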

Does it exist?

Unpredictability is often operationalized through uncontrolled physical processes, such as casting dice, tossing coins, and spinning the roulette wheel. For practical purposes, this works fine. We know, however, that events of this sort can in principle be predicted, by measuring initial conditions and applying the laws of classical mechanics. For roulette wheels this is even practically feasible (Bass, 1985). But prediction becomes increasingly difficult if we look at modern devices for random number generation, which generate fast bit streams from small-scale physical phenomena such as thermal noise (electric potential fluctuations in conducting materials) or atmospheric radio noise (cf. random.org).

Physical measurements at the quantum level are not predicted by any known theory; they are thus "random" in an unusually strong sense of that word. It is sometimes asserted that they are absolutely random, i.e., that we know that no conceivable deterministic theory could predict their outcomes. Von Neumann (1932) presented a formal proof to this effect, which was, however, based on an incorrect assumption (cf. Hermann, 1935; Bell, 1966). In the meantime, experimental evidence for the reality of quantum entanglement has accumulated, which implies that quantum measurements cannot be accounted for by local hidden variables. HotBits is an online source of random numbers which uses quantum effects: radioactive decay.

Can we fake it?

An old challenge in computer science: can a deterministic computer be programmed to yield number sequences which are "random" in the mathematical sense of that word? In the strict sense demanded by Von Mises and Kolmogorov, this is obviously out of the question: the generating algorithm defines both a perfect gambling strategy and an extremely efficient compressed code. (John von Neumann, 1951: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.") Mere statistical uniformity, on the other hand, is a difficult but not impossible challenge. Divisions between large incommensurable numbers often yield sequences with reasonable statistical properties (Knuth, 1969). Several other methods have been developed over the years; see Coddington (1996) for an overview. The current state of the art is the "Mersenne Twister" (Matsumoto & Nishimura, 1998).
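Two small illustrations of this point. Python's standard random module implements the Mersenne Twister, and seeding it exposes exactly the determinism behind Von Neumann's quip; the linear congruential generator below is one of the classical arithmetical methods (its constants are the widely used Numerical Recipes values):

    import random

    # Seeding the Mersenne Twister makes the "random" sequence reproducible.
    random.seed(42)
    first = [random.random() for _ in range(3)]
    random.seed(42)
    second = [random.random() for _ in range(3)]
    print(first == second)   # True: fully determined by the seed

    # A minimal linear congruential generator: x -> (a*x + c) mod m.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        while True:
            seed = (a * seed + c) % m
            yield seed / m   # floats in [0, 1)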

copied from: http://radicalart.info/AlgorithmicArt/chance/chance.html

For more information see:

http://sigchi.org/chi97/proceedings/short-talk/ak.htm

http://www.lcdf.org/indeterminacy/

http://www.art-newzealand.com/Issues21to30/chance.htm
