Abstract
Video is an electronic and audiovisual medium based on signal processes. Since the mid-1960s, it has been primarily artists from the areas of music and film who have explored the audiovisual and processual features of video. With their video experiments, the composer Nam June Paik, the violinist Steina Vasulka, and the trained filmmaker Woody Vasulka, for example, intervened in the internal structure of electronic images and sounds, exchanged sound and image signals, and in the 1970s edited video signals with auxiliary devices such as the processor and the synthesizer. Video artists have used the medium's specific features to create multifaceted variations of image-sound connections, by means of electromagnetic manipulation as well as by interfacing analog and, later, digital devices. These features allow possibilities of variation to emerge in video that correspond to principles of musical composition. Since the 1980s, the audiovisual medium of video has also proved to be continuously upgradeable through the use of digital processes. Since the 1990s, large-format video installations in particular have been introduced into the art and museum context, and video has been employed in mixed media environments such as virtual, augmented, and mixed reality.
In terms of the history of technology, video emerged with the gradual introduction of the battery-operated Portapak camera by Sony in 1965, the development of fast-forward and rewinding magnetic tapes, and videocassette recorders (VCRs) around 1969. It was not until 1971 that portable video technology with videotapes that could record, play back, rewind, and fast-forward was available, and from 1973 there was a half-inch VCR for amateurs.
Video is in principle an electronic medium like television, with which it shares the basic technical properties of signal transmission and a scan-line image format. One basic difference is the way this technology is used. Television is adjusted so that during a broadcast, the scan lines are synchronized in such a way that image interference (i.e., interruptions and delays in the signal transmission) does not occur; instead, a constant and coherent image impression is generated. In contrast, video is an open, modular system consisting of different components. In video, signals can be recorded, transmitted, and broadcast in various ways; they can even be generated in the devices themselves (e.g., in a synthesizer). Even in the early days of this medium, in collaboration with engineers, video pioneers such as Nam June Paik, Steina and Woody Vasulka, Dan Sandin, and Gary Hill explored the variety of possibilities of intervening in the synchronization of the signal processes and of combining different devices. The processual structure of video allows multiple connections of the devices as well as the exchange of audio and video signals, and owing to these specific features video has proven to be particularly upgradeable for experimental artistic developments in the area of electronic media.
Like television images, video images are kept in constant motion and reflect the flux of electronic signals. In his didactic video How TV Works (USA, 1977), Dan Sandin demonstrates how light information is transformed into signals inside the camera and how these signals circulate between the recording and the playback devices (in a so-called closed circuit). The electronic signal runs vertically and horizontally and both constructs and reconstructs electronic images in the camera and on the screen. Each video image is assembled from two interlaced half fields that consist of temporally displaced even and odd lines. In this continuous scan-line process, the video signal has to be stopped at the end of each line and synchronized as compound image information; only then does a recognizable raster image appear on the screen. If this step did not occur, video would consist of open lines in horizontal drift.
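This scanning principle can be sketched in a few lines of code. The following Python fragment is a minimal illustration with arbitrary dimensions and function names; it is not tied to any particular video standard or device. It weaves two half fields into one raster frame and shows how the absence of line synchronization turns the picture into drifting open lines.

```python
# Minimal sketch, assuming a toy raster of 12 x 16 pixels (PAL/NTSC use far more lines).
import numpy as np

HEIGHT, WIDTH = 12, 16

def interlace(even_field: np.ndarray, odd_field: np.ndarray) -> np.ndarray:
    """Reassemble a full frame from two temporally displaced half fields."""
    frame = np.zeros((HEIGHT, WIDTH))
    frame[0::2] = even_field   # lines 0, 2, 4, ... from the first field
    frame[1::2] = odd_field    # lines 1, 3, 5, ... from the second field
    return frame

def horizontal_drift(frame: np.ndarray, shift_per_line: int) -> np.ndarray:
    """Without line synchronization each line starts late: the raster drifts apart."""
    return np.stack([np.roll(line, i * shift_per_line)
                     for i, line in enumerate(frame)])

even = np.random.rand(HEIGHT // 2, WIDTH)
odd = np.random.rand(HEIGHT // 2, WIDTH)
stable = interlace(even, odd)            # synchronized: coherent raster image
drifting = horizontal_drift(stable, 2)   # unsynchronized: open lines in horizontal drift
```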
The first experiments with television and video took place in the early and mid-1960s. Due to the lack of recording technology, they were conducted with synchronous or time-delayed signal transmission in a closed circuit of input and output. Video pioneers such as Nam June Paik, Steina and Woody Vasulka, and, first and foremost, Skip Sweeney discovered the possibility of delayed feedback early on and achieved strong multiplications of electronic wave forms through video feedback.[1]
They also began intervening in the line configuration by undertaking deliberately non-synchronized changes of direction in the constant vertical and horizontal movements of the video signal, interrupting the broadcasting of image and sound signals, and generating deviations from the televisual raster image (in the standardized formats PAL, NTSC, and SECAM). For this purpose, a television, video cameras, and a recorder were modularly combined, for example, with electromagnets (Paik’s Demagnetizer, 1965), synthesizers (the Paik/Abe Synthesizer, 1969), and image processors (the Rutt/Etra Scan Processor, 1973). This approach was taken with the intention, on the one hand, of testing the variability of electronic wave forms in frequency modulation, by means of which the image content is dissolved into abstract graphic patterns and three-dimensional forms, and, on the other hand, of obtaining more information on the transformation possibilities of processual forms of progression (i.e., the transformation of audio into video, changes of the direction of signal motion).
A synthesizer either generates (synthesizes) wave forms (audio and video) internally with oscillators or it modulates existing signals. Steina and Woody Vasulka initially employed audio synthesizers as an interface in order to transform video signals into audio signals. In the process, image signals are translated into sound, and the sound is controlled by images. Video synthesizers developed especially for image processing, for example the Paik/Abe Synthesizer (1969) built by Nam June Paik and Shuya Abe, as well as Stephen Beck’s Direct Video Synthesizer (1970) and Eric Siegel’s Electronic Video Synthesizer (1970), separate video from camera-based optical recording devices by generating signals internally, as movements whose temporal progression can moreover be made visible and audible. In these video synthesizers, wave forms can be produced by means of modularly connectable oscillators and, in this synthesis, create new forms that occasionally have a three-dimensional effect.
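The two roles of a synthesizer named above, internal generation by oscillator and modulation of an existing signal, can be illustrated with a small sketch. The following Python fragment is hypothetical; all names, frequencies, and the choice of amplitude modulation are illustrative assumptions and do not reproduce any specific synthesizer design.

```python
# Minimal sketch, assuming a generic sample rate and a sine oscillator.
import numpy as np

SAMPLE_RATE = 48_000  # samples per second (assumed)

def oscillator(freq_hz: float, duration_s: float) -> np.ndarray:
    """Internally generate a sine wave form, as an audio or video oscillator would."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def modulate(signal: np.ndarray, carrier: np.ndarray, depth: float) -> np.ndarray:
    """Modulate an existing signal with the oscillator output (simple amplitude modulation)."""
    return signal * (1.0 + depth * carrier[: len(signal)])

camera_signal = np.random.rand(SAMPLE_RATE)        # stand-in for an external camera signal
wave = oscillator(freq_hz=440.0, duration_s=1.0)   # internally synthesized wave form
processed = modulate(camera_signal, wave, depth=0.5)
```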
A synthesizer can therefore be integrated into the variable structure of video because it processes the information coming from different modules, all of which have input and output connections. The signals that pass through the different modules can ultimately be recorded and played back. It should be emphasized that the color channels (red, green, blue) are processed individually, which heightens the variety of the processes and combination possibilities.
In the late 1960s, engineers around the world, often in close collaboration with artists, began developing different models of synthesizers.[2] Although many of these devices have been forgotten, because of the success of the video artist Nam June Paik, the Paik/Abe Synthesizer is today one of the best-known video devices. However, Paik did not use his synthesizer for the internal generation of audio and video, instead working with camera input and external image material.[3]
At the WGBH television studio in Boston, the Paik/Abe Synthesizer reached the public for the first time in a four-hour broadcast of Video Commune (August 1, 1970). In this interactive performance by Paik, modified television images to which songs by the Beatles had been added were broadcast live. This performance was preceded by recorded experiments with the Paik/Abe Synthesizer at the WGBH television studio in collaboration with its director, David Atwood: 9/23/69: Experiment with David Atwood (1969). In the videotape Global Groove (1973) as well, Paik designed a kaleidoscope of media communication by means of the decomposition and recomposition of excerpts from television broadcasts, theater documentations, commercials, and a Fluxus performance by Charlotte Moorman (TV Cello). He transformed the material, which had been altered through magnetic manipulation, feedback, synthesizer, and processor, into a collage of musically structured flux motion that features the interval-like variation and cluster-like superimposition of different processing operations (see the work description).
In the early 1970s, video processors were constructed by engineers in a close exchange with video artists. They serve to control the electric voltage and bring about signal variations that cause the deflection of the individual scan lines. In contrast to the synthesizer, which in principle proceeds compositionally, generates image and sound, and links different devices, a processor analyzes the smallest units in the video, its wave forms, and in this way controls the image.
The Rutt/Etra Scan Processor,[4] developed in 1973 by Steve Rutt, Bill Etra, and Louise Etra and used, for example, by Nam June Paik, Gary Hill, and Steina and Woody Vasulka, is particularly suited for video analysis, that is, for the control and modulation of electric signals. In the Scan Processor, the brighter parts of the image are lifted up strongly or slightly in their temporal progression according to the voltage, causing the horizontal lines to deflect vertically and sculptural forms to be generated. Abstract figurations are produced from videographic scan lines through the addition of voltage. In this way, in Vocabulary (1973), Woody Vasulka had separate image areas flow together to produce new forms based on equal brightness and color values. Gary Hill employed other functions of the device in Picture Story (1979), in which he reduced and enlarged keyed-in (that is, cut-out and inserted) image parts and transposed the top-to-bottom and right-to-left relationships in the overall image.
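The deflection principle described here can be approximated in code. The following Python sketch is only an illustration of the idea that brightness lifts each scan line vertically; it does not model the actual Rutt/Etra circuitry, and all names, dimensions, and gain values are arbitrary assumptions.

```python
# Minimal sketch: brighter pixels lift their scan line upward, producing a relief
# of deflected line contours from a flat raster.
import numpy as np

def deflect_scan_lines(luma: np.ndarray, gain: float) -> list:
    """For each horizontal line, return its vertical position minus a
    brightness-proportional displacement (higher voltage, larger lift)."""
    deflected = []
    for y in range(luma.shape[0]):
        baseline = float(y)               # original vertical position of the line
        lift = gain * luma[y]             # per-pixel displacement derived from brightness
        deflected.append(baseline - lift) # lifted upward in screen coordinates
    return deflected

luma = np.random.rand(240, 320)                 # stand-in for a grayscale video field
contours = deflect_scan_lines(luma, gain=20.0)  # list of displaced line profiles
```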
Exemplary use of the functions of the Rutt/Etra Scan Processor is made in Steina and Woody Vasulka’s video Noisefields (1974). In this work, additional electronic information (electronic snow) is keyed into a circular form that has been recorded by camera and processed in such a way that the impulse movement of the signal can simultaneously be seen and heard. In the process, the image content is determined by the modulation of unformed electronic oscillation processes, in other words, video noise. Noise, as a formless electronic basis that contains all frequencies in equal measure, thus holds the potential for auditive and visual configuration.
This effect of feeding the video image back into its electronic raw material in sound is intensified by a video sequencer, which regulates the deflection frequency of alternating image fields and switches the noise-saturated audiovisual information between positive and negative at variable speeds. A noisy image-sound impression is produced, an electronic flicker effect. Noisefields thus presents the elemental starting point of electronic signal processes.
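The two operations described for Noisefields, keying noise into a circular form and switching the result between positive and negative at a variable rate, can be sketched as follows. The Python fragment is purely illustrative; mask shape, frame size, and switching period are assumptions, not a reconstruction of the Vasulkas' setup.

```python
# Minimal sketch: circular key filled with electronic "snow", then inverted on
# alternating stretches of fields to approximate the flicker effect.
import numpy as np

HEIGHT, WIDTH = 240, 320

def circular_key(frame: np.ndarray, noise: np.ndarray) -> np.ndarray:
    """Replace the pixels inside a circular mask with video noise."""
    y, x = np.ogrid[:HEIGHT, :WIDTH]
    mask = (x - WIDTH / 2) ** 2 + (y - HEIGHT / 2) ** 2 < (HEIGHT / 3) ** 2
    return np.where(mask, noise, frame)

def flicker(frame: np.ndarray, field_index: int, period: int) -> np.ndarray:
    """Invert the frame on alternating groups of fields (positive-negative switching)."""
    return 1.0 - frame if (field_index // period) % 2 else frame

camera_frame = np.random.rand(HEIGHT, WIDTH) * 0.2 + 0.4   # stand-in for the recorded circle image
noise = np.random.rand(HEIGHT, WIDTH)                      # electronic snow
keyed = circular_key(camera_frame, noise)
fields = [flicker(keyed, i, period=3) for i in range(12)]  # variable-speed switching
```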
The audiovisual quality of video lies in the exchange relation between audio and video signals. Sound signals, which are produced by means of an audio synthesizer, can be translated into image signals and can therefore control the visual phenomena of video. Conversely, the electronic information contained in the video signals can at the same time be realized acoustically and visually. In video, one can see what one hears and hear what one sees.
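This reciprocity can be illustrated by treating one and the same stream of values as both sound and picture. The following Python sketch is a deliberately simplified assumption (folding a one-dimensional signal into scan lines and unfolding it again) rather than a description of any actual device.

```python
# Minimal sketch: the same signal values can be read as an audio wave form or
# laid out line by line as an image.
import numpy as np

LINES, SAMPLES_PER_LINE = 240, 320

def signal_to_image(signal: np.ndarray) -> np.ndarray:
    """Fold a one-dimensional signal into scan lines: sound becomes picture."""
    return signal[: LINES * SAMPLES_PER_LINE].reshape(LINES, SAMPLES_PER_LINE)

def image_to_signal(image: np.ndarray) -> np.ndarray:
    """Unfold an image back into a single stream of values: picture becomes sound."""
    return image.reshape(-1)

tone = np.sin(np.linspace(0, 2000 * np.pi, LINES * SAMPLES_PER_LINE))  # an audio-like oscillation
picture = signal_to_image(tone)       # the tone rendered as horizontal bars
playback = image_to_signal(picture)   # the same values read out again as sound
```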
In the early 1970s, Steina and Woody Vasulka demonstrated this reciprocal regulation of audio and video impulses in numerous experiments. Soundgated Images (1974) is an example of the simultaneous generation of image and sound, and Soundsize (1974) modulates a television test pattern, with electronic sound determining the format and the form of visual realization. Heraldic View (1974) works with a pattern generated internally by oscillators that is laid over a camera image. Changing the voltage in the cable-connected audio synthesizer causes abrupt shifts in the synthetic image, which interacts with the keyed-in elements of the camera image and triggers unpleasant disorientation in one’s perception. In Full Circle (1978), Gary Hill demonstrated how sound oscillation produced with his own voice can be depicted visually. In the video, the wave forms of the pattern-generating oscillators as well as those of his voice can be simultaneously heard and seen.
A further audiovisual potential of video lies in its treatment of noise, whereby the video signal can produce either an auditive or a visual representation of its raw material, noise.[5] On the basis of this potential, early video pioneers recognized a structural kinship between the generation and processing of electronic signals and the principles of music composition. Their interest lay in the variability of the electronic signal, such as in the variation and repetition of a pattern and in the interaction between various instruments and devices.[6] The interconnection of several devices not only heightens the diversity of electronic visuality or audiovisual representation in the pattern, but also opens up the possibility of creating multilayered abstract forms that interact in temporal progression on the represented audio and video levels. The closer connection between video and music (compared to other image media such as film and photography) follows, on the one hand, from the technical foundation in video noise, which has the potential of audiovisual configuration, and, on the other hand, from the devices’ possibilities of modular composition: not only do they work together like musical instruments, they can actually interact, as Steina Vasulka demonstrates in Violin Power (1970–1978).
In particular the trained composer Nam June Paik and the trained violinist Steina Vasulka dealt in their videos with issues related to structural correspondences between music and video and considered video an extension of their musical practice. Paik explored the variability of vertical-horizontal image motion and changed the synchronization of the signals like variations on a musical theme.[7]
Steina Vasulka saw the connection between music and video primarily in the possibility of transferring the movement of the instrument being played to video modulation. She realized this type of interaction between image and sound in her audio-video performances in Violin Power. In this work, the movement of the bow across the strings of the violin, which is being played live, directly deflects the signal and thus shifts the position of the video image of the performance, which is simultaneously being recorded and played back, so that the artist, as it were, plays violin and video at the same time. By including the scan processor, sequencer, and keyer, she produces not only variations, but also a complexity of videographic movements. This interplay of several representational levels of the source information is equivalent to musical polyphony and becomes most vivid when image and sound are produced from the same source or, as in Violin Power, are processed in parallel.
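As a rough sketch of such an interaction, the following Python fragment maps the amplitude of a live audio signal onto a vertical deflection of the video frames being played back. Everything here is hypothetical: the envelope, the gain, and the frame sizes are illustrative and do not reconstruct the actual Violin Power setup.

```python
# Minimal sketch: an audio amplitude envelope shifts the position of each video frame.
import numpy as np

def audio_envelope(audio: np.ndarray, frames: int) -> np.ndarray:
    """Reduce the audio signal to one amplitude value per video frame."""
    chunks = np.array_split(np.abs(audio), frames)
    return np.array([chunk.mean() for chunk in chunks])

def deflect_image(frame: np.ndarray, amplitude: float, gain: float) -> np.ndarray:
    """Shift the whole frame vertically in proportion to the audio amplitude."""
    return np.roll(frame, int(gain * amplitude), axis=0)

violin = np.sin(np.linspace(0, 440 * 2 * np.pi, 48_000)) * np.linspace(0, 1, 48_000)
frames = [np.random.rand(240, 320) for _ in range(25)]        # stand-in video frames
envelope = audio_envelope(violin, len(frames))
deflected = [deflect_image(f, a, gain=120) for f, a in zip(frames, envelope)]
```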
In comparisons of musical and videographic composition, it should be noted that, due to the open structure of processual images, the technical realization of the analog medium of video allows, as is the case in music, infinite variations on the image pattern. This capability also distinguishes video from the seriality and repetition in film and photography.
In the 1980s, the development of video stood at the threshold of analog and digital devices. Characteristic of this period is the use of keyers for the control and arrangement of multilayered image segments whose textures can be cut out, that is, keyed in and out. As early as the beginning of the 1970s, there were keyers with digital components which allowed the variable mixing of different video sources (foreground-background relationships) in a single video output.[8] The first testing of digital computers for image processing began in the late 1970s. In 1978, the Digital Image Articulator constructed by Jeffrey Schier and Woody Vasulka allowed one to change the format, scale, resolution, and size of the image field in individual programming steps, as well as to determine the color values for individual image positions in real time. The device, which is designed for internal image generation on an algorithmic basis, also processes external image sources that are transformed into data by means of an analog-digital converter. At the digital processing level, the electronic signal is sampled and represented in discrete units.
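The decisive digital step, sampling and quantization by an analog-digital converter, can be shown in a few lines. The following Python fragment is a generic illustration of sampling a continuous signal at discrete instants and mapping it to discrete levels; the sample rate and number of levels are arbitrary assumptions and not those of the Digital Image Articulator.

```python
# Minimal sketch of analog-digital conversion: sample at discrete points,
# then round each sample to one of a fixed number of levels.
import numpy as np

def sample_and_quantize(analog, sample_points, levels: int) -> np.ndarray:
    """Sample a continuous signal at discrete points and quantize each sample."""
    samples = np.array([analog(t) for t in sample_points])
    step = 2.0 / (levels - 1)              # assumes the signal lies in [-1, 1]
    return np.round(samples / step) * step

analog_signal = lambda t: np.sin(2 * np.pi * 50 * t)  # a continuous stand-in signal
points = np.arange(0, 0.02, 1 / 4800)                 # discrete sampling instants
digital = sample_and_quantize(analog_signal, points, levels=256)
```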
Overall, the methods and concepts of analog video processing were continued with increased complexity with the digital programmability and greater storage capacity of the digital computer. In the 1980s this was achieved above all by modularly combining analog and digital applications.
Video may have initially been an analog medium based on modular plug connections; however, due to its analog processuality it can be considered a precursor of the digital programming function in the computer. Video devices that work with electric variables on the basis of plug and switch connections and that arrange sequences can be regarded as analog computers.[9] The difference between analog and digital devices consists in the fact that the former are plugged and the latter are programmed.
In addition to the integration of computers and the transfer of video processes to the digital computer, video from the 1990s to the present stands out above all through its expansion into multimedia installation and object art with large-format projections onto larger-than-life screens.[10]
The processual videos by the duo Granular Synthesis (Kurt Hentschläger and Ulf Langheinrich) in particular demonstrate structural audiovisuality in abstractly suspended forms. Granular Synthesis prefers to work with the technical method of granular synthesis,[11] with which they subject recorded image and sound material to an analysis down to the smallest elements (grains) in order to resynthesize the audio and video samples obtained from these noisy units of information. In their live performances of Model 5 (1994–1996), for example, they reassemble in high density the previously recorded image/sound material of the performer Akemi Takeya, which they have separated into its smallest units. In the process, pitch and the playback speed of the images can be controlled independently.
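The underlying technique of granular (re)synthesis can be sketched as follows. The Python fragment is a generic, assumed illustration of cutting a recorded buffer into short grains and overlap-adding them at a new density; it does not reproduce Granular Synthesis's actual software or parameters.

```python
# Minimal sketch of granular resynthesis: cut a buffer into short grains,
# then overlap-add them with a new spacing (denser overlap stretches time).
import numpy as np

def granulate(buffer: np.ndarray, grain_len: int, hop: int) -> list:
    """Cut the buffer into overlapping grains of grain_len samples, hop samples apart."""
    return [buffer[i:i + grain_len]
            for i in range(0, len(buffer) - grain_len, hop)]

def resynthesize(grains: list, out_hop: int) -> np.ndarray:
    """Overlap-add the grains with a new spacing; smaller out_hop means denser, slower material."""
    grain_len = len(grains[0])
    out = np.zeros(out_hop * len(grains) + grain_len)
    window = np.hanning(grain_len)                    # smooth each grain's edges
    for n, grain in enumerate(grains):
        out[n * out_hop : n * out_hop + grain_len] += grain * window
    return out

recording = np.random.randn(48_000)                     # stand-in for recorded audio/image signal
grains = granulate(recording, grain_len=480, hop=240)   # roughly 10 ms grains
stretched = resynthesize(grains, out_hop=120)           # denser overlap: time-stretched result
```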
In contrast, David Stout selects noise as the source material for processing. The use of computerized feedback of the raw video material, noise, together with closed-circuit arrangements results in digital modulations of noisy energy fields (noisefields) which, like feedback, create abstract formations in the video. Stout realizes these processes in his interactive video-noise performances (e.g., Signalfire [2003]) with the aid of the open-source software Image/ine.
In the late 1990s, Steina Vasulka used the same program on a laptop in further developing her performance setting for Violin Power. In order to increase the variability of the electronic image and sound in the interaction, since 1991 she has carried out her live video/violin performance using a MIDI violin connected to analog signal modulation.
It was not until the 1990s that video art successfully established itself in exhibitions, while its presence at media festivals decreased. Media festivals increasingly focus on interactive and network-based works in which video is included as a representation medium. The adaptation of video in blended media realities such as virtual, augmented, and mixed reality, however, is for the most part related to solutions to problems of representation and movement in digitally constructed space. But it also shows that video has developed a specific, electronic vocabulary and in the meantime is acknowledged as a reference medium for audiovisual experiments in digital media.
[1] Video feedback is a common process; the term is not traced back to an inventor. Cf. Woody Vasulka, “Video Feedback with Audio Input Modulation and CVI Data Camera,” in Eigenwelt der Apparate-Welt: Pioneers of Electronic Art, ed. David Dunn (Santa Fe, N.M.: Vasulkas, and Linz: Ars Electronica/Oberösterreichisches Landesmuseum, 1992), 148–149. Online at http://www.vasulka.org/archive/eigenwelt/pdf.old/147-152.pdf.
[2] Many of these developments originated in the setting of the Experimental Television Center in Binghamton, New York; see http://www.experimentaltvcenter.org. Work on a book, Tools: Analogs and Intersections; Video and Media Art Histories (eds. Kathy High, Sherry Miller Hocking, and Mona Jimenez), with the DVD Early Media Instruments is currently in progress.
[3] On the various synthesizer developments, see Dunn, Eigenwelt der Apparate-Welt.
[4] See the description of the scan processor in Woody Vasulka and Scott Nygren, “Didactic Video: Organizational Models of the Electronic Image,” Afterimage 3, no. 4 (1975), 9–13.
[5] See Yvonne Spielmann, Video: The Reflexive Medium (Cambridge, Mass.: MIT, 2008).
[6] Sherry Miller Hocking and Richard Brewster, “Image Processing,” in Dunn, Eigenwelt der Apparate-Welt.
[7] These experiments were shown at the exhibition Exposition of Music: Electronic Television, Galerie Parnass, Wuppertal, 1963, and were in part reconstructed and filmed by Jud Yalkut for Early Color TV Manipulations by Nam June Paik (1965–1968).
[8] One example is George Brown’s Variable Clock (1972), an impulse generator that represents a programmable instrument.
[9] Cf. “Analog Computers,” Computer Museum, University of Amsterdam, the Netherlands; online at www.science.uva.nl/faculteit/museum/AnalogComputers.html.
[10] Cf. Ursula Frohne, ed., Video Cult/ures: Multimediale Installationen der 90er Jahre, exh. cat. ZKM Karlsruhe (Cologne: DuMont, 1999).
[11] Tom Sherman provided the following description: Granular Synthesis, the name adopted by artists Kurt Hentschläger and Ulf Langheinrich, is derived from granulated sound synthesis, an information processing technique for synthesizing digital audio. A series of very short samples (grains) is sequenced and reassembled to produce a granulated sound synthesis. Selected whole sounds (and/or images) are fragmented into tiny snippets (grains) and recombined to make whole new granulated sound or image continuums. Online at www.experimentaltvcenter.org/history/pdf/shermangranular_2740.pdf.