With the invention of the photocell and the cathode ray tube in the late nineteenth century, analogue media techniques were introduced that made it possible to transform sounds and images. The photocell, as an image-to-sound converter, played a central role in the development of sound-on-film techniques. As early as the 1930s, this transformational potential was also explored artistically: optical sound not only represented the first effective process for the direct synthesis of sounds but also provided the opportunity to make drawn or recorded graphic elements audible, while at the same time observing them in the moving image.
With the help of the cathode ray tube as an electronic image generator, it is possible to transform sound into images. As the image-generating component of the oscilloscope and the television set, it served in early video experiments of the 1960s to transform acoustic signals simultaneously into moving images. The specific characteristics of these audiovisual transformation processes were increasingly considered fundamental prerequisites of art production for the eye and the ear. This demonstrates clearly that technical audiovisuality has its own very distinctive way of interacting with the senses, and generates convergences of sound and image that differ from synesthetic correlations and structural analogies.
The term audiovisual transformation describes the processes of converting sounds into images and images into sounds. This text deals with analogue transformation as opposed to digital parameter mapping. Different media can take on the role of transmitters and, as prerequisites for conversion, also become effective in aesthetic terms. This is particularly apparent at the intersection of film, electro-acoustic music, and video art, a field in which audiovisual strategies are developed that respond to the technical medialization of sound and image. Like concepts such as color-tone analogies and structural analogies, transformation models also reflect on the relationship between sound and images, as well as the interaction of different art genres in the twentieth century. For example, in 1922 Raoul Hausmann described the photoelectric cell as an instrument with which the existence of an identity of light and sound could be proved, as a result of which “no connections between painting and music in the sense of established genres and sentimental categorizations” might any longer be acknowledged.[1] The writings of László Moholy-Nagy and John Cage, published a good ten years later, continued to reflect on the relationship between audiovisual media technology and art production.[2] These early aesthetic-conceptual reflections on the transformability of sound and image can be seen as paving the way for the audiovisual experiments in video that have taken place since the late 1950s.[3]
In initial reports on photographic attempts at recording sound in the nineteenth century, one can read how Daguerre plates were exposed using a mirror affixed to a recording membrane. The discovery of the photoelectric characteristics of selenium[4] in 1873 led to the development of the photocell, which was then used in sound film to transform variations in light intensity into sound. Ernst Ruhmer and Eugène Augustin Lauste laid the foundations for the optical sound process. In 1916, Dénes von Mihály presented the first optical sound screenings, followed by Sven Berglund in 1921, by Joseph Tykociński-Tykociner in 1922, and also by Hans Vogt, Joseph Massolle, and Joseph Engl, who made a name for themselves as the Tri-Ergon society.
Sound film constitutes the earliest recording medium for sound and image: celluloid strips contain both the images of events unfolding in a timeline and optical sound: audio events that have been visually recorded. In the course of the development of sound film, several optical sound recording processes emerged that, while they differ in terms of their media-technical details, are comparable with regard to the principles of audiovisual transmittability. The recording of oscillographic peaks[5] has established itself as a standard practice.
The recorded sound is transformed into electrical voltage fluctuations by a microphone. The signal is transferred to a light image by an electromagnetically driven mirror, which vibrates in accordance with the voltage fluctuations generated by the sound. This oscillating mirror reflects a ray of light that inscribes an image of the vibrations, in the form of an audio track, between the picture frame and the perforation of the moving celluloid strip. In this way, the sound is recorded, in effect photographed, as an oscillographic curve.[6] The exposed audio track has a level of transparency proportional to the sound pressure level: the higher the amplitude, the greater the transparency. When the film is projected, the curve is scanned according to the reverse schema. An electric light source shines through the audio track as it passes and hits the photocell behind it, which generates an alternating voltage in proportion to the amount and intensity of the incident light. These alternating voltages are made audible by means of amplifier tubes and loudspeakers.
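In schematic terms, this chain of transformations can be sketched in a few lines of code. The following Python fragment is a minimal illustration, not a model of any historical apparatus; the sample rate, scaling, and names are assumed values. It maps an audio signal onto a transparency profile of the sound track and then recovers the signal from the light passing through it.

```python
import numpy as np

# Minimal illustration of the optical sound chain described above.
# All values are assumed for the example, not drawn from a historical apparatus.

SAMPLE_RATE = 48000  # assumed scanning rate of the slit, in samples per second

def record_to_track(signal):
    """Recording stage: map an audio signal in [-1, 1] to a transparency
    profile in [0, 1]; higher amplitude exposes a more transparent track."""
    return np.clip((signal + 1.0) / 2.0, 0.0, 1.0)

def play_back(track):
    """Playback stage: light shining through the track hits the photocell,
    whose voltage follows the transparency; removing the constant light
    component recovers the original audio signal."""
    return 2.0 * track - 1.0

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE          # one second of "film"
tone = 0.8 * np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
recovered = play_back(record_to_track(tone))
print("maximum reconstruction error:", np.max(np.abs(recovered - tone)))
```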
Recording technology is subject to the logic of the reversibility of recording and playback. Because the optical sound track is equivalent to the electro-acoustic signal, it is possible to synthesize sounds by tracing the appropriate waveforms directly onto the sound track or exposing them onto it through stencils. As a result, optical sound as a photoelectric sound synthesis process was not only relevant for the development of the sound film but also played a significant role in the construction of various electronic musical instruments. In most cases, the sound was generated using the wavetable-synthesis method[7] via concentric, rotating perforated discs or partially darkened panes of glass, which modulated the light falling on the photocell. In the late 1920s and early 1930s, a whole generation of electric organs was created on the basis of this method. These include, for example, the Cellule Photo-électrique (1927) made by Pierre Toulon and Krugg Bass, the Superpiano (1929) created by Emerick Spielmann, and finally Edwin Welte’s Lichttonorgel (1936). In addition, Yevgeny Sholpo’s Variophone (1932) and Daphne Oram’s Oramics (1959) are techniques that use exposed celluloid loops as sound generators.
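The wavetable principle behind these instruments (see note 7) can likewise be made concrete in a short sketch. The following Python fragment is purely illustrative; the table size, sample rate, and stored waveform are assumptions. A single stored cycle stands in for the perforated disc, and the speed at which it is read out in a loop determines the pitch, as the rotation speed once did for the light falling on the photocell.

```python
import numpy as np

# Illustrative wavetable oscillator; all values here are assumed.

SAMPLE_RATE = 44100
TABLE_SIZE = 1024

# the "tone wheel": a single arbitrary cycle stored as a lookup table
phase_axis = 2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE
wavetable = 0.6 * np.sin(phase_axis) + 0.4 * np.sin(3 * phase_axis)

def wavetable_tone(frequency, duration):
    """Read the stored cycle in a loop; a higher frequency simply means a
    faster pass through the same table."""
    n = int(duration * SAMPLE_RATE)
    index = (np.arange(n) * frequency * TABLE_SIZE / SAMPLE_RATE) % TABLE_SIZE
    return wavetable[index.astype(int)]

low = wavetable_tone(110.0, 0.5)    # the same waveform, read out slowly
high = wavetable_tone(440.0, 0.5)   # the same waveform, read out four times faster
```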
Early artistically motivated experiments with optical sound include those made by the Russian Futurist Arseny Avraamov from 1930 onwards. Avraamov developed methods of first drawing waveforms by hand in larger formats, then scaling them down photographically to fit the narrow audio track of the film material and in this way synthesizing sounds. In the same year, the animator and engineer Rudolf Pfenninger worked on similar methods to develop his Tönende Handschrift (GER 1932). Like Pfenninger’s experiments, those of the Russian inventor Boris Yankovsky between 1932 and 1939 were also initially motivated by a scientific interest in electro-acoustics and phonetics. It was Yankovsky who exploited the potential of optical sound tracks for the deliberate processing of sounds (spectral analysis and resynthesis, time stretching, or formant synthesis).[8]
However, the fundamental conditions that shaped artistic production were derived from the media-technical requirements of optical sound as early as the 1920s. László Moholy-Nagy, for instance, distanced himself in 1927 from the concepts of color-light art and saw optophonetics as the locus of future aesthetic discourse on the interrelationship of all optical-kinetic and acoustic-musical matters.[9] In his essay “Neue Filmexperimente” (new experiments in film),[10] published in 1933, he refers to the artistic appropriation of the optical sound method as a means of backing up his theoretical writings from 1923 on the potential of the gramophone.[11] It is remarkable that both Moholy-Nagy in these texts and John Cage, in a lecture he gave in 1937, “The Future of Music: Credo,” sketched two antagonistic models of the artistic use of optical sound. On the one hand, both authors call for a precise study of the graphic symbols of the different acoustic phenomena,[12] in order “to provide complete control over the overtone structure of tones … and to make these tones available in any frequency, amplitude, and duration.”[13] In addition, optical sound methods allow for music to be completely recreated,[14] while “new methods will be discovered, bearing a definite relation to Schoenberg’s twelve-tone system.”[15] Following the approach taken by Theodor W. Adorno, namely that what is new in art emerges from the progressive evolution of artistic material, optical sound was here perceived as an instrument for the subjective control of sound structuring. On the other hand, Moholy-Nagy and Cage introduced the concept of an experimental aesthetic practice into their reflections on the nature of optical sound. Every form of optical material can serve as a source for the creation of sound.[16] Left to the apparative logic of the photocell, the sonic results cannot be predicted on a theoretical level.[17] In this context, Moholy-Nagy refers to experiments in which “the profile of a person … was hand-drawn on film and then made audible.”[18] These attempts to make two-dimensional tracings of the human physiognomy audible, based on their iconic similarity to the transversal script of the phonograph, express the same anthropomorphism that inspired Rainer Maria Rilke to trace the coronal suture of a human skull with a phonograph stylus.[19] In addition, Moholy-Nagy describes studies by the filmmaker Oskar Fischinger, who, alongside his work on the synchronization of instrumental music and animated visual forms, had addressed himself since approximately 1931 to the matter of drawn optical sound. Fischinger put patterns and ornaments on the audio track of the film strip. However, he scarcely broached the issue of the discrepancy between the imagery of these figures and the entirely indifferent medial gaze of the photocell, writing the following in two newspaper articles from 1932: “There is a direct relationship between ornament and music, in other words, ornaments are music. … One can perhaps hope that relationships can be found between the linear beauty of form and musical beauty.”[20] We can perceive the optical sound track as a two-dimensional form, yet the photocell evaluates only one optical dimension (fluctuation in the intensity of light) and a temporal one (the frequency of this fluctuation). For this reason, different optical patterns can create the same variations of intensity on the optical sound track and therefore sound identical when played.
Consequently, the assertion of an unambiguous correlation between geometric forms and sounds is only partially valid; such correlations must be read in a more differentiated manner, in light of the interdependency of medial operation and human perception.
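This reduction from two dimensions to one can be illustrated with a small sketch. The following Python fragment, with dimensions and patterns invented for the example, builds two visibly different track patterns whose column-wise sums of transparent area, and hence the light reaching the photocell at each instant, are identical.

```python
import numpy as np

# Two visibly different track patterns with identical column-wise sums of
# transparent area: the photocell registers only the total light per instant,
# so both tracks play back as the same signal (dimensions are invented).

height, width = 32, 100
pattern_a = np.zeros((height, width))
pattern_b = np.zeros((height, width))

pattern_a[8:24, :] = 1.0            # one solid band, 16 rows high
pattern_b[0:8, :] = 1.0             # two separate bands, 8 rows each
pattern_b[24:32, :] = 1.0

signal_a = pattern_a.sum(axis=0)    # light reaching the photocell over time
signal_b = pattern_b.sum(axis=0)
print(np.allclose(signal_a, signal_b))   # True: identical when played back
```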
The possibility of audiovisual transformation provoked an oscillation between semiotic and medial registers, as Guy Sherwin reflects in the optical sound films he has realized since 1971, a good forty years after Fischinger. Sherwin did not find the pattern-like cyclical nature of the optical film sound track in abstract drawings, but in the photographic image — by taking a series of photos of fenceposts and stairs and exposing them on both the image and sound tracks. Sherwin’s concept takes the change of media to its extreme in order to render persistently indexical photography audible as a play of light and shadow based on architectural elements. In the film Newsprint (UK 1972) he emphasizes this culmination in particular by sticking newspaper onto celluloid strips and thus replacing the spatially distanced exposure process with physical contact. The film material literally becomes a carrier for image and sound. Sherwin allows script and optical sound track — symbols and signals, in other words — to collide with one another.
Others also attempted to make optical means of recording sound visually tangible, among them Norman McLaren[21] (Synchromy, CAN 1971) and Lis Rhodes (Light Music, UK 1975). The image track in these films is laid out in equivalence to the equidistant bars on the optical sound track. In contemporary art practice, the works of Bruce McClure and Derek Holzer’s project Tonewheels (since 2007) in particular are representative of the transformative use of light.
Wassily Kandinsky expressed the following in his book Punkt und Linie zu Fläche (Point and Line to Plane) from 1926: “The geometric line is an invisible thing. It is the track made by the moving point; that is, its product. It is created by movement — specifically through the destruction of the intense self-contained repose of the point.”[22] The validity of this sentence for the primal scene of a medium that at first view does not appear to show any particular similarity to artistic drawing is demonstrated by Claus Pias in an essay on the genealogy of computer graphics.[23]
Here we are dealing with the electromagnetic deflection of an electron beam inside a cathode ray tube, the image-generating component of radar displays, oscilloscopes, and vector screens.[24] Here, the image is not found on a plane, as in film, but in an accumulation of the routes taken at high speed by a point of light on a phosphor screen. The permanent flow of the electronic image is formed over time owing to the inertia of our visual perception, as the routes traced by the point of light merge to form curves that appear static. These curves drawn by the point of light are ephemeral artifacts of our visual sensory cells. Only by constantly retracing the paths left by the point of light is the line able to escape its vanishing into the unseen.
Control signals, superimposed at right angles, deflect the electron beam inside the cathode ray tube and thereby describe the movement of the point of light. In this way, the moving image in video is merely a flow of signals, whereas in film it is bound to the process of its medial fixation on celluloid. For this reason, video has primarily given rise to methods of visualizing sound and music, as opposed to strategies for the photoelectric generation of sound such as optical sound. In contrast to the pictorial principle of static celluloid, the temporal continuity of the electronic image signal can be musically perceived as “the sound of one line scanning.”[25] Microphone-recorded or synthesized sounds thus provide the source or input signal, which deflects the image point without inertia. The question of the aesthetic consequences of the volatility and immateriality of the electronic image led as early as the 1960s to artistic experiments with the video-immanent potential of using sound for image production.
The basis of analog-electronic audiovisuality is a signal simultaneously made audible via loudspeakers and visible via cathode ray tubes. The principle of the orthogonal superimposition of two vibrations is essential for the electronic image: one waveform is applied to the abscissa and another to the ordinate, spanning the image plane. The specific composition of the signals can emerge in different ways, as the variety of video formats demonstrates. In principle, however, cathode ray tubes can be used to display arbitrary waveforms. These picture formats are all subject to the general theory of oscillation, which makes them comparable to older types of images.
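A minimal sketch of this principle, with assumed frequencies, phase, and sample rate, might look as follows: one signal deflects the point of light along the horizontal axis, the other along the vertical axis, and the resulting pairs of coordinates trace the figure on the screen.

```python
import numpy as np

# Sketch of the orthogonal superimposition described above; the frequencies,
# phase, and sample rate are assumed values chosen for illustration.

SAMPLE_RATE = 48000
t = np.arange(SAMPLE_RATE // 10) / SAMPLE_RATE    # 100 ms of deflection signals

x = np.sin(2 * np.pi * 220 * t)                   # horizontal deflection signal
y = np.sin(2 * np.pi * 330 * t + np.pi / 4)       # vertical deflection signal

trace = np.stack([x, y], axis=1)   # the path of the light point on the screen
# sent to loudspeakers instead, the same two signals sound as a perfect fifth
```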
In 1815, the mathematician Nathaniel Bowditch described for the first time the curves produced by the perpendicular superimposition of harmonic pendulum oscillations. Following this discovery, a wide range of mechanical instruments was constructed for the creation of such Bowditch curves — including numerous so-called harmonographs[26] and Charles Wheatstone’s Kaleidophone (1827) — and for the direct observation of oscillation patterns of light on resonating metal rods. The curves finally became known as Lissajous figures when Jules Antoine Lissajous examined them in 1857/1858 in the context of acoustic experiments concerning the oscillation behavior of solid objects. With Karl Ferdinand Braun’s invention of a method for electronic image production in 1897, the waveforms of electric signals could also be observed. As a result, the oscilloscope was developed as a physical measuring instrument for determining alternating voltage.
Similar to the transfer of visual patterns into sound that occurs in optical sound, electronically generated audiovisuality works with a specific interaction of medial operability and perception processes. The perpendicular superimposition of oscillations as a form of two-dimensional representation, which serves to make acoustic processes visually accessible in the electronic image medium, has always constituted an interference of two signals that deflect the point of light in two different directions, horizontal and vertical. In addition, the oscillating movement of a Lissajous figure appears static when its frequency is higher than the fusion frequency of the human eye (approx. 18 Hz), while the human ear is not able to perceive spectral components below this frequency. Thus a Lissajous figure does not represent a particular frequency, and there is no clear correlation between pitch and figure. Nevertheless, sound and image can be compared in terms of other factors, such as the relationships between frequencies (intervals) and phase relations at lower frequencies. Depending on the complexity of the material to be audiovisually perceived, specific convergences between media-technical and perception-related aspects can arise. For an aesthetic approach to electronic audiovisuality, precisely these intersections are of interest, in order to expose the perspective on mediality, the constructed nature of the transfer. Intuitively comprehensible convergences are created, for example, as a result of the precise simultaneity of sound and image, which is guaranteed by the precision of the analogue electromagnetic interconnection and which cannot be achieved in digital coupling systems. The same applies to the magnification of the figures at higher volumes, caused by the heightened amplitude. Finally, the growing complexity of the Lissajous figures as harmonic frequency relations become more complex corresponds to the impression made on the ear. These kinds of correlations are often simulated in the algorithmic parameter mapping of digital audiovisual systems.
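The correspondence between interval and figure complexity can also be made concrete in a small sketch. The following fragment, with interval names and ratios chosen purely for illustration, reduces each frequency ratio to lowest terms; for coprime ratios a:b, the resulting Lissajous figure shows a lobes along one axis and b along the other, so simpler ratios yield visibly simpler figures.

```python
from math import gcd

# Illustrative correspondence between musical interval and figure complexity;
# the interval names and ratios below are assumptions for the example.

intervals = {
    "unison (1:1)": (1, 1),
    "octave (2:1)": (2, 1),
    "fifth (3:2)": (3, 2),
    "major third (5:4)": (5, 4),
    "minor seventh (16:9)": (16, 9),
}

for name, (a, b) in intervals.items():
    d = gcd(a, b)
    a_r, b_r = a // d, b // d   # reduce the frequency ratio to lowest terms
    print(f"{name}: figure with {a_r} and {b_r} lobes along the two axes")
```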
Electronic image synthesis was initially explored as an aesthetic strategy in the older medium of film, which by the 1930s had also established itself as an abstract art form. Mary Ellen Bute[27] and, in the 1950s, Hy Hirsh and Norman McLaren integrated Lissajous figures into their animations by filming oscilloscope screens. In addition, the possibilities presented by geometric image synthesis also interested practitioners of Op art, kinetic art, and early computer graphics.[28] Cathode ray oscillography was also tested in music visualizations in the electronic studios of Berlin by Fritz Winckel (1960s) and of Paris by Pierre Schaeffer (Le Trièdre Fertile, 1975).[29] Reynold Weidenaar continued to use analogue synthesizers and oscilloscopes for audiovisual compositions (1979), and Bill Hearn’s video synthesizer VIDIUM (1969), an audio synthesizer modified especially for this purpose, enabled the precise synthesis of complex Lissajous figures. Owing to their ability to couple electronic sound and image signals without inertia, cathode ray tubes were used by Nam June Paik and David Tudor for live participatory and performative purposes. Paik’s experiments with the audiovisual interconnection of television sets, audiotapes, and microphones in his first solo exhibition, Exposition of Music — Electronic Television (1963), are now seen as the legendary beginning of video art. In 1966 David Tudor, in collaboration with Lowell Cross, realized the performance piece Bandoneon! (a combine) on the occasion of 9 Evenings: Theatre and Engineering, for which several audiovisual transformation processes were deployed. For the Pepsi-Cola Pavilion at Expo 1970 in Osaka, Tudor and Cross, together with the physicist Carson D. Jeffries, developed a multiple deflection system for laser beams that operates on the same principles as the electronic image generation described above. Finally, with a hybrid analogue-digital linking system, Robin Fox extended the possibilities for synthesizing Lissajous figures in his video series Backscatter (2004) and, like the media artist Edwin van der Heide,[30] has been experimenting increasingly with deflection systems in laser-sound performances. In his audiovisual performances, Fox expands the boundaries of screen-centered projection by using rooms filled with haze from smoke machines as projection volumes. Once again Moholy-Nagy demonstrated particular foresight when, as early as 1936, he noted in his essay “probleme des neuen films”: “it is certainly conceivable that smoke or vapour can be hit at the same time by different projection apparatus, or that figures of light can appear at the points where different cones of light meet.”[31]
[1] Raoul Hausmann, “Optofonetika,” MA, 1922; excerpt printed in Karin v. Maur, Vom Klang der Bilder (Munich: Prestel, 1985), 140. Trans. G. M.
[2] See as earliest examples the essays “Neue Filmexperimente” from 1933 (332–336) and “probleme des neuen films” from 1936 (344–350) by László Moholy-Nagy, reprinted in Krisztina Passuth, Moholy-Nagy (Weingarten: Kunstverlag, 1986), and in comparison John Cage, “The Future of Music: Credo” (1937), in idem., Silence (Middletown: Wesleyan University Press, 1961), 3–6.
[3] Moholy-Nagy also explicitly applied the attempt “to expand the apparatus (means) used formerly only for reproduction and use them for productive purposes” to the television (Telehor); see Moholy-Nagy, “Produktion Reproduktion,” in Malerei Fotografie Film from 1927 (reprint Berlin: Neue Bauhausbücher, 2000), 28. Telehor describes a mechanical television, developed in 1919 by Dénes von Mihály, that worked with a Nipkow disc.
[4] The British electrical engineer Willoughby Smith discovered that the chemical element selenium changes its electrical resistance in reaction to variations in light.
[5] Also called peak to peak recording.
[6] Walter Ruttmann wanted his experimental radio play Weekend (1930), which was developed from the montage of optically recorded sounds, to be perceived as photographic audio art; see Jeanpaul Goergen, Walter Ruttmanns Tonmontagen als ars acustica, Massenmedien und Kommunikation 89 (Siegen: universi, 1994), 25.
[7] In this sound-generating process, today frequently implemented in synthesizers, a waveform is read out in a loop from a wavetable and reproduced at variable pitches by means of different read-out speeds.
[8] Cf. Andrey Smirnov, Sound out of Paper, http://asmir.theremin.ru/gsound1.htm.
[9] László Moholy-Nagy, “Die statische und kinetische optische Gestaltung,” in Malerei Fotografie Film (1927, reprint Berlin: Neue Bauhausbücher, 2000), 20. Trans. G. M.
[10] Moholy-Nagy, “Neue Filmexperimente,” in Krisztina Passuth, Moholy-Nagy (Weingarten: Kunstverlag, 1986), 332–336.
[11] Moholy-Nagy, “Neue Gestaltung in der Musik. Möglichkeiten des Grammophons” (1923), reprinted in Passuth, Moholy-Nagy (Weingarten: Kunstverlag, 1986), 308–309. Trans. G. M.
[12] Moholy-Nagy, ibid., 309. Trans. G. M.
[13] John Cage, “The Future of Music: Credo” (1937), in idem., Silence (Middletown: Wesleyan University Press, 1961), 4.
[14] Moholy-Nagy, “Neue Filmexperimente” (see note 2), 335. Trans. G. M.
[15] John Cage, “The Future Of Music: Credo” (1937), 5.
[16] See John Cage, “The Future of Music: Credo” (1937), 4: “Any design repeated often enough on a sound track is audible.”
[17] This formulation can already be found in his thoughts on a groove script to be scratched by hand for the gramophone; Moholy-Nagy, “Neue Gestaltung in der Musik” (see note 11), 309.
[18] Moholy-Nagy, “Neue Filmexperimente” (see note 10), 336, trans. G. M.; cf. also Dayton Clarence Miller, The Science of Musical Sounds (1916, reprint New York: Meyer Press, 2007), 119–120.
[19] See Rainer Maria Rilke, “Ur-Geräusch,” in idem., Sämtliche Werke, published by the Rilke archive in connection with Ruth Sieber-Rilke, edited by Ernst Zinn, vol. VI (Frankfurt am Main: Insel, 1987), 1085–1093.
[20] Oskar Fischinger, “Klingende Ornamente,” Deutsche Allgemeine Zeitung, July 28, 1932, cf. the excerpts of the essay “Tönende Ornamente. Aus Oskar Fischingers Neuer Arbeit,” Film Kurier Berlin, July 30, 1932 printed in this volume. Trans. G. M.
[21] In his book L’image-temps. Cinéma 2 from 1985, Gilles Deleuze observes a new relationship to sound and uses the works of Norman McLaren as an example. Cited here after the German edition, Das Zeit-Bild. Kino 2 (Frankfurt am Main: Suhrkamp, 1997), 276.
[22] Wassily Kandinsky, Punkt und Linie zu Fläche (Munich, 1926); trans. Howard Dearstyne and Hilla Rebay (New York: Dover Publications, 1979), 57.
[23] Claus Pias, “Punkt und Linie zum Raster,” in Ornament und Abstraktion, ed. Markus Brüderlin, exh. cat. Fondation Beyeler (Cologne: Dumont, 2001), 64–69, http://www.uni-due.de/~bj0063/texte/abstraktion_de.html.
[24] Television and computer screens also contained a cathode ray tube until LCD flat screens were introduced.
[25] See Bill Viola, “The Sound of One Line Scanning,” in idem., Reasons for Knocking at an Empty House (Cambridge, MA: MIT Press, 1995).
[26] Cf. Robert J. Whitaker, “Types of Two-Dimensional Pendulums and Their Uses in Education,” in Michael R. Matthews, Colin F. Gauld, Arthur Stinner, The Pendulum: Scientific, Historical, Philosophical and Educational Perspectives (Dordrecht: Springer, 2005), 377–391, esp. 383ff., cf. also http://physics.kenyon.edu/EarlyApparatus/Oscillations_and_Waves/Harmonographs/Harmonographs.html.
[27] http://www.ima.or.at/lichtmusik/?cat=1&language=en
[28] Cf. also the light shapes and oscillograms by Herbert W. Franke (http://www.zi.biologie.uni-muenchen.de/~franke/Kunst1.htm) or the Rhythmogramme by Heinrich Heidersberger (http://www.heidersberger.de/scripts/frontend/index.php3?ACTION=MENUEPUNKT&ID=1032) from the 1950s.
[29] Winckel had already worked on methods of music visualization from 1932 onwards, using Nipkow discs as transmission equipment.
[30] http://www.evdh.net/lsp/index.html
[31] Moholy-Nagy, “probleme des neuen films” (see note 2), 348.