Sound Recording

Early Sound Recording

From the time of Thomas Edison's phonograph in 1877, up until about 1947 when practical tape recorders became available, recordings were made in real time as a single channel directly to the storage medium. The original Edison system employed a stylus attached to a diaphragm that scribed a groove into aluminum foil wrapped around a rotating cylinder. Later models used a wax cylinder, which could more easily be duplicated and stored. In 1887 Emil Berliner invented flat-disk recording that allowed straightforward manufacturing, although the initial quality was poor. Before 1925 all recording was done mechanically, with vertical hill-and-dale cuts in the surface. Interestingly, stereo recording was invented in 1929 by Arthur Keller of Bell Labs and later formulated by Blumlein in 1931 in its current 45°/45° groove modulation format (Alexandrovich, 1987), but the first commercial stereo recordings did not appear until 1957.

The early flat records rotated at a rate of 78 revolutions per minute (rpm), with information impressed, first on one and later on both sides of the disk. The record players were entirely mechanical. A sharpened needle stylus was attached to the center of a diaphragm, which was coupled to an expanding horn megaphone. A crank handle that wound a coiled spring provided motive power. Speed was controlled by a mechanical governor. In console units, openable doors at the end of the horn controlled the level. Since the first engineers had no temporary storage capability, there was no ability to manipulate or play back music in post production and thus no need for post-production facilities. Recordings were made with the duplicating equipment in the same room as the musicians. The sound level during a recording session was controlled by positioning a ball of yarn in a large megaphone used to concentrate the sound energy onto the pickup diaphragm (Alexandrovich, 1987).

With the invention of the audion (vacuum) tube by Lee De Forest in 1907, the audio amplifier by Harold D. Arnold in 1912, and the condenser microphone by E. C. Wente in 1917, sound could be converted to electrical signals and used to drive a groove-cutting lathe to make a master disk from which duplicates could be pressed. Bell Telephone Laboratories carried out experiments in "auditory perspective" and demonstrated three-channel stereo transmission on telephone lines in 1933 (SMPTE, 2001).

Sound for motion pictures had many originators. W. K. L. Dickson, working in Thomas Edison's laboratory, invented one of the early recording systems in 1895. It was offered to the public in the spring of 1895 as the Kinetophone and consisted of a recording on an Edison wax cylinder along with the Kinetoscope, a box housing a rotating strip of photographs that were observed through a peep hole. The sound was transmitted to the viewer via two rubber ear tubes.

In the silent film era of the 1910s, orchestras or theater organs played a musical score accompanying the film. Loudspeakers were later placed in the orchestra pit to replicate the musicians and still later added behind the screen for dialog, a system that required manual switching between the two by the projectionist. The Vitaphone, a system of multiple long-playing (33.3 rpm) records, was introduced in 1926 as a way of recording music, but not dialog, for film. The first motion picture with dialog was The Jazz Singer in 1927. In 1928, Disney's Steamboat Willie appeared, featuring the first soundtrack created in post production including music, dialog, and sound effects.

Walt Disney's 1940 film Fantasia (Garity and Hawkins, 1941) was a giant technical leap forward, employing four channels of recorded information: three separate audio channels and a tone-control channel, using a variable-density optical system printed on 35 mm film. The system used two linked projectors, one for the film and one for the sound. The film included an optical mono mix of the soundtrack as a backup, a scheme that is still in use today. It introduced several other innovations for the first time, including multichannel surround, the pan-pot, overdubbing of orchestral parts, simultaneous multitrack recording, and three directivity-aligned loudspeakers located behind the screen (SMPTE, 2001).

The Disney engineers also originated many new techniques, including the use of multiple optical recorders, called dubbers (short for doublers), to do mixing. Musicians were seated on a large stage and played the score while the film was being screened, a technique that is still used today. The conductor watched the film while listening to timing cues, called a click track, through a single headphone. Sound effects, later nicknamed "Foley" after Jack Foley, a sound editor at Universal Studios, were produced using walking surfaces and clever devices manipulated by hand. Dialog was recorded separately and added in post production, a process called automatic (sometimes automated) dialog replacement (ADR). The three components of film sound—music, effects, and dialog—were combined in a large dubbing theater with mixing consoles located near the middle of the room.

With the development of magnetic tape in Germany in the 1940s, recorded sounds could be played back and manipulated after the performance. Multitrack tape recorders became available in the 1950s and artists such as Les Paul raised looping or overlaying of recorded material to a fine art. With the ability to record and erase, the need arose to listen to the material during the post-production process and the room became part of the audio chain. This led to the development of studios specifically dedicated to sound recording and playback.

Recording Process

The goal of recording and subsequent playback is to deliver an experience that accurately recreates the original performance. Although this is still the object of most recordings, some performances are never heard by an audience in their original form and exist only as electronic signals. Their sole interaction with the architectural environment comes on playback in a listening room, and even this can be bypassed through the use of headphones. Most commercial recordings of music are done in rooms designed specifically for that purpose, carefully crafted to contribute positively to the process. Recording studios vary with the type of music; a simplified overview is given in Fig. 21.1.

Figure 21.1 Types of Sound Studios

In the simplest case, the performance and the recording both take place in one room. An instrument, such as a keyboard, is used to create the sounds that are monitored, either on loudspeakers or headphones, and recorded through a small mixing console onto a storage device. In this example the recorded sound never passes through a microphone and its only interaction with a room is during monitoring.

At a medium scale, studio musicians might play on acoustic (an unfortunate use of the term, which has become part of the culture) or electronic instruments, which are recorded using separate microphones or transducers built into the instrument. A recording might be made with all the musicians present at the same time, or with musicians playing their parts in separate studios, sometimes continents and weeks apart. When musicians play together, particularly loud instruments, such as drum sets, should be isolated in separate dedicated rooms so that their sounds do not bleed into the other microphones. Vocalists or sources needing a special environment, such as pianos, can also be placed in separate spaces.

A large symphony orchestra is recorded in a hall or scoring stage, using separate microphones for each section, plus a stereo pair on the centerline of the room, plus several more distributed throughout the room. Groups of players can be separated by portable barriers called gobos, but this affects the overall orchestral balance. Most mixers prefer to balance the orchestra using instrument placement and microphone location without resorting to isolating barriers (Murphy, 2001).

Recording Formats

The stereo format has dominated the recording industry since its commercial introduction in 1957. Since we have two ears in the horizontal plane, a pair of loudspeakers in the same plane can provide critical lateral cues for source localization. Recording techniques have improved to the extent that a phantom image can be reliably produced between or even outside the two loudspeakers. In the best combinations of recording and playback, a soundstage is created with depth as well as width, in which the listener can hear the instruments in their original positions.
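To make the phantom-image idea concrete, the sketch below shows a constant-power panning law of the kind used in pan-pots: the two channel gains follow cosine and sine curves so that the summed acoustic power stays roughly constant as the image moves between the loudspeakers. The function name and the mapping of the pan position are illustrative assumptions, not details taken from the text.

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for a pan position.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    A sine/cosine law keeps left_gain**2 + right_gain**2 = 1,
    so the summed power stays roughly constant as the phantom
    image moves between the two loudspeakers.
    """
    theta = (position + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Example: a source panned halfway toward the right loudspeaker
left, right = constant_power_pan(0.5)
print(f"L gain = {left:.3f}, R gain = {right:.3f}")  # about 0.383 / 0.924
```

At the center position both gains are about 0.707 (3 dB down), which is why a centered phantom image does not sound louder than one panned fully to a single loudspeaker.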

Traditional film audio used right-center-left loudspeakers located behind the screen. Dialog was placed in the center and music could be distributed right and left. In 1973 a low-frequency effect called Sensurround was introduced for the Universal Studios film Earthquake. In the earliest versions the signal was produced by a low-frequency noise generator and later by a recording on the film. At first there was a concern that this high-energy effect would damage theaters. The solution was to play the film 10 dB louder than the actual show levels, without an audience present, and see what shook loose (Stern, 1980). If nothing did, the theater was approved for showing.

Themed entertainment venues have been able to feature custom theaters with unique multitrack shows. In theme park attractions, audio-animatronic (AA) characters, so named because their movements were controlled by audio tones, were designed with individual point sources for each figure. Loudspeakers were built into props and set pieces or, if the character was large enough, as in the case of King Kong, into the figure itself. Multiple loudspeakers could be located throughout a theater or ride and the sound mixed down by engineers within the actual venue. Motion could be simulated using voltage-controlled attenuators (VCAs) under the control of a show-control computer (Long, 2001). In the Sanrio Puroland Theme Park in Tokyo, Japan, a system designed by the author in the 1980s for the Time Machine of Dreams Theater utilized three loudspeaker clusters located behind the film screen, with eight side and three rear loudspeakers each under individual computer control. Multiple overhead loudspeakers were also included. This system made complex motion simulation possible, while allowing different films to be screened by changing the computer control software.
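As a rough illustration of how a show-control computer might drive VCAs to simulate motion, the sketch below computes a gain for each loudspeaker in a row as a virtual source sweeps past them, with the gains normalized to constant total power. The Gaussian spread, the speaker spacing, and all names here are assumptions made for the example, not details of the Sanrio or Universal systems.

```python
import math

def speaker_gains(source_pos: float, speaker_positions: list[float],
                  width: float = 3.0) -> list[float]:
    """Gain for each loudspeaker as a virtual source moves along a line.

    source_pos and speaker_positions share the same arbitrary units,
    e.g. meters along a theater wall. Each loudspeaker's gain falls off
    as the source moves away from it; gains are normalized so the summed
    power stays constant, which is what the VCAs would be told to do.
    """
    raw = [math.exp(-((source_pos - p) / width) ** 2) for p in speaker_positions]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]

# Example: sweep a sound effect past four side loudspeakers at 0, 3, 6, 9 m
positions = [0.0, 3.0, 6.0, 9.0]
for t in range(0, 10, 3):                 # source position in meters over time
    gains = speaker_gains(float(t), positions)
    print(t, [round(g, 2) for g in gains])
```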

Recently the film industry has adopted a standard 5.1 system utilizing left, center, right, left-surround, and right-surround audio tracks (the 5) with an embedded low-frequency effects (LFE) bass track (the .1). This is a simpler version of the multitrack formats used in theme parks and has the advantage of not requiring computer control. The sound is recorded onto the film or onto a digital disk and played by an outboard processor. Several competing systems are available. With the advent of digital projection equipment, both audio and video software in the future will be sent directly to theaters or the home via cable or wireless transmission. The 5.1 system or a similar multichannel surround system will probably become standardized for home use. There are a number of other combinations in service, including five loudspeakers behind the screen, separate side and rear surrounds, and one separate or two embedded bass tracks, but these have not found as wide acceptance.
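As one concrete example of how the 5.1 channel set relates to plain stereo, the sketch below folds a 5.1 sample frame down to two channels using the commonly cited ITU-R BS.775-style coefficients, with the center and surround channels lowered by about 3 dB and the LFE discarded. The function name and channel ordering are illustrative assumptions rather than a fixed standard.

```python
# Minimal sketch of a 5.1-to-stereo downmix, assuming the commonly used
# ITU-R BS.775-style coefficients (center and surrounds lowered about 3 dB).
# The channel ordering here is an assumption made for the example.
SQRT_HALF = 0.7071  # roughly -3 dB

def downmix_5_1(l, r, c, lfe, ls, rs):
    """Fold one 5.1 sample frame (floats) down to a stereo pair.

    The LFE channel is discarded, as is common practice when the
    playback system has no dedicated subwoofer channel.
    """
    left = l + SQRT_HALF * c + SQRT_HALF * ls
    right = r + SQRT_HALF * c + SQRT_HALF * rs
    return left, right

# Example: dialog in the center channel appears equally in both outputs
print(downmix_5_1(0.0, 0.0, 1.0, 0.0, 0.0, 0.0))  # (0.7071, 0.7071)
```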
