Wednesday, April 22, 2009

An interview with Tomlinson Holman, pt. 3

by Matteo Milani, U.S.O. Project, April 2009
(Continued from Page 2)


MM: Where did the idea to write your book Sound for Film and Television come from? Was it when you started teaching?

TH: Yes. It really was taken from the first seven years of teaching and followed the outline of my course. I am currently at work on the 3rd ed. [now out]. Now that the field has fully made the digital transition there is much new information, but I'm hoping this is the magnum opus and I won't have to revisit it again for a very long time.

THX continued to grow inside Lucasfilm. A division with excellent revenues and low overhead, the THX Sound System became the 'de facto' standard of superior theatrical presentation, and it had a profound impact on the theater industry. [...] THX remained part of Lucasfilm until 2002, when it was spun off as an independent company, owned partly by LFL, with other corporate and private investors. [...] Tom Holman began working with Audyssey Labs in 2002, a start-up company whose first product was a device that could tune professional and home theater sound.
[excerpt from Droidmaker]



MM: Ten years of '5.1 Surround Sound - Up and Running'. Where are we today?

TH: Well, that book is now in its 2nd ed., called simply Surround Sound -- Up and Running. We dropped the '5.1' because when it first came out the idea was new to music people, and now it is well understood. 5.1 remains the principal film-making format, with up-mixing to 7.1 now routine in home receivers, and newly introduced 9.1 and 11.1 channel systems.


MM: Your thoughts on digital cinema distribution. Are we moving toward linear PCM (no bit-rate reduction coders) sound on a server alongside the picture?

TH: Yes, they are 24-bit, 48 kHz, multichannel systems, so they are transparent in a theatrical environment. Even those who argue for wider bandwidths have to be content with 48 kHz sampling in large spaces, because air absorption over 10 m and more simply does in the very high frequencies.

About Absorption
Sound may be absorbed by its interaction with boundaries of spaces, by absorptive devices such as curtains, or even by propagation through air. Absorption is caused by sound interacting with materials through which it passes in such a way that the sound energy is turned into heat.

[excerpt from Sound for Film and Television]
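Holman's air-absorption point can be illustrated with rough numbers. The per-metre figures below are order-of-magnitude estimates for roughly 20 °C and 50% relative humidity (they are assumptions for illustration, not values from the interview; exact figures come from ISO 9613-1 and vary with temperature and humidity):

```python
# Approximate excess attenuation from atmospheric absorption alone,
# i.e. loss on top of ordinary inverse-square spreading.
# Values are rough estimates for ~20 degrees C, ~50% relative humidity.
absorption_db_per_m = {
    1_000: 0.005,   # 1 kHz: negligible over room distances
    4_000: 0.03,
    10_000: 0.14,
    20_000: 0.5,    # 20 kHz: substantial over a theater's throw distance
}

def air_loss_db(freq_hz: int, distance_m: float) -> float:
    """Extra attenuation (dB) due to air absorption over a given path."""
    return absorption_db_per_m[freq_hz] * distance_m

for f in sorted(absorption_db_per_m):
    print(f"{f / 1000:>4.0f} kHz: {air_loss_db(f, 10):5.2f} dB over 10 m, "
          f"{air_loss_db(f, 30):5.2f} dB over 30 m")
```

With even these rough coefficients, content near 20 kHz loses several dB over 10 m and well over 10 dB by the back rows of a large room, which is why 48 kHz sampling is adequate for cinema playback.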


MM: The meaning of, and your progress with, 10.2-channel sound at 'TMH Corporation'?

TH: I've been working on 10.2 for many years now, a system with a full path (not an upmix) from input to output, because among the three competitors for bits (sample rate, bit depth or word length, and spatial capability) it is the spatial one that is the most interesting, since everyone we've exposed to the system can hear the differences across 1 to 2 to 5.1 to 10.2. There's simply no debate except "well, how practical is it?" Well, how "practical" is 192 kHz sampling? Who can get to 24 bits of precision, one part in about 17,000,000, in just over 5 microseconds? Nobody, that's who. Those extra bits are "marketing bits" if the truth were to be told.
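Holman's arithmetic checks out: 24 bits give 2^24 distinct quantization levels, and at 192 kHz each sample period is about 5.2 microseconds. A quick sketch:

```python
bits = 24
levels = 2 ** bits                 # distinct quantization levels at 24 bits
sample_rate = 192_000              # samples per second
period_us = 1e6 / sample_rate      # one sample period in microseconds

print(f"{levels:,} levels -> one part in about {levels / 1e6:.1f} million")
print(f"sample period at 192 kHz: {period_us:.2f} microseconds")
# 16,777,216 levels -> one part in about 16.8 million
# sample period at 192 kHz: 5.21 microseconds
```

So resolving "one part in about 17,000,000 in just over 5 microseconds" is exactly what a 24-bit, 192 kHz converter claims to do on every sample.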


MM: Marketing bits. As you wrote, is sampling at 60 kHz still enough?

TH: Yes, I covered this extensively in Surround Professional magazine, taking on every single theory that has been proposed that I could find, and 60 kHz is adequate for all of them. 48 is pushing things a little bit. You know, the 20 kHz limit of hearing is a "soft" limit, an average over thousands of people. As a young person I could hear an undistorted sine wave out to about 24 kHz, and in the last few years we've had students confirm this in our lab. But above about 24 kHz there's just no response. So that leaves little wiggle room for 48 kHz sampling, but probably enough for almost all practical purposes, certainly for cinema, due to its air propagation effects.
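The argument rests on the sampling theorem: a rate of fs captures content only up to fs/2 (the Nyquist frequency). Taking Holman's 24 kHz as the "soft" upper bound for exceptional listeners, the headroom of each common rate is easy to tabulate:

```python
def nyquist_khz(sample_rate_khz: float) -> float:
    """Highest frequency representable at a given sampling rate (Nyquist)."""
    return sample_rate_khz / 2.0

hearing_limit_khz = 24.0  # Holman's soft limit for exceptional young listeners

for fs in (44.1, 48.0, 60.0, 96.0):
    headroom = nyquist_khz(fs) - hearing_limit_khz
    print(f"{fs:>5.1f} kHz sampling -> Nyquist {nyquist_khz(fs):4.1f} kHz, "
          f"{headroom:+5.1f} kHz beyond a 24 kHz ear")
```

48 kHz lands exactly on the 24 kHz limit with zero margin for filter roll-off, which is why it is "pushing things a little bit," while 60 kHz leaves 6 kHz of headroom.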


MM: Beyond 5.1: the sensation of height, missing from all stereo and multichannel systems. Do you think it is unnecessary for the movie industry to implement it nowadays?

TH: IMAX has had its Center Top channel for many years. I made that channel stereo, since I believe both a width and a height sensation are required. In fact, Fantasound in some of its eight renditions in 1939 had some overhead channels, so it's not new, just "new" in the marketplace.



Bibliography

Books
Holman, T. (2007) Surround Sound: Up and Running, Focal Press
Rubin, M. (2006) Droidmaker: George Lucas and the Digital Revolution, Triad Publishing Company
Blesser, B. & Salter, L. (2006) Spaces Speak, Are You Listening?: Experiencing Aural Architecture, MIT Press
Holman, T. (2001) Sound for Film and Television, Focal Press

Websites
tmhlabs.com
wikipedia.org
cinema.usc.edu
imdb.com
digitalcontentproducer.com
focalpress.com
britannica.com

An interview with Tomlinson Holman, pt. 2

by Matteo Milani, U.S.O. Project, April 2009

(Continued from Page 1)

MM: You've invented a system that is a signal processing chain of different products, blending knowledge, math, materials, and so forth. How did you manage the huge work of taking care of each small step during its development?

TH: With a lot of help, which I credited in the THX Manual. The ideas came from many sources, some dating to the 1930s, whereas some were brand new in 1980. It was my skill to go to the MIT library and read the whole field within a week of getting the job (I was in Boston at my company 'Apt Corporation' at the time); to select from among those developments the ones that made sense; to work with John Eargle at JBL to find real products that would meet the standard sought; to assemble them and figure out how to measure them; to exchange measurements with, and have experiments conducted at, the University of Waterloo (Stan Lipshitz, now retired, and John Vanderkooy); to select the LR4 crossover and get help in implementing it; to figure out how to make delay lines and fit poles and zeros, with which I had help from Gordon Jacobs; and then to build the crossover networks, find a manufacturer, etc. So there was lots of help, but I performed an integration and leadership function.
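The LR4 crossover Holman mentions is the fourth-order Linkwitz-Riley: two cascaded second-order Butterworth filters, whose defining property is that the low- and high-pass outputs are each -6 dB at the crossover frequency and, being in phase, sum to a flat response. A magnitude-only sketch (not Holman's actual circuit; the 500 Hz crossover frequency is an arbitrary example):

```python
def lr4_lowpass_mag(f: float, fc: float) -> float:
    """Magnitude of a 4th-order Linkwitz-Riley low-pass:
    the square of a 2nd-order Butterworth magnitude, 1/sqrt(1 + (f/fc)^4)."""
    r = (f / fc) ** 4
    return 1.0 / (1.0 + r)

def lr4_highpass_mag(f: float, fc: float) -> float:
    """Complementary LR4 high-pass magnitude."""
    r = (f / fc) ** 4
    return r / (1.0 + r)

fc = 500.0  # hypothetical crossover frequency, Hz
for f in (125.0, 500.0, 2000.0):
    lo, hi = lr4_lowpass_mag(f, fc), lr4_highpass_mag(f, fc)
    print(f"{f:6.0f} Hz: LP {lo:.4f}, HP {hi:.4f}, sum {lo + hi:.4f}")
```

At fc both sections read 0.5 (-6 dB), and the sum is 1.0 at every frequency, which is what makes LR4 attractive for driving separate woofer and tweeter amplifiers from an electronic (active) crossover.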

The invention came about after it was nearly finished. Standing one day at the base of a wall that contained the system, with my head stuck between the wall and screen, I heard lots and lots of high-frequency energy, and figured out that this was from multiple bounces between the wall and screen. The wall was needed to support low frequencies but was a problem at high frequencies, so covering the wall in high-frequency absorbing material made it disappear acoustically at high frequencies. That's the patent: more a "discovery" than an "invention" that I set out to do.

[Holman Talking About 10.2 Possibilities - via sklathill]


MM: How was the relationship with Dolby Laboratories? What was the output of your collaboration at that time?

TH: Dolby set the stage by improving film sound to the point where THX was worth doing. While they concentrated on the pipeline to get program material from studio to cinema and home, I concentrated on what happened to it once it was delivered. So they were completely compatible, although some ignorant parts of the marketplace thought one was supplanting the other and vice versa. That was never true.

By the mid-'80s, Lucasfilm and Dolby Labs were ping-ponging developments - "technological leapfrog," Tom Holman called it. Dolby was pushing improvements for recording sound on film and playing it back with expanded frequency and dynamic ranges; Lucasfilm focused on the auditorium.
In 1987, a committee of audio engineers debated the number of channels for digital sound on film. It was a lively debate about a range between two and eight channels; Holman proposed 5.1 - five main channels and a dedicated sub-woofer channel. (Technically speaking, the sub-woofer represented 0.005 of a channel in bandwidth. "I call that marketing rounding!" said Holman.) 5.1 Surround Sound became the industry standard.
[excerpt from Droidmaker]
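The "marketing rounding" quip is easy to verify: the LFE channel carries roughly 120 Hz of bandwidth against the ~24 kHz of a full-range channel at 48 kHz sampling (both figures assumed here for illustration):

```python
lfe_bandwidth_hz = 120.0       # typical upper limit of the LFE (sub-woofer) channel
full_bandwidth_hz = 24_000.0   # full-range channel bandwidth at 48 kHz sampling

fraction = lfe_bandwidth_hz / full_bandwidth_hz
print(f"LFE is {fraction:.3f} of a full-range channel -> rounded up to the '.1'")
# LFE is 0.005 of a full-range channel -> rounded up to the '.1'
```

So the ".1" in "5.1" overstates the LFE's bandwidth by a factor of twenty, exactly the rounding Holman jokes about.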


MM: What exactly was Lucasfilm's Theater Alignment Program? Was it an early version of THX?

TH: No, it was a companion program to see that all theaters were tuned up to the best of their ability. It consisted of lab viewing of 70 mm prints, installation tune-up by certified technicians, viewing by trained people separate from the technicians with feedback to Lucasfilm, and end-to-end quality control for films in theaters. It ran for a number of years on a lot of films but didn't make any money.


MM: A few words on the development of Home THX.

TH: I went to CES (the Consumer Electronics Show) in the mid-'80s and found misrepresentations of the work of Ben Burtt and others, some of it deliberate and some of it from a lack of knowledge. So I set out to better represent movies over home systems, and Home THX with its patented technology is the result. It also had a "Good Housekeeping seal of approval" component, especially in areas where people routinely stretched the truth, such as power amplifier output capacity. We used loudspeakers to test, not resistors, and some high-end products simply failed to pass the standard based on real program material into real loads.


MM: When (and why) did you move away from Lucasfilm?

TH: I started teaching at USC in 1987, and from then until 1994 I worked at Lucasfilm mainly on THX, but it was a hard row to hoe to have two jobs, one in northern and one in southern California. So the die was cast with the completion of the Technical Building in 1987, the design and building of which occupied three years of 60-plus-hour weeks for me.

Tom Holman had succeeded beyond everyone's expectations in his execution of C Building; he produced some of the most remarkable filmmaking workspaces ever, and, in the process, launched the TAP/THX projects. After testing out numerous ideas on Kerner, Holman transitioned into working full time on the engineering specifications for the 750,000 square foot Technical Building.
[excerpt from Droidmaker]

In the 'Stag Theater' at Skywalker Ranch there are twenty-two speakers all around, positioned for uniformity of coverage. The multiple arrival times screw up your ability to localize any one sound. A lot of things in the Tech Building are hidden from view on purpose, because if you don't see one, you don't locate one.
[excerpt from Sound-On-Film]


An interview with Tomlinson Holman, pt. 1

by Matteo Milani, U.S.O. Project, April 2009

This is the last chapter dedicated to the new era of film sound that began in the early '80s thanks to George Lucas and his talented crew, whose efforts to develop new technologies took the cinema experience radically into the future. We take for granted technologies like THX, which have now been absorbed into popular culture. I had the pleasure of interviewing Tomlinson Holman, the developer of the THX Sound System and its companions: the Theater Alignment Program, Home THX, and the THX Digital Mastering program. He was at Lucasfilm for 15 years, winding up as the company's Corporate Technical Director.

Mr. Holman is Professor of Film Sound at the University of Southern California School of Cinema-Television and a Principal Investigator in the Integrated Media Systems Center at the university. IMSC is the Engineering Research Center for multi-media of the National Science Foundation. He is founding editor of Surround Professional magazine, and author of the books Sound for Film and Television and Surround Sound Up and Running, both published by Focal Press. He is an honorary member of the Cinema Audio Society and the Motion Picture Sound Editors. He is a fellow of the Audio Engineering Society, the British Kinematograph Sound and Television Society, and the Society of Motion Picture and Television Engineers. He is a member of the Acoustical Society of America and the IEEE. He has lifetime or career achievement awards from the CAS and the Custom Electronics Design and Installation Association. Tom holds 7 U.S. and corresponding foreign patents totalling 23, and they have been licensed to over 45 companies.


MM: Could you describe movie sound at the beginning of the '80s, the period of study and development of a new way of experiencing movies that eventually became THX? What was there before it (untrained professionals, an unprepared audience)?

TH: Just after WWII, cinema sound systems became standardized. Drawing on the developments of the '30s, but adding the outgrowth of technology developed for the war, the 'Altec Voice of the Theatre' became "standard," with an 80%+ market share in theaters, and thus in dubbing stages. This chicken-and-egg arrangement served decently for decades; even its own inventor tried to improve upon it, but couldn't crack the chicken-begets-egg paradigm. An example of the WWII technology employed is the permanent magnets used in the loudspeakers: they were originally developed for bombardier crew earphones! The loudspeakers of the 1930s needed DC current to form their magnetic fields. So the 'Voice of the Theatre' was a great step forward, but it also froze technology at the 1947 level until 1980.

What I did was to consolidate many of the developments made between 1947 and 1980, add a few of my own (which got patented), and make one comprehensive system out of it, and then install it only in rooms that met required acoustical standards. It was the first time ever that anyone tried to standardize on room acoustics from place to place, eventually around the world, within certain parameters--the very opposite of, say, concert hall design, but in many ways far simpler.

[...] With the introduction of surround sound, audio mixing engineers were forced to adapt the old rules and paradigms that had been used for mixing a stereo production. Answers were needed for new questions. Tomlinson Holman, the father of THX cinema surround, pointed out that the most important decision in creating surround sound was the choice between two primary listener perspectives: the 'in-audience perspective', where the listener sits in the best seat in the house, sonic activity is located at the front, and surround creates reverberant ambience; and the 'onstage perspective', where the listener sits in the midst of the musicians, encircled by active sound sources.
[excerpt from Spaces Speak, are You Listening?: Experiencing Aural Architecture]


MM: You're part of film history, having invented the de facto standard of the theater industry. How would you describe your early days at Lucasfilm?

TH: Heady. We were on a mission to improve the whole experience of going to the movies. THX was only one of the resulting developments, but it had the most public face eventually.

People have been asking the question for years: what does THX stand for? And this was exactly Jim Kessler's marketing intent: "keep 'em asking question after question, and they're talking about you!" [...] "It's gotta sound cool, high-tech, and I wanted a way to credit Tom Holman, the inventor," said Kessler. Doodling around, "I just wrote the initials for Tom Holman Crossover on my desk one day." "Crossover" was a reference to the way the speakers divide the treble and bass for ideal acoustics. Traditionally, the crossover is done passively, in a loudspeaker. Holman had designed an electronic crossover. Kessler wrote "crossover" with an X, as in "X-over." He smiled as he recognized the letters THX from George's film THX 1138. "George always seemed to like them, and had used them as a private joke on a license plate on Harrison Ford's roadster in American Graffiti." Kessler liked that the name was both very "Lucas" and still not immediately identifiable as Lucas. "THX, that's perfect!" So Kessler rushed down the stairs and into the cool dark mixing theater beneath him, where Lucas was sleeping on the couch during another marathon Jedi mixing session. Lucas woke up and watched in silence as Kessler waved the paper around and ranted about how perfect the name would be. "Great" was all he said, and that was the end of it.
[excerpt from Droidmaker]


MM: Could you technically describe the C building in San Rafael, the site of the original THX speaker installation?

TH: Within the first weeks of joining Lucasfilm I had a conversation on the phone lasting past midnight with the architect/acoustician of the C building dub stage. He wanted a textbook approach, as he was an MIT grad and a trained member of the team headed by Leo Beranek, who had written the great classic book 'Acoustics'. But Beranek's "recommendation" for cinemas was an average of the cinemas he found pre-1952, which were in fact old vaudeville theaters revamped for cinema, and in the stereo era I thought the reverberation time should be shorter. He wanted 1.2 s RT60 and I wanted something like 0.6 s. We compromised on 0.9 s, but he forgot some absorption so it came in at 0.8 s. When we finished and listened to movies in the room, I was convinced of the utility of lower RT than the average 1930s theater for improved speech intelligibility and localization.
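The RT60 figures being negotiated here follow Sabine's classic relation, RT60 = 0.161 V / A, where V is room volume in cubic metres and A is total absorption in metric sabins: adding absorption (as the architect accidentally did) lowers the reverberation time. A sketch with a hypothetical room volume (not C Building's actual dimensions):

```python
def sabine_rt60(volume_m3: float, absorption_m2: float) -> float:
    """Sabine reverberation time: RT60 = 0.161 * V / A
    (V in cubic metres, A in metric sabins, result in seconds)."""
    return 0.161 * volume_m3 / absorption_m2

volume = 2_500.0  # hypothetical dubbing-stage volume, m^3

# Absorption needed to hit each RT60 target mentioned in the anecdote:
for target_s in (1.2, 0.9, 0.8, 0.6):
    a = 0.161 * volume / target_s
    print(f"RT60 {target_s:.1f} s needs about {a:5.0f} m^2 of absorption")
```

The inverse relationship shows why "forgetting some absorption" in the plans pushed the compromise 0.9 s down to a measured 0.8 s: roughly 12% extra absorption is enough.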

The first phase of Holman's work would be the new sound mixing environment for the dubbing stage. "Bring a new level of quality to film post-production," said George Lucas to Tom. C Building was his canvas. [...] The Sprockets theater was a marvel. Visitors to C Building from Hollywood and from theater exhibitor companies were stunned by the audio. "They wanted that sound in their new facilities," said Tom Holman, "because it was clearly better."
[excerpt from Droidmaker]

The reverberant process smears together the syllables of speech and the sound effects so that a loud sound will cover up a soft one that comes after it. [...] If you want reverberation, you record it on the sound track and if you don't want it, you turn it off. That's really a rather different way of doing things. That makes it pretty independent of the number of people in the auditorium. [...] There's a fundamental difference between a concert hall, which is a space for production, where the orchestra plays and interacts with the hall, and a movie theater, which is a space for reproduction.
[excerpt from Sound-On-Film]


MM: How did Ben Burtt help you develop the system at Lucasfilm? Did you and he shape the sound post-production workflow and methods in an innovative way?

TH: I was hired as a backup in case the computer division didn't get it all done digitally within three years. It didn't, but their work eventually became Pixar, and some went to Avid, etc. So the workflow was pretty much standard, but highly cleaned up: three generations of magnetic heads made the response flatter and flatter, 3,000 man-hours of modifications to a music industry console for features and sound improvements beat film consoles, and many more. Ben was vital in being the main customer, and in recognizing the value and quality of what was being done. He also taught me a great deal about workflow, the history of Hollywood sound, etc. I remember one time we were in the first small room, a storefront, before the C building. We were working on Raiders. He told me that the sound source for opening the lid of the ark in the last reel was within 20'. I couldn't figure it out. It turned out to be lifting the back off the toilet above the water chamber, and slowing it down. I was astonished at his methods and remain in awe of them, as heard recently in WALL-E. I liked later to define the difference in our roles as: "I make it sound good; he makes it sound interesting."

Film sound consoles weren't very good because they were custom built to the needs of film sound in small quantity without much competition. We bought a music industry console built in much higher volumes and with a lot more competition and changed its functions to make it into a film sound console.
[excerpt from Sound-On-Film]


Saturday, April 11, 2009

Bernard Parmegiani, a sound master

Organized by Groupe de Recherches Musicales (G.R.M.) and jointly produced with Radio France, the Présences Électronique festival explores the link between the concrete music of Pierre Schaeffer and new experiments in electronic music. One of this event’s special features is that it offers the public and performers a unique “spatialized” broadcasting and listening system in the Acousmonium.
This year, for its fifth edition, Présences Électronique moved out of the Maison de Radio France and spent three days in the various rooms of 104, the new multi-disciplinary cultural centre in Paris's 19th arrondissement.

We were there and listened in darkness to 'De Natura Sonorum' (1974-1975), one of Parmegiani's finest works in terms of technique, sound, harmony and tone.

Bernard Parmegiani (b. 1927) met Pierre Schaeffer, who encouraged him to attend a training course in electro-acoustic music in 1959. He joined the Groupe de Recherches Musicales the following year, remaining a full member until 1992. Pierre Schaeffer put Bernard Parmegiani in charge of the Music/Image unit of the ORTF's Research Department, where he went on to compose the music for both full-length and short films. This proved to be a first-class training ground for learning how to deal with the problems of musical form as they relate to time, and how to overcome the constraints imposed by the medium of cinema. He also wrote the music for several jingles, as well as songs and music for television, ballet and theater. There then followed 40 years of uninterrupted research and musical creation, built out of an ongoing fight that led him to regard bodies of sound as living bodies. He took a keen interest in those areas in which the improvisation techniques used by jazz musicians meet electro-acoustic music. Parmegiani's own output, primarily made up of sounds recorded on tape, includes more than 70 concert pieces. Except for some mixed pieces, his work as a whole takes the form of music for « fixed sounds », coming within the scope of the large repertoire of electro-acoustic music.


Some excerpts from the interview by Évelyne Gayou, published in full in the book "Portrait Polychromes: Bernard Parmegiani":


Can the Parmegiani sound be defined? Some people speak of "organic sounds"...

In the past, people used to talk about a "Parmegiani sound", a little too much for my liking, and it bothered me a lot. People would say: "Oh! Parmé, what beautiful sounds you make!!!" It's good to make nice sounds, but really, we don't compose music to produce nice sounds, but rather to compose from an idea. I'm not trying to seduce anyone with my music; I'm trying to get people interested. That's why I'm obsessed with constantly renewing myself musically. I can only exist by continuously exploring new territories; otherwise one gets bored with one's own music. The risk is to do 'Parmegiani in the style of Parmegiani' and so on. If I must define what the "Parmegiani sound" is, then it's a kind of movement, a kind of colour, a way of starting and a way of fading the sound, a way of bringing life into it. I do consider sounds as living things. So there is, indeed, something organic, skin deep, but it's always difficult for me to define my music; what we perceive from within isn't always understood by others in the same way. We recognize ourselves in the mirror others hold up for us, to a certain extent; it's a game between the inner and outer realms.
[...] When I start a piece, I create a sound bank; I include new sounds, never used before, that might fit my intention, as well as reworked old sounds. I listen to them and create detailed inventories; this is essential and imposed by my working method. For example, for De Natura Sonorum, I made lists of sounds classified by shape, subject, colour, etc., according to the typology of the TOM (Traité des objets musicaux, Schaeffer's treatise on musical objects). I like to set the sound material in my ear first, so that I can then work with these sounds to express what I want to say [...]


When performing your music in concerts, how do you see the spatial aspects? Do you want to create a show or is it a mere experience?

I'm not very happy with the word show because of its demonstrative character. I prefer for it to be an "experience", because I never project the sounds in the same way twice. When I'm at a concert, standing at the sound projection desk, I intentionally send the sound to specific speakers; I pan to the left, to the right, along the sides or behind, and I associate pairs of sounds. The sound can follow a pre-defined trajectory, or remain static in one area of speakers or even in a single pair of stereo speakers. Some composers, especially when they start out, turn all the potentiometers up and don't vary the levels of the speakers much, and the result is imperceptible. Worse than that, the sound is hindered in all directions because it is everywhere at once. Depending on the acoustics of the concert hall, you might even get reverberation or interference phenomena, and then the audience can't hear any subtlety.


You've gone through the digital revolution, what do you think these new tools have brought to your music?

I was probably the first person at the GRM with a personal digital studio. So I had to learn how to use the digital equipment by myself. By switching from the scissors to the mouse, we've improved a few things, but we've lost out on others. [...] The time it takes to put an idea into practice has shortened and, consequently, we're closer to the compositional act.

[the review of Parmegiani 12-CD box set | by Caleb Deupree]

Wednesday, April 08, 2009

EMS 09 - Heritage and future

The EMS conference is organised yearly through the initiative of the Electroacoustic Music Studies Network, an international team which aims to encourage the better understanding of electroacoustic music in terms of its genesis, its evolution, its current manifestations and its impact.
Areas related to the study of electroacoustic music range from the musicological to more interdisciplinary approaches, from studies concerning the impact of technology on musical creativity to the investigation of the ubiquitous nature of electroacoustic sounds today.


Presenting EMS 09 and Main Theme

Heritage and future

During recent years the electroacoustic community has been celebrating different dates related to the birth of electroacoustic music. Such moments are often a time for analysis and reflection, which is very important for a better understanding of our past, and allows us to review our present and to think about our future.

It also makes us wonder: how do we imagine the future of electroacoustic music? And at the same time, what do we wish for the future of electroacoustic music?

At this point, many other questions arise, all of them connected to the same root: what is the role of musical education in this path? To what extent is our heritage significant in terms of our seeking to extend our sound-music boundaries or, instead, to what extent does this heritage generate barriers which limit our imagination?

Are we looking to develop a self-sustained electroacoustic art or do we think that electroacoustic music would be best integrated with other disciplines in the future?

Will technical developments limit our future steps or have we reached a stage of maturity which allows us to seek in the technology that which we need?

Is it possible to think about new languages (or derived new ones), starting from what is already known today as electroacoustic music?

[more info via ems-network.org]

The Sound Behind the Image: Ben Burtt and the Art of Sound Design

Though many consider motion pictures to be primarily a visual medium, it would be impossible to create the full magic of cinema without sound.
The Sound Behind the Image - hosted by Academy Award-winning sound designer Ben Burtt, and presented in partnership with the Academy of Motion Picture Arts and Sciences - will explore the extraordinary impact of sound in feature films.

The event will also take a fresh look at the revolutionary sound design of the 1977 Oscar-winning classic, Star Wars. A combination of film clips and live demos will examine the role that sound played in shaping the movie.

Friday 10th April 2009, British Film Institute, London

[read more - via bfi.org.uk]