Friday, December 14, 2012

Frank Kruse on "Cloud Atlas"



Here's some info about the work of Frank Kruse, supervising sound designer on Cloud Atlas, in his own words [via duc.avid.com].

"I started on CA in November 2011 about 6 weeks before the ending of the principal shoot. Tom Tykwer hates to work with temp sound effects and it really made sense to start early on this film because we had a pretty early first temp mix to cater about 10 weeks after the last day of shooting which was quite tough for such a long film.
So I could spend some very valuable time creating base sound FX and BGs for the different stories. Tom usually has most of the music done before the shoot starts, so on his films we almost never encounter temp music and we can work closely with the composing team. I physically moved my studio into the same space as the cutting rooms and the VFX production department, which also enabled very close work with VFX updates and with the editor, Alex Berner.
I provided effects stems to Alex, who would then cut with those in the AVID so the directors could make notes while cutting the picture and also adjust edits to make room for some sound ideas. I spread the info to the rest of the sound crew from there.
Quite a few things wouldn't have been in the film had we not had this close connection during the editing, both in terms of the picture cut and the sounds.
This gives the sound team the chance to go down wrong paths and try things out early on, instead of piling up a lot of redundant tracks for the mix and building the soundtrack in the theater.
We talked about transitions and about wanting the film to feel like one story, not six episodes tied together, so quite some time was spent creating seamless transitions that should never be on the nose. For example, there are some cool yet subtle transitions from the bridge where Zachry and Meronym hide from the Kona warriors ("bridge are broken, hide below"): the horse that gallops away turns into the rhythm of the rails of the train carrying Cavendish, and when he sees himself sitting in the opposite seat in the past, the sound "morphs" from a modern train to a steam train and back. So horse to train to steam train and back. I think most people won't recognize it at first "glance", but we thought these kinds of transitions were the ones that would help glue the story together. Markus Stemler, the second sound designer, came up with some great things like that.
Many transitions sit at the edge between music and effects. We tried to blur the difference a bit, from the "waking up" of Papa Song's restaurant to the flare gun when Zachry and Meronym discover the huge Sonmi statue. All those things we tried to treat as halfway between music and sound effects.
Take the scene where Chang shows Sonmi the safe house with the animated cherry blossoms on the walls, for example: the ambience there is made of quite musical drones and some percussive sound effects played in Asian scales. I found S-Layer for Reaktor really useful on CA for these things.
The close proximity to the cutting room (literally next door) enabled us to keep a very close connection to the changes in the picture cut.
One other thing we discussed with the directors was the gender-changing actors in the film. In the beginning they were concerned that the men playing female characters would betray their true identity through their voices, so I went on set to capture some test recordings with Weaving and Whishaw and then experimented with voice treatments to disguise their voices or make them sound more female. So I had a channel strip prepped to treat these voices.
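
As an illustrative sketch of that kind of voice treatment - not Kruse's actual channel strip, which isn't documented here - the usual offline starting point is a pitch shift of a few semitones (a real strip would layer formant shifting and EQ on top). librosa's phase-vocoder shifter is enough to demonstrate the idea:

```python
import librosa
import soundfile as sf

# Hypothetical test recording and shift amount; a production channel strip
# would add formant shifting and EQ on top of the raw pitch shift.
y, sr = librosa.load("voice_test.wav", sr=None)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=3.0)  # +3 semitones
sf.write("voice_test_shifted.wav", shifted, sr)
```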

We tracklaid all the FX in a session prepped with EQs and reverbs, so it was pretty easy to output a stem with reverbs etc. for the AVID. The temp mixes were more or less based on these sessions, which Matthias Lempert and Lars Ginzel mainly mixed in the box.

Some things for the tech-interested: we recorded a prototype electric car that a big German car manufacturer kindly let us use, which served as the base sound for the skiffs (the floating "motorcycles"). We also recorded lots of magnetic-field effects with guitar pickups that foley supervisor Hanse Warns built. A device we called the iHum was used to capture the fields of power tools, TFT screens, etc.
The main sound for the busted delivery truck that Chang uses to free Sonmi from the prison is actually the electric field of our studio's vacuum cleaner with further treatment.
Some of the gunship elements were created with iPad-based synths, which I liked for their great touch interfaces that let me modulate the sound to picture."

[wildtrax.eu]

Thursday, November 01, 2012

Sonic Screens 2012 - full lineup


Electroacoustic music concert 
an event by U.S.O. Project (Matteo Milani, Federico Placidi) in collaboration with O’ and Die Schachtel 

Premieres: 
Agostino Di Scipio
"Two Sound Pieces with Repertoire String Music"
for any number of bowed string instruments and live electronics


Andrea Valle
"Dispacci dal fronte interno"
for Violin, Cello, spatialized electronics and printers


Federico Placidi
"TimeCapsule"
for Violin, Cello and live electronics


Performed by: 
Èdua Amarilla Zádory - Violin
Ana Topalovic - Violoncello

Sound Direction: 
Matteo Milani

Live Sets: 
Thoranna Bjornsdottir aka Trouble
Massimiliano Viel

O’ | via pastrengo 12 Milan | Italy
Saturday, December 1st - from 8:00 to 10:30 p.m.

Door 5 euro

Saturday, October 13, 2012

Randy Thom @ VIEW Conference 2011


On Friday, Oct. 28, 2011, VIEW Conference hosted a Master Class with Randy Thom, Director of Sound Design at Skywalker Sound. He is a firm believer that the sooner the sound designer is involved in pre-production, the better the story can be told. Randy illustrated how sound can shape a film, talking about how doors can be opened to sound. He also shared clips from movies where this kind of early collaboration has happened. Here's an excerpt of his talk during the workshop.

Sound as a full collaborator to make better films 

Alan Splet, Walter Murch and Ben Burtt were the three people, living within about 20 miles of each other near San Francisco, who really brought a new revolution to American film sound during the '70s. I was lucky enough to work with all three of them and "steal" some of their best ideas. One of the first things you learn as a sound designer is not to think too literally about sound, so one aspect of training your ear is to interpret sound in emotional terms. Subjectivity in filmmaking is a playground for sound: the audience doesn't consciously figure it out, but what they are seeing and hearing is being filtered through a character's or filmmaker's point of view in a subjective way. Very often, working on a sequence for a film, what you want to do is think of how you want the sound to make people feel; you analyze what it is about a sound that makes you feel a certain way, and you go looking for sounds or raw material that have those qualities.




Apocalypse Now

If there was ever a film where sound and image were treated more or less equally and allowed to affect each other, it is certainly Apocalypse Now. The first sound that you hear in the film - before any music or any dialogue - is a very odd, electronically synthesized helicopter sound - the Ghost Helicopter. Captain Willard is hearing his memory of a helicopter. What you're listening to is this guy's brain. He's remembering things, he's hallucinating, he's dreaming, he's drunk and under the influence of drugs, he's listening to his brain operate. The opening sequence is the launching point for the whole story; immediately the audience is put in a frame of mind that anything can happen, that this is going to be a very strange ride. As he stands at the window looking outside, he hears a little fly buzzing: it took me a week to record that fly (laughs). At first he hears - and we hear - the sounds of Saigon outside (car horns, Vespas, police whistles). Those sounds morph into the sounds of the jungle: each one of those individual city sounds turns into a specific jungle sound. Physically, for the whole sequence, he's still in his hotel room, but in his mind he moves back into the jungle.


Once Upon a Time in the West 

Sergio Leone decided early on that they would record all the music for this movie before they started shooting, and they used the music during the shooting to help the actors and, essentially, to inform how the film was going to be shot. They were struggling with how to make music and sound work together before shooting the sequence. Ennio Morricone, the composer, happened to go to a musique concrete concert - a genre that involves using real-world sounds rather than traditional musical instruments - by a guy who played a ladder, banging and scraping on it. Then he called Leone and said: "There should be no conventional music at all in the beginning of the film: instead you should perhaps shoot around the sound effects." Leone went and shot the sequence thinking about how the sounds of this little train station were going to work in the storytelling.

I think it's a tragedy that very few studios these days would have the guts to allow a filmmaker to do a sequence like that. They say: "People are going to be bored, we have to fill it up with uptempo music through the whole thing." A lot of big-budget filmmaking these days is "fear-based"; it's not an attempt to do something new, interesting and unusual, to open people's imaginations. It's an attempt to avoid boring people, which is never a good motivation in art.

One of the things these two scenes have in common is a very strong sense of point of view. Camera angles are very important to sound, believe it or not. The kind of shot where an actor is looking at nothing in particular is another open door for creative, subjective sound, because the audience knows intuitively that they're going inside the character's head. It's an open door for sound designers to put in almost any kind of sound that we want. Having some ambiguity or mystery about the visual image makes it easier for me to do something useful with the sound. Extreme close-ups get across the idea of subjectivity. Long-duration shots open the door for sound, too. A character closing their eyes is also an opportunity for sound to do something interesting (imagining, remembering...). The most difficult kind of shot is a brightly lit medium shot, because you're not focusing on anything in particular and there's no mystery there; there's nothing that invites the ear to help figure out what's going on.

Another element they have in common is sparse dialogue. I'm certainly not against dialogue in film - dialogue will always have a role. But dialogue and sound design generally don't go well together, because there's something about the human voice that the human ear always wants to attend to.
If someone is talking - no matter how hard a director asks me to push the sound effects during that sequence - they will distract the audience from the dialogue they are trying to hear. The way to solve that problem is to design the sequence so that there are moments for the dialogue and moments for the sound effects. A compromise has to be made: as a filmmaker you can't fire all your bullets at the same time; it's not going to work. One category of sound tends to dominate at a time - dialogue, music or sound effects. Another bad tendency in contemporary filmmaking is to try to set it up so that all three dominate simultaneously; it never works. Lazy filmmakers will just call the composer: "Make some very strange music telling the audience that this is a very strange place." As a sound designer you try to work with variation the way a music composer does, using sound in a purely musical way - tempo, harmony and rhythm - to evoke emotion. Think about what elements in a set could generate sounds useful for the storytelling: this will be more powerful in the end. It's not decoration; it's a very organic way of telling the audience "this is a very strange place."



Sound-friendly scripts

Most writers are obsessed with words, and they tend to think words should dominate every sequence, with wall-to-wall dialogue.
Filmmakers simply don't think about how to use sound that way before they start shooting a sequence. Think about what the characters hear. Think about how the things they hear affect them and how the characters change over time. I've often found that people who come from a visual or lighting background - as David Lynch did - have very interesting sound ideas. He demands that you be creative all the time.

Another thing I tell directors is: during rehearsal for a live-action scene, play with your actors to find things in the space that can make sounds that will be useful to the story.
As a sound designer, try to imagine ways that sound could play in an interesting but organic, truthful way to help the storytelling in the sequence. Try to think of other powerful sounds that the audience doesn't expect. Part of your job as a sound person - I think - is to help the director make the best film possible. If you have interesting ideas - and you should have them - about ways of shooting the film that would allow you to do something you couldn't do otherwise, of course you should talk about them.


Sound for Animation 

Thanks to big aesthetic jumps in animation, more contemporary animation directors want their movies to sound like live-action movies. For "How to Train Your Dragon" I came up very early on with some speculative vocalizations for the dragons that would help the animators animate to those elements. I tried to use real-world animal sounds - tiger growls, elephants, whales, goats, camels, dogs - to cover a wide range of emotions, allowing sound to influence the animation. The challenge is how to make the transition from one to another, which takes a lot of work and experimentation with pitch-changing techniques.
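
A toy sketch of one such transition, under my assumption (not a documented recipe of Thom's) that the two animal sources are first pitch-shifted into a common register and then equal-power crossfaded:

```python
import numpy as np
import librosa

# Hypothetical source files; the pitch offset is a guess to line up registers.
a, sr = librosa.load("tiger_growl.wav", sr=22050)
b, _ = librosa.load("camel_groan.wav", sr=22050)
b = librosa.effects.pitch_shift(b, sr=sr, n_steps=-2.0)  # drop b toward a's register

n = min(len(a), len(b))
t = np.linspace(0.0, np.pi / 2, n)             # equal-power crossfade curve
morph = a[:n] * np.cos(t) + b[:n] * np.sin(t)  # a fades out as b fades in
```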




Sound Mixing 

Mixing is about choosing the right, or most powerful, sound at any given moment. In a moment when you need to hear the dialogue, you try to artfully lower or eliminate the other sounds that are competing with it. Space can make sounds useful to the story: there are a lot of other tricks, like moving the sound effects and the music into the side loudspeakers and having the voices mostly come out of the center speaker, which makes it a little easier to understand the lines of dialogue. Sound is more powerful if it comes from a place we don't expect. You need to think of sound in terms of spectrum and frequencies and tailor those for a given moment.



Wednesday, October 10, 2012

Augmented Listening

By Tue Haste Andersen - October 9, 2012

reBlogged from: design mind
 

Stop for a second and listen. Close your eyes, use your ears, and just listen.

Whether you are in a quiet office environment or out on a busy street, you'll be amazed by how many sounds there are around you. Most of us do not pay attention to the ambient sounds that surround us. Our brains filter them out and we don't listen. Yet the sounds we miss can be very enjoyable.

Designed Sounds

Today, what we hear in our daily lives is often designed sound - music and sound effects carefully crafted for games, devices, and products. For example, mission-critical products, such as heart rate monitors used during medical surgery or a plane's flight deck controls, use distinctive alarm sounds that are designed to be easy to perceive and to raise a sense of urgency or danger.
In interfaces for everyday tasks, sound is used to create engaging and beautiful experiences. Sounds can generate a special feeling or underline brand identity while simultaneously providing cues that a command has been received by the system. Most smart phones today come with subtle sounds that indicate the pressing of a touch screen’s virtual buttons. Since there is no way to feel if a virtual button has been pressed, the sounds reinforce the action for the user. Another example can be found in industrial design, where the latest electric cars are being designed with artificial motor sounds. The sounds alert pedestrians to the car as well as reinforce the sense of driving a powerful vehicle. These examples underline the overall trend of sound being used to create an aesthetic experience rather than serving as purely a functional aid to improve interaction.



Blurring the Border Between Listening and Composition

While systems and products are becoming more enjoyable and pleasant to listen to, they are usually not intentionally designed for sound interaction. The emergence of accessible music software on computers and mobile devices is changing this. These programs allow for easy modification of sound by the average user and blur the border between listening and sound creation. The small form factor and limited complexity of mobile interfaces have forced music software designers to reduce the complexity of their products, resulting in music software that is widely used by average mobile phone users.
Music apps are often top sellers. Popular applications allow people to become mobile DJs, to transform sounds, and to design ringtones.
I was interested in exploring the blur between sound creation and listening when my friend and colleague Matteo Penzo put me in contact with Matteo Milani from the U.S.O. Project sound art group. The ideas and compositions of the U.S.O. Project revolve around the use of noise and ambient sound as a foundation for sound installations and music composition. Together we wanted to create a mobile experience that would support active listening to the everyday sounds that surround us, making the listener a part of a personal sound installation. Instead of creating a tool for recording and transforming sound, we wanted to start from the sounds themselves. Our goal was to reinforce the sounds of the listener's environment while blending them with more musical sounds. Together the sounds would form a unique experience that could be enjoyed by anybody who has an interest in sound and art.



Early Experiments

We started with a small prototype app for iOS using simple sound algorithms to blend U.S.O. music with live recording from the iPhone microphone. The prototype was tested with real use cases that included listening to the app while taking a long walk as well as while sitting at the computer in the office. We added many parameters for the user to be able to tweak and play with the sound transformation. The parameters were mapped to on-screen sliders and buttons and to sensors like the accelerometer.
While doing the informal tests we found that users struggled to understand the relationship between the parameters and the sound output. In most cases they would end up spending their time experimenting with the parameters to discover how they worked. The visual interface and controls were clearly distracting, taking attention away from the app's original goal of reinforcing ambient sounds for the listener.
Following these early experiments, we decided to take a drastically different approach. We limited the visual interface as much as possible and provided a set of sound themes in the app for the listener to select. This worked much better. All of a sudden the users would pick up the app and, once started, would tuck it away in a pocket while listening to the sounds. Each theme takes sounds from the microphone and blends them with sounds composed by U.S.O. Project. The sounds are blended using sound algorithms, unique to each theme. Each algorithm is carefully calibrated to replicate the work and skill that goes into producing a great listening experience.
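
A minimal sketch of what one such blend might look like - my approximation, not U.S.O. Project's actual algorithms: the composed bed's gain follows a smoothed envelope of the microphone signal, so the theme swells and recedes with the listener's surroundings:

```python
import numpy as np

def theme_blend(mic, bed, sr, attack=0.05):
    """Blend live mic input with a composed bed whose gain tracks the mic's envelope."""
    n = min(len(mic), len(bed))
    alpha = np.exp(-1.0 / (attack * sr))      # one-pole envelope follower coefficient
    env = np.empty(n)
    level = 0.0
    for i in range(n):
        level = alpha * level + (1.0 - alpha) * abs(mic[i])
        env[i] = level
    env /= env.max() + 1e-12                  # normalize the follower to 0..1
    return 0.5 * mic[:n] + 0.5 * env * bed[:n]
```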

Lis10er

The result is Lis10er (pronounced Listener), an augmented sound installation app. Sounds are blended from the listener’s surroundings, creating dynamic music that changes while maintaining its identity. Lis10er provides users with a creative way of listening to their environment and a unique experience with every listen. 


Tue Haste Andersen is a Senior Software Architect based in frog's Milan studio. Tue is a Human-Computer Interaction and Computer Music expert, with research ranging from DJ work practices to the use of sound and music in common interaction tasks. He is also the founder and original author of the popular open-source DJ software Mixxx.

Monday, September 17, 2012

Mirror_Mirror | Sound Installation


Concept and software design by Federico Placidi
Hardware design by Matteo Milani
Woodworker: Fabio Testa
Produced by U.S.O.Project, 2012


Where: [.BOX] Videoart project space, Via Federico Confalonieri 11, Milan
When: Thursday, September 27, 2012 | 6:30 until 9:30 PM
Free admission

The concept of the multiverse was first introduced in the so-called "many-worlds interpretation" (MWI) of quantum mechanics by Hugh Everett III in his PhD thesis, "The Theory of the Universal Wavefunction". His model was conceived as an alternative to the renowned "Copenhagen interpretation" developed by Niels Bohr and Werner Heisenberg.

The MWI postulates that every quantum measurement process (at the Planck scale) creates, as a consequence, a division of the observed universe into multiple parallel universes - as many as there are possible outcomes of the measurement.

In different formulations of this concept, all the universes that form the multiverse are structurally identical, and they can coexist in different states even though they possess the same physical laws and fundamental constants.

We need to take into account that those universes are non-communicating (there cannot be any information exchange between them).

In an episode of the famous TV series "Doctor Who" - "Doomsday", written by Russell T Davies - the Doctor finds himself having to make a difficult and dramatic choice: separating from the person who probably loved him the most by "exiling" her in a parallel universe to guarantee her safety and survival.

It is worth mentioning some parts of the dialogue from the original script of the episode:


Rose comes to a halt in the middle of the beach and stands there, waiting. A short way to her left, the Doctor fades out of thin air. Rose turns to him. He's slightly translucent.

ROSE

Where are you?

THE DOCTOR


(his voice sounds distant)

Inside the TARDIS.

INT. TARDIS

The Doctor is, in reality, standing by the TARDIS console facing straight ahead.

THE DOCTOR (CONT'D)


There's one tiny little gap in the universe left, just about to close. And it takes a lot of power to send this projection, I'm in orbit around a super nova.

(laughs softly)

I'm burning up a sun just to say goodbye.

Sure enough, the TARDIS is spinning around a beautiful super nova.

EXT. BAD WOLF BAY


ROSE


(shaking her head)

You look like a ghost.

THE DOCTOR

Hold on...

He takes his sonic screwdriver out of his pocket.

INT. TARDIS

He points the sonic screwdriver at the console and somehow this strengthens his projection.

EXT. BAD WOLF BAY


The Doctor now looks as solid as if he were really there. Rose walks over to him and raises a hand to touch his face.

ROSE

Can I t--?

THE DOCTOR


(regretfully)

I'm still just an image. No touch.

ROSE

(voice trembling)

Can't you come through properly?

THE DOCTOR

The whole thing would fracture. Two universes would collapse.

The scene partially violates the prohibition on information transit between the two universes, but the subterfuge used in the story (the two characters cannot touch, only see each other) somehow preserves the assumption presented by the MWI, conceding only a small but necessary poetic licence.

In the same episode there is a very touching scene where the two characters (Rose and the Doctor), right after their separation into two different universes, stand in front of one another, separated only by a simple white wall.

This border, imaginary and symbolic, which divides not just two places in the same space-time continuum but two whole universes, gave us an idea.

We wanted to offer that experience as an installation. We wanted to allow the audience to listen to whatever is “on the other side”, beyond that wall, and we wanted all this to happen in real time.

That’s how Mirror_Mirror was born.

The installation is organized inside a space.

What does this mean?

Space in itself, even when it isn't filled with matter (i.e. empty), is in reality permeated by continually fluctuating low energy levels.

These vacuum fluctuations have a particular significance at the quantum level (we refer to the Planck scale, that is, to infinitesimally small dimensions).

In quantum mechanics, these fluctuations represent temporary shifts in the energy state of empty space, in accordance with the Heisenberg uncertainty principle.

This means that the principle of conservation of energy can be violated for very brief periods of time (the lower the energy, the longer the fluctuation can persist).
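
In symbols (a standard statement of the principle, added here for reference), this is the energy-time uncertainty relation:

$$\Delta E \,\Delta t \gtrsim \frac{\hbar}{2} \quad\Longrightarrow\quad \Delta t \sim \frac{\hbar}{2\,\Delta E},$$

so the smaller the borrowed energy $\Delta E$, the longer the fluctuation can persist.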

This energy can decay into pairs of particles and antiparticles (which then annihilate each other).

In essence, the average amount of energy on a larger scale remains constant. Nothing is created, nothing is destroyed.

“There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.” - R. Feynman

It was in our interest to draw an analogy in the sound domain, allowing the audience to directly experience this phenomenon.

As a consequence, we created an application able to "sonify" idealized energy fluctuations (generating pressure waves, structures and emergent behaviours), starting from the lowest energy level available: the background noise.

Thanks to the Kyma software implementation and the use of microphones, it was possible to "measure" the background noise and, through a series of negative feedback operations, enable the instrument to create "something", using all the information available in our space/universe to create temporary energy fluctuations (statistical variations in the density of "packets" of sonic quanta) without violating the principle of conservation of energy - in fact, the average energy, taken as a whole, remains the same.
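
A rough sketch of the idea in code - not the authors' Kyma patch, just an illustration of the feedback constraint: short-lived "fluctuations" are shaped out of measured background noise, and each output block is rescaled so its energy matches the input block's, so nothing is added on average:

```python
import numpy as np

def sonify_fluctuations(noise, block=1024, boost=8.0, rng=None):
    """Shape transient 'fluctuations' out of background noise while keeping
    each block's RMS energy equal to the input's (the negative feedback)."""
    rng = rng or np.random.default_rng()
    out = np.zeros_like(noise)
    for start in range(0, len(noise) - block + 1, block):
        x = noise[start:start + block]
        window = np.hanning(block) ** rng.uniform(1.0, boost)  # random transient shape
        y = x * window
        e_in = np.sqrt(np.mean(x**2)) + 1e-12    # input block energy (RMS)
        e_out = np.sqrt(np.mean(y**2)) + 1e-12   # shaped block energy
        out[start:start + block] = y * (e_in / e_out)  # feedback: restore energy
    return out

mic_noise = 0.01 * np.random.randn(48000)  # stand-in for a measured noise floor
print(np.mean(mic_noise**2), np.mean(sonify_fluctuations(mic_noise)**2))
```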

We have thus arrived at the gravitational center of the work (here there are no more fluctuations, only numerous certainties, due to the dimensions and mass of the object): a wooden artifact, symbolically revisiting the white wall we came across in Doctor Who, which divides our universe from another possible universe.

Which one it is, the visitor will have to find out for himself.

By placing a stethoscope on the wooden surface, the observer will be able to "measure", at various levels of definition, the sounds coming from another probable universe out of phase with ours.

In fact, as identical as it may seem, it strangely does not share the same temporal parameter.

From quantum mechanics and its many-worlds interpretation, we know that every measurement operation will produce a further division of the universe.

So it is possible that, in the end, there will be as many universes as there are observers present.

And, paying a little attention, it will be possible once again to listen to the voices of Rose and the Doctor, as if suspended in a temporal loop, reminding us that the current laws of physics may not be the same in every place and at every time.

Federico Placidi, Matteo Milani
>>U.S.O.Project

Sunday, September 09, 2012

New Album 'Unsorted Tales' Just Released!


Composed by U.S.O. Project - Unidentified Sound Object
SYN-007 © 2012 Synesthesia Recordings
Mastered @ Green Movie, Milan (Italy) by Matteo Milani & Federico Placidi

Performers:
Edoardo Carlo Natoli - Violin
Federico Placidi - Violoncello & Flute

TRACKLIST:

1) F'Shima (13.49)
2) Gretel's New Clothes (19.46)
3) Psalm 21 (22.15)

RELEASE INFO:

Artist: U.S.O. Project - Unidentified Sound Object
Title: Unsorted Tales
Cat.No: SYN-007
File under: Experimental/Electronic
Format: Digital
Release date: 9.2012


F'Shima 



It's night.
There are no precise coordinates, nor recognizable signs that can lead us back to a familiar place.
And yet those sounds remind us, almost pathetically, of a summer night scene we have listened to countless times.
But that’s not what this is.
The slow, articulated flow of pulsating energy gradually takes on a structure, enabling us to be part of something very different.
The flow surrounds us, passes through us.
We can’t see it, nor touch it, but every particle interferes with our body, with our biological substrate. Far away, a storm is coming, perhaps.
It’s neither natural, nor inoffensive. It deceives us with the variety of its harmony of timbres.
The structure suddenly changes, reorganizing itself while turning us into something new.
A breach made of light. It’s oscillating.
Then, silence.
The air stagnates, suspended and motionless.
An electrical impulse.
One more. Others follow, one after another, fainter and more distant.
(Sounds), like petals made of ash.

F'Shima was designed and built around a series of recordings of the electromagnetic fields generated by various electronic devices (hard drives, iPad, iPhone, portable game consoles...), made using old analog telephone pickups. The source material, sometimes fully recognizable, sometimes radically altered, was digitally processed in the Kyma sound design environment.


Gretel's New Clothes 



"Nibble, nibble, gnaw,
Who is nibbling at my little house?"
The children answered: "The wind, the wind,
The heaven-born wind."

Gretel's New Clothes is, in a way, an acousmatic reinvention of the famous fairytale by the Brothers Grimm. It is in the form of a Rondò. 
There is no attempt to retrace the story literally, but rather to suggest it in a subtle way, through the use of strongly characterized sound materials (footsteps in the woods, the wind, the "sounds" of the night...) which are then dialogically processed during the final violin and cello improvisation. 

“They walked the whole night and all the next day too from morning till evening, but they did not get out of the forest, and were very hungry, for they had nothing to eat... And as they were so weary that their legs would carry them no longer, they lay down beneath a tree and fell asleep.” 


Psalm 21



“...Your hand will find out all your enemies;
your right hand will find out those who hate you.
You will make them as a blazing oven
when you appear.
The Lord will swallow them up in his wrath,
and fire will consume them.”

Psalm 21 is not a religious work. There is no implication of a celebratory or ritual nature. The dominant and recurring element during the composition of the piece is the relationship between creation (as a constantly evolving act), creator (whose existence and features we know nothing of) and creature (as a self-conscious subject reflecting on its own nature). These three elements are the driving force around which the work unfolds.

[Available on CD Baby Music Store]
[Digital booklet: Unsorted_Tales.pdf]

Monday, August 27, 2012

An interview with Hamilton Sterling

by Matteo Milani - U.S.O. Project, 2012 

I am happy to continue our series of interviews with Hamilton Sterling - sound designer, supervising sound editor, effects editor, and mixer who has worked on The Dark Knight, War of the Worlds, and Master and Commander: The Far Side of the World, as well as many independent films. He recently cut sound effects on MIB3 and The Host, and worked on Terrence Malick's To the Wonder and The Tree of Life. Hamilton was the supervising sound editor, sound designer, and re-recording mixer on Tomorrow You're Gone by David Jacobson, and has edited sound on the films of P.T. Anderson, Christopher Guest, Andrew Dominik, and Steven Spielberg. To date he has worked on over seventy-nine feature films. 


Recording a Demolition Derby (photo: Michael Dressel)


Matteo Milani: Thanks for your time Hamilton! First of all, could you tell me a bit about your education and musical background?

Hamilton Sterling: I come from a musical family. My mother and aunt sang four-, five-, and six-part harmony by ear. They performed as the Silhouettes on KDKA radio in Pittsburgh, Pennsylvania in the 1930s. Both were very encouraging of my early musical interests. My aunt, who worked at the local newspaper, often brought home fascinating music: György Ligeti, George Crumb, music from all over the world. I heard the soundtrack to Fellini Satyricon before I ever saw the film. My mother was a jazz fan and, when I was a boy, would take me to listen to local groups. 

In high school I played electric bass in the jazz band and upright bass in orchestra. The musician’s union put together an all-star big band for high school students in which I played, and I also performed in the Allstate Orchestra, becoming principal bassist my senior year. I played my first jazz gig when I was sixteen years old, and entered Arizona State University on a four-year music scholarship, graduating with a BA in jazz and classical performance. 

From a social context, I also benefited from the cold war belief that the arts held importance, and that America had to out-compete the former Soviet Union in creative endeavors. Music education in elementary school and high school was excellent and generously funded. Unfortunately, for the arts and young artists, those days are gone. 


MM: Does your experience as a musician help you in your career in film sound? 

HS: I think a sense of rhythm, melody, and harmony is essential to being able to make interesting sound for film. Recent studies show that human beings are wired for music. In studying jazz, the adage of "learn the music theory, then forget it" allows the right brain the freedom to improvise from knowledge. I've come to feel that cutting and layering sounds is a slow-motion version of improvisation. 


MM: How did you enter the movie industry? 

HS: Alongside my musical activity, I became obsessed with films. Stanley Kubrick's 2001: A Space Odyssey became the impulse for much of my early creative life. I saw the film for the first time when I was ten years old. It brought me an interest in modern classical music, archeology, cosmology, astronomy, AI, cinematography, and special effects. It also appealed to my budding existentialism. That any one object of art could do so much was, to a young mind, amazing. 

It may seem hard to believe, but public television at that time played films by Antonioni, Godard, Fellini, Losey, and Bergman, and the art house cinemas were still going strong. I began making short films during the summer vacations between school years with the money I made playing music. When I came to Los Angeles in the early 1980s, sound seemed a natural fit. I began by editing documentaries, Warren Miller ski films, and industrial films, until I got my first sound effects editing work on Alan Rudolph's Trouble in Mind. 


MM: Choosing the right sound(s) to picture. An art form? 

HS: Because sound editing began as a technical blue-collar job, many people had the impression that what we did was nothing special - "A monkey could do it" was the often-heard refrain. Here in America, we never developed a militant artistic union, an aesthetic of labor, if you like. Of course, because many of the corporate films being made today are empty of artistic merit, to say nothing of merited thought, it's no wonder that art and labor are still dirty words. Choosing the right sound is an artistic, and surprisingly moral, endeavour. Thinking back on choosing sounds for films I did twenty-five years ago: if you had a scene in a rough cityscape, one might choose a black voice yelling in the street, the effect of that implying threat (at least to a certain part of the population). And that is a choice that can subtly further social injustice. I'm not saying that an artist should proscribe their work, but one has to be very conscious of one's choices, because in mass entertainments those choices may reach millions of people, and they have consequences. 


MM: You've frequently shared your nominations with Richard King, Christopher Flick and Michael W. Mitchell. How do you guys collaborate on projects? 

HS: When I worked for Richard King, I did sound design and sound effects editing. Occasionally, I did sound effects recording. The frog rain in Magnolia was re-recorded against the reflections of a cliff in the Angeles National Forest. We set up two speakers facing away from the microphones and slowly rotated the speakers toward the mics, giving the playback the effect of distant frogs falling toward us. Eric Potter and I also put a speaker in a pickup truck to record a playback of previously sampled frog elements. Eric put the truck in neutral and steered into the quiet, distant valley. The sound was incredibly bizarre, and I ruined the heads and tails of multiple takes by laughing. It's always fun to get out into the world to record, and Richard loves to record. 

Chris Flick did all the programming and cutting of the foley, and we conferred with him on elements that effects needed help with. As to the sound effects editing, Michael Mitchell and I did a lot of heavy lifting. Richard cuts sound effects as well, and in this day and age, I admire him for it. 


MM: Would you like to explain your role when working with the other members of the sound editorial?  

HS: When I'm not supervising, I try to communicate very specifically with the supervisor and my fellow editors: what the stage delivery requirements are, the predub breakdowns (or whether there will even be predubs), who is doing what in terms of special design. Hopefully, the supervisor has been able to spot the film with the director. If there is little in the way of specific information, I get a sense of the director's aesthetics from what's on the screen. If what you see is a pack of clichés, you know what to expect. If there are few clichés, you have reason for hope. At the end of the day, your work is only as good as the director's vision and their courage to take risks. 


On the Cary Grant stage at Sony on Morning (photo: Leland Orser)


MM: What are the musical tools you use to boost your sound designing workflow? 

HS: A number of years ago I purchased a Kyma sound design system from Symbolic Sound. It is very useful in producing unique sounds. It's always inspiring. I also use my old Kurzweil K2000 as a MIDI controller, a Haken Continuum fingerboard, and a PC2R. In the studio I use Millennia mic preamps. For field recording I use Schoeps mics and Sound Devices mixers, as well as different contact, ribbon, and dynamic mics to gather my sounds. My bass is fitted with a MIDI pickup that I also run through an Axon to trigger my samples. 


MM: Sound processing: can you give us a description of your studio gear? 

HS: I use Pro Tools HD3, Genelec 5.1 speakers with a MultiMax monitor, and many plug-ins. For picture I project HD through a Decklink card to a nine-foot screen. Aside from Kyma, Haken Continuum Fingerboard, Kurzweil, and Axon, I have on occasion used Beat Detective in Pro Tools to place rhythmic structure onto multiple effects, and Melodyne to re-engineer animal vocals. I just started using Battery as a sampler. Altiverb, Pitch ‘N Time, Lowender, and GRM tools are staples. 


MM: Can you reveal to us a "making of" of a very special sound effect(s) or a sound sequence? 

HS: I'm very proud of the storm sequence in Master and Commander: The Far Side of the World. Making the scene dynamic, given the similarity of frequencies in both the water and the wind, was an interesting problem. I began by cataloguing the ocean sounds into frequency ranges from low to high. I catalogued the wind in a similar way. At the time, Warner Bros. had terrible editorial rooms that had not been updated since the 1950s. Not only was there no surround system, but there was no wall treatment of any kind. Anyone who has had to edit endless water and ocean effects knows that in a box-like room with hard surfaces, audio hallucinations in the white noise of the waves can produce boat engines that aren't there and other weird effects. So I brought sound blankets from home and hammered them into the walls. I scavenged an extra pair of speakers from an adjoining room and built myself a primitive surround system. I cut the sequence in 5.1 tracks that would mirror the mixing console and internally panned and level-set everything. Because the water visual effects were actual layered shots of waves, they changed constantly. But the first time I cut the sequence, I really liked the rhythms I had found. As the sequence changed, I was determined to keep this poetic kind of rhythm, so instead of just cutting up the tracks when conforming them, I took the time to find new rhythms and create the sequence anew each time it changed. I was very tough-minded in this approach. Fortunately, I was given the time to do this - something that is unique to this day. I then cut the hard effects in the same 5.1 style and processed the siren's call of the wind in the rigging through Kyma. When the edit went to the stage in this form, the mixers worked on it for a couple of days, trying to tame it. Very kindly, they told me they had decided to put it all back the way I had originally laid it out, because it had a life to it that their smoothing of the rough edges took away. That's the way the sequence was released. 
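
That cataloguing step can be sketched in a few lines - an assumption of mine about the mechanics, using scipy band filters with illustrative band edges, not Sterling's actual ranges:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr, edges=(200.0, 2000.0)):
    """Split a mono recording into low/mid/high bands for cataloguing."""
    low = sosfilt(butter(4, edges[0], 'lowpass', fs=sr, output='sos'), x)
    mid = sosfilt(butter(4, edges, 'bandpass', fs=sr, output='sos'), x)
    high = sosfilt(butter(4, edges[1], 'highpass', fs=sr, output='sos'), x)
    return {'low': low, 'mid': mid, 'high': high}

# e.g. file each band of an ocean recording under its frequency range
bands = split_bands(np.random.randn(48000), sr=48000)
```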


MM: What do you regard as your most important credits in your career thus far? 

HS: The Tree of Life and The Assassination of Jesse James by the Coward Robert Ford are my two favorite films. They’re closest to the feelings I felt as a young man introduced to the great European cinema: thoughtful, unsentimental, mysterious. They capture something of eternity. 


MM: How do you get involved with the movie “The Tree of Life”? What kind of approach did you take on foley? 

HS: I have known the sound supervisor, Craig Berkey, for many years. Erik Aadahl had hired me on Transformers: Revenge of the Fallen, and mentioned me to Craig. We fell back together again, which is the way of the film business. Andy Malcolm of Footsteps Studios walked the foley. Realism and proper perspective (using multiple mics) are very important. Because Mr. Malick often uses non-synchronous production takes, foley is used to ground the characters within the scene. It becomes another part of his palette. When it is absent, that too becomes a color. 


MM: Can you describe how some of those sounds were accomplished? 

HS: Andy originally used his house as a foley studio. It’s out in the middle of the Canadian wilderness – forty-five minutes from Toronto. The stairs really creak, he never sweeps his kitchen. It’s all real. (Just kidding about the kitchen.) Now he has a fabulous studio a stone’s throw from his house, so you get the best of both. 


MM: How was the communication with the director and the rest of the team? 

HS: I was only on the stage for a few hours during our first temp mix. But I was struck by Mr. Malick's graciousness. I truly admire his work, and have since I first saw Days of Heaven as a youth. 


MM: To mix "in the box" in sound editorial before the final dub: what are its pros and cons? 

HS: Unfortunately, schedules now seem to allow the sound effects only to be mixed in the box. Even on the most well-financed films, mixing in the box is now common. On Knight and Day (James Mangold), we pre-mixed all of the sound effects and ambiences into 5.1 groups and kept them virtual. We had a couple of weeks to adjust these pre-mixes on the mixing stage, as the console on the Cary Grant stage at Sony could mirror Pro Tools. But the number of tracks and the constant changes necessitated continually re-mixing added elements in the box. I recently did some work on MIB3, and at least for the temp mix, the effects were pre-mixed in the box, then taken to the stage for final adjustment. Keeping a somewhat traditional separation of elements is helpful for conforming, as well as giving the sound effects mixer creative input. If you set your editing room up correctly, it can work out quite well. Of course, sound editors are not being paid as mixers, so there are ways in which this situation is financially disadvantageous. But it is creatively rewarding. For independent films, the future is here. With the track counts of the new Pro Tools HDX cards, traditional mixing facilities will have an increasingly difficult time staying afloat. Unfortunately, many fine mixers will as well. 


MM: A networked environment: can you describe the importance of a client/server architecture in sound post production for a feature film? 

HS: It's great to have. On Knight and Day we were editing at another facility before we moved to Sony for the mix. Sony has a nice server system for moving your work to and from colleagues as well as mixing stages. Structuring the folder architecture on the server is extremely important. Knowing exactly what elements have come from the mixing stage, what needs to be updated, and what needs to be mixed may seem simple, but with multiple versions, competing creative interests, and huge amounts of data, organization and terminology are paramount. 


MM: Sound effects editing for multichannel surround: what are your spatialization techniques? 

HS: I edit for the 5.1 pre-mixes. When I do have to spread an effect, I'll use a little delay, reverb, or the Waves PS22. I've recently begun using the free Schoeps DMS plug-in for three-channel field recordings that decode to 5.0 surround. I love the Schoeps plug-in. Now I record all my sounds on three channels and decode to 5.0. Even simple sounds, like a light switch, pick up the character of the room. It's a fascinating way of creating a feeling, using these simple multi-channel sounds. If the simple sound creates an interesting space, I'll work backward and, using Altiverb, try to get the rest of the sounds of the scene into that same environment. Of course, if it doesn't work, you still have the mono or M/S stereo recording. 
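
For reference, the mid/side half of such a recording decodes with simple sum-and-difference math; here is a minimal numpy sketch (the full DMS-to-5.0 matrix is more involved and specific to the Schoeps plug-in):

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode a mid/side pair to left/right; width scales the side signal."""
    mid, side = np.asarray(mid), np.asarray(side)
    left = mid + width * side
    right = mid - width * side
    return left, right
```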


MM: You made a film in the late ‘90s. Did you do your own sound? 

HS: I supervised it and cut, but I had a number of wonderful friends from the sound editing and mixing worlds who helped me complete it. Because of current events, I decided to prepare a new version of the film, Faith of Our Fathers, for Blu-ray and DVD, and to re-construct the sound for 5.1. As I began to re-assemble all the elements, I realized that we who started in the business in the mid-1980s lived through a radical transition in our work. At the time, magnetic film was all there was. But by the time I shot Faith of Our Fathers in 1991, the digital world was just beginning. Faith was originally shot on 16mm film with a 1:1.85 ground glass for theatrical blow-up to 35mm. All the dailies were 16mm. I had obtained from a friend one of the first Sony D10 Pro DAT recorders in the country. It was strictly grey-market. I thought I could mix and record the production sound to one channel and put a 60-cycle pilot tone on the other, so that when transferring to magnetic stock (both 16mm and later 35mm) the transfer machine ("dubber", we called it) would stay in sync. So, over a long period of time, friends and I sunk and coded the 16mm dailies. I cut the picture, and when it was time to prepare the track for 35mm mixing in 1995, I used a 16mm-to-35mm synchronizer to phase the new 35mm dialogue to the 16mm worktrack. The dialogue was cut on mag. The backgrounds were a combination of 24-track two-inch and DA-88s (the bane of all mixers at the time). And most of the sound effects were cut on an early version of Pro Tools and then transferred to DA-88. When I decided to do my 5.1 re-mix, I had 35mm mag to transfer to Pro Tools, DA-88s (I still have one of those boat anchors, and it works!), and DATs with the original production sound and music. When I think that the process involved 16mm mag, 35mm mag, Pro Tools, 24-track, DAT, and DA-88s, it becomes evident that the transition from analog to digital was quite messy. The other shocking thing is that I was able to finance my film on a sound editor's salary (which is the reason it took so long to complete). 


MM: What are your thoughts on the boundaries between music and sound design?

HS: Having recently released Migration, I can tell you that creating a 5.1 programmatic musical soundscape is a wonderful artistic process. Combining a purely aural narrative with the abstraction of music and processed effects blurs the creative experience. I don't mean just adding a sound effect to a music track; I mean creating the entire living thing as one artistic statement. There is a universe of possibility in the soundscape form, and because of my musical life, the addition of ambiences and effects to create emotion is a fulfillment of who I am. Other examples of soundscapes that I like can be found in the plays of Romeo Castellucci's Socìetas Raffaello Sanzio and the Wooster Group, both of which I find inspiring. As to the boundaries between music and sound design in film, I would say they have been nearly erased. I just completed the film Tomorrow You're Gone, with Michelle Monaghan and Stephen Dorff. Kyma was used extensively in creating a very musical soundscape in which to set the traditional effects. 


Recording 5.1 ambience for Tomorrow You're Gone (photo: Cris Lombardi)


MM: About your album releases: do you think that detectable technical processes are an integral aspect of the composition’s overall aesthetic? Is it important in this composition that the listener is aware of the technical processes? 

HS: The album Rise and Fall is made mostly of live loop improvisations featuring fretless bass, acoustic bass guitar, and MIDI-following synths and effects. It grew out of musical feelings, a very simple MIDI-synth/live stereo mix chain, and the need not to multi-track or manipulate the live performance. In that respect, technical qualities like MIDI delay, or tracking anomalies from the Axon pitch controller, were of secondary importance to the spontaneous capture of the music. No meta-statement should be implied from these technically primitive recordings, other than that they were all done with as little post-production as possible. As for Migration, the soundscape album I created with Grammy-winning musician Jimmy Haslip, that is a piece that was conceived and composed for surround. Its feelings and scope are purposely cinematic. 


MM: What's the most important tip you've ever received regarding sound? 

HS: Often on big films, the number of audio ideas brought to the process can be overwhelming. Sitting on the mixing stage listening to Steven Spielberg, Michael Kahn, and John Williams discuss how best to tell the story of a scene in War of the Worlds, I was struck by their equanimity toward music and sound effects. For them, it is all about story. Their years together have created a language around this idea. What tells the story in a particular moment, and what elements do you have available to do that? An agreement on what the story is allowed them to know what to emphasize on the track. Other filmmakers see story differently, or dissect story as myth and power, and therefore take a very different approach. I love the sound of Jean-Luc Godard's films because it is a featured element in the argument. Film Socialisme begins with a line-up tone that moves from speaker to speaker around a 5.1 mix. It introduces his dialectic between sound and picture within the contemporary structure of multi-track films. It's brilliant and very funny. 


MM: What is the most important topic you would want to talk about to make post sound better? 

HS: Forcing USA corporations, either by massive tax penalties or heavy import tariffs, to hire the workers of their own country. Too many of our sound jobs are being outsourced. Germany has good unions, pays its workers well, and has an export rate second to none. The old saw that in-country labor produces products that can't compete is obviously not true: Germany has a 7% trade surplus. The sad truth is that USA corporate profit is at an all-time high, CEO salaries are at an all-time high, and too many people are unemployed. Corporate contempt for basic decency is the primary problem at this moment in history. It will eventually change, one way or another. 


MM: Do you have any advice for anyone who is interested in a career in the sound dept? 

HS: With the technology of audio, music, and picture in an ever-increasing cascade toward the infinitely complex, having the time to learn the programs, plug-ins, hardware, software, picture formats, and optimal workflow processes is itself becoming a full-time job. Having the mental space to discover why you want to do it, and what doing it even means, to you and to society, is something that young people should consider. This work used to be the wild west. Most of it, for the time being, now sits inside the corporate world. That is not a world that should be perpetuated. So then it becomes about making art, with no potentially viable means of making a living. Last quarter Migration streamed 2500 times and made 3 cents. So is this a risk you are willing to take? Do you see the world differently, and do you have something to say about it? If so - and now I'll paraphrase Stanley Kubrick's advice to young filmmakers - "Get a camera, as soon as you can, and start making films"... or music, or soundscapes, or installation art. If you are meant to work on a handful of great films in your career, somehow, with luck, you will make it happen. 


MM: Silence is mentioned a lot when discussing sound. What was your approach in its usage? 

HS: As John Cage pointed out, silence is never truly silent. But one must be silent in order to listen. 


Wednesday, July 25, 2012

Lis10er is now available in the App Store

We would like to introduce our brand new iPhone app, Lis10er (pronounced "Listener"). It is a binaural, audio-augmented-reality iPhone application that creates an "augmented soundspace", warping and mixing in real time the device's live microphone input with prerecorded imagined sounds and a processed version of the real-time input. It was designed by U.S.O. Project (Matteo Milani, Federico Placidi) and Tue Haste Andersen.

The app is a mobile installation that places the emphasis on the surfaces of the world in which we live. It contains 10 themes: each one is a carefully composed "virtual place", where the sonic environment is encapsulated and transformed to invent a new reality, mixing "live" sounds and their aural context with designed ones. 


How It Works

Step 1: To get started, you only need earphones or any other external headset with a microphone.

Step 2: With the wet/dry slider you can blend the amount of the input source with the processed output (see the sketch below).

Step 3: Swipe left and right to switch among 10 themes.
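
For the curious, a wet/dry control of this kind typically computes a crossfade like the sketch below (an equal-power curve is one common choice; the app's actual law isn't documented here):

```python
import numpy as np

def wet_dry_mix(dry, wet, mix):
    """Equal-power wet/dry blend; mix = 0.0 is all dry, 1.0 is all wet."""
    theta = mix * np.pi / 2.0
    return np.cos(theta) * dry + np.sin(theta) * wet
```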



Compatible with iPhone 3GS, iPhone 4, iPhone 4S. Requires iOS 5.0 or later.


What's coming in the next versions:
  • New Themes
  • Non-linear selection and crossfading between Themes
  • Background recording
  • iTunes file transfer
  • Soundcloud integration
  • Support for iPhone’s built-in microphone

Friday, June 29, 2012

News: Sonic Screens 2012

U.S.O. Project is happy to announce the selected artists for the 2012 edition of Sonic Screens, which will take place in Milan in November 2012 @ O'.
Here they are:

Andrea Valle
Dispacci dal fronte interno
for Violin and Cello, spatialized electronics and printers 

Agostino Di Scipio
Two Sound Pieces with Repertoire String Music
for any number of bowed string instruments and live electronics


 [photo courtesy of Franz Rosati]

The two works will be performed live by Ana Topalovic & Èdua Amarilla Zádory and will later be released (in the first months of 2013) as a digital download by Synesthesia Recordings.

Monday, June 04, 2012

OnMedia - GRM Tools Workshop

Milan - Saturday, June 9th 10:00 to 12:00 and 13:00 to 17:00

Fifth round of the cycle, 'European centers of research on sound and new media'

Focus FRANCE: Ina-GRM - Groupe de Recherches Musicales, Paris
Guest speakers: Emmanuel Favreau (Chief Engineer for the Development of GRM Tools), Francois Bonnet (Research, Teaching and Curating activities)

[Pierre Schaeffer and Bernard Parmegiani, courtesy of Ina-GRM]

Ina-GRM (Institut National de l'Audiovisuel - Groupe de Recherches Musicales) in Paris is a pioneering organization for musique concrete, acousmatic and electro-acoustic music, whose history dates back to the '50s, when it was founded by Pierre Schaeffer. Continuously engaged in creative activity, research, preservation and dissemination in the field of music and recorded sound, the GRM is an experimental laboratory unique in the world. In response to the expectations and needs of musicians, composers and sound designers, it specializes in the development of a range of innovative tools to process and represent sound: the GRM Tools and the Acousmographe. Its music creation and production activities are mainly based at Studio 116 in the Maison de la Radio in Paris.

GRM Tools Workshop
Up to 10 participants.
Bring your own laptop and headphones.
The workshop is free and will be held in Italian by Emmanuel Favreau along with Francois Bonnet. During the seminar, after outlining a brief history, Emmanuel Favreau will explore the possibilities of digital sound processing with the latest tools developed by GRM; he will also deal with issues related to how the tools interact with the musician.
These notions will be illustrated by real-time demonstrations and musical examples from the repertoire of electroacoustic music. After the workshop, Francois Bonnet will present the lecture 'Music and sound in space, an introduction to multichannel composition'. The meeting is open to the public and will present the research activities of the Paris centre through spatialized listening sessions and projections.

For information and registration: on@on-o.org

OnMedia focuses - from September 2011 and throughout 2012 - on a wide range of events, including a series of conferences dedicated to the most important European centers for research on multimedia sound, art and technology; a series of workshops on subversive listening; presentations by international visual artists and authors who use different media and languages; and concerts and performances.

[More info: on-o.org]


Tuesday, May 29, 2012

"Digital Re-Working / Re-Appropriation of Electro-Acoustic Music"



What is DREAM

DREAM is an EU-funded project aimed at preserving, reconstructing, and exhibiting the devices and the music of the Studio di Fonologia Musicale della Rai di Milano. During the 1950s and 1960s, this was one of the leading places in Europe for the production of electroacoustic music, together with Paris and Cologne.
During the project, part of the equipment of the Studio (oscillators and non-linear filters) has been virtually reconstructed and will become part of the permanent exhibit at the Museum of Musical Instruments in Milan.

The aim of this one-day symposium is to present to the public the main results of the DREAM project, including the installation that recreates part of the original devices of the Studio di Fonologia di Milano della Rai, as well as the book “The Studio di Fonologia – A musical journey”, edited by Maria Maddalena Novati and John Dack, and published by Ricordi.

The event is comprised of two parts.
The morning will be devoted to the workshop Conservare, mostrare, interagire: per un museo da toccare [Preserve, exhibit, interact: for a tangible museum]. During the workshop, DREAM researchers and invited speakers will discuss applications of novel interactive technologies to museum exhibits, with particular reference to music and musical instruments museums.
The afternoon session will present the results of the DREAM project to the general public, through the movie Avevamo 9 oscillatori [We used to have 9 oscillators], additional talks by DREAM researchers, and two musical performances that make use of sonic materials produced at the Studio di Fonologia.

Program
[http://dream.dei.unipd.it/?page_id=645]

Friday, June 15, 2012
Castello Sforzesco, Museo degli Strumenti Musicali, Sala della Balla Milano 
Free of charge, limited places available

Thursday, April 19, 2012

Unseen Noises [USO002]

24-bit/48kHz Royalty-free Sound Design Collection



Unseen Noises is the second sound effects bundle created by sound designers and electronic composers Matteo Milani and Federico Placidi (aka Unidentified Sound Object - U.S.O. Project).

Electromagnetic information is invisible and omnipresent. In every city, especially the big ones, countless electromagnetic waves are hidden: we can't hear them, but they're everywhere! We explored this invisible noise pollution by transducing electromagnetic fields into audio signals with a telephone pickup: it acts like a radio antenna for hum and weird electromagnetic noises.

We plugged it into a SONOSAX SX-R4 recorder and moved it close to electrical devices - like a stethoscope - to locate interesting and curious sounds from LCD televisions, internet antennas, lighting systems, transformers, game consoles, tablets, electronic security systems, scanners, computer monitors and hard drives, printers, navigation systems, fax machines...

All of the audio files have been embedded with metadata for detailed and accurate searches in your asset management software.

Like the previous library, this collection has not been peak-normalized but loudness-normalized. In loudness normalization, the gain of a signal is adjusted so that its loudness level equals -23 LUFS. This helps balance the loudness of multiple sound files against each other.
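
As a sketch of that process (the tool the authors used isn't named), the open-source pyloudnorm package measures integrated loudness per ITU-R BS.1770 and can gain-shift a file to the -23 LUFS target:

```python
import soundfile as sf
import pyloudnorm as pyln

# Hypothetical file name from the library
data, rate = sf.read("unseen_noises_01.wav")
meter = pyln.Meter(rate)                      # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)    # measured loudness in LUFS
normalized = pyln.normalize.loudness(data, loudness, -23.0)  # gain to -23 LUFS
sf.write("unseen_noises_01_norm.wav", normalized, rate)
```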

Here is what you get in "Unseen Noises":
  • Stereo Files (40 items)
  • Tab-delimited file (.txt)
  • Excel spreadsheet (.xls)
  • License Agreement (.pdf)
  • Artwork (.jpg)

Audio Format: Broadcast Wave Files (.wav)
Sample Rate: 48 kHz
Bit Depth: 24-bit
Size: 2.43 GB
Download size is 1.9 GB (compressed .rar archive)

Available on www.unidentifiedsoundobject.com