SONIC DESIGN - FINAL PROJECT

27TH AUG - 6TH DECEMBER (WEEK 1 - WEEK 14)
NG VEYHAN (0349223) / BACHELOR OF DESIGN (HONS) IN CREATIVE MEDIA
SONIC DESIGN
ASSIGNMENTS DOCUMENTATION


Module Information



Practical

EXERCISES

Distributed over the first half of the semester, the exercises served as an introduction to the basics of sonic design.

For the first exercise, we were given a reference track and four more tracks whose EQ we were to edit to match the levels of the reference. This exercise trained us in the basics of what to listen for to determine which frequencies of a track required correcting.

(Fig 01, EQ Exercise Track 1, 6/9/2022)

First Track: It sounded like it had less body than the reference track, indicating a lack in the low end. It also sounded muddy and needed an increase in the high end as well. The mid-tones were well balanced and required no correction.

(Fig 02, EQ Exercise Track 2, 6/9/2022)

Second Track: The high end of this track was unclear, so an increase there would make it sound crisper. The bass on this track was too heavy and required a little lowering.

(Fig 03, EQ Exercise Track 3, 6/9/2022)

Third Track: The sound of this track was very tinny, meaning the high end was too strong, and the low end needed a slight boost to "round off" the sound. Increasing the Q width of the low-end node also helped smoothen out the sound a little.

(Fig 04, EQ Exercise Track 4, 6/9/2022)

Fourth Track: The sound profile of this track was relatively uniform, though it was still a little bass-heavy. The changes along the high end were far less prominent, requiring only a slight boost at the very top.
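As a rough illustration of what these band adjustments do under the hood, the NumPy sketch below (my own example, not part of the exercise files; the 250 Hz and 4 kHz crossover points are arbitrary) applies linear gains to the low, mid and high bands of a signal:

```python
import numpy as np

def adjust_bands(signal, sample_rate, low_gain=1.0, mid_gain=1.0, high_gain=1.0,
                 low_cut=250.0, high_cut=4000.0):
    """Scale the low, mid and high frequency bands of a signal via FFT.

    Gains are linear multipliers (2.0 is roughly a +6 dB boost,
    0.5 roughly a -6 dB cut). Band edges are hard cutoffs here,
    unlike a real EQ's smooth curves.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs < low_cut] *= low_gain
    spectrum[(freqs >= low_cut) & (freqs < high_cut)] *= mid_gain
    spectrum[freqs >= high_cut] *= high_gain
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a Track-1-style fix -- add body (low end) and a little air (highs)
sr = 44100
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
fixed = adjust_bands(mix, sr, low_gain=1.5, high_gain=1.2)
```

The listening logic is the same as in the exercise: decide which band is lacking or overpowering, then nudge its gain rather than reshaping everything at once.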

For the next exercise, we were given a neutral-sounding reference track and had to edit it to match the sound profile of a given environment, looking to real examples of sounds in each environment as a guide.

For the audio imitating a hall, the sound is characterized by a long delay between the initial sound and its echo, and by a large diffusion resulting from the large space a hall is usually associated with. Therefore the effect rack uses a long decay time and large diffusion.

(Fig 05, Hall Reverb Panel, 13/9/2022)

(Fig 06, Hall Reverb Audio, 13/9/2022)

For the audio in a bathroom, it's the opposite: in the small space of a bathroom, the echo is near instant as there is not much distance for the sound to travel and bounce. There is also less diffusion, as the sound can't disperse as widely. However, there is still an echo, so decay time and diffusion are still present, though less prominent.

(Fig 07, Bathroom Reverb Panel, 13/9/2022)

(Fig 08, Bathroom Reverb Audio, 13/9/2022)
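The hall-versus-bathroom difference can be sketched with a toy convolution reverb, using a synthetic impulse response of exponentially decaying noise as a stand-in for Audition's reverb rack (the constants below are my own arbitrary choices, not the panel settings from the figures):

```python
import numpy as np

def synthetic_reverb(dry, sample_rate, decay_seconds, wet=0.4, seed=0):
    """Convolve a dry signal with an exponentially decaying noise burst.

    decay_seconds controls the tail length: long for a hall,
    short for a small tiled bathroom. wet sets the reverb mix level.
    """
    rng = np.random.default_rng(seed)
    n = int(decay_seconds * sample_rate)
    t = np.arange(n) / sample_rate
    impulse = rng.standard_normal(n) * np.exp(-3.0 * t / decay_seconds)
    tail = np.convolve(dry, impulse) * (wet / np.sqrt(n))
    out = np.zeros(len(tail))
    out[: len(dry)] = dry          # keep the dry signal up front
    return out + tail              # add the diffuse tail on top

sr = 22050
clap = np.zeros(sr // 2)
clap[0] = 1.0                      # an impulsive "clap" test sound
hall = synthetic_reverb(clap, sr, decay_seconds=2.0)      # long, diffuse tail
bathroom = synthetic_reverb(clap, sr, decay_seconds=0.3)  # short, tight tail
```

The only parameter that changes between the two renders is the decay time, which mirrors how the hall and bathroom presets differ mainly in decay and diffusion.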

One of the sounds to mimic was a phone call. Phone calls tend to have a low-quality, tinny sound, akin to something heard through cheap earphones. To replicate this, I cranked the high end all the way up, to a level that would probably peak the audio, and reduced the low end all the way down to further emphasize the low-quality sound.

(Fig 09, Telephone EQ Panel, 13/9/2022)

(Fig 10, Telephone EQ Audio, 13/9/2022)

The muffled sound profile was the opposite of the phone-call sound: a distinct low end but very little high end. The low end wasn't boosted as much as the high end of the phone call was, otherwise the sound would be too muddy and the dialogue unclear. The high end was reduced as much as possible.

(Fig 11, Muffled EQ Panel, 13/9/2022)

(Fig 12, Muffled EQ Audio, 13/9/2022)
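Both profiles are essentially aggressive band filtering. As a rough sketch (a brick-wall FFT mask rather than Audition's EQ curves; the 300–3400 Hz telephone band is the classic landline range, and the 500 Hz muffle cutoff is my own guess):

```python
import numpy as np

def fft_bandpass(signal, sample_rate, low_hz, high_hz):
    """Brute-force band-pass: zero every frequency outside [low_hz, high_hz]."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

sr = 16000
t = np.arange(sr) / sr
voice = (np.sin(2 * np.pi * 120 * t)      # low-end body of a voice
         + np.sin(2 * np.pi * 1000 * t)   # core speech energy
         + np.sin(2 * np.pi * 6000 * t))  # airy highs

telephone = fft_bandpass(voice, sr, 300.0, 3400.0)  # tinny phone band
muffled = fft_bandpass(voice, sr, 0.0, 500.0)       # through-a-wall lowpass
```

The telephone keeps only the tinny mids; the muffled version keeps only the low-end body, matching the two opposite EQ moves described above.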

For the next short exercise, we were given a sample of a jet passing by a recording device. The goal was to extract an explosion sound and edit it to make it fuller and add more impact.

One method was to layer multiple instances of the audio, with each layer differing in mixing and effects. The average explosion sound could use more reverb, so I used an equalizer to give the explosion more low end, made it less sharp by slightly lowering the high end, and then added some reverb.

(Fig 11, Explosion Waveform, 20/9/2022)

Afterwards, to create the explosion sound with effects, I took the normal explosion sound and layered it with a version that had more reverb. The preset effects rack also helped create some very interesting sounds, particularly one reminiscent of a spaceship from older sci-fi games. I felt it made a good addition as a lingering sound effect following the explosion.

(Fig 12, Explosion Multitrack, 20/9/2022)

(Fig 13, Reverb Explosion, 20/9/2022)


(Fig 14, Sci-Fi Explosion, 20/9/2022)
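The layering step boils down to summing the processed copies and guarding against clipping, roughly like this sketch (my own illustration; the "reverb-heavy" copy here is faked with a scaled, zero-padded duplicate rather than a real reverb render):

```python
import numpy as np

def layer(*clips):
    """Sum several processed copies of a sound, then normalize the peak
    so the stacked layers don't clip past full scale."""
    length = max(len(c) for c in clips)
    mix = np.zeros(length)
    for c in clips:
        mix[: len(c)] += c
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Stand-ins: a noise burst as the dry explosion, plus a longer quieter copy
dry = np.random.default_rng(0).standard_normal(1000) * 0.5
tail = np.concatenate([dry * 0.6, np.zeros(500)])  # fake reverb-heavy layer
explosion = layer(dry, tail)
```

Each additional layer (the reverb copy, the sci-fi tail) would just be another argument to `layer`, with its own effects applied beforehand.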

A quick exercise this week was to use compression effects to help normalize the loudness across a piece of audio.

(Fig 15, Dynamic Compressor, 27/9/2022)

We also normalized the audio using a multiband compressor. Its effect window gave a clearer picture by representing each frequency band as a differently coloured wave, making it easy to identify which frequencies required altering. Using the sliders, the audio is then adjusted to keep each band within acceptable levels.

(Fig 16, Multiband Compressor, 27/9/2022)
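The idea behind compression can be sketched as a simple hard-knee gain rule, where levels above a threshold are reduced by a ratio. This is my own simplified model (attack/release smoothing and the per-band split of a multiband compressor are omitted; the numbers are illustrative):

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0):
    """Hard-knee compressor: reduce samples whose level exceeds threshold.

    A sample at level L > threshold comes out at
    threshold + (L - threshold) / ratio, keeping its original sign.
    """
    level = np.abs(signal)
    over = level > threshold
    out = signal.astype(float).copy()
    compressed_level = threshold + (level[over] - threshold) / ratio
    out[over] = np.sign(signal[over]) * compressed_level
    return out

# Loud peaks get pulled down; quiet samples pass through untouched
loud_and_soft = np.array([0.1, 0.9, -0.2, 1.0, 0.4, -0.8])
evened_out = compress(loud_and_soft, threshold=0.5, ratio=4.0)
```

After compression the loud and soft parts sit closer together in level, which is exactly why it evens out a recording before a final gain boost.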

Using everything learned over the past exercises, we were instructed to record an advertisement read and create a radio-style advertisement with sound effects. The exercise was more of a practical demonstration of the recording studio equipment.

After recording the ad read, the voice file was processed with an equalizer and some compression. Noise removal was not required for audio recorded at the school studio, as there was sufficient sound treatment and soundproofing to prevent any echo or sound leakage.

(Fig 17, Exercise 5 Multitrack, 4/10/2022)

The sound effects used in this exercise could simply be found online, as it was not a requirement to record our own. However, many of these sounds did not match the levels of the recorded audio or required some correction. Using a combination of equalizing, normalizing, reverb and some sound-altering effects, I was able to patch together an advertisement.

(Fig 18, Exercise 5 Advertisement, 4/10/2022)

PROJECT 1

The first major project in this module was to create an image of an environment using audio only. Recording our own audio was not a requirement, so the sounds could all be retrieved online.

We were given a list of environments to choose from, and I eventually settled on a daily-life kitchen setting. The environment I envisioned was cooking a simple dish in an outdoor-facing kitchen, with birds chirping and a few people walking past outside.

First, I listed every sound source I would expect to hear within the scene. I mapped out the environment the scene takes place in, as the position of the listener would dictate the volume of each individual component of the audio. The location itself was based on the actual layout of the kitchen in my own home, which made it easier to imagine.

Some of the sounds included:
  • Footsteps
  • Doors to the room
  • Stovetop
  • Cupboards and Drawers
  • Silverware
  • Fridge
  • Kettle
  • Air Conditioner
  • Dog
  • Passers-by
  • Chirping birds
The audio files were all sourced from online websites. As with most files from the internet, they required correction to fit the project. For example, the footsteps were recorded on wooden flooring, when I intended them to be on tile. The volume of the lower end was therefore decreased to make the flooring sound less hollow and more dense, like solid tiling.

(Fig 19, Editing Individual Sound Files, 6/10/2022)

Afterwards, all the sounds were arranged onto the multitrack. However, once compiled together, some sounds that seemed complete individually sounded out of place in combination. Clip-specific effects helped alleviate these issues.

(Fig 20, Clip-Specific Effect Rack, 10/10/2022)

Some clips, such as the birds chirping, had only around a minute of usable audio, but as the birds are present for longer than that, the clip had to be repeated. Conveniently, Audition automatically crossfades clips that overlap each other, creating a relatively seamless transition between two clips that are low in volume.

(Fig 21, Clip Crossfading, 11/10/2022)
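The crossfade can be approximated with an equal-power fade, as in this sketch (my own illustration; Audition's default crossfade curve may differ):

```python
import numpy as np

def crossfade(clip_a, clip_b, overlap):
    """Join two clips with an equal-power crossfade over `overlap` samples.

    cos/sin fade curves keep the combined power roughly constant
    through the transition, avoiding a dip in loudness.
    """
    fade = np.linspace(0.0, 1.0, overlap)
    fade_out = np.cos(fade * np.pi / 2)   # clip_a fades out
    fade_in = np.sin(fade * np.pi / 2)    # clip_b fades in
    mixed = clip_a[-overlap:] * fade_out + clip_b[:overlap] * fade_in
    return np.concatenate([clip_a[:-overlap], mixed, clip_b[overlap:]])

# Looping a one-chunk ambience: fade the end of one copy into the next
a = np.ones(1000)
b = np.ones(1000)
looped = crossfade(a, b, overlap=200)
```

For a looping ambience like the birds, the same clip is passed in twice, with the overlap hiding the seam between repeats.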

Another big part of the audio processing in this project was the relative distance between the different objects in the scene and the listener. This mattered less for distant elements such as the chirping birds, but foreground objects such as the stovetop and kettle required their volume to be set based on their proximity to the listener.

(Fig 22, Clip Volume Control, 14/10/2022)
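A useful starting rule for setting these relative volumes is the inverse-distance law, roughly -6 dB per doubling of distance. The distances below are hypothetical examples, not measurements from my actual kitchen layout:

```python
import math

def distance_gain_db(distance_m, reference_m=1.0):
    """Level drop under the inverse-distance law, in dB relative to the
    reference distance (about -6 dB per doubling of distance)."""
    gain = reference_m / max(distance_m, reference_m)
    return 20.0 * math.log10(gain)

# Foreground stovetop about 1 m away vs birds outside about 16 m away
stove_db = distance_gain_db(1.0)    # reference level, 0 dB
birds_db = distance_gain_db(16.0)   # roughly -24 dB quieter
```

The dB offsets give a starting point for each track's fader; ear-based tweaking from there still matters, since rooms and occlusion change the real falloff.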

Once all of this was completed, I listened through the entire mix; some sounds were still too soft overall or needed a little tweaking. I applied the changes through the track-level controls, and used the master mixer to make the whole mix slightly louder with gain.

(Fig 23, Entire Project 1 Mixdown, 14/10/2022)

(Fig 24, PROJECT 1 Audio, 15/10/2022)

PROJECT 2

This project was focused on voice recordings and dubbing over a scene. We were to pick a folklore story and dub over the material with our own voice recordings.

For my material, keeping with the theme of folklore stories, I picked one of Aesop's Fables that had a movie adaptation: "The Ant and the Grasshopper", which served as the base for "A Bug's Life", a Pixar movie from 1998.

First, I summarized the entire story to find an excerpt that could be used for the dubbing. As my voice is relatively deep, I wanted to avoid scenes involving the princess ant or any female characters, as the timbre of my voice would make it difficult to pass as a female voice no matter the editing.

(Fig 25, PROJECT 2 Story Summary, 20/10/2022)

Choosing between the different scenes of the movie, I eventually settled on one in which the main villain, Hopper, gives an intimidating monologue, with a few other characters present and only a minor line from the princess ant. Selecting the lines of dialogue with the most impact on the story, I skipped most of the unnecessary dialogue and replaced it with narration explaining the course of the scene.

(Fig 26, PROJECT 2 Narration Script, 21/10/2022)

After the scripting was done, it was time to record the dub. Unfortunately, I only had access to a simple Fifine K699B microphone for this recording. I did have a simple pop filter attached to the microphone, which helped alleviate some of the popping noises in the recording.

(Fig 27, Fifine K699B mic, 21/10/2022)

Unsurprisingly, after checking through the voice recording there was still noise in the audio. Despite turning off all fans and electrical appliances in the room, the soft whirring of the laptop could still be heard.

The levels of the vocal recordings were also all over the place. They had to be normalized quite heavily, as certain lines had more plosive phonetics, which resulted in higher spikes in the audio levels.

(Fig 28, Vocal Recording Processing, 22/10/2022)

For the villain character Hopper that I dubbed, I used an equalizer with a boosted low end to make the voice more booming, and added a pitch shifter to deepen the voice further and make it seem even more intimidating. For the pitch shift, anything more than 2 semitones higher or lower tends to make the voice sound completely unnatural; just a little adjustment goes a long way for the tone of a voice.

(Fig 29, Vocal Pitch Shifting, 23/10/2022)
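The "2 semitones" rule of thumb maps to a small frequency ratio, since each semitone multiplies frequency by 2^(1/12). The sketch below (my own illustration) shows the ratio and a crude resampling-based shift; unlike Audition's pitch shifter, this naive version also changes the clip's length:

```python
import numpy as np

def semitone_ratio(semitones):
    """Frequency ratio for a pitch shift of n semitones: 2^(n/12)."""
    return 2.0 ** (semitones / 12.0)

def crude_pitch_shift(signal, semitones):
    """Pitch-shift by resampling. Note: this changes duration too;
    a real pitch shifter adds time-stretching to keep length constant."""
    ratio = semitone_ratio(semitones)
    positions = np.arange(0, len(signal), ratio)
    return np.interp(positions, np.arange(len(signal)), signal)

# Dropping a voice by 2 semitones lowers every frequency by about 11%
deepen = semitone_ratio(-2)   # roughly 0.891
```

This is why small shifts stay believable: at -2 semitones the formants only move about 11%, while larger shifts push the voice outside the natural human range.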

Layering everything together in the mix wasn't as complicated as in Project 1, as there was no need to account for the spacing of the environment. However, the volume of the clips still had to be edited, as some lines were spoken louder or softer than others. In particular, the lines spoken by the princess ant were very soft and needed extensive gain changes. The tracks were separated into individual characters: one for Hopper, one for the princess, one for the narrator, and so on.


(Fig 30, PROJECT 2 Mixdown, 24/10/2022)

When editing the video, the audio of course has to match the timing of the characters' mouth movements. Using a video track really helped sync the audio to the movement, although not being able to cut or do any simple editing to the video was slightly inconvenient.

(Fig 31, Video Track Reference, 24/10/2022)

After the mixdown was exported, all that was left was to combine it with the reference video in After Effects to create the final dubbed video.

(Fig 32, PROJECT 2 Final Video, 24/10/2022)

PROJECT 3

The final project involved recording a full suite of sounds to dub over the sound effects in a game clip. I picked a clip from the fantasy game "Child of Light", as I felt I had the most suitable props to create the required sounds.

Similar to Project 1, I started by watching through the sample video and listing the audio assets I would need to record:
  • Footsteps
  • Flying Sprite
  • Grass Rustling
  • Monster Calling
  • XP collection chime
  • Action Select
  • Target Select
  • Hit Sounds
As I happened to return home for a short holiday, I decided to record most of the audio there, where I had access to better recording equipment. This setup included an AT2020 microphone combined with a Scarlett Solo interface.

(Fig 33, AT2020 microphone, 14/11/2022)

Some of the props that I used to create the relevant sounds include the following:

The guitar was the main workhorse for most of the sounds in this project; it is capable of outputting an extreme variety of sounds beyond just plucked strings. For the footsteps, I tapped my fingernails on the sides of the guitar body to emulate a walking sound. Dragging a pick along the rough brass strings created a screeching noise that was used for the monster call. A simple original background music track was, of course, also done with the guitar, with small harmonics played as user interface elements.

(Fig 34, Ibanez AW54CE, 14/11/2022)

A pair of keys was used to create jingling sounds, especially for collecting light orbs, which are associated with the jingle of coins being collected. Hit in different ways, they also produced ringing sounds or more solid "clangs".

(Fig 35, A pair of keys, 14/11/2022)

One of the more unorthodox props was a metal fixture used for securing curtain poles. Its solid metal construction and shape meant it could create a lot of metal hit sounds in different tones, and striking different surfaces with it, such as wood or glass, produced different tones and resonance.

(Fig 36, Metal Fixture, 14/11/2022)

This glass jar had a solid body and lid, which made it suitable for creating noises for this particular game. Hitting the lid with the metal piece described above helped create the sound of the character's sword slamming against the ground. The body also gave a nice resonant sound when tapped with metal objects.

(Fig 37, Glass Jar, 14/11/2022)

Most of the audio files were recorded in FL Studio, as that was the DAW installed on that particular computer. The recordings were left unedited and then ported into Audition on a different device for processing.

Much like the previous projects, the individual audio clips were all given basic compression and equalization, then further edited with special effects if required. The sound effects for the monsters and combat in particular required effects that almost entirely changed the profile of the sound.

The music tracks I recorded mostly required reverb, which makes the guitar sound fuller. The monster sounds had greatly exaggerated EQ values to make them sound foreign and weird, and the sword-hit sounds had their low end boosted along with pitch shifts.

(Fig 38, Editing Individual Audio Clips, 20/11/2022)

Following the techniques used in Project 2 for editing audio along to video, a video track came in handy here. Layering the audio clips together, I ensured that the volume of each element was controlled according to its priority: footsteps would be relatively quiet compared to the screech of a monster, which should demand and draw attention.

(Fig 39, Editing Project 3 Mix, 24/11/2022)

There was still a small lack of sounds here and there; to fill these gaps, some of the existing sounds were heavily altered with effects to make them completely different from their base. This allowed one sound to fill more than one function.

Some audio files were meant to be used repeatedly, such as those for selecting and scrolling around objects. Some pitch shifting to create variations in the sound helped prevent these from becoming too repetitive and annoying.
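One way to script this kind of variation (a hypothetical helper for illustration, not how I actually did it in Audition) is to pick a small random semitone offset per playback and convert it to a playback-speed ratio using the same 2^(n/12) relationship:

```python
import random

def playback_ratio(spread_semitones=0.5, rng=None):
    """Random playback-speed ratio within +/- spread_semitones,
    so a repeated UI blip (select/scroll) never plays back identically."""
    rng = rng or random
    offset = rng.uniform(-spread_semitones, spread_semitones)
    return 2.0 ** (offset / 12.0)

# Eight playbacks of the same select sound, each slightly detuned
ratios = [playback_ratio(rng=random.Random(seed)) for seed in range(8)]
```

Game engines commonly apply exactly this trick at runtime; here the variants were simply rendered as separate clips instead.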

After exporting the mixdown, it was combined with the silent video in After Effects to create the final output.

(Fig 40, PROJECT 3 Final Video, 2/12/2022)


Reflection

WEEK 1: As it was the first week of the semester, I was curious as to how deep would this module explore the aspects of sonic design, and whether it would extend into audio engineering.

WEEK 2: A simple warmup exercise was given to us this week. It served as a sort of introduction to sound design terminology. As I had a prior interest in music, I found it relatively easy to carry that knowledge into this module.

WEEK 3: Creating the soundstage of various environments was something that I considered the first real step into the activities of this module. On paper, it is easy to identify the different traits that an environment confers onto a sound, but actually applying it onto a sound with effects and balancing them is something that can't be picked up in a day.

WEEK 4: The exercises for this week were quite light and simple, though the multiband compression practice was quite difficult at times, as I couldn't pinpoint exactly which frequencies were too loud.

WEEK 5: We were to begin planning for Project 1 in this week. From its description, the scale of this assignment seemed pretty large. 

WEEK 6: This week we had a visit to the recording studio on campus. As someone interested in this kind of tech and in band recordings, I was quite excited to see a professionally built recording environment. It made the recording process for this week's exercise so much easier.

WEEK 7: I was beginning to wrap up work on Project 1. The overall audio quality wasn't completely satisfactory, as I felt the pacing of changes in some clips didn't match that of others.

WEEK 8: It was Independent Study Week, therefore no lessons were held. I continued to work on Project 2 by recording the vocal narration for the video. I was pleasantly surprised by how quiet everything was once I prepared the room for recording. 

WEEK 9: Pacing the recorded narration to the source material (the movie) was quite difficult. As the narration in between dialogue did not match the pace of the characters' conversation, I had to make some jump edits across scenes to pace the video properly, which was mildly inconvenient.

WEEK 10: Submission for Project 2 was this week. As I had delayed recording some parts until last week, it was a bit of a rush to get everything completed. I was quite satisfied with my voice acting, considering it was quite literally my first attempt at it.

WEEK 11: Prepping for the start of Project 3, I was still working on some of the assignments from different modules. When deciding on the game clip that I would use for my assignment, I was quite torn between a game that I was personally familiar with, and one that logically seemed easier to execute.

WEEK 12: Having returned home for a short while, I took advantage of the audio equipment I had accumulated there over the years to get a better recording environment for all the audio clips I required for Project 3. I was quite happy to use this set of gadgets again after quite an extended period.

WEEK 13: I worked on stitching the various audio clips I had recorded to the video of the game demo. Matching the tempo of certain sounds, like footsteps, to the character in the video was a little frustrating; the timing always seemed to be slightly off.

WEEK 14: This week was submission week, so I was in quite the rush completing all the assignments due. While the work itself wasn't terribly complicated, the final stretch of this project felt more like a test of time management.

END OF SEMESTER REFLECTIONS

Experience

Overall, this module was quite a pleasant experience, as it was my first time working with audio in this program. Understandably, the module catered to the basics of working with audio, which were quite easy for me to grasp given my hobbies. As the semester went on and the projects became more elaborate, it became apparent how much work is required to polish any audio clip to a level that is pleasing to listen to.

Observation

Much of what most people associate with the "quality" of sound more or less boils down to its subjective traits. A recording that sounds muddy would often be labelled as poorly produced, but would sound right at home in an underwater setting. Likewise, many effects applied to voices can shore up imperfections or further elevate a voice actor's performance.

Findings

When recording audio for Project 3, I came to realize that sounds don't have to be replicated with similar objects in real life; they can come from completely different and almost unrelated sources, such as using an instrument to emulate the sound of a creature. In some cases this is even preferable to replicating the sound with accurate objects, as the real thing can be quite underwhelming in person.


END OF SUBMISSION
