Sonic Design - Weekly Journal


Jump Links

  • Week 1 - Synchronize all tracks
  • Week 2 - Create sounds in various scenarios
  • Week 3 - Exploring new effects
  • Week 4 - Panning & Environmental sound
  • Week 5 - Introduction to Project 1
  • Week 9 - Recording
  • Week 12 - Game Audio


Week 1 (26/09/2024)

Today was our first sonic design class of the semester, where we were introduced to the concept of sonic design and received the assignment brief. Since I have experience with musical instruments and music editing, I feel more confident in handling this subject as I am more sensitive to sound. In this class, we learned how to use a parametric equalizer to adjust the bass and treble of audio in Adobe Audition, which is fundamental but still a bit tricky for me.


Lecture Notes

  • To add "Parametric Equalizer" - At the left panel > Effects Rack > Filter and EQ > Parametric Equalizer
  • In "Parametric Equalizer", the left side represents bass, while the right side represents treble
  • To add a multitrack session - At the left panel > Multitrack > Drag the audio into the column
  • To add an effect in multitrack - click the "fx" button

Class Activity

Given different audio tracks, we were tasked with adjusting the bass and treble of each one in Adobe Audition so that they sound the same as the "flat" audio track. All screenshots were taken at 0:00.

Figure 1.1 EQ1


Figure 1.2 EQ2


Figure 1.3 EQ3


Figure 1.4 EQ4



Figure 1.5 EQ5


Figure 1.6 EQ6


Figure 1.7 Filter 1


Figure 1.8 Filter 2



I'm not sure whether I tuned the audio perfectly, but comparing the original track with my adjusted versions, they sound similar to my ear, though minor adjustments could still improve them.


Reflection

After this class, I have a much better understanding of how Adobe Audition works, and this exercise has motivated me to keep learning and exploring it further. Besides learning the basics of adjusting bass and treble with the parametric equalizer, I was amazed by how minor changes can lead to significant differences. I hadn't realized before that even small tweaks to the bass and treble could completely change how a sound fits into different scenarios. Additionally, I was glad to have prior experience with musical instruments, which has made me more sensitive to sound. This background made the exercise easier for me.

back


Week 2 (3/10/2024)

Lecture Notes

In our online lecture, we were introduced to sonic design. Before that, it was important to understand the physics of sound and the biology of how sound is transmitted from our ears to our brain.


Figure 2.1 Ear structure


In biology, sound enters our ear as vibrations in the air, which travel through the ear canal and cause the eardrum to vibrate. These vibrations are then converted into electrical signals by the inner ear and sent to the brain. This is how sound is captured and interpreted by our brain.


Figure 2.2 States of matter


In physics, sound waves consist of three properties, which are wavelength, frequency and amplitude. 

  • Wavelength: The distance between two consecutive peaks or troughs in a sound wave.
  • Frequency: The number of sound wave cycles that pass a point per second, measured in hertz (Hz); it determines the pitch of the sound.
  • Amplitude: The height of the sound wave, representing the loudness or intensity of the sound.

Sound travels fastest through solids, slower through liquids, and slowest through gases, because particles in solids are more tightly packed, allowing sound waves to transmit more efficiently. Waves can be split into two types: transverse waves and longitudinal waves. Sound waves are longitudinal, made up of alternating compressions and rarefactions (expansions); the distance spanned by one compression-rarefaction cycle is the wavelength.
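These properties are tied together by a simple relation: wavelength = speed of sound in the medium ÷ frequency (λ = v / f). Here is a quick sketch of my own to check this (the speed values are approximate textbook figures, not from the lecture):

```python
# Wavelength (m) = speed of sound in the medium (m/s) / frequency (Hz)
SPEED_OF_SOUND = {  # approximate values at room temperature
    "air (gas)": 343.0,
    "water (liquid)": 1480.0,
    "steel (solid)": 5960.0,
}

def wavelength(frequency_hz: float, medium: str) -> float:
    """Return the wavelength in metres of a tone in the given medium."""
    return SPEED_OF_SOUND[medium] / frequency_hz

# The same 440 Hz tone has a very different wavelength in each medium,
# since sound travels fastest in solids and slowest in gases:
for medium in SPEED_OF_SOUND:
    print(f"{medium}: {wavelength(440, medium):.2f} m")
```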


Figure 2.3 Longitudinal waves



Class Activity

In our physical class, we were taught how to create different sounds for various environments and scenarios. It was very interesting to learn that sounds in different scenes can be adjusted simply by using a parametric equalizer, and that echo can be controlled through reverb. This exercise gave me a better understanding of how changes in frequency, bass, and treble affect sounds in different settings, and how even minor adjustments can make a big difference. I am also glad that my adjustments were mostly correct and acknowledged by the lecturer.


Figure 2.4 Phone call



Figure 2.5 Closet



Figure 2.6 Walkie-talkie



Figure 2.7 Bathroom


Figure 2.8 Stadium


Reflection

This lesson was fun for me since it was completely new, and I didn't know we could manipulate sounds like this before. The audio tracks were originally the same, but with a few adjustments to the treble and bass, they could fit into different scenes, which amazed me. This lesson has given me motivation to learn more about sound editing.

back


Week 3 (10/10/2024)

Lecture Notes

During our online lecture, we were given notes on the essential techniques used in sound design, which are layering, time stretching, pitch shifting, reversing, and mouthing it.

  • Layering - combining two or more sound tracks
  • Time stretching - stretching the length of the audio without changing the pitch
  • Pitch shifting - changing the pitch of the sound without changing the actual length
  • Reversing - reversed audio can give a weird, unnatural sound
  • Mouth it - record it yourself if you can't find the sound that you want!
How to find sound effects?
  • Video Copilot
  • Epidemic Sounds
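As a rough illustration of two of these techniques, here is a sketch of my own in Python (not how Adobe Audition implements them): reversing simply flips the sample order, and a naive pitch shift can be done by resampling, though unlike Audition's pitch shifter this crude version also changes the length.

```python
def reverse(samples: list[float]) -> list[float]:
    """Reversing: play the samples back-to-front for an unnatural sound."""
    return samples[::-1]

def naive_pitch_shift(samples: list[float], factor: float) -> list[float]:
    """Naive pitch shift by resampling: factor > 1 raises the pitch.
    Note: this also shortens/lengthens the audio, unlike a true
    pitch shifter, which keeps the original duration."""
    n = int(len(samples) / factor)
    return [samples[int(i * factor)] for i in range(n)]

clip = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
print(reverse(clip))                 # the waveform, flipped
print(naive_pitch_shift(clip, 2.0))  # one octave up, half as long
```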

Start with Atmosphere:

  • Begin by adding ambient sounds such as wind, ocean waves, and city traffic to set the scene.
  • Match ambient sounds to the environment (e.g., wind and waves for ocean shots, forest sounds for wooded areas).

Focus on Key Subjects:

  • Identify the main subject of each scene and base sound design around it.
  • Example: For an elephant statue, use elephant sound effects; for waves crashing, use ocean sounds.

Layering and Limiting:

  • Avoid overloading the scene with too many sounds. Typically, use no more than three sound effects per scene.
  • Sometimes one well-chosen sound effect can suffice.

Dynamic Movement:

  • Adding sounds to dynamic actions requires precision and perfect timing.
  • Use subtle sounds and ensure they are perfectly synchronized with the visual movements.

Volume and Clarity:

  • Ensure sound effects are loud enough to be heard clearly, even on devices with low volume settings.
  • Edit sound at 60-70% volume to ensure it is audible for most viewers.


Class Activity

In today’s physical class, we were introduced to various new audio effects, such as stretch, pitch bender, reverse, and more, to make sounds more interesting. For our first activity, we were tasked with creating our own version of an explosion sound effect using different effects. While editing this explosion, I applied pitch down, stretch, reverse, reverb, a parametric equalizer, and a chorus effect.


Figure 3.1 Explosion


For our exercise, we were tasked with creating firecracker and punching game sounds using suitable effects. For the firecracker sound, I used a parametric equalizer to increase the treble and slightly lower the bass. To create a spatial effect, as if the firecracker is in the air, I added reverb, echo, and a bit of delay to prolong the sound. For the punching sound effect, I used a parametric equalizer to adjust the sound, making the punches slightly different to represent punches from the left hand and the right hand.


Figure 3.2 Firecracker


Figure 3.3 Punch punch punch


Reflection

This class has inspired me to explore more interesting effects that may be useful for our upcoming assignments. Each effect can bring significant changes to the sound, making it suitable for different purposes and scenarios. After this lesson, I feel more motivated to create unique sound effects, and I realize that I can also apply these techniques in my game development.

back


Week 4 (17/10/2024)

In today's class, we learned about panning, which can mimic the sensation of sound moving around our ears. I found it interesting because it can be used with video to immerse the audience in the scene. Other than panning, we can also adjust the volume and track EQ to enhance the panning effect. 

Lecture Notes

  • To make panning effect - adjust "Stereo balance" at the multitrack
  • Multitrack > Show envelopes > Enable "Pan"
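Under the hood, a stereo pan just rebalances the gain of the left and right channels. Here is a sketch of my own using a constant-power pan law (an assumption on my part; I don't know which law Audition's stereo balance uses internally):

```python
import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Pan a mono sample into stereo.
    pan = -1.0 is hard left, 0.0 is centre, +1.0 is hard right.
    A constant-power law keeps the perceived loudness steady as
    the sound sweeps across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Sweeping a sound from left to right, like the jet plane exercise:
for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(1.0, pan)
    print(f"pan {pan:+.1f}: L={left:.3f} R={right:.3f}")
```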

Figure 4.1 Class exercise



Figure 4.2 Jet plane


Figure 4.3 People talking


For our exercise, we were required to create sound effects based on the given images. The images look high-tech and are often seen in sci-fi movies or films. However, since I rarely watch this genre, I wasn't very familiar with applying the most suitable sound effects for these environments.


Figure 4.4 Environment 1


In this environment, I identified several sound sources. Firstly, the tree capsule in the center would produce bubbling sounds from respiration and the rustling of leaves. Additionally, there are desktops with screens turned on, and machines seem to be humming in the surroundings. Therefore, I used these as the main sound sources for this environment. Below is a screenshot showing how I edited the sound for this environment.


Figure 4.5 Multitrack in Adobe Audition


I used effects such as the parametric equalizer, panning, reverb, and others, which I learned in previous classes, to adjust the sound effects for different events. Below is the outcome of my edited sound.


Figure 4.6 Outcome for Environment 1


The next environment is a high-tech factory with a large laser. The identified sound sources include the laser beam emitting from the machine, the muffled sound of the machine, some soft talking, and buzzing sound from the blue holographic screens.


Figure 4.7 Environment 2


I also played around with effects like the parametric equalizer, chorus, and panning to make the sound better suited to the environment. Below is the outcome for Environment 2.


Figure 4.8 Multitrack in Adobe Audition


Figure 4.9 Outcome for Environment 2


Reflection

After this class, especially with the lessons on creating panning effects and exercises focused on environmental sounds, I feel more confident in creating spatial sound for specific environments or scenes. I’ve learned to identify sound sources and make adjustments that enhance immersion. By applying the various effects we learned in previous lessons, I was able to produce a presentable scenario sound effect. These exercises have been incredibly useful, and the lectures have guided me from having no experience to being able to create environmental sounds, which is a huge milestone for me. Therefore, I’m now ready to take on the challenge of Project 1.

back


Week 5 (24/10/2024)

In class, we were introduced to Assignment 1. For this project, we are required to create a scene without dialogue or background music. The scene can depict daily life, a city walk, wildlife, or any scenario that includes various sounds. We were also given some tips and references on using multitrack for the project. Additionally, we used our class time to come up with a story for our project. For my project, I decided to create a scene of my holiday life at home, which involves binge-playing video games and eating instant noodles.


Lecture Notes

  • Label each track with colours
  • The sound level should not exceed the limit (to avoid clipping)
  • A hard limiter can be added last to cap the amplitude
  • Multitrack > Track > Add Stereo Bus Tracks - to group tracks together
  • Mastering - used for adjusting your overall audio track
  • Export in 48000Hz, 16-bit, wav format
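The export settings in the last note can also be reproduced outside Audition. Here is a minimal sketch using Python's standard wave module (the test tone and filename are just placeholders of my own):

```python
import math
import struct
import wave

SAMPLE_RATE = 48000   # 48000 Hz, as specified for the project export
SAMPLE_WIDTH = 2      # 2 bytes per sample = 16-bit

def write_test_tone(path: str, freq: float = 440.0, seconds: float = 1.0) -> None:
    """Write a mono sine test tone as a 48 kHz, 16-bit WAV file."""
    n_samples = int(SAMPLE_RATE * seconds)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(SAMPLE_WIDTH)
        wav.setframerate(SAMPLE_RATE)
        for i in range(n_samples):
            # Sine wave at half amplitude, scaled to the 16-bit range
            value = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            wav.writeframes(struct.pack("<h", value))

write_test_tone("test_tone.wav")
```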


Reflection

In this class, I learned more about the features in Adobe Audition that we can utilize for our project, especially when editing multiple tracks. It seems like a challenging task, as demonstrated by a senior's project, where even a 2-3 minute audio scene used around 40-50 tracks with different filters and modifiers. Additionally, coming up with a storyline with audio descriptions, including foreground and background sounds, is not easy. However, using AI gave us a lot of inspiration and helped smooth out our storylines. My story has been approved by the lecturer, but adjustments such as adding background elements and tweaking the wording to make it less bombastic are still needed. Now, I'm ready to start working on my sonic design.

back


Week 9 (21/11/2024)

In today's class, we had a lecture on recording using a microphone and learned techniques to adjust the audio of our recordings, such as noise reduction, autogating, and using a de-esser, among others.


Lecture Notes

  • Omnidirectional: Captures sound equally from all directions; ideal for environmental sounds.
  • Cardioid: Focuses on sound from the front, reducing noise from sides; commonly used for vocals and presentations.
  • Hypercardioid (Shotgun Mic): Highly directional, captures clear audio from the front while rejecting most side and rear noise.
  • Figure-of-Eight: Captures sound from the front and back while rejecting sides; rare, mostly used in studios for interviews or dual-source recording.
  • Effect > Noise Reduction > Noise Reduction (process)
  • In Waveform > Highlight the clip you want to adjust > Adjust volume/noise reduction
  • Amplitude and Compression > Dynamics
  • Autogate - adjusts the volume threshold; Compressor - applies compression; etc.
  • DeEsser - reduces harsh "s" sounds
  • Multitrack > Tracks > Add video track
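The autogate idea from the notes can be sketched as a very simple noise gate: samples whose absolute level falls below a threshold are muted. This is my own much cruder version of what Audition's Dynamics effect does, with no attack/release smoothing:

```python
def noise_gate(samples: list[float], threshold: float) -> list[float]:
    """Mute any sample whose absolute level is below the threshold.
    Real gates (like Audition's autogate) add attack/release times
    so the gating doesn't click, which this sketch omits."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Quiet background hiss is silenced; the louder voice peaks pass through:
recording = [0.02, 0.01, 0.6, 0.8, 0.03, -0.7, -0.02]
print(noise_gate(recording, 0.05))
```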

Figure 9.1 Sample voice


Figure 9.2 Adjusting dynamics


Figure 9.3 Adjusting noise reduction


back


Week 12 (12/12/2024)

Lecture Notes

  • Event mapping - for everything that occurs in the game, determine all possible actions that would require a sound or a change in sound state, for instance triggers, cues, and events.
  • Categories to consider: action, character, music, reaction, environment
  • Record multiple times with multiple variations
back
