Studio Recording and Live Performance

Acoustics

Natural reverb occurs when sound bounces around a room, leaving a sustained reverb tail in the music. Hard surfaces reflect soundwaves, which is what causes natural reverb, so recording studios are usually treated with pads or other soft surfaces that absorb the soundwaves and stop them reflecting. In a studio session you often want to remove the room's natural reverb so that artificial reverb can be added later in the mixing process. Health and safety is less of an issue in the studio than at a live performance, because less equipment needs to fit in the room, although cables must still be kept neat and tidy to avoid tripping hazards. In live performance the natural reverb will be longer, as the venue must be larger to fit the audience, and there is usually less acoustic treatment due to budget, which means the reverb cannot be controlled the way it is in the studio. Reverb can also work in live performance's favour, as it adds a sense of space to the music.
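The link between venue size, absorption and reverb length can be made concrete with Sabine's reverberation formula, RT60 = 0.161 × V / A, which estimates how long sound takes to decay by 60 dB from the room volume V (in m³) and the total absorption A (in m² sabins). A minimal sketch, where the room figures are made-up illustrations rather than measurements:

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Estimate reverb time (seconds) with Sabine's formula: RT60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_sabins

# Small, heavily treated studio: low volume, lots of absorption
studio = rt60_sabine(60, 40)      # ~0.24 s: very short reverb tail

# Large, mostly hard-surfaced venue: high volume, little absorption
venue = rt60_sabine(5000, 300)    # ~2.7 s: long, uncontrolled reverb

print(studio, venue)
```

This matches the point above: the bigger, less treated space has a reverb tail roughly ten times longer than the padded studio.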

Live vs. Studio

In studio recording you have far more time to get the takes right, so you can capture a near-perfect recording of your music, which can then be mixed and published. This is why songs released on Spotify sound so polished: the artists have the budget to keep working in the studio until they get the recording they want. When performing live in front of an audience you cannot repeat the music every time you make a mistake; instead you must move on, because stopping in front of an audience would be unprofessional.

Phase

Microphones pick up the soundwaves around them, but when multiple microphones are placed at different distances from the same source, such as an amplifier, the soundwaves take longer to reach the furthest one. One microphone therefore picks up the signal slightly later than the other, so the two signals play back out of sync. This effect is called phase. Phase problems can make the combined output from the microphones sound much thinner, so you must be aware of phase when setting up microphones for both live performance and studio recording. There are three phase relationships to consider: in phase, slightly out of phase, and fully out of phase. In phase means both microphones' signals are aligned with no delay at all, so the combined output is clear and loud. A slight phase offset delays one signal and gives the sound a thinner texture due to the misaligned inputs; this is similar to the chorus effect often used on guitars. When a signal is fully out of phase it cannot be heard at all, because the two waves are exact opposites of each other and cancel out completely. When recording vocals you generally want the signals in phase, because this gives the signal a thicker and clearer texture.
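The cancellation described above can be demonstrated by summing two sine waves in code. This is a minimal sketch, assuming a 440 Hz tone at a 48 kHz sample rate (both values chosen purely for illustration): delaying one copy by half a wave period inverts it, and the sum drops to silence.

```python
import math

SAMPLE_RATE = 48_000  # samples per second
FREQ = 440.0          # test tone frequency in Hz

def sine(n, delay_samples=0.0):
    """Generate n samples of a sine wave, delayed by delay_samples."""
    return [math.sin(2 * math.pi * FREQ * (i - delay_samples) / SAMPLE_RATE)
            for i in range(n)]

def peak(signal):
    """Loudest absolute sample value in the signal."""
    return max(abs(s) for s in signal)

n = 480  # 10 ms of audio
mic_a = sine(n)

# In phase: no delay between the two microphones, the waves reinforce each other
in_phase = [a + b for a, b in zip(mic_a, sine(n))]

# Fully out of phase: a half-period delay flips the wave, so the sum cancels
half_period = SAMPLE_RATE / FREQ / 2  # about 54.5 samples
out_of_phase = [a + b for a, b in zip(mic_a, sine(n, half_period))]

print(peak(in_phase))      # roughly double the single-mic level
print(peak(out_of_phase))  # effectively zero: the signals cancel
```

In between these two extremes, a small fractional delay causes partial cancellation, which is the "thinner", chorus-like sound mentioned above.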

Signal Chains

A signal chain is used to convert sound into electrical energy, which we can then manipulate and output as audio through speakers. Understanding the signal chain helps sound engineers identify issues in live sound and make sure everything in the chain is linked together in the right order. The chain starts with acoustic energy, created when an instrument is played and produces soundwaves. The soundwaves are picked up by a microphone, which turns the acoustic energy into an electrical signal. This signal is sent into the mixing desk, which amplifies the audio signal and sends it to an audio interface; the audio interface then converts the electrical signal into binary code. The binary code can then be read by a computer, because computers can only process digital data and cannot take in a pure soundwave. The sound can be manipulated on the computer in software such as Logic Pro X or Ableton and then sent back into the audio interface, which turns the binary code from the computer back into electrical energy that is output through the speakers. When the sound comes out of the speakers it once again becomes acoustic energy, which is what it started out as; only now the signal has been manipulated as binary code and amplified through the speakers.
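The audio interface's conversion between an electrical signal and binary code can be sketched as quantisation: each sample of the waveform is rounded to the nearest of a fixed set of integer levels, then scaled back for playback. This is a simplified illustration, assuming 16-bit samples (a common bit depth) and a signal normalised to the range -1.0 to 1.0:

```python
import math

BIT_DEPTH = 16
LEVELS = 2 ** (BIT_DEPTH - 1)  # 32768 levels for each polarity

def to_binary(sample):
    """Quantise an 'electrical' sample (-1.0..1.0) to a 16-bit integer code."""
    return max(-LEVELS, min(LEVELS - 1, round(sample * (LEVELS - 1))))

def to_electrical(code):
    """Convert the integer code back to a -1.0..1.0 signal for the speakers."""
    return code / (LEVELS - 1)

# One cycle of a 'soundwave' sent through the interface and back
wave = [math.sin(2 * math.pi * i / 64) for i in range(64)]
codes = [to_binary(s) for s in wave]           # interface -> computer (binary code)
restored = [to_electrical(c) for c in codes]   # computer -> interface -> speakers

error = max(abs(a - b) for a, b in zip(wave, restored))
print(error)  # quantisation error: tiny, far below what the ear can detect
```

The round trip is not perfectly lossless, but at 16 bits the rounding error per sample is so small that the restored wave is audibly identical to the original.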