I explain things about phase that I wish I had known way sooner. This is a much more advanced topic, so I recommend that you have some experience and understanding of phase and its general uses before taking this in, but here it goes:
The video is here: https://www.youtube.com/watch?v=0JPBwJEh-cc It has visuals and narration with actual examples, but I think many of you will appreciate the script too. (I do add a bit more in the video that is not in the script.)
Hey, Composing Gloves here. What are the applications of phase in audio? Is all phasing equal? No, of course not! And that is what we are going to take a look at. Questions like “How wide can I make my vocal?” and “Will this processing destroy my stereo image?” can be easily solved with a correct understanding of phase. It is expected that you have some knowledge of phase or have watched the video on phase in the Sound and Synth Basics series. Phase has some applications that many people probably aren’t aware of, especially in mixing and sound design. You have probably been worried about “mono compatibility” at some point, which is just code for “will my mix suffer from phasing?” That is, when you sum together your left and right channel, what do you get? It’s called being summed to mono, but it’s not “real” mono because the signal started out in stereo. Real mono signals are actually mono, just like how mono signals being panned are not real stereo. Phasing is the source of both unnatural and natural sounds. It is natural because it is a natural phenomenon: the speed of sound is faster or slower in various mediums, and you always have a medium between you and a sound source. Phase is also the source of delay FX processing such as comb filtering and chorusing, as well as many more. Understanding it is a must for the audio engineer.
Firstly, mono simply means that the information in your right and left channels is the same. Stereo means that one channel has a different signal than the other, and this creates a sense of “space” using inter-aural intensity cues, which is covered in my Critical Listening series. Phase is a signal’s relationship to time. Phasing occurs when two signals begin to cancel and sum with each other. This can occur in the air as well as in your DAW, and the results are quite different, which we will consider a little later. Phase is generally most noticeable with very similar signals, but this is usually as far as some people take it. It should also be noted that in complex waveforms phase is built into the natural sound by the instrument itself; this is a kind of acoustic phase.
Let's start with some basics. First we have a sine wave playing a frequency. Then we introduce another sine wave of the same frequency. Both sine waves start at the same point in their cycle, in this case at 0 degrees. Because they are the same frequency they oscillate at the same speed, meaning their phases will remain synced. At this point we can do one of two things. The first is to alter the starting point of one of the sine waves. Because they are still the same frequency, they will partially cancel each other out but remain in sync. This creates a static phase relationship, and in more complex tones it can be used to color sound. Also consider how the sine waves sum: there will almost always be an increase or decrease in amplitude depending on the phase relationship (this is a scientific relationship, not a love relationship XD). With two simple frequencies we can easily adjust the volume of each track to half its original amplitude to obtain the amplitude of one sine wave by itself, but consider the additional math that must be performed for this to happen, and that this only works in cases where we have single frequencies and our processing is incredibly accurate. If we use a more complex wave and more processing, this can easily become impossible to do without complex analysis programs. We will look at this more in a bit. If we cause our signals to be 180 degrees out of phase, they will completely cancel each other out because when summed they will always equal 0. The second thing we can do is change the frequency of one of the waves. This will cause a “warble” effect because one cycle will be faster than the other, so they are not synced up, and we will get moments where they are out of phase and moments where they are in phase. Using this relationship we can tune the two frequencies against each other to achieve different results in sound design; this is often referred to as a Reese.
It should be noted that up until now the amplitude of both our waves has always been the same; this results in a perfect cancellation or summation when 180 degrees out of phase or in phase. If one of the amplitudes is greater or smaller than the other, then they will never perfectly cancel and we will always have a little bit of sound left over.
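Those summation rules are easy to verify in plain Python. This is a minimal sketch (the function name and sampling setup are my own, nothing standard): summing two equal, in-phase sines doubles the peak, a 180-degree offset cancels completely, and mismatched amplitudes leave a residual.

```python
import math

def summed_peak(phase_deg, amp2=1.0, freq_ratio=1.0, n=4096):
    """Peak amplitude of sine1 + sine2, sampled over four cycles.
    sine1: amplitude 1.0, phase 0. sine2: amplitude amp2, phase
    offset phase_deg, frequency ratio freq_ratio (1.0 = same pitch)."""
    phase = math.radians(phase_deg)
    peak = 0.0
    for i in range(n):
        t = 2 * math.pi * 4 * i / n  # four cycles of sine1
        s = math.sin(t) + amp2 * math.sin(freq_ratio * t + phase)
        peak = max(peak, abs(s))
    return peak

print(round(summed_peak(0), 2))              # in phase: peaks sum -> 2.0
print(round(summed_peak(180), 2))            # 180 deg out: total cancellation -> 0.0
print(round(summed_peak(180, amp2=0.5), 2))  # unequal amplitudes: residual -> 0.5
```

Setting `freq_ratio` slightly off 1.0 (say 1.01) reproduces the warble: the phases drift in and out of alignment, which is the basis of the Reese sound.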
Also consider changing both the phase and frequency. Now things are getting complicated, because higher frequencies are so fast that phasing can create acoustic modulation, meaning the summation of the phases can create new side-bands. This is normally a non-issue because in a sound, higher frequencies almost always have much lower amplitude than lower frequencies, and are usually far more abundant. This means most of the time side-bands created this way will be very soft and similar to phased noise, but at much higher amplitudes they can become very apparent. So we can get a huge variety of sounds from phase, from acoustic as well as electrical results. We could talk about this and its potential applications for a while, but I will just request that you go and personally mess around with this principle with an additive synth to get a feel for it.
Ok, let's step it up a little bit. We can move from single tones to the harmonic series. In this case we can control the phase of many harmonics at the same time. I will use Sytrus to demonstrate this. The top row is the amplitude of each harmonic; the bottom row is the phase of each harmonic. Please note the amplitude now decreases as we progress up the series. This has many effects, and we should also take note of several things. The first 10 or so harmonics play a huge role in a sound, as they are given the most amplitude in most sounds. We are also ignoring flux, which is the rate of change, meaning we are keeping our amplitude, phase and frequency constant; they do not change over time, to keep things simple. This also means we are ignoring the transient part of sounds, which is a major factor in sound. We can now take advantage of phase to alter the shape of a wave from within itself. We are working with the basic harmonic-series waveforms: saw, square, sine and triangle. We notice right away that altering only the phase of the higher harmonics does little to nothing to the audible sound. Lower harmonics have a much larger impact, but it is still surprising how little audible effect there is despite such a large waveform change. That is because we are dealing with a single sound. It's one reason why linear-phase EQs may have a small impact on a sound, but not very much. The impact can be much more important if we have several adjacent harmonics with very high amplitude; we see that a change in phase there has a much larger impact. This is why linear-phase crossover filters can matter much more in a multiband compressor, where harmonics may receive a boost, causing much more color to occur if phases are shifted.
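Here is a minimal sketch of why the audible change is so small. Scrambling the per-harmonic phases radically changes the waveform's shape, but leaves the level of every harmonic, and therefore the signal's overall energy, untouched. The function names and the specific scrambled phases are mine; the amplitudes follow the saw-like 1/k series described above.

```python
import math

def harmonic_wave(phases, n=2048):
    """One cycle of a wave built from 8 harmonics with saw-like
    amplitudes 1/k and the given per-harmonic phases (radians)."""
    return [sum(math.sin(k * 2 * math.pi * i / n + phases[k - 1]) / k
                for k in range(1, 9))
            for i in range(n)]

def rms(wave):
    """Root-mean-square level, a stand-in for the signal's energy."""
    return math.sqrt(sum(s * s for s in wave) / len(wave))

saw = harmonic_wave([0.0] * 8)                        # phases aligned: saw shape
shifted = harmonic_wave([0.7 * k for k in range(8)])  # phases scrambled

# The two waveforms look completely different point by point...
print(max(abs(a - b) for a, b in zip(saw, shifted)) > 0.1)  # True
# ...yet their energy is identical, since only phase changed:
print(round(rms(saw), 4), round(rms(shifted), 4))  # 0.8739 0.8739
```

Each harmonic keeps its amplitude, so the magnitude spectrum (and the steady-state timbre) is the same; only the wave's drawn shape moves.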
Phase modulation and various forms of frequency modulation critically depend on understanding how to influence the shape of a waveform in order to take advantage of these methods of synthesis. I have a separate series called Learn FM Synthesis for this, so it will not be covered here.
Ok, now we are ready to look at combining these tones. Complex waveforms, such as a final mix, are often mystifying to look at. How can such a wave produce a full track we so easily know and recognize?! It's pretty amazing, and we are going to see some of how this works here. Let's take a saw wave. We see that the higher harmonics arranged in this fashion create a straight line. Now we add a pure sine wave. Our sine wave will impact the tone closest to it in the saw wave the most, in this case the fundamental. It will also impact every harmonically related value in the saw wave, with decreasing impact as the harmonics continue up. This means that adjusting the phase of one of these tones can have an impact on the timbre of the sound. This small coloration of the signal carries with it an enormous change in the shape of the wave, resulting in the complex shapes you know and are familiar with. This can be applied to any wave. Note that these are static relationships right now; as soon as different pitches occur, and if we change our phase, we can get much more complicated results. I would also like to point out that the phase between two totally different signals is generally ignored in every audio class I have ever had, and online too. People tend to think of phase only in relation to similar signals, and as a result they can miss a very important opportunity in mixing. Ironically, many people use it without realizing it: these effects happen with every processor you use that requires filters to break down the spectrum and reassemble it. It's part of the “color” of a plugin. Perhaps there is a build-up in your track and phase could have solved it.
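The saw-plus-sine idea above can be sketched in a few lines. This is an illustration under my own assumptions (an 8-harmonic saw approximation, with the added sine sitting on the fundamental): the same two sources sum to very different peak shapes depending only on the sine's phase.

```python
import math

def saw_plus_sine(phase_deg, n=2048):
    """Peak of an 8-harmonic saw plus one sine at the fundamental,
    with the sine's phase offset given in degrees."""
    phase = math.radians(phase_deg)
    peak = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        saw = sum(math.sin(k * t) / k for k in range(1, 9))
        peak = max(peak, abs(saw + math.sin(t + phase)))
    return peak

# In phase, the sine reinforces the saw's fundamental; at 180 degrees
# it carves the fundamental out entirely, leaving only harmonics 2-8,
# so the summed wave is noticeably smaller and thinner:
print(round(saw_plus_sine(0), 2), round(saw_plus_sine(180), 2))
```

The only thing that changed between the two prints is phase, yet the combined timbre and level are different; that is the mixing opportunity being described.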
It's also now clear that we are dealing with the phase of many frequencies and how they combine. Knowing that high-amplitude harmonics near each other will have a larger audible impact when their phase is adjusted than larger sweeping changes to phase is very important information, so you don't waste your time altering things where it makes no sense. Let's do another static example. Here we have a square wave and a saw wave. We see that as we flip the polarity, the signal summing changes to reveal two completely different characters from the combined result. This time we have two harmonic series instead of a single tone, so the coloring of the signal takes place across a series where an easy-to-define pattern appears, with phasing occurring at the odd harmonics because of the square wave. At this point you can see you have an awful lot of ways to color your signal.
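A quick sketch of that odd-harmonic pattern, using my own band-limited saw and square built from 9 harmonics and a naive single-bin DFT to measure each harmonic: flipping the square's polarity cancels or reinforces exactly the odd harmonics, while the even ones (saw-only) are untouched.

```python
import math

def harmonic_amp(wave, k):
    """Amplitude of harmonic k via projection (a single DFT bin)."""
    n = len(wave)
    re = sum(w * math.cos(k * 2 * math.pi * i / n) for i, w in enumerate(wave)) * 2 / n
    im = sum(w * math.sin(k * 2 * math.pi * i / n) for i, w in enumerate(wave)) * 2 / n
    return math.hypot(re, im)

n = 1024
saw    = [sum(math.sin(k * 2 * math.pi * i / n) / k for k in range(1, 10))
          for i in range(n)]                      # harmonics 1-9
square = [sum(math.sin(k * 2 * math.pi * i / n) / k for k in range(1, 10, 2))
          for i in range(n)]                      # odd harmonics only

normal  = [s + q for s, q in zip(saw, square)]
flipped = [s - q for s, q in zip(saw, square)]    # polarity-inverted square

# Odd harmonics reinforce (normal) or cancel (flipped); evens don't move:
for k in (1, 2, 3):
    print(k, round(harmonic_amp(normal, k), 2), round(harmonic_amp(flipped, k), 2))
```

This prints `1 2.0 0.0`, `2 0.5 0.5`, `3 0.67 0.0`: the comb-like odd-harmonic pattern the text describes, with two very different characters from the same two sources.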
Now consider real-life phasing. There are a couple of forms of this. We have looked at this digitally, but we can pan our phasing so that the left and right speakers are out of phase; this means that while the right speaker is pushing out, the left would be pulling in. This creates phasing “acoustically,” and it sounds very thin because we have now added the speed of sound through the air, the table, and whatever the conditions of the space already are. The various mediums make this particular kind of phasing a big problem, as does the fact that frequencies will be treated differently depending on the space and its construction. For example, air absorbs higher frequencies much more than lower frequencies, and phasing in the upper end will be much harder to detect depending on your room's setup. This means some frequencies may experience a phase shift while others do not. When playing back audio it's usually pretty obvious when someone has wired their speakers wrong; what may be less obvious is that phase shifting does occur naturally. You hear sound in a room or space and it has to get into your ears; the only difference here is that you are used to hearing this kind of phasing. This is also why simply adjusting the timing of a track will usually not work if your track has phasing. The problem comes when two microphones are picking up the same source. If the microphones are placed at a distance where one is pulling in while the other is pushing, they will produce signals out of phase. A phase-flip button can work sometimes, but just going out and moving the microphone is the better move. This is because we typically only consider the sound moving through the air and figure that if we just change the timing then all will be fine, but in reality sound is moving through the air, the ground, the instruments and so forth, all at different speeds, so phasing from a spaced stereo pair is much more problematic than phasing created from static conditions.
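The mic-spacing problem comes straight from the travel-time arithmetic. A rough sketch, assuming the sound arrives along the axis between the mics and a speed of sound of about 343 m/s in air (the function name is mine): the same physical spacing produces a wildly different phase offset at different frequencies, which is why a single timing or polarity fix can't rescue every frequency at once.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def phase_shift_deg(distance_m, freq_hz):
    """Phase offset between two mics spaced distance_m apart,
    for a tone arriving along the spacing axis."""
    delay = distance_m / SPEED_OF_SOUND        # extra travel time, seconds
    return (delay * freq_hz * 360.0) % 360.0   # degrees of the cycle

# 17 cm of spacing is about half a 1 kHz wavelength: near-total
# cancellation there, while 100 Hz is barely shifted at all.
print(round(phase_shift_deg(0.17, 1000), 1))  # -> 178.4
print(round(phase_shift_deg(0.17, 100), 1))   # -> 17.8
```

And this only accounts for the air path; the ground and instrument-body paths the text mentions each have their own speed, stacking more frequency-dependent offsets on top.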
This means that if you want to fix this, you will need software that can analyze each frequency and adjust the phase of each one if needed. It must also be able to keep the natural cancellation that happens. Usually plugs like this are only used when an obvious problem is noted, so they almost always improve the relationship. Polarity-flip buttons also help a great deal, especially if the microphones are very close, because then it's more of a timing issue than a room issue. The more the room is involved, typically the worse your situation when it comes to phasing.
We must consider another kind of phase. Now that we understand what is happening with unrelated phase, let's look at related phase; this is the secret behind many kinds of processing. We can duplicate a signal and slightly delay it, creating a subtle phase shift. This is known as flanging. Flanging can be static, or the delay amount can be modulated so that the shifting relationships create the iconic flanging sound. Phasers send a phase-inverted version of the sound back into itself through the use of all-pass filters, with the goal of deliberately creating holes in your sound. Some phasers don't actually use phase at all; some just straight up filter out parts of the spectrum. Additive synths usually use this method because of their extreme control of the harmonics. Some phasers actually cause resonance because elements of the signal are in phase. Chorusing is similar to the Reese: here the signal is duplicated and pitched very slightly, so the phases no longer line up between the two copies, and phasing occurs when the signals' components are summed. Comb filtering is achieved through a larger delay than the previously mentioned FX offer. You can accomplish comb filtering by simply using a delay with a very short delay time. It's called comb filtering because the phase cancellation looks like a comb, and there are filters designed to do this directly as well, rather than using phase. Check out the Haas Effect video in the Critical Listening series for more info.
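The delay-plus-sum idea behind comb filtering is compact enough to write out directly. A minimal sketch under my own naming: the gain of "signal plus a delayed copy" at any frequency is just the magnitude of 1 + e^(-j·2π·f·d), so notches and peaks fall at evenly spaced frequencies, which is the comb.

```python
import math

def comb_gain(freq_hz, delay_s, mix=1.0):
    """Magnitude response of y[t] = x[t] + mix * x[t - delay],
    i.e. |1 + mix * e^{-j 2 pi f d}|."""
    w = 2 * math.pi * freq_hz * delay_s
    re = 1 + mix * math.cos(w)
    im = -mix * math.sin(w)
    return math.hypot(re, im)

d = 0.001  # a 1 ms delay spaces the comb's teeth 1 kHz apart
print(round(comb_gain(500, d), 2))   # first notch: full cancellation -> 0.0
print(round(comb_gain(1000, d), 2))  # first peak: doubled (+6 dB) -> 2.0
print(round(comb_gain(1500, d), 2))  # next notch -> 0.0
```

Shortening the delay spreads the teeth wider apart (toward flanger territory); modulating it sweeps the notches, which is exactly the flanging motion described above.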
What about noise? Can noise have phase? Well, yes and no. It can't in the sense that it does not have a periodic cycle, but it can in the sense that we can break it down into sine waves. True stereo noise would not have phase issues, because it has no period and the right and left channels are unrelated, so any phasing would go unnoticed. However, many times we are using mono noise, and if we duplicate and pan it then it is not true stereo noise, because the right and left channels are only receiving an amplitude difference of the same signal, so we can have phase in this case. This is very important to note, because even though the signals are correlated, the mono signal is still random in nature; the phasing from this will be similar to the phasing of a high unison, chorus, or flanger setting. When summed down to mono you will hear a kind of swirly sound as the signals sum. It is very obvious in the flanger and phaser, and less so in the chorus. This is because the first two work much more on time than the chorus, which works on pitch. Unison is the equivalent of chorus, only it is additively generated. This means the swirling effect as the phases collide is far more controlled, as it is completely generated rather than cloned and manipulated. For a flanger to work it must delay your sound, and a chorus must re-pitch your sound. The “sound” of each of these plugs is a result of the methods they choose.
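The mono-noise-panned versus true-stereo-noise distinction can be made concrete with a correlation measurement. A small sketch under my own assumptions (uniform noise, a fixed seed, Pearson correlation): panned mono noise is perfectly correlated, so it will phase when summed, while two independent noise generators are essentially uncorrelated.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def correlation(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

mono = [random.uniform(-1, 1) for _ in range(5000)]

# "Fake stereo": the same noise panned to both sides (only level differs).
left, right = [0.8 * s for s in mono], [0.5 * s for s in mono]
print(round(correlation(left, right), 2))  # -> 1.0, fully correlated, will phase

# True stereo noise: an independent generator per channel, correlation near zero.
other = [random.uniform(-1, 1) for _ in range(5000)]
print(round(correlation(mono, other), 2))  # close to 0.0
```

A correlation of 1.0 means mono summing produces systematic cancellation patterns (the swirl); near-zero correlation means the channels just add as unrelated noise.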
Correlated noise would suffer the same effect, because a high unison setting generates the same level of complex mess in the upper harmonics of a signal. The biggest difference will be in the lower frequencies of the noise.
There is one more aspect here, and that is post-processing. The color of a plug sounds the way it does because it does more than just compress or EQ: it changes the phase relationships, and when the result is put back with the rest of the audio or mixed in with the original signal, the summation of the signals is very similar to the square and saw wave combining that we looked at earlier. Hopefully you can now see many of the properties and applications of phase! Leave a comment on something you found useful or something that perhaps I missed.
Have a blessed day.
Submitted February 28, 2017 at 08:11PM by Composing_Gloves https://www.reddit.com/r/edmproduction/comments/5wr7gp/not_all_phase_is_equal/?utm_source=ifttt