5 Tips For Mixing With Headphones

Written By Kyle Mathias  |  Audio Basics 

In a world where most consumers of music listen on wireless earbuds or headphones, are studio monitors even relevant anymore? Should you be mixing exclusively on headphones because that’s the way most people will ultimately enjoy your music? There’s an ongoing debate about this, but I’d say “having both is better than having one or the other”.

But when you compare the cost and portability of a good pair of headphones with those of a good pair of speakers and acoustic treatment, it’s clear that the headphone route is much more accessible. So if you’re mixing on headphones, here are 5 tips for you.

Thanks to RME for sponsoring this video and supporting audio education.


1) Use High-Quality Headphones

One reason mixing with studio monitors is so expensive compared to mixing with headphones is that you not only need to invest in a pair of high-quality studio monitors, but you also need to invest in acoustic treatment within your listening space.

The ultimate goal when setting up any monitoring system for mixing is to build something you can rely on in order to make decisions about what sounds good and what doesn’t. As you’ll see in a moment, this isn’t necessarily the same for headphones as it is for studio monitors.

A good pair of studio monitors will usually have a very flat frequency response, meaning what comes out of the monitors will closely resemble what goes into the monitors from a frequency balance perspective.

But when you’re mixing with speakers, the acoustic reflections and resonances in your space will alter that frequency balance. In practice, this means you end up needing to spend even more money on acoustic treatment to correct those issues so that the frequency balance is preserved by the time the sound reaches your listening position.

The ultimate destination of all signals in my studio is the RME ADI-2 Pro FS. It connects to my audio interface digitally, converts the digital audio to analog audio, and feeds my studio monitors and my headphones. The ADI-2 Pro FS has a parametric EQ on each of the studio monitor outputs, which can be used to fine-tune the irregularities that remain in my system’s frequency response after acoustic treatment has been applied.

But there’s still a lot of work to do in terms of physical acoustic treatment in my room, so I’ll almost certainly get more accurate sound from my headphones, which aren’t affected by the room’s acoustics at all.

There are several ways to implement corrective EQ, but I prefer implementing it as hardware in my signal chain rather than as software on my computer. This workflow allows separate corrective EQ curves for speakers and for headphones, without the risk of forgetting to switch a preset and without the need to deal with problematic drivers or software.

There are also parametric EQs on the headphone outputs of the ADI-2 Pro FS, but these aren’t necessarily used to chase the perfectly flat frequency response we might aim for with studio monitors.

See, we’ve grown so accustomed to the sound of speakers in a room that when you inject sound directly into your ears with a very flat frequency response using headphones, it ends up sounding different from what you’d expect. Therefore, many modern mixing headphones will not have a flat frequency response but will rather have a frequency response that attempts to approximate the sound of listening through speakers. So don’t be alarmed by a frequency response that isn’t flat when you’re shopping for headphones. This is often intentional.

There are several targets used in modern headphone production. One is to measure the frequency response of a high-quality speaker in an anechoic free-field. Another is to measure the frequency response of a high-quality speaker in a reverberant room from a distance far enough away from the speaker that the indirect sound is equal in level to the direct sound, aka diffuse field. And in recent years, the HARMAN target curve has become very popular because it incorporates a high-quality speaker, a high-quality room, and human perception and preference. In any case, these curves are often far from flat.

Many people use frequency response correction for their headphones – in fact, here’s a Reddit post that gives you the specific parameters to input into the RME ADI-2 for several different headphone models in order to get closer to the Harman target curve.

In my opinion, it’s best to listen to the “un-corrected” frequency response of your headphones, because you can take that with you anywhere. That means you can rely on your headphones as a reference in any situation, even when you don’t have a corrective EQ curve applied.

Instead, I tend to use the parametric EQs on my headphone outputs to get an idea of how my mixes would sound on a variety of systems. That’s one benefit of speakers that you don’t get through headphones – the ability to walk around the room and make small tweaks in an attempt to create a mix that translates to a variety of listening systems. From a frequency response perspective, this can be mimicked by testing your mix with a variety of EQ profiles.
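If you like to tinker, the same idea can be roughed out in software. Here’s a minimal sketch in Python (using NumPy and SciPy) that band-limits a mix file to loosely imitate a small consumer speaker. The file name and filter corners are placeholder assumptions, and this is not what the ADI-2’s EQ does internally.

```python
# Band-limit a mix to loosely imitate a small consumer speaker before auditioning.
# "mix.wav" and the filter corner frequencies are placeholder assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, mix = wavfile.read("mix.wav")            # placeholder file name
mix = mix.astype(np.float64)
mix /= max(np.abs(mix).max(), 1e-12)         # normalize to +/-1 for float output

# Roll off below ~100 Hz and above ~10 kHz, a rough "small speaker" band.
sos = butter(2, [100, 10_000], btype="bandpass", fs=fs, output="sos")
preview = sosfilt(sos, mix, axis=0)

wavfile.write("mix_small_speaker_preview.wav", fs, preview.astype(np.float32))
```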

There is a wide range of options when choosing headphones, but even some of the best headphones out there cost a fraction of what you’d pay for studio monitors. I’d avoid using active noise canceling headphones or wireless headphones though. You can check out this post if you’re interested in learning why these should be avoided.

2) Use an Appropriate Headphone Amplifier

When choosing headphones, it’s also important to consider the headphone amplifier that will be used to drive the headphones. Be aware of the impedance and the sensitivity of the headphones and the power and voltage that your headphone amplifier can provide.

The sensitivity of your headphones tells you how much sound pressure level can be expected at a given voltage or power input. For example, the sensitivity of the Neumann NDH-30 headphones can be found in the technical specifications. This specification tells us that you can expect an output of 104 dB SPL when you send a 1 kHz tone through the headphones at 1 volt RMS. Like I said, other headphones may show sensitivity as dB SPL at a specific power input.

The relationship between impedance, voltage, and current comes down to Ohm’s Law, V = I * Z (or Voltage = Current * Impedance). If we rearrange the equation to I = V / Z (or Current = Voltage / Impedance), it more clearly shows that at a given voltage, current will increase as impedance decreases.

Another relevant formula is P = V * I (or Power = Voltage * Current). This means there are two ways to increase the power consumed by the headphones – either we increase the voltage or we increase the current. When using a battery-powered device like a smartphone or a laptop, there is a limit to how much voltage can be supplied. Therefore, the only option at that point is to increase the current, which can be done by decreasing the impedance of the headphones, as we just saw through Ohm’s Law.
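To make that concrete, here’s a quick back-of-the-envelope sketch in Python. The impedance and sensitivity numbers are made up for illustration, not taken from any specific headphone.

```python
# Rough math sketch: how much power a headphone draws at a given voltage,
# and roughly how loud it gets. All numbers here are illustrative, not specs.
import math

def power_watts(v_rms, impedance_ohms):
    # P = V * I and I = V / Z (Ohm's Law), so P = V^2 / Z
    return v_rms ** 2 / impedance_ohms

def spl_from_voltage(sensitivity_db_at_1v, v_rms):
    # For headphones rated in dB SPL at 1 V RMS
    return sensitivity_db_at_1v + 20 * math.log10(v_rms)

# Example: 0.5 V RMS (a modest laptop headphone jack) into two hypothetical pairs
for name, impedance, sens in [("low-impedance / high-sensitivity", 32, 110),
                              ("high-impedance / low-sensitivity", 300, 96)]:
    p_mw = power_watts(0.5, impedance) * 1000
    print(f"{name}: {p_mw:.1f} mW drawn, ~{spl_from_voltage(sens, 0.5):.0f} dB SPL")
```

With these made-up numbers, the low-impedance, high-sensitivity pair comes out roughly 14 dB louder from the same 0.5 volts, which is the practical difference at stake here.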

OK. Enough math… What does this mean for mixing in headphones?

If you’re using the headphone amplifier built into the headphone jack on your laptop, you’ll probably want to stick to using headphones with high sensitivity and low impedance. Lower impedance means the headphones can consume more power at a given voltage, while higher sensitivity means you’ll get more acoustic output at a given power input.

Trying to drive high-impedance, low-sensitivity headphones with an inadequate headphone amplifier will result in a low output level and perhaps even a diminished low-frequency response, as low frequencies require more power to reproduce. Remember – we want a reliable frequency response when mixing, so make sure you have a proper pairing of headphones and headphone amplifier.

It’s important to note that Apple laptops now integrate adaptive headphone amps that can accommodate a wider range of headphone impedances by adapting the maximum output voltage depending on the impedance of the headphones connected to the headphone jack.

The headphone amplifiers in the RME ADI-2 Pro FS are very powerful, as the device is connected to wall power directly, rather than relying on battery power. They also adapt to various types of headphones through Low Power Mode and High Power Mode. Plus, the total harmonic distortion and signal-to-noise ratio specs on this thing are unfathomably good (even in the most extreme scenarios). If I hear noise or distortion in the signal I’m listening to, I can confidently assume it’s not from my headphone amplifier. It’s something else earlier in the signal chain.

3) Check Stereo Phase Correlation

When you listen to a recording on speakers, the stereo image will not be the same as when you listen on headphones. There are a few reasons for this…

First, a signal hard-panned all the way to the left will sit 30 degrees to your left on a standard speaker setup, giving you a 60-degree stereo width. On headphones, an element hard-panned to the left will be 90 degrees to the left, resulting in a 180-degree stereo width.

In practice, this means you’ll tend to make panning decisions when mixing in headphones that will be too subtle when played back on speakers. On the other hand, panning decisions on speakers will sound much wider when listened to on headphones. And there’s no real solution except to be aware of this reality and adjust accordingly.

The second difference between the stereo image on speakers and on headphones is that headphones give you total isolation between the left and right channels, which doesn’t exist with speakers. With speakers, each ear hears both the left and the right speaker.

The left ear hears mostly the left speaker but also hears the right speaker, albeit quieter and a bit later in time. The right ear primarily hears the right speaker, but also the quieter and slightly later signal from the left speaker.

This slight delay of the right speaker to the left ear (and of the left speaker to the right ear) can cause effects on speakers that easily go overlooked when listening through headphones…

Let’s say you have a vocal panned directly to the center, which means the signal level is equal in both the left and right speaker. When the acoustic signals from each speaker meet at the left ear, they will interact at a slight time offset, resulting in cancellations at some frequencies and summations at others. This is called a time-of-arrival difference, and it results in comb filtering – so named because the cancellations and summations resemble a comb when shown on a frequency graph. The same thing occurs when you use non-coincident stereo techniques and certain delay techniques.
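To get a feel for where those cancellations land, here’s a tiny sketch that assumes an idealized case: two equal-level copies of a signal arriving with a 0.25 ms time-of-arrival difference (an illustrative, roughly ear-to-ear-scale delay, not a measured value). In a real room the far speaker is also quieter at that ear, so the notches are only partial.

```python
# Where comb-filter notches fall for a given time-of-arrival difference.
# Assumes two equal-level copies of the same signal; the delay is illustrative.
delay_s = 0.00025  # 0.25 ms

# Cancellation happens where the delay equals half a period, i.e. at
# odd multiples of 1 / (2 * delay).
notches_hz = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
print([round(f) for f in notches_hz])  # [2000, 6000, 10000, 14000, 18000]
```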

Meanwhile, you will not hear these cancellations and summations when listening to that same center-panned vocal through headphones. Luckily, there are a few things you can do to highlight these problems even when you don’t have the luxury of using speakers.

The first would be to sum the left and right channels to mono, where both speakers play both channels. Then, listen for problematic interactions between the two channels. This is a worst-case scenario, because the left and right channels now reach each ear equally. So take the results with a grain of salt – you’re really checking mono compatibility here rather than “speaker compatibility”.
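If you want to check this numerically rather than only by ear, here’s a minimal sketch of a mono-sum check. The file name is a placeholder and a stereo WAV is assumed; if the mono sum comes out well below either channel, something is cancelling.

```python
# Quick mono-compatibility check for a stereo mix. "mix.wav" is a placeholder.
import numpy as np
from scipy.io import wavfile

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

fs, mix = wavfile.read("mix.wav")
mix = mix.astype(np.float64)
left, right = mix[:, 0], mix[:, 1]
mono = 0.5 * (left + right)

# A mono sum far below the individual channels points to phase cancellation.
print(f"L {rms_db(left):.1f} dB, R {rms_db(right):.1f} dB, mono sum {rms_db(mono):.1f} dB")
```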

Another way to check phase correlation between the left and right channel of your mix is to use a vector audio scope. This type of meter can be accessed in software like RME DigiCheck or in plugin form, such as iZotope Ozone.

The basic principle is that there is a dot that will move throughout the graph, leaving a line behind it that traces its path. The shapes left by that trace give us insight into the relationship between the left and right signals.

A good way to understand stereo phase correlation is to imagine how a few test signals would appear on a vector audio scope. Start with a low-frequency tone sent only to the left channel: it shows up as a line tilted toward the left, indicating which side the signal is coming from. Turning the tone up or down changes the length of the line, reflecting the signal’s level.

Sending the same tone only to the right channel creates a similar line tilted in the opposite direction, so the scope gives you a quick picture of the balance (or imbalance) between the left and right channels.

When the same tone is sent to both channels at equal levels, you get a vertical line, indicating that the channels are perfectly balanced and in phase. Inverting the polarity of one channel flips this, resulting in a horizontal line that indicates total phase opposition.

Now swap the tone for pink noise, which covers a broad frequency range more like actual music, and apply a slight delay to one channel. The trace spreads out and loses its vertical orientation, and if you sum the two channels and look at a frequency analyzer, you’ll see the comb-like pattern of cancellations and summations described earlier. This is exactly the kind of interplay between the left and right channels that you can’t hear directly when mixing on headphones, but that matters for making a mix translate across listening environments.
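Here’s the same thought experiment as a tiny correlation meter in Python: roughly the single number (+1 fully correlated, 0 unrelated, -1 polarity-inverted) that the vector scope’s trace is hinting at. White noise stands in for pink noise, and the 1 ms offset is arbitrary.

```python
# A bare-bones correlation meter for the test-signal scenarios above.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)      # low-frequency test tone
noise = np.random.randn(fs)             # white noise standing in for pink noise

def correlation(left, right):
    # A silent channel has no defined correlation; report 0 in that case.
    if np.std(left) == 0 or np.std(right) == 0:
        return 0.0
    return float(np.corrcoef(left, right)[0, 1])

cases = {
    "tone in left only":            (tone, np.zeros_like(tone)),
    "same tone in both channels":   (tone, tone),
    "one channel polarity-flipped": (tone, -tone),
    "noise with a 1 ms offset":     (noise[:-48], noise[48:]),
}
for name, (l, r) in cases.items():
    print(f"{name:30s} correlation = {correlation(l, r):+.2f}")
```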

If you play a professionally mixed stereo record through a vector scope like this, you can expect to see some variation on the graph. That’s ok – that’s what makes it stereo. But you’ll notice that the formations will typically maintain a somewhat vertical orientation, indicating that the left and right channels are mostly correlated in terms of polarity and phase. This isn’t always the case though, so use it as a guide, not a rule.

You can use this type of meter on pairs of individual instruments within your mix or on the stereo mix as a whole. This will help you identify when serious interference might occur between the left and right channels when summed to mono or played through stereo speakers.

Some monitoring devices also offer a crossfeed feature, like the one built into the RME ADI-2. This will send some of the left channel to the right channel at a slight delay, and vice versa. The ADI-2 uses the Bauer Binaural Method, which I’ll admit is a bit over my head. But if you’re interested in learning the technical details, I’ll leave a link to an article below. In practice, this feature offers a headphone listening experience that is more comparable to speaker listening while mitigating the comb filter effects I just discussed.
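For the curious, here’s a deliberately oversimplified crossfeed sketch in Python: not RME’s Bauer-based algorithm, just the general idea of feeding each channel a quieter, darker, slightly later copy of the other, the way each ear also hears the opposite speaker in a room. The delay, gain, and filter values are arbitrary.

```python
# Simplified crossfeed sketch (NOT the Bauer method the ADI-2 uses).
import numpy as np
from scipy.signal import butter, sosfilt

def simple_crossfeed(left, right, fs, delay_ms=0.3, bleed_db=-6.0, lowpass_hz=700):
    delay = int(fs * delay_ms / 1000)
    gain = 10 ** (bleed_db / 20)
    sos = butter(1, lowpass_hz, btype="low", fs=fs, output="sos")

    def bleed(x):
        # quieter, low-passed, slightly delayed copy of the opposite channel
        y = gain * sosfilt(sos, x)
        return np.concatenate([np.zeros(delay), y[:len(y) - delay]])

    return left + bleed(right), right + bleed(left)

# usage: left_cf, right_cf = simple_crossfeed(left, right, 48_000)
```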

4) Use Reference Tracks

It’s not a bad idea to drag and drop the audio file of a great-sounding mix into your session as a reference. I’d recommend finding a mix in a similar genre that you can use as a guide from the very beginning. You wouldn’t want to mix a country song with a metal recording as your reference, just as you wouldn’t want to mix a pop song with a classical recording as your reference.

Having one or two reference tracks will help you find your bearings before setting out on the mixing journey, and you can return to them throughout the process to make sure you’re generally on the right track toward a mix that conforms to basic genre standards (assuming that is your goal).

Some tools, such as iZotope Tonal Balance Control, can help you more objectively understand how your mixes compare to other mixes in similar genres. You pick the genre, and the plugin analyzes your mix against a target curve.
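You can rough out the same idea yourself by comparing the long-term average spectrum of your mix to that of a reference track. The sketch below is only an illustration of the concept, not how Tonal Balance Control works internally; the file names are placeholders, both files are assumed to be stereo at the same sample rate, and differences in overall loudness will show up in the numbers too.

```python
# Compare the long-term average spectrum of a mix against a reference track.
# File names are placeholders; both are assumed to be stereo, same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def average_spectrum_db(path):
    fs, audio = wavfile.read(path)
    mono = audio.astype(np.float64).mean(axis=1)  # collapse to mono for analysis
    freqs, psd = welch(mono, fs=fs, nperseg=8192)
    return freqs, 10 * np.log10(psd + 1e-20)

freqs, mix_db = average_spectrum_db("my_mix.wav")
_, ref_db = average_spectrum_db("reference.wav")

# Positive numbers: the mix has more energy than the reference in that band.
for lo, hi in [(20, 120), (120, 500), (500, 2000), (2000, 8000), (8000, 16000)]:
    band = (freqs >= lo) & (freqs < hi)
    print(f"{lo:>5}-{hi:<5} Hz: {np.mean(mix_db[band] - ref_db[band]):+.1f} dB vs reference")
```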

In addition to having good reference tracks in your session, it’s important to develop your inner reference of what sounds good and what doesn’t for the particular genre you’re mixing. This can be done by listening to music often on your headphones and by training your ears with the method I discuss in this post.

5) Check Your Mixes On Speakers

While the tips I’ve shown you here can help if you’re mixing exclusively on headphones, there are still some things that speakers really can help with – even if your audience will ultimately listen in headphones anyway.

For one, you’ll never feel the haptic sensation of a kick drum in your chest while listening through headphones. Only speakers can reproduce that feeling…

There is also something to be said about checking reverb on speakers, because the additional noise floor and reverb within the room may show you that the subtle reverb you added in headphones is completely imperceptible when played back in a real room.

With headphones, you can’t walk around your room (or step outside it) to hear how your mix changes. With speakers you can, which is handy when you need to adjust something you can’t reach from the sweet spot, and listening outside the sweet spot also helps you refine your mix so it stands up better to those kinds of situations in the real world.

Rather than making mental notes not to pan too much in headphones, it’s better just to check your mix on speakers. Rather than relying on EQ profiles to know how it will sound in a car, in earbuds, or on TV speakers, it’s just as good to print your mix and listen to it on those systems. But that’s an exercise that takes time, so having the tools I’ve discussed in your mixing toolkit definitely helps from a workflow perspective.

Over time, you’ll learn the effect of your headphones and you’ll develop an ability to compensate to make mixes that sound good on other speakers without the need to test. Practice is where you’ll find the biggest improvements and you can practice with the basic tools that you’ve already got.


Disclaimer: This page contains affiliate links, which means that if you click them, I will receive a small commission at no cost to you.
