Why some loudspeakers sound great.

Jørn Rune Kviserud
Mar 4, 2021


We have all heard the discussions, people arguing that this or that driver sounds better. Often, they try to explain why their favorite driver is just so much better. They highlight properties that certainly do play a role, but the way those properties are weighted against the other parts of a loudspeaker driver comes down to one simple word: complexity. A word that makes it almost impossible to see why one driver should be superior to another, and maybe even more difficult to make one.

This is, and has been for years, the main focus of my work. I will try to give an understandable picture of how complexity plays such a crucial role in drivers, and what we do to keep it under control. Hopefully, if you are a bit experienced in this line of business, you will also be able to understand why you do not always love the sound of a driver that on paper seems to do everything right, and why two seemingly equal drivers can be perceived so differently by the listener.

Background

The basis for this way of viewing a driver lies in mathematics. It is easy to see how some minor mistakes would have made NASA miss Mars by quite some distance. Meteorologist Ed Lorenz accidentally discovered that by reducing the precision of his data from 6 decimals to 3, his simulation model kept going along a seemingly identical trajectory for a while, before drifting off by a margin so big he did not recognize the result from a previous simulation using what he thought were the exact same numbers. This shows that a complex system is affected by minor variations, and that those variations grow over an extended time scale.
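
To make this concrete, here is a minimal sketch of the same kind of experiment. It uses the well-known three-variable Lorenz-63 system rather than Lorenz's original weather model, and all numbers are purely illustrative: one run starts from a full-precision state, the other from the same state rounded to 3 decimals, and the separation between the two is printed as it grows.

```python
# A minimal sketch (illustrative numbers, not Lorenz's actual model or data):
# run the Lorenz-63 system twice, once from a full-precision state and once
# from the same state rounded to 3 decimals, and watch the runs drift apart.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz-63 equations."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

full = np.array([1.000001, 1.000001, 1.000001])
rounded = np.round(full, 3)        # the "3 decimals" restart

for step in range(1, 3001):
    full = lorenz_step(full)
    rounded = lorenz_step(rounded)
    if step % 500 == 0:
        sep = np.linalg.norm(full - rounded)
        print(f"t = {step * 0.01:5.1f}   separation = {sep:.6f}")
```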

Chaotic systems can be studied mathematically, and what is commonly found is that chaos in deterministic systems still tends to form patterns, and that these patterns are shaped by how unstable the individual links are and by how many unstable links are involved. In other words, if two links in the system are affected by a similar event at a similar frequency and amplitude, their contributions add up and can produce a huge peak in the system's overall transfer function.

The double pendulum

A great way to illustrate this is by first looking at two identical pendulums. They will keep moving nearly identically for a long time. The system has very little complexity and is therefore easy to predict.

But if we attach a second pendulum to the end of each of the two, we add a level of complexity. If we release the two at the same time, they will start moving in very different patterns almost immediately. How quickly they start moving differently, and how differently they move, will depend on the mass relationship between the primary and the secondary pendulum. If they have close to identical mass, the pattern will look random almost immediately, but if the primary pendulum is 10 times heavier than the secondary pendulum, the primary pendulum will be affected very little by the secondary one, and the secondary pendulum will act as if it were mounted to a rather rigid platform. Both systems have the same level of complexity, but one is far more predictable than the other, and therefore more stable.
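
For those who want to play with this, here is a rough numerical sketch. It uses the standard point-mass double-pendulum equations of motion; the masses, lengths and release angles are illustrative and not taken from anything in this article. Two pendulums are released from an almost identical starting angle, once with equal masses and once with the primary ten times heavier, and the worst-case separation of the primary angle over the run is compared; the equal-mass pair typically diverges far faster.

```python
# Rough sketch: two double pendulums released from almost the same angle,
# once with equal masses and once with a 10:1 mass ratio. Standard point-mass
# double-pendulum equations; all numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81

def deriv(t, s, m1, m2, l1=1.0, l2=1.0):
    th1, w1, th2, w2 = s
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    dw1 = (-G * (2 * m1 + m2) * np.sin(th1)
           - m2 * G * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    dw2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
                            + G * (m1 + m2) * np.cos(th1)
                            + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return [w1, dw1, w2, dw2]

def divergence(m1, m2, delta=1e-6, t_end=20.0):
    """Worst-case separation of the primary angle for two nearly identical starts."""
    s0 = [np.pi / 2, 0.0, np.pi / 2, 0.0]
    s1 = [np.pi / 2 + delta, 0.0, np.pi / 2, 0.0]
    t = np.linspace(0, t_end, 400)
    a = solve_ivp(deriv, (0, t_end), s0, args=(m1, m2), t_eval=t, rtol=1e-9, atol=1e-9).y
    b = solve_ivp(deriv, (0, t_end), s1, args=(m1, m2), t_eval=t, rtol=1e-9, atol=1e-9).y
    return np.max(np.abs(a[0] - b[0]))

print("equal masses   :", divergence(1.0, 1.0))
print("10:1 mass ratio:", divergence(10.0, 1.0))
```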

Stability

One can simply define a stable system as one that does not change its state from one point in time to another. This applies to both static and dynamic systems. A stack of small balanced rocks is an example of such a static system. It takes very little to move it just slightly, so it is only stable under certain very limited conditions. The way the different components in that system rely on each other will eventually make it fall apart, and the system's initial state would no longer be recognizable. Even if the system were able to repair itself, it would be temporarily unrecognizable while a minor event took place. So a better way of determining whether a system is stable is to look at how much it changes temporarily over a huge range of inputs, including inputs that go slightly past the system's intended working range.

A driver is a system of added complexity

So if we transfer this to a speaker driver, we can go a long way in analyzing individual parameters with modern tools. But looking at how these parameters work together can produce some interesting results. For example, there are several factors that individually affect the force factor of a speaker motor. We often view these factors as individual parameters like Bl(x), Bl(i) and Le(x), to name a few. It is also common that Bl goes down as x increases, that Bl goes down as i increases, and that Le can sometimes go up by 50–100% as x goes negative while going the other way as x goes positive.

For those who are not familiar with these parameters:

Bl is what we call the force factor. The force factor tells us how much force is produced by a given current through the coil. Since the moving mass is basically fixed, and the sound pressure is given by the acceleration, the force factor directly translates to how a driver reproduces the input signal. Bl(x) tells us the error of the force factor as the diaphragm moves. Bl(i) tells us the error of the force factor as the current changes. Le is the inductance; it is not directly connected to the force factor, but since it is part of the drive unit's impedance it affects both the actual current and the working conditions of the crossover. Le(x) tells us the error of the inductance as the diaphragm moves.

Combined, these errors will most often pull in the same direction, producing large amounts of modulation where one part of the music signal affects something else. This is the effect most of us have heard when a system is about to break down completely, and it is basically what we call intermodulation distortion, or IMD.

So when low-frequency content is played along with high-frequency content, Bl is affected by both x (the excursion) and i (the current). At the same time, the crossover is modulated by the driver's Le jumping up and down as the driver moves. So just by examining these factors, we have identified three that work in the same direction, distorting our midrange content.
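
As a toy illustration of that mechanism, the sketch below plays a 30 Hz tone together with a 1 kHz tone through a made-up Bl(x) curve that droops with excursion and is slightly asymmetric. The curve and the signal levels are invented for this example and have nothing to do with any specific driver; the point is simply that the spectrum of the resulting force picks up sidebands at 1 kHz ± 30 Hz and ± 60 Hz, which is exactly the intermodulation described above.

```python
# Toy illustration (all numbers invented, not any specific driver): a 30 Hz
# tone and a 1 kHz tone driven through a Bl(x) curve that droops with
# excursion and is slightly asymmetric. The force spectrum picks up
# intermodulation sidebands around the 1 kHz tone.
import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
f_lo, f_hi = 30.0, 1000.0

i = 2.0 * np.sin(2 * np.pi * f_lo * t) + 0.2 * np.sin(2 * np.pi * f_hi * t)  # drive current [A]
x = 8e-3 * np.sin(2 * np.pi * f_lo * t)   # excursion [m], dominated by the low tone

def bl(x, bl0=10.0, sym=3000.0, asym=20.0):
    """Made-up force factor: symmetric droop plus a small asymmetry."""
    return bl0 * (1.0 - sym * x**2 - asym * x)

force = bl(x) * i                          # F = Bl(x) * i

spec = np.abs(np.fft.rfft(force * np.hanning(len(force))))
freqs = np.fft.rfftfreq(len(force), 1 / fs)
for f in (f_hi - 2 * f_lo, f_hi - f_lo, f_hi, f_hi + f_lo, f_hi + 2 * f_lo):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:6.0f} Hz : {20 * np.log10(spec[k] / spec.max()):6.1f} dB re. max")
```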

An absolute minefield of risk factors

There are lots of other parameters to consider. For example, the diaphragm itself will have a number of resonance frequencies. The spider and the surround will have their own resonances, and they will also affect the diaphragm, and each other. Most drivers will have one or more cavity resonances.

All of these parameters are also subject to different loss factors. Where are they located in the system? Do we have a spider resonance that produces energy that needs to be transported through a thin cone to be shorted out in the surround? Or do we have an edge resonance that is transported through the cone to be shorted out in the amplifier through the coil, even affecting the magnetic circuit on its way? How are they related to amplitude? Are they even linear? If not, which overtones will they produce? Do we keep the loss under control when we increase excursion? What if we increase the working range by an octave? Do we have heat under control? Do we have a secondary heating of neo magnets?

And they are often very much hidden

We may have viewed numerous CSD (cumulative spectral decay) diagrams for the driver, and yet a number of resonances remain hidden in the plots while being very much audible. We could put seemingly identical drivers side by side, and they could sound very different. The diaphragm material will be a part of this, but it is often surprisingly insignificant compared to many other factors.

The subharmonic breakup mode of high-Q metal cones is a great example of a hidden parameter that affects the perceived amount of energy very differently from what a measurement suggests. Some people call it “detail” or even “clarity”. I call it distortion, because I like to call things by their actual names.
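
Here is a synthetic example of why such a mode hides from a standard distortion figure. The sketch builds a test tone plus a small component at half its frequency, roughly what a high-Q cone can produce when it breaks up subharmonically. A THD number that only sums the integer harmonics of the stimulus reports essentially nothing, while the energy at f0/2 is plainly there in the spectrum. All numbers are invented for illustration, not measured.

```python
# Synthetic illustration (not a measurement): a tone at f0 plus a small
# component at f0/2. A THD figure that only sums integer harmonics of f0
# reports almost nothing, while the spectrum clearly contains the f0/2 energy.
import numpy as np

fs, f0 = 48000, 2000.0
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * f0 * t) + 0.03 * np.sin(2 * np.pi * f0 / 2 * t)

spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
bin_of = lambda f: np.argmin(np.abs(freqs - f))

fund = spec[bin_of(f0)]
harmonics = np.sqrt(sum(spec[bin_of(k * f0)]**2 for k in range(2, 6)))
subharm = spec[bin_of(f0 / 2)]

print(f"THD (harmonics 2-5): {100 * harmonics / fund:.4f} %")
print(f"content at f0/2    : {100 * subharm / fund:.2f} % of the fundamental")
```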

Another classic example of a parameter that is hard to see in traditional measurements is the difference between alnico and ferrite. Alnico is a better material for audio if you have a poorly designed magnetic circuit. It all depends on how well you are able to block the signal from penetrating the magnet. Many of the pre-1978 drivers (cobalt pricing skyrocketed around 1978 due to a political conflict) performed quite well with alnico, but not so well with ferrite. JBL designed a new motor to improve their ferrite motors once alnico became too expensive. According to their own statement, their new ferrite drivers outperformed their old alnico drivers. Anyway, these differences rarely translate into notable changes in frequency response or harmonic distortion.

That is, of course, not the same as saying these things cannot be measured. You just have to know what to look for.

A practical example

I promised to give you at least some information on what to look for in a driver to determine whether it is potentially a great one. What you can rarely see is which resonance comes from where in the driver. But there are still a few clues that will tell you whether the driver has a potential problem.

Our 15-inch driver

The easiest right now will be to use our recent (and not yet 100% finished) 15-inch woofer/midrange driver. The driver is intended for studio and high-end hifi applications. It thrives in a 150 liter enclosure, has its first minor mode at around 1500 Hz, and keeps around 0.1% THD up to 4.5 kHz. Special attention has been given to making it an ultra-low-distortion subwoofer, midwoofer and midrange driver by ensuring it can deliver very high excursion at very low distortion. It can handle extremely high current transients without distorting its permanent force field. The intrinsic inductance is very low, so even at low frequencies where the demodulation rings are not effective, it will not distort its force field. And last, but not least, special attention has been given to how the mass of the coil, the mass, damping and strength of the cone, the mass and damping of the spider and surround, and the aerodynamic properties of the cavity are balanced together to make it perform as well as possible as a midrange driver too.

It is quite a marvel as it is. But a driver that on paper seems to be “pretty much the same thing” can be made a lot simpler. By describing how we could have achieved the goal with a simplified approach, I hope to give some insight into why “the same thing” can still be very different:

If we replaced the 250 mm by 40 mm magnet with one a fraction of the size, we could have achieved the same force factor by using more layers on the voice coil. We would have gotten pretty much the same T/S data, it would simulate pretty much the same, and we could have had almost the same frequency response too.

But it would have required a lighter cone to compensate for the heavier coil. The mass of the coil and suspension, essentially carried by the cone, would be significantly higher relative to the strength of the cone. The system would be less stable and far more volatile. We would probably see this easily on an ETC (energy time curve), which shows the amount of non-harmonic energy released from the cone over time after an impulse.

The relationship between the fixed magnetic field and the magnetic field induced by the voice coil would be altered. The fixed magnetic field should be as constant as possible. In the altered version, the signal-dependent field produced by the coil would be significantly amplified, and the fixed magnetic field would be greatly reduced, so it would no longer be the dominant field. Since the field in the gap is the sum of the permanent field (from the magnet ring) and the applied field (from the voice coil), it would modulate like crazy even at lower levels.
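
A back-of-the-envelope sketch of that shift, with numbers I have invented purely for illustration (they are not the actual driver): shrinking the magnet lowers the permanent gap field, so the same Bl requires more turns of wire, and the coil's own signal-dependent field grows at the same time as the permanent field shrinks. Treating the iron as ideal, so that the coil's MMF drops across the gap alone, the ratio between the two fields worsens roughly by the product of those two factors.

```python
# Back-of-the-envelope (all numbers invented for illustration): the ratio of
# the coil's own field in the gap to the permanent field, for a big-magnet
# motor with a 2-layer coil and a small-magnet motor where the lost gap field
# is bought back with twice the turns (4 layers). The iron is treated as
# ideal, so the coil's MMF is assumed to drop across the gap alone.
import numpy as np

MU0 = 4 * np.pi * 1e-7   # permeability of free space [H/m]
GAP = 1.5e-3             # magnetic gap length [m], assumed
I_PEAK = 10.0            # peak drive current [A], assumed

def modulation_ratio(b_gap, turns):
    """Coil-induced gap field at I_PEAK relative to the permanent gap field."""
    b_coil = MU0 * turns * I_PEAK / GAP
    return b_coil / b_gap

for name, b_gap, turns in (("large magnet, 2-layer coil", 1.2, 60),
                           ("small magnet, 4-layer coil", 0.6, 120)):
    print(f"{name}: coil field / permanent field = {modulation_ratio(b_gap, turns):.2f}")
```

This crude ratio only captures one of the mechanisms; the simulation mentioned further down, which also accounts for the loss of saturation and force-factor symmetry, puts the real difference far higher.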

The motor would lose saturation. This would cause the force factor to be modulated by the signal, the intrinsic inductance of the magnetic circuit would increase, and the motor would lose its symmetric force factor. The motor would also be far more sensitive to the stray field (the weaker magnetic field not inside, but around, the magnetic gap), greatly reducing its linearity.

The complexity of this task is not small, and there is a lot of ground to cover. I am trying to show how the parameters are interlinked, and how much difference one small change makes without changing the T/S parameters, sensitivity, basic frequency response etc. of the driver. The simplified version would probably also have nearly as low harmonic distortion, but the intermodulation distortion would be very different, especially when combined with a passive crossover.

I did simulate how the change would affect the motor alone, and the 4-layer version with the smaller magnet would be more than 160 times less stable than the current version (the current driver has a modulated field at 160 W similar to what the modified version has at only 1 W). And this is still a high-sensitivity (95 dB/1 W) driver. Imagine what would happen if it were a more common 85 dB/1 W bass driver that required 10 times the power for the same sound pressure level…

It is all about making this complex system work without ever showing its complexity. By keeping every link as robust as possible, we can keep them stable within a huge frequency range and dynamic range.

I will try to make detailed descriptions of many of these interlinked factors for each new driver we release. I also have a plan to make a document describing the minimum requirements one should look for in a midrange driver.

Spoiler alert: Real midrange drivers are not easy to find.

Jørn Rune Kviserud
Tonalab/Midgard Audio
2021
