
Minority Interests

by Peter Mapp

As I have often said, you cannot necessarily rely on your own hearing when making decisions about sound quality and performance. Having said that, this is exactly what we have to do most of the time in order to get the job done!

Can you imagine a mixing console being simultaneously operated by a ‘committee’ of three mix engineers at a live event to even out individual proclivities?

So whilst we do indeed have to rely on our own hearing, we must be mindful that not everyone is going to hear what we hear, or react to it in the same way. Let me explain a little more about what I mean and some of the potential implications. Let's start with intelligibility and required performance standards. Almost universally around the globe, it is now agreed that a sound system used for voice alarm or emergency communication purposes should achieve an STI value of at least 0.5. While there are variations in how this is defined, such as average or minimum values, the target is effectively the same, as this has been agreed to provide adequate intelligibility for the average listener. But what about the non-average listener? Consider, for example:

  • Those with hearing loss
  • Those with noticeable hearing loss (around 12–14% of the population)
  • Children – primary school students and indeed those up to the age of about 14 – require far higher intelligibility in order to achieve the same level of speech understanding as adults. For this group the STI needs to be >0.60 to be equivalent to the 0.50 standard
  • Non-native listeners – people whose first language is not that of the broadcast announcement – also need a higher STI in order to adequately understand an announcement or broadcast speech. Again, a value of ≥ 0.60 is typically required to be equivalent to the target 0.50 STI. (A short sketch after this list illustrates how such adjusted targets might be checked.)
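
To make the point concrete, here is a minimal sketch – my own illustration, not part of any standard – of how a measured STI might be checked against listener-group targets. The 0.50 and 0.60 figures are the ones quoted above; the group names, the function name and the example measured value are assumptions made purely for the sake of the example.

```python
# Illustrative sketch only: checks a measured STI against listener-group
# targets drawn from the figures quoted above (0.50 for the 'average'
# adult listener, roughly 0.60 for children up to about 14 and for
# non-native listeners). Group names and function names are my own.

STI_TARGETS = {
    "average_adult": 0.50,        # typical voice-alarm standard target
    "child_under_14": 0.60,       # needs higher STI for equivalent intelligibility
    "non_native_listener": 0.60,  # likewise
}

def sti_adequate(measured_sti: float, listener_group: str) -> bool:
    """Return True if the measured STI meets the target for the given group."""
    return measured_sti >= STI_TARGETS[listener_group]

if __name__ == "__main__":
    measured = 0.52  # example measured value at a listening position
    for group, target in STI_TARGETS.items():
        verdict = "adequate" if sti_adequate(measured, group) else "NOT adequate"
        print(f"STI {measured:.2f} vs target {target:.2f} for {group}: {verdict}")
```

A system that comfortably passes the usual 0.50 criterion can therefore still fall short for a sizeable part of its real audience.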

Therefore, if we are designing or setting up a sound system where it is known that such a group may make up a significant proportion of the potential listeners, such as at an international airport, a church with an elderly demographic or a school, then surely we should be taking this into account?

None of the emergency sound system standards, as far as I am aware, cater for these minorities. While on this topic, and considering children in particular, does any engineer or legislator ever consider the effect that loud sounds (speech, but particularly alarm tones) can have on autistic children? Fire alarms (bells, sirens, klaxons etc.) are designed to be loud and can literally paralyse autistic children – they generally hate loud sound and, instead of getting out of the building or away from the danger, will often freeze in panic and remain where they are.

This means that teachers and carers have to put themselves in danger by staying with the panicked child and trying to get them out. But do these devices need to be so loud? The simple answer is NO, they don't. It's just that the designer's brief is to make them loud.

I have been in situations where fire alarms have gone off and the noise level was such that I couldn't think straight and was unable to make a rational decision as to where or which way to go. You just want to get away from the noise – potentially straight into the path of the fire or danger.

While on the topic of fire and smoke alarms, has their effectiveness ever been specifically tested with children? The assumption is that the design and test engineers can hear them, so why wouldn't children? Well, if the research had been done, the lives of several children could probably have been saved. An investigation after a domestic fire here in the UK, in which six children (aged 5–13) died, found standard alarms to be ineffective. In a pilot study, over 80% of the 34 children tested did not respond to standard smoke detector alarms; only two children woke up every time the alarms were sounded, and none of the 14 boys woke at all. Interestingly, replacing the alarm signal with the voice of a parent had a 90% success rate. So let me say it again: just because you can hear something, it doesn't mean that the intended audience does.

Distortion is another sound system parameter whose audibility appears to vary hugely from listener to listener. I don't know if I am particularly sensitive to distortion – and here I mean harmonic distortion (THD), not spectral or temporal distortion – but I have often had to point out the unacceptability of a system because of it.

Recently I was discussing a church sound system that, among a number of issues, distorted badly when the radio microphones were used, yet no one seemed to be aware of this.

Equally, I recall an occasion when I was listening to a particular loudspeaker with its designer. I needed to establish whether the unit could produce the SPL required for a project, so we cranked it up. However, several dB short of what I needed, and of what was being claimed, the sound began to distort audibly and, as I diplomatically put it, 'it was disgusting and totally unacceptable'. The designer looked quite offended at my opinion, which I couldn't understand, as the loudspeaker was clearly doing a very good impression of a square wave generator. Only some time later, when formal measurements were made under anechoic conditions, was it shown that the acoustic output was around 6 dB lower than claimed, and that at anything higher the sound was grossly distorted. The interesting issue to me was that this was a very competent loudspeaker design engineer, many of whose other designs I liked very much, so I could not understand why he was apparently happy with the gross distortion I was hearing. I can only assume that he simply did not hear it.
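
For anyone who would rather quantify than argue about what they are hearing, the sketch below shows one rough way of estimating THD from an FFT: the ratio of the combined harmonic magnitudes to the fundamental. It is purely illustrative, not a measurement-grade analyser; the sample rate, test frequency and the hard-clipping stage are my own example values, with the clipping simply standing in for a loudspeaker driven past its limit, as in the anecdote above.

```python
# Rough sketch, not a measurement-grade THD analyser: estimates THD from
# an FFT by comparing harmonic magnitudes with the fundamental.
import numpy as np

def estimate_thd(signal: np.ndarray, fs: float, f0: float, n_harmonics: int = 10) -> float:
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def mag_at(f):
        # Magnitude of the FFT bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

fs, f0 = 48000, 1000.0
t = np.arange(fs) / fs                      # one second of signal
clean = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(1.5 * clean, -1.0, 1.0)   # hard clipping: driven past its limit

print(f"Clean sine THD:   {100 * estimate_thd(clean, fs, f0):.2f} %")
print(f"Clipped sine THD: {100 * estimate_thd(clipped, fs, f0):.2f} %")
```

The clean tone comes out at essentially zero, while the clipped one jumps to tens of percent – numbers that settle the 'is it distorting?' question far more diplomatically than my comment to the designer did.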

I also find it interesting to note the difference of opinion that exists as to the correct synchronisation delay that should be used when setting up a system (by this I mean the synchronisation of the sound arrivals from different loudspeakers at a listening position). Haas (of 'Haas Effect' fame), for example, found that under a given set of conditions 10% of the listeners were disturbed by a delay of 42 ms, the average delay time for disturbance (i.e. 50% of listeners) was 68 ms, and not until 90 ms was the delay disturbing to 90% of the listeners. In a different experiment he found that a delay of 60 ms was disturbing to 10% of the listeners, but 50% of listeners were not disturbed by the echo until it was increased to 110 ms. These are huge differences, but they help explain why I find a given mis-synchronisation annoying whilst those around me do not hear the problem.

When it comes to setting and optimising (tuning) the frequency balance of a system, all bets are off, and that is a discussion for another time. But until then, just remember that what you are 'hearing' is probably not the same as what everyone else is hearing, and do stop and consider the significant minorities that also may need to listen to your work.
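
As a closing illustration – entirely my own sketch, simply reusing the Haas figures quoted above – here is a back-of-envelope calculation of the mis-synchronisation a listener experiences between a main loudspeaker and a delay fill, together with a crude reading of how many listeners that gap might disturb. The distances, the 343 m/s speed of sound and the function names are example assumptions, not recommendations.

```python
# Back-of-envelope sketch: arrival-time mismatch between a main loudspeaker
# and a fill loudspeaker at one listening position, read against the Haas
# disturbance figures quoted in the text (42 ms -> 10%, 68 ms -> 50%, 90 ms -> 90%).

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

HAAS_DISTURBANCE = [(42.0, 0.10), (68.0, 0.50), (90.0, 0.90)]

def arrival_delay_ms(main_distance_m: float, fill_distance_m: float,
                     fill_delay_ms: float = 0.0) -> float:
    """Mis-synchronisation at the listener between main and fill arrivals."""
    main_ms = 1000.0 * main_distance_m / SPEED_OF_SOUND
    fill_ms = 1000.0 * fill_distance_m / SPEED_OF_SOUND + fill_delay_ms
    return abs(main_ms - fill_ms)

def disturbance_band(delay_ms: float) -> str:
    """Crude, interpolation-free reading of the Haas figures above."""
    previous = 0.0
    for threshold, fraction in HAAS_DISTURBANCE:
        if delay_ms < threshold:
            return f"roughly {previous:.0%}-{fraction:.0%} of listeners disturbed"
        previous = fraction
    return "more than 90% of listeners disturbed"

# Example: main array 30 m away, fill loudspeaker 4 m away with no delay set
gap = arrival_delay_ms(30.0, 4.0)
print(f"Mis-synchronisation: {gap:.1f} ms -> {disturbance_band(gap)}")

# Same position once the fill is delayed to (roughly) re-synchronise it
gap = arrival_delay_ms(30.0, 4.0, fill_delay_ms=75.8)
print(f"Mis-synchronisation: {gap:.1f} ms -> {disturbance_band(gap)}")
```

Even this crude reckoning shows how a gap that one engineer shrugs off can sit squarely in the range where half the audience hears an echo.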