
Human hearing, hearing loss and audio memory

A.S.

Administrator
Staff member
As I've admitted - to the disbelief of some on my Far East tour - I don't listen to hi-fi at home. I have several (treasured) portable radios scattered around the house. So when I was at Tropical Audio in Kuala Lumpur I had a rare opportunity to listen to all the models in a concentrated demonstration over an hour or two. It was very interesting, and not quite what I expected. There were differences that I didn't anticipate (or remember) and I can better understand why each model has its fans. And yes, there is something special about the SHL5.

One difficulty I have is that I can't just listen to music reproduced over the loudspeakers without deconstructing what I hear and trying to attribute the sound to this or that aspect of the design. That's because if I could distill the essence of the sonic differences with certainty, I could then continue to purify or emphasise those characteristics in future models. There are, in the designs, measurable differences in the energy spectrum and it is no surprise that this reflects on the sound balance. But which model is really right? That's for you to say.

There is a commonality in basic design between the SHL5 and M30 - both in production for many years - and these differ slightly, as I've mentioned, from the C7ES3, M40.1 and P3ESR. Again, which do you prefer? All of them create an illusion; there is no perfect answer. But my experience in Malaysia has brought me closer to really understanding the SHL5, designed over twenty years ago and which I am still in awe of.

I'm now off to the pub to hear some live, traditional jazz!
 
J

jferreir

Guest
This is a fascinating little discussion! As someone interested in the philosophy of mind/cognition, I frequently encounter sensory phenomena such as this. One phenomenon that I find particularly interesting is the McGurk effect. It provides a nice example in which two seemingly disparate sensory modalities overlap and indeed interact with one another. In simple terms, what we see (e.g., someone's lips moving) actually affects what we hear. I'm sure you can find an interesting example via Google.

Lately, however, I've been focusing on instances of change blindness and inattentional blindness. Absolutely fascinating stuff, which provides prima facie empirical support for otherwise unpopular philosophical theories. What these experiments suggest is that the content of perceptual experience is not as fine-grained as we ordinarily take it to be. To use the example of visual perception, we usually report that we take in a great amount of detail in a visual scene. What is noteworthy, however, is that such detail is usually limited to the foveal region, or center of focus. Change blindness and inattentional blindness make this explicit by demonstrating that we fail to detect rather significant changes in a scene when our attention is directed to some feature in particular, or when our attention is briefly interrupted. Here are some interesting (and somewhat popular) examples:

Change Blindness here
Inattentional Blindness here

Such empirical work is beginning to support a growing trend in the philosophy of mind/cognition, which attempts to reconceive the nature of perception. The general idea, as it were, is that what we perceive is determined, in large part, by the way in which we actively engage with our environment (which, in turn, is partly determined by the kind of creatures we are). This is in response to the 'traditional view' that perception is passive, whereby the world makes sensory impressions upon us. I dare not elaborate further, for fear of mischaracterizing the position, but if anyone is interested, here are some interesting books on the subject:

'Action in Perception' (2004) - Alva Noe: Clearly written, very accessible, and nicely incorporates recent empirical research. Due to the accessible style, Noe is sometimes imprecise with his analyses and can be seen contradicting himself in some places. Still, highly recommended.

'Mind in Life' (2007) - Evan Thompson: A much more holistic approach that combines biology, complex systems theory, phenomenological philosophy, psychology and neuroscience. Thompson argues that we cannot reduce everything to mere brain processes, as this is only part of the picture, so to speak. His larger goal is to gesture toward how we might bridge the gap between the 'mind sciences' and lived experience/subjectivity. Very comprehensive and compelling.

'Mind & World' (1994) - John McDowell: This is not for the faint of heart. Considered somewhat radical in philosophical circles, but chock-full of interesting ideas and novel insights. Probably the worst contemporary writer in philosophy, although one should not dismiss him as a result.
 

Supersnake

New member
We are designing speakers to be listened to by fellow humans. The trick is to use the measuring and designing tools to keep the technical, objective measurements good (for example, flat-ish) yet to make the speaker sound natural. I've illustrated two aspects of the design process here and here in videos. In fact, now that I look at these I can see that, in reality, the way I work on a daily basis is a combination of the two ... that is, I am measuring as I listen as I tweak as I re-simulate. A four-way design loop. Which is why it takes so long to balance the hard objective data with an acceptable subjective sound. After all, when being peer reviewed, good measurements are expected.

The turning point for me was about twenty years ago at, as I recall it, the London Penta hi-fi show. I had hurriedly assembled a pre-production pair of speakers the day before the show and hadn't checked them carefully. The sound at the show, to my ears and to the public visitors', was beautiful. I was completely seduced by my own creation. Imagine the sweat that broke out a few days later, back at base, when I discovered that I'd wired both tweeters out of phase. I should have heard that, but I was so romantically wrapped up in the experience that I dropped my objective guard. Since then, test equipment has always been a few steps away. You can hear what you want to hear.

It's to be expected that the public, unfamiliar with the speakers, room or music, would not hear my stupid mistake - but it was shocking indeed that I, purporting to be a professional, didn't after three days of exposure.

P.S. There is a small (and only small) justification in my favour. I was standing continuously at the rear of the room, well above tweeter axis. The actual phase of the tweeter so far above axis would probably not have been material. But for the public, seated more or less on axis, the effect would have been magnified. Since then, I've not completely trusted myself sans test equipment.
Clearly summarized and explained. Now would be a good time for me to revisit your two videos.
Thank you!
 
S

STHLS5

Guest
Re: Can_you_trust_your_senses.pdf

That is exactly what our brains are all about. They don't see or hear like machines. It reinforces most audiophiles' (or videophiles') belief that measurements may not be able to prove the differences we hear or see - and validly so, because those differences only exist in our minds.

Taking the example of the darker square - can we dismiss one's perception that one square is darker than the other because scientific tests and measurements say they are the same shade of colour? Another example that I would like to share is the Batman symbol. I always see a mouth with teeth instead of a bat. I can see the bat, but I only saw it for the first time when it was pointed out to me. Even before posting this I double-checked, and I still don't see the bat at first glance; I have to look for it. So, for more than 20 years my brain has been incapable of recognizing the bat at first glance, even though I know it's a bat symbol. My brain's interpretation is different from most people's (there are many others besides me who couldn't see the bat at first glance - in case anyone suggests I need to check my head :)). A machine would probably interpret the symbol as a bat. But what if the machine decides to say it is actually an open mouth with huge teeth? Neither the bat nor the mouth looks anything like what you see in the real world.

While an optical illusion can be reproduced on paper, an auditory illusion taking place in our head cannot be readily demonstrated. You can't reproduce the sound that you hear. An example of an auditory illusion is here, or read the papers about the Glissando illusion, which even show that left-handers and right-handers hear differently. In research conducted at Heidelberg University, the general population divided into two groups: those who perceive missing fundamentals and those who primarily hear overtones.
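If anyone wants to try the missing-fundamental effect for themselves, below is a minimal Python sketch (assuming NumPy is installed; the 200 Hz fundamental, the two-second duration and the file name are just illustrative choices). It writes a tone containing only harmonics 2 to 5 of 200 Hz - there is no energy at 200 Hz itself, yet many listeners will still report the pitch as 200 Hz, while others hear the overtones separately. That is exactly the split the Heidelberg work describes.

```python
# Missing-fundamental sketch: harmonics 2-5 of a 200 Hz tone, with no
# energy at 200 Hz itself. Many listeners still hear the pitch at 200 Hz -
# the brain supplies the absent fundamental.
import wave
import numpy as np

fs = 44100                          # sample rate, Hz
f0 = 200.0                          # the absent fundamental, Hz (illustrative)
t = np.arange(int(fs * 2.0)) / fs   # two seconds of audio

signal = sum(np.sin(2 * np.pi * n * f0 * t) for n in range(2, 6))
signal /= np.max(np.abs(signal))                # normalise to +/-1
pcm = (signal * 0.5 * 32767).astype(np.int16)   # 16-bit at roughly -6 dBFS

with wave.open("missing_fundamental.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```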

So while measurements may prove the non-existence of differences, our brain may legitimately conclude otherwise.

My 2 cents.

ST
 
Y

yeecn

Guest
Why is it that a musical melody played in a minor key most likely evokes a sad feeling (as opposed to a major key)? (e.g. Satie's Gymnopedie No. 1 and other compositions).

I presume that it is related to the particular combination or progression of frequencies that are within the minor key, but why?
Granted, one can embellish the melody to increase the likelihood it will evoke sadness by playing it slowly instead of rapidly, or at a low volume instead of high - but at the root of it all is that it was played in a minor key.
Ha. I asked myself that question some 30 years ago, and I am still looking for an answer!

Anyway, your perception is right. Humans associate the emotion of music more with the tempo than with the major/minor key. In other words, slow music is much more likely to evoke a feeling of sadness than music in a minor key. See this report. What is more surprising is that even the 'sad' music evoked responses from the left frontal lobe - the 'happy' side of the brain!

This can be interpreted as a challenge to the valence lateralization model of the brain, which states that the left frontal lobe is the happy side and the right frontal lobe is the sad, melancholy side. But I am more inclined to believe that there is no sad music as such. Even funeral music is meant to offer relief and consolation! Blue music, with that tint of melancholy, can be very seductive. It probably served much more to soften men into getting involved with women than to drive them to suicide!

Anyway, my current feeling is that the major and minor keys are more like brighter and softer shades of color. They provide the necessary contrast that adds to the drama of music. The mode of the music is determined much more by the emotional state of the composer than by anything else. One noted example is Brahms. His deep emotional nature oozes out of every piece of his music. Even the supposedly sunny 2nd Symphony in D major is almost drowning in a sea of emotion!

Read this as well: http://www.guardian.co.uk/notesandqueries/query/0,,-20342,00.html
 

Labarum

Member
It is said that each key has its own character or mood, never mind major or minor. Of course, if they are played on an equally tempered keyboard they all ought to be the same (well, the majors and the minors), just at different pitches. Whether playing a tune in true or tempered intervals makes a difference, I don't know. I guess it must.

It is also said that the modes of medieval music, preserved in plainchant but found in other music too, all have their own moods.

It is a fascinating subject, and I know far too little about musical theory and human perception to have much to say on the issue.
 

weaver

New member
Music and the Human Brain



Levitin lists eight "dimensions" of music, meaning attributes that can be individually varied without affecting the other dimensions. Among the most important dimensions are: perceived pitch; rhythm; timbre; melody; and reverberation. It seems that the brain contains specialized modules for extracting each of these attributes. The evidence for this is that brain damage can cause a loss of one of the functions without affecting the others. (There is similar evidence in the case of speech, where it is possible to lose the ability to speak verbs, or to speak nouns, for example.) The brain appears to be a collection of highly specialized modules that are seamlessly integrated by a hierarchy of higher-order modules.
I have just got around to reading the rest of the page in this link, thank you.

I was interested in the particular point regarding brain modules and whether it had a relevance to one of the problems I have when auditioning equipment.

It often strikes me when comparing two set-ups that one will sound comparatively more detailed and will appear to more accurately reproduce timbre and so on, whilst the other will be more musically satisfying and will render music in a way that I find more natural and involving. It doesn't always go this way, but often enough for me to have wondered 'can you have both?' - does a system have to be either detailed or musical, with the two being somehow mutually exclusive?

I have become more aware of 'artificial' detail; the things that, particularly speaker designers, can do to make their products stand out in this area - but distinguishing between the real and the artificial in what is essentially an artificial (ie. not live) situation is something I still struggle with.

Back to the brain modules, then: is it possible that, if the first thing we perceive is an abundance of detail, the brain will latch on to this and magnify it to the possible detriment of other functions such as rhythm or harmony? Similarly, if there is comparatively less detail, does our brain focus more on the musical elements?

I have also been looking through the 'designer's notebook' sections of the main site with much interest, particularly with regard to measurements vs listening - or rather measuring + listening - and wonder whether, when things actually measure very similarly but sound substantially different, it could in part be due to a magnifying effect that the brain applies when it is given a very slight abundance of one aspect of music.
 
Y

yeecn

Guest
Why do we hear what we hear

Below is a workshop from the 127th AES (Audio Engineering Society) Convention, New York 2009.

http://www.youtube.com/watch?v=BYTlN6wjcvQ.

I posted this link earlier, but the subject matter is very relevant to our current discussion. The first part is a staged snatch-thief incident. The eyewitnesses were interviewed after the incident. It was quite astonishing how wrong these witnesses could be!

With the fallibility and suggestibility of human perception as an introduction, the rest of the workshop addressed many contentious issues like dither, EQ, sampling word size (bits), masking effects, microphones, amplifiers, etc. This is stuff that you won't get from an audiophile magazine.

This particular workshop has been debated at length, ad nauseam, here. So I don't think we need to enter into that debate again.

-------------------------------------------------------------------------------

Now here is a more elaborate explanation on the perceptual processes - and why we hear what we hear.

http://www.aes.org/sections/pnw/ppt/jj/highlevelnobg.ppt

To sum it up:
- When somebody guides your listening, you will change what you listen to.
- If you know something is changed in the system, you will expect changes in the output, and probably refocus.
- This is normal human behavior.
- It is something everyone does.
- It goes along with cognition, and is very nearly a property of cognition.
 
Y

yeecn

Guest
Yes, we have a very handy constant frame of reference right under our nose - literally.

Most of the artifacts (or coloration) that we associate with loudspeaker reproduction cannot be attributed to the live human voice box. So speech, reproduced over those speakers switched via the comparator, is invaluable for exposing what simply could not be present in the original voice but is (sadly) present to one degree or another in the reproduced voice. The real-live human voice doesn't 'do' spitty, wiry, gritty, grainy, pinched, peaky, barking, biting etc. - so if you hear those (and other) characteristics brought to the fore by instantaneous cross-comparison you have educated yourself.
It never occurred to me that there could be so many descriptions of the human voice. It reminded me of the claim that the Eskimo have 18 words for snow conditions. That elaborate level of distinction between snow conditions has no meaning for most of the rest of the world - and it would mean nothing to us unless we were forced to live near the North Pole and our lives depended on it.

But Alan's description of human voices means a lot to me. Well, I had to look up in the dictionary what 'gritty' means, but apart from that I know what Alan was talking about. These are qualities that I hear in human voices, and they are an important basis for how I differentiate the voices of the people around me.

Our human brains have developed great elaboration in voice recognition as a matter of evolutionary necessity. But can we say the same of other types of sound - for example, musical instruments? I have heard a few descriptions that audiophiles use for equipment - fast, slow, analytical, musical, PRaT. But what do they mean?

I mean - what is a fast amplifier? I can tell that my wife sounded wiry today. The pitch is higher and her vocal cords are more constricted, amongst other things, and that is most probably because she had a bad day beating the traffic jam.

I have some idea what 'analytical' means. I have read references to certain speakers making one feel as if one is right in front of the soloist. I think it is very likely that such a speaker has a lift in the critical midrange. It brings out a lot of detail that one would not usually hear, but at the expense of hiding detail outside the midrange region.

But what makes a fast amplifier 'fast' - and what makes a slow amplifier 'slow'? The term 'slow', for example, gives the impression that the amplifier is slow to respond to the input waveform, so there is a time lag of some sort. That would surely be a sign of very serious harmonic distortion. Has anybody seen any measurements or charts about it?

I bought a voltage regulator a while back, because the seller insisted that it made the sound better 'harnessed'. I bought the device because I did not doubt the sincerity of the salesperson. But after some frustrating time trying to hear how the device made the soprano voice more 'taut', I began to think that something was amiss.

I still trust the integrity of the salesperson, but I no longer trust his judgement on sound. I would classify it under the 'Believing is Hearing' syndrome, or the Emperor's New Clothes syndrome.

And what does PRaT mean? The term is so devoid of any objective reference that it feels ridiculous to discuss it.
 
H

honmanm

Guest
Harbeths slow & fast

The original and current Harbeth mini-monitors are a good example of the difference between "slow" and "fast" sound quality.

It's the speaker's ability to respond to musical transients that gives that feeling of "speed". I don't know whether it is because the driver moves slowly, doesn't move far enough (or gets slowed down by mechanical drag near the limits of its travel), or whether the momentum that it has picked up prevents accurate reproduction of the sound that follows the transient.

When I compared the original early '90s HL-P3s against a pair of Quad ESL63s, it was clear that the lightweight diaphragms of the electrostatic speakers were doing a much better job of following the transients (transients courtesy of that nice Mr. Beethoven via Carlos Kleiber and the VPO).

Now this is measurable - see Stereophile's impulse and step response measurements for HL-P3 and ESL63.
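(As an aside, the two plots are directly related: a step response is the impulse response accumulated over time, i.e. its running sum. The toy Python sketch below just shows that relationship - the decaying impulse response in it is invented for illustration, not Stereophile's measured data.)

```python
# Toy sketch: a system's step response is its impulse response convolved
# with a unit step, i.e. the running (cumulative) sum of the impulse response.
# The 'impulse' array here is an invented decaying example, not measured data.
import numpy as np

fs = 48000                                   # sample rate, Hz (illustrative)
t = np.arange(512) / fs
impulse = np.exp(-t * 3000) * np.sin(2 * np.pi * 4000 * t)

step = np.cumsum(impulse)                    # running sum = step response
print("step response settles near:", round(float(step[-1]), 4))
```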

When the P3ESRs arrived that same sense of speed & responsiveness was apparent (no implied comparison with the ESL63s, which had gone by then).

Now digging into the "fast" vs "slow" amplifier question, there are a few factors that might come into play - output impedance (including reactive components), slew rate (unlikely), but most likely the stiffness of the power supply ... in other words the ability of the power supply to supply current without a drop in voltage. The importance of this will be very dependent on the speakers; speakers that are an easy load will be less demanding of current (and less reactive), so the amplifier is less of an issue. Equally, if you listen at moderate volume the amplifier will have an easier time of it.
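To put a rough number on why slew rate is marked "(unlikely)": the slew rate a sine wave demands is 2·pi·f·Vpeak, and even a full-power 20 kHz tone from a fairly big amplifier only asks for a few volts per microsecond, which ordinary designs comfortably exceed. A back-of-envelope sketch (the 100 W / 8 ohm figures are purely illustrative):

```python
# Back-of-envelope check: slew rate demanded by a full-power 20 kHz sine.
# For a sine wave, max dV/dt = 2*pi*f*V_peak.
import math

power_w = 100.0        # illustrative amplifier rating into 8 ohms
load_ohm = 8.0
f_hz = 20_000.0        # top of the audio band

v_peak = math.sqrt(power_w * load_ohm) * math.sqrt(2)       # ~40 V
slew_v_per_us = 2 * math.pi * f_hz * v_peak / 1e6

print(f"peak output voltage: {v_peak:.1f} V")
print(f"required slew rate : {slew_v_per_us:.1f} V/us")     # about 5 V/us
```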

This is where the P3ESRs really shine compared to the original HL-P3s - the assortment of '90s mid-market amplifiers that friends have lent me each had a distinct character when driving the HL-P3s (I'm pretty sure this would be distinguishable via an ABX test), but when coupled with the P3ESRs the differences narrowed to the point where they were of the same order as changes in speaker position.

Now the funny thing is, the '90s HL-P3s have a marvellous way with that "vocalist right in front of you" type presentation, and that is one thing I miss with the P3ESRs. Alan has mentioned that the P3ESRs have a flatter response than their predecessors, so this is an example of a subjective preference for demonstrably less accurate reproduction.

Given the (according to Stereophile) "superbly flat response" of the HL-P3s, it seems that the presentation of soloists is strongly influenced by small deviations in the response curve.

I can't hear PRaT, but for some people it is a real issue that affects their enjoyment of music. It might have something to do with group delay and low frequency phase response...
 

A.S.

Administrator
Staff member
Diaphragm mass and bass ...

It's the speaker's ability to respond to musical transients that give that feeling of "speed". I don't know whether it is because the driver moves slowly, doesn't move far enough (or gets slowed down by mechanical drag near the limits of its travel), or whether the momentum that it has picked up prevents accurate reproduction of the sound ... Now the funny thing is, the '90s HL-P3s have a marvellous way with that "vocalist right in front of you" type presentation, ... Alan has mentioned that the P3ESRs have a flatter response than their predecessors, so this is an example of a subjective preference for demonstrably less accurate reproduction. ....
Goodness me, so many questions and so many suppositions!

First, I too don't really understand the adjectives 'fast' and 'slow' when applied to bass reproduction. Is it something to do with the quality or quantity of bass? It must be one or both. Physically it absolutely cannot be due to the drive unit slowing down or speeding up, because if it were, the frequency of the note would change. The rate at which the drive unit moves backwards and forwards solely defines the frequency that the unit traces. So the fast/slow thing can't actually be due to speed. Mechanical drag? Possibly.
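To make that concrete: for a sinusoidal note the cone position is x(t) = A·sin(2·pi·f·t), so its peak velocity is 2·pi·f·A - fixed once the frequency and excursion are fixed. If the cone genuinely moved more slowly at the same excursion, the pitch would have to fall. A tiny sketch with purely illustrative numbers:

```python
# For a sinusoidal note, cone motion is x(t) = A*sin(2*pi*f*t), so the
# peak cone velocity is 2*pi*f*A. 'Speed' is set entirely by frequency
# and excursion; a cone moving more slowly at the same excursion would,
# by definition, be reproducing a lower frequency.
import math

f_hz = 50.0            # a low bass note (illustrative)
excursion_mm = 3.0     # peak excursion (illustrative)

peak_velocity = 2 * math.pi * f_hz * (excursion_mm / 1000)  # m/s
print(f"peak cone velocity at {f_hz:.0f} Hz, {excursion_mm} mm: {peak_velocity:.2f} m/s")
```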

But you know, one reason why the electrostatic bass is so 'fluffy' and lacking in punch is that the diaphragm is extremely light - just a fraction of a gram. You need some inertia (= mass) to give the bass some apparent sonic weight. The trick with any mini-monitor design process is to balance the apparent subjective bass 'weight' with acceptable sensitivity and impedance. You can optimise (at best) two of those three parameters, but not all three simultaneously. A heavy cone (like the P3ES2's) may well give a certain bass quality, but with low efficiency and sensitivity.
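A rough feel for that two-out-of-three trade-off can be had from the textbook Thiele-Small reference-efficiency relation, eta0 = rho·(Bl)²·Sd² / (2·pi·c·Re·Mms²), together with fs = 1/(2·pi·sqrt(Cms·Mms)): doubling the moving mass lowers the free-air resonance by about half an octave (more apparent bass weight) but costs roughly 6 dB of efficiency. The sketch below uses invented driver values, not Harbeth data:

```python
# Sketch of the cone-mass trade-off using the textbook Thiele-Small
# reference efficiency  eta0 = rho*Bl^2*Sd^2 / (2*pi*c*Re*Mms^2)  and
# free-air resonance  fs = 1/(2*pi*sqrt(Cms*Mms)).
# All driver values below are invented for illustration, not Harbeth data.
import math

rho, c = 1.18, 345.0            # air density (kg/m^3), speed of sound (m/s)
Bl, Sd, Re = 5.0, 0.0085, 6.0   # force factor (T*m), cone area (m^2), DC resistance (ohm)
Cms = 1.0e-3                    # suspension compliance (m/N)

def driver(Mms):
    fs = 1 / (2 * math.pi * math.sqrt(Cms * Mms))                  # Hz
    eta0 = rho * Bl**2 * Sd**2 / (2 * math.pi * c * Re * Mms**2)
    spl = 112 + 10 * math.log10(eta0)                              # dB, 1 W / 1 m (approx.)
    return fs, spl

for Mms in (0.008, 0.016):      # 8 g vs 16 g moving mass
    fs, spl = driver(Mms)
    print(f"Mms = {Mms*1000:.0f} g: fs = {fs:.0f} Hz, sensitivity ~ {spl:.1f} dB")
```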

It's also a fact that as bass units age, their cone (rubber) surrounds can undergo significant internal chemical changes after ten years or so, depending upon the material. The rubbers used in the Harbeth-made 5" and 8" woofers seem to be resistant to change with the passage of time, and hence have a very stable bass output. Mk1s made in 1977 show no significant change in the bass from new. Conversely, an original BBC-type 8" unit with a white PVC surround has over a period of years completely changed - see attached. Originally the surround would have had a half-roll, which permitted the cone to move forwards and backwards. Unfortunately, the PVC material has a 'memory' and wants to return to the original flat-sheet, which in this example it has. You can imagine that the bass quantity and quality is completely different now from when it was first made.

There is a great danger in reading reviews out of context: that was a snapshot of how a mechanical system (a loudspeaker) behaved at that point in its life. It's the same with an annual compulsory MOT safety check on motor vehicles in the UK: it is a statement about the vehicle's performance and safety on the day of testing only. Buying a used loudspeaker has much in common with buying a used car: they are both mechanical systems. You would not expect (would you?) a ten year old car, even one that has been stored and never used, to have the same suspension characteristics and ride as a brand new one. You'd be daft to. The shock absorbers will dry-out with age in the same way that a bass unit rubber surround will change. A little or a lot depending upon many factors.
 
Attachments only viewable to members
H

honmanm

Guest
Goodness me, so many questions and so many suppositions!
Sorry Alan, these were just a few of the questions floating around waiting for an appropriate moment to ask. For a few months now I've been mulling over why the '90s HL-P3s and new P3ESRs sound so different (even within the comfort zone of the older speaker).
 

A.S.

Administrator
Staff member
The inner ear, in graphic detail

Attached is a wonderful PDF with electron-microscope scans of the physical, mechanical structure of the inner ear - the cells that translate the sound-pressure variations entering our ear canal into the electrical bursts that our brain interprets as continuous sound.

In my humble opinion, to be able to talk with any credibility about comparing audio event A with B, and the innumerable traps for the unwary, casual, emotionally involved listener who claims objectivity, it is of paramount importance to make a basic study of the ear. This PDF goes a long way to doing that.

The point to take away is that the way most audiophiles think the ear works, and the expectations of quality and sensitivity they assume, are wide of the mark. Examined under the electron microscope, no two ears are the same. Nor is the left ear the same as the right. Nor is the physical structure of the hair cells in the inner ear at the age of ten anything like that of the same ear at the age of 50. So to rely solely on the ear as an objective measure of audio is the height of folly. We have to give the easily damaged, naturally degrading ear a fighting chance, and that means we have to remove as many confounding variables as possible from listening comparisons, to allow the ear to work at its maximum, but necessarily limited, potential.

 
Attachments only viewable to members

Finbarr

New member
Rationalism, or not?

Not only must we distrust our senses; our reasoning can also let us down. Here's a quick test for you (and it's not a trick question):

You are on a game show. You're trying to win a car and you're in the final round. Every week the same thing happens. The host shows the contestant 3 closed doors. Behind one is a car. Behind each of the other 2 is a goat. He asks the contestant to nominate a door. He then opens a different door to reveal a goat. He asks the contestant if he should stick with his original choice or switch to the last remaining door.

It's your turn... You nominate door 1. He opens door 3 and shows you a goat. He now asks you whether you will stick with door 1 or switch to door 2. Should you stick? Switch? Or does it make any difference?
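If you'd rather let a computer settle it than trust your own reasoning, here is a minimal Monte Carlo sketch of the game exactly as described (standard-library Python; the trial count is arbitrary). Run it before you commit to an answer.

```python
# Monte Carlo sketch of the game-show puzzle: the car is placed at random,
# you always nominate door 1, the host then opens a goat door you did not
# pick, and we count how often sticking vs switching wins the car.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randint(1, 3)
        pick = 1
        opened = next(d for d in (2, 3) if d != car)    # host reveals a goat
        if switch:
            pick = next(d for d in (1, 2, 3) if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials

print("stick :", play(switch=False))
print("switch:", play(switch=True))
```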
 