Hearing + Color vision Flashcards

1
Q

Describe what happens with mixing lights

A
  • If a light that appears blue is projected onto a white surface and a light that appears yellow is projected on top of the light that appears blue, the area where the lights are superimposed is perceived as white
  • Because the 2 spots of light are projected onto a white surface, which reflects all wavelengths, all of the wavelengths that hit the surface are reflected into an observer’s eyes
  • The blue spot consists of a band of short wavelengths, so when it is projected alone, the short-wavelength light is reflected into the observer’s eyes
  • Similarly, the yellow spot consists of medium and long wavelengths, so when presented alone, these wavelengths are reflected into the observer’s eyes
  • When colored lights are superimposed, all of the light that’s reflected from the surface by each light when alone is also reflected when the lights are superimposed
    -Thus, where the 2 spots are superimposed, the light from the blue spot and the light from the yellow spot are both reflected into the observer’s eye
  • The added-together light therefore contains short, medium, and long wavelengths, which results in the perception of white
  • Because mixing lights involves adding up the wavelengths of each light in the mixture, mixing lights is called an additive color mixture
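  • As a rough illustration (an assumption of this note, not part of the card), additive mixing can be sketched with RGB light values, where each channel stands in for a band of wavelengths:

```python
# Rough sketch: RGB channels stand in for short/medium/long wavelength bands (an assumption).
def add_lights(*lights):
    """Additively mix lights given as (R, G, B) intensities in 0-255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

blue_spot = (0, 0, 255)      # short wavelengths only
yellow_spot = (255, 255, 0)  # medium + long wavelengths

print(add_lights(blue_spot, yellow_spot))  # (255, 255, 255) -> perceived as white
```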
2
Q

Summarize the connection between wavelength and color

A
  • Colors of light are associated with wavelengths in the visible spectrum
  • The colors of objects are associated with which wavelengths are reflected (for opaque objects) or transmitted (for transparent objects)
  • The colors that occur when we mix colors are also associated with which wavelengths are reflected into the eye
  • Mixing paints causes fewer wavelengths to be reflected (each paint subtracts wavelengths from the mixture); mixing lights causes more wavelengths to be reflected (each light adds wavelengths to the mixture)
3
Q

Isaac Newton described the visible spectrum in his experiments in terms of what 7 colors?

A
  • Red
  • Orange
  • Yellow
  • Green
  • Blue
  • Indigo
  • Violet
  • His use of 7 color terms probably had more to do with mysticism than science, however, as he wanted to harmonize the visible spectrum (7 colors) with musical scales (7 notes), the passage of time (7 days in a week), astronomy (7 known planets at the time), and religion (7 deadly sins)
4
Q

What are spectral colors?

A

Colors that appear in the visible spectrum

5
Q

Why do modern vision scientists tend to exclude indigo from the list of spectral colors?

A

Because humans actually have a difficult time distinguishing it from blue and violet

6
Q

What are non-spectral colors?

A
  • Colors that don’t appear in the spectrum because they are mixtures of other colors
  • Ex: magenta, which is a mixture of red and blue
7
Q

How many colors are humans estimated to be able to discriminate between?

A

A conservative estimate is that we can tell the difference between about 2.3 million different colors

8
Q

What’s an example of something that highlights the enormous number of colors we can differentiate?

A
  • If you’ve ever decided to paint your bedroom wall, you will have discovered a dizzying number of color choices in the paint department of your local home improvement store
  • Major paint manufacturers have thousands of colors in their catalogs, and your computer monitor can display millions of different colors
9
Q

How can we perceive millions of colors when we can describe the visible spectrum in terms of only 6 or 7 colors?

A

Because there are 3 perceptual dimensions of color, which together can create the large number of colors we can perceive

10
Q

What are the 3 perceptual dimensions of color?

A
  • Hue
  • Saturation
  • Value
11
Q

What’s hue?

A
  • The experience of a chromatic color, such as red, green, yellow, or blue, or combinations of these colors
  • AKA chromatic colors
12
Q

What’s saturation?

A
  • Refers to the intensity of color
  • The relative amount of whiteness in a chromatic color
  • The less whiteness a color contains, the more saturated it is
  • The more whiteness is added, the more the saturation decreases
13
Q

What happens when hues become desaturated?

A

They can take on a faded or washed-out appearance

14
Q

What does desaturated mean?

A
  • Low saturation in chromatic colors as would occur when white is added to a color
  • Ex: pink isn’t as saturated as red
15
Q

What’s value?

A
  • AKA lightness
  • The light-to-dark dimension of color
  • Value decreases as the colors become darker
16
Q

What’s lightness?

A

The perception of shades ranging from white to gray to black

17
Q

What’s a color solid?

A
  • A solid in which colors are arranged in an orderly way based on their hue, saturation, and value
  • A way to arrange colors systematically within a three-dimensional color space
18
Q

What’s the Munsell color system?

A
  • Depiction of hue, saturation, and value developed by Albert Munsell in the early 1900s in which different hues are arranged around the circumference of a cylinder with perceptually similar hues placed next to each other
  • Hue is arranged in a circle around the vertical
  • The vertical represents value or lightness -> value is represented by the cylinder’s height, with lighter colors at the top and darker colors at the bottom
  • Saturation increases with distance away from the vertical -> depicted by placing more saturated colors toward the outer edge of the cylinder and more desaturated colors toward the center
  • The order of the hues around the cylinder matches the order of the colors in the visible spectrum
  • The color solid therefore creates a coordinate system in which our perception of any color can be defined by hue, saturation, and value
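  • A minimal sketch of that coordinate-system idea (hypothetical scales, not Munsell’s actual notation): hue is an angle around the cylinder, value is height, and saturation is distance from the central gray axis

```python
# Illustrative sketch with hypothetical scales (not Munsell's actual notation).
import math

def cylinder_position(hue_deg, value, saturation):
    """Place a color in a cylindrical color space: hue = angle, value = height,
    saturation = radial distance from the central gray axis."""
    x = saturation * math.cos(math.radians(hue_deg))
    y = saturation * math.sin(math.radians(hue_deg))
    z = value
    return (x, y, z)

# A desaturated light red sits near the axis and high up; a saturated red sits far out.
print(cylinder_position(hue_deg=0, value=8, saturation=2))
print(cylinder_position(hue_deg=0, value=5, saturation=10))
```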
19
Q

How did Newton explain the retinal basis of color vision through his prism experiment?

A
  • When he separated white light into its components to reveal the visible spectrum, he argued that each component of the spectrum must stimulate the retina differently in order for us to perceive color
  • He proposed that “rays of light in falling upon the bottom of the eye excite vibrations in the retina. Which vibrations, being propagated along the fibres of the optic nerves into the brain, cause the sense of seeing”
  • Electrical signals, not “vibrations,” are what is transmitted down the optic nerve to the brain, but Newton was on the right track in proposing that activity associated with different lights gives rise to the perceptions of different colors
20
Q

How did Thomas Young expand on Newton’s explanation of the retinal basis of color vision through his prism experiment?

A
  • He suggested that Newton’s idea of a link between each size of vibration and each color wouldn’t work, because a particular place on the retina can’t be capable of the large range of vibrations required
  • He stated “Now, as it is almost impossible to conceive of each sensitive point on the retina to contain an infinite number of particles, each capable of vibrating in perfect unison with every possible undulation, it becomes necessary to suppose the number limited, for instance, to the three principal colors, red, yellow, and blue” (Young, 1802)
  • It’s this proposal—that color vision is based on 3 principal colors—that marks the birth of what is today called the trichromacy of color vision
  • However, Young’s theory was little more than an insightful idea that, if correct, would provide a solution to the puzzle of color perception
  • Young had little interest in conducting experiments to test his ideas and never published any research to support his theory
21
Q

What’s the theory of trichromacy of color vision?

A
  • AKA Young-Helmholtz theory
  • The idea that our perception of color is determined by the ratio of activity in 3 receptor mechanisms with different spectral sensitivities
  • According to this theory, light of a particular wavelength stimulates each receptor mechanism to different degrees, and the pattern of activity in the 3 mechanisms results in the perception of a color
  • Each wavelength is therefore represented in the nervous system by its own pattern of activity in the 3 receptor mechanisms
22
Q

Who conducted experiments to prove Thomas Young’s theory of trichromacy of color vision?

A
  • James Clerk Maxwell (1831–1879) and Hermann von Helmholtz
  • Although Maxwell conducted his experiments before Helmholtz, Helmholtz’s name became attached to Young’s idea of 3 receptors, and trichromatic theory became known as the Young-Helmholtz theory
  • This has been attributed to Helmholtz’s prestige in the scientific community and to the popularity of his Handbook of Physiology (1860), in which he described the idea of 3 receptor mechanisms
23
Q

The trichromacy of color vision is supported by the results of what kind of behavioural procedure?

A

A psychophysical procedure called color matching

24
Q

What’s color matching?

A

A procedure in which observers are asked to match the color in one field by mixing 2 or more lights in another field

25
Q

Describe the procedure of color matching

A
  • The experimenter presents a reference color that is created by shining a single wavelength of light on a “reference field”
  • The observer then matches the reference color by mixing the wavelengths of light in a “comparison field”
  • Ex: an observer could be shown a 500-nm light in the reference field and then be asked to adjust the amounts of 420-nm, 560-nm, and 640-nm lights in the comparison field, until the perceived color of the comparison field matches the reference field (bipartite field)
  • In a color-matching experiment, a wavelength in one field is matched by adjusting the proportions of 3 different wavelengths in another field
  • This result is interesting because the lights in the 2 fields are physically different (they contain different wavelengths) but they are perceptually identical (they look the same) -> metamerism
26
Q

What was Maxwell’s key finding for his color-matching experiments?

A
  • That any reference color could be matched provided that observers were able to adjust the proportions of 3 wavelengths in the comparison field
  • 2 wavelengths allowed participants to match some, but not all, reference colors, and they never needed 4 wavelengths to match any reference color
  • Based on the finding that people with normal color vision need at least 3 wavelengths to match any other wavelength, Maxwell reasoned that color vision depends on 3 receptor mechanisms, each with different spectral sensitivities
27
Q

The discovery of 3 types of cones in the human retina was made using what technique?

A
  • Microspectrophotometry
  • This made it possible to direct a narrow beam of light into a single cone receptor
  • By presenting light at wavelengths across the spectrum, it was determined that there are 3 types of cones, each with its own absorption spectrum
  • This discovery provided physiological support for the trichromacy that was based on the results of Maxwell’s color matching experiments
  • These new measurements were important because they were not only consistent with trichromacy as predicted by color matching, but they also revealed the exact spectra of the 3 cone mechanisms and the large overlap between the L and M cones
28
Q

What’s microspectrophotometry?

A
  • A technique in which a narrow beam of light is directed into a single visual receptor
  • This technique makes it possible to determine the pigment absorption spectra of single receptors
29
Q

Describe the absorption spectra of the 3 cone pigments

A
  • The short-wavelength pigment (S) absorbs maximally at 419 nm
  • The middle-wavelength pigment (M) at 531 nm
  • The long-wavelength pigment (L) at 558 nm
30
Q

What’s adaptive optical imaging?

A
  • Another advance in describing the cones
  • A technique that makes it possible to look into a person’s eye and take pictures of the receptor array in the retina (showed how the cones are arranged on the surface of the retina)
  • This was an impressive achievement, because the eye’s cornea and lens contain imperfections called aberrations that distort the light on its way to the retina
  • This method creates a sharp image by first measuring how the optical system of the eye distorts the image reaching the retina, and then taking a picture through a deformable mirror that cancels the distortion created by the eye
  • The result is a clear picture of the cone mosaic, which shows foveal cones
  • Colors are added after the images are created to distinguish the long- (red), medium- (green), and short-wavelength (blue) cones
31
Q

What are aberrations?

A

Imperfections in the eye’s cornea and lens that distort light on its way to the retina

32
Q

Can optometrists see our cones?

A
  • No, they would need adaptive optical imaging
  • When your optometrist or ophthalmologist uses an ophthalmoscope to look into your eye, they can see blood vessels and the surface of the retina, but the image is too blurry to make out individual receptors -> this blur is largely due to aberrations
33
Q

What’s a cone mosaic?

A

Arrangement of long- (red), medium- (green), and short-wavelength (blue) cones in a particular area of the retina

34
Q

How is short-wavelength light, which appears blue in the spectrum, signaled in the receptors?

A

It’s signaled by a large response in the S receptor, a smaller response in the M receptor, and an even smaller response in the L receptor

35
Q

How is yellow signaled in the receptors?

A

Yellow is signaled by a very small response in the S receptor and large responses in the M and L receptors

36
Q

How is white signaled in the receptors?

A

White is signaled by equal activity in all the receptors

37
Q

Other than the varying wavelengths stimulating our receptors, what other factors affect our perception of color?

A
  • Our state of adaptation
  • The nature of our surroundings
  • Our interpretation of the illumination
38
Q

What’s metamerism?

A
  • The situation in which 2 physically different stimuli are perceptually identical
  • In vision, this refers to 2 lights with different wavelength distributions that are perceived as having the same color
39
Q

What are the 2 identical fields in a color-matching experiment called?

A

Metamers

40
Q

What are metamers?

A
  • 2 lights that have different wavelength distributions but are perceptually identical
  • The reason metamers look alike is that they both result in the same pattern of response in the 3 cone receptors
  • Ex: when the proportions of a 620-nm light that looks red and a 530-nm light that looks green are adjusted so the mixture matches the color of a 580-nm light, which looks yellow, the 2 mixed wavelengths create the same pattern of activity in the cone receptors as the single 580-nm light
  • The 530-nm light causes a large response in the M receptor, and the 620-nm light causes a large response in the L receptor
  • Together, they result in a large response in the M and L receptors and a much smaller response in the S receptor
  • This is the pattern for yellow and is the same as the pattern generated by the 580-nm light
  • Thus, even though the lights in these 2 fields are physically different, the 2 lights result in identical physiological responses so they are identical as far as the brain is concerned and they are therefore perceived as being the same
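  • A small sketch of this logic, using made-up cone sensitivities (illustrative numbers only, not measured spectra)

```python
# Hypothetical relative sensitivities of the S, M, and L cones at three wavelengths
# (illustrative numbers only, not measured absorption spectra).
SENS = {
    530: {"S": 0.05, "M": 0.90, "L": 0.60},
    580: {"S": 0.02, "M": 0.75, "L": 0.95},
    620: {"S": 0.00, "M": 0.20, "L": 0.70},
}

def cone_responses(lights):
    """lights: dict of wavelength -> intensity; returns the summed S, M, L responses."""
    return {cone: sum(intensity * SENS[wl][cone] for wl, intensity in lights.items())
            for cone in ("S", "M", "L")}

# The single 580-nm light and the 530 + 620 mixture both produce the same kind of
# pattern (small S, large M and L); in a real color match the intensities would be
# fine-tuned until the three responses are equal, making the two fields metamers.
print(cone_responses({580: 1.0}))
print(cone_responses({530: 0.55, 620: 0.85}))
```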
41
Q

What’s monochromatism?

A
  • Rare form of color blindness in which the absence of cone receptors results in perception only of shades of lightness (white, gray, and black), with no chromatic color present
  • Usually hereditary and occurs in only about 10 people out of 1 million
  • Monochromats usually have no functioning cones, so their vision is created only by the rods
  • Their vision, therefore, has the characteristics of rod vision in both dim and bright lights so they see only in shades of lightness (white, gray, and black)
  • A person with normal color vision can experience what it’s like to be a monochromat by sitting in the dark for several minutes
  • When dark adaptation is complete, vision is controlled by the rods, which causes the world to appear in shades of gray
  • Because monochromats perceive all wavelengths as shades of gray, they can match any wavelength by picking another wavelength and adjusting its intensity
  • Thus, a monochromat needs only one wavelength to match any wavelength in the spectrum
42
Q

Why is color vision not possible in a person with just one receptor type?

A
  • We can understand this by considering how a person with just one pigment would perceive 2 lights, one 480 nm and one 600 nm, which a person with normal color vision sees as blue and red
  • The absorption spectrum for the single pigment indicates that the pigment absorbs 10% of 480-nm light and 5% of 600-nm light
  • When light is absorbed by the retinal part of the visual pigment molecule, the retinal changes shape (isomerization)
  • The visual pigment molecule isomerizes when the molecule absorbs one photon of light
  • This isomerization activates the molecule and triggers the process that activates the visual receptor and leads to seeing the light
  • If the intensity of each light is adjusted so 1,000 photons of each light enter our one-pigment observer’s eyes, the 480-nm light isomerizes 1000 x 0.10 = 100 visual pigment molecules and the 600-nm light isomerizes 1000 x 0.05 = 50 molecules
  • Because the 480-nm light isomerizes 2x as many visual pigment molecules as the 600-nm light, it will cause a larger response in the receptor, resulting in perception of a brighter light
  • But if we increase the intensity of the 600-nm light to 2,000 photons, then this light will also isomerize 100 visual pigment molecules
  • When the 1,000 photon 480-nm light and the 2,000 photon 600-nm light both isomerize the same number of molecules, the result will be that the 2 spots of light will appear identical
  • Thus, by adjusting the intensities of the 2 lights, we can cause the single pigment to result in identical responses, so the lights will appear the same even though their wavelengths are different
  • A person with only one visual pigment can match any wavelength in the spectrum by adjusting the intensity of any other wavelength and sees all of the wavelengths as shades of gray
  • Thus, adjusting the intensity appropriately can make the 480-nm and 600-nm lights (or any other wavelengths) look identical
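  • The card’s arithmetic, written out as a small sketch

```python
# The card's numbers: the single pigment absorbs 10% of 480-nm light and 5% of 600-nm light.
def isomerizations(photons, absorption_fraction):
    """Number of visual pigment molecules isomerized (photons absorbed)."""
    return photons * absorption_fraction

print(isomerizations(1000, 0.10))  # 480-nm light, 1,000 photons -> 100
print(isomerizations(1000, 0.05))  # 600-nm light, 1,000 photons -> 50 (looks dimmer)
print(isomerizations(2000, 0.05))  # 600-nm light, 2,000 photons -> 100 (now looks identical)
```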
43
Q

What does a person need to perceive chromatic color?

A

More than one type of receptor

44
Q

What does it mean to be color blind?

A
  • A condition in which a person perceives no chromatic color
  • This can be caused by absent or malfunctioning cone receptors or by cortical damage
45
Q

Why doesn’t the difference in the wavelengths of light matter to an individual receptor’s response?

A
  • Because of the principle of univariance, which states that once a photon of light is absorbed by a visual pigment molecule, the identity of the light’s wavelength is lost
  • Absorption of a photon causes the same effect, no matter what the wavelength is
  • Any two wavelengths can cause the same response by changing the intensity
  • An isomerization is an isomerization no matter what wavelength caused it
  • Univariance means that the receptor doesn’t know the wavelength of light it has absorbed, only the total amount it has absorbed
  • Thus, by adjusting the intensities of 2 lights, we can cause a single pigment to result in identical responses, so the lights will appear the same even though their wavelengths are different
46
Q

What are dichromats?

A
  • A person who has a form of color deficiency
  • Dichromats have just 2 types of cone pigment
  • They see chromatic colors, just as our calculations predict, but because they have only 2 types of cones, they confuse some colors that trichromats can distinguish
  • Dichromats can match any wavelength in the spectrum by mixing 2 other wavelengths
47
Q

What are trichromats?

A
  • A person with normal color vision
  • Trichromats can match any wavelength in the spectrum by mixing 3 other wavelengths in various proportions
48
Q

What are ways of determining the presence of colour deficiency?

A
  1. By using the color-matching procedure to determine the minimum number of wavelengths needed to match any other wavelength in the spectrum
  2. With a color vision test that uses stimuli called Ishihara plates
49
Q

What are Ishihara plates?

A
  • A display of colored dots used to test for the presence of color deficiency
  • The dots are colored so that people with normal (trichromatic) color vision can perceive numbers in the plate, but people with color deficiency cannot perceive these numbers or perceive different numbers than someone with trichromatic vision
50
Q

What’s a unilateral dichromat?

A
  • A person who has dichromatic vision in one eye and trichromatic vision in the other eye
  • People with this condition (which is extremely rare) have been tested to determine what colors dichromats perceive by asking them to compare the perceptions they experience with their dichromatic eye and their trichromatic eye
  • Both of the unilateral dichromat’s eyes are connected to the same brain, so this person can look at a color with his dichromatic eye and then determine which color it corresponds to in his trichromatic eye
51
Q

What do we need to do to determine what a dichromat perceives compared to a trichromat?

A

We need to locate and experiment on a unilateral dichromat

52
Q

What are the 3 major forms of dichromatism?

A
  • Protanopia
  • Deuteranopia
  • Tritanopia
53
Q

What are the 2 most common kinds of dichromatism?

A
  • Protanopia and Deuteranopia
  • These are inherited through a gene located on the X chromosome
  • They result in the same perception of blues, greys and yellows
54
Q

Why are males more likely than females to have a colour deficiency?

A
  • Because both Protanopia and Deuteranopia are inherited through a gene located on the X chromosome
  • Males (XY) have only one X chromosome, so a defect in the visual pigment gene on this chromosome causes color deficiency
  • Females (XX), on the other hand, with their 2 X chromosomes, are less likely to become color deficient because only one normal gene is required for normal color vision
  • These forms of color deficiency are therefore called sex-linked because women can carry the gene for color deficiency without being color deficient themselves
  • Thus, many more males than females are dichromats
55
Q

What’s protanopia?

A
  • A form of dichromatism in which a protanope is missing the long-wavelength pigment (no red), and perceives short-wavelength light as blue and long-wavelength light as yellow
  • This affects 1% of males and 0.02% of females
  • Results in the perception of colors as only blues and yellows (no red)
  • As a result, a protanope perceives short-wavelength light as blue, and as the wavelength is increased, the blue becomes less and less saturated until, at 492 nm, the protanope perceives gray
  • The wavelength at which the protanope perceives gray is called the neutral point
  • At wavelengths above the neutral point, the protanope perceives yellow, which becomes less intense at the long wavelength end of the spectrum
56
Q

What’s deuteranopia?

A
  • A form of dichromatism in which a person is missing the medium-wavelength pigment (no green)
  • A deuteranope perceives turquoise at short wavelengths, sees yellow at long wavelengths, and has a neutral point at about 498 nm
  • This affects about 1% of males and 0.01% of females and results in the perception of colour as blues and yellows
57
Q

What’s tritanopia?

A
  • A form of dichromatism in which a person is missing the short-wavelength pigment
  • Very rare -> affecting only about 0.002% of males and 0.001% of females
  • A tritanope sees blue at short wavelengths and red at long wavelengths, so the spectrum appears as blues, greys, and reds
  • Has a neutral point at 570 nm
58
Q

Other than monochromatism and dichromatism, what’s another prominent type of colour deficiency?

A

Anomalous trichromatism

59
Q

What’s anomalous trichromatism?

A
  • A type of color deficiency in which a person needs to mix a minimum of 3 wavelengths to match any other wavelength in the spectrum but mixes these wavelengths in different proportions than a trichromat
  • They are not as good as a trichromat at discriminating between wavelengths that are close together
60
Q

What’s Hering’s opponent-process theory of color vision?

A
  • Theory originally proposed by Hering, which claimed that our perception of colour is determined by the activity of 3 opponent mechanisms: a blue–yellow mechanism, a red–green mechanism, and a black–white mechanism
  • The responses to the 2 colours in each chromatic mechanism oppose each other, one being an excitatory response and the other an inhibitory response
  • These responses were believed to be the result of chemical reactions in the retina
  • The black–white mechanism is concerned with the perception of brightness
  • He picked these pairs of colors based on phenomenological observations, in which observers described the colors they were experiencing
  • This was based in part on his observation that staring at a coloured stimulus produces an afterimage in the opposing colour
61
Q

What are the 2 types of behavioural evidence for opponent-process theory?

A
  • Phenomenological
  • Psychophysical
62
Q

Describe the phenomenological evidence for Hering’s opponent-process theory

A
  • This evidence is based on colour experience
  • This evidence was central to Hering’s proposal of opponent-process theory
  • His ideas about opponent colours were based on people’s colour experiences when looking at a colour circle
  • Hering identified 4 primary colors (red, yellow, green, and blue) and proposed that each of the other colors is made up of combinations of these primary colors
  • This was demonstrated using a procedure called hue scaling, in which participants were given colors from around the hue circle and told to indicate the proportions of red, yellow, blue, and green that they perceived in each color
  • One result was that each of the primaries was “pure”
  • Ex: there’s no yellow, blue, or green in the red
  • The other result was that each of the intermediate colors, like purple or orange, was judged to contain mixtures of 2 or more of the primaries
  • Results such as these led him to call the primary colors unique hues
  • He proposed that our color experience is built from the 4 primary chromatic colors arranged into 2 opponent pairs: yellow–blue and red–green
  • To these chromatic colors, Hering also considered black and white to be an opponent achromatic pair
63
Q

What’s a colour circle?

A
  • An arrangement of colors in a circle, with perceptually similar colors located next to each other around its perimeter
  • In the color circle, colors across from each other are complementary colors
  • The difference between a color circle and a color solid is simply that the color circle focuses only on hue, without considering variations in saturation or value
  • Hering’s colour circle has colors on the left appear blueish, colors on the right appear yellowish, colors on the top appear reddish, and colors on the bottom appear greenish
  • Lines connect opponent colors
64
Q

What are complementary colours?

A

Colours which when combined cancel each other to create white or gray

65
Q

What are the 4 primary colours that Hering identified?

A
  • Red
  • Yellow
  • Green
  • Blue
  • He referred to these as unique hues
66
Q

What’s hue scaling?

A

Procedure in which participants are given colors from around the hue circle and told to indicate the proportions of red, yellow, blue, and green that they perceive in each color

67
Q

What are the 3 reasons why Hering’s phenomenological opponent-process proposal wasn’t widely accepted?

A
  1. Its main competition, trichromatic theory, was championed by Helmholtz, who had great prestige in the scientific community
  2. Hering’s phenomenological evidence, which was based on describing the appearance of colors, couldn’t compete with Maxwell’s quantitative color mixing data
  3. There was no neural mechanism known at that time that could respond in opposite ways
68
Q

Describe the psychophysical evidence for Hering’s opponent-process theory

A
  • The idea of opponency was given a boost in the 1950s by Leo Hurvich and Dorothea Jameson’s (1957) hue cancellation experiments
  • The purpose of the hue cancellation experiments was to provide quantitative measurements of the strengths of the B–Y and R–G components of the opponent mechanisms
  • Through these, they found that blue opposes yellow and that green opposes red
  • Hurvich and Jameson’s hue cancellation experiments were an important step toward acceptance of opponent-process theory because they went beyond Hering’s phenomenological observations by providing quantitative measurements of the strengths of the opponent mechanisms
69
Q

Describe the method of hue cancellation

A
  • Procedure in which a subject is shown a monochromatic reference light and is asked to remove, or “cancel,” one of the colors in the reference light by adding a 2nd wavelength
  • Ex: We begin with a 430-nm light, which appears blue
  • Hurvich & Jameson (1957) reasoned that since yellow is the opposite of blue and therefore cancels it, they could determine the amount of blueness in a 430-nm light by determining how much yellow needs to be added to cancel all perception of “blueness”
  • Once this is determined for the 430-nm light, the measurement is repeated for 440 nm and so on, across the spectrum, until reaching the wavelength where there is no blueness
  • This method was then used to determine the strength of the yellow mechanism by determining how much blue needs to be added to cancel yellowness at each wavelength
  • For red and green, the strength of the red mechanism is determined by measuring how much green needs to be added to cancel the perception of redness, and the strength of the green mechanism, by measuring how much red needs to be added to cancel the perception of greenness
70
Q

Describe the physiological evidence for Opponent-Process Theory

A
  • Even more crucial for the acceptance of opponent-process theory was the discovery of opponent neurons, which respond with excitation to light from one part of the spectrum and with inhibition to light from another part
  • In an early paper reporting such neurons in the lateral geniculate nucleus (LGN) of the monkey, Russell DeValois (1960) recorded from neurons showing exactly this pattern of opposing responses
  • Later work identified opponent cells with different receptive field layouts (circular single opponent, circular double opponent, and side-by-side single opponent)
  • This provided physiological evidence for the opponency of color vision
  • The opponent neurons can be created by inputs from the 3 cones
  • Ex: the L-cone sends excitatory input to a bipolar cell, whereas the M-cone sends inhibitory input to the cell. This creates a +L –M cell that responds with excitation to the long wavelengths that cause the L-cone to fire and with inhibition to the medium wavelengths that cause the M-cone to fire
  • Ex: the +S –ML cell also receives inputs from the cones. It receives an excitatory input from the S cone and an inhibitory input from cell A, which sums the inputs from the M and L cones
  • Opponent responding has also been observed in a number of cortical areas, including the visual receiving area (V1)
71
Q

What are the 3 different receptive field layouts for opponent cells?

A
  • Circular single opponent
  • Circular double opponent
  • Side-by-side single opponent
72
Q

What kind of colour display do circular single opponent cells and side-by-side single opponent cells respond to?

A

Large areas of colour

73
Q

What kind of colour display do circular double opponent cells respond to?

A

Colour patterns and borders

74
Q

What are opponent neurons?

A

A neuron that has an excitatory response to wavelengths in one part of the spectrum and an inhibitory response to wavelengths in another part of the spectrum

75
Q

Describe how researchers questioned the idea of unique hues

A
  • The proposed specialness of unique hues led researchers who first recorded from opponent neurons to give them names like +B –Y and +R –G that corresponded to the unique hues
  • The implication of these labels is that these neurons are responsible for our perception of these hues
  • One argument against the idea of a direct connection between the firing of opponent neurons and perceiving primary or unique hues is that the wavelengths that cause maximum excitation and inhibition don’t match the wavelengths associated with the unique hues
  • And recent research has repeated hue scaling experiments, using different primaries—orange, lime, purple, and teal—and obtained results similar to what occurred with red, green, blue, and yellow
  • That is, orange, lime, purple, and teal were rated as if they were “pure” (ex: orange was rated as not containing any lime, purple, or teal)
  • Opponent neurons are certainly important for color perception, because opponent responding is how color is represented in the cortex
  • But perhaps the idea of unique hues may not be helping us figure out how neural responding results in specific colors
  • Apparently, it’s not as simple as +M –L equals +G –R, which is directly related to perceiving green and red
76
Q

If responses of +M –L neurons can’t be linked to the perception of green and red (or Hering’s unique hues), what is the function of these neurons?

A
  • One idea is that opponent neurons indicate the difference in responding of pairs of cones to different wavelengths
  • We can understand how this works at a neural level by considering how a +L –M neuron receiving excitation from the L-cone and inhibition from the M-cone responds to 500-nm and 600-nm lights
    -Ex: if the 500-nm light results in an inhibitory signal of –80 and an excitatory signal of +50, the response of the +L –M neuron would be –30 (meaning the action of the 500-nm light on this neuron will cause a decrease in any ongoing activity). And if the 600-nm light results in an inhibitory signal of –25 and an excitatory signal of +75, the response of the +L –M neuron would be +50 (this wavelength causes an increase in the response of this neuron)
  • This “difference information” could be important in dealing with the large overlap in the spectra of the M and L cones
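  • The example above, written out as a small sketch

```python
# A +L -M neuron signals the difference between L-cone excitation and M-cone inhibition.
def opponent_response(l_excitation, m_inhibition):
    """Net response of a +L -M neuron (inhibition is passed as a negative number)."""
    return l_excitation + m_inhibition

print(opponent_response(+50, -80))  # 500-nm light -> -30 (ongoing activity decreases)
print(opponent_response(+75, -25))  # 600-nm light -> +50 (activity increases)
```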
77
Q

How have neurons with side-by-side receptive fields been used to provide evidence for a connection between color and form?

A
  • These neurons can fire to oriented bars even when the intensity of the side-by-side bars is adjusted so they appear equally bright
  • In other words, these cells fire when the bar’s form is determined only by differences in color
  • This evidence has been used to support the idea of a close bridge between the processing of color and the processing of form in the cortex
  • Thus, when you look out at a colorful scene, the colors you see are not only “filling in” the objects and areas in the scene but may also be helping define the edges and shapes of these objects and areas
78
Q

How was the idea of a brain area specialized for colour popularized?

A
  • It was popularized by Semir Zeki based on his finding that many neurons in a visual area called V4 respond to colour
  • However, additional evidence has led many researchers to reject the idea of a “colour center” in favour of the idea that colour processing is distributed across a number of cortical areas
79
Q

Describe Rosa Lafer-Sousa and coworkers’ (2016) study on the brain areas specialized for processing colour

A
  • They scanned participants’ brains while they watched 3-second video clips that contained images (both coloured and black-and-white) -> these images included faces, places, bodies and objects
  • From the brain scans, they found that areas that responded to colour were sandwiched between areas that responded to faces and places
  • Faces, colour, and places are associated with different areas that are located next to each other in the brain
80
Q

What’s evidence that colour and shape/form are processed independently?

A
  • The independence of shape and color is indicated by some cases of brain damage
  • Patient D.F., who could mail a card through a slot but couldn’t report the slot’s orientation or identify objects, was described to illustrate a dissociation between action and object perception
  • Despite her difficulty in identifying objects, her colour perception was unimpaired
  • Another patient, however, had the opposite problem with impaired color perception but normal form perception
  • This double dissociation means that color and form are processed independently
81
Q

Sunlight contains what kind of energy at the different wavelengths?

A

Sunlight contains approx. equal amounts of energy at all wavelengths, which is a characteristic of white light

82
Q

Incandescent bulbs contain what kind of energy for the different wavelengths?

A

They contain much more energy at long wavelengths (which is why they look slightly yellow)

83
Q

LED bulbs contain what kind of energy for the different wavelengths?

A

LED bulbs emit relatively more energy at shorter wavelengths (which is why they look slightly blue)

84
Q

How was the idea that colour is not a property of wavelengths asserted by Isaac Newton in his Opticks (1704)?

A
  • Newton’s idea was that the colours we see in response to different wavelengths aren’t contained in the rays of light themselves, but that the rays “stir up a sensation of this or that color”
  • Light rays are simply energy, so there’s nothing intrinsically “blue” about short wavelengths or “red” about long wavelengths, and we perceive colour because of the way our nervous system responds to this energy
85
Q

In cases such as color vision, hearing, taste, and smell—the very essence of our perceptual experience is created by what?

A

The nervous system

86
Q

Our perception of color is determined by the action of what?

A

3 different types of cone receptors

87
Q

The foundations of what are present at about 4 months of age?

A
  • The foundations of trichromatic vision
  • There’s also evidence that colour vision continues to develop into the teenage years
88
Q

What does hearing provide us with that vision cannot?

A
  • Unlike vision, which depends on light traveling from objects to the eye, sound travels around corners to make us aware of events that otherwise would be invisible
  • Ex: in my office in the psychology department, I hear things that I would be unaware of if I had to rely only on my sense of vision: people talking in the hall; a car passing by on the street below; an ambulance, siren blaring, heading up the hill toward the hospital
  • If it weren’t for hearing, my world at this particular moment would be limited to what I can see in my office and the scene directly outside my window
  • Without hearing I would be unaware of many of the events in my environment
89
Q

How does our ability to hear events that we can’t see serve as an important signaling function for both animals and humans?

A
  • For an animal living in the forest, the rustle of leaves or the snap of a twig may signal the approach of a predator
  • For humans, hearing provides signals such as the warning sound of a smoke alarm or an ambulance siren, the distinctive high-pitched cry of a baby who is distressed, or telltale noises that indicate problems in a car engine
  • Hearing not only informs us about things that are happening that we can’t see, but it also adds richness to our lives through music and facilitates communication by means of speech
90
Q

What does the question “If a tree falls in the forest and no one is there to hear it, is there a sound?” demonstrate?

A

This question shows that we can use the word sound both as a physical stimulus and as a perceptual response

91
Q

What’s sound?

A
  • The perceptual experience of hearing
  • The statement “I hear a sound” is using sound in this sense
92
Q

What are the 2 definitions for sound?

A
  • Physical definition: Sound is pressure changes in the air or other medium
  • Perceptual definition: Sound is the experience we have when we hear
  • Ex: “the piercing sound of the trumpet filled the room” refers to the experience of sound, but “the sound had a frequency of 1,000 Hz” refers to sound as a physical stimulus
93
Q

When does a sound stimulus occur?

A

A sound stimulus occurs when the movements or vibrations of an object cause pressure changes in air, water, or any other elastic medium that can transmit vibrations

94
Q

Describe the example of the process of the creation of a sound stimulus through a loudspeaker

A
  • A loudspeaker is a device for producing vibrations to be transmitted to the surrounding air
  • In extreme cases, such as standing near a speaker at a rock concert, these vibrations can be felt, but even at lower levels, the vibrations are there
  • The speaker’s vibrations affect the surrounding air
  • When the diaphragm of the speaker moves out, it pushes the surrounding air molecules together, a process called compression, which causes a slight increase in the density of molecules near the diaphragm
  • This increased density results in a local increase in the air pressure above atmospheric pressure
  • When the speaker diaphragm moves back in, air molecules spread out to fill in the increased space, a process called rarefaction
  • The decreased density of air molecules caused by rarefaction causes a slight decrease in air pressure
  • By repeating this process hundreds or thousands of times a second, the speaker creates a pattern of alternating high- and low-pressure regions in the air, as neighbouring air molecules affect each other
  • This pattern of air pressure changes, which travels through air at 340 meters per second (and through water at 1,500 meters per second), is called a sound wave
95
Q

What’s a sound wave?

A
  • Pattern of pressure changes in a medium
  • Most of the sounds we hear are due to pressure changes in the air, although sound can be transmitted through water and solids as well
96
Q

What’s the process of compression?

A

When the diaphragm of the speaker moves out, it pushes the surrounding air molecules together, which causes a slight increase in the density of molecules near the diaphragm

97
Q

What’s the process of rarefaction?

A
  • When the speaker diaphragm moves back in, air molecules spread out to fill in the increased space
  • The decreased density of air molecules caused by rarefaction causes a slight decrease in air pressure
98
Q

Does the traveling sound wave cause air to move outward from the speaker into the environment?

A
  • It might seem that way, but no: although the pattern of pressure changes moves outward from the speaker, the air molecules at each location move back and forth and stay in about the same place
  • What is transmitted is the pattern of increases and decreases in pressure that eventually reaches the listener’s ear
99
Q

What’s a pure tone?

A
  • A tone with pressure changes that can be described by a single sine wave
  • A simple kind of sound wave
  • A pure tone occurs when changes in air pressure occur in a pattern described by a mathematical function called a sine wave
  • Tones with this pattern of pressure changes are occasionally found in the environment
  • Ex: a person whistling or the high-pitched notes produced by a flute are close to pure tones
  • Tuning forks, which are designed to vibrate with a sine-wave motion, also produce pure tones
  • For laboratory studies of hearing, computers generate pure tones that cause a speaker diaphragm to vibrate in and out with a sine-wave motion. This vibration can be described by noting its frequency and its amplitude
  • Pure tones are important because they are the fundamental building blocks of sounds, and have been used extensively in auditory research
  • They’re rare in the environment
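  • A minimal sketch of a pure tone as a sine wave (the parameter values here are illustrative)

```python
# A pure tone is a sinusoidal pressure change, fully described by its frequency and amplitude.
import math

def pure_tone(frequency_hz, amplitude, duration_s, sample_rate=44100):
    """Return pressure samples of a sine-wave tone (illustrative parameter values)."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * t / sample_rate)
            for t in range(n)]

samples = pure_tone(frequency_hz=1000, amplitude=1.0, duration_s=0.01)
print(len(samples), max(samples))  # 441 samples; the peak is close to the amplitude
```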
100
Q

What’s frequency?

A
  • The number of times/cycles per second that pressure changes of a sound stimulus repeat
  • Frequency is measured in Hertz, where 1 Hertz is one cycle per second
  • Humans can perceive frequencies ranging from ~20 Hz to ~20,000 Hz, with higher frequencies usually being associated with higher pitches
101
Q

What’s amplitude?

A
  • In the case of a repeating sound wave, such as the sine wave of a pure tone, amplitude represents the pressure difference between atmospheric pressure and the maximum pressure of the wave
  • The size of the pressure change
  • The range of amplitudes we can encounter in the environment is extremely large, ranging from a whisper to a jet taking off
  • The amplitude of a sound wave is associated with the loudness of a sound
102
Q

What’s a Hertz (Hz)?

A
  • The unit for designating the frequency of a tone
  • One Hertz equals one cycle per second
103
Q

What kind of frequencies are associated with what kind of pitches?

A
  • Higher frequencies are associated with the perception of higher pitches
  • Lower frequencies are associated with the perception of lower pitches
104
Q

What would the difference in pressure between the high and low peaks of the sound wave indicate?

A

A sound’s amplitude

105
Q

What kind of amplitude is associated with what kind of loudness?

A
  • Larger amplitude is associated with the perception of greater loudness (louder)
  • Smaller amplitude is associated with the perception of lower loudness (softer)
106
Q

Give an example of how large the range of amplitudes is

A
  • If pressure changes were plotted so that the sine wave representing a near-threshold sound like a whisper is about 1/2-inch high on the page, then in order to plot the graph for a very loud sound, such as music at a rock concert, you would need to represent the sine wave by a curve several miles high
  • Because this is somewhat impractical, auditory researchers have devised a unit of sound called the decibel (dB), which converts this large range of sound pressures into a more manageable scale
107
Q

What’s a decibel (dB)?

A
  • Unit of sound
  • Converts the large range of sound pressures into a more manageable scale
  • Unit that indicates the pressure of a sound stimulus relative to a reference pressure: dB = 20 log (p/po) where p is the pressure of the tone and po is the reference pressure
  • Uses logarithms to shrink the large range of sound pressures
  • Ex: The sound pressures encountered in the environment range (in relative units) from 1 to 10,000,000, which in powers of 10 is a range of 7 log units
  • Multiplying sound pressure by 10 causes an increase of 20 dB
  • When the sound pressure increases from 1 to 10,000,000, the dBs increase only from 0 to 140
  • This means that we don’t have to deal with graphs that are several miles high
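  • The card’s formula, written out as a small sketch (using the standard 20-micropascal reference pressure)

```python
# dB = 20 * log10(p / p0), with p0 = 20 micropascals (the standard SPL reference).
import math

def decibels(p, p0=20e-6):
    return 20 * math.log10(p / p0)

# Multiplying the pressure by 10 adds 20 dB; a 10,000,000-fold range spans 0 to 140 dB.
print(decibels(20e-6))               # 0 dB SPL (at the reference pressure)
print(decibels(10 * 20e-6))          # 20 dB SPL
print(decibels(10_000_000 * 20e-6))  # 140 dB SPL
```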
108
Q

What term is used when specifying the sound pressure in decibels?

A
  • When specifying the sound pressure in decibels, the notation SPL, for sound pressure level, is added to indicate that decibels were determined using the standard pressure of 20 micropascals
109
Q

What’s sound pressure level (SPL)?

A

A designation used to indicate that the reference pressure used for calculating a tone’s decibel rating is set at 20 micropascals, near the threshold in the most sensitive frequency range for hearing

110
Q

What’s level/sound level?

A

The pressure of a sound stimulus, expressed in decibels

111
Q

What’s a periodic waveform?

A

For a sound stimulus, a waveform in which the pattern of pressure changes repeats

112
Q

What’s the fundamental frequency of a tone?

A
  • The first harmonic of a complex tone; usually the lowest frequency in the frequency spectrum of a complex tone
  • The tone’s other components, called higher harmonics, have frequencies that are multiples of the fundamental frequency
113
Q

Complex tones are made up of what?

A
  • Of a number of pure tone (sine-wave) components added together
  • Each of these components is called a harmonic of the tone
114
Q

What’s a harmonic?

A

Pure-tone components of a complex tone that have frequencies that are multiples of the fundamental frequency

115
Q

What’s the first harmonic?

A
  • A pure tone with frequency equal to the fundamental frequency of a complex tone
  • Usually called the fundamental of the tone
116
Q

What are higher harmonics?

A
  • Pure tones with frequencies that are whole-number (2, 3, 4, etc.) multiples of the fundamental frequency
  • Ex: for a complex tone with a 200-Hz fundamental, the second harmonic has a frequency of 200 x 2 = 400 Hz, the third harmonic has a frequency of 200 x 3 = 600 Hz, and so on
  • These additional tones are the higher harmonics of the tone
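  • A small sketch of the 200-Hz example, including how adding the harmonics together builds a complex tone (the amplitudes are illustrative)

```python
# Harmonics of a 200-Hz fundamental, and a complex tone built by summing sine waves.
import math

fundamental = 200  # Hz
print([fundamental * n for n in (1, 2, 3, 4)])  # [200, 400, 600, 800]

def complex_tone(t_seconds, amplitudes):
    """Pressure at time t of a tone whose harmonics have the given (illustrative) amplitudes."""
    return sum(a * math.sin(2 * math.pi * fundamental * n * t_seconds)
               for n, a in enumerate(amplitudes, start=1))

print(complex_tone(0.001, amplitudes=[1.0, 0.5, 0.25, 0.125]))
```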
117
Q

What does adding the fundamental and the higher harmonics together result in?

A

Results in the waveform of the complex tone

118
Q

What’s a frequency spectrum?

A
  • A plot that indicates the amplitudes of the various harmonics that make up a complex tone
  • Each harmonic is indicated by a line that’s positioned along the frequency axis, with the height of the line indicating the amplitude of the harmonic
  • Frequency spectra provide a way of indicating a complex tone’s fundamental frequency and harmonics that add up to the tone’s complex waveform
119
Q

Do all the harmonics need to be present for the repetition rate in a complex waveform to stay the same?

A
  • No, not all the harmonics need to be present for the repetition rate to stay the same
  • Ex: suppose we remove the first harmonic of a complex tone with a 200-Hz fundamental
  • Removing the harmonic changes the tone’s waveform, but the rate of repetition remains the same
  • Even though the fundamental is no longer present, the waveform still repeats at 200 Hz, the frequency of the missing fundamental
  • The same effect also occurs when removing higher harmonics
  • If the 400-Hz second harmonic is removed, the tone’s waveform changes, but the repetition rate is still 200 Hz
  • The spacing between harmonics equals the repetition rate
  • When the fundamental is removed, this spacing remains, so there’s still information in the waveform indicating the frequency of the fundamental (a numerical check of this appears below)
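A hedged numerical check of this card (the sampling rate is an assumed value): with the 200-Hz fundamental removed, the remaining 400-, 600-, and 800-Hz harmonics still produce a waveform that repeats every 1/200 s, because 5 ms is a whole number of periods of each harmonic.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
harmonics = [400, 600, 800]              # fundamental (200 Hz) removed

waveform = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
shifted = sum(np.sin(2 * np.pi * f * (t + 1 / 200)) for f in harmonics)

# The waveform is unchanged by a 5-ms shift, so the repetition rate is still 200 Hz
print(np.allclose(waveform, shifted))    # True
```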
120
Q

What’s the fundamental?

A

A pure tone with frequency equal to the fundamental frequency of a complex tone

121
Q

How can we measure the physical aspects of the sound stimulus?

A

By a sound meter that registers pressure changes in the air

122
Q

What are 2 perceptual dimensions of sound?

A
  1. Loudness -> involves differences in the perceived magnitude of a sound, illustrated by the difference between a whisper and a shout
  2. Pitch -> involves differences in the low to high quality of sounds, illustrated by what we hear playing notes from left to right on a piano keyboard
123
Q

What’s the threshold of sound?

A

The smallest amount of sound energy that can just barely be detected

124
Q

What’s loudness?

A
  • The perceived intensity of a sound that ranges from “just audible” to “very loud”
  • The quality of sound that ranges from soft to loud
  • For a tone of a particular frequency, loudness usually increases with increasing decibels
  • The perceptual quality most closely related to the level or amplitude of an auditory stimulus, which is expressed in decibels
  • Decibel levels are often described in terms of loudness: a sound of 0 dB SPL is just barely detectable, and 120 dB SPL is extremely loud (and can cause permanent damage to the receptors inside the ear)
125
Q

Describe how S. S. Stevens used the magnitude estimation procedure to determine the relationship between level in decibels (physical) and loudness (perceptual)

A
  • In this experiment, loudness was judged relative to a 40-dB SPL tone, which was assigned a value of 1
  • Thus, a pure tone that sounds 10x louder than the 40-dB SPL tone would be judged to have a loudness of 10
  • He found that increasing the sound level by 10 dB (e.g., from 40 to 50 dB SPL) almost doubles the sound’s loudness (a rough sketch of this relationship follows below)
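A hedged extrapolation of the relationship Stevens reported (this formula is my illustration of "10 dB roughly doubles loudness," not a formula given in the cards): loudness relative to the 40-dB SPL standard is then about 2 raised to ((level - 40) / 10).

```python
# Illustrative only: assumes each 10-dB step roughly doubles loudness,
# with the 40-dB SPL standard tone assigned a loudness of 1.
def relative_loudness(level_db_spl):
    return 2 ** ((level_db_spl - 40) / 10)

for level in [40, 50, 60, 70, 80]:
    print(level, relative_loudness(level))   # -> 1.0, 2.0, 4.0, 8.0, 16.0
```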
126
Q

Is loudness only dependent on decibels?

A
  • Loudness depends not only on decibels but also on frequency
  • One way to appreciate the importance of frequency in the perception of loudness is to consider the audibility curve
127
Q

What’s the audibility curve?

A
  • A curve that indicates the sound pressure level (SPL) at threshold for frequencies across the audible spectrum
  • This audibility curve, which indicates the threshold for hearing VS frequency, indicates that we can hear sounds between about 20 Hz and 20,000 Hz and that we are most sensitive (the threshold for hearing is lowest) at frequencies between 2,000 and 4,000 Hz, which happens to be the range of frequencies that is most important for understanding speech
  • At intensities below the audibility curve, we can’t hear a tone
128
Q

What does it mean for sounds to have low/high thresholds?

A
  • Some frequencies have low thresholds -> it takes very little sound pressure change to hear them
  • Other frequencies have high thresholds -> large changes in sound pressure are needed to make them heard
129
Q

What’s the auditory response area?

A
  • The psychophysically measured area that defines the frequencies and sound pressure levels over which hearing functions (we can hear tones that fall within this area)
  • This area extends between the audibility curve and the curve for the threshold of feeling (tones with these high amplitudes are the ones we can “feel”; they can become painful and can cause damage to the auditory system)
130
Q

Do animals have the same threshold for hearing or range of audible frequencies as us?

A
  • Although humans hear frequencies between ~20 Hz and 20,000 Hz, other animals can hear frequencies outside the range of human hearing
  • Elephants can hear stimuli below 20 Hz
  • Above the high end of the human range, dogs can hear frequencies above 40,000 Hz, cats can hear above 50,000 Hz, and the upper range for dolphins extends as high as 150,000 Hz
131
Q

What does “each frequency has a threshold, or ‘baseline’” mean?

A
  • The baseline indicates the decibel level at which a tone of that frequency can just barely be heard
  • Loudness increases as we increase the level above this baseline
132
Q

What are the equal loudness curves?

A
  • A curve that indicates the sound pressure levels that result in a perception of the same loudness at frequencies across the audible spectrum
  • An equal loudness curve is determined by presenting a standard pure tone of one frequency and level and having a listener adjust the level of pure tones with frequencies across the range of hearing to match the loudness of the standard
  • For example, a curve is determined by matching the loudness of frequencies across the range of hearing to the loudness of a 1,000-Hz 40-dB SPL tone
  • This means that a 100-Hz tone needs to be played at 60 dB to have the same loudness as the 1,000-Hz tone at 40 dB
  • The equal loudness curve determined by matching to a 1,000-Hz 80-dB SPL tone is almost flat between 30 and 5,000 Hz, meaning that tones at a level of 80 dB SPL are roughly equally loud across these frequencies. Thus, at threshold, the level can be very different for different frequencies, but at some level above threshold, different frequencies can have a similar loudness at the same decibel level
133
Q

Tones above the threshold of feeling result in what?

A

Pain

134
Q

What’s pitch?

A
  • The perceptual quality of sound, ranging from low to high, that is most closely associated with the frequency of a tone
  • Can be defined as the property of auditory sensation in terms of which sounds may be ordered on a musical scale extending from low to high
  • Pitch is the aspect of auditory sensation whose variation is associated with musical melodies
  • While often associated with music, pitch is also a property of speech (low-pitched or high-pitched voice) and other natural sounds.
  • Pitch is most closely related to the physical property of fundamental frequency (the repetition rate of the sound waveform)
  • Low fundamental frequencies are associated with low pitches (like the sound of a tuba), and high fundamental frequencies are associated with high pitches (like the sound of a piccolo)
  • Pitch is a perceptual, not a physical, property of sound
  • It can’t be measured in a physical way
  • Ex: you can’t say that a sound has a “pitch of 200 Hz”
  • Instead we say that a particular sound has a low pitch or a high pitch, based on how we perceive it
135
Q

How can we relate a piano keyboard to pitch?

A
  • Hitting a key on the left of the keyboard creates a low-pitched rumbling “bass” tone
  • Moving up the keyboard creates higher and higher pitches, until tones on the far right are high-pitched and might be described as “tinkly”
  • The physical property that’s related to this low to high perceptual experience is fundamental frequency, with the lowest note on the piano having a fundamental frequency of 27.5 Hz and the highest note 4,186 Hz
  • In addition to the increase in tone height that occurs as we move from the low to the high end of the piano keyboard, the letters of the notes A, B, C, D, E, F, and G repeat, and we notice that notes with the same letter sound similar
  • Because of this similarity, we say that notes with the same letter have the same tone chroma. Every time we pass the same letter on the keyboard, we have gone up an interval called an octave
  • Tones separated by octaves have the same tone chroma
136
Q

What’s tone height?

A
  • The increase in pitch that occurs as frequency is increased
  • The perceptual experience of increasing pitch that accompanies increases in a tone’s fundamental frequency
137
Q

What’s tone chroma?

A
  • The perceptual similarity of notes separated by one or more octaves
  • The letters of the notes A, B, C, D, E, F, and G on a piano keyboard repeat, and we notice that notes with the same letter sound similar -> same tone chroma
  • Every time we pass the same letter on the keyboard, we have gone up an interval called an octave
  • Tones separated by octaves have the same tone chroma
  • Ex: each of the A keys has the same tone chroma
  • Notes with the same chroma have fundamental frequencies that are separated by a multiple of 2
  • Ex: A0 has a fundamental frequency of 27.5 Hz, A1’s is 55 Hz, A2’s is 110 Hz, and so on
  • This doubling of frequency for each octave results in similar perceptual experiences
  • Thus, a male with a low-pitched voice and a female with a high-pitched voice can be regarded as singing “in unison,” even when their voices are separated by one or more octaves (see the short illustration below)
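A short illustration (the code is mine, not from the cards): because notes an octave apart differ in fundamental frequency by a factor of 2, every A on the piano can be generated by doubling A0 = 27.5 Hz.

```python
# Each octave doubles the fundamental frequency, so all the A keys share a chroma.
a_frequencies = [27.5 * 2 ** octave for octave in range(8)]
print(a_frequencies)   # [27.5, 55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0]
```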
138
Q

What happens if the fundamental or other harmonics of a complex tone are removed?

A

The tone’s pitch remains the same, so the waveform with harmonics removed results in the same pitch as the original waveform (the effect of the missing fundamental)

139
Q

What’s the effect of the missing fundamental?

A
  • Removing the fundamental frequency and other lower harmonics from a musical tone doesn’t change the tone’s pitch
  • The fact that pitch remains the same, even when the fundamental or other harmonics are removed
  • The effect of the missing fundamental has practical consequences
  • Ex: when you listen to someone talking to you on a land-line phone -> even though the telephone doesn’t reproduce frequencies below about 300 Hz, you can hear the low pitch of a male voice that corresponds to a 100-Hz fundamental frequency because of the pitch created by the higher harmonics
140
Q

What’s timbre?

A
  • The quality that distinguishes between 2 tones that sound different even though they have the same loudness, pitch, and duration
  • Differences in timbre are illustrated by the sounds made by different musical instruments
  • Ex: when a flute and an oboe play the same note with the same loudness, we can still tell the difference between these 2 instruments. We might describe the sound of the flute as clear and the sound of the oboe as reedy
  • Timbre is closely related to the harmonic structure of a tone
  • Timbre also depends on the time course of a tone’s attack and of the tone’s decay
  • Thus, it is easy to tell the difference between a high note played on a clarinet and the same note played on a flute
  • It is difficult, however, to distinguish between the same instruments when their tones are recorded and the tone’s attack and decay are eliminated by erasing the first and last 1/2 second of each tone’s recording
  • Another way to make it difficult to distinguish one instrument from another is to play an instrument’s tone backward
  • Even though this doesn’t affect the tone’s harmonic structure, a piano tone played backward sounds more like an organ than a piano because the tone’s original decay has become the attack and the attack has become the decay
  • Thus, timbre depends both on the tone’s steady-state harmonic structure and on the time course of the attack and decay of the tone’s harmonics
141
Q

Although removing harmonics doesn’t affect a tone’s pitch, what does change?

A

The tone’s timbre changes

142
Q

Give examples of different instruments having different timbres

A
  • Ex: a guitar, a bassoon, and an alto saxophone playing the same note with a fundamental frequency of 196 Hz produce harmonics at the same frequencies, but both the relative strengths of the harmonics and the number of harmonics can differ across these instruments
  • Although the frequencies of the harmonics are always multiples of the fundamental frequency, harmonics may be absent
  • It’s also easy to notice differences in the timbre of people’s voices
  • Ex: when we describe one person’s voice as sounding “nasal” and another’s as being “mellow,” we’re referring to the timbres of their voices
143
Q

What’s a tone’s attack?

A

The buildup of sound energy that occurs at the beginning of a tone

144
Q

What’s a tone’s decay?

A

The decrease in the sound signal that occurs at the end of a tone

145
Q

What are periodic sounds?

A
  • A sound stimulus in which the pattern of pressure changes in the waveform repeats
  • Ex: pure tones and the tones produced by musical instruments
  • Only periodic sounds can generate a perception of pitch
146
Q

What are aperiodic sounds?

A
  • Sound waves/waveforms that do not repeat
  • Ex: a door slamming shut, a large group of people talking simultaneously, and noises such as the static on a radio not tuned to a station
147
Q

The auditory system accomplishes what 3 basic tasks during its journey that begins as sound enters the ear and culminates deep inside the ear at the receptors for hearing?

A
  1. It delivers the sound stimulus to the receptors
  2. It transduces this stimulus from pressure changes into electrical signals
  3. It processes these electrical signals so they can indicate qualities of the sound source, such as pitch, loudness, timbre, and location
148
Q

The ear is divided into what 3 divisions?

A
  • Outer
  • Middle
  • Inner
149
Q

What’s the outer ear?

A

The pinna and the auditory canal

150
Q

What’s the pinna?

A

The part of the ear that’s visible on the outside of the head

151
Q

Sound waves first pass through what structure of the ear?

A

The outer ear

152
Q

What’s the auditory canal?

A
  • The canal through which air vibrations travel from the environment to the tympanic membrane
  • A tubelike recess about 3cm long in adults
  • It protects the delicate structures of the middle ear from the hazards of the outside world
  • The auditory canal’s recess, along with its wax, protects the delicate tympanic membrane, or eardrum, at the end of the canal and helps keep this membrane and the structures in the middle ear at a relatively constant temperature
  • It also enhances the intensities of some sounds by means of the physical principle of resonance
153
Q

What’s the part of the ear we could most easily do without?

A
  • The pinnae
  • This is despite the pinnae being the most obvious part of the ear and helping us determine the location of sounds
  • Van Gogh did not make himself deaf in his left ear when he attacked his pinna
154
Q

What’s the tympanic membrane?

A
  • AKA eardrum
  • A membrane at the end of the auditory canal that vibrates in response to vibrations of the air and pressure changes and transmits these vibrations to the ossicles in the middle ear
155
Q

What’s the physical principle of resonance?

A
  • A mechanism that enhances the intensity of certain frequencies because of the reflection of sound waves in a closed tube
  • Resonance occurs in the auditory canal when sound waves that are reflected back from the closed end of the auditory canal interact with sound waves that are entering the canal
  • This interaction reinforces some of the sound’s frequencies, with the frequency that is reinforced the most being determined by the length of the canal
  • Measurements of the sound pressures inside the ear indicate that the resonance in the auditory canal has a slight amplifying effect that increases the sound pressure level of frequencies between ~1,000 and 5,000 Hz, which covers the most sensitive range of human hearing
156
Q

What’s the resonant frequency of the canal?

A
  • The frequency that’s most strongly enhanced by resonance
  • The resonant frequency of a closed tube is determined by the length of the tube (a worked example follows below)
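A hedged worked example: for a tube closed at one end (like the auditory canal, closed at the eardrum), basic acoustics gives a quarter-wavelength resonance near f = v / (4L). The formula and the 343 m/s speed of sound are standard physics assumptions, not values from these flashcards; the ~3-cm length comes from the auditory canal card above.

```python
speed_of_sound = 343.0        # m/s in air at room temperature (assumed)
canal_length = 0.03           # ~3 cm, the adult auditory canal length given earlier

resonant_frequency = speed_of_sound / (4 * canal_length)
print(round(resonant_frequency))   # ~2858 Hz, inside the 1,000-5,000 Hz range enhanced by resonance
```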
157
Q

What happens when airborne sound waves reach the tympanic membrane at the end of the auditory canal?

A

They set it into vibration, and this vibration is transmitted to structures in the middle ear, on the other side of the tympanic membrane

158
Q

What’s the middle ear?

A
  • The small air-filled space between the auditory canal and the cochlea that contains the ossicles
  • Small cavity that separates the outer and inner ear
159
Q

What are the ossicles?

A
  • The 3 smallest bones in the body located in the middle ear that transmit vibrations from the outer to the inner ear
  • Malleus, Incus, and Stapes
160
Q

Describe the relationship between the ossicles (malleus, incus and stapes)

A
  • The first of these bones, the malleus (aka the hammer), is set into vibration by the tympanic membrane, to which it is attached, and transmits its vibrations to the incus (or anvil), which, in turn, transmits its vibrations to the stapes (or stirrup)
  • The stapes then transmits its vibrations to the inner ear by pushing on the membrane covering the oval window
161
Q

What’s the oval window?

A

A small, membrane-covered hole in the cochlea that receives vibrations from the stapes

162
Q

Why are the ossicles necessary?

A
  • The outer ear and middle ear are filled with air, but the inner ear contains a watery liquid that is much denser than the air
  • The mismatch between the low density of the air and the high density of this liquid creates a problem: pressure changes in the air are transmitted poorly to the much denser liquid
  • Ex: difficulty you would have hearing people talking to you if you were underwater and they were above the surface
  • If vibrations had to pass directly from the air in the middle ear to the liquid in the inner ear, less than 1 % of the vibrations would be transmitted
  • The ossicles help solve this problem in 2 ways:
    1. By concentrating the vibration of the large tympanic membrane onto the much smaller stapes, which increases the pressure by a factor of ~20 (a rough calculation appears below)
    2. By being hinged to create a lever action -> the lever action of the ossicles amplifies the sound vibrations transmitted from the tympanic membrane to the inner ear
  • In patients whose ossicles have been damaged beyond surgical repair, it’s necessary to increase the sound pressure by a factor of 10 to 50 to achieve the same hearing as when the ossicles were functioning
  • Not all animals require the concentration of pressure and lever effect provided by the ossicles in the human ear
  • Ex: there’s only a small mismatch between the density of water, which transmits sound in a fish’s environment, and the liquid inside the fish’s ear. Fish have no outer or middle ear
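A hedged back-of-the-envelope version of the pressure amplification described above. The specific areas and lever ratio below are commonly cited approximate values, not numbers given in these flashcards.

```python
tympanic_area_mm2 = 55.0      # effective area of the tympanic membrane (assumed value)
stapes_area_mm2 = 3.2         # area of the stapes footplate (assumed value)
lever_ratio = 1.3             # mechanical advantage of the ossicular lever (assumed value)

area_gain = tympanic_area_mm2 / stapes_area_mm2     # ~17x, in line with the ~20x above
total_pressure_gain = area_gain * lever_ratio       # ~22x once the lever action is included
print(round(area_gain, 1), round(total_pressure_gain, 1))   # 17.2 22.3
```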
163
Q

What are middle-ear muscles?

A
  • Muscles attached to the ossicles in the middle ear
  • The smallest skeletal muscles in the body, they contract in response to very intense sounds and dampen the vibration of the ossicles
  • This reduces the transmission of low-frequency sounds and helps to prevent intense low-frequency components from interfering with our perception of high frequencies
  • Contraction of the muscles may prevent our own vocalizations, and sounds from chewing, from interfering with our perception of speech from other people—an important function in a noisy restaurant
164
Q

What’s the inner ear?

A

The innermost division of the ear, containing the cochlea and the receptors for hearing

165
Q

What’s the cochlea?

A

The snail-shaped, liquid-filled structure that contains the structures of the inner ear, the most important of which are the basilar membrane, the tectorial membrane, and the hair cells

166
Q

What’s the cochlear partition?

A
  • A partition in the cochlea, extending almost its full length, that separates the scala tympani (lower half) and the scala vestibuli (upper half)
  • The organ of Corti, which contains the hair cells, is part of the cochlear partition
  • It’s relatively large and contains the structures that transform the vibrations inside the cochlea into electricity
167
Q

What’s the organ of Corti?

A
  • The major structure of the cochlear partition, containing the basilar membrane, the tectorial membrane, and the receptors for hearing
  • Contains the hair cells
168
Q

What are the hair cells?

A
  • The receptors for hearing
  • Neurons in the cochlea that contain small hairs, or cilia, that are displaced by vibration of the basilar membrane and fluids inside the inner ear
  • There are 2 kinds of hair cells: inner and outer
  • There are hair cells from one end of the cochlea to the other
  • The cilia of the outer hair cells are embedded in the tectorial membrane, but the cilia of the inner hair cells aren’t
  • At the tips of the hair cells are small processes called stereocilia
  • The human ear contains one row of inner hair cells and about 3 rows of outer hair cells, with ~3,500 inner hair cells and 12,000 outer hair cells
169
Q

What’s the basilar membrane?

A

A membrane that stretches the length of the cochlea and controls the vibration of the cochlear partition

170
Q

What’s the tectorial membrane?

A
  • A membrane that stretches the length of the cochlea and is located directly over the hair cells
  • Vibrations of the cochlear partition cause the tectorial membrane to bend the hair cells by rubbing against them
171
Q

Motions of the basilar membrane and tectorial membrane are caused by what?

A

Vibration of the cochlear partition

172
Q

What are stereocilia?

A
  • Thin processes that protrude from the tops of the hair cells in the cochlea that bend in response to pressure changes
  • The stereocilia of the tallest row of outer hair cells are embedded in the tectorial membrane, and the stereocilia of the rest of the outer hair cells and all of the inner hair cells are not
173
Q

Describe the process of vibration from the ossicles ultimately bending the stereocilia

A
  • Vibration of the stapes in the middle ear sets the oval window into motion
  • The back and forth motion of the oval window transmits vibrations to the liquid inside the cochlea, which sets the basilar membrane into motion
  • The up-and-down motion of the basilar membrane has 2 results:
    1. It sets the organ of Corti into an up-and-down vibration
    2. It causes the tectorial membrane to move back and forth
  • These 2 motions mean that the tectorial membrane slides back and forth just above the hair cells
  • The movement of the tectorial membrane causes the stereocilia of the outer hair cells that are embedded in the membrane to bend
  • The stereocilia of the other outer hair cells and the inner hair cells also bend, but in response to pressure waves in the liquid surrounding the stereocilia
174
Q

Describe the process of bending of hair cells causing electrical signals

A
  • The stereocilia of the hair cells bend in one direction
  • This bending causes structures called tip links to stretch, and this opens tiny ion channels in the membrane of the stereocilia, which function like trapdoors
  • When the ion channels are open, positively charged potassium ions flow into the cell, causing the interior of the cell to become more positive, and an electrical signal results
  • When the stereocilia bend in the other direction, the tip links slacken, the ion channels close, and ion flow stops
  • Thus, the back-and-forth bending of the hair cells causes alternating bursts of electrical signals (when the stereocilia bend in one direction) and no electrical signals (when the stereocilia bend in the opposite direction)
  • The electrical signals in the hair cells result in the release of neurotransmitters at the synapse separating the inner hair cells from the auditory nerve fibers, which causes these auditory nerve fibers to fire
175
Q

What are the main receptors responsible for generating signals that are sent to the cortex in auditory nerve fibers?

A

Inner hair cells

176
Q

What are tip links?

A

Structures at the tops of the cilia of auditory hair cells, which stretch or slacken as the cilia move, causing ion channels to open or close

177
Q

Describe how the electrical signals are synchronized with the pressure changes of a pure tone

A
  • The bending of the stereocilia follows the increases and decreases of the pressure of a pure tone sound stimulus
  • When the pressure increases, the stereocilia bend to the right, the hair cell is activated, and attached auditory nerve fibers will tend to fire
  • When the pressure decreases, the stereocilia bend to the left, and no firing occurs
  • Meaning that auditory nerve fibers fire in synchrony with the rising and falling pressure of the pure tone
178
Q

Describe how hair cell activation and auditory nerve fiber firing are synchronized with pressure changes of the stimulus

A
  • The auditory nerve fiber fires when the cilia are bent to the right
  • This occurs at the peak of the sine-wave change in pressure
  • For high-frequency tones, a nerve fiber may not fire every time the pressure changes because it needs to rest after it fires (refractory period)
  • But when the fiber does fire, it fires at the same time in the sound stimulus
  • Since many fibers respond to the tone, it’s likely that if some “miss” a particular pressure change, other fibers will be firing at that time
  • When we combine the response of many fibers, each of which fires at the peak of the sound wave, the overall firing matches the frequency of the sound stimulus
  • A sound’s repetition rate produces a pattern of nerve firing in which the timing of nerve spikes matches the timing of the repeating sound stimulus (a small simulation sketch follows below)
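A minimal simulation sketch of this pooling idea (all parameter values are assumptions for illustration, not data from the flashcards): each model fiber can fire only at the pressure peaks of a 1,000-Hz tone but skips many of them, yet the pooled spikes of many fibers mark every 1-ms peak.

```python
import numpy as np

rng = np.random.default_rng(0)
frequency = 1000.0                                   # Hz (assumed example tone)
peak_times = np.arange(0, 0.02, 1 / frequency)       # one pressure peak per cycle, 20 ms total

n_fibers = 50
# Each fiber "misses" most peaks (e.g., because of its refractory period) but,
# when it does fire, it fires at a peak, i.e., phase-locked to the stimulus.
fires = rng.random((n_fibers, peak_times.size)) < 0.3

population_spikes_per_peak = fires.sum(axis=0)
# Pooled across fibers, spikes cluster at every peak, so the overall firing
# pattern repeats at the 1-ms period of the tone.
print(population_spikes_per_peak)
```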
179
Q

What’s phase locking?

A

Firing of auditory neurons in synchrony with the phase of an auditory stimulus

180
Q

Describe Békésy’s Discovery of How the Basilar Membrane Vibrates

A
  • He determined how the basilar membrane responds to different frequencies by directly observing its vibration
  • He accomplished this by boring a hole in cochleas taken from animal and human cadavers
  • He presented different frequencies of sound and observed the membrane’s vibration by using a technique similar to that used to create stop-action photographs of high-speed events
  • When he observed the membrane’s position at different points in time, he saw the basilar membrane’s vibration as a traveling wave, like the motion that occurs when a person holds the end of a rope and “snaps” it, sending a wave traveling down the rope
  • Békésy’s measurements showed that most of the membrane vibrates, but that some parts vibrate more than others
  • If you were at one point on the basilar membrane you would see the membrane vibrating up and down at the frequency of the tone
  • If you observed the entire membrane, you would see that vibration occurs over a large portion of the membrane, and that there is one place that vibrates the most.
  • His most important finding was that the place that vibrates the most depends on the frequency of the tone
  • As the frequency increases, the place on the membrane that vibrates the most moves from the apex at the end of the cochlea toward the base at the oval window
  • Because the place of maximum vibration depends on frequency, this means that basilar membrane vibration effectively functions as a filter that sorts tones by frequency
181
Q

What’s a traveling wave?

A

In the auditory system, vibration of the basilar membrane in which the peak of the vibration travels from the base of the membrane to its apex

182
Q

What’s the apex of the basilar membrane?

A

The end of the cochlea farthest from the middle ear

183
Q

What’s the base of the basilar membrane?

A

The end of the cochlea nearest the middle ear

184
Q

Describe how frequencies can be localized along the basilar membrane

A
  • The different places of maximum vibration along the length of the basilar membrane separate sound stimuli by frequency
  • High frequencies cause more vibration near the base end of the cochlea, and low frequencies cause more vibration at the apex of the cochlea
  • Vibration of the basilar membrane “sorts” or “filters” by frequency so hair cells are activated at different places along the cochlea for different frequencies
185
Q

What’s the tonotopic map?

A
  • An ordered map of frequencies created by the responding of neurons within structures in the auditory system
  • There’s a tonotopic map of neurons along the length of the cochlea, with neurons at the apex responding best to low frequencies and neurons at the base responding best to high frequencies
  • This “map” of the cochlea illustrates the sorting of frequencies, with high frequencies activating the base of the cochlea and low frequencies activating the apex
  • The tonotopic map can be measured by placing electrodes at different positions on the outer surface of the cochlea and stimulating with different frequencies while recording from single auditory nerve fibers located at different places along the cochlea
  • Measurement of the response of auditory nerve fibers to frequency is depicted by a fiber’s neural frequency tuning curve
  • Together, the tuning curves indicate, for each frequency, the place along the cochlea where the electrical response is greatest
186
Q

Describe the method of determining a neuron’s Frequency Tuning Curve

A
  • It’s determined by presenting pure tones of different frequencies and measuring the sound level necessary to cause the neuron to increase its firing above the baseline or “spontaneous” rate in the absence of sounds
  • This level is the threshold for that frequency
  • Plotting the threshold for each frequency results in frequency tuning curves
  • The frequency to which the neuron is most sensitive (has the lowest sound level threshold) is called the characteristic frequency of the particular auditory nerve fiber
187
Q

What’s the frequency tuning curve?

A
  • Curve relating frequency and the threshold intensity for activating an auditory neuron
  • Each of the 3,500 inner hair cells has its own tuning curve, and because each inner hair cell sends signals to about 20 auditory nerve fibers, each frequency is represented by a number of neurons located at that frequency’s place along the basilar membrane
188
Q

What’s the characteristic frequency of an auditory nerve fiber?

A

The frequency at which a neuron in the auditory system has its lowest threshold

189
Q

The cochlea’s filtering action is reflected by what 2 things?

A
  1. The neurons respond best to one frequency
  2. Each frequency is associated with nerve fibers located at a specific place along the basilar membrane, with fibers originating near the base of the cochlea having high characteristic frequencies and those originating near the apex having low characteristic frequencies
190
Q

When modern researchers used more advanced technology that enabled them to measure vibration in live cochleas, what were they able to demonstrate?

A
  • They showed that the pattern of vibration for specific frequencies was much narrower than what Békésy had observed
  • What was responsible for this narrower vibration? In 1983, Hallowell Davis published a paper titled “An Active Process in Cochlear Mechanics,” in which he proposed a mechanism he named the cochlear amplifier to explain why neural tuning curves were narrower than what would be expected based on Békésy’s measurements of basilar membrane vibration
  • He proposed that the cochlear amplifier was an active mechanical process that took place in the outer hair cells
191
Q

What’s the function of outer hair cells?

A
  • The major purpose of outer hair cells is to influence the way the basilar membrane vibrates, and they accomplish this by changing length
  • While ion flow in inner hair cells causes an electrical response in auditory nerve fibers, ion flow in outer hair cells causes mechanical changes inside the cell that cause the cell to expand and contract
  • The outer hair cells become elongated when the stereocilia bend in one direction and contract when they bend in the other direction
  • This mechanical response of elongation and contraction pushes and pulls on the basilar membrane, which increases the motion of the basilar membrane and sharpens its response to specific frequencies
192
Q

What’s the cochlear amplifier?

A
  • Expansion and contraction of the outer hair cells in response to sound sharpens the movement of the basilar membrane to specific frequencies
  • This amplifying effect plays an important role in determining the frequency selectivity of auditory nerve fibers
193
Q

What happens when the cochlear amplifier is removed?

A
  • Ex: consider the frequency tuning of a cat’s auditory nerve fiber with a characteristic frequency of about 8,000 Hz
  • We can eliminate the cochlear amplifier by destroying the outer hair cells with a chemical that attacks the outer hair cells but leaves the inner hair cells intact
  • Whereas originally the fiber had a low threshold at 8,000 Hz, it now takes much higher intensities to get the auditory nerve fiber to respond to 8,000 Hz and nearby frequencies
  • Conclusion: the cochlear amplifier greatly sharpens the tuning of each place along the cochlea
194
Q

It has been proposed that pitch perception is determined by what?

A

The firing of neurons that respond best to specific frequencies

195
Q

What’s the place theory?

A
  • The proposal that the frequency of a sound is indicated by the place along the organ of Corti at which nerve firing is highest
  • Modern place theory is based on Békésy’s traveling wave theory of hearing
  • The association of frequency with place led to the following explanation of the physiology of pitch perception:
  • A pure tone causes a peak of activity at a specific place on the basilar membrane
  • The neurons connected to that place respond strongly to that frequency, as indicated by the auditory nerve fiber frequency tuning curves and this information is carried up the auditory nerve to the brain
  • The brain identifies which neurons are responding the most and uses this information to determine the pitch
  • This explanation of the physiology of pitch perception has been called the place theory, because it is based on the relation between a sound’s frequency and the place along the basilar membrane that is activated
196
Q

What’s an argument against place theory?

A
  • One argument against it was based on the effect of the missing fundamental, in which removing the fundamental frequency of a complex tone doesn’t change the tone’s pitch
  • Thus, a tone that has a fundamental frequency of 200 Hz has the same pitch after the 200-Hz fundamental is removed
  • What this means is that there’s no longer peak vibration at the place associated with 200 Hz
  • A modified version of place theory explains this result by considering how the basilar membrane vibrates to complex tones
197
Q

Describe the modified version of place theory that explains how the basilar membrane vibrates to complex tones

A
  • A complex tone causes peaks in vibration for the fundamental (200 Hz) and for each harmonic
  • Thus, removing the fundamental eliminates the peak at 200 Hz, but peaks would remain at 400, 600, and 800, and this pattern of places, spaced 200 Hz apart, matches the fundamental frequency so can be used to determine the pitch
198
Q

Why does the idea that pitch can be determined by harmonics work only for low harmonics—harmonics that are close to the fundamental?

A
  • We can see why this is by considering a tone with a fundamental frequency of 440 Hz
  • When the 440-Hz tone is presented, the 440-Hz fundamental most strongly activates one cochlear filter and the 880-Hz second harmonic most strongly activates a separate filter (the filters of the cochlear filter bank correspond to frequency tuning curves)
  • Moving up to higher harmonics, the 5,720-Hz 13th harmonic and the 6,160-Hz 14th harmonic both activate the same overlapping filters (frequency tuning curves of cochlear nerve fibers)
  • This means that lower harmonics activate separate, narrower filters, while higher harmonics can activate the same filters
  • Taking the properties of the filter bank into account results in the excitation curve, which is essentially a picture of the amplitude of basilar membrane vibration caused by each of the tone’s harmonics
  • What stands out about the excitation curve is that the tone’s lower harmonics each cause a distinct bump in the excitation curve
  • Because each of these lower harmonics can be distinguished by a peak, they are called resolved harmonics, and frequency information is available for perceiving pitch
  • In contrast, the excitations caused by the higher harmonics create a smooth function that doesn’t indicate the individual harmonics
  • These higher harmonics are called unresolved harmonics (a numerical illustration of the resolved/unresolved distinction follows below)
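A hedged numerical illustration of the resolved/unresolved distinction. It borrows the equivalent rectangular bandwidth (ERB) formula of Glasberg and Moore (1990), ERB(f) = 24.7 * (4.37 * f/1000 + 1) Hz, as a stand-in for cochlear filter width; that formula is not part of these flashcards.

```python
def erb_hz(frequency_hz):
    # Approximate width of the auditory filter centered on frequency_hz
    return 24.7 * (4.37 * frequency_hz / 1000 + 1)

fundamental = 440.0
for harmonic in [1, 2, 13, 14]:
    f = harmonic * fundamental
    # Harmonics spaced 440 Hz apart are resolved when the local filter is
    # narrower than that spacing, and unresolved when it is wider.
    status = "resolved" if erb_hz(f) < fundamental else "unresolved"
    print(harmonic, round(f), round(erb_hz(f)), status)
# Output: harmonics 1 and 2 come out resolved; harmonics 13 and 14 come out unresolved.
```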
199
Q

What are resolved harmonics?

A
  • Harmonics in a complex tone that create separated peaks in basilar membrane vibration, and so can be distinguished from one another
  • Usually lower harmonics of a complex tone
200
Q

What are unresolved harmonics?

A
  • Harmonics of a complex tone that can’t be distinguished from one another because they are not indicated by separate peaks in the basilar membrane vibration
  • The higher harmonics of a tone are most likely to be unresolved
201
Q

A series of resolved harmonics results in ____, but unresolved harmonics result in ____.

A
  • A strong perception of pitch
  • A weak perception of pitch
  • Ex: a tone with the spectral composition 400, 600, 800, and 1,000 Hz results in a strong perception of pitch corresponding to the 200-Hz fundamental
  • However, the smeared out pattern that would be caused by higher harmonics of the 200 Hz fundamental, such as 2,000, 2,200, 2,400, and 2,600 Hz results in a weak perception of pitch corresponding to 200 Hz
  • This shows that place information provides an incomplete explanation of pitch perception
202
Q

Describe Edward Burns and Neal Viemeister (1976) work on the amplitude-modulated noise

A
  • They created a sound stimulus that wasn’t associated with vibration of a particular place on the basilar membrane, but which created a perception of pitch, called amplitude-modulated noise
  • Noise is a stimulus that contains many random frequencies so it doesn’t create a vibration pattern on the basilar membrane that corresponds to a specific frequency
  • Amplitude modulation means that the level (or intensity) of the noise was changed so the loudness of the noise fluctuated rapidly up and down
  • They found that this noise stimulus resulted in a perception of pitch, which they could change by varying the rate of the up-and-down changes in level
  • The conclusion from this finding, that pitch can be perceived even in the absence of place information, has been demonstrated in a large number of experiments using different types of stimuli (a sketch of such an amplitude-modulated noise stimulus follows below)
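A minimal sketch of an amplitude-modulated noise stimulus in the spirit of Burns and Viemeister’s experiment (the sampling rate, 200-Hz modulation rate, and modulation depth are assumed values, not details from the flashcards).

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)

noise = rng.standard_normal(t.size)        # many random frequencies: no single place of peak vibration
modulation_rate = 200.0                    # Hz; varying this changes the perceived pitch
envelope = 1 + 0.9 * np.sin(2 * np.pi * modulation_rate * t)
am_noise = envelope * noise                # the level fluctuates up and down at 200 Hz
```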
203
Q

What’s amplitude-modulated noise?

A

A noise sound stimulus that is amplitude modulated

204
Q

What’s noise?

A

A sound stimulus that contains many random frequencies

205
Q

What’s amplitude modulation?

A

Adjusting the level (or intensity) of a sound stimulus so it fluctuates up and down

206
Q

What’s the reason phase locking has been linked to pitch perception?

A
  • Because pitch perception occurs only for frequencies up to about 5,000 Hz, and phase locking also occurs only up to 5,000 Hz
  • Ex: when tones are strung together to create a melody, we only perceive a melody if the tones are below 5,000 Hz
  • This is probably why the highest note on an orchestral instrument (the piccolo) is about 4,500 Hz
  • Melodies played using frequencies above 5,000 Hz sound rather strange
  • You can tell that something is changing but it doesn’t sound musical
  • So it seems that our sense of musical pitch may be limited to those frequencies that create phase locking
207
Q

What’s temporal coding?

A
  • The connection between the frequency of a sound stimulus and the timing of the auditory nerve fiber firing
  • The existence of phase locking below 5,000 Hz, along with other evidence, has led most researchers to conclude that temporal coding is the major mechanism of pitch perception
208
Q

Describe the research by Andrew Oxenham and coworkers (2011) on whether pitch can be perceived for frequencies above 5,000 Hz

A
  • They answered this question by showing that if a large number of high-frequency harmonics are presented, participants do, in fact, perceive pitch
  • Ex: when presented with 7,200, 8,400, 9,600, 10,800, and 12,000 Hz, which are harmonics of a tone with 1,200 Hz fundamental frequency, participants perceived a pitch corresponding to 1,200 Hz, which is the spacing between the harmonics (although the perception of pitch was weaker than the perception to lower harmonics)
  • A particularly interesting aspect of this result is that although each harmonic presented alone did not result in perception of pitch (because they are all above 5,000 Hz), pitch was perceived when a number of harmonics were presented together
209
Q

What is pitch perception created by?

A

Not by the cochlea but by the brain

210
Q

Describe the pathway of auditory signals from the hair cells to the brain

A
  • Signals generated in the hair cells of the cochlea are transmitted out of the cochlea in nerve fibers of the auditory nerve
  • The auditory nerve carries the signals generated by the inner hair cells away from the cochlea along the auditory pathway, eventually reaching the auditory cortex
  • Auditory nerve fibers from the cochlea synapse in a sequence of subcortical structures
  • This sequence begins with the cochlear nucleus and continues to the superior olivary nucleus in the brain stem, the inferior colliculus in the midbrain, and the medial geniculate nucleus in the thalamus
  • From the medial geniculate nucleus, fibers continue to the primary auditory cortex in the temporal lobe of the cortex
  • This sequence of structures corresponds to SONIC MG (a very fast sports car)
  • A great deal of processing occurs as signals travel through the subcortical structures along the pathway from the cochlea to the cortex
211
Q

What’s the cochlear nucleus?

A

The nucleus where nerve fibers from the cochlea first synapse

212
Q

What’s the superior olivary nucleus?

A
  • A nucleus along the auditory pathway from the cochlea to the auditory cortex
  • The superior olivary nucleus receives inputs from the cochlear nucleus
213
Q

What’s the inferior colliculus?

A
  • A nucleus in the hearing system along the pathway from the cochlea to the auditory cortex
  • The inferior colliculus receives inputs from the superior olivary nucleus
214
Q

What’s the medial geniculate nucleus?

A
  • An auditory nucleus in the thalamus that’s part of the pathway from the cochlea to the auditory cortex
  • It receives inputs from the inferior colliculus and transmits signals to the auditory cortex
215
Q

Are auditory structures unilateral or bilateral?

A
  • Bilateral
  • They exist on both the left and right sides of the body and messages can cross over between the 2 sides
216
Q

Processing in the superior olivary nucleus (SON) is important for what?

A

For locating sounds because it’s here that signals from the left and right ears first meet

217
Q

What happens as nerve impulses are traveling up the SONIC MG pathway to the auditory cortex?

A
  • The temporal information that dominated pitch coding in the cochlea and auditory nerve fibers becomes less important
  • The main indication of this is that phase locking, which occurred up to about 5,000 Hz in auditory nerve fibers, occurs only up to 100–200 Hz in the auditory cortex
  • But while temporal information decreases as nerve impulses travel toward the cortex, experiments in the marmoset have demonstrated the existence of individual neurons that seem to be responding to pitch, and experiments in humans have located areas in the auditory cortex that also appear to be responding to pitch
218
Q

Describe the experiment by Daniel Bendor and Xiaoqin Wang (2005) that determined how neurons in regions partially overlapping the primary auditory cortex of a marmoset (monkey) responded to complex tones

A
  • These complex tones differed in their harmonic structure but would be perceived by humans as having the same pitch
  • They found neurons that responded similarly to complex tones with the same fundamental frequency but with different harmonic structures
  • For example, a tone with a fundamental frequency of 182 Hz. In the top record, the tone contains the fundamental frequency and the second and third harmonics; in the second record, harmonics 4–6 are present; and so on, until at the bottom, only harmonics 12–14 are present. Even though these stimuli contain different frequencies (for example, 182, 364, and 546 Hz in the top record; 2,184, 2,366, and 2,548 Hz in the bottom record), they’re all perceived by humans as having a pitch corresponding to the 182-Hz fundamental frequency
  • These stimuli all caused an increase in firing
  • To demonstrate that this firing occurred only when information about the 182-Hz fundamental frequency was present, Bendor and Wang showed that the neuron responded well to a 182-Hz tone presented alone, but not to any of the higher harmonics when they were presented individually
  • These cortical neurons, therefore, responded only to stimuli associated with the 182-Hz tone, which is associated with a specific pitch
  • Bendor and Wang called these neurons pitch neurons
219
Q

What are pitch neurons?

A
  • A neuron that responds to stimuli associated with a specific pitch
  • These neurons fire to the pitch of a complex tone even if the first harmonic or other harmonics of the tone are not present
220
Q

Research on where pitch is processed in the human cortex has used what kind of method?

A
  • Brain scanning (fMRI) to measure the response to stimuli associated with different pitches
  • This isn’t as simple as it may seem, because when a neuron responds to sound, this doesn’t necessarily mean it’s involved in perceiving pitch
  • To determine whether areas of the brain are responding to pitch, researchers have looked for brain regions that are more active in response to a pitch-evoking sound, such as a complex tone, than to another sound, such as a band of noise that has similar physical features but doesn’t produce a pitch
  • By doing this, researchers hope to locate brain regions that respond to pitch, irrespective of other properties of the sound
221
Q

Describe the experiment by Sam Norman-Haignere and coworkers (2013) using a pitch-evoking stimulus and a noise stimulus to locate brain regions involved with processing pitch

A
  • The pitch stimulus they used is the 3rd, 4th, 5th, and 6th harmonics of a complex tone with a fundamental frequency of 100 Hz (300, 400, 500, and 600 Hz)
  • The noise they used consists of a band of frequencies from 300 to 600 Hz
  • Because the noise stimulus covers the same range as the pitch stimulus, it’s called frequency-matched noise
  • By comparing fMRI responses generated by the pitch-evoking stimulus to the response from the frequency-matched noise, Norman-Haignere located areas in the primary auditory cortex and some nearby areas that responded more to the pitch-evoking stimulus
  • The areas most responsive to pitch are located in the anterior auditory cortex (area close to the front of the brain)
  • In other experiments, Norman-Haignere determined that the regions that were most responsive to pitch responded to resolved harmonics, but didn’t respond as well to unresolved harmonics
  • Because resolved harmonics are associated with pitch perception, this result strengthens the conclusion that these cortical areas are involved in pitch perception
222
Q

How many people in the US suffer from impaired hearing?

A

Roughly 17% of the U.S. adult population suffers from some form of impaired hearing

223
Q

What are some causes of hearing loss?

A
  • Noise in the environment -> the ears are often bombarded with noises such as crowds of people talking (or yelling, if at a sporting event), construction sounds, and traffic noise
  • Noises such as these are the most common cause of hearing loss
  • Hearing loss is usually associated with damage to the outer hair cells, and recent evidence indicates that damage to auditory nerve fibers may be involved as well
  • When the outer hair cells are damaged, the response of the basilar membrane becomes similar to the broad response seen for the dead cochleas examined by Békésy; this results in a loss of sensitivity (inability to hear quiet sounds) and a loss of the sharp frequency tuning seen in healthy ears
  • The broad tuning makes it harder for hearing-impaired people to separate sounds (ex: to hear speech sounds in noisy environments)
  • Inner hair cell damage can also cause a loss of sensitivity
  • For both inner and outer hair cells, hearing loss occurs for the frequencies corresponding to the frequencies detected by the damaged hair cells
  • Sometimes inner hair cells are lost over an entire region of the cochlea (a “dead region”), and sensitivity to the frequencies that normally excite that region of the cochlea becomes much reduced
  • Sometimes we expose ourselves to sounds that over the long term do result in hair cell damage
  • One of the things that contributes to hair cell damage is living in an industrialized environment, which contains sounds that contribute to a type of hearing loss called presbycusis
224
Q

What’s Presbycusis?

A
  • A form of sensorineural hearing loss that occurs as a function of age and is usually associated with a decrease in the ability to hear high frequencies
  • Since this loss also appears to be related to exposure to environmental sounds, it’s also called sociocusis
  • It’s caused by hair cell damage resulting from the cumulative effects over time of noise exposure, the ingestion of drugs that damage the hair cells, and age-related degeneration
  • The loss of sensitivity associated with presbycusis, which is greatest at high frequencies, affects males more severely than females
  • Unlike the visual problem of presbyopia, which is an inevitable consequence of aging, presbycusis is more likely to be caused by factors in addition to aging
  • Ex: people in preindustrial cultures, who haven’t been exposed to the noises that accompany industrialization or to drugs that could damage the ear, often don’t experience large decreases in high-frequency hearing in old age
  • This may be why males, who historically have been exposed to more workplace noise than females, as well as to noises associated with hunting and wartime, experience a greater presbycusis effect
  • Although presbycusis may be unavoidable, since most people are exposed over a long period of time to the everyday sounds of our modern environment, there are situations in which people expose their ears to loud sounds that could be avoided
  • This exposure to particularly loud sounds results in noise-induced hearing loss
225
Q

What’s noise-induced hearing loss?

A
  • A form of sensorineural hearing loss that occurs when loud noises cause degeneration of the hair cells
  • This degeneration has been observed in examinations of the cochleas of people who have worked in noisy environments and have willed their ear structures to medical research
  • Damage to the organ of Corti is often observed in these cases
  • Ex: examination of the cochlea of a man who worked in a steel mill indicated that his organ of Corti had collapsed and no receptor cells remained
  • More controlled studies of animals exposed to loud sounds provide further evidence that high-intensity sounds can damage or completely destroy inner hair cells
  • Because of the danger to hair cells posed by workplace noise, the United States Occupational Safety and Health Administration (OSHA) has mandated that workers not be exposed to sound levels greater than 85 dB for an 8-hour work shift
  • Other sources of intense sound can cause hair cell damage leading to hearing loss
  • Ex: if you turn up the volume on your smartphone, you are exposing yourself to what hearing professionals call leisure noise
226
Q

What’s leisure noise?

A
  • Noise associated with leisure activities such as listening to music, hunting, and woodworking
  • Exposure to high levels of leisure noise for extended periods can cause hearing loss
  • Ex: if you turn up the volume on your smartphone, you are exposing yourself to leisure noise
  • Other sources of leisure noise are activities such as recreational gun use, riding motorcycles, playing musical instruments, and working with power tools
  • A number of studies have demonstrated hearing loss in people who listen to music with earphones, play in rock/pop bands, use power tools, and attend sports events
  • The amount of hearing loss depends on the level of sound intensity and the duration of exposure
  • Given the high levels of sound that occur in these activities, such as the levels above 90 dB SPL that can occur for the 3 hours of a hockey game, about 100 dB SPL for music venues such as clubs or concerts, and levels as high as 90 dB SPL while using power tools in woodworking, it isn’t surprising that both temporary and permanent hearing losses are associated with these leisure activities
  • The potential for hearing loss from listening to music at high volume for extended periods of time cannot be overemphasized, because at their highest settings, smartphones reach levels of 100 dB SPL or higher—far above OSHA’s recommended maximum of 85 dB
  • This has led Apple Computer to add a setting to their devices that limits the maximum volume
227
Q

What’s hidden hearing loss?

A
  • Hearing loss that occurs at high sound levels, even though the person’s thresholds, as indicated by the audiogram, are normal (have normal hearing as measured by a standard hearing test)
  • Seen in people with “normal” hearing who nonetheless have trouble hearing in noisy environments
228
Q

What does the standard hearing test measure?

A
  • It involves measuring thresholds for hearing tones across the frequency spectrum
  • The person sits in a quiet room and is instructed to indicate when he or she hears very faint tones being presented by the tester
  • The results of this test can be plotted as thresholds covering a range of frequencies (like the audibility curve) or as an audiogram (a plot of hearing loss VS frequency)
  • “Normal” hearing is indicated by a horizontal function at 0 dB on the audiogram, indicating no deviation from the normal standard
  • This hearing test, along with the audiograms it produces, has been called the gold standard of hearing testing
  • One reason for the popularity of this test is that it’s thought to indicate hair cell functioning
  • But for hearing complex sounds like speech, especially under noisy conditions such as at a party or in the noise of city traffic, the auditory nerve fibers that transmit signals from the cochlea are also important
  • However, auditory nerve fibers can be permanently damaged even when behavioral thresholds for quiet sounds remain normal
  • Thus, a normal audiogram doesn’t necessarily indicate normal auditory functioning
  • This is why hearing loss due to nerve fiber damage has been described as “hidden” hearing loss
  • Hidden hearing loss can be lurking in the background, causing serious problems in day-to-day functioning that involves hearing in noisy environments
229
Q

Describe how Sharon Kujawa and Charles Liberman (2009) determined the importance of having intact auditory nerve fibers through experiments on the effect of noise on hair cells and auditory nerve fibers in the mouse

A
  • They exposed the mice to a 100-dB SPL noise for 2 hours and then measured their hair cell and auditory nerve functioning using physiological techniques
  • One day after the noise exposure, hair cell function was decreased below normal
  • However, by 8 weeks after the noise exposure, hair cell function had returned almost to normal
  • For the auditory nerve fibers, function was also decreased right after the noise, but unlike the hair cells, auditory nerve function never returned to normal
  • The response of nerve fibers to low-level sounds did recover completely, but the response to high-level sounds, like the 75-dB tone, remained below normal
  • This lack of recovery reflects the fact that the noise exposure had permanently damaged some of the auditory nerve fibers, particularly those that represent information about high sound levels
  • It’s thought that similar effects occur in humans, so that even when people have normal sensitivity to low-level sounds and therefore have “clinically normal” hearing, the damaged auditory nerve fibers are responsible for problems hearing speech in noisy environments
  • Even though some auditory nerve fibers were permanently damaged, the behavioral thresholds to quiet sounds had returned to normal
  • Thus, a normal audiogram doesn’t necessarily indicate normal auditory functioning
  • This is why hearing loss due to nerve fiber damage has been described as “hidden” hearing loss
230
Q

What’s an audiogram?

A

Plot of hearing loss versus frequency

231
Q

Describe how Lynne Werner Olsho and coworkers (1988) determined infants’ audibility curves

A
  • An infant is fitted with earphones and sits on the parent’s lap
  • An observer, sitting out of view of the infant, watches the infant through a window
  • A light blinks on, indicating that a trial has begun, and a tone is either presented or not
  • The observer’s task is to decide whether the infant heard the tone
  • For observers to tell whether the infant has heard a tone, they look for responses such as eye movements, changes in facial expression, a wide-eyed look, a turn of the head, or changes in activity level
  • These judgments resulted in a psychometric function for a 2,000-Hz tone, plotting the percentage of trials on which the observer reported that the infant heard the tone against the tone’s intensity
  • Observers only occasionally indicated that the 3-month-old infants had heard a tone that was presented at low intensity or not at all
  • They were more likely to say that the infant had heard the tone when the tone was presented at high intensity
  • The infant’s threshold was determined from this curve, and the results from a number of other frequencies were combined to create audibility functions
  • The curves for 3- and 6-month-olds and adults indicate that infant and adult audibility functions look similar and that by 6 months of age the infant’s threshold is within about 10 to 15 dB of the adult threshold
232
Q

Describe Anthony DeCasper and William Fifer’s (1980) study showing that newborns can identify sounds they have heard before

A
  • They demonstrated this capacity in newborns by showing that 2-day-old infants will modify their sucking on a nipple in order to hear the sound of their mother’s voice
  • They first observed that infants usually suck on a nipple in bursts separated by pauses
  • They fitted infants with earphones and let the length of the pause in the infant’s sucking determine whether the infant heard a recording of the mother’s voice or a recording of a stranger’s voice
  • For 1/2 of the infants, long pauses activated the tape of the mother’s voice, and short pauses activated the tape of the stranger’s voice
  • For the other 1/2, these conditions were reversed
  • They found that the babies regulated the pauses in their sucking so that they heard their mother’s voice more than the stranger’s voice
  • This is a remarkable accomplishment for a 2-day-old, especially because most had been with their mothers for only a few hours between birth and the time they were tested
  • They suggested that newborns recognized their mother’s voice because they had heard the mother talking during development in the womb
  • This suggestion is supported by the results of another experiment, in which DeCasper and M. J. Spence (1986) had one group of pregnant women read from Dr. Seuss’s book The Cat in the Hat and another group read the same story with the words cat and hat replaced with dog and fog
  • When the children were born, they regulated the pauses in their sucking in a way that caused them to hear the version of the story their mother had read when they were in the womb
  • Moon and coworkers (1993) obtained a similar result by showing that 2-day-old infants regulated their sucking to hear a recording of their native language rather than a foreign language
233
Q

Describe Barbara Kisilevsky and coworkers’ (2003) study on fetuses recognizing sounds in the womb

A
  • They presented loud (95-dB) recordings of the mother reading a 2-minute passage and a stranger reading a 2-minute passage through a loudspeaker held 10 cm above the abdomen of full-term pregnant women
  • When they measured the fetus’s movement and heart rate as these recordings were being presented, they found that the fetus moved more in response to the mother’s voice, and that heart rate increased in response to the mother’s voice but decreased in response to the stranger’s voice
  • Kisilevsky concluded from these results that fetal voice processing is influenced by experience, just as the results of earlier experiments had suggested
234
Q

What’s the acoustic shadow?

A
  • The shadow created by the head that decreases the level of high-frequency sounds on the opposite side of the head
  • The acoustic shadow is the basis of the localization cue of interaural level difference
235
Q

Describe how we can understand why an ILD occurs for high frequencies but not for low frequencies by drawing an analogy between sound waves and water waves

A
  • Consider a situation in which small ripples in the water are approaching a boat
  • Because the ripples are small compared to the boat, they bounce off the side of the boat and go no further
  • Now imagine the same ripples approaching cattails (small plants sticking out of the water)
  • Because the distance between the ripples is large compared to the stems of the cattails, the ripples are hardly disturbed and continue on their way
  • This illustrates that an object has a large effect on the wave if it is larger than the distance between the waves (as occurs when short high-frequency sound waves hit the head), but has a small effect if it is smaller than the distance between the waves (as occurs for longer low-frequency sound waves)
  • For this reason, the ILD is an effective cue for location only for high-frequency sounds (a rough wavelength calculation below makes the size comparison concrete)
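A rough worked example, assuming a sound speed of about 343 m/s and a head width of roughly 18 cm (both figures are approximate and only for illustration):

```latex
\lambda = \frac{v}{f}: \qquad \lambda_{6000\ \mathrm{Hz}} = \frac{343\ \mathrm{m/s}}{6000\ \mathrm{Hz}} \approx 5.7\ \mathrm{cm}\ (\ll 18\ \mathrm{cm}), \qquad \lambda_{200\ \mathrm{Hz}} = \frac{343\ \mathrm{m/s}}{200\ \mathrm{Hz}} \approx 1.7\ \mathrm{m}\ (\gg 18\ \mathrm{cm})
```

A 6000-Hz wave is much smaller than the head, so the head casts an acoustic shadow and an ILD results; a 200-Hz wave is much larger than the head, bends around it, and produces almost no ILD.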
236
Q

What’s interaural time difference (ITD)?

A
  • When a sound is positioned closer to one ear than to the other, the sound reaches the close ear slightly before reaching the far ear, so there’s a difference in the time of arrival at the 2 ears
  • The ITD provides a cue for sound localization
  • If the source is located directly in front of the listener, the distance to each ear is the same; the sound reaches the left and right ears simultaneously, so the ITD is zero
  • However, if a source is located off to one side, the sound reaches the nearer ear before it reaches the farther ear
  • Because the ITD becomes larger as sound sources are located more to the side, the magnitude of the ITD can be used as a cue to determine a sound’s location (a rough upper bound on ITD size is worked out below)
  • Behavioral experiments show that ITD is most effective for determining the locations of low-frequency sounds (Yost & Zhong, 2014) and ILD is most effective for high-frequency sounds, so between them they cover the frequency range for hearing
  • However, because most sounds in the environment contain low-frequency components, ITD is the dominant binaural cue for hearing
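As a rough upper bound, assuming the sound path from one ear to the other around the head is about 22 cm and the speed of sound is about 343 m/s (approximate, illustrative figures):

```latex
\mathrm{ITD}_{\max} \approx \frac{0.22\ \mathrm{m}}{343\ \mathrm{m/s}} \approx 640\ \mu\mathrm{s}
```

So the ITDs the auditory system has to resolve range from 0 μs for a source straight ahead up to a few hundred microseconds for a source directly to one side.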
237
Q

Which one of the binaural cues for hearing is the dominant one?

A
  • Interaural time difference (ITD)
  • Because most sounds in the environment contain low-frequency components
238
Q

What’s the Cone of Confusion?

A
  • While the time and level differences provide information that enables people to judge location along the azimuth coordinate, they provide ambiguous information about the elevation of a sound source
  • Because the time and level differences can be the same at a number of different elevations, they cannot reliably indicate the elevation of the sound source
  • This ambiguity arises both for sources directly in front of the listener (ex: a source directly in front and one directly behind both produce an ITD and ILD of zero) and for sources off to the side
  • These places of ambiguity are illustrated by the cone of confusion
  • This is a surface in the shape of a cone that extends out from the ear
  • All points on the surface of this cone have the same ILD and ITD
  • This shows that there are many locations in space where 2 sounds could result in the same ILD and ITD -> so location information provided by these cues is ambiguous
239
Q

The ambiguous nature of the information provided by the ILD and ITD at different elevations means that another source of information is needed to locate sounds along the elevation coordinate, which is provided by what?

A

This information is provided by spectral cues

240
Q

What are spectral cues?

A
  • In hearing, the distribution of frequencies reaching the ear that are associated with specific locations of a sound
  • Spectral cues work best for judging elevation, especially for spectra extending to higher frequencies
  • The differences in frequencies are caused by interaction of sound with the listener’s head and pinnae
  • Cues in which information for localization is contained in differences in the distribution (or spectrum) of frequencies that reach each ear from different locations
  • These differences are caused by the fact that before the sound stimulus enters the auditory canal, it is reflected from the head and within the various folds of the pinnae
  • The effect of this interaction with the head and pinnae has been measured by placing small microphones inside a listener’s ears and comparing frequencies from sounds that are coming from different directions
  • Ex: consider the frequencies picked up by the microphone when a broadband sound (one containing many frequencies) is presented at elevations of 15 degrees above the head and 15 degrees below the head
  • Sounds coming from these 2 locations would result in the same ILD and ITD because they are the same distance from the left and right ears, but differences in the way the sounds bounce around within the pinna create different patterns of frequencies for the 2 locations
  • The importance of the pinnae for determining elevation has been demonstrated by showing that smoothing out the nooks and crannies of the pinnae with molding compound makes it difficult to locate sounds along the elevation coordinate
241
Q

Describe how Paul Hofman and coworkers (1998) studied how localization of sounds along the elevation coordinate can be affected by using a mold to change the inside contours of the pinnae

A
  • They determined how localization changes when the mold is worn for several weeks, and then what happens when the mold is removed
  • After measuring initial performance, Hofman fitted his listeners with molds that altered the shape of the pinnae and therefore changed the spectral cue
  • They found that localization performance was poor for the elevation coordinate immediately after the mold was inserted, but locations along the azimuth coordinate could still be judged accurately
  • This is exactly what we would expect if binaural cues are used for judging azimuth location and spectral cues are responsible for judging elevation locations
  • He continued his experiment by retesting localization as his listeners continued to wear the molds
  • Over time, localization performance improved, until by 19 days localization had become reasonably accurate
  • Apparently, the listeners had learned, over a period of weeks, to associate new spectral cues with different directions in space
  • It would be logical to expect that once adapted to the new set of spectral cues created by the molds, localization performance would suffer when the molds were removed
  • However, localization remained excellent immediately after removal of the ear molds
  • Apparently, training with the molds created a new set of correlations between spectral cues and location, but the old correlation was still there as well
  • One way this could occur is if different sets of neurons were involved in responding to each set of spectral cues, just as separate brain areas are involved in processing different languages in people who have learned a second language as adults
242
Q

Other than ILDs, ITDs and spectral cues, what else helps us localize sounds?

A
  • In real-world listening, we also move our heads, which provides additional ILD, ITD, and spectral information that helps minimize the effect of the cone of confusion and helps locate continuous sounds
  • Vision also plays a role in sound localization, as when you hear talking and see a person making gestures and lip movements that match what you are hearing
  • The richness of the environment and our ability to actively search for information help us zero in on a sound’s location
243
Q

What’s the Jeffress Neural Coincidence Model?

A
  • The neural mechanism of auditory localization that proposes that neurons are wired to each receive signals from the 2 ears, so that different neurons fire to different interaural time differences (ITD)
  • Jeffress described this as operating through a circuit in which a series of neurons receives input from axons coming from the 2 ears (ex: left ear -> 1 2 3 4 5 6 7 8 9 <- right ear)
  • If the sound source is directly in front of the listener, the sound reaches the left and right ears simultaneously, and signals from the left and right ears start out together
  • As each signal travels along its axon, it stimulates each neuron in turn
  • At the beginning of the journey, neurons receive signals from only the left ear (neurons 1, 2, 3) or the right ear (neurons 9, 8, 7), but not both, and they do not fire
  • But when the signals both reach neuron 5 together, that neuron fires
  • This neuron and the others in this circuit are called coincidence detectors, because they only fire when both signals coincide by arriving at the neuron simultaneously
  • If the sound comes from the right, it reaches the right ear first, which gives the signal from the right ear a head start, so it travels all the way to neuron 3 before it meets up with the signal from the left ear
  • Neuron 3 therefore detects ITDs that occur when the sound is coming from a specific location on the right
  • The other neurons in the circuit fire to locations corresponding to other ITDs
  • We can therefore call these coincidence detectors ITD detectors, since each one fires best to a particular ITD
  • The Jeffress model therefore proposes a circuit that contains a series of ITD detectors, each tuned to respond best to a specific ITD
  • According to this idea, the ITD will be indicated by which ITD neuron is firing
  • This has been called a “place code” because the ITD is indicated by the place (which neuron) where the activity occurs (a toy sketch of this delay-line logic follows below)
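The delay-line logic can be captured in a few lines of code. This is only a toy sketch, not a model of real neurons: the detector count and the 50-microsecond step delay are assumptions chosen so the output matches the neuron-5/neuron-3 example in this card.

```python
# Toy sketch of the Jeffress delay-line / coincidence-detector idea.
import numpy as np

N_NEURONS = 9          # detectors 1..9; the left-ear axon enters at 1, the right-ear axon at 9
STEP_DELAY = 50e-6     # assumed conduction delay between neighboring detectors (50 microseconds)

def firing_detector(itd: float) -> int:
    """Return the coincidence detector whose two inputs arrive at the same time.

    itd > 0 means the right ear received the sound first (source on the right);
    itd < 0 means the left ear led; itd = 0 means the source is straight ahead.
    """
    k = np.arange(1, N_NEURONS + 1)
    # The later-starting signal is delayed by the ITD; each signal then sweeps
    # along the array, accumulating STEP_DELAY per detector it passes.
    left_arrival = max(itd, 0.0) + (k - 1) * STEP_DELAY            # left signal sweeps 1 -> 9
    right_arrival = max(-itd, 0.0) + (N_NEURONS - k) * STEP_DELAY  # right signal sweeps 9 -> 1
    return int(k[np.argmin(np.abs(left_arrival - right_arrival))])

print(firing_detector(0.0))      # 5: a source straight ahead drives the middle detector
print(firing_detector(200e-6))   # 3: right ear leads, so coincidence happens nearer the left-ear entry point
```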
244
Q

What are coincidence detectors?

A
  • Neurons in the Jeffress neural coincidence model, which was proposed to explain how neural firing can provide information regarding the location of a sound source
  • A neural coincidence detector fires when signals from the left and right ears reach the neuron simultaneously
  • Different neural coincidence detectors fire to different values of interaural time difference
245
Q

What are ITD detectors?

A
  • Interaural time difference detector
  • Neurons in the Jeffress neural coincidence model that fire when signals reach them from the left and right ears
  • Each ITD detector is tuned to respond to a specific time delay between the 2 signals, and so provides information about possible locations of a sound source
246
Q

What’s one way to describe the properties of ITD neurons?

A
  • To measure ITD tuning curves, which plot the neuron’s firing rate against the ITD
  • Ex: recording from neurons in the brainstem of the barn owl, which has excellent auditory localization abilities, has revealed narrow tuning curves that respond best to specific ITDs
  • The neurons associated with the curves on the left fire when the sound reaches the left ear first, and the ones on the right fire when sound reaches the right ear first
  • These are the tuning curves that are predicted by the Jeffress model, because each neuron responds best to a specific ITD and the response drops off rapidly for other ITDs
  • The place code proposed by the Jeffress model, with its narrow tuning curves, works for owls and other birds, but the situation is different for mammals
247
Q

What happens when we try to apply the Jeffress Neural Coincidence Model to mammals?

A
  • The results of research in which ITD tuning curves are recorded from mammals may appear, at first glance, to support the Jeffress model
  • Ex: an ITD tuning curve of a neuron in the gerbil’s superior olivary nucleus has a peak at an ITD of about 200 microseconds and drops off on either side
  • However, when we plot the owl curve on the same graph we can see that the gerbil curve is much broader than the owl curve
  • In fact, the gerbil curve is so broad that it peaks at ITDs outside the range of ITDs that a gerbil would actually encounter in its natural environment
248
Q

Because of the broadness of the ITD curves in mammals, it has been proposed that coding for sound localization is based on what?

A
  • Based on broadly tuned neurons
  • According to this idea, there are broadly tuned neurons in the right hemisphere that respond when sound is coming from the left and broadly tuned neurons in the left hemisphere that respond when sound is coming from the right
  • The location of a sound is indicated by the relative responses of these 2 types of broadly tuned neurons (a toy sketch of this readout follows below)
  • This type of coding resembles population coding, in which information in the nervous system is based on the pattern of neural responding
  • This is also how the visual system signals different wavelengths of light, in which wavelengths are signaled by the pattern of response of 3 different cone pigments
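A toy illustration of this two-broad-channel readout, with invented sigmoid tuning curves (the curve shapes and the 150-microsecond slope constant are assumptions for illustration, not fitted to any data):

```python
# Toy sketch: location read out from the relative activity of two broadly tuned channels.
import numpy as np

def left_hemisphere_response(itd_us: float) -> float:
    """Broad channel that responds most when the sound comes from the right (right ear leads, ITD > 0)."""
    return 1.0 / (1.0 + np.exp(-itd_us / 150.0))

def right_hemisphere_response(itd_us: float) -> float:
    """Mirror-image channel that responds most when the sound comes from the left (ITD < 0)."""
    return 1.0 / (1.0 + np.exp(itd_us / 150.0))

def location_signal(itd_us: float) -> float:
    """Relative activity of the two channels: ~0 straight ahead, positive for right, negative for left."""
    return left_hemisphere_response(itd_us) - right_hemisphere_response(itd_us)

for itd in (-400, -100, 0, 100, 400):   # ITDs in microseconds
    print(itd, round(location_signal(itd), 2))
```

The point is only that location is carried by the pattern of activity across the two channels rather than by which single narrowly tuned neuron fires, in the spirit of population coding.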
249
Q

The neural mechanism of binaural localization is based on _____ neurons for birds and _____ neurons for mammals

A
  • The neural mechanism of binaural localization is based on sharply tuned neurons for birds and broadly tuned neurons for mammals
  • The code for birds is a place code because the ITD is indicated by firing of neurons at a specific place in the nervous system
  • The code for mammals is a population code because the ITD is determined by the firing of many broadly tuned neurons working together
250
Q

The neural basis of binaural localization begins where?

A
  • It begins along the pathway from the cochlea to the brain, in the superior olivary nucleus, which is the first place that receives signals from the left and right ears
  • A great deal of processing for location occurs as signals are traveling from the ear to the cortex
251
Q

Describe examples of studies on sound localization in the cortex

A
  • Dewey Neff and coworkers (1956) placed cats about 8 ft away from 2 food boxes: one about 3 ft to the left, and one about 3 ft to the right
  • The cats were rewarded with food if they approached the sound of a buzzer located behind one of the boxes
  • Once the cats learned this localization task, the auditory areas on both sides of the cortex were lesioned, and although the cats were then trained for more than 5 months, they were never able to relearn how to localize the sounds
  • Based on this finding, Neff concluded that an intact auditory cortex is necessary for accurate localization of sounds in space
  • Fernando Nodal and coworkers (2010) showed that lesioning the primary auditory cortex in ferrets decreased, but did not totally eliminate, the ferrets’ ability to localize sounds
  • Another demonstration that the auditory cortex is involved in localization was provided by Shveta Malhotra and Stephen Lomber (2007), who showed that deactivating auditory cortex in cats by cooling the cortex results in poor localization
252
Q

Describe the What and Where Auditory Pathways

A
  • On either side of the auditory cortex, there’s an anterior belt area and a posterior belt area
  • Both of these areas are auditory, but have different functions
  • Research has shown that these 2 parts of the belt are the starting points for 2 auditory pathways, a what pathway, which extends from the anterior belt to the front of the temporal lobe and then to the frontal cortex, and a where pathway, which extends from the posterior belt to the parietal lobe and then to the frontal cortex
  • The what pathway is associated with perceiving sounds and the where pathway with locating sounds
  • The idea of pathways serving what and where functions is a general principle that occurs for both hearing and vision
  • Evidence for what and where auditory functions in humans has been provided by using brain scanning to show that what and where tasks activate different brain areas in humans
253
Q

What are the functions of the anterior and posterior belts?

A

The anterior belt is involved in perceiving complex sounds and patterns of sound and the posterior belt is involved in localizing sounds

254
Q

What’s direct sound?

A
  • Sound that’s transmitted directly from a sound source to the ears
  • Ex: If you’re listening to someone playing a guitar on an outdoor stage, your perception is based mainly on direct sound
255
Q

What’s indirect sound?

A
  • Sound that reaches a listener’s ears after being reflected from a surface such as a room’s walls
  • If you’re listening to someone playing a guitar in an auditorium, your perception is based on direct sound, which reaches your ears directly, plus indirect sound, which reaches your ears after bouncing off the auditorium’s walls, ceiling, and floor
256
Q

Why does the fact that sound can reach our ears directly from where the sound is originating and indirectly from other locations create a problem?

A

Because even though the sound originates in one place, the sound reaches the listener from many directions and at slightly different times

257
Q

Describe how research has been conducted on sound reflections and the perception of location

A
  • Research on sound reflections and the perception of location has usually simplified the problem by simulating sound reaching the ears directly from a sound source, followed by a delayed sound from a reflection
  • This simulation is achieved by having people listen to 2 loudspeakers separated in space
  • The speaker on the left is the lead speaker (representing the actual sound source), and the one on the right is the lag speaker (representing a single sound reflection)
  • If a sound is presented in the lead speaker followed by a long delay (tenths of a second), and then a sound is presented in the lag speaker, listeners typically hear 2 separate sounds—one from the left (lead) followed by one from the right (lag)
  • But when the delay between the lead and lag sounds is much shorter, as often occurs in a room, something different happens
  • Even though the sound is coming from both speakers, listeners hear a single sound as coming only from the lead speaker
  • This situation, in which a single sound appears to originate from near the lead speaker, is called the precedence effect because we perceive the sound as coming from near the source that reaches our ears first
258
Q

What’s the precedence effect?

A
  • When 2 identical or very similar sounds reach a listener’s ears separated by a time interval of less than ~50 to 100 ms, the listener hears them as a single sound coming from the location of the sound that arrives first
  • The point of the precedence effect is that a sound source and its lagging reflections are perceived as a single fused sound, except if the delay is too long, in which case the lagging sounds are perceived as echoes
  • This effect governs most of our indoor listening experience
  • In small rooms, the indirect sounds reflected from the walls have a lower level than the direct sound and reach our ears with delays of ~5 to 10 ms
  • In larger rooms, like concert halls, the delays are much longer
  • However, even though our perception of where the sound is coming from is usually determined by the first sound that reaches our ears, the indirect sound, which reaches our ears just slightly later, can affect the quality of the sound we hear
259
Q

What’s architectural acoustics?

A
  • The study of how sounds are reflected in rooms
  • An important concern of architectural acoustics is how these reflected sounds change the quality of the sounds we hear
  • The fact that sound quality is determined by both direct and indirect sound is a major concern of the field of architectural acoustics, which is particularly concerned with how to design concert halls
260
Q

What are the major factors affecting indirect sound?

A
  • The size of the room and the amount of sound absorbed by the walls, ceiling, and floor
  • If most of the sound is absorbed, then there are few sound reflections and little indirect sound
  • If most of the sound is reflected, there are many sound reflections and a large amount of indirect sound
  • Another factor is the shape of the room
  • This determines how sound hits surfaces and the directions in which it is reflected
261
Q

The amount and duration of indirect sound produced by a room is expressed as what?

A
  • Reverberation time
  • This is the time it takes for a sound produced in an enclosed space to decrease to 1/1,000th of its original pressure (equivalently, a decrease in level of 60 dB; see the calculation below)
  • If the reverberation time of a room is too long, sounds become muddled because the reflected sounds persist for too long
  • Ex: in extreme cases, such as cathedrals with stone walls, these delays are perceived as echoes, and it may be difficult to accurately localize the sound source
  • If the reverberation time is too short, music sounds “dead,” and it becomes more difficult to produce high-intensity sounds
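The 1/1,000th-of-the-original-pressure and 60-dB figures are the same statement, by the standard decibel relation for sound pressure:

```latex
20\,\log_{10}\!\left(\frac{1}{1000}\right) = 20 \times (-3) = -60\ \mathrm{dB}
```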
262
Q

What’s known to be the “ideal” reverberation time?

A

2.0 seconds

263
Q

Does an “ideal” reverberation time always predict good acoustics?

A
  • No, an “ideal” reverberation time does not always predict good acoustics
  • Ex: the problems associated with the design of New York’s Philharmonic Hall. When it opened in 1962, Philharmonic Hall had a reverberation time close to the ideal of 2.0 seconds
  • Even so, the hall was criticized for sounding as though it had a short reverberation time, and musicians in the orchestra complained that they could not hear each other
  • These criticisms resulted in a series of alterations to the hall, made over many years, until eventually, when none of the alterations proved satisfactory, the entire interior of the hall was torn out and rebuilt, and the hall was renamed Avery Fisher Hall
  • But that’s not the end of the story, because even after being rebuilt, the acoustics of Avery Fisher Hall were still not considered adequate
  • The hall has since been renamed David Geffen Hall, and plans are being discussed regarding the best way to improve its acoustics
264
Q

What factors, on top of reverberation time, have been identified by Leo Beranek (1996) as being associated with how music is perceived in concert halls?

A
  • Intimacy time: The time between when sound arrives directly from the stage and when the first reflection arrives. This is related to reverberation but involves just comparing the time between the direct sound and the first reflection, rather than the time it takes for many reflections to die down
  • Bass ratio: The ratio of low frequencies to middle frequencies that are reflected from walls and other surfaces
  • Spaciousness factor: The fraction of all of the sound received by a listener that is indirect sound
265
Q

What’s a problem that often occurs with the acoustics in concert halls?

A
  • That the acoustics depend on the number of people attending a performance, because people’s bodies absorb sound
  • Thus, a hall with good acoustics when full could echo when there are too many empty seats
  • To deal with this problem, some places have designed the seat cushions to have the same absorption properties as an “average” person
  • This means that the hall has the same acoustics when empty or full
  • This is a great advantage to musicians, who usually rehearse in an empty hall
266
Q

How does an adjustable acoustic system work?

A
  • An adjustable acoustic system makes it possible to vary a hall’s reverberation time, for example between about 1.4 and 2.6 seconds
  • This is achieved by motors that control the position of the canopy over the stage and various panels, draperies and banners throughout the hall
  • These adjustments make it possible to “tune” the hall for different kinds of music, so short reverberation times can be achieved for singing and longer reverberation times for orchestral music
267
Q

What’s the auditory scene?

A
  • The sound environment, which includes the locations and qualities of individual sound sources
  • The array of sound sources at different locations in the environment
268
Q

What’s the auditory scene analysis (ASA)?

A
  • The process by which the sound stimuli produced by different sources in an auditory scene become perceptually organized into sounds at different locations and into separated streams of sound
  • ASA poses a difficult problem because the sounds from different sources are combined into a single acoustic signal, so it’s difficult to tell which part of the signal is created by which source just by looking at the waveform of the sound stimulus
  • Ex: the guitar, the vocalist, and the keyboard each create their own sound signal, but all of these signals enter the listener’s ear together and so are combined into a single complex waveform
  • Each of the frequencies in this signal causes the basilar membrane to vibrate, but it isn’t obvious what information might be contained in the sound signal to indicate which vibration is created by which sound source
269
Q

Auditory scene analysis considers what 2 situations?

A
  1. The first situation involves simultaneous grouping
    Ex: this occurs for a musical trio, because all of the musicians are playing simultaneously. The question asked in the case of simultaneous grouping is “How can we hear the vocalist and each of the instruments as separate sound sources?”
  2. The second situation is sequential grouping
    Ex: hearing the melody being played by the keyboard as a sequence of notes that are grouped together is an example of sequential grouping, as is hearing the conversation of a person you are talking with in a coffee shop as a stream of words coming from a single source
    - Research on ASA has focused on determining cues or information in both of these situations
270
Q

What’s simultaneous grouping?

A

The situation that occurs when sounds are perceptually grouped together because they occur simultaneously in time

271
Q

What’s sequential grouping?

A

In auditory scene analysis, grouping that occurs as sounds follow one another in time

272
Q

Name the different types of information that are used to analyze auditory scenes

A
  • Location
  • Onset Synchrony
  • Timbre and Pitch
  • Harmonicity
273
Q

Describe how location can be used to analyze auditory scenes

A
  • One way to analyze an auditory scene into its separate components would be to use information about where each source is located
  • According to this idea, you can separate the sound of the vocalist from the sound of the guitar based on localization cues such as the ILD and ITD
  • Thus, when 2 sounds are separated in space, the cue of location helps us separate them perceptually
  • Also, when a source moves, it typically follows a continuous path rather than jumping erratically from one place to another
  • Ex: this continuous movement of sound helps us perceive the sound from a passing car as originating from a single source
  • But the fact that information other than location is also involved becomes obvious when we consider that sounds can be separated even if they are all coming from the same location
  • Ex: we can perceive many different instruments in a composition that’s recorded by a single microphone and played back over a single loudspeaker
274
Q

Describe how onset synchrony can be used to analyze auditory scenes

A
  • Onset time is one of the strongest cues for segregation
  • If 2 sounds start at slightly different times, it is likely that they came from different sources
  • This occurs often in the environment, because sounds from different sources rarely start at exactly the same time
275
Q

Describe how timbre and pitch can be used to analyze auditory scenes

A
  • Sounds that have the same timbre or pitch range are often produced by the same source
  • Ex: A flute doesn’t suddenly take on the timbre of a trombone
  • In fact, the flute and trombone are distinguished not only by their timbres, but also by their pitch ranges
  • The flute tends to play in a high pitch range, and the trombone in a low one
  • These distinctions help the listener decide which sounds originate from which source
276
Q

Describe how harmonicity can be used to analyze auditory scenes

A
  • Periodic sounds consist of a fundamental frequency, plus harmonics that are multiples of the fundamental
  • Because it’s unlikely that several independent sound sources would create a fundamental and the pattern of harmonics associated with it, when we hear a harmonic series we infer that it came from a single source
277
Q

List the Gestalt grouping principles that relate to how we group sequences of sounds that occur over time

A
  • Similarity of Pitch
  • Auditory Continuity
278
Q

Describe the similarity of pitch principle that explains how we group sequences of sounds that occur over time

A
  • Pitch helps us organize sound from a single source in time
  • Similarity comes into play because consecutive sounds produced by the same source usually are similar in pitch
  • That is, they don’t usually jump wildly from one pitch to a very different pitch
  • Musical sequences typically contain small intervals between notes
  • These small intervals cause notes to be grouped together, following the Gestalt law of proximity
  • The perception of a string of sounds as belonging together is called auditory stream segregation
  • Most of the time, principles of auditory grouping like similarity of pitch help us to accurately interpret similar sounds as coming from the same source, because that’s what usually happens in the environment
279
Q

What’s auditory stream segregation?

A
  • The effect that occurs when a series of sounds that differ in pitch or timbre are played so that the tones become perceptually separated into simultaneously occurring independent streams of sound
280
Q

Describe Albert Bregman and Jeffrey Campbell (1971) demonstration of auditory stream segregation

A
  • They demonstrated auditory stream segregation based on pitch by alternating high and low tones
  • When the high-pitched tones were slowly alternated with the low-pitched tones, the tones were heard as part of one stream, one after another: Hi–Lo–Hi–Lo–Hi–Lo
  • But when the tones were alternated very rapidly, the high and low tones became perceptually grouped into 2 auditory streams -> the listener perceived 2 separate streams of sound, one high-pitched and one low-pitched
  • This demonstration shows that stream segregation depends not only on pitch but also on the rate at which tones are presented
  • In other words, when high and low tones are alternated slowly, auditory stream segregation doesn’t occur, so the listener perceives alternating high and low tones; faster alternation results in segregation into high and low streams (a simple synthesis sketch follows below)
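A minimal way to hear this effect for yourself is to synthesize the alternating tones at a slow and a fast rate. The frequencies, durations, and rates below are arbitrary illustrative choices, not Bregman and Campbell’s actual stimulus values:

```python
# Synthesize Hi-Lo-Hi-Lo tone sequences at two alternation rates.
import numpy as np

FS = 44100  # sample rate in Hz

def tone(freq_hz: float, dur_s: float) -> np.ndarray:
    t = np.arange(int(FS * dur_s)) / FS
    return 0.5 * np.sin(2 * np.pi * freq_hz * t)

def alternating_sequence(high_hz=2000, low_hz=400, tone_dur=0.1, n_pairs=10) -> np.ndarray:
    """Concatenate alternating high and low tones, each tone_dur seconds long."""
    pair = np.concatenate([tone(high_hz, tone_dur), tone(low_hz, tone_dur)])
    return np.tile(pair, n_pairs)

slow = alternating_sequence(tone_dur=0.8)  # tends to be heard as one stream: Hi-Lo-Hi-Lo
fast = alternating_sequence(tone_dur=0.1)  # tends to split into a high stream and a low stream

# To listen, write the arrays to WAV files, e.g.:
# from scipy.io import wavfile
# wavfile.write("fast.wav", FS, (fast * 32767).astype(np.int16))
```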
281
Q

Describe how grouping by similarity of pitch can occur

A
  • Grouping by similarity of pitch can be demonstrated with 2 streams of sound that are perceived as separate until their pitches become similar
  • Ex: one stream is a series of repeating notes and the other is a scale that goes up
  • At first the 2 streams are separated, so listeners simultaneously perceive the same note repeating and a scale going up
  • However, when the frequencies of the 2 stimuli become similar, grouping by similarity of pitch occurs, and perception changes to a back-and-forth “galloping” or jumping between the tones of the 2 streams
  • Then, as the scale continues upward so the frequencies become more separated, the 2 sequences are again perceived as separated
282
Q

What’s the scale illusion?

A
  • AKA melodic channeling
  • An illusion that occurs when successive notes of a scale are presented alternately to the left and right ears
  • Even though each ear receives notes that jump up and down in frequency, smoothly ascending or descending scales are heard in each ear
  • Another example of how similarity of pitch causes grouping
  • Diana Deutsch (1975, 1996) demonstrated this effect by presenting 2 sequences of notes simultaneously through earphones, one to the right ear and one to the left. The notes presented to each ear jump up and down and do not create a scale. However, Deutsch’s listeners perceived smooth sequences of notes in each ear, with the higher notes in the right ear and the lower ones in the left ear. Even though each ear received both high and low notes, grouping by similarity of pitch caused listeners to group the higher notes in the right ear (which started with a high note) and the lower notes in the left ear (which started with a low note)
  • In Deutsch’s experiment, the perceptual system applies the principle of grouping by similarity to the artificial stimuli presented through earphones and creates the illusion that smooth sequences of notes are being presented to each ear
283
Q

Describe the auditory continuity principle that explains how we group sequences of sounds that occur over time

A
  • Sounds that stay constant or that change smoothly are often produced by the same source
  • This property of sound leads to a principle that resembles the Gestalt principle of good continuation for vision
  • Sound stimuli with the same frequency or smoothly changing frequencies are perceived as continuous even when they are interrupted by another stimulus
  • Richard Warren and coworkers (1972) demonstrated auditory continuity by presenting bursts of tone interrupted by gaps of silence
  • Listeners perceived these tones as stopping during the silence
  • But when Warren filled in the gaps with noise, listeners perceived the tone as continuing behind the noise
  • This demonstration is analogous to the demonstration of visual good continuation illustrated by coiled rope
  • Just as the rope is perceived as continuous even when it is covered by another coil of the rope, a tone can be perceived as continuous even though it is interrupted by bursts of noise
285
Q

Describe how experience can explain how we group sequences of sounds that occur over time

A
  • The effect of past experience on the perceptual grouping of auditory stimuli can be demonstrated by presenting the melody of a familiar song
  • Ex: presenting participants with the notes for the song “Three Blind Mice,” but with the notes jumping from one octave to another
  • When people first hear these notes, they find it difficult to identify the song
  • But once they have heard the song as it was meant to be played, they can follow the melody in the octave-jumping version
  • This is an example of the operation of a melody schema
286
Q

What’s a melody schema?

A
  • A representation of a familiar melody that is stored in a person’s memory
  • Existence of a melody schema makes it more likely that the tones associated with a melody will be perceptually grouped
  • When people don’t know that a melody is present, they have no access to the schema and therefore have nothing with which to compare the unknown melody
  • But when they know which melody is present, they compare what they hear to their stored schema and perceive the melody
287
Q

What are 2 important messages that we can take away from the principles of auditory grouping?

A
  1. Because the principles are based on our past experiences, and what usually happens in the environment, their operation is an example of prediction at work. Because prediction is so central to vision, it may be no surprise that it’s also involved in hearing. Just as principles of visual organization provide information about what is probably happening in a visual scene, so the principles of auditory organization provide information about what is probably happening in an auditory scene
  2. Each perceptual principle alone is not foolproof: basing our perceptions on just one principle can lead to error, as in the case of the scale illusion, which is purposely arranged so that similarity of pitch creates an erroneous perception. However, in most naturalistic situations, we base our perceptions on a number of these cues working together, and predictions about what is “out there” become stronger when supported by multiple sources of evidence
288
Q

What are multisensory interactions?

A
  • The use of a combination of senses
  • Ex: We see people’s lips move as we listen to them speak; our fingers feel the keys of a piano as we hear the music the fingers are creating; we hear a screeching sound and turn to see a car coming to a sudden stop
  • One area of multisensory research is concerned with one sense “dominating” the other (ex: ventriloquism effect/visual capture)
  • Visual capture and the two-flash illusion, although both impressive examples of auditory–visual interaction, result in perceptions that don’t match reality
  • But sound and vision occur together all the time in real-life situations, and when they do, they often complement each other, as when we’re having a conversation
289
Q

What’s the ventriloquism effect/visual capture?

A
  • When sound is heard coming from a seen location, even though it’s actually originating somewhere else
  • Example of vision dominating hearing
  • Ex: It occurs when sounds coming from one place (the ventriloquist’s mouth) appear to come from another place (the dummy’s mouth). Movement of the dummy’s mouth “captures” the sound
  • Ex: In movie theaters before the introduction of digital surround sound. An actor’s dialogue was produced by a speaker located on one side of the screen but the image of the actor who was talking was located in the center of the screen, many feet away. Despite this separation, moviegoers heard the sound coming from its seen location (the image at the center of the screen) rather than from where it was actually produced (the speaker to the side of the screen). Sound originating from a location off to the side was captured by vision
290
Q

What’s the two-flash illusion?

A
  • An illusion that occurs when one flash of light is presented, accompanied by 2 rapidly presented tones
  • Presentation of the 2 tones causes the observer to perceive 2 flashes of light
  • When a single dot is flashed onto a screen, the participant perceives one flash
  • When a single beep is presented at the same time as the dot, the participant still perceives one flash
  • However, if the single dot is accompanied by 2 beeps, the participant sees 2 flashes, even though the dot was flashed only once
  • The mechanism responsible for this effect is still being researched, but the important finding for our purposes is that sound creates a visual effect
291
Q

What’s speechreading?

A
  • AKA lipreading
  • Process by which deaf people determine what people are saying by observing their lip and facial movements
  • When you’re having a conversation with someone, you’re not only hearing what the person is saying, but you may also be watching his or her lips
  • Watching people’s lip movements makes it easier to understand what they are saying, especially in a noisy environment
  • This is why theater lighting designers often go to great lengths to be sure that the actors’ faces are illuminated
  • Lip movements, whether in everyday conversations or in the theater, provide information about what sounds are being produced
292
Q

Describe how the idea that there are connections between vision and hearing is also reflected in the brain

A
  • This is reflected in the interconnection of the different sensory areas of the brain
  • These connections between sensory areas can produce coordinated receptive fields (RFs); ex: a neuron in the monkey’s parietal lobe responds to both visual stimuli and sound
  • Ex: a neuron can respond when an auditory stimulus is presented in an area that’s below eye level and to the left and when a visual stimulus originates from about the same area
  • There’s a great deal of overlap between these 2 receptive fields
  • It’s easy to see that neurons such as this would be useful in our multisensory environment
  • When we hear a sound coming from a specific location in space and also see what is producing the sound—a musician playing or a person talking—the multisensory neurons that fire to both sound and vision help us form a single representation of space that involves both auditory and visual stimuli
  • Another example of cross-talk in the brain occurs when the primary receiving area associated with one sense is activated by stimuli that are usually associated with another sense
  • Ex: some blind people who used a technique called echolocation to locate objects and perceive shapes in the environment
293
Q

Describe the use of Echolocation in Blind People

A
  • Daniel Kish, who has been blind since he was 13 months old, finds his way around by clicking his tongue and listening to the echoes that bounce off of nearby objects
  • This technique, called echolocation, enables Kish to identify the location and size of objects while walking
  • He uses the tongue clicks to locate nearby objects by echolocation and a cane to detect details of the terrain
  • To study the effect of echolocation on the brain, Lore Thaler and coworkers (2011) had 2 expert echolocators create their clicking sounds as they stood near objects, and recorded the sounds and resulting echoes with small microphones placed in the ears
  • To determine how these sounds would activate the brain, they recorded brain activity using fMRI as the expert echolocators and sighted control participants listened to the recorded sounds and their echoes
  • They found that the sounds activated the auditory cortex in both the blind and sighted participants
  • However, the visual cortex was also strongly activated in the echolocators but was silent in the control participants
  • Apparently, the visual area is activated because the echolocators are having what they describe as “spatial” experiences
  • In fact, some echolocators lose their awareness of the auditory clicks as they focus on the spatial information the echoes are providing
  • This report that echoes are transformed into spatial experiences inspired Liam Norman and Lore Thaler (2019) to use fMRI to measure the location of activity in expert echolocators’ visual cortex as they listened to echoes coming from different locations
  • What they found was that echoes coming from a particular position in space tended to activate a particular area in the visual cortex
  • This link between the location of an echo and location on the visual cortex didn’t occur for control groups of blind non-echolocators and sighted participants
  • What their result means, according to Norman and Thaler, is that learning to echolocate causes reorganization of the brain, and the visual area is involved because it normally contains a “retinotopic map” in which each point on the retina is associated with a specific location of activity in the visual cortex
  • The maps for echolocation in the echolocator’s visual cortex are therefore similar to the maps of visual locations in sighted people’s visual cortex
  • Thus, when sound is used to achieve spatial awareness, the visual cortex becomes involved
294
Q

Describe the findings of the experiment by Mor Regev and coworkers (2013), who recorded the fMRI response of participants as they either listened to a 7-minute spoken story or read the words of the story presented at exactly the same rate that the words had been spoken

A
  • They found that listening to the story activated the auditory receiving area in the temporal lobe and that reading the written version activated the visual receiving area in the occipital lobe
  • But moving up to the superior temporal gyrus in the temporal lobe, which is involved in language processing, they found that the responses from listening and from reading were synchronized in time
  • This area of the brain is therefore responding not to “hearing” or “vision,” but to the meaning of the messages created by hearing or vision
  • The synchronized responding didn’t occur in a control group that was exposed to unidentifiable scrambled letters or sounds
295
Q

Describe the case of Mr. I

A

A painter who became colour blind at the age of 65 after suffering a concussion in a car accident

296
Q

What’s cerebral achromatopsia?

A
  • A loss of colour vision (type of colour blindness) caused by damage to the cortex
  • Damage to the ventro-medial occipital and temporal lobes
297
Q

Most cases of total colour blindness or of colour deficiency (partial colour blindness) occur when?

A

At birth due to the genetic absence of one or more types of cone receptors

298
Q

Are people who are born with partial colour blindness (colour deficiency) disturbed by their colour deficiency?

A

People who are born partially colour blind are not disturbed by their decreased colour perception compared to “normal” because they have never experienced colour as a person with normal colour vision does

299
Q

What do people with total colour blindness (ex: Mr. I) often complain about?

A

That it’s sometimes difficult to distinguish one object from another

300
Q

What are the functions of colour vision?

A
  • Colour serves an aesthetic function (ex: having a favourite colour) -> some colours may be more pleasing than others
  • Colour serves important signalling functions (both natural and contrived by humans)
  • Ex: the natural and man-made world provides many colour signals that help us identify and classify things (ex: we know a banana is ripe when it has turned yellow and we know to stop at a red light)
  • Colour facilitates perceptual organization (its role in perceptual organization is crucial to the survival of many species)
  • Ex: a monkey with good colour vision easily detects red fruit against a green background, but a colour-blind monkey would find it more difficult to find the fruit
  • Colour vision thus enhances the contrast of objects that, if they didn’t appear coloured, would be more difficult to perceive
  • This link between good color vision and the ability to detect colored food has led to the proposal that monkey and human color vision may have evolved for the express purpose of detecting fruit
  • Colour vision also helps us recognize and identify objects (especially familiar ones)
  • Colour also helps us recognize natural scenes and rapidly perceive the gist of scenes
  • Color has also been suggested as a cue to emotions signaled by facial expressions
  • This was demonstrated by Christopher Thorstenson and coworkers (2019) who found that when asked to rate the emotions of ambiguous-emotion faces, participants were more likely to rate the face as expressing disgust when colored green and as expressing anger when red
  • Another function of colour -> helping regulate circadian rhythms (the short-wavelength blue content of light is a strong signal the body’s clock uses to track day versus night)
301
Q

Describe the study by James Tanaka and Lynn Presnell (1999) on the function of colour vision for recognizing objects

A
  • They asked observers to identify objects, which appeared either in their normal colors, like a yellow banana, or in inappropriate colors, like a purple banana
  • The result was that observers recognized the appropriately colored objects more rapidly and accurately
  • Thus, knowing the colors of familiar objects helps us to recognize these objects
302
Q

Describe Newton’s prism experiment

A
  • First, he made a hole in a window shade, which let a beam of sunlight enter the room
  • When he placed Prism 1 in its path, the beam of white-appearing light was split into the components of the visual spectrum
  • At the time, many people thought that prisms (which were common novelties) added color to light
  • Newton, however, thought that white light was a mixture of differently colored lights and that the prism separated the white light into its individual components
  • To support this hypothesis, Newton next placed a board in the path of the differently colored beams
  • Holes in the board allowed only particular beams to pass through while the rest were blocked. Each beam that passed through the board then went through a second prism (ex: prisms 2, 3, and 4, for the red, yellow, and blue rays of light)
303
Q

What were Newton’s findings about the light that passed through the second prism?

A
  1. The 2nd prism didn’t change the colour appearance of any light that passed through it
    - Ex: a red beam continued to look red after it passed through the 2nd prism
    - To Newton, this meant that unlike white light, the individual colors of the spectrum are not mixtures of other colors
  2. The degree to which beams from each part of the spectrum were “bent” by the second prism was different
    - Red beams were bent only a little, yellow beams were bent a bit more, and violet beams were bent the most
    - From this observation, Newton concluded that light in each part of the spectrum is defined by different physical properties and that these physical differences give rise to our perception of different colors
    - Newton thought that light was made up of particles, but we now know that what distinguishes the parts of the spectrum is their wavelength
304
Q

What are the physical properties of the different wavelengths of light?

A
  • Wavelengths from ~400 to 450 nm appear violet
  • 450 to 490 nm appear blue
  • 500 to 575 nm appear green
  • 575 to 590 nm appear yellow
  • 590 to 620 nm appear orange
  • 620 to 700 nm appear red
305
Q

The colors of objects are largely determined by what?

A

By the wavelengths of light that are reflected from the objects into our eyes

306
Q

What are chromatic colours?

A
  • Color with hue, such as blue, yellow, red, or green
  • These occur when some wavelengths are reflected more than others (selective reflection)
307
Q

What’s selective reflection?

A
  • When an object reflects some wavelengths of the spectrum more than others
  • Ex: A red sheet of paper reflects long wavelengths of light and absorbs short and medium wavelengths. As a result, only the long wavelengths reach our eyes & the paper appears red
308
Q

What are achromatic colours?

A
  • Colours such as white, gray, and black
  • These occur when light is reflected equally across the spectrum
  • Ex: a white sheet of paper reflects all wavelengths of light equally so it appears white
309
Q

What light contains all of the wavelengths of the spectrum?

A

White light

310
Q

What are reflectance curves?

A

A plot showing the percentage of light reflected from an object versus wavelength

311
Q

Do individual objects only reflect a single wavelength of light?

A
  • No, they don’t usually reflect just a single wavelength of light
  • Ex: tomatoes predominantly reflect long wavelengths of light into our eyes whereas lettuce principally reflects medium wavelengths. As a result, tomatoes appear red, whereas lettuce appears green
312
Q

How is the difference between black, grey and white determined?

A
  • It’s related to the overall amount of light reflected from an object
    Ex: the black paper reflects less than 10% of the light that hits it, whereas the white paper reflects more than 80% of the light
313
Q

What’s selective transmission?

A
  • When some wavelengths pass through visually transparent objects or substances and others do not
  • Selective transmission is associated with the perception of chromatic colour
  • Ex: cranberry juice selectively transmits long-wavelength light and appears red, whereas limeade selectively transmits medium-wavelength light and appears green
  • This can happen for liquids, plastics, and glass
314
Q

What are transmission curves?

A
  • Plots of the percentage of light transmitted through a liquid or object at each wavelength
  • Similar to the reflectance curves
315
Q

Describe the relationship between the wavelengths reflected or transmitted and the color perceived

A
  • Short wavelength -> blue
  • Medium wavelength -> green
  • Long wavelength -> red
  • Medium & long wavelength -> yellow
  • Short, medium & long wavelength -> white
316
Q

Describe the process of mixing paints

A
  • The key to understanding what happens when coloured paints are mixed together is that when mixed, both paints still absorb the same wavelengths they absorbed when alone, so the only wavelengths reflected are those that are reflected by both paints in common
  • Ex: the blue blob absorbs long-wavelength light and reflects some short-wavelength light and some medium-wavelength light. The yellow blob absorbs short-wavelength light and reflects some medium- and long-wavelength light. Mix these together and the only wavelengths that survive all this absorption are some of the medium-wavelengths, which are associated with green
  • Because the blue and yellow blobs subtract all of the wavelengths except some that are associated with green, mixing paints is called subtractive color mixture (a toy numerical sketch of this “reflected in common” logic follows below)
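The “only wavelengths reflected by both paints survive” logic can be expressed numerically. This is a toy sketch: the reflectance curves are invented shapes, and taking the elementwise minimum is only a crude stand-in for how real pigments combine:

```python
# Crude numerical version of subtractive mixing: keep only what both paints reflect.
import numpy as np

wavelengths = np.arange(400, 701, 10)            # nm, short (400) to long (700)

def gaussian_reflectance(peak_nm: float, width_nm: float = 40, height: float = 0.8) -> np.ndarray:
    """Invented bell-shaped reflectance curve peaking at peak_nm."""
    return height * np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width_nm ** 2))

blue_paint = gaussian_reflectance(peak_nm=470)                  # reflects mostly short (+ some medium) wavelengths
yellow_paint = gaussian_reflectance(peak_nm=580, width_nm=80)   # reflects medium and long wavelengths

mixture = np.minimum(blue_paint, yellow_paint)   # only wavelengths reflected by both paints remain

print("Mixture reflects most around", wavelengths[np.argmax(mixture)], "nm")
# -> a peak near 500 nm, i.e., in the medium-wavelength (green) part of the spectrum
```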
317
Q

What happens if paints reflect no colour in common?

A

Mixing them would result in little to no reflection across the spectrum and the mixture would appear black

318
Q

What are the different types of white light in the visible spectrum?

A
  • Sunlight (which also contains energy outside the visible spectrum)
  • Incandescent lightbulbs (warm, yellowish light)
  • Fluorescent lightbulbs and LEDs (typically cooler in tone than incandescent light)
319
Q

What’s monochromatic light?

A

Light coming from a single wavelength

320
Q

What’s behavioural evidence for Opponent-Process theory?

A
  • Color afterimages and simultaneous color contrast show the opposing pairings
  • Types of color blindness are red/green and blue/yellow
321
Q

What happens in complementary afterimages?

A

Red and green switch places and blue and yellow switch places

322
Q

What’s the difference between inner hair cells and outer hair cells?

A
  • Inner hair cells -> help us determine frequencies
  • Outer hair cells -> amplify the movement of the basilar membrane (an amplifying role) -> this can be seen in experiments in which the outer hair cells were destroyed in animals
323
Q

What happens when you remove an outer hair cell?

A

The threshold for detecting a certain frequency becomes higher

324
Q

What are cochlear implants?

A
  • Electrodes are inserted into the cochlea to electrically stimulate auditory nerve fibers
  • The device is made up of:
  • A microphone worn behind the ear
  • A sound processor
  • A transmitter mounted on the mastoid bone
  • A receiver surgically mounted on the mastoid bone
  • These components essentially take over the function of the hair cells
  • The implant bypasses the damaged hair cells and stimulates the auditory nerve fibers directly (a simplified processing sketch follows below)
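To make the idea concrete, here is a highly simplified sketch of the kind of processing a sound processor can perform: split the microphone signal into frequency bands, extract each band’s envelope, and use those envelopes to set stimulation on the corresponding electrodes. The band edges, channel count, and envelope method are illustrative assumptions, not any real device’s strategy:

```python
# Toy cochlear-implant-style filterbank: band envelopes as per-electrode stimulation levels.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000                                       # microphone sample rate (Hz)
BAND_EDGES = [200, 400, 800, 1600, 3200, 6400]   # 5 channels, low to high frequency

def channel_envelopes(signal: np.ndarray) -> np.ndarray:
    """Return one envelope per frequency band (one row per electrode channel)."""
    envelopes = []
    for low, high in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))   # envelope of this band
    return np.vstack(envelopes)

# Example: a 1000-Hz tone should drive mainly the 800-1600 Hz channel,
# i.e., the electrode at the matching place along the cochlea.
t = np.arange(FS) / FS                            # 1 second of signal
tone = np.sin(2 * np.pi * 1000 * t)
print(channel_envelopes(tone).mean(axis=1).round(3))   # the third value is by far the largest
```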