Perception of sound waves of various frequencies and amplitudes. How many decibels can the human ear perceive

Psychoacoustics, a field of science on the border between physics and psychology, studies how a person's auditory sensations arise when a physical stimulus, sound, acts on the ear. A large amount of data has been accumulated on human reactions to auditory stimuli; without it, it is difficult to gain a correct understanding of how audio-frequency signaling systems work. Let us consider the most important features of human perception of sound.
A person perceives changes in sound pressure occurring at frequencies of 20-20,000 Hz. Sounds below 40 Hz are relatively rare in music and do not occur in spoken language. At very high frequencies, musical perception disappears and an indefinite sound sensation arises that depends on the individual listener and his age. With age, hearing sensitivity decreases, especially at the upper frequencies of the audible range.
But it would be wrong to conclude from this that a wide transmitted frequency band is unimportant for older listeners. Experiments have shown that even people who can barely perceive signals above 12 kHz easily recognize the lack of high frequencies in a musical transmission.

Frequency characteristics of auditory sensations

The range of sounds audible to a person, 20-20,000 Hz, is limited in intensity by two thresholds: the threshold of hearing from below and the threshold of pain from above.
The threshold of hearing is estimated by the minimum sound pressure, more precisely by the minimum pressure increment relative to the ambient level. Hearing is most sensitive at frequencies of 1000-5000 Hz, where the threshold is lowest (a sound pressure of about 2·10⁻⁵ Pa). Toward lower and higher frequencies, the sensitivity of hearing drops sharply.
The pain threshold determines the upper limit of perception of sound energy and corresponds approximately to a sound intensity of 10 W/m², or 130 dB (for a reference signal with a frequency of 1000 Hz).
As sound pressure grows, the intensity of the sound also grows, but the auditory sensation increases in discrete jumps, separated by the intensity discrimination threshold. The number of these jumps at medium frequencies is about 250; at low and high frequencies it decreases, and averaged over the frequency range it is about 150.

Since the range of intensity variation is 130 dB, the elementary jump of sensation, averaged over the amplitude range, is 0.8 dB, which corresponds to a change in sound intensity of about 1.2 times. At low listening levels these jumps reach 2-3 dB; at high levels they decrease to 0.5 dB (a factor of 1.1). An increase in the power of an amplifying path by less than a factor of 1.44 is practically not registered by the human ear. At the lower sound pressures developed by a loudspeaker, even doubling the power of the output stage may not give a perceptible result.
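The relationship between level steps in decibels and intensity ratios quoted above follows from the standard logarithmic definition of the decibel. A minimal sketch (function names are illustrative):

```python
import math

def intensity_ratio(delta_db: float) -> float:
    """Intensity ratio corresponding to a level change of delta_db decibels."""
    return 10 ** (delta_db / 10)

def level_change_db(ratio: float) -> float:
    """Level change in decibels corresponding to an intensity ratio."""
    return 10 * math.log10(ratio)

# The average 0.8 dB discrimination step is about a 1.2x intensity change:
print(round(intensity_ratio(0.8), 2))    # → 1.2
# A 1.44x power increase corresponds to roughly 1.6 dB:
print(round(level_change_db(1.44), 2))   # → 1.58
```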

Subjective characteristics of sound

The quality of sound transmission is evaluated on the basis of auditory perception. Therefore, the technical requirements for a sound transmission path or its individual links can be correctly determined only by studying the patterns that connect subjectively perceived sensations with the objective characteristics of sound. These subjective characteristics are pitch, loudness, and timbre.
The concept of pitch implies a subjective assessment of where a sound lies in the frequency range. Sound is usually characterized not by frequency but by pitch.
A tone is a signal of definite pitch with a discrete spectrum (musical sounds, the vowels of speech). A signal with a wide continuous spectrum, all frequency components of which have the same average power, is called white noise.

A gradual increase in the frequency of sound vibrations from 20 to 20,000 Hz is perceived as a gradual change in tone from the lowest (bass) to the highest.
The accuracy with which a person determines pitch by ear depends on the acuity, musicality, and training of the ear. It should be noted that pitch depends to some extent on the intensity of the sound: at high levels, sounds of greater intensity seem lower than weaker ones.
The human ear is good at distinguishing two tones that are close in pitch. For example, around 2000 Hz, a person can distinguish two tones that differ in frequency by only 3-6 Hz.
The subjective scale of sound perception with respect to frequency is close to logarithmic. Therefore, doubling the frequency (regardless of the initial frequency) is always perceived as the same change in pitch. The pitch interval corresponding to a doubling of frequency is called an octave. The frequency range perceived by a person, 20-20,000 Hz, covers approximately ten octaves.
An octave is a rather large interval of pitch change; a person distinguishes much smaller intervals. In the ten octaves perceived by the ear, more than a thousand gradations of pitch can be distinguished. Music uses smaller intervals called semitones, which correspond to a frequency change of approximately 1.059 times.
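The octave and semitone figures above follow directly from the logarithmic law. A short check (function names are illustrative):

```python
import math

def octaves(f_low: float, f_high: float) -> float:
    """Number of octaves between two frequencies (each octave doubles f)."""
    return math.log2(f_high / f_low)

# An equal-tempered semitone divides the octave into 12 equal ratio steps:
semitone = 2 ** (1 / 12)

print(round(octaves(20, 20000), 2))  # → 9.97, i.e. about ten octaves
print(round(semitone, 4))            # → 1.0595
```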
An octave is divided into half-octaves and thirds of an octave. For the latter, the following series of frequencies has been standardized: 1; 1.25; 1.6; 2; 2.5; 3.15; 4; 5; 6.3; 8; 10, which are the boundaries of one-third octaves. If these frequencies are placed at equal distances along the frequency axis, a logarithmic scale is obtained. For this reason, all frequency characteristics of sound transmission devices are plotted on a logarithmic scale.
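The standardized series can be reproduced by repeatedly multiplying by the one-third-octave ratio 2^(1/3) ≈ 1.26; the standard values are these steps rounded to preferred numbers. A sketch:

```python
# Each one-third-octave step multiplies the frequency by 2^(1/3).
# The standardized boundaries (1; 1.25; 1.6; 2; ...) are these values
# rounded to preferred numbers.
step = 2 ** (1 / 3)
f = 1.0
series = []
for _ in range(11):
    series.append(round(f, 2))
    f *= step
print(series)  # → [1.0, 1.26, 1.59, 2.0, 2.52, 3.17, 4.0, 5.04, 6.35, 8.0, 10.08]
```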
The loudness of a transmission depends not only on the intensity of the sound but also on its spectral composition, the conditions of perception, and the duration of exposure. Two tones of medium and low frequency having the same intensity (or the same sound pressure) are not perceived as equally loud. Therefore, the concept of loudness level in phons was introduced to denote sounds of equal loudness. The loudness level of a sound in phons is the sound pressure level in decibels of an equally loud pure tone with a frequency of 1000 Hz; that is, at 1000 Hz the loudness level in phons and the level in decibels coincide. At other frequencies, sounds with the same sound pressure may appear louder or quieter.
The experience of sound engineers in recording and editing musical works shows that in order to better detect sound defects that may occur during work, the volume level during control listening should be kept high, approximately corresponding to the volume level in the hall.
With prolonged exposure to intense sound, hearing sensitivity gradually decreases, and the more so, the higher the loudness of the sound. This detectable reduction in sensitivity is the response of hearing to overload, i.e., its natural adaptation. After a break in listening, hearing sensitivity is restored. In addition, when perceiving high-level signals, the hearing apparatus introduces its own, so-called subjective, distortions (which indicates the non-linearity of hearing). Thus, at a signal level of 100 dB, the first and second subjective harmonics reach levels of 85 and 70 dB.
A significant volume level and the duration of its exposure cause irreversible phenomena in the auditory organ. It is noted that in recent years, the hearing thresholds have sharply increased among young people. The reason for this was the passion for pop music, characterized by high sound levels.
The volume level is measured using an electro-acoustic device - a sound level meter. The measured sound is first converted by the microphone into electrical vibrations. After amplification by a special voltage amplifier, these oscillations are measured with a pointer device adjusted in decibels. To ensure that the readings of the device correspond as closely as possible to the subjective perception of loudness, the device is equipped with special filters that change its sensitivity to the perception of sound of different frequencies in accordance with the characteristic of hearing sensitivity.
An important characteristic of sound is timbre. The ability of hearing to distinguish it allows you to perceive signals with a wide variety of shades. The sound of each of the instruments and voices, due to their characteristic shades, becomes multicolored and well recognizable.
Timbre, being a subjective reflection of the complexity of the perceived sound, does not have a quantitative assessment and is characterized by terms of a qualitative order (beautiful, soft, juicy, etc.). When a signal is transmitted through an electro-acoustic path, the resulting distortions primarily affect the timbre of the reproduced sound. The condition for the correct transmission of the timbre of musical sounds is the undistorted transmission of the signal spectrum. The signal spectrum is a set of sinusoidal components of a complex sound.
The so-called pure tone has the simplest spectrum, it contains only one frequency. The sound of a musical instrument turns out to be more interesting: its spectrum consists of the fundamental frequency and several "impurity" frequencies, called overtones (higher tones). Overtones are multiples of the fundamental frequency and are usually smaller in amplitude.
The timbre of a sound depends on the distribution of intensity among its overtones; it is by timbre that the sounds of different musical instruments differ.
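The idea of a spectrum built from a fundamental plus weaker overtones can be sketched in a few lines. The harmonic amplitudes below are purely illustrative, not measurements of any real instrument:

```python
import math

def complex_tone(t: float, f0: float, partial_amps) -> float:
    """Instantaneous value of a tone built from a fundamental f0 and
    overtones at integer multiples of f0."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t)
               for k, a in enumerate(partial_amps))

# A pure tone has a single spectral line; an instrument-like tone adds
# weaker harmonics, and their relative strengths shape the timbre.
pure = [1.0]
instrument_like = [1.0, 0.5, 0.25, 0.12]  # hypothetical amplitude rolloff

sample = complex_tone(0.001, 440.0, instrument_like)
```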
More complex is the spectrum of combination of musical sounds, called a chord. In such a spectrum, there are several fundamental frequencies along with the corresponding overtones.
Differences in timbre are conveyed mainly by the low- and mid-frequency components of a signal; therefore, a large variety of timbres is associated with signals in the lower part of the frequency range. Signals in the upper part, as frequency increases, lose their timbre coloring more and more, because their harmonic components gradually pass beyond the limits of audible frequencies. This can be explained by the fact that up to 20 or more harmonics actively participate in forming the timbre of low sounds, 8-10 for medium sounds, and only 2-3 for high sounds, since the rest are either weak or fall outside the region of audible frequencies. Therefore, high sounds are, as a rule, poorer in timbre.
Almost all natural sound sources, including sources of musical sounds, have a specific dependence of the timbre on the volume level. Hearing is also adapted to this dependence - it is natural for it to determine the intensity of the source by the color of the sound. Loud sounds are usually more harsh.

Musical sound sources

A number of factors that characterize the primary sources of sounds have a great influence on the sound quality of electroacoustic systems.
The acoustic parameters of musical sources depend on the composition of the performers (orchestra, ensemble, group, soloist) and on the type of music: symphonic, folk, pop, etc.

The origin and formation of sound is specific to each musical instrument and is determined by the acoustic features of sound production in that instrument.
An important element of musical sound is the attack. This is a specific transient process during which the stable characteristics of the sound - loudness, timbre, pitch - are established. Any musical sound goes through three stages: a beginning, a middle, and an end, and both the initial and final stages have a certain duration. The initial stage is called the attack. Its duration varies: 0-20 ms for plucked, percussion, and some wind instruments, 20-60 ms for the bassoon. An attack is not just a rise in volume from zero to some steady value; it can be accompanied by a simultaneous change in pitch and timbre. Moreover, the characteristics of an instrument's attack are not the same in different parts of its range or with different playing styles: the violin is the most perfect instrument in terms of the richness of possible expressive methods of attack.
One of the characteristics of any musical instrument is the frequency range of the sound. In addition to the fundamental frequencies, each instrument is characterized by additional high-quality components - overtones (or, as is customary in electroacoustics, higher harmonics), which determine its specific timbre.
It is known that sound energy is unevenly distributed over the entire spectrum of sound frequencies emitted by the source.
Most instruments are characterized by amplification of the fundamental frequencies, as well as of individual overtones, in certain (one or more) relatively narrow frequency bands (formants) that differ from instrument to instrument. The resonant frequencies of the formant regions (in hertz) are: tuba 100-200, horn 200-400, trombone 300-900, trumpet 800-1750, saxophone 350-900, oboe 800-1500, bassoon 300-900, clarinet 250-600.
Another characteristic property of musical instruments is the strength of their sound, which is determined by the greater or smaller amplitude (swing) of their sounding body or air column (a larger amplitude corresponds to a stronger sound, and vice versa). Peak acoustic powers (in watts) are: large orchestra 70, bass drum 25, timpani 20, snare drum 12, trombone 6, piano 0.4, trumpet and saxophone 0.3, tuba 0.2, double bass 0.16, piccolo 0.08, clarinet, horn, and triangle 0.05.
The ratio of the sound power extracted from the instrument when performing "fortissimo" to the sound power when performing "pianissimo" is commonly called the dynamic range of the sound of musical instruments.
The dynamic range of a musical sound source depends on the type of performing group and the nature of the performance.
Consider the dynamic range of individual sound sources. Under the dynamic range of individual musical instruments and ensembles (orchestras and choirs of various composition), as well as voices, we understand the ratio of the maximum sound pressure created by a given source to the minimum, expressed in decibels.
In practice, when determining the dynamic range of a sound source, one usually operates only with sound pressure levels, calculating or measuring their difference. For example, if the maximum sound level of an orchestra is 90 dB and the minimum is 50 dB, the dynamic range is said to be 90 - 50 = 40 dB. Here 90 and 50 dB are sound pressure levels relative to the zero acoustic level.
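Equivalently, the dynamic range can be computed directly from the ratio of maximum to minimum sound pressure, since a level difference in decibels is 20 times the base-10 logarithm of the pressure ratio. A sketch (the pressure values are illustrative, chosen to correspond to 90 and 50 dB SPL):

```python
import math

def dynamic_range_db(p_max: float, p_min: float) -> float:
    """Dynamic range as the ratio of maximum to minimum sound pressure, in dB."""
    return 20 * math.log10(p_max / p_min)

# Subtracting the two sound pressure levels gives the same answer:
print(90 - 50)                                    # → 40
print(round(dynamic_range_db(0.632, 0.00632)))    # → 40
```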
The dynamic range for a given sound source is not constant. It depends on the nature of the performed work and on the acoustic conditions of the room in which the performance takes place. Reverb expands the dynamic range, which usually reaches its maximum value in rooms with a large volume and minimal sound absorption. Almost all instruments and human voices have a dynamic range that is uneven across the sound registers. For example, the volume level of the lowest sound on the "forte" of the vocalist is equal to the level of the highest sound on the "piano".

The dynamic range of a musical program is expressed in the same way as for individual sound sources, but the maximum sound pressure is taken at a fortissimo (ff) dynamic marking and the minimum at pianissimo (pp).

The highest loudness, indicated in scores as fff (forte-fortissimo), corresponds to a sound pressure level of approximately 110 dB, and the lowest, indicated as ppp (piano-pianissimo), to approximately 40 dB.
It should be noted that the dynamic shades of performance in music are relative and their connection with the corresponding sound pressure levels is to some extent conditional. The dynamic range of a particular musical program depends on the nature of the composition. Thus, the dynamic range of classical works by Haydn, Mozart, Vivaldi rarely exceeds 30-35 dB. The dynamic range of variety music usually does not exceed 40 dB, while dance and jazz - only about 20 dB. Most works for Russian folk instruments orchestra also have a small dynamic range (25-30 dB). This is true for the brass band as well. However, the maximum sound level of a brass band in a room can reach a fairly high level (up to 110 dB).

Masking effect

The subjective assessment of loudness depends on the conditions in which the listener perceives the sound. In real conditions, an acoustic signal does not exist in absolute silence. Extraneous noise acts on the hearing at the same time, making it harder to perceive the sound and masking the main signal to a certain extent. The degree to which extraneous noise masks a pure sinusoidal tone is estimated by the number of decibels by which the threshold of audibility of the masked signal rises above the threshold of its perception in silence.
Experiments to determine the degree of masking of one sound signal by another show that the tone of any frequency is masked by lower tones much more effectively than by higher ones. For example, if two tuning forks (1200 and 440 Hz) emit sounds with the same intensity, then we stop hearing the first tone, it is masked by the second one (having extinguished the vibration of the second tuning fork, we will hear the first one again).
If there are two complex audio signals simultaneously, consisting of certain spectra of audio frequencies, then the effect of mutual masking occurs. Moreover, if the main energy of both signals lies in the same region of the audio frequency range, then the masking effect will be the strongest. Thus, when transmitting an orchestral work, due to masking by the accompaniment, the soloist's part may become poorly legible, indistinct.
Achieving clarity or, as they say, "transparency" of sound in the sound transmission of orchestras or pop ensembles becomes very difficult if the instrument or individual groups of instruments of the orchestra play in the same or close registers at the same time.
When recording an orchestra, the sound engineer must take these masking features into account. At rehearsals, together with the conductor, he sets the balance between the sound strength of the instruments within one group, as well as between the groups of the whole orchestra. Clarity of the main melodic lines and of individual musical parts is achieved in these cases by placing microphones close to the performers, by deliberately emphasizing the instruments most important at a given moment, and by other special sound engineering techniques.
The phenomenon of masking is opposed by the psychophysiological ability of the hearing organs to single out one or more sounds from the general mass that carry the most important information. For example, when the orchestra is playing, the conductor notices the slightest inaccuracies in the performance of the part on any instrument.
Masking can significantly affect the quality of signal transmission. Clear perception of the received sound is possible if its intensity noticeably exceeds the level of any interference components lying in the same band. With uniform interference, the signal should exceed it by 10-15 dB. This feature of auditory perception finds practical application in assessing the electroacoustic characteristics of recording media. For example, if the signal-to-noise ratio of an analog record is 60 dB, then the dynamic range of the recorded program can be no more than 45-48 dB.

Temporal characteristics of auditory perception

The hearing apparatus, like any other oscillatory system, is inertial. When a sound stops, the auditory sensation does not disappear immediately but fades gradually to zero. The time during which the sensation decreases in loudness by 8-10 phons is called the hearing time constant. This constant depends on a number of circumstances, including the parameters of the perceived sound. If two short sound pulses with the same frequency composition and level arrive at the listener, and one of them is delayed, they will be perceived as one sound as long as the delay does not exceed 50 ms. At larger delay intervals, the two pulses are perceived separately and an echo arises.
This feature of hearing is taken into account when designing some signal processing devices, for example, electronic delay lines, reverbs, etc.
It should be noted that, owing to this property of hearing, the perceived loudness of a short sound impulse depends not only on its level but also on how long the impulse acts on the ear. Thus, a short sound lasting only 10-12 ms is perceived as quieter than a sound of the same level acting on the ear for, say, 150-400 ms. When listening to a transmission, loudness is therefore the result of averaging the energy of the sound wave over a certain interval. In addition, hearing is inertial with respect to non-linear distortions: it does not perceive them if the duration of the sound pulse is less than 10-20 ms. That is why the level indicators of household sound-recording equipment average instantaneous signal values over a period chosen in accordance with the temporal characteristics of the hearing organs.

Spatial representation of sound

One of the important human abilities is the ability to determine the direction of a sound source. This ability is called the binaural effect and is explained by the fact that a person has two ears. Experimental data show that two mechanisms are involved in determining where a sound comes from: one for high-frequency tones, the other for low-frequency ones.

The sound travels a shorter path to the ear facing the source than to the other ear. As a result, the pressures of the sound waves in the two ear canals differ in phase and amplitude. Amplitude differences are significant only at high frequencies, when the sound wavelength becomes comparable to the size of the head. When the amplitude difference exceeds a threshold of 1 dB, the sound source appears to be on the side where the amplitude is greater. The angle of deviation of the source from the midline (line of symmetry) is approximately proportional to the logarithm of the amplitude ratio.
For determining the direction of a sound source at frequencies below 1500-2000 Hz, phase differences are what matter. It seems to the listener that the sound comes from the side from which the phase-leading wave reaches the ear. The angle of deviation from the midline is proportional to the difference in arrival times of the sound waves at the two ears. A trained person can notice a phase difference corresponding to a time difference of 100 µs.
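The interaural time difference can be estimated with a simple geometric model. The sketch below assumes an ear spacing of 0.21 m and a speed of sound of 343 m/s, and ignores diffraction of the wave around the head; the maximum it predicts agrees with the ~0.0006 s figure cited later for a source directly to the side.

```python
import math

EAR_SPACING_M = 0.21     # assumed distance between the ears
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(angle_deg: float) -> float:
    """Interaural time difference for a source at angle_deg off the midline
    (simplified straight-path model, no head diffraction)."""
    return EAR_SPACING_M * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# A source directly to the side arrives about 0.6 ms earlier at the near ear:
print(round(itd_seconds(90) * 1000, 2))  # → 0.61
```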
The ability to determine the direction of sound in the vertical plane is much less developed (about 10 times). This feature of physiology is associated with the orientation of the hearing organs in the horizontal plane.
A specific feature of human spatial perception is that the hearing organs can sense a total, integral localization created by artificial means. For example, two speakers are installed along the front of a room 2-3 m apart, and the listener sits at the same distance on the axis of symmetry, strictly in the center. Two sounds of the same phase, frequency, and intensity are emitted through the speakers. Because the sounds reaching the organ of hearing are identical, a person cannot separate them; his sensations give the impression of a single, apparent (virtual) sound source located strictly in the center, on the axis of symmetry.
If the volume of one speaker is now reduced, the apparent source moves toward the louder speaker. The illusion of a moving sound source can be obtained not only by changing the signal level but also by artificially delaying one sound relative to the other; in this case, the apparent source shifts toward the speaker emitting the earlier signal.
Let us give a numerical example of integral localization. With a distance of 2 m between the speakers and 2 m from the speaker line to the listener, shifting the apparent source by 40 cm to the left or right requires two signals with an intensity level difference of 5 dB or a time delay of 0.3 ms. With a level difference of 10 dB or a delay of 0.6 ms, the source "moves" 70 cm from the center.
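These example figures suggest a roughly linear mapping between inter-channel level difference and apparent shift. The sketch below simply interpolates between the quoted data points; it is illustrative only, since real localization curves are not exactly linear and depend on the listening geometry.

```python
# Data points from the example above, for a 2 m speaker base at 2 m distance:
# (inter-channel level difference in dB, apparent shift in cm)
POINTS = [(0.0, 0.0), (5.0, 40.0), (10.0, 70.0)]

def apparent_shift_cm(level_diff_db: float) -> float:
    """Apparent lateral shift of the virtual source, linearly interpolated
    between the tabulated example points (illustrative only)."""
    for (d0, s0), (d1, s1) in zip(POINTS, POINTS[1:]):
        if d0 <= level_diff_db <= d1:
            return s0 + (s1 - s0) * (level_diff_db - d0) / (d1 - d0)
    raise ValueError("level difference outside tabulated range")

print(apparent_shift_cm(5.0))   # → 40.0
print(apparent_shift_cm(7.5))   # → 55.0
```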
Thus, if you change the sound pressure generated by the speakers, then the illusion of moving the sound source arises. This phenomenon is called total localization. To create a total localization, a two-channel stereophonic sound transmission system is used.
Two microphones are installed in the primary room, each working into its own channel; in the secondary room there are two loudspeakers. The microphones are placed at a certain distance from each other along a line parallel to the placement of the sound emitter. When the emitter moves, different sound pressures act on the microphones, and the arrival times of the sound wave differ because of the unequal distances from the emitter to the microphones. This difference creates the effect of total localization in the secondary room, so that the apparent source is localized at a certain point in space between the two loudspeakers.
A binaural sound transmission system should also be mentioned. In this system, called the "artificial head" system, two separate microphones are placed in the primary room at a distance from each other equal to the distance between a person's ears. Each microphone has an independent transmission channel, at the output of which, in the secondary room, earphones for the left and right ears are connected. With identical transmission channels, such a system accurately reproduces the binaural effect created near the ears of the "artificial head" in the primary room. The need to wear headphones for long periods is a disadvantage.
The organ of hearing determines the distance to a sound source by a number of indirect cues, and with some error. Depending on whether the distance to the source is small or large, its subjective assessment is shaped by different factors. It has been found that for small distances (up to 3 m), the subjective assessment is almost linearly related to the change in loudness of a source moving in depth. For a complex signal, an additional cue is its timbre, which becomes "heavier" as the source approaches the listener. This is due to the growing reinforcement of low-register overtones relative to high-register ones, caused by the accompanying rise in loudness level.
For average distances of 3-10 m, the removal of the source from the listener will be accompanied by a proportional decrease in volume, and this change will apply equally to the fundamental frequency and to the harmonic components. As a result, there is a relative amplification of the high-frequency part of the spectrum and the timbre becomes brighter.
As the distance increases, the energy loss in the air will increase in proportion to the square of the frequency. Increased loss of high register overtones will result in a reduction in timbre brightness. Thus, the subjective assessment of distances is associated with a change in its volume and timbre.
Under conditions of an enclosed space, the signals of the first reflections, which are delayed by 20–40 ms relative to the direct one, are perceived by the ear as coming from different directions. At the same time, their increasing delay creates the impression of a significant distance from the points from which these reflections originate. Thus, according to the delay time, one can judge the relative remoteness of secondary sources or, which is the same, the size of the room.

Some features of the subjective perception of stereo broadcasts

A stereophonic sound transmission system has a number of significant features compared to a conventional monophonic one.
The qualities that distinguish stereophonic sound, its spaciousness, i.e., natural acoustic perspective, can be assessed using additional indicators that make no sense with monophonic sound transmission. These additional indicators include: the angle of hearing, i.e., the angle at which the listener perceives the sound stereo image; stereo resolution, i.e., the subjectively determined localization of individual elements of the sound image at certain points in space within the angle of hearing; and the acoustic atmosphere, i.e., the effect of making the listener feel present in the primary room where the transmitted sound event takes place.

About the role of room acoustics

Brilliant sound is achieved not only with the help of sound reproduction equipment. Even with good equipment, the sound quality can be poor if the listening room lacks certain properties. It is known that in a closed room a phenomenon called reverberation occurs. By acting on the hearing organs, reverberation (depending on its duration) can improve or degrade the sound quality.

A person in a room perceives not only direct sound waves created directly by the sound source, but also waves reflected by the ceiling and walls of the room. Reflected waves are still audible for some time after the termination of the sound source.
It is sometimes believed that reflected signals play only a negative role, interfering with the perception of the main signal. However, this view is incorrect. A certain portion of the energy of the early reflected signals, reaching a person's ears with short delays, reinforces the main signal and enriches its sound. In contrast, later reflections, whose delay time exceeds a certain critical value, form a sound background that makes it difficult to perceive the main signal.
The listening room should not have a long reverberation time. Living rooms tend to have low reverberation due to their limited size and the presence of sound-absorbing surfaces, upholstered furniture, carpets, curtains, etc.
Barriers of different nature and properties are characterized by the sound absorption coefficient, which is the ratio of the absorbed energy to the total energy of the incident sound wave.

To increase the sound-absorbing properties of a carpet (and reduce noise in the living room), it is advisable to hang it not flush against the wall but with a gap of 30-50 mm.

Man is truly the most intelligent of the animals that inhabit the planet. However, our minds often deprive us of superiority in such abilities as perceiving the environment through smell, hearing, and other senses.

Thus, most animals are far ahead of us when it comes to auditory range. The human hearing range is the range of frequencies that the human ear can perceive. Let's try to understand how the human ear works in relation to the perception of sound.

Human hearing range under normal conditions

The average human ear can pick up and distinguish sound waves in the range of 20 Hz to 20 kHz (20,000 Hz). However, as a person ages, the auditory range of a person decreases, in particular, its upper limit decreases. In older people, it is usually much lower than in younger people, while infants and children have the highest hearing abilities. Auditory perception of high frequencies begins to deteriorate from the age of eight.

Human hearing in ideal conditions

In the laboratory, a person's hearing range is determined using an audiometer that emits sound waves of different frequencies and headphones adjusted accordingly. Under these ideal conditions, the human ear can recognize frequencies in the range of 12 Hz to 20 kHz.


Hearing range for men and women

There is a significant difference between the hearing range of men and women. Women were found to be more sensitive to high frequencies than men. The perception of low frequencies is more or less the same in men and women.

Various scales to indicate hearing range

Although the frequency scale is the most common way to describe the human hearing range, the range is also often characterized in pascals (Pa) and decibels (dB). Measurement in pascals, however, is considered inconvenient, since it forces one to work with very large and very small numbers. A sound pressure of one µPa corresponds to a displacement of the vibrating air particles of only about one tenth of the diameter of a hydrogen atom, while the pressures the ear handles span many orders of magnitude beyond that, which makes it awkward to express the range of human hearing in pascals.

The softest sound the human ear can detect is approximately 20 µPa. The decibel scale is easier to use, being a logarithmic scale referenced directly to the pascal scale: it takes 20 µPa as its 0 dB reference point and compresses the pressure scale so that 20 million µPa (20 Pa) corresponds to only 120 dB. Thus the range of the human ear is roughly 0-120 dB.
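The pascal-to-decibel relationship described above is L = 20·log10(p / p0) with the reference p0 = 20 µPa; a minimal sketch:

```python
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB relative to the 20 µPa threshold of hearing."""
    return 20 * math.log10(pressure_pa / reference_pa)

print(spl_db(20e-6))  # 0.0  (threshold of hearing)
print(spl_db(20.0))   # ~120 (20 million µPa, near the pain threshold)
```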

The hearing range varies greatly from person to person. Therefore, to detect hearing loss, it is best to measure the range of audible sounds in relation to a reference scale, and not in relation to the usual standardized scale. Tests can be performed using sophisticated hearing diagnostic tools that can accurately determine the extent and diagnose the causes of hearing loss.

The human ear is a complex specialized organ, consisting of three sections: the outer, middle and inner ear.

The outer ear is the sound-collecting apparatus. Sound vibrations are picked up by the auricle and transmitted through the external auditory canal to the tympanic membrane, which separates the outer ear from the middle ear. Picking up sound with two ears, so-called binaural hearing, is important for determining the direction of a sound. Vibrations coming from one side reach the nearer ear a tiny fraction of a second (about 0.0006 s) earlier than the other ear. This extremely small difference in arrival time at the two ears is enough to determine the direction of the sound.
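The arrival-time difference mentioned above can be estimated from simple geometry; a sketch assuming a speed of sound of 343 m/s and an ear spacing of about 0.2 m (both assumed round figures):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
EAR_SPACING = 0.2       # m, assumed distance between the two ears

def itd_seconds(angle_deg):
    """Interaural time difference for a distant source at angle_deg
    (0 = straight ahead, 90 = directly to one side)."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

print(f"{itd_seconds(90):.4f} s")  # 0.0006 s for a sound arriving from the side
```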

The middle ear is an air cavity that connects to the nasopharynx through the Eustachian tube. Vibrations of the tympanic membrane are transmitted through the middle ear by three interconnected auditory ossicles - the hammer, anvil and stirrup - and the latter, through the membrane of the oval window, transmits these vibrations to the fluid of the inner ear, the perilymph. Thanks to the auditory ossicles, the amplitude of the vibrations decreases while their force increases, which makes it possible to set the column of fluid in the inner ear in motion. The middle ear has a special mechanism for adapting to changes in sound intensity. With strong sounds, special muscles increase the tension of the eardrum and reduce the mobility of the stirrup. This reduces the amplitude of vibrations, and the inner ear is protected from damage.

The inner ear, with the cochlea located in it, lies in the pyramid of the temporal bone. The human cochlea makes 2.5 turns. The cochlear canal is divided by two partitions (the main, or basilar, membrane and the vestibular membrane) into three narrow passages: the upper one (scala vestibuli), the middle one (the membranous canal) and the lower one (scala tympani). At the apex of the cochlea there is an opening (the helicotrema) connecting the upper and lower canals into a single passage that runs from the oval window to the apex and back down to the round window. The cavity of these two canals is filled with a fluid, the perilymph, while the middle membranous canal is filled with a fluid of different composition, the endolymph. The middle canal houses the sound-receiving apparatus, the organ of Corti, which contains the receptors for sound vibrations, the hair cells.

Sound perception mechanism. The physiological mechanism of sound perception is based on two processes occurring in the cochlea: 1) separation of sounds of different frequencies according to the place of their greatest effect on the main membrane of the cochlea, and 2) transformation of mechanical vibrations into nervous excitation by the receptor cells. Sound vibrations entering the inner ear through the oval window are transmitted to the perilymph, and the vibrations of this fluid displace the main membrane. The height of the vibrating fluid column, and hence the place of greatest displacement of the main membrane, depends on the pitch of the sound. Thus, sounds of different pitch excite different hair cells and different nerve fibers. An increase in sound intensity increases the number of excited hair cells and nerve fibers, which makes it possible to distinguish the intensity of sound vibrations.
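One widely used empirical description of this place-frequency mapping along the main membrane is Greenwood's function; the sketch below uses the constants usually quoted for the human cochlea (treat them as an approximation, not as part of this text):

```python
def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at relative position x along
    the main (basilar) membrane: x = 0 at the apex, x = 1 at the base near
    the oval window. Constants are Greenwood's fit for the human cochlea."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

print(round(greenwood_frequency(0.0)))  # low frequencies at the apex (~20 Hz)
print(round(greenwood_frequency(1.0)))  # high frequencies at the base (~20,700 Hz)
```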
The transformation of vibrations into the process of excitation is carried out by special receptors - hair cells. The hairs of these cells are immersed in the integumentary membrane. Mechanical vibrations under the action of sound lead to displacement of the integumentary membrane relative to the receptor cells and bending of the hairs. In receptor cells, mechanical displacement of hairs causes a process of excitation.

Sound conduction. There are two kinds: air conduction and bone conduction. Under normal conditions, air conduction predominates in humans: sound waves are captured by the outer ear, and air vibrations are transmitted through the external auditory canal to the middle and inner ear. In bone conduction, sound vibrations are transmitted through the bones of the skull directly to the cochlea. This mechanism of transmitting sound vibrations becomes important, for example, when a person dives under water.
A person usually perceives sounds with a frequency of 15 to 20,000 Hz (in the range of 10-11 octaves). In children, the upper limit reaches 22,000 Hz, with age it decreases. The highest sensitivity was found in the frequency range from 1000 to 3000 Hz. This area corresponds to the most frequently occurring frequencies in human speech and music.

Sound, as a signal, can take an infinite variety of vibration patterns and can therefore carry a correspondingly large amount of information. How fully it is perceived depends on the physiological capabilities of the ear, leaving psychological factors aside. Depending on the type of noise, its frequency and its pressure, a person feels its effect differently.

Threshold of sensitivity of the human ear in decibels

A person perceives the frequency of sound from 16 to 20,000 Hz. The eardrums are sensitive to the pressure of sound vibrations, the level of which is measured in decibels (dB). The optimal level is from 35 to 60 dB, noise of 60-70 dB improves mental work, more than 80 dB, on the contrary, weakens attention and impairs the thinking process, and long-term perception of sound above 80 dB can cause hearing loss.

Frequencies below about 10-15 Hz are infrasound: not perceived by the ear, but capable of causing resonant vibrations in the body. The ability to control the vibrations that sound creates has been described as a potentially powerful weapon of mass destruction. Inaudible to the ear, infrasound can travel long distances and, it is claimed, make people act according to a certain scenario, cause panic and horror, and make them forget everything except the desire to hide and escape from this fear. And at a certain combination of frequency and sound pressure, such an apparatus is said to be capable not only of suppressing the will but also of killing or injuring human tissue.

Threshold of absolute sensitivity of the human ear in decibels

Natural disasters such as volcanoes, earthquakes and typhoons emit infrasound in the range from 7 to 13 Hz, which causes feelings of panic and horror. Since the human body also has its own oscillation frequencies, ranging from about 8 to 15 Hz, infrasound of a matching frequency can easily create resonance and increase the amplitude of these oscillations tenfold, enough to drive a person to suicide or damage internal organs.

At low frequencies and high pressure, nausea and stomach pain appear, which quickly turn into serious disorders of the gastrointestinal tract, and an increase in pressure to 150 dB leads to physical damage. Resonances of the internal organs cause bleeding and spasms at low frequencies, nervous excitation and injury to internal organs at medium frequencies, and, at the upper end of the infrasound range (up to 30 Hz), tissue burns.

In the modern world, the development of sound weapons is actively underway, and, apparently, it was not in vain that the German microbiologist Robert Koch predicted that it would be necessary to look for a “vaccination” from noise like from plague or cholera.

We often evaluate sound quality. When choosing a microphone, audio processing program, or audio file recording format, one of the most important questions is how good it will sound. But there are differences between the characteristics of sound that can be measured and those that can be heard.

Tone, timbre, octave.

The brain perceives sounds of certain frequencies. This is due to the peculiarities of the mechanism of the inner ear. Receptors located on the main membrane of the inner ear convert sound vibrations into electrical potentials that excite the fibers of the auditory nerve. The fibers of the auditory nerve have frequency selectivity due to the excitation of the cells of the organ of Corti located in different places of the main membrane: high frequencies are perceived near the oval window, low frequencies - at the top of the spiral.

Closely related to the physical characteristic of sound, frequency, is the pitch we perceive. Frequency is measured as the number of complete cycles of a sine wave per second (hertz, Hz). This definition rests on the fact that a sine wave repeats exactly the same waveform period after period. Very few real-life sounds have this property; however, any sound can be represented as a set of sinusoidal oscillations. Such a set is usually called a tone. That is, a tone is a signal of definite pitch with a discrete spectrum (musical sounds, the vowel sounds of speech), in which the frequency of the sinusoidal component with the maximum amplitude stands out. A signal with a wide continuous spectrum, all of whose frequency components have the same average intensity, is called white noise.
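The idea that a tone can be built from a set of sinusoids can be sketched directly; the fundamental, harmonic amplitudes and sample rate below are arbitrary illustration values:

```python
import math

def tone(fundamental_hz, partial_amps, t):
    """Sample, at time t (seconds), a tone built as a sum of sinusoids:
    partial_amps[k] is the amplitude of the (k+1)-th harmonic."""
    return sum(amp * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
               for k, amp in enumerate(partial_amps))

# A 440 Hz tone with three harmonics, sampled at an assumed 8 kHz rate:
SAMPLE_RATE = 8000
samples = [tone(440.0, [1.0, 0.5, 0.25], n / SAMPLE_RATE) for n in range(16)]
```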

A gradual increase in the frequency of sound vibrations is perceived as a gradual change in tone from the lowest (bass) to the highest.

The degree of accuracy with which a person determines the pitch of a sound by ear depends on the sharpness and training of his ear. The human ear is good at distinguishing two tones that are close in pitch. For example, in the frequency region of approximately 2000 Hz, a person can distinguish between two tones that differ from each other in frequency by 3-6 Hz or even less.

The frequency spectrum of a musical instrument or voice contains a sequence of evenly spaced peaks - harmonics. They correspond to frequencies that are integer multiples of a base (fundamental) frequency, typically the most intense of the sine waves that make up the sound.

The characteristic sound (timbre) of a musical instrument or voice is determined by the relative amplitudes of the various harmonics, while the pitch a person perceives corresponds most closely to the base frequency. Timbre, being a subjective reflection of the perceived sound, has no quantitative measure and is characterized only qualitatively.

In a "pure" tone, there is only one frequency. Usually, the perceived sound consists of the frequency of the fundamental tone and several "impurity" frequencies, called overtones. The overtones are a multiple of the frequency of the fundamental tone and less than its amplitude. The timbre of the sound depends on the intensity distribution over the overtones. The spectrum of the combination of musical sounds, called the chord, turns out to be more complex. In such a spectrum, there are several fundamental frequencies along with accompanying overtones.

If the frequency of one sound is exactly twice that of another, one sound wave "fits" exactly inside the other. The frequency interval between such sounds is called an octave. The frequency range perceived by humans, 16-20,000 Hz, covers approximately ten to eleven octaves.
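Counting the octaves between two frequencies is simply a base-2 logarithm; a minimal sketch:

```python
import math

def octaves_between(f_low_hz, f_high_hz):
    """Number of octaves (frequency doublings) between two frequencies."""
    return math.log2(f_high_hz / f_low_hz)

print(round(octaves_between(16, 20000), 1))  # 10.3: roughly ten octaves of hearing
print(octaves_between(440, 880))             # 1.0: a doubling is one octave
```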

Amplitude of sound vibrations and loudness.

The audible part of the range of sounds is divided into low-frequency sounds - up to 500 Hz, mid-frequency sounds - 500-10,000 Hz, and high-frequency sounds - over 10,000 Hz. The ear is most sensitive to a relatively narrow range of mid-frequency sounds from 1000 to 4000 Hz. That is, sounds of the same strength in the mid-frequency range can be perceived as loud, and in the low-frequency or high-frequency range - as quiet or not be heard at all. This feature of sound perception is due to the fact that the sound information necessary for the existence of a person - speech or the sounds of nature - is transmitted mainly in the mid-frequency range. Thus, loudness is not a physical parameter, but the intensity of an auditory sensation, a subjective characteristic of sound associated with the peculiarities of our perception.

The auditory analyzer perceives an increase in the amplitude of a sound wave due to an increase in the amplitude of vibration of the main membrane of the inner ear and stimulation of an increasing number of hair cells with the transmission of electrical impulses at a higher frequency and along a greater number of nerve fibers.

Our ear can distinguish sound intensities ranging from the faintest whisper to the loudest noise, a range roughly corresponding to a 1-million-fold increase in the amplitude of the movement of the main membrane. Yet the ear interprets this huge difference in amplitude as only about a 10,000-fold change. That is, the intensity scale is strongly "compressed" by the sound-perception mechanism of the auditory analyzer. This allows a person to interpret differences in sound intensity over an extremely wide range.

Sound intensity is measured in decibels (dB); one bel corresponds to a tenfold change in sound intensity, and a decibel is one tenth of a bel. The same scale is used to express changes in loudness.

For comparison, we can give an approximate level of intensity of different sounds: a barely audible sound (hearing threshold) 0 dB; whisper near the ear 25-30 dB; speech of average volume 60-70 dB; very loud speech (shouting) 90 dB; at concerts of rock and pop music in the center of the hall 105-110 dB; next to an airliner taking off 120 dB.

An increase in the loudness of a perceived sound must exceed a discrimination threshold before it is noticed. The number of distinguishable loudness gradations does not exceed 250 at medium frequencies; at low and high frequencies it decreases sharply, averaging about 150.
