Audible range. The "just noticeable difference"

Hearing loss is a pathological condition characterized by reduced hearing and difficulty understanding spoken language. It is quite common, especially among the elderly. However, these days hearing loss tends to develop earlier in life, including in young people and children. Depending on how severely hearing is weakened, hearing loss is divided into several degrees.


What are decibels and hertz

Any sound or noise can be characterized by two parameters: pitch and sound intensity.

Pitch

The pitch of a sound is determined by the frequency of the sound wave's oscillations and is expressed in hertz (Hz): the higher the frequency, the higher the pitch. For example, the very first white key on the left of a standard piano (the "A" of the subcontra octave) produces a low sound at 27.5 Hz, while the very last key on the right (the "C" of the fifth octave) produces a high sound at 4186.0 Hz.
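
For readers who want to see where such key frequencies come from, here is a minimal Python sketch using the standard equal-temperament formula; the 88-key numbering with A4 as key 49 is a common convention, not something stated in the article.

    # Equal-temperament frequency of the n-th piano key (A4 = key 49 = 440 Hz).
    def piano_key_frequency(n: int) -> float:
        return 440.0 * 2 ** ((n - 49) / 12)

    print(round(piano_key_frequency(1), 1))   # lowest key (A of the subcontra octave): 27.5 Hz
    print(round(piano_key_frequency(88), 1))  # highest key (C of the fifth octave): 4186.0 Hz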

The human ear is capable of distinguishing sounds within the range of 16–20,000 Hz. Everything below 16 Hz is called infrasound, and above 20,000 is called ultrasound. Both ultrasound and infrasound are not perceived by the human ear, but can affect the body and psyche.

By frequency, all audible sounds can be divided into high-, mid- and low-frequency. Low-frequency sounds are those up to 500 Hz, mid-frequency sounds lie in the range of 500–10,000 Hz, and high-frequency sounds are all sounds above 10,000 Hz. At the same sound intensity, the human ear hears mid-frequency sounds best; they are perceived as louder. Accordingly, low and high frequencies are "heard" more quietly, or even seem to "stop sounding" altogether. In general, after the age of 40–50, the upper limit of audibility decreases from 20,000 to 16,000 Hz.

Sound intensity

If the ear is exposed to a very loud sound, the eardrum may rupture. (In the picture: below, a normal membrane; above, a membrane with a defect.)

Any sound affects the hearing organ differently depending on its intensity, or loudness, which is measured in decibels (dB).

Normal hearing can distinguish sounds from 0 dB and above. Exposure to very loud sound of more than 120 dB causes pain and can damage the hearing organ, up to rupture of the eardrum.

The human ear feels most comfortable in the range of up to 80–85 dB.
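
Decibels are a logarithmic measure of sound pressure. As a rough Python illustration, using the standard 20 µPa reference pressure for 0 dB (a textbook value, not one given in the article):

    import math

    P_REF = 20e-6  # Pa, reference pressure for 0 dB (approximate threshold of hearing)

    def spl_db(pressure_pa: float) -> float:
        # Sound pressure level in decibels relative to P_REF.
        return 20 * math.log10(pressure_pa / P_REF)

    # A pressure ten times larger adds 20 dB; a thousand times larger adds 60 dB.
    print(round(spl_db(20e-6)))  # 0 dB  - threshold of hearing
    print(round(spl_db(0.02)))   # 60 dB - roughly normal conversation
    print(round(spl_db(20.0)))   # 120 dB - near the pain threshold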

For comparison:

  • winter forest in calm weather – about 0 dB,
  • rustle of leaves in a forest or park – 20–30 dB,
  • normal conversational speech, office work – 40–60 dB,
  • engine noise in a car interior – 70–80 dB,
  • loud screams – 85–90 dB,
  • thunderclaps – 100 dB,
  • a jackhammer at a distance of 1 meter – about 120 dB.


Degrees of hearing loss relative to volume levels

Typically, the following degrees of hearing loss are distinguished:

  • Normal hearing – the hearing threshold lies between 0 and 25 dB. Such a person can hear the rustling of leaves, birdsong in the forest, the ticking of a wall clock, and so on.
  • Hearing loss:
  1. I degree (mild) – a person begins to hear sounds from 26–40 dB.
  2. II degree (moderate) - the threshold for the perception of sounds starts from 40–55 dB.
  3. III degree (severe) – hears sounds from 56–70 dB.
  4. IV degree (deep) – from 71–90 dB.
  • Deafness is a condition when a person cannot hear a sound louder than 90 dB.
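
As a small illustration, the degrees listed above can be expressed as a simple lookup in Python; the boundaries are taken from this list, and the helper itself is only a sketch:

    # Map a measured hearing threshold (dB) to the degree of hearing loss listed above.
    def hearing_loss_degree(threshold_db: float) -> str:
        if threshold_db <= 25:
            return "normal hearing"
        if threshold_db <= 40:
            return "I degree (mild)"
        if threshold_db <= 55:
            return "II degree (moderate)"
        if threshold_db <= 70:
            return "III degree (severe)"
        if threshold_db <= 90:
            return "IV degree (deep)"
        return "deafness"

    print(hearing_loss_degree(35))  # I degree (mild)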

An abbreviated version of the degrees of hearing loss:

  1. Mild degree – the hearing threshold is below 50 dB. A person understands conversational speech almost completely at a distance of more than 1 m.
  2. Medium degree – the hearing threshold lies at 50–70 dB. Communication is difficult, because speech is heard well only at a distance of up to 1 m.
  3. Severe degree – the threshold is above 70 dB. Speech of normal intensity is no longer audible or is unintelligible even at the ear; the speaker has to shout, or a special hearing aid must be used.

In everyday practical life, specialists can use another classification of hearing loss:

  1. Normal hearing. A person hears spoken speech and whispers at a distance of more than 6 m.
  2. Mild hearing loss. A person understands spoken speech from a distance of more than 6 m, but hears whispers no more than 3–6 meters away. The patient can distinguish speech even in background noise.
  3. Moderate hearing loss. Whispers can be distinguished at a distance of no more than 1–3 m, and ordinary spoken speech – up to 4–6 m. Speech perception may be disrupted by extraneous noise.
  4. Significant degree of hearing loss. Conversational speech can be heard no further than 2–4 m away, and whispering up to 0.5–1 m. Words are perceived indistinctly; some individual phrases or words have to be repeated several times.
  5. Severe degree. Whispers are practically indistinguishable even close to the ear; spoken speech can hardly be made out even when shouted from less than 2 m away. Such a person relies largely on lip reading.


Degrees of hearing loss relative to the pitch of sounds

  • Group I. Patients are able to perceive only low frequencies in the range of 125–150 Hz. They only respond to low and loud voices.
  • Group II. In this case, higher frequencies become available for perception, which range from 150 to 500 Hz. Usually, simple spoken vowels “o” and “u” become perceptible.
  • Group III. Good perception of low and medium frequencies (up to 1000 Hz). Such patients can already listen to music, distinguish a doorbell, hear almost all vowels, and grasp the meaning of simple phrases and individual words.
  • Group IV. Frequencies up to 2000 Hz become available for perception. Patients distinguish almost all sounds, as well as individual phrases and words, and understand speech.

This classification of hearing loss is important not only for the correct selection of a hearing aid, but also for deciding whether a child should be placed in a regular or a specialized school.

Diagnosis of hearing loss


Audiometry will help determine the degree of hearing loss in a patient.

The most accurate and reliable way to identify and determine the degree of hearing loss is audiometry. For this purpose, the patient wears special headphones into which signals of specific frequencies and intensities are fed. If the subject hears a signal, they indicate this by pressing a button on the device or nodding. Based on the results, a curve of auditory perception (an audiogram) is constructed, whose analysis makes it possible not only to identify the degree of hearing loss but also, in some situations, to gain a deeper understanding of its nature.
Sometimes audiometry is performed without headphones, using a tuning fork or simply by pronouncing certain words at some distance from the patient.
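
As a rough sketch of how such a test signal might be represented in code (Python, with an uncalibrated amplitude model and an assumed 44.1 kHz sample rate; not a clinical implementation):

    import math

    SAMPLE_RATE = 44_100  # samples per second (assumed)

    def pure_tone(freq_hz: float, level_db: float, duration_s: float = 1.0):
        # Generate samples of a pure test tone; each +20 dB multiplies the amplitude
        # by 10 (a simplified model, not a calibrated clinical signal).
        amplitude = 10 ** (level_db / 20) * 1e-5
        n = int(SAMPLE_RATE * duration_s)
        return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

    # Typical audiometric test frequencies, each stepped in level until the patient responds.
    for freq in (125, 250, 500, 1000, 2000, 4000, 8000):
        tone = pure_tone(freq, level_db=30)
        # here the tone would be played back and the patient's response recorded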

When to see a doctor

It is necessary to contact an ENT doctor if:

  1. You find yourself turning your head towards the speaker and straining to hear them.
  2. Relatives who live with you, or friends who come to visit, comment that you have turned the TV, radio, or music player up too loud.
  3. The doorbell no longer sounds as clear as before, or you no longer hear it at all.
  4. When talking on the phone, you ask the other person to speak louder and more clearly.
  5. You have started asking people to repeat what they have said.
  6. When there is noise around you, it becomes much harder to hear your interlocutor and understand what they are saying.

In general, the earlier the correct diagnosis is established and treatment is started, the better the results and the more likely it is that hearing will be preserved for many years to come.

Having considered the theory of propagation and the mechanisms by which sound waves arise, it is useful to understand how sound is "interpreted", or perceived, by humans. A paired organ, the ear, is responsible for the perception of sound waves in the human body. The human ear is a very complex organ with two functions: it perceives sound impulses, and it acts as the vestibular apparatus of the whole body, determining the body's position in space and providing the vital ability to maintain balance. The average human ear is capable of detecting vibrations of 20–20,000 Hz, though there are deviations in both directions. Ideally, the audible frequency range is 16–20,000 Hz, which, at a sound speed of about 340 m/s, corresponds to wavelengths from roughly 21 m down to about 1.7 cm. The ear is divided into three parts: the outer, middle and inner ear. Each of these "divisions" performs its own function, but all three are closely connected with one another and effectively pass sound waves along to each other.
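
A quick Python check of the wavelength figures quoted above, assuming a speed of sound of about 340 m/s:

    # Wavelength = speed of sound / frequency.
    SPEED_OF_SOUND = 340.0  # m/s (assumed value for air)

    for f in (16, 20_000):
        print(f, "Hz ->", round(SPEED_OF_SOUND / f, 3), "m")
    # 16 Hz -> 21.25 m, 20000 Hz -> 0.017 m (about 1.7 cm)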

External (outer) ear

The outer ear consists of the pinna (auricle) and the external auditory canal. The auricle is an elastic cartilage of complex shape, covered with skin; at its bottom is the lobe, which consists of fatty tissue and is also covered with skin. The auricle acts as a receiver of sound waves from the surrounding space. Its special shape makes it possible to capture sounds better, especially those of the mid-frequency range, which carries speech information. This is largely an evolutionary necessity, since a person spends most of their life in spoken communication with others. The human auricle is practically motionless, unlike that of many animals, which use ear movements to tune more precisely to a sound source.

The folds of the human auricle are designed in such a way that they introduce corrections (minor distortions) related to the vertical and horizontal location of the sound source in space. It is thanks to this unique feature that a person can quite accurately determine the location of an object in space relative to themselves, guided only by sound. This feature is well known under the term "sound localization". The main function of the auricle is to catch as many sounds as possible in the audible frequency range. The further fate of the "caught" sound waves is decided in the ear canal, which is 25–30 mm long. In it, the cartilaginous part of the outer ear passes into bone, and the skin of the canal contains sebaceous and ceruminous (wax) glands. At the end of the ear canal lies the elastic eardrum, which the sound-wave vibrations reach, causing it to vibrate in response. The eardrum, in turn, transmits these vibrations to the middle ear.

Middle ear

Vibrations transmitted by the eardrum enter an area of the middle ear called the tympanic cavity. This is a space with a volume of about one cubic centimeter that houses the three auditory ossicles: the malleus, incus and stapes. It is these "intermediate" elements that perform the most important function: they transmit sound vibrations to the inner ear and amplify them along the way. The auditory ossicles form an intricate chain of sound transmission. All three bones are closely connected to one another, as well as to the eardrum, so vibrations are passed "along the chain". On the way to the inner ear lies the window of the vestibule (the oval window), which is closed by the base of the stapes. To equalize the pressure on both sides of the eardrum (for example, when external pressure changes), the middle ear is connected to the nasopharynx through the Eustachian tube. We are all familiar with the feeling of blocked ears, which arises precisely from this fine tuning. From the middle ear, the sound vibrations, now amplified, pass into the inner ear, the most complex and sensitive part.

Inner ear

The inner ear has the most complex shape and is therefore called the labyrinth. The bony labyrinth includes the vestibule, the cochlea and the semicircular canals, which together with the vestibule form the vestibular apparatus responsible for balance. Of these, the cochlea is directly related to hearing. The cochlea is a spiral membranous canal filled with lymphatic fluid. Inside, the canal is divided into two parts by another membranous partition called the basilar ("main") membrane. This membrane consists of fibers of various lengths (more than 24,000 in total), stretched like strings, each resonating to its own particular sound. The membrane divides the canal into the upper and lower scala, which communicate at the apex of the cochlea. At the opposite end, the canal connects to the receptor apparatus of the auditory analyzer, which is covered with tiny hair cells; this apparatus is also called the organ of Corti. When vibrations from the middle ear enter the cochlea, the lymphatic fluid filling the canal also begins to vibrate, transmitting the vibrations to the basilar membrane. At this moment the auditory analyzer comes into action: its hair cells, arranged in several rows, transform the sound vibrations into electrical "nerve" impulses, which are transmitted along the auditory nerve to the temporal region of the cerebral cortex. In this complex and intricate way, a person ultimately hears the sound.

Features of perception and speech formation

The mechanism of speech formation took shape in humans over the course of evolution. The purpose of this ability is to transmit verbal and non-verbal information: the first carries the verbal, semantic load, the second conveys the emotional component. The process of creating and perceiving speech includes: formulating the message; coding it into elements according to the rules of the language; transient neuromuscular actions; movement of the vocal cords; and emission of an acoustic signal. Then the listener comes into action, carrying out spectral analysis of the received acoustic signal and extraction of acoustic features in the peripheral auditory system, transmission of those features via neural networks, recognition of the language code (linguistic analysis), and understanding of the meaning of the message.
The apparatus that generates speech signals can be compared to a complex wind instrument, but its versatility, flexibility of configuration and ability to reproduce the slightest subtleties and details have no analogue in nature. The voice-forming mechanism consists of three inseparable components:

  1. The generator – the lungs, as a reservoir of air. The energy of excess pressure is stored in the lungs and then, with the help of the muscular system, is released through the trachea connected to the larynx. At this stage the air stream is interrupted and modified;
  2. The vibrator – the vocal cords. The flow is also affected by turbulent air jets (creating edge tones) and by pulsed sources (plosive bursts);
  3. The resonator – resonant cavities of complex geometric shape (the pharynx and the oral and nasal cavities).

The totality of the individual arrangement of these elements forms the unique and individual timbre of the voice of each person individually.

The energy of the air column is generated in the lungs, which create a flow of air during inhalation and exhalation due to the difference between atmospheric and intrapulmonary pressure. Energy is accumulated during inhalation and released during exhalation. This happens through the compression and expansion of the chest, carried out by two muscle groups, the intercostal muscles and the diaphragm; during deep breathing and singing, the muscles of the abdomen, chest and neck also contract. When you inhale, the diaphragm contracts and moves down, while contraction of the external intercostal muscles raises the ribs, moves them outward and pushes the sternum forward. The enlargement of the chest causes the pressure inside the lungs to drop (relative to atmospheric pressure), and this space rapidly fills with air. When you exhale, the muscles relax and everything returns to its previous state: the rib cage returns to its original position under its own weight, the diaphragm rises, the volume of the previously expanded lungs decreases, and intrapulmonary pressure increases. Inhalation is thus an active process requiring muscular effort, while exhalation at rest is passive. Control of breathing and speech formation is unconscious, but singing requires conscious breath control and long additional training.

The amount of energy subsequently spent on forming speech and voice depends on the volume of stored air and on the additional pressure in the lungs. The maximum pressure developed by a trained opera singer can correspond to sound levels of 100–112 dB. The airflow is modulated by the vibration of the vocal cords, with excess pressure created below them; these processes take place in the larynx, a kind of valve located at the upper end of the trachea. The valve performs a dual function: it protects the lungs from foreign objects and sustains high pressure. It is the larynx that acts as the source of speech and singing. The larynx is a collection of cartilages connected by muscles; its rather complex structure is built around a pair of vocal cords, which are the main (but not the only) source of voice production, the "vibrator". During phonation the vocal cords move with friction, so a special mucous secretion is produced to act as a lubricant. The formation of speech sounds is determined by the vibration of the cords, which gives the airflow exhaled from the lungs a particular amplitude profile. Between the vocal folds are small cavities that act as acoustic filters and resonators when required.

Features of auditory perception, listening safety, hearing thresholds, adaptation, correct volume level

As is clear from the description of the structure of the human ear, this organ is very delicate and rather complex. Taking this into account, it is easy to see that such an extremely delicate and sensitive device has a set of limitations and thresholds. The human auditory system is adapted to perceiving quiet sounds and sounds of medium intensity. Long-term exposure to loud sounds leads to irreversible shifts in hearing thresholds and to other hearing problems, up to complete deafness, and the degree of damage is directly proportional to the time spent in a loud environment. At this point the adaptation mechanism also comes into force: under the influence of prolonged loud sounds, sensitivity gradually decreases, the perceived volume drops, and hearing adapts.

Adaptation is primarily intended to protect the hearing organs from excessively loud sounds, yet it is precisely this process that most often pushes a person to keep turning up the volume of an audio system. Protection is provided by a mechanism of the middle and inner ear: the stapes is pulled back from the oval window, shielding the inner ear from excessively loud sounds. But this protective mechanism is not ideal and has a time delay: it triggers only 30–40 ms after the sound begins, and full protection is not achieved even after 150 ms. The mechanism is activated when the volume exceeds 85 dB, and the protection itself amounts to no more than about 20 dB of attenuation.
The most dangerous phenomenon in this respect is the "auditory threshold shift", which in practice usually results from prolonged exposure to loud sounds above 90 dB. Recovery of the auditory system after such harmful exposure can take up to 16 hours. The threshold shift begins already at an intensity of 75 dB and increases proportionally with the signal level.

When considering the question of the correct sound intensity level, the hardest thing to accept is that hearing problems (acquired or congenital) are practically untreatable, even in our age of fairly advanced medicine. This should lead any sensible person to think about taking care of their hearing, if, of course, they intend to preserve its integrity and the ability to hear the entire frequency range for as long as possible. Fortunately, everything is not as frightening as it might seem at first glance, and by taking a number of precautions you can easily preserve your hearing even into old age. Before considering these measures, one important feature of human auditory perception must be recalled: the auditory system perceives sound nonlinearly. The phenomenon is as follows: if we imagine a pure tone of a single frequency, for example 300 Hz, then nonlinearity manifests itself as overtones of this fundamental frequency appearing in the hearing organ according to a logarithmic principle (if the fundamental frequency is taken to be f, then the overtones will be 2f, 3f, etc., in increasing order). This nonlinearity is also familiar to many under the name "nonlinear distortion". Since such harmonics (overtones) are not present in the original pure tone, it turns out that the ear itself adds its own corrections and overtones to the original sound, and these can only be regarded as subjective distortions. At intensity levels below 40 dB, subjective distortion does not occur. As the intensity rises above 40 dB, the level of subjective harmonics begins to increase, but even at 80–90 dB their negative contribution to the sound is relatively small (so this intensity level can conditionally be considered a kind of "golden mean" for music).
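
In code, the overtone series mentioned here is simply the set of integer multiples of the fundamental; a tiny Python illustration:

    # The overtone series mentioned above: integer multiples of the fundamental.
    fundamental = 300  # Hz, the example frequency used in the text
    overtones = [fundamental * k for k in range(2, 6)]
    print(overtones)  # [600, 900, 1200, 1500] -> 2f, 3f, 4f, 5f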

Based on this information, you can easily determine a safe and acceptable volume level that will not harm the auditory organs and at the same time will let you hear absolutely all the features and details of the sound, for example when working with a hi-fi system. This "golden mean" level is approximately 85–90 dB. It is at this sound intensity that you can hear everything contained in the audio path while minimizing the risk of premature damage and hearing loss. A volume level of 85 dB can be considered almost completely safe. To understand the dangers of loud listening, and why too low a volume does not let you hear all the nuances of the sound, let's look at the issue in more detail. As for low volume levels, the drawbacks of listening to music quietly (despite the frequent subjective preference for doing so) are due to the following reasons:

  1. Nonlinearity of human auditory perception;
  2. Features of psychoacoustic perception, which will be discussed separately.

The nonlinearity of auditory perception discussed above has a significant effect at any volume below 80 dB. In practice it looks like this: if you turn on music at a quiet level, for example 40 dB, the mid-frequency range of the composition will be heard most clearly, whether it is the performer's vocals or instruments playing in that range. At the same time there will be a clear lack of low and high frequencies, owing precisely to the nonlinearity of perception and to the fact that different frequencies sound at different loudness. It is therefore obvious that, to perceive the full picture, the intensity of the different frequencies must be brought as close as possible to a single level. Although even at 85–90 dB an idealized equalization of loudness across frequencies does not occur, the level becomes acceptable for normal everyday listening. The lower the volume, the more clearly this characteristic nonlinearity will be heard, namely as a feeling that the proper amount of high and low frequencies is missing. With such nonlinearity it is impossible to speak seriously about reproducing high-fidelity "hi-fi" sound, because the accuracy of the original sound picture will be extremely low in this situation.

If you consider these findings, it becomes clear why listening to music at a low volume level, although the safest from a health point of view, gives an extremely poor result for the ear, creating clearly implausible images of musical instruments and voices and robbing the sound stage of scale. In general, quiet playback can be used as background accompaniment, but listening to true "hi-fi" quality at low volume is pointless for the above reasons: it is impossible to recreate the naturalistic images of the sound stage that the sound engineer formed in the studio at the recording stage. But it is not only low volume that imposes restrictions on the perception of the final sound; the situation is much worse with excessive volume. It is possible, and quite easy, to damage your hearing and significantly reduce its sensitivity if you listen to music at levels above 90 dB for a long time. These conclusions are based on a large number of medical studies showing that sound above 90 dB causes real and almost irreparable harm to health. The mechanism of this phenomenon lies in auditory perception and the structural features of the ear. When a sound wave with an intensity above 90 dB enters the ear canal, the organs of the middle ear come into play, causing a phenomenon called auditory adaptation.

The principle of what happens is this: the stapes is moved away from the oval window and protects the inner ear from excessively loud sounds. This process is called the acoustic reflex. To the ear it is perceived as a short-term decrease in sensitivity, which may be familiar to anyone who has attended rock concerts in clubs: after such a concert a temporary drop in sensitivity occurs, which returns to its previous level after a certain time. However, sensitivity is not always restored, and this depends directly on age. Herein lies the great danger of listening to loud music and other sounds whose intensity exceeds 90 dB. The acoustic reflex is not the only "visible" danger of losing auditory sensitivity. With prolonged exposure to very loud sounds, the hairs located in the inner ear (which respond to vibrations) become strongly deflected: a hair responsible for perceiving a certain frequency is bent by high-amplitude sound vibrations. At some point such a hair may deflect too far and never return. This causes a corresponding loss of sensitivity at that specific frequency!

The worst thing about this whole situation is that ear diseases are practically untreatable, even with the most modern methods known to medicine. All this leads to certain serious conclusions: sound above 90 dB is dangerous to health and is almost guaranteed to cause premature hearing loss or a significant decrease in sensitivity. What’s even more unpleasant is that the previously mentioned property of adaptation comes into play over time. This process in human auditory organs occurs almost imperceptibly, i.e. a person who is slowly losing sensitivity is close to 100% likely not to notice this until the people around them themselves pay attention to constant repeated questions, like: “What did you just say?” The conclusion in the end is extremely simple: when listening to music, it is vitally important not to allow sound intensity levels above 80-85 dB! There is also a positive side to this point: the volume level of 80-85 dB approximately corresponds to the level of music recording in a studio environment. This is where the concept of the “Golden Mean” arises, above which it is better not to rise if health issues are of any importance.

Even listening to music for a short period at a level of 110–120 dB can cause hearing problems, for example at a live concert. Obviously, it is sometimes impossible or very difficult to avoid this, but it is extremely important to try, in order to preserve the integrity of auditory perception. Theoretically, short-term exposure to loud sounds (not exceeding 120 dB), before the onset of "auditory fatigue", does not lead to serious negative consequences. In practice, however, exposure to sound of such intensity is usually prolonged. People deafen themselves without realizing the full extent of the danger in a car while listening to the audio system, at home in similar conditions, or through the headphones of a portable player. Why does this happen, and what makes the sound get louder and louder? There are two answers: 1) the influence of psychoacoustics, which will be discussed separately; 2) the constant need to "shout over" external sounds with the volume of the music. The first aspect of the problem is quite interesting and will be discussed in detail later; the second points to a mistaken understanding of the true fundamentals of proper listening to hi-fi sound.

Without going into details, the general conclusion about listening to music at the correct volume is as follows: music should be listened to at sound intensity levels of no more than 90 dB and no less than 80 dB, in a room in which extraneous sounds from outside sources are strongly muffled or completely absent (such as neighbors talking and other noise behind the apartment wall, or street and mechanical noise if you are inside a car). I would like to emphasize that it is precisely by meeting these admittedly strict requirements that you can achieve the long-awaited volume balance, which will not cause premature, unwanted damage to the hearing organs and will also deliver true pleasure from listening to your favorite music, with the finest details at high and low frequencies and the accuracy that the very concept of "hi-fi" sound pursues.

Psychoacoustics and features of perception

To answer some important questions about the final human perception of sound information as fully as possible, there is a whole branch of science that studies a huge variety of such aspects. This branch is called psychoacoustics. The fact is that auditory perception does not end with the functioning of the hearing organs. After the direct perception of sound by the organ of hearing (the ear), the most complex and little-studied mechanism for analyzing the received information comes into play; this is entirely the responsibility of the human brain, which is designed in such a way that during operation it generates waves of certain frequencies, also expressed in hertz (Hz). Different frequencies of brain waves correspond to particular human states. Thus, listening to music effectively changes the brain's frequency tuning, and this is important to consider when listening to musical compositions. On this theory is based a method of sound therapy that acts directly on a person's mental state. There are five types of brain waves:

  1. Delta waves (below 4 Hz). Correspond to deep, dreamless sleep, with a complete absence of bodily sensations.
  2. Theta waves (4–7 Hz). A state of sleep or deep meditation.
  3. Alpha waves (7–13 Hz). A state of relaxation during wakefulness; drowsiness.
  4. Beta waves (13–40 Hz). A state of activity, everyday thinking and mental activity, excitement and cognition.
  5. Gamma waves (above 40 Hz). A state of intense mental activity, fear, excitement and awareness.

Psychoacoustics, as a branch of science, seeks answers to the most interesting questions about the final human perception of sound information. In studying this process, a great number of factors are revealed whose influence is invariably present both while listening to music and in any other processing and analysis of sound information. Psychoacoustics examines almost the whole range of possible influences, from a person's emotional and mental state at the moment of listening, to the structural features of the vocal cords (when it comes to perceiving all the subtleties of vocal performance) and the mechanism by which sound is converted into the electrical impulses of the brain. The most interesting and most important factors (which are vital to consider every time you listen to your favorite music, as well as when building a professional audio system) will be discussed below.

The concept of consonance, musical consonance

The structure of the human auditory system is unique above all in its mechanism of sound perception, the nonlinearity of the auditory system, and its ability to group sounds by pitch with fairly high accuracy. The most interesting feature of perception is the nonlinearity of the auditory system, which manifests itself as the appearance of additional harmonics absent from the fundamental tone, especially often in people with a musical or absolute ear. If we look in more detail and analyze all the subtleties of the perception of musical sound, the concepts of "consonance" and "dissonance" of various chords and intervals are easily distinguished. Consonance is defined as an agreeable (literally "concordant") sound, and dissonance, conversely, as a discordant, jarring sound. Despite the many different interpretations of these concepts as characteristics of musical intervals, it is most convenient to use the "musical-psychological" reading of the terms: consonance is defined and felt by a person as a pleasant, comfortable, soft sound; dissonance, on the other hand, can be characterized as a sound that causes irritation, anxiety and tension. Such terminology is somewhat subjective, and over the history of music completely different intervals have been taken as "consonant" and vice versa.

Nowadays these concepts are also difficult to interpret unambiguously, since people differ in their musical preferences and tastes and there is no generally accepted and agreed definition of harmony. The psychoacoustic basis for perceiving various musical intervals as consonant or dissonant depends directly on the concept of the critical band. The critical band is a certain bandwidth within which auditory sensations change dramatically. The width of the critical bands increases with frequency. Therefore, the sensation of consonance and dissonance is directly related to the existence of critical bands. The human hearing organ (the ear), as mentioned earlier, acts as a bandpass filter at a certain stage of the analysis of sound waves. This role is assigned to the basilar membrane, on which 24 critical bands with frequency-dependent widths are located.

Thus, consonance and dissonance depend directly on the resolving power of the auditory system. If two different tones sound in unison, or the frequency difference is zero, this is perfect consonance. The same consonance occurs if the frequency difference is greater than the critical band. Dissonance arises only when the frequency difference is between 5% and 50% of the critical band, and the greatest degree of dissonance in this range is heard when the difference equals a quarter of the critical band's width. On this basis it is easy to analyze any mixed musical recording and combination of instruments for consonance or dissonance. It is not hard to guess how big a role the sound engineer, the recording studio and the other components of the final digital or analogue audio track play here, and all this before any attempt to play it back on sound-reproducing equipment.
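
A rough Python sketch of this rule. The critical bandwidth formula used here is the common Zwicker/Terhardt approximation, which is an assumption on my part rather than something given in the article:

    # Approximate critical bandwidth (Zwicker/Terhardt formula, assumed here).
    def critical_bandwidth(f_hz: float) -> float:
        return 25 + 75 * (1 + 1.4 * (f_hz / 1000) ** 2) ** 0.69

    def judge_interval(f1_hz: float, f2_hz: float) -> str:
        # Classify an interval by the frequency difference relative to the critical band.
        center = (f1_hz + f2_hz) / 2
        ratio = abs(f1_hz - f2_hz) / critical_bandwidth(center)
        if ratio == 0 or ratio > 1:
            return "consonant"
        if 0.05 <= ratio <= 0.5:
            return "dissonant (roughest near 25% of the critical band)"
        return "intermediate"

    print(judge_interval(440, 466))  # a semitone near A4 falls inside the critical band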

Sound localization

The system of binaural hearing and spatial localization helps a person to perceive the fullness of the spatial sound picture. This perception mechanism is realized through two hearing receivers and two auditory channels. The sound information that arrives through these channels is subsequently processed in the peripheral part of the auditory system and is subjected to spectro-temporal analysis. Further, this information is transmitted to the higher parts of the brain, where the difference between the left and right sound signals is compared and a single sound image is formed. This mechanism is called binaural hearing. Thanks to it, a person has the following unique capabilities:

1) localization of sound signals from one or more sources, thereby forming a spatial picture of the perception of the sound field
2) separation of signals coming from different sources
3) highlighting some signals against the background of others (for example, isolating speech and voice from noise or the sound of instruments)

Spatial localization is easy to observe with a simple example. At a concert, with a stage and a certain number of musicians on it at certain distances from one another, you can easily (even with your eyes closed, if you wish) determine the direction from which the sound of each instrument arrives and evaluate the depth and spaciousness of the sound field. A good hi-fi system is valued in the same way, for its ability to reliably "reproduce" such effects of spaciousness and localization, thereby effectively "deceiving" the brain into feeling fully present at a live performance of your favorite performer. The localization of a sound source is usually determined by three main factors: temporal, intensity-based and spectral. Beyond these factors, there are a number of regularities that can be used to understand the basics of sound localization.

The localization ability of the human hearing organs is greatest in the mid-frequency region, while it is almost impossible to determine the direction of sounds above 8000 Hz and below 150 Hz. The latter fact is widely used in hi-fi and home theater systems when choosing the location of the subwoofer (the low-frequency section): because frequencies below 150 Hz are not localized, its position in the room is practically irrelevant, and the listener in any case perceives a complete image of the sound stage. The accuracy of localization depends on the position of the sound source in space. Thus, the greatest accuracy is observed in the horizontal plane, reaching about 3°. In the vertical plane the auditory system determines direction much more poorly, with an accuracy of about 10–15° (because of the specific structure of the ears and the complex geometry involved). Localization accuracy varies slightly with the angle of the sound-emitting objects relative to the listener, and the final result is also influenced by the diffraction of sound waves around the listener's head. It should also be noted that broadband signals are localized better than narrowband noise.

The situation with determining the depth of a sound is more interesting. For example, a person can estimate the distance to an object by sound, but this happens largely because of changes in sound pressure in space: the further the object is from the listener, the more the sound waves are attenuated in free space (in a room, the influence of reflected waves is added). Thus we can conclude that localization accuracy is higher in a closed room precisely because of reverberation. Reflected waves arising indoors make possible such interesting effects as widening of the sound stage, envelopment, and so on. These phenomena are possible precisely thanks to the sensitivity of three-dimensional sound localization. The main dependencies that determine the horizontal localization of sound are: 1) the difference in the arrival time of the sound wave at the left and right ears; 2) the difference in intensity due to diffraction around the listener's head. For determining the depth of a sound, the difference in sound pressure level and the difference in spectral composition are important. Localization in the vertical plane also depends strongly on diffraction in the auricle.
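
As a small illustration of the first horizontal cue (the interaural time difference), here is a Python sketch using the classic spherical-head (Woodworth) approximation; the head radius and the formula are textbook assumptions, not values from this article:

    import math

    HEAD_RADIUS_M = 0.0875   # assumed average head radius
    SPEED_OF_SOUND = 343.0   # m/s

    def interaural_time_difference(azimuth_deg: float) -> float:
        # Woodworth spherical-head model: ITD = (r / c) * (theta + sin(theta)).
        theta = math.radians(azimuth_deg)
        return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))

    for angle in (0, 30, 60, 90):
        print(angle, "deg ->", round(interaural_time_difference(angle) * 1e6), "microseconds")
    # the delay grows from 0 at the front to roughly 650 microseconds at the side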

The situation is more complicated with modern surround sound systems based on Dolby Surround technology and its analogues. It would seem that the principles of building home theater systems clearly prescribe how to recreate a fairly naturalistic spatial picture of 3D sound, with the appropriate volume and localization of virtual sources in space. However, not everything is so trivial, since the mechanisms of perceiving and localizing a large number of sound sources are usually not taken into account. The transformation of sound by the hearing organs involves the summation of signals from different sources arriving at the two ears. Moreover, if the phase structure of the different sounds is more or less synchronous, the ear perceives them as sound emanating from a single source. There are also a number of difficulties, including peculiarities of the localization mechanism, which make it harder to determine the direction of a source in space accurately.

In view of the above, the most difficult task becomes separating the sounds from different sources, especially if these sources play signals of similar amplitude and frequency content. And this is exactly what happens in practice in any modern surround sound system, and even in an ordinary stereo system. When a person listens to a large number of sounds coming from different sources, it is first determined which source each specific sound belongs to (grouping by frequency, pitch, timbre). Only at the second stage does hearing try to localize the source. After this, the incoming sounds are divided into streams on the basis of spatial features (difference in signal arrival time, difference in amplitude). Based on the information received, a more or less static and fixed auditory image is formed, from which it is possible to determine where each specific sound comes from.

It is convenient to trace these processes using the example of an ordinary stage with musicians in fixed positions on it. Interestingly, if the vocalist or performer, initially occupying a certain position on the stage, begins to move smoothly around it in any direction, the previously formed auditory image does not change! The subjective direction of the sound coming from the vocalist remains the same, as if they were still standing where they stood before moving. Only a sudden change in the performer's position on stage splits the formed sound image. In addition to the problems discussed and the complexity of localizing sounds in space, in multi-channel surround systems the reverberation of the listening room plays a rather large role. This dependence is most clearly observed when a large number of reflected sounds arrive from all sides: localization accuracy deteriorates significantly. If the energy of the reflected waves is greater than (predominates over) that of the direct sounds, the criterion of localization in such a room becomes extremely blurred, and it is extremely difficult (if not impossible) to speak of accurately locating such sources.

Nevertheless, even in a strongly reverberant room localization does in principle occur: with broadband signals, hearing is guided by the intensity-difference parameter, and direction is then determined from the high-frequency component of the spectrum. In any room, localization accuracy depends on the time at which reflected sounds arrive after the direct sounds. If the gap between these sound signals is too small, the "law of the first wavefront" begins to help the auditory system. The essence of this phenomenon: if sounds arrive from different directions with a short delay between them, the whole sound is localized according to the sound that arrives first; that is, the ear to some extent ignores reflected sound if it arrives too soon after the direct sound. A similar effect appears when determining the direction of sound arrival in the vertical plane, but there it is much weaker (because the auditory system's sensitivity to vertical localization is noticeably worse).

The essence of this precedence effect is much deeper and is psychological rather than physiological in nature. A large number of experiments have been carried out to establish the dependence. The effect arises mainly when the time of arrival of the echo, its amplitude and its direction coincide with the listener's "expectations" of how the acoustics of this particular room shape the sound image. Perhaps the person has already had listening experience in this room or in similar ones, which predisposes the auditory system to the "expected" precedence effect. To work around these limitations of human hearing when several sound sources are involved, various tricks are used, with whose help a more or less plausible localization of musical instruments and other sound sources in space is ultimately formed. By and large, the reproduction of stereo and multi-channel sound images rests on a grand deception and the creation of an auditory illusion.

When two or more speaker systems (for example, 5.1 or 7.1, or even 9.1) reproduce sound from different points in the room, the listener hears sounds emanating from non-existent or imaginary sources, perceiving a certain sound panorama. The possibility of this deception lies in the biological features of the human body. Most likely, a person did not have time to adapt to recognizing such deception due to the fact that the principles of “artificial” sound reproduction appeared relatively recently. But, although the process of creating an imaginary localization turned out to be possible, the implementation is still far from perfect. The fact is that the ear really perceives a sound source where it actually does not exist, but the correctness and accuracy of the transmission of sound information (in particular timbre) is a big question. Through numerous experiments in real reverberation rooms and in anechoic chambers, it was established that the timbre of sound waves from real and imaginary sources is different. This mainly affects the subjective perception of spectral loudness; the timbre in this case changes in a significant and noticeable way (when compared with a similar sound reproduced by a real source).

In multi-channel home theater systems, the level of distortion is noticeably higher for several reasons: 1) many sound signals similar in amplitude-frequency and phase characteristics simultaneously arrive at each ear canal from different sources and directions (including reflected waves), which leads to increased distortion and the appearance of comb filtering; 2) the strong separation of the loudspeakers in space (relative to one another; in multi-channel systems this distance can be several meters or more) contributes to growing timbre distortion and coloration of the sound in the region of the imaginary source. As a result, we can say that timbre coloration in multi-channel and surround systems arises in practice for two reasons: the phenomenon of comb filtering and the influence of reverberation in the particular room. If more than one source is responsible for reproducing the sound information (this also applies to a stereo system with two sources), a "comb filtering" effect is unavoidable, caused by the different arrival times of the sound waves at each ear canal. Particular unevenness is observed in the upper midrange, at 1–4 kHz.

Human hearing deteriorates with age, and over time we lose the ability to detect certain frequencies.

A video made by the AsapSCIENCE channel is a kind of age-related hearing test that will help you find out the limits of your hearing.

Various sounds are played in the video, starting at 8000 Hz, which should be audible to everyone whose hearing is not impaired.

The frequency then gradually increases, and the point at which you stop hearing a particular sound indicates the approximate age of your hearing.


So if you hear a frequency:

  • 12,000 Hz – you are under 50 years old
  • 15,000 Hz – you are under 40 years old
  • 16,000 Hz – you are under 30 years old
  • 17,000–18,000 Hz – you are under 24 years old
  • 19,000 Hz – you are under 20 years old

If you want the test to be more accurate, you should set the video quality to 720p or better yet 1080p, and listen with headphones.

Hearing test (video)


Hearing loss

If you heard all the sounds, you are most likely under 20 years old. The results depend on sensory receptors in your ear called hair cells, which become damaged and degenerate over time.

This type of hearing loss is called sensorineural hearing loss. A variety of infections, medications, and autoimmune diseases can cause this disorder. The outer hair cells, which are tuned to detect higher frequencies, usually die first, causing the effects of age-related hearing loss, as demonstrated in this video.

Human hearing: interesting facts

1. In healthy people, the frequency range that the human ear can detect extends from 20 Hz (lower than the lowest note on a piano) to 20,000 Hz (higher than the highest note on a piccolo). However, the upper limit of this range decreases steadily with age.

2. People talk to each other at frequencies from 200 to 8000 Hz, and the human ear is most sensitive to frequencies of 1000–3500 Hz.

3. Sounds that are above the limit of human audibility are called ultrasound, and those below - infrasound.

4. Our ears do not stop working even while we sleep; they continue to hear sounds, but the brain ignores them.

5. Sound travels at about 344 meters per second. A sonic boom occurs when an object exceeds the speed of sound: the sound waves in front of and behind the object collide and create a shock wave.

6. The ears are a self-cleaning organ. Glands in the ear canal secrete earwax, and tiny hairs called cilia push the wax out of the ear.

7. The sound of a baby crying is approximately 115 dB, which is louder than a car horn.

8. In Africa there is a tribe, the Maaban, who live in such silence that even in old age they can hear a whisper up to 300 meters away.

9. The sound level of an idling bulldozer is about 85 dB, which can cause hearing damage after just one 8-hour working day.

10. Sitting in front of the speakers at a rock concert, you are exposed to 120 dB, which begins to damage your hearing after just 7.5 minutes.

Frequencies

Frequency is a physical quantity characterizing a periodic process; it is equal to the number of repetitions or occurrences of events (cycles) per unit time.

As we know, the human ear hears frequencies from 16 Hz to 20,000 Hz (20 kHz), though this is very much an average.

Sound arises for various reasons. Sound is a wave of air pressure: if there were no air, we would not hear any sound, which is why there is no sound in space.
We hear sound because our ears are sensitive to changes in air pressure, that is, to sound waves. The simplest sound wave is a short tone burst, like this:

Sound waves entering the ear canal set the eardrum vibrating. Through the chain of ossicles of the middle ear, the oscillatory movement of the membrane is transmitted to the fluid of the cochlea. The wave-like movement of this fluid is, in turn, transmitted to the basilar membrane, whose movement stimulates the endings of the auditory nerve. This is the main path of sound from its source to our consciousness.

When you clap your hands, the air between your palms is pushed out and a sound wave is created. The increased pressure causes air molecules to spread in all directions at the speed of sound, which is 340 m/s. When the wave reaches the ear, it vibrates the eardrum, from which the signal is transmitted to the brain and you hear a pop.
A pop is a short, single oscillation that quickly fades away. The waveform of a typical hand clap looks like this:

Another typical example of a simple sound wave is a periodic oscillation. For example, when a bell rings, the air is shaken by periodic vibrations of the bell's walls.

So at what frequency does the ordinary human ear begin to hear? It will not hear a frequency of 1 Hz; such an oscillation can only be seen, for example, in a visible oscillating system. The human ear begins to hear at about 16 Hz, that is, at the point where air vibrations are perceived by the ear as a definite sound.

How many sounds does a person hear?

Not all people with normal hearing hear the same. Some are able to distinguish sounds that are close in pitch and volume and detect individual tones in music or noise. Others cannot do this. For a person with fine hearing, there are more sounds than for a person with undeveloped hearing.

But how different must the frequencies of two sounds be for them to be heard as two different tones? Is it possible, for example, to distinguish tones whose frequencies differ by one vibration per second? It turns out that for some tones this is possible, but for others it is not. Thus, a tone with a frequency of 435 Hz can be distinguished in pitch from tones of 434 and 436 Hz. But for higher tones the difference must be greater: the ear perceives tones of 1000 and 1001 Hz as identical and detects a difference only between 1000 and 1003 Hz. For still higher tones this frequency difference is larger; for frequencies around 3000 Hz, for example, it is about 9 Hz.
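
Restating that arithmetic in Python, the examples correspond to a relative difference threshold of roughly 0.2–0.3% of the tone frequency:

    # Frequency (Hz) -> smallest audible difference (Hz), from the examples above.
    examples = {435: 1, 1000: 3, 3000: 9}
    for freq, jnd in examples.items():
        print(f"{freq} Hz: {jnd / freq:.2%}")
    # 435 Hz: 0.23%, 1000 Hz: 0.30%, 3000 Hz: 0.30%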

In the same way, our ability to distinguish sounds of similar loudness is not uniform. At a frequency of 32 Hz only 3 different loudness levels can be distinguished; at 125 Hz there are already 94, at 1000 Hz about 374, at 8000 Hz fewer again, and finally at 16,000 Hz we distinguish only 16. In total, our ear can distinguish more than half a million sounds differing in pitch and loudness! And these are only half a million simple sounds. Add to this the countless combinations of two or more tones (consonances), and you get an impression of the diversity of the sound world in which we live and in which our ear navigates so freely. That is why the ear is considered, along with the eye, the most sensitive sense organ.

Therefore, for convenience, sound is usually represented not on a linear scale with divisions of 1 kHz, but on a logarithmic one, with the region from 0 Hz to 1000 Hz shown in expanded form. The frequency spectrum from 16 to 20,000 Hz can thus be represented as a diagram of this kind.

But not all people, even with normal hearing, are equally sensitive to sounds of different frequencies. Children usually perceive sounds with frequencies up to 22,000 Hz without strain. In most adults, the sensitivity of the ear to high-pitched sounds has already dropped to 16,000–18,000 vibrations per second, and in elderly people it is limited to sounds of 10,000–12,000 Hz. They often cannot hear at all the whine of a mosquito, the chirping of a grasshopper or a cricket, or even the chirping of a sparrow. Thus, compared with the full range (see the figure above), as a person ages they hear an ever narrower band of sound.

As an example, here are the frequency ranges of musical instruments:

Now, in relation to our topic: a loudspeaker driver, as an oscillatory system, cannot, due to a number of its features, reproduce the entire frequency spectrum with a constant, linear response. Ideally, this would be a full-range driver reproducing the spectrum from 16 Hz to 20 kHz at a single volume level. Therefore, in car audio, several types of drivers are used to reproduce specific frequency ranges.

So far it looks like this (for a three-way system + subwoofer).

Subwoofer: 16 Hz to 60 Hz
Midbass: 60 Hz to 600 Hz
Midrange: 600 Hz to 3000 Hz
Tweeter: 3000 Hz to 20,000 Hz
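
Expressed as a small Python lookup (the band edges are the illustrative figures from the list above, not a recommendation for a particular system):

    # Crossover plan from the list above: (driver, lower edge Hz, upper edge Hz).
    BANDS = [
        ("subwoofer", 16, 60),
        ("midbass", 60, 600),
        ("midrange", 600, 3000),
        ("tweeter", 3000, 20000),
    ]

    def driver_for(freq_hz: float) -> str:
        for name, low, high in BANDS:
            if low <= freq_hz < high:
                return name
        return "out of range"

    print(driver_for(45))    # subwoofer
    print(driver_for(1200))  # midrange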



Peculiarities of human perception. Hearing

Sound is vibration, i.e. a periodic mechanical disturbance in elastic media - gaseous, liquid and solid. Such a disturbance, representing some physical change in the medium (for example, a change in density or pressure, a displacement of particles), propagates through it in the form of a sound wave. A sound may be inaudible if its frequency lies beyond the sensitivity of the human ear, if it travels through a medium, such as a solid, that has no direct contact with the ear, or if its energy is rapidly dissipated in the medium. The process of sound perception that is usual for us is thus only one side of acoustics.

Sound waves


Sound waves can serve as an example of an oscillatory process. Any oscillation is associated with a disturbance of the system's equilibrium state and is expressed as a deviation of its characteristics from their equilibrium values, followed by a return to the original values. For sound vibrations this characteristic is the pressure at a point in the medium, and its deviation is the sound pressure.

Consider a long pipe filled with air. A piston that fits tightly to the walls is inserted into it at the left end. If the piston is sharply moved to the right and stopped, the air in the immediate vicinity of it will be compressed for a moment. The compressed air will then expand, pushing the air adjacent to it to the right, and the area of ​​compression initially created near the piston will move through the pipe at a constant speed. This compression wave is the sound wave in the gas.
That is, a sharp displacement of the particles of an elastic medium in one place increases the pressure at that place. Thanks to the elastic bonds between particles, the pressure is transmitted to neighbouring particles, which in turn act on the next ones, and the region of increased pressure moves, as it were, through the elastic medium. The region of increased pressure is followed by a region of reduced pressure, and a series of alternating regions of compression and rarefaction forms, propagating through the medium as a wave. Each particle of the elastic medium in this case performs oscillatory movements.

A sound wave in a gas is characterized by excess pressure, excess density, displacement of particles and their speed. For sound waves, these deviations from equilibrium values are always small. Thus, the excess pressure associated with the wave is much less than the static pressure of the gas. Otherwise, we are dealing with another phenomenon - a shock wave. In a sound wave corresponding to normal speech, the excess pressure is only about one millionth of atmospheric pressure.
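As a rough check of that last figure, the sketch below (plain Python; 101,325 Pa for atmospheric pressure and 20 µPa for the hearing reference are standard textbook values) converts "one millionth of atmospheric pressure" into a sound pressure level in decibels.

```python
import math

p_atm = 101_325.0        # standard atmospheric pressure, Pa
p_ref = 20e-6            # reference sound pressure (hearing threshold), Pa

p_speech = p_atm * 1e-6  # "about one millionth of atmospheric pressure"
spl_db = 20 * math.log10(p_speech / p_ref)

print(f"Excess pressure: {p_speech:.3f} Pa  ->  about {spl_db:.0f} dB SPL")
```

The result, roughly 74 dB, sits at the upper end of ordinary speech levels, so the order of magnitude quoted above is plausible.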

The important fact is that the substance is not carried away by the sound wave. A wave is only a temporary disturbance passing through the air, after which the air returns to an equilibrium state.
Wave motion, of course, is not unique to sound: light and radio signals travel in the form of waves, and everyone is familiar with waves on the surface of water.

Thus, sound, in a broad sense, is elastic waves propagating in some elastic medium and creating mechanical vibrations in it; in a narrow sense, the subjective perception of these vibrations by the special sense organs of animals or humans.
Like any wave, sound is characterized by amplitude and frequency spectrum. Typically, a person hears sounds transmitted through the air in the frequency range from 16-20 Hz to 15-20 kHz. Sound below the human hearing range is called infrasound; sound above it, up to 1 GHz, is called ultrasound, and above 1 GHz, hypersound. Among audible sounds one can also single out phonetic (speech) sounds and phonemes, which make up spoken speech, and musical sounds, which make up music.

Longitudinal and transverse sound waves are distinguished depending on the relation between the direction of wave propagation and the direction of the mechanical vibrations of the particles of the medium.
In liquid and gaseous media, which do not resist shear deformation, acoustic waves are longitudinal, that is, the particles vibrate along the direction in which the wave moves. In solids, elastic shear deformations arise in addition to longitudinal ones, exciting transverse (shear) waves, in which the particles oscillate perpendicular to the direction of wave propagation. The speed of propagation of longitudinal waves is considerably greater than that of shear waves.

The air is not everywhere uniform for sound. Air is known to be constantly in motion, and the speed of its movement differs from layer to layer. In the layers near the ground it is in contact with the surface, buildings and forests, so its speed there is lower than higher up. Because of this, a sound wave does not travel equally fast above and below. If the air movement, i.e. the wind, accompanies the sound, then in the upper layers of air it pushes the sound wave along more strongly than in the lower ones; with a headwind, sound in the upper layers travels more slowly than below. This difference in speed affects the shape of the sound wave, and as a result of this distortion sound does not travel in a straight line: with a tailwind the line of propagation of the sound wave bends downward, and with a headwind it bends upward.

There is another reason for the uneven propagation of sound in air: the different temperatures of its individual layers.

Unevenly heated layers of air, like the wind, change the direction of sound. During the day the sound wave bends upward, because the speed of sound in the lower, hotter layers is greater than in the upper ones. In the evening, when the earth and the air next to it cool quickly, the upper layers become warmer than the lower ones, the speed of sound in them is greater, and the line of propagation of the sound waves bends downward. That is why sounds carry farther and are heard better in the evening.
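The bending described here follows from the fact that the speed of sound grows with air temperature. Below is a minimal sketch (plain Python; the linear approximation c ≈ 331.3 + 0.606·t m/s is a common textbook formula, and the two temperatures are made up for illustration) comparing warm near-ground air with cooler air higher up.

```python
def speed_of_sound(t_celsius: float) -> float:
    """Approximate speed of sound in dry air, m/s (linear textbook approximation)."""
    return 331.3 + 0.606 * t_celsius

# Daytime example: ground-level air warmer than the air above it
for label, t in (("near the ground, 30 C", 30.0), ("higher up, 10 C", 10.0)):
    print(f"{label}: {speed_of_sound(t):.1f} m/s")
```

During the day the wavefront moves faster near the warm ground and the sound ray bends upward; in the evening the gradient reverses and the ray bends back toward the ground, which is why sound carries better then.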

Watching the clouds, you can often notice that at different heights they move not only at different speeds but sometimes in different directions. This means that the wind at different heights above the ground can have different speeds and directions, and the shape of the sound wave will likewise change from layer to layer. Suppose, for example, that the sound travels against the wind. In that case its line of propagation should bend and go upward. But if a layer of slowly moving air lies in its path, the sound changes direction again and may return to the ground. In the space between the place where the wave rises and the place where it comes back down, a "zone of silence" appears.

Organs of sound perception

Hearing is the ability of biological organisms to perceive sounds with the organs of hearing; it is a special function of the auditory apparatus, excited by sound vibrations of the environment, for example air or water. One of the five biological senses, it is also called acoustic perception.

The human ear perceives sound waves with a length of approximately 20 m to 1.6 cm, which corresponds to 16-20,000 Hz (vibrations per second) when the vibrations are transmitted through the air, and up to 220 kHz when sound is transmitted through the bones of the skull. These waves have important biological significance; for example, sound waves in the range of 300-4000 Hz correspond to the human voice. Sounds above 20,000 Hz are of little practical importance because they attenuate quickly; vibrations below 60 Hz are perceived largely through the sense of vibration. The range of frequencies that a person is able to hear is called the auditory or sound range; higher frequencies are called ultrasound, and lower ones infrasound.
The ability to distinguish sound frequencies greatly depends on the individual: his age, gender, susceptibility to hearing diseases, training and hearing fatigue. Individuals are capable of perceiving sound up to 22 kHz, and possibly higher.
A person can distinguish several sounds at the same time due to the fact that there can be several standing waves in the cochlea at the same time.
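The wavelength figures quoted above follow directly from the relation λ = c / f. Here is a minimal sketch in plain Python, assuming a speed of sound of about 343 m/s in air at room temperature.

```python
C_AIR = 343.0  # approximate speed of sound in air at room temperature, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency in air."""
    return C_AIR / freq_hz

for f in (16, 1000, 20_000):
    # 16 Hz gives roughly 21 m, 20 kHz roughly 0.017 m (about 1.7 cm)
    print(f"{f:>6} Hz -> wavelength {wavelength_m(f):.3f} m")
```

This reproduces the span of roughly 20 m down to 1.6-1.7 cm mentioned above for the edges of the audible range.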

The ear is a complex vestibular-auditory organ that performs two functions: it perceives sound impulses and is responsible for the position of the body in space and the ability to maintain balance. This is a paired organ that is located in the temporal bones of the skull, limited externally by the auricles.

The organ of hearing and balance is represented by three sections: the outer, middle and inner ear, each of which performs its own specific functions.

The outer ear consists of the pinna (auricle) and the external auditory canal. The auricle is an elastic cartilage of complex shape covered with skin; its lower part, called the lobe, is a skin fold consisting of skin and adipose tissue.
In living organisms the auricle works as a receiver of sound waves, which are then transmitted to the inner parts of the hearing apparatus. In humans its role is much smaller than in animals, which is why the human auricle is practically motionless, whereas many animals, by moving their ears, can determine the location of a sound source far more accurately than humans.

The folds of the human auricle introduce slight frequency distortions into the sound entering the ear canal, and these distortions depend on the horizontal and vertical position of the source. The brain thus receives additional information for pinpointing the location of the sound source. This effect is sometimes exploited in acoustics, including to create the sensation of surround sound in headphones or hearing aids.
The function of the auricle is to catch sounds; its continuation is the cartilage of the external auditory canal, whose length averages 25-30 mm. The cartilaginous part of the canal passes into the bony part, and the whole external auditory canal is lined with skin containing sebaceous and ceruminous (wax) glands, which are modified sweat glands. The canal ends blindly: it is separated from the middle ear by the eardrum. Sound waves gathered by the auricle strike the eardrum and make it vibrate.

In turn, vibrations from the eardrum are transmitted to the middle ear.

Middle ear
The main part of the middle ear is the tympanic cavity - a small space with a volume of about 1 cm³ located in the temporal bone. It contains three auditory ossicles: the malleus, the incus and the stapes (stirrup). They transmit sound vibrations from the outer ear to the inner ear, amplifying them along the way.

The auditory ossicles, the smallest bones of the human skeleton, form a chain that transmits vibrations. The handle of the malleus is closely fused with the eardrum, the head of the malleus is connected to the incus, and the incus, in turn, through its long process, is connected to the stapes. The base of the stapes closes the window of the vestibule and so connects to the inner ear.
The middle ear cavity is connected to the nasopharynx through the Eustachian tube, through which the average air pressure inside and outside the eardrum is equalized. When external pressure changes, the ears sometimes become blocked, which is usually resolved by yawning reflexively. Experience shows that ear congestion is solved even more effectively by swallowing movements or by blowing into a pinched nose at this moment.

Inner ear
Of the three sections of the organ of hearing and balance, the most complex is the inner ear, which, due to its intricate shape, is called the labyrinth. The bony labyrinth consists of the vestibule, cochlea and semicircular canals, but only the cochlea, filled with lymphatic fluids, is directly related to hearing. Inside the cochlea there is a membranous canal, also filled with liquid, on the lower wall of which there is a receptor apparatus of the auditory analyzer, covered with hair cells. Hair cells detect vibrations of the fluid filling the canal. Each hair cell is tuned to a specific audio frequency, with cells tuned to low frequencies located in the upper part of the cochlea, and high frequencies are picked up by cells in the lower part of the cochlea. When hair cells die from age or for other reasons, a person loses the ability to perceive sounds of the corresponding frequencies.

Limits of Perception

The human ear nominally hears sounds in the range of 16 to 20,000 Hz. The upper limit tends to decrease with age. Most adults cannot hear sounds above 16 kHz. The ear itself does not respond to frequencies below 20 Hz, but they can be felt through the senses of touch.

The range of loudness of perceived sounds is enormous. But the eardrum in the ear is only sensitive to changes in pressure. Sound pressure level is usually measured in decibels (dB). The lower threshold of audibility is defined as 0 dB (20 micropascals), and the definition of the upper limit of audibility refers rather to the threshold of discomfort and then to hearing impairment, concussion, etc. This limit depends on how long we listen to the sound. The ear can tolerate short-term increases in volume up to 120 dB without consequences, but long-term exposure to sounds above 80 dB can cause hearing loss.
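To see how enormous that range is in terms of pressure, here is a minimal sketch (plain Python, using the 20 µPa reference just mentioned) that converts the quoted decibel levels back into pascals.

```python
P_REF = 20e-6  # hearing threshold reference, Pa (0 dB SPL)

def pressure_pa(level_db: float) -> float:
    """Sound pressure corresponding to a given level in dB SPL."""
    return P_REF * 10 ** (level_db / 20)

for level in (0, 80, 120):
    print(f"{level:>3} dB -> {pressure_pa(level):.6f} Pa")

# The pressure ratio between 120 dB and the threshold of hearing
print("ratio 120 dB / 0 dB:", pressure_pa(120) / pressure_pa(0))
```

A 120 dB sound has a pressure amplitude a million times larger than the quietest audible one, which is precisely why the logarithmic decibel scale is used.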

More careful studies of the lower limit of hearing have shown that the minimum threshold at which a sound remains audible depends on its frequency. Plotted as a graph, this dependence is called the absolute threshold of hearing. On average it has a region of greatest sensitivity between 1 kHz and 5 kHz, although with age sensitivity declines in the range above 2 kHz.
There is also a way to perceive sound without the participation of the eardrum - the so-called microwave auditory effect, when modulated radiation in the microwave range (from 1 to 300 GHz) affects the tissue around the cochlea, causing a person to perceive various sounds.
Sometimes a person can hear sounds in the low-frequency region even though no sound of that frequency was actually present. This happens because the vibrations of the basilar membrane in the ear are not linear, so vibrations can arise in it at the difference frequency of two higher-frequency tones.
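A minimal sketch of this effect is given below (plain Python with NumPy; the quadratic term is an arbitrary stand-in for the real nonlinearity of the basilar membrane, and the tone frequencies are chosen only for illustration): two tones are passed through a nonlinearity and the spectrum is searched for the difference frequency.

```python
import numpy as np

fs = 8000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of signal
f1, f2 = 1000, 1150            # two "real" tones, Hz

signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
distorted = signal + 0.3 * signal ** 2   # crude quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), 1 / fs)

# Strongest component well below both input tones (skipping the DC bin)
mask = (freqs > 20) & (freqs < f1 - 50)
peak = freqs[mask][np.argmax(spectrum[mask])]
print(f"strongest low-frequency component: {peak:.0f} Hz (expected {f2 - f1} Hz)")
```

The strongest low-frequency component appears at f2 - f1 = 150 Hz, even though neither input tone contains that frequency.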

Synesthesia

Synesthesia is one of the most unusual psychoneurological phenomena, in which the type of stimulus and the type of sensation a person experiences do not coincide. Synaesthetic perception is expressed in the fact that, in addition to the ordinary qualities, additional and simpler sensations or persistent "elementary" impressions may arise - for example colour, smell, sound, taste, the qualities of a textured surface, transparency, volume and shape, or location in space - qualities not received through the sense organs but existing only as reactions. Such additional qualities may arise as isolated sensory impressions or may even manifest physically.

There is, for example, auditory synesthesia. This is the ability of some people to "hear" sounds when observing moving objects or flashes, even if they are not accompanied by actual sound phenomena.
It should be borne in mind that synesthesia is rather a psychoneurological feature of a person and is not a mental disorder. This perception of the world around us can be felt by an ordinary person through the use of certain narcotic substances.

There is as yet no general theory of synesthesia (a scientifically proven, universal account of it). There are currently many hypotheses, and a great deal of research is being done in this area. Original classifications and comparisons have already appeared, and certain consistent patterns have emerged. For example, scientists have found that synesthetes show a special, almost "preconscious" kind of attention to the phenomena that trigger their synesthesia. Synesthetes have a slightly different brain anatomy and a radically different pattern of brain activation in response to synaesthetic "stimuli". Researchers from the University of Oxford (UK) conducted a series of experiments suggesting that the cause of synesthesia may be hyperexcitable neurons. The only thing that can be said for certain is that such perception arises at the level of brain function, not at the level of the primary reception of information.

Conclusion

Pressure waves travel through the outer ear, eardrum and middle-ear ossicles to reach the fluid-filled inner ear, shaped like a cochlea (snail shell). The fluid, oscillating, acts on a membrane covered with tiny hairs, the cilia. The sinusoidal components of a complex sound cause vibrations in different parts of the membrane. The cilia vibrating along with the membrane excite the nerve fibres associated with them; a series of impulses appears in the fibres, in which the frequency and amplitude of each component of the complex wave are "encoded", and this data is transmitted electrochemically to the brain.

Within the entire spectrum of sounds, one first distinguishes the audible range, from 20 to 20,000 hertz, infrasound (below 20 hertz) and ultrasound (from 20,000 hertz upward). A person cannot hear infrasound or ultrasound, but this does not mean that they have no effect on him. It is known that infrasound, especially below 10 hertz, can influence the human psyche and cause depressive states. Ultrasound can cause astheno-vegetative syndromes, etc.
The audible part of the sound range is divided into low-frequency sounds - up to 500 hertz, mid-frequency - 500-10,000 hertz and high-frequency - over 10,000 hertz.

This division is very important, because the human ear is not equally sensitive to different sounds. It is most sensitive to a relatively narrow band of mid-frequency sounds from 1000 to 5000 hertz; toward lower and higher frequencies its sensitivity drops sharply. As a result, a person can hear sounds with an energy of about 0 decibels in the mid-frequency range yet fail to hear low-frequency sounds of 20, 40 or even 60 decibels. That is, sounds of the same energy may be perceived as loud in the mid-frequency range but as quiet, or not heard at all, in the low-frequency range.
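One common engineering approximation of this unequal sensitivity is the A-weighting curve used in sound level meters. The sketch below (plain Python; the constants are those of the standard A-weighting formula, used here only to show the general trend, not as a model of any particular listener) prints how strongly a tone of the same energy is effectively attenuated at different frequencies relative to 1 kHz.

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting correction in dB for frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.0

for f in (50, 100, 500, 1000, 3000, 10000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+.1f} dB relative to a flat response")
```

A 50 Hz tone is attenuated by roughly 30 dB under this weighting, which mirrors the statement above that low-frequency sounds of equal energy are perceived as much quieter or not heard at all.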

This feature of hearing was not shaped by nature by chance: the sounds necessary for human existence - speech and the sounds of nature - lie mainly in the mid-frequency range.
The perception of sounds is significantly impaired if other sounds or noises of similar frequency or harmonic composition are heard at the same time. This means that, on the one hand, the human ear does not perceive low-frequency sounds well and, on the other, if there is extraneous noise in the room, the perception of such sounds can be disturbed and distorted even further.
