Idealization. Thought experiment

Experiment

The most important part of scientific research is the experiment; more than two thirds of all scientific labor resources are spent on experiments. The basis of an experiment is a scientifically staged trial (or series of trials) under precisely recorded and controlled conditions, which make it possible to follow its course, to control it, and to recreate it every time these conditions are repeated. The word experiment itself comes from the Latin experimentum, a trial or test. A trial is understood as the reproduction of the phenomenon under study under specified experimental conditions, with the possibility of recording its results; a trial is an elementary, self-contained part of an experiment.

An experiment differs from ordinary passive observation in the researcher's active influence on the phenomenon being studied.

In scientific language and research practice, the term “experiment” is usually used in a sense common to a number of related concepts: trial, targeted observation, reproduction of the object of knowledge, and the organization of special conditions for its existence. The concept covers the scientific staging of trials and the observation of the phenomenon under study under precisely recorded conditions, which allow the course of the phenomenon to be followed and recreated whenever these conditions are repeated.

The basic purpose of an experiment is to identify the properties of the objects under study and to test the validity of hypotheses.

Experimental studies generally address two main tasks:

1. Identification of quantitative patterns that establish the relationship between variables that describe the object of study.

2. Finding the values of the variables that ensure the optimal (according to a certain criterion) mode of operation of the object.

There are natural and model experiments. A natural experiment is performed directly on the object itself, whereas a model experiment is performed on its substitute, a model. At present the most common models are mathematical ones, and experiments carried out on such models are called computational experiments.

Before each experiment, a program is drawn up, which includes:

– the purpose and objectives of the experiment;

– the selection of variable factors (input variables);

– justification of the scope of the experiment, the number of experiments;

– determination of the sequence of changes in factors;

– choosing a step for changing factors, setting intervals between future experimental points;

– justification of measuring instruments;

– description of the experiment;

– justification of methods for processing and analyzing experimental results.

Before the experiment, the variable factors must be selected, i.e. the principal and secondary characteristics influencing the process under study must be established, and the calculated (theoretical) schemes of the process analyzed. The main principle for establishing the degree of importance of a characteristic is its role in the process under study.

Often the experimenter’s work is so chaotic and unorganized, and its effectiveness is so low, that the results obtained cannot justify even the funds spent on conducting the experiments. Therefore, the issues of organizing an experiment, reducing the costs of conducting it and processing the results obtained are quite relevant.

Modern methods of planning an experiment and processing its results, developed on the basis of probability theory and mathematical statistics, allow:

– significantly (often several times) reduce the number of experiments required;

– make the experimenter's work more focused and organized;

– significantly increase both the productivity of that work and the reliability of the results obtained.

The theory of experimental planning began with the work of the English scientist R. Fisher in the 1930s, who applied it to agrobiological problems.

Planning an experiment consists of choosing the number and conditions of experiments that allow one to obtain the necessary knowledge about the object of study with the required accuracy. This is purposeful control of an experiment, implemented under conditions of incomplete knowledge of the mechanism of the phenomenon being studied.

The purpose of planning an experiment is to find conditions and rules for conducting experiments under which reliable and valid information about the object can be obtained with the least expenditure of labor, and to present this information in a compact and convenient form with a quantitative assessment of its accuracy.

The general direction of the theory of experimental planning can be formulated as follows: “fewer experiments - more information - higher quality of results.”

Experiments are usually carried out in small series according to a pre-designed algorithm. After each small series of experiments, the observation results are processed and a well-founded decision is made on what to do next. When an algorithm for planning the experiment is chosen, the purpose of the study, as well as a priori information about the mechanism of the phenomenon being studied, is naturally taken into account. This information is always incomplete, with the possible exception of the trivial case of demonstration experiments.
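As a simple illustration of fixing the number and conditions of experiments in advance, the sketch below (an assumed example, not taken from the text; the factor names and levels are hypothetical) enumerates a two-level full factorial plan for k = 3 factors:

```python
from itertools import product

# Hypothetical example: k = 3 controllable factors, each varied over two levels
# (coded -1 for the lower level and +1 for the upper level).
factors = {
    "temperature": (-1, +1),
    "pressure":    (-1, +1),
    "feed_rate":   (-1, +1),
}

# A two-level full factorial plan contains 2**k runs and fixes, in advance,
# the combination of factor levels for every experiment in the series.
plan = list(product(*factors.values()))

for run, levels in enumerate(plan, start=1):
    settings = dict(zip(factors.keys(), levels))
    print(f"run {run}: {settings}")

print("total number of experiments:", len(plan))  # 2**3 = 8
```

Such a plan fixes all 2^k = 8 combinations of factor levels before the first experiment of the series is carried out.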

As a rule, any object of study (a carrier of some unknown properties or qualities that need to be studied) can be represented as a “black box” with a certain number of inputs and outputs (Fig. 2.2.).


Fig. 2.2. Structural diagram of the research object

Input variables Xi, i = 1, 2, ..., k (where k is the number of factors), which determine the state of the object, are called factors. A fixed value of a factor is called a factor level. The main requirement on the factors is sufficient controllability, meaning the ability to set the desired level of a factor and to keep it stable throughout the experiment.

The output variable Yg (usually g = 1) is the object's response to the input influences; it is called the response, and the dependence

Y = f(X1, X2, ..., Xi, ..., Xk) (2.1)

is called the response function (objective function). Usually only a general idea of the nature of this dependence is available. The choice of the response function is determined by the purpose of the study, which may be the optimization of economic (cost, productivity), technological (accuracy, speed), design (dimensions, reliability) or other characteristics of the object.

The geometric representation of the response function in the factor space X1, X2, ..., Xk is called the response surface.

The true form of the response function (2.1) is most often unknown before the experiment, and therefore a statistical model of the process is used to describe the response surface mathematically:

Yp = f(X1, X2, ..., Xi, ..., Xk). (2.2)

Equation (2.2) is obtained as a result of the experiment and is called an approximating function, or a regression model, of the process. By approximation we mean the replacement of exact analytical expressions with approximate ones. A polynomial of some degree is usually used as the regression equation; polynomials of the first and second order are the most widely used, since the accuracy required of the calculations is usually modest (errors of about 5-15% are acceptable).

For example, for k = 1 a polynomial of degree n has the form

Yp = a0 + a1X + a2X^2 + ... + anX^n;

for k = 2 and n = 1 it is usually written as

Yp = a0 + a1X1 + a2X2,

where a0, a1, a2, ..., an are the unknown regression coefficients, which are calculated from the experimental results.

In addition, because the approximating polynomial contains a finite number of terms, the discrepancy between the true and approximate values of the response function outside the experimental points can be significant. The problem therefore arises of finding a form of polynomial and a number of experiments such that a certain criterion is satisfied. The criterion usually taken is the sum of squared deviations of the experimental values Yj from their calculated values Yjp; the best approximation to the true function is considered to be the approximating function that minimizes this sum.

To determine the unknown coefficients of the regression model (2.2), the most universal approach is the least squares method (LSM).

Using least squares, the values a0, a1, a2, ..., an are found from the condition of minimizing the sum of squared deviations of the experimental response values Yj from the values Yjp obtained from the regression model, i.e. by minimizing the sum

S = Σj (Yj - Yjp)^2, where the sum is taken over all experiments j = 1, 2, ..., N.

Minimization of this sum of squares is carried out in the usual way, by means of differential calculus, by setting the first partial derivatives of S with respect to a0, a1, a2, ..., an equal to zero. The result is a closed system of algebraic equations (the normal equations) in the unknowns a0, a1, a2, ..., an.

When using the least squares method, a necessary condition for obtaining statistical estimates is the fulfillment of the inequality N > d, i.e. the number of experiments N must be greater than the number of unknown coefficients d.

The main feature of the statistical (regression) model under consideration is that such a model cannot accurately describe the behavior of an object in any specific experiment. The researcher cannot predict the exact value of Y in each experiment, but with the help of an appropriate statistical model he can indicate around which center the values of Y will be grouped for a given combination of factor values Xij.
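As a minimal sketch of how the coefficients of a first-order regression model with k = 2 factors can be found by least squares, the following fragment uses hypothetical experimental data (all values and variable names are assumptions for illustration) and numpy's numerical least-squares solver:

```python
import numpy as np

# Hypothetical experimental data: N = 6 runs, two factors X1, X2 and the measured response Y.
X1 = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
X2 = np.array([0.5, 1.5, 0.5, 1.5, 0.5, 1.5])
Y  = np.array([2.1, 3.9, 3.2, 5.1, 4.0, 6.2])

# Design matrix for the first-order model Yp = a0 + a1*X1 + a2*X2 (k = 2, n = 1).
# Note that N = 6 runs > d = 3 unknown coefficients, as the method requires.
A = np.column_stack([np.ones_like(X1), X1, X2])

# Least squares: minimizes the sum of squared deviations S = sum_j (Yj - Yjp)^2.
coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
a0, a1, a2 = coeffs

Yp = A @ coeffs  # calculated (model) response values
print("a0, a1, a2 =", a0, a1, a2)
print("residual sum of squares S =", np.sum((Y - Yp) ** 2))
```

Solving the normal equations obtained by setting the partial derivatives of S to zero would give the same coefficients; the numerical solver simply performs this step for us.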

Induction and deduction

Induction is a kind of generalization consisting in the transition from knowledge of individual facts, and from less general knowledge, to more general knowledge. With the inductive method of research, general principles and laws are established on the basis of particular facts and phenomena.

The induction process usually begins with the comparison and analysis of observational and experimental data. As this body of data expands, the regular occurrence of some property or relationship may become apparent. Its repeated occurrence in experiments, in the absence of exceptions, inspires confidence in the universality of the phenomenon and leads to an inductive generalization: the assumption that things will be exactly so in all similar cases. A conclusion by induction is a conclusion about the general properties of all objects belonging to a given class, based on the observation of a sufficiently wide variety of individual facts. Thus, for example, D.I. Mendeleev, using particular facts about the chemical elements, formulated the periodic law.

Typically, inductive generalizations are viewed as empirical truths, or empirical laws.

Deduction is a thinking operation in which new knowledge is derived from knowledge of a more general character, obtained earlier by generalizing observations, experiments and practical activity, i.e. by induction. When the deductive method is applied, particular propositions are derived from general laws, axioms, etc. A deductive conclusion is constructed according to the following scheme: all objects of class “A” have property “B”; object “a” belongs to class “A”; therefore, “a” has property “B”. In general, deduction as a method of knowledge relies on laws and principles that are already known, and so the deductive method does not by itself yield substantially new knowledge. Deduction is only a way of logically unfolding a system of propositions from initial knowledge, a way of revealing the specific content of generally accepted premises. Thus, for example, the equations of motion of a car are obtained from the general laws of mechanics.

The disadvantage of the deductive method of research is that it is limited by the general laws on whose basis the particular case is studied. Thus, for example, to study the motion of a car comprehensively it is not enough to know only the laws of mechanics; other principles must also be applied, arising from the analysis of the system “driver - car - external environment”.

Induction and deduction are closely related and complement each other. For example, a scientist, justifying a hypothesis of scientific research, establishes its compliance with the general laws of natural science (deduction). At the same time, a hypothesis is formulated on the basis of particular facts (induction).

Analysis and synthesis

Analysis (from the Greek analysis, decomposition) is a method by which the researcher mentally separates the object under study into its various components (both parts and elements), paying special attention to the connections between them. Analysis is an organic component of any scientific research and is usually its first stage, when the researcher passes from an undifferentiated description of the object being studied to identifying its structure and composition, as well as its properties and characteristics.

Synthesis (from the Greek synthesis, connection): using this method, the researcher mentally combines the various components (both parts and elements) of the object being studied into a single system. In synthesis there is not just a unification but a generalization of the analytically identified and studied features of the object. The propositions obtained as a result of synthesis are included in the theory of the object, which, enriched and refined, determines the path of further scientific research.

The methods of analysis and synthesis are used equally in scientific research. Thus, when individual elements (subsystems and mechanisms) are identified in studying the functioning of an engine, the method of analysis is used; when the engine is studied as a system consisting of these elements, the method of synthesis is used. The method of synthesis makes it possible to generalize concepts, laws and theories. The operations of analysis and synthesis are inextricably linked with each other; each is carried out with the help of, and through, the other.

Analogy

Analogy is a method of cognition in which knowledge obtained in the study of one object is transferred to another, less studied object that is currently under investigation. The analogy method is based on the similarity of objects in a number of characteristics, which allows one to obtain quite reliable knowledge about the subject being studied. The use of the analogy method in scientific knowledge requires a certain caution: it is extremely important to identify clearly the conditions under which it works most effectively. However, in cases where a system of clearly formulated rules for transferring knowledge from a model to a prototype can be developed, the results and conclusions obtained by the analogy method acquire demonstrative force.

Abstraction and formalization

Abstraction is a method of scientific research based on setting aside, when a certain object is studied, those of its aspects and features that are inessential in the given situation. This makes it possible to simplify the picture of the phenomenon under study and to consider it in its “pure” form. Abstraction rests on the idea of the relative independence of phenomena and of their aspects, which makes it possible to separate the essential aspects from the inessential ones. As a rule, the original subject of research is in this case replaced by another that is equivalent to it under the conditions of the given problem. For example, when the operation of a mechanism is studied, a calculation scheme is analyzed that reflects the main, essential properties of the mechanism.

The following types of abstraction are distinguished:

– identification (the formation of concepts by combining into a special class objects related by their properties). That is, on the basis of the sameness of a certain set of objects that are similar in some respect, an abstract object is constructed. For example, as a result of generalizing the ability of electronic, magnetic, electric-machine, relay, hydraulic and pneumatic devices to amplify input signals, such a generalized abstraction (abstract object) as the amplifier arose; it represents the properties of objects of different kinds that are equal in a certain respect.

– isolation (the singling out of properties inextricably linked with objects). Isolating abstraction is performed in order to single out and clearly fix the phenomenon under study. An example is the abstraction of the actual total force acting on the boundary of a moving fluid element. The number of these forces, like the number of properties of the fluid element, is infinite. From this variety, however, it is possible to isolate the forces of pressure and friction by mentally identifying, at the boundary of the flow, a surface element through which the external medium acts on the flow with some force (the researcher is not interested here in the causes of such a force). Mentally decomposing this force into two components, one can define the pressure force as the normal component of the external action and the friction force as the tangential component.

– idealization corresponds to the goal of replacing the real situation with an idealized scheme in order to simplify the situation under study and make more effective use of research methods and tools. Idealization is the mental construction of concepts of objects that do not exist and cannot be realized, but that have prototypes in the real world: for example, the ideal gas, the perfectly rigid body, the material point, etc. As a result of idealization, real objects are deprived of some of their inherent properties and endowed with hypothetical properties.

A modern researcher often, from the very beginning, sets the task of simplifying the phenomenon being studied and constructing its abstract, idealized model. Idealization acts here as the starting point in the construction of theory. The criterion for the fruitfulness of idealization is the satisfactory agreement in many cases between the theoretical and empirical results of the study.

Formalization is a method of studying certain areas of knowledge in formalized systems using artificial languages, for example the formalized languages of chemistry, mathematics and logic. Formalized languages make it possible to record knowledge briefly and clearly and to avoid the ambiguity of natural-language terms. Formalization, which rests on abstraction and idealization, can be regarded as a kind of modeling (sign modeling).



Special methods of scientific knowledge include procedures of abstraction and idealization, during which scientific concepts are formed.

Abstraction is the mental setting aside of those properties, connections and relationships of the object under study that seem unimportant for the given theory.

The result of the process of abstraction is called an abstraction. Examples of abstractions are such concepts as point, line, set, etc.

Idealization is the operation of mentally singling out some one property or relationship that is important for the given theory (it is not necessary that this property really exist) and mentally constructing an object endowed with that property.

It is through idealization that such concepts as “absolutely black body”, “ideal gas”, “atom” in classical physics, etc. are formed. The ideal objects obtained in this way do not actually exist, since in nature there cannot be objects and phenomena that have only one property or quality. This is the main difference between ideal objects and abstract ones.

Formalization is the use of special symbols instead of real objects.

A striking example of formalization is the widespread use of mathematical symbols and mathematical methods in natural science. Formalization makes it possible to examine an object without directly addressing it and record the results obtained in a concise and clear form.

Induction

Induction is a method of scientific knowledge consisting in the formulation of a logical conclusion by summarizing observational and experimental data: a general conclusion is obtained from particular premises, a movement from the particular to the general.

A distinction is made between complete and incomplete induction. Complete induction builds a general conclusion on the basis of studying all the objects or phenomena of a given class, and the resulting conclusion is therefore a reliable one. But in the surrounding world there are few classes of similar objects whose number is so limited that a researcher can study each of them.

Therefore, scientists much more often resort to incomplete induction, which builds a general conclusion based on the observation of a limited number of facts, unless among them there are those that contradict the inductive inference. For example, if a scientist observes the same fact on a hundred or more occasions, he can conclude that this effect will appear in other similar circumstances. Naturally, the truth obtained in this way is incomplete; the knowledge obtained is probabilistic in nature and requires additional confirmation.

Deduction

Induction cannot exist in isolation from deduction.

Deduction is a method of scientific knowledge consisting in obtaining particular conclusions on the basis of general knowledge, a conclusion from the general to the particular.

A deductive inference is constructed according to the following scheme: all objects of class A have property B; object a belongs to class A; hence, a has property B. For example: “All people are mortal”; “Ivan is a man”; therefore, “Ivan is mortal.”

Deduction as a method of cognition is based on already known laws and principles. Therefore, the deduction method does not allow us to obtain meaningful new knowledge. Deduction is only a way of logical development of a system of propositions based on initial knowledge, a way of identifying the specific content of generally accepted premises. Therefore, it cannot exist in isolation from induction. Both induction and deduction are indispensable in the process of scientific knowledge.

Hypothesis

The solution to any scientific problem involves putting forward various guesses, assumptions, and most often more or less substantiated hypotheses, with the help of which the researcher tries to explain facts that do not fit into old theories.

A hypothesis is any assumption, conjecture or prediction put forward to eliminate a situation of uncertainty in scientific research.

Therefore, a hypothesis is not reliable, but probable knowledge, the truth or falsity of which has not yet been established.

Special universal methods of scientific knowledge

Universal methods of scientific knowledge include analogy, modeling, analysis and synthesis.

Analogy

Analogy is a method of cognition in which knowledge obtained in the examination of one object is transferred to another object that is less studied but similar to the first in some essential properties.

The analogy method is based on the similarity of objects in a number of characteristics, and the similarity is established as a result of comparing the objects with each other. Thus, the basis of the analogy method is the method of comparison.

The use of the analogy method in scientific knowledge requires some caution. The fact is that one can mistake a purely external, random similarity between two objects for an internal, significant one, and on this basis draw a conclusion about a similarity that in fact does not exist. Thus, although both the horse and the car are used as vehicles, it would be incorrect to transfer knowledge about the structure of the car to the anatomy and physiology of the horse. This analogy will be wrong.

However, the method of analogy occupies a much more significant place in cognition than it might seem at first glance. After all, analogy does not simply outline connections between phenomena. The most important feature of human cognitive activity is that our consciousness is not capable of perceiving completely new knowledge if it does not have points of contact with knowledge already known to us. That is why, when explaining new material in the classroom, they always resort to examples, which should draw an analogy between known and unknown knowledge.

Modeling

The analogy method is closely related to the modeling method.

The modeling method involves the study of objects by means of their models, with subsequent transfer of the data obtained to the original.

This method is based on the significant similarity of the original object and its model. Modeling should be treated with the same caution as analogy, and the limits and boundaries of simplifications permissible in modeling should be strictly indicated.

Modern science knows several types of modeling: subject, mental, symbolic and computer.

Subject modeling is the use of models that reproduce certain geometric, physical, dynamic or functional characteristics of the prototype. Thus, the aerodynamic qualities of airplanes and other machines are studied on models, and various structures (dams, power plants, etc.) are designed with their help.

Mental modeling is the use of various mental representations in the form of imaginary models. E. Rutherford's ideal planetary model of the atom is widely known: it is reminiscent of the Solar System, with negatively charged electrons (the planets) revolving around a positively charged nucleus (the Sun).

Sign (symbolic) modeling uses diagrams, drawings and formulas as models; they reflect certain properties of the original in symbolic form. A variety of sign modeling is mathematical modeling, carried out by the means of mathematics and logic. The language of mathematics makes it possible to express the properties of objects and phenomena and to describe their functioning or interaction with other objects by means of a system of equations; this creates a mathematical model of the phenomenon. Mathematical modeling is often combined with subject modeling.

Computer modeling has become widespread recently. In this case the computer is both the means and the object of experimental research, replacing the original; the model is a computer program (algorithm).
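A minimal sketch of this kind of modeling, under assumed conditions: the “original” is a cooling body, the mathematical model is Newton's law of cooling dT/dt = -k(T - T_env), and the computer program integrates it numerically, so that “experiments” are carried out on the model (by varying the parameter k) instead of on the original. All names and numbers below are illustrative assumptions:

```python
def simulate_cooling(T0: float, T_env: float, k: float, dt: float, steps: int) -> list[float]:
    """Explicit Euler integration of the model dT/dt = -k * (T - T_env)."""
    T = T0
    history = [T]
    for _ in range(steps):
        T += -k * (T - T_env) * dt   # one discrete step of the mathematical model
        history.append(T)
    return history

# "Experiment" on the model: vary the parameter k instead of touching the original object.
for k in (0.05, 0.10, 0.20):
    final_T = simulate_cooling(T0=90.0, T_env=20.0, k=k, dt=1.0, steps=60)[-1]
    print(f"k = {k:.2f}: temperature after 60 steps is about {final_T:.1f} degrees")
```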

Analysis

Analysis is a method of scientific knowledge based on the procedure of mentally or actually dividing an object into its constituent parts and studying them separately.

This procedure aims to move from the study of the whole to the study of its parts and is carried out by abstracting from the connection of these parts with each other.

Analysis is an organic component of any scientific research, which is usually its first stage, when the researcher moves from describing the undivided object under study to identifying its structure, composition, as well as properties and characteristics. To comprehend an object as a whole, it is not enough to know what it consists of. It is important to understand how the component parts of an object are related to each other, and this can only be done by studying them in unity. For this purpose, analysis is complemented by synthesis.

Synthesis

Synthesis is a method of scientific knowledge based on the procedure of combining the various elements of a subject into a single whole, a system, without which truly scientific knowledge of this subject is impossible.

Synthesis acts not as a method of constructing the whole, but as a method of representing the whole in the form of a unity of knowledge obtained through analysis. It is important to understand that synthesis is not at all a simple mechanical connection of disconnected elements into a single system. It shows the place and role of each element in this system, its connection with other components of the system. Thus, during synthesis, there is not just a unification, but a generalization of the analytically identified and studied features of the object.

Synthesis is as necessary a part of scientific knowledge as analysis, and comes after it. Analysis and synthesis are two sides of a single analytical-synthetic method of cognition that do not exist without each other.

Classification

Classification is a method of scientific knowledge that makes it possible to combine into one class objects that are as similar as possible to one another in their essential characteristics.

Classification makes it possible to reduce the accumulated diverse material to a relatively small number of classes, types and forms, to identify the initial units of analysis, and to discover stable characteristics and relationships. Typically, classifications are expressed in the form of natural language texts, diagrams and tables.

The variety of methods of scientific knowledge creates difficulties in their use and understanding of their significance. These problems are solved by a special field of knowledge - methodology, i.e. teaching about methods. The most important task of methodology is to study the origin, essence, effectiveness and other characteristics of methods of cognition.

Idealization is a special kind of abstraction consisting in the mental introduction of certain changes into the object being studied, in accordance with the goals of the research. As a result of such changes, some properties, aspects or features of the objects may, for example, be excluded from consideration. A widespread idealization of this kind in mechanics is the material point, which can stand for any body, from an atom to a planet.

Another type of idealization is endowing an object with some properties that are not realizable in reality. An example of such an idealization is a completely black body. Such a body is endowed with the property, which does not exist in nature, of absorbing absolutely all radiant energy falling on it, without reflecting anything and without letting anything pass through it.

The radiation spectrum of an absolutely black body is an ideal case, because it is influenced neither by the nature of the emitter's substance nor by the state of its surface. The problem of calculating the amount of radiation emitted by an ideal emitter, an absolutely black body, was taken up by Max Planck, who worked on it for four years. In 1900 he managed to find a solution in the form of a formula that correctly described the spectral distribution of the energy of black-body radiation. Thus, work with an idealized object helped lay the foundations of quantum theory, which marked a radical revolution in science.
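For reference, the formula in question is what is now known as Planck's law for the spectral radiance of a black body; the short sketch below is not part of the original text and uses standard SI constants, with the frequency and temperature chosen purely for illustration:

```python
import math

# Planck's law: B(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / (k T)) - 1)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(nu: float, T: float) -> float:
    """Spectral radiance of an ideal black body at frequency nu (Hz) and temperature T (K)."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

# Illustrative evaluation: a visible-light frequency and a temperature of 5800 K.
print(planck_radiance(5.5e14, 5800.0))
```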

The advisability of using idealization is determined by the following circumstances:

firstly, idealization is appropriate when the real objects to be studied are too complex for the available means of theoretical (in particular, mathematical) analysis, while for the idealized case it is possible, by applying these means, to construct and develop a theory that, under certain conditions and for certain purposes, effectively describes the properties and behavior of these real objects;

secondly, it is advisable to use idealization in cases where it is necessary to exclude certain properties and connections of the object under study, without which it cannot exist, but which obscure the essence of the processes occurring in it. A complex object is presented as if in a “purified” form, which makes it easier to study. An example is Sadi Carnot's ideal steam engine;

thirdly, the use of idealization is advisable when the properties, aspects, connections of the object being studied that are excluded from consideration do not affect its essence within the framework of this study. Thus, if in a number of cases it is possible and advisable to consider atoms in the form of a material point, then such idealization is unacceptable when studying the structure of the atom.

If different theoretical approaches exist, then different variants of idealization are possible. As an example one can cite three different concepts of the “ideal gas”, formed under the influence of different theoretical and physical ideas: those of Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac. All three variants of idealization proved fruitful in the study of gases of different natures: the Maxwell-Boltzmann ideal gas became the basis for studies of ordinary rarefied molecular gases at fairly high temperatures; the Bose-Einstein ideal gas was used to study the photon gas; and the Fermi-Dirac ideal gas helped solve a number of problems of the electron gas.

Idealization, in contrast to pure abstraction, allows an element of sensory clarity. The usual process of abstraction leads to the formation of mental abstractions that possess no clarity at all. This feature of idealization is very important for such a specific method of theoretical knowledge as the thought experiment.

A thought experiment is the mental selection of certain propositions and situations that make it possible to detect some important features of the object under study. It involves operating with an idealized object, and in this it bears a certain resemblance to a real experiment. Moreover, every real experiment, before being carried out in practice, is first “played out” mentally by the researcher in the process of thinking and planning.

Thought experiments also play an independent role in science. While retaining a similarity to the real experiment, a thought experiment differs from it significantly. The difference is as follows:

A real experiment is a method associated with practical, “instrumental” knowledge of the surrounding world. In a thought experiment the researcher operates not with material objects but with their idealized images, and the operation itself is carried out in his mind, i.e. purely speculatively, without any material or technical support.

In a real experiment, one has to take into account real physical and other limitations on the behavior of the object of study. In this regard, a thought experiment has a clear advantage over a real experiment. In a thought experiment, you can abstract from the action of undesirable factors by conducting it in an idealized, “pure” form.

In scientific knowledge, there may be cases when, when studying certain phenomena and situations, conducting real experiments turns out to be completely impossible. This gap in knowledge can only be filled by a thought experiment.

A clear example of the role of the thought experiment is the history of the study of motion and friction. For millennia Aristotle's conception prevailed, which held that a moving body stops if the force pushing it ceases to act. The proof was taken to be the motion of a cart or a ball, which stopped by itself if the push was not renewed.

Galileo, by means of a thought experiment and step-by-step idealization, managed to imagine an ideally smooth surface and to arrive at the law of inertia. “The law of inertia,” wrote A. Einstein and L. Infeld, “cannot be deduced directly from experiment; it can be deduced speculatively - by thinking associated with observation.” This experiment can never be performed in reality, although it leads to a deep understanding of actual processes.

A thought experiment can have great heuristic value, helping to interpret new knowledge obtained by purely mathematical means. This is confirmed by many examples from the history of science, one of which is W. Heisenberg's thought experiment aimed at clarifying the uncertainty relation. In this thought experiment the uncertainty relation was found through abstraction, by dividing the whole structure of the electron into two opposites: the wave and the corpuscle. The coincidence of the result of the thought experiment with the result obtained mathematically thus proved the objectively existing contradictory (wave-corpuscle) nature of the electron as an integral material formation and made it possible to understand its essence.

The idealization method, very fruitful in many cases, at the same time has certain limitations. The development of scientific knowledge sometimes forces us to abandon previously existing idealizations. For example, Einstein abandoned such idealizations as “absolute space” and “absolute time.” In addition, any idealization is limited to a specific area of ​​phenomena and serves to solve only certain problems.

Idealization in itself, although it can be fruitful and even lead to a scientific discovery, is not yet sufficient for the discovery to be made. Here the theoretical principles from which the researcher proceeds play a decisive role. Thus, the idealization of the steam engine, successfully carried out by Sadi Carnot, brought him to the threshold of discovering the mechanical equivalent of heat, a discovery he nevertheless could not make because he believed in the existence of caloric.

The main positive significance of idealization as a method of scientific knowledge is that the theoretical constructions obtained on its basis then make it possible to study real objects and phenomena effectively. The simplifications achieved through idealization facilitate the creation of a theory that reveals the laws of the studied domain of phenomena of the material world. If the theory as a whole describes real phenomena correctly, then the idealizations underlying it are also legitimate.

Formalization. The language of science.

Formalization is a special approach in scientific knowledge consisting in the use of special symbols, which makes it possible to set aside the study of real objects and the content of the theoretical propositions describing them, and to operate instead with a certain set of symbols (signs). An example of formalization is a mathematical description.

To build any formal system you need:

1) setting the alphabet, i.e. a certain set of characters;

2) setting the rules by which “words” and “formulas” can be obtained from the initial characters of this alphabet;

3) setting the rules by which one can pass from some words and formulas of the given system to other words and formulas (the so-called rules of inference); a toy sketch of such a system is given after this list.
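The sketch below is an assumed illustration only, based on the well-known MIU system rather than on anything in the text: the alphabet is {M, I, U}, the single axiom is the word MI, and four rewriting rules play the role of rules of inference.

```python
# A toy formal system: alphabet {M, I, U}, axiom "MI", four rules of inference.
ALPHABET = {"M", "I", "U"}
AXIOM = "MI"

def is_word(s: str) -> bool:
    """Formation rule: a word is any non-empty string over the alphabet."""
    return bool(s) and set(s) <= ALPHABET

def apply_rules(word: str) -> set[str]:
    """Rules of inference: every string derivable from `word` in one step."""
    results = set()
    if word.endswith("I"):                  # Rule 1: xI -> xIU
        results.add(word + "U")
    if word.startswith("M"):                # Rule 2: Mx -> Mxx
        results.add("M" + word[1:] * 2)
    for i in range(len(word) - 2):          # Rule 3: replace "III" with "U"
        if word[i:i + 3] == "III":
            results.add(word[:i] + "U" + word[i + 3:])
    for i in range(len(word) - 1):          # Rule 4: delete "UU"
        if word[i:i + 2] == "UU":
            results.add(word[:i] + word[i + 2:])
    return {w for w in results if is_word(w)}

# Derive all "theorems" reachable from the axiom in three inference steps.
theorems = {AXIOM}
for _ in range(3):
    theorems |= {t for w in theorems for t in apply_rules(w)}
print(sorted(theorems, key=len))
```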

The advantage of formalization is that it ensures brevity and clarity in recording scientific information, which opens up great opportunities for operating with it. It is unlikely that Maxwell's theoretical conclusions, for example, could have been used successfully had they not been compactly expressed in the form of mathematical equations but instead described in ordinary natural language.

Of course, a formalized language is not as rich and flexible as a natural one, but it is not ambiguous (polysemous): it has unambiguous semantics, i.e. a formalized language is monosemic. The expanding use of formalization as a method of theoretical knowledge is associated not only with the development of mathematics. Chemistry, too, has its own symbolism, together with rules for operating with it; it is one of the variants of a formalized artificial language.

The language of modern science differs significantly from natural human language. It contains many special terms and expressions and makes wide use of the means of formalization, among which the central place belongs to mathematical formalization. Based on the needs of science, various artificial languages are created for solving particular problems. The entire set of artificial formalized languages that have been or are being created enters into the language of science, forming a powerful means of scientific knowledge.

At the same time, it should be borne in mind that the creation of a single, all-embracing formalized language of science is not possible, and formalized languages cannot be the only form of the language of modern science, because the striving for maximum adequacy requires the use of non-formalized forms of language as well. But insofar as adequacy is unthinkable without precision, the tendency towards increasing formalization of the languages of all sciences, and especially of the natural sciences, is objective and progressive.


The use of symbolism ensures a complete overview of a certain area of problems and brevity and clarity in recording knowledge, and it avoids the ambiguity of terms. The cognitive value of formalization lies in the fact that it is a means of systematizing and clarifying the logical structure of a theory. One of its most valuable advantages is its heuristic capability, in particular the ability to detect and prove previously unknown properties of the objects being studied.

There are two kinds of formalized theories: fully formalized and partially formalized. Fully formalized theories are constructed in axiomatic-deductive form, with an explicit indication of the language of formalization and the use of clear logical means. In partially formalized theories the language and the logical means used in developing the given scientific discipline are not explicitly fixed. At the present stage of the development of science, partially formalized theories predominate. The method of formalization contains great heuristic possibilities; the process of formalization is creative. Starting from a certain level of generalization of scientific facts, formalization transforms them and reveals in them features that were not recorded at the content-intuitive level.

Idealization and abstraction are the replacement of individual properties of an object, or of the whole object, by a symbol or sign, a mental setting aside of one thing in order to single out another. Ideal objects in science reflect stable connections and properties of objects: mass, speed, force, etc. But ideal objects need not have real prototypes in the objective world; as scientific knowledge develops, some abstractions can be formed from others without recourse to practice. A distinction is therefore made between empirical and ideal theoretical objects. Idealization is a necessary precondition for constructing a theory, since the system of idealized, abstract images determines the specifics of the given theory.



Modeling. A model is a mental or material substitute for the most significant aspects of the object being studied: a specially created object, system or device that in a certain respect imitates and reproduces real-life objects or systems that are the subject of scientific research. Modeling relies on an analogy of properties and relationships between the original and the model. Having studied the relationships that exist between the quantities describing the model, the researcher transfers them to the original and thus draws a plausible conclusion about the behavior of the latter.

Modeling as a method of scientific knowledge is based on a person's ability to abstract the studied features or properties of various objects and phenomena and to establish certain relationships between them. Although scientists have long used this method, it was only from the middle of the 19th century that modeling gained firm recognition among scientists and engineers. With the development of electronics and cybernetics, modeling has become an extremely effective research method. Thanks to modeling, patterns of reality that in the original could be studied only by observation become accessible to experimental research; it becomes possible to reproduce repeatedly, in the model, phenomena corresponding to unique processes of nature or social life.

If the history of science and technology is considered from the point of view of the models used, it can be stated that at the early stages material, visual models were employed. Subsequently they gradually lost, one after another, the concrete features of the original, and their correspondence to the original acquired an increasingly abstract character. At present the search for models built on logical foundations is becoming ever more important.

There are many ways of classifying models. In our view, the most convincing is the following:

a) natural models (existing in nature in their natural form). So far none of the structures created by man can compete with natural structures in the complexity of the problems they solve. There is a science, bionics, whose purpose is to study unique natural models with the aim of using the acquired knowledge to create artificial devices. It is known, for example, that the creators of the shape of the submarine took the body shape of the dolphin as an analogue, and that a model of the wingspan of birds was used in designing the first aircraft;

b) material-technical models (reproducing the original completely, in reduced or enlarged form). Here specialists distinguish models created to reproduce the spatial properties of the object under study (models of houses, of district development, etc.) and models that reproduce the dynamics of the objects under study and their regular relationships, quantities and parameters (models of airplanes, ships and the like);

c) sign models, including mathematical ones. Sign modeling makes it possible to simplify the subject being studied and to single out in it those structural relationships that most interest the researcher. Though losing to material-technical models in clarity, sign models gain through a deeper penetration into the structure of the fragment of objective reality being studied.

Thus, with the help of sign systems it is possible to understand the essence of such complex phenomena as the structure of the atomic nucleus, elementary particles and the Universe. The use of sign models is therefore especially important in those areas of science and technology that deal with the study of extremely general connections, relationships and structures. The possibilities of sign modeling have expanded especially with the advent of computers: it has become possible to construct complex sign-mathematical models that allow the most suitable values of the quantities of the complex real processes under study to be selected and long-term experiments to be carried out on them.

In the course of research the need often arises to construct various models of the processes being studied, ranging from material models to conceptual and mathematical ones. In general, “the construction of not only visual, but also conceptual and mathematical models accompanies the process of scientific research from its beginning to the end, making it possible to cover the main features of the processes under study in a single system of visual and abstract images” (70, p. 96).

The historical and the logical method: the first reproduces the development of an object taking into account all the factors acting on it; the second reproduces only what is general and essential in the object in the process of its development.


