Calculation of roots by Newton's method. Numerical methods for solving nonlinear equations

Newton's method (tangent method)

Let the root of the equation f(x) = 0 be separated on the segment [a, b], and let the first and second derivatives f'(x) and f''(x) be continuous and of constant sign for x ∈ [a, b].

Suppose that at some step of root refinement the current approximation x_n to the root has been obtained (chosen). Assume that the next approximation, obtained with the help of a correction h_n, gives the exact value of the root

x = xn + hn. (1.2.3-6)

Treating h_n as a small quantity, we represent f(x_n + h_n) as a Taylor series, limiting ourselves to the linear terms:

f(x_n + h_n) ≈ f(x_n) + h_n·f'(x_n). (1.2.3-7)

Considering that f(x) = f(x_n + h_n) = 0, we get f(x_n) + h_n·f'(x_n) ≈ 0.

Hence h_n ≈ -f(x_n)/f'(x_n). Substituting this value of h_n into (1.2.3-6), instead of the exact value of the root x we obtain the next approximation

x_(n+1) = x_n - f(x_n)/f'(x_n). (1.2.3-8)

Formula (1.2.3-8) allows us to obtain a sequence of approximations x_1, x_2, x_3, ..., which, under certain conditions, converges to the exact value of the root x, that is, x_n → x as n → ∞.

The geometric interpretation of Newton's method is as follows (Fig. 1.2.3-6). Take the right end of the segment, b, as the initial approximation x_0 and construct the tangent at the corresponding point B_0 on the graph of the function y = f(x). The point of intersection of the tangent with the x-axis is taken as the new, more accurate approximation x_1. Repeating this procedure gives a sequence of approximations x_0, x_1, x_2, ..., which tends to the exact value of the root x.

The calculation formula of Newton's method (1.2.3-8) can also be obtained from the geometric construction. In the right triangle x_0 B_0 x_1 the leg

x_0 x_1 = x_0 B_0 / tan α. (1.2.3-9)

Since the point B_0 lies on the graph of the function f(x), and the hypotenuse is formed by the tangent to the graph of f(x) at the point B_0, we have x_0 B_0 = f(x_0) and tan α = f'(x_0), so that

x_1 = x_0 - f(x_0)/f'(x_0). (1.2.3-10)

This formula coincides with (1.2.3-8) for the nth approximation.

From Fig. 1.2.3-6 it is clear that choosing the point a as the initial approximation can lead to the next approximation x_1 falling outside the segment on which the root x is separated. In this case the convergence of the process is not guaranteed. In general, the initial approximation is chosen according to the following rule: take as the initial approximation a point x_0 ∈ [a, b] at which f(x_0)·f''(x_0) > 0, that is, a point where the signs of the function and its second derivative coincide.

The conditions for the convergence of Newton's method are formulated in the following theorem.

If the root of the equation f(x) = 0 is separated on the segment [a, b], and f'(x) and f''(x) are nonzero and retain their signs on [a, b], then, choosing as the initial approximation a point x_0 ∈ [a, b] for which f(x_0)·f''(x_0) > 0, the root of the equation f(x) = 0 can be computed to any degree of accuracy.

The error estimate of Newton's method is given by the expression

|x - x_n| ≤ M_2·(x_n - x_(n-1))^2 / (2·m_1), (1.2.3-11)

where m_1 is the smallest value of |f'(x)| on [a, b],

and M_2 is the largest value of |f''(x)| on [a, b].

The calculation process stops if |x_n - x_(n-1)| ≤ ε,

where ε is the specified accuracy.

In addition, the condition |f(x_n)| ≤ ε can serve as a criterion for achieving the given accuracy when refining the root by Newton's method.

The diagram of the Newton method algorithm is shown in Fig. 1.2.3-7.

The left-hand side of the original equation, f(x), and its derivative f'(x) are implemented in the algorithm as separate program modules.

Fig. 1.2.3-7. Newton method algorithm diagram

Example 1.2.3-3. Refine the roots of the equation x - ln(x+2) = 0 using Newton's method, given that the roots of this equation are separated on the segments x_1 ∈ [-1.9; -1.1] and x_2 ∈ [-0.9; 2].

The first derivative f’(x) = 1 – 1/(x+2) retains its sign on each of the segments:

f'(x) < 0 for x ∈ [-1.9; -1.1],

f'(x) > 0 for x ∈ [-0.9; 2].

The second derivative f''(x) = 1/(x+2)^2 > 0 for any x.

Thus, the convergence conditions are satisfied. Since f''(x) > 0 over the entire range of permissible values, to refine the root x_1 we choose the initial approximation x_0 = -1.9 (since f(-1.9)·f''(-1.9) > 0). We obtain a sequence of approximations:

Continuing the calculations, we obtain the following sequence of the first four approximations: -1.9; -1.8552; -1.8421; -1.8414. The value of the function f(x) at the point x = -1.8414 is f(-1.8414) = -0.00003.

To refine the root x_2 ∈ [-0.9; 2] we choose x_0 = 2 as the initial approximation (f(2)·f''(2) > 0). Starting from x_0 = 2, we obtain the sequence of approximations: 2.0; 1.1817; 1.1462; 1.1461. The value of the function f(x) at the point x = 1.1461 is f(1.1461) = -0.00006.
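A minimal Python sketch of this computation (the helper name newton, the stopping rule |x_new - x| < eps and the iteration limit are my own choices, not taken from the manual) reproduces the approximations above:

```python
import math

def newton(f, df, x0, eps=1e-4, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

f = lambda x: x - math.log(x + 2)      # left-hand side of the equation
df = lambda x: 1 - 1 / (x + 2)         # its derivative

print(newton(f, df, -1.9))   # approx -1.8414 (root on [-1.9; -1.1])
print(newton(f, df, 2.0))    # approx  1.1462 (root on [-0.9; 2])
```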

Newton's method has a high convergence rate, but at each step it requires calculating not only the value of the function, but also its derivative.

Chord method

Geometric interpretation of the chord method is as follows
(Fig.1.2.3-8).

Let us draw a chord through the points A(a, f(a)) and B(b, f(b)). The next approximation x_1 is the abscissa of the point of intersection of the chord with the 0x axis. The equation of the straight line through A and B is

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

Setting y = 0, we find the value x = x_1 (the next approximation):

x_1 = a - f(a)·(b - a) / (f(b) - f(a)).

Repeating the calculation process, we obtain the next approximation to the root, x_2:

x_2 = x_1 - f(x_1)·(b - x_1) / (f(b) - f(x_1)).

In our case (Fig. 1.2.11) f(b)·f''(b) > 0, and the calculation formula of the chord method takes the form

x_(n+1) = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)).

This formula is valid when point b is taken as a fixed point, and point a acts as an initial approximation.

Let us consider the other case (Fig. 1.2.3-9), when f(a)·f''(a) > 0.

The equation of the straight line for this case has the form

(y - f(a)) / (f(x_0) - f(a)) = (x - a) / (x_0 - a).

The next approximation x_1 at y = 0 is

x_1 = x_0 - f(x_0)·(x_0 - a) / (f(x_0) - f(a)).

Then the recurrence formula of the chord method for this case has the form

x_(n+1) = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)).

It should be noted that the fixed point in the chord method is chosen to be the end of the segment for which the condition f(x)·f''(x) > 0 is satisfied.

Thus, if the point a is taken as a fixed point , then x 0 = b acts as an initial approximation, and vice versa.

The sufficient conditions that guarantee computation of the root of the equation f(x) = 0 by the chord formula are the same as for the tangent method (Newton's method), except that a fixed point is chosen instead of the initial approximation. The chord method is a modification of Newton's method: in Newton's method the next approximation is the point of intersection of the tangent with the 0X axis, while in the chord method it is the point of intersection of the chord with the 0X axis, and the approximations converge to the root from different sides.

The error estimate of the chord method is given by the expression

|x - x_n| ≤ (M_1 - m_1)/m_1 · |x_n - x_(n-1)|, (1.2.3-15)

where m_1 is the smallest and M_1 the largest value of |f'(x)| on [a, b].

The condition for ending the iteration process in the chord method is

(M_1 - m_1)/m_1 · |x_n - x_(n-1)| ≤ ε. (1.2.3-16)

If M_1 < 2·m_1, the simpler formula |x_n - x_(n-1)| ≤ ε can be used to estimate the error of the method.

Example 1.2.3-4. Refine the root of the equation e^x - 3x = 0, separated on the segment [0; 1], with an accuracy of 10^-4.

Let us check the convergence conditions: f(0) = 1 > 0 and f(1) = e - 3 < 0, so f(0)·f(1) < 0; the derivatives f'(x) = e^x - 3 < 0 and f''(x) = e^x > 0 retain their signs on [0; 1].

Consequently, a = 0 should be chosen as the fixed point, and x_0 = 1 should be taken as the initial approximation, since f(0) = 1 > 0 and f(0)·f''(0) > 0.
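A minimal Python sketch of this chord iteration with the fixed point a (the routine name is mine, and for brevity the simple stopping rule |x_{n+1} - x_n| < eps is used, which formally requires M_1 < 2·m_1):

```python
import math

def chord(f, a, x0, eps=1e-4, max_iter=100):
    """Chord method with fixed end a:
       x_{n+1} = x_n - f(x_n)*(x_n - a)/(f(x_n) - f(a))."""
    fa = f(a)
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) * (x - a) / (f(x) - fa)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

f = lambda x: math.exp(x) - 3 * x
print(chord(f, a=0.0, x0=1.0))   # approx 0.619, the root of e^x = 3x on [0, 1]
```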

2. Newton's method for solving systems of nonlinear equations.

This method converges much faster than the simple iteration method. Newton's method for the system of equations (1.1) is based on expanding the functions

f_i(x_1, x_2, ..., x_n), i = 1, 2, ..., n, (2.1)

in a Taylor series, with the terms containing derivatives of second and higher orders discarded. This approach replaces the solution of one nonlinear system (1.1) by the solution of a sequence of linear systems.

So, let us solve system (1.1) by Newton's method. In the region D, choose any point x^(0) = (x_1^(0), ..., x_n^(0)) and call it the zero approximation to the exact solution of the original system. Now expand the functions (2.1) in a Taylor series in a neighborhood of the point x^(0). We obtain expansion (2.2).

Since the left-hand sides of (2.2) must vanish according to (1.1), the right-hand sides of (2.2) must also vanish. Therefore, from (2.2) we obtain system (2.3).

All partial derivatives in (2.3) are calculated at the point x^(0).

System (2.3) is a system of linear algebraic equations for the unknown increments Δx_1^(0), ..., Δx_n^(0). It can be solved by Cramer's method if its main determinant is nonzero, and the increments Δx_i^(0) can then be found.

Now we can refine the zero approximation by constructing the first approximation with the coordinates

x_i^(1) = x_i^(0) + Δx_i^(0), i = 1, 2, ..., n. (2.6)

Let us find out whether approximation (2.6) has been obtained with sufficient accuracy. To do this, we check the condition

max_i |x_i^(1) - x_i^(0)| < ε, (2.7)

where ε is a predetermined small positive number (the accuracy with which system (1.1) must be solved). If condition (2.7) is satisfied, we take (2.6) as the approximate solution of system (1.1) and finish the calculations. If condition (2.7) is not satisfied, we proceed as follows. In system (2.3), instead of x_1^(0), ..., x_n^(0) we take the updated values

x_1^(1), ..., x_n^(1), (2.8)

i.e. we set

x_i^(0) = x_i^(1), i = 1, 2, ..., n. (2.9)

After this, system (2.3) becomes a system of linear algebraic equations for the increments Δx_i^(1). Having determined these quantities, we find the next, second approximation x^(2) = (x_1^(2), ..., x_n^(2)) to the solution of system (1.1) by the formulas x_i^(2) = x_i^(1) + Δx_i^(1).

Now we check condition (2.7) again: max_i |x_i^(2) - x_i^(1)| < ε.

If this condition is met, we finish the calculations, taking the second approximation x^(2) as the approximate solution of system (1.1). If it is not met, we continue constructing approximations, substituting the latest values into (2.3). Approximations must be constructed until the condition is satisfied.

The working formulas of Newton's method for solving system (1.1) can be written in the following form.

Compute the sequence of approximations x^(1), x^(2), ..., x^(k), ... (2.11)

by the formulas

x_i^(k+1) = x_i^(k) + Δx_i^(k), i = 1, 2, ..., n, k = 0, 1, 2, ... (2.12)

Here the increments Δx_1^(k), ..., Δx_n^(k) are the solution of the linear system

(∂f_1/∂x_1)·Δx_1^(k) + ... + (∂f_1/∂x_n)·Δx_n^(k) = -f_1(x_1^(k), ..., x_n^(k)),
...
(∂f_n/∂x_1)·Δx_1^(k) + ... + (∂f_n/∂x_n)·Δx_n^(k) = -f_n(x_1^(k), ..., x_n^(k)), (2.13)

in which all partial derivatives are evaluated at the point x^(k).
Let us formulate a calculation algorithm using formulas (2.11)-(2.13).

1. Choose a zero approximation x^(0) belonging to the region D.

2. In the system of linear algebraic equations (2.13) set k = 0.

3. Solve system (2.13) and find the quantities Δx_1^(k), ..., Δx_n^(k).

4. Using formulas (2.12), calculate the components of the next approximation x^(k+1).

5. Check condition (2.7) for x^(k) and x^(k+1) (see the algorithm for calculating the maximum of several quantities).

6. If this condition is met, finish the calculations, choosing x^(k+1) as the approximate solution of system (1.1). If it is not met, go to step 7.

7. Set x_i^(k) = x_i^(k+1) for all i = 1, 2, ..., n.

8. Repeat from step 3, increasing k by 1.

Graphically, this algorithm can be represented as a flowchart.

Algorithm. Calculation of the maximum of several quantities.
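Before turning to the example, here is a compact Python sketch of steps 1-8 above (the routine name, the use of numpy, and the illustrative system are assumptions of mine, not from the text):

```python
import numpy as np

def newton_system(F, J, x0, eps=1e-6, max_iter=50):
    """Newton's method for a system F(x) = 0.
    At each step the linear system J(x_k) * dx = -F(x_k) is solved
    (the analogue of (2.13)), then x_{k+1} = x_k + dx (the analogue of (2.12))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.max(np.abs(dx)) < eps:      # stopping rule of type (2.7)
            return x
    raise RuntimeError("required accuracy not reached")

# Illustrative system (not from the text): x + y = 3, x^2 + y^2 = 9
F = lambda v: np.array([v[0] + v[1] - 3.0, v[0]**2 + v[1]**2 - 9.0])
J = lambda v: np.array([[1.0, 1.0], [2.0 * v[0], 2.0 * v[1]]])
print(newton_system(F, J, x0=[1.0, 4.0]))   # converges to (0, 3)
```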

Example. Let's consider using Newton's method to solve a system of two equations.

Using Newton's method, solve the following system of two nonlinear equations with a given accuracy ε:

(2.14)

Here D is the domain containing the desired solution. Let us choose a zero approximation (x_1^(0), x_2^(0)) belonging to the domain D. Let us construct the system of linear algebraic equations (2.3). It takes the form

(2.15)

Let us denote the components of the first approximation

x_i^(1) = x_i^(0) + Δx_i^(0), i = 1, 2. (2.16)

Let us solve system (2.15) for the unknowns Δx_1^(0), Δx_2^(0), for example by Cramer's method. Cramer's formulas can be written in the form

Δx_1^(0) = Δ_1/Δ, Δx_2^(0) = Δ_2/Δ, (2.17)

where Δ is the main determinant of system (2.15),

Δ = (∂f_1/∂x_1)·(∂f_2/∂x_2) - (∂f_1/∂x_2)·(∂f_2/∂x_1), (2.18)

and Δ_1, Δ_2 are the auxiliary determinants of system (2.15), obtained from Δ by replacing the first and the second column, respectively, with the column of right-hand sides.

We substitute the found values into (2.16) and find the components of the first approximation (x_1^(1), x_2^(1)) to the solution of system (2.14).

Let us check the condition

max(|x_1^(1) - x_1^(0)|, |x_2^(1) - x_2^(0)|) < ε; (2.19)

if this condition is met, we finish the calculations, taking the first approximation as the approximate solution of system (2.14), i.e. x_1 ≈ x_1^(1), x_2 ≈ x_2^(1). If condition (2.19) is not satisfied, we set x_1^(0) = x_1^(1), x_2^(0) = x_2^(1) and construct a new system of linear algebraic equations (2.15). Having solved it, we find the second approximation (x_1^(2), x_2^(2)). We check it against condition (2.19). If the condition is satisfied, we take (x_1^(2), x_2^(2)) as the approximate solution of system (2.14). If it is not satisfied, we set x_1^(0) = x_1^(2), x_2^(0) = x_2^(2) and construct the next system (2.15) to find (x_1^(3), x_2^(3)), and so on.
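A sketch of this two-equation scheme in Python with the increments found by Cramer's rule as in (2.17)-(2.18). Since system (2.14) is not reproduced in the text, an illustrative system x^2 + y^2 = 1, x^3 = y is used instead; all names here are my own:

```python
def newton2_cramer(f1, f2, d11, d12, d21, d22, x0, y0, eps=1e-6, max_iter=50):
    """Newton's method for two equations; dx = D1/D, dy = D2/D (Cramer's rule)."""
    x, y = x0, y0
    for _ in range(max_iter):
        a11, a12 = d11(x, y), d12(x, y)
        a21, a22 = d21(x, y), d22(x, y)
        b1, b2 = -f1(x, y), -f2(x, y)
        D = a11 * a22 - a12 * a21          # main determinant (2.18)
        if D == 0:
            raise ZeroDivisionError("Jacobian is singular")
        dx = (b1 * a22 - a12 * b2) / D     # auxiliary determinant D1 over D
        dy = (a11 * b2 - b1 * a21) / D     # auxiliary determinant D2 over D
        x, y = x + dx, y + dy
        if max(abs(dx), abs(dy)) < eps:    # condition of type (2.19)
            return x, y
    return x, y

f1 = lambda x, y: x**2 + y**2 - 1
f2 = lambda x, y: x**3 - y
print(newton2_cramer(f1, f2,
                     lambda x, y: 2 * x,     lambda x, y: 2 * y,
                     lambda x, y: 3 * x**2,  lambda x, y: -1.0,
                     x0=1.0, y0=1.0))        # approx (0.8260, 0.5636)
```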

Tasks

All tasks require:

    Draw up a program for the numerical implementation of the method according to the proposed algorithm.

    Get calculation results.

    Check your results.

A system of two nonlinear equations is given.

1.
2.

3.
4.

5.
6.

7.
8.

9.
10.

11.
12.

13.
14.

15.
.

Chapter 3. Numerical methods for solving systems of linear algebraic equations (SLAEs).

Goal of the work. Introduction to some approximate methods for solving SLAEs and their numerical implementation on a PC.

Preliminary remarks. All methods for solving SLAEs are usually divided into two large groups. The first group includes methods that are commonly called exact. For any system, these methods make it possible to find the exact values of the unknowns after a finite number of arithmetic operations, each of which is performed exactly.

The second group includes all methods that are not exact. They are called iterative, numerical, or approximate. With such methods the exact solution is obtained as the limit of an infinite process of approximations. An attractive feature of such methods is their self-correcting nature and ease of implementation on a PC.

Let us consider some approximate methods for solving SLAEs and construct algorithms for their numerical implementation. We will seek an approximate solution of the SLAE with an accuracy of ε, where ε is some small positive number.

1. Iteration method.

Let the SLAE be given in the form

x_i = β_i + Σ_(j=1..n) α_ij·x_j, i = 1, 2, ..., n. (1.1)

This system can be written in matrix form

X = β + α·X, (1.2)

where α = (α_ij) is the matrix of coefficients of the unknowns in system (1.1), β is the column of free terms, and X is the column of unknowns of system (1.1). The exact solution of the system is denoted by

X* = (x_1*, x_2*, ..., x_n*)^T. (1.3)

Let us solve system (1.1) by the iteration method. To do this, perform the following steps.

First, we choose the zero approximation

X^(0) = (x_1^(0), x_2^(0), ..., x_n^(0)) (1.4)

to the exact solution (1.3) of system (1.1). The components of the zero approximation can be any numbers, but it is more convenient to take either zeros, x_i^(0) = 0, or the free terms of system (1.1), x_i^(0) = β_i.

Second, we substitute the components of the zero approximation into the right-hand side of system (1.1) and calculate

x_i^(1) = β_i + Σ_(j=1..n) α_ij·x_j^(0), i = 1, 2, ..., n. (1.5)

The quantities on the left in (1.5) are the components of the first approximation X^(1). The actions that produced the first approximation are called an iteration.

Third, we check how close the zero and first approximations are to each other:

|x_i^(1) - x_i^(0)| < ε, i = 1, 2, ..., n. (1.6)

If all conditions (1.6) are met, then for the approximate solution of system (1.1) we choose either X^(0) or X^(1) (it does not matter which, because they differ from each other by no more than ε) and finish the calculations. If at least one of the conditions (1.6) is not met, we move on to the next step.

Fourth, we perform the next iteration: we substitute the components of the first approximation into the right-hand side of system (1.1) and calculate the components of the second approximation X^(2), where x_i^(2) = β_i + Σ_(j=1..n) α_ij·x_j^(1).

Fifth, we check X^(1) and X^(2) against condition (1.6). If all conditions (1.6) are met, then for the approximate solution of system (1.1) we choose either X^(1) or X^(2) (again, it does not matter which, because they differ by no more than ε). Otherwise, we construct the next iteration by substituting the components of the second approximation into the right-hand side of system (1.1).

Iterations must be constructed until two adjacent approximations X^(k) and X^(k+1) differ from each other by no more than ε.

The working formula of the iteration method for solving system (1.1) can be written as

x_i^(k+1) = β_i + Σ_(j=1..n) α_ij·x_j^(k), i = 1, 2, ..., n, k = 0, 1, 2, ... (1.7)

The algorithm for the numerical implementation of formula (1.7) can be as follows.

Sufficient conditions for the convergence of the iteration method for system (1.1) have the form

1. max_i Σ_(j=1..n) |α_ij| < 1 (row-sum condition);

2. max_j Σ_(i=1..n) |α_ij| < 1 (column-sum condition);

3. Σ_(i,j) α_ij^2 < 1 (Euclidean-norm condition).

It suffices that at least one of these conditions is satisfied.
2. Simple iteration method.

Let the system of linear algebraic equations (SLAE) be given in the form

a_11·x_1 + a_12·x_2 + ... + a_1n·x_n = b_1,
...
a_n1·x_1 + a_n2·x_2 + ... + a_nn·x_n = b_n. (2.1)

In order to solve system (2.1) by the simple iteration method, it must first be reduced to the form

x_i = (b_i - Σ_(j≠i) a_ij·x_j) / a_ii, i = 1, 2, ..., n. (2.2)

In system (2.2) the i-th equation is the i-th equation of system (2.1) solved for the i-th unknown x_i (it is assumed that a_ii ≠ 0).

The method for solving system (2.1), which consists of reducing it to system (2.2) followed by solving system (2.2) using the iteration method, is called the simple iteration method for system (2.1).

Thus, the working formulas of the simple iteration method for solving system (2.1) have the form

x_i^(k+1) = (b_i - Σ_(j≠i) a_ij·x_j^(k)) / a_ii, i = 1, 2, ..., n, k = 0, 1, 2, ... (2.3)

Formulas (2.3) can also be written in the residual form

x_i^(k+1) = x_i^(k) + (b_i - Σ_(j=1..n) a_ij·x_j^(k)) / a_ii. (2.4)

The algorithm for the numerical implementation of the simple iteration method for system (2.1) according to formulas (2.4) can be as follows.
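A minimal sketch of such an algorithm in Python, assuming a_ii ≠ 0; the function name, the zero starting vector and the illustrative system are my own choices:

```python
def simple_iteration(A, b, eps=1e-6, max_iter=500):
    """Simple (Jacobi-type) iteration for A x = b: the i-th equation is solved
       for x_i and the previous approximation is substituted on the right."""
    n = len(b)
    x = [0.0] * n                      # zero approximation (all zeros)
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new
        x = x_new
    return x

# Illustrative diagonally dominant system (not from the text)
A = [[10.0, 1.0, 1.0],
     [2.0, 10.0, 1.0],
     [2.0, 2.0, 10.0]]
b = [12.0, 13.0, 14.0]
print(simple_iteration(A, b))   # approx [1, 1, 1]
```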

Graphically, this algorithm can be represented as a flowchart.

Sufficient conditions for the convergence of the simple iteration method for system (2.1) have the form (with α_ij = -a_ij/a_ii for j ≠ i and α_ii = 0):

1. max_i Σ_(j=1..n) |α_ij| < 1;

2. max_j Σ_(i=1..n) |α_ij| < 1;

3. Σ_(i,j) α_ij^2 < 1.

It suffices that at least one of these conditions is satisfied.
3. Stationary Seidel method.

The Seidel method for solving SLAEs differs from the iteration method in that, having found the new approximation for the i-th component, we immediately use it when computing the (i+1)-th, (i+2)-th, ..., n-th components. This gives the Seidel method a higher convergence rate than the iteration method.

Let the SLAE be given in the form

(3.1)

Let X^(0) = (x_1^(0), ..., x_n^(0)) be the zero approximation to the exact solution X* of system (3.1), and suppose the k-th approximation X^(k) = (x_1^(k), ..., x_n^(k)) has been found. Let us define the components of the (k+1)-th approximation by the formulas

x_1^(k+1) = (b_1 - a_12·x_2^(k) - ... - a_1n·x_n^(k)) / a_11,
x_2^(k+1) = (b_2 - a_21·x_1^(k+1) - a_23·x_3^(k) - ... - a_2n·x_n^(k)) / a_22,
...
x_n^(k+1) = (b_n - a_n1·x_1^(k+1) - ... - a_n,n-1·x_(n-1)^(k+1)) / a_nn. (3.2)

Formulas (3.2) can be written in the compact form

x_i^(k+1) = (b_i - Σ_(j=1..i-1) a_ij·x_j^(k+1) - Σ_(j=i+1..n) a_ij·x_j^(k)) / a_ii, i = 1, 2, ..., n, k = 0, 1, 2, ... (3.3)

The algorithm for the numerical implementation of the Seidel method for solving system (3.1) using formulas (3.3) can be as follows.

1. Choose a zero approximation X^(0), for example x_i^(0) = 0, i = 1, 2, ..., n.

2. Set k = 0.

3. Calculate x_i^(k+1) by formulas (3.3) for all i = 1, 2, ..., n.

4. For all i check the conditions |x_i^(k+1) - x_i^(k)| < ε.

5. If all the conditions of step 4 are satisfied, then for the approximate solution of system (3.1) we choose either X^(k) or X^(k+1) and finish the calculations. If at least one condition of step 4 is not met, move on to step 6.

6. Set k = k + 1 and go to step 3.

Graphically, this algorithm can be represented as a flowchart.

A sufficient condition for the convergence of the Seidel method for system (3.1) is diagonal dominance:

|a_ii| > Σ_(j≠i) |a_ij|, i = 1, 2, ..., n.
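A minimal Python sketch of steps 1-6 above (the routine name and the example system are mine; the same illustrative diagonally dominant system as before is reused). The key difference from the simple iteration is that each freshly computed component is used immediately:

```python
def seidel(A, b, eps=1e-6, max_iter=500):
    """Seidel (Gauss-Seidel) iteration: the newly computed components
       x_1..x_{i-1} are used at once when computing x_i, as in (3.2)-(3.3)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(n):
            new_xi = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            max_diff = max(max_diff, abs(new_xi - x[i]))
            x[i] = new_xi              # updated value is used right away
        if max_diff < eps:
            return x
    return x

A = [[10.0, 1.0, 1.0],
     [2.0, 10.0, 1.0],
     [2.0, 2.0, 10.0]]
b = [12.0, 13.0, 14.0]
print(seidel(A, b))   # approx [1, 1, 1]
```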

4. Non-stationary Seidel method.

This method of solving SLAE (3.1) provides an even higher rate of convergence than the stationary Seidel method.

Suppose that for system (3.1) the components of the k-th approximation and of the (k+1)-th approximation have been found in some way.

Let us calculate the correction vector

Δ = (Δ_1, Δ_2, ..., Δ_n), where Δ_i = x_i^(k+1) - x_i^(k), (4.1)

and then the values

|Δ_1|, |Δ_2|, ..., |Δ_n|, (4.2)

which we arrange in descending order.

In the same order we rewrite the equations of system (3.1) and the unknowns in this system.


    Newton's method (also known as the tangent method) is an iterative numerical method for finding a root (zero) of a given function. The method was first proposed by the English physicist, mathematician and astronomer Isaac Newton (1643-1727), after whom it is named.

    The method was described by Isaac Newton in the manuscript De analysi per aequationes numero terminorum infinitas (Latin: On analysis by equations with an infinite number of terms), sent to Barrow in 1669, and in the work De methodis fluxionum et serierum infinitarum (Latin: The method of fluxions and infinite series), also known as Geometria analytica (Latin: Analytical geometry), written in 1671 and included in Newton's collected works. However, the description of the method differed significantly from its current presentation: Newton applied his method exclusively to polynomials. He did not calculate successive approximations x_n, but a sequence of polynomials, and as a result obtained an approximate solution x.

    The method was first published in the treatise Algebra by John Wallis in 1685, at whose request it was briefly described by Newton himself. In 1690, Joseph Raphson published a simplified description in his work Analysis aequationum universalis (lat. General analysis of equations). Raphson viewed Newton's method as purely algebraic and limited its use to polynomials, but he described the method in terms of successive approximations x n instead of the more difficult to understand sequence of polynomials used by Newton.

    Finally, in 1740, Newton's method was described by Thomas Simpson as a first-order iterative method for solving nonlinear equations using derivatives as outlined here. In the same publication, Simpson generalized the method to the case of a system of two equations and noted that Newton's method can also be applied to solve optimization problems by finding the zero of the derivative or gradient.

    In accordance with this method, the task of finding the root of a function is reduced to the task of finding the point of intersection with the x-axis of the tangent plotted to the graph of the function.

    Fig. 1. Graph of the function

    A tangent drawn at any point of the graph of a function is determined by the derivative of the function at that point, which in turn equals the tangent of the angle α between the tangent line and the x-axis. The point of intersection of the tangent with the abscissa axis is found from the relationship in a right triangle: the tangent of an angle in a right triangle is the ratio of the opposite side to the adjacent side. Thus, at each step a tangent to the graph of the function is constructed at the point of the current approximation x_i, and the point of intersection of this tangent with the axis Ox is taken as the next approximation. In accordance with this method, the approximate value of the root at the i-th iteration is calculated by the formula:

    x_(i+1) = x_i - f(x_i)/f'(x_i).

    The slope of the straight line is adjusted at each step in the best possible way, however, you should pay attention to the fact that the algorithm does not take into account the curvature of the graph and, therefore, during the calculation process it remains unknown in which direction the graph may deviate.

    The iterative process terminates when the following condition is fulfilled:

    |x_(i+1) - x_i| < ε,

    where ε is the permissible error in determining the root.

    The method has quadratic convergence. The quadratic rate of convergence means that the number of correct signs in the approximation doubles with each iteration.

    Mathematical justification

    Let a real function f(x) be given, defined and continuous on the segment under consideration. It is required to find a real root of this function.

    The derivation of the formula is based on the method of simple iteration, in which the equation f(x) = 0 is reduced to an equivalent equation x = φ(x), where φ(x) = x + λ(x)·f(x) for an arbitrary function λ(x); φ plays the role of the contraction (iteration) mapping.

    For the best convergence of the method, the condition φ'(x*) = 0 must be satisfied at the root x*; this requirement means that the root of the function f must correspond to an extremum of φ.

    The derivative of the contraction mapping is

    φ'(x) = 1 + λ'(x)·f(x) + λ(x)·f'(x).

    At the root f(x*) = 0, so φ'(x*) = 1 + λ(x*)·f'(x*); requiring φ'(x*) = 0 gives λ(x*) = -1/f'(x*), and the contraction mapping takes the form

    φ(x) = x - f(x)/f'(x).

    Thus, the algorithm for finding a numerical solution of the equation f(x) = 0 reduces to the iterative calculation procedure

    x_(i+1) = x_i - f(x_i)/f'(x_i).

    Algorithm for finding the root of a nonlinear equation using the method

    1. Set the starting approximation x_0 of the root, the permissible calculation error ε (a small positive number), and the initial iteration number (i = 0).

    2. Calculate the next approximate value of the root by the formula x_(i+1) = x_i - f(x_i)/f'(x_i).

    3. Check the approximate value of the root against the specified accuracy:

    If the difference between two successive approximations becomes less than the specified accuracy, then the iterative process ends.

    If the difference between two successive approximations does not reach the required accuracy, then it is necessary to continue the iterative process and go to step 2 of the algorithm under consideration.

    Example of solving equations

    by Newton's method for an equation with one variable

    As an example, consider solving a nonlinear equation by Newton's method for an equation with one variable. The root must be found with the specified accuracy, starting from the given first approximation.

    A variant of the solution of the nonlinear equation in the MathCAD software package is presented in Figure 3.

    The calculation results, namely the dynamics of changes in the approximate value of the root, as well as the calculation errors depending on the iteration step, are presented in graphical form (see Fig. 2).

    Fig.2. Calculation results using Newton's method for an equation with one variable

    To ensure the specified accuracy when searching for an approximate value of the root of the equation in the range, it is necessary to perform 4 iterations. At the last iteration step, the approximate value of the root of the nonlinear equation will be determined by the value: .

    Fig. 3. Program listing in MathCAD

    Modifications of Newton's method for an equation with one variable

    There are several modifications of Newton's method that are aimed at simplifying the computational process.

    Simplified Newton's method

    In accordance with Newton's method, the derivative of the function f(x) has to be calculated at each iteration step, which increases the computational cost. To reduce the cost of computing the derivative at each step, the derivative f'(x_n) at the point x_n in the formula can be replaced by the derivative f'(x_0) at the point x_0. In accordance with this calculation method (also called the modified Newton's method), the approximate value of the root is determined by the formula:

    x_(n+1) = x_n - f(x_n)/f'(x_0).

    Newton's difference method

    In Newton's difference method the derivative is approximated by a finite difference with a small step h, so the approximate value of the root of the function f(x) is determined by the expression:

    x_(n+1) = x_n - f(x_n)·h / (f(x_n + h) - f(x_n)).

    Newton's two-step method

    In accordance with Newton's method, it is necessary to calculate the derivative of the function f(x) at each iteration step, which is not always convenient and is sometimes practically impossible. This method allows the derivative of the function to be replaced by a difference quotient (approximate value):

    f'(x_n) ≈ (f(x_n) - f(x_(n-1))) / (x_n - x_(n-1)).

    As a result, the approximate value of the root of the function f(x) is determined by the following expression:

    x_(n+1) = x_n - f(x_n)·(x_n - x_(n-1)) / (f(x_n) - f(x_(n-1))),

    where x_(n-1) and x_n are the two previous approximations.

    Fig. 5. Newton's two-step method

    The secant method is a two-step method: the new approximation x_(n+1) is determined by the two previous iterations x_n and x_(n-1). The method requires two initial approximations x_0 and x_1. The convergence rate of the method is superlinear (of order about 1.618), lower than the quadratic rate of Newton's method.
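    A minimal Python sketch of the secant iteration described above (the routine name and the sample equation x^2 - 2 = 0 are my own illustrative choices):

```python
def secant(f, x0, x1, eps=1e-6, max_iter=100):
    """Secant method: the derivative in Newton's formula is replaced by the
       difference quotient built on the two previous approximations."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Two starting points are required
print(secant(lambda x: x * x - 2.0, 1.0, 2.0))   # approx 1.414214
```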


    The problem of finding solutions to a system of n nonlinear algebraic or transcendental equations with n unknowns of the form

    f_1(x_1, x_2, ..., x_n) = 0,
    f_2(x_1, x_2, ..., x_n) = 0,
    ..................................
    f_n(x_1, x_2, ..., x_n) = 0,     (6.1)

    widely considered in computing practice. Similar systems of equations can arise, for example, during numerical modeling of nonlinear physical systems at the stage of searching for their stationary states. In a number of cases, systems of the form (6.1) are obtained indirectly, in the process of solving some other computational problem. For example, when trying to minimize a function of several variables, you can look for those points in multidimensional space where the gradient of the function is zero. In this case, it is necessary to solve the system of equations (6.1) with the left sides - projections of the gradient onto the coordinate axes.

    In vector notation, system (6.1) can be written in the more compact form

    F(X) = 0,

    where X = (x_1, x_2, ..., x_n)^T is the vector of unknowns, F = (f_1, f_2, ..., f_n)^T is the vector column of functions, and the symbol (·)^T denotes the transposition operation.

    Finding solutions to a system of nonlinear equations is a much more complex task than solving a single nonlinear equation. Nevertheless, a number of iterative methods for solving nonlinear equations can be extended to systems of nonlinear equations.

    Simple iteration method

    The simple iteration method for systems of nonlinear equations is essentially a generalization of the method of the same name for one equation. It is based on the fact that the system of equations (6.1) is reduced to the form

    x_1 = g_1(x_1, x_2, ..., x_n),
    x_2 = g_2(x_1, x_2, ..., x_n),
    ..................................
    x_n = g_n(x_1, x_2, ..., x_n),

    and iterations are carried out according to the formulas

    x_1^(k+1) = g_1(x_1^(k), x_2^(k), ..., x_n^(k)),
    x_2^(k+1) = g_2(x_1^(k), x_2^(k), ..., x_n^(k)),
    ..................................
    x_n^(k+1) = g_n(x_1^(k), x_2^(k), ..., x_n^(k)).     (6.3)

    Here the superscript indicates the approximation number. The iterative process (6.3) begins with some initial approximation (x_1^(0), x_2^(0), ..., x_n^(0)) and continues until the increments of all arguments after the k-th iteration become less than a given value ε:

    |x_i^(k+1) - x_i^(k)| < ε for i = 1, 2, ..., n.

    Although the simple iteration method leads directly to a solution and is easy to program, it has two significant disadvantages. One is slow convergence. The other is that if the initial approximation is chosen far from the true solution (X_1, X_2, ..., X_n), then convergence of the method is not guaranteed. It is clear that the problem of choosing an initial approximation, which is not simple even for one equation, becomes very difficult for nonlinear systems.
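    A minimal Python sketch of the iteration scheme (6.3); the system x = g(x) used here is purely illustrative and was chosen so that g is a contraction near the solution:

```python
def fixed_point_system(g, x0, eps=1e-6, max_iter=200):
    """Simple iteration x^(k+1) = g(x^(k)) for a system written as x = g(x)."""
    x = list(x0)
    for _ in range(max_iter):
        x_new = g(x)
        if max(abs(x_new[i] - x[i]) for i in range(len(x))) < eps:
            return x_new
        x = x_new
    return x

# Illustrative system rewritten in the form x = g(x):
# x = 0.3 + 0.1*x^2 + 0.2*y,  y = 0.4 + 0.1*x - 0.1*y^2
g = lambda v: [0.3 + 0.1 * v[0]**2 + 0.2 * v[1],
               0.4 + 0.1 * v[0] - 0.1 * v[1]**2]
print(fixed_point_system(g, [0.0, 0.0]))   # converges to about (0.40, 0.42)
```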

    Solve a system of nonlinear equations:

    F_1(x_1, ..., x_n) = 0,
    .......................
    F_n(x_1, ..., x_n) = 0.     (4.1)

    There are no direct methods for solving nonlinear systems of general form. Only in some cases can system (4.1) be solved directly. For example, in the case of two equations, it is sometimes possible to express one unknown in terms of the other and thus reduce the problem to solving one nonlinear equation with respect to one unknown.

    Iterative methods are usually used to solve systems of nonlinear equations.

    Newton's method

    In the case of one equation F(x) = 0, the algorithm of Newton's method was easily obtained by writing the equation of the tangent to the curve y = F(x). Newton's method for systems of equations is based on expanding the functions F_1(x_1, ..., x_n), ..., F_n(x_1, ..., x_n) in a Taylor series, with the terms containing second (and higher order) derivatives discarded.

    Let the approximate values of the unknowns of system (4.1) be equal to a_1, a_2, ..., a_n, respectively. The task is to find the increments (corrections) Δx_1, Δx_2, ..., Δx_n to these values, thanks to which the solution of the system can be written as:

    x_1 = a_1 + Δx_1, x_2 = a_2 + Δx_2, ..., x_n = a_n + Δx_n.

    Let us expand the left-hand sides of equations (4.1) in a Taylor series, keeping only the terms linear in the increments:

    F_1(x_1, ..., x_n) ≈ F_1(a_1, ..., a_n) + (∂F_1/∂x_1)·Δx_1 + ... + (∂F_1/∂x_n)·Δx_n,
    F_2(x_1, ..., x_n) ≈ F_2(a_1, ..., a_n) + (∂F_2/∂x_1)·Δx_1 + ... + (∂F_2/∂x_n)·Δx_n,
    ...................................
    F_n(x_1, ..., x_n) ≈ F_n(a_1, ..., a_n) + (∂F_n/∂x_1)·Δx_1 + ... + (∂F_n/∂x_n)·Δx_n.

    Substituting these expansions into system (4.1), we obtain the following system of linear algebraic equations for the increments:

    (∂F_1/∂x_1)·Δx_1 + (∂F_1/∂x_2)·Δx_2 + ... + (∂F_1/∂x_n)·Δx_n = -F_1,
    (∂F_2/∂x_1)·Δx_1 + (∂F_2/∂x_2)·Δx_2 + ... + (∂F_2/∂x_n)·Δx_n = -F_2,
    ...................................
    (∂F_n/∂x_1)·Δx_1 + (∂F_n/∂x_2)·Δx_2 + ... + (∂F_n/∂x_n)·Δx_n = -F_n.     (4.3)

    The values F_1, ..., F_n and the derivatives ∂F_i/∂x_j are calculated at x_1 = a_1, x_2 = a_2, ..., x_n = a_n.

    The determinant of system (4.3) is the Jacobian:

        | ∂F_1/∂x_1  ...  ∂F_1/∂x_n |
    J = | ∂F_2/∂x_1  ...  ∂F_2/∂x_n |
        | ...................        |
        | ∂F_n/∂x_1  ...  ∂F_n/∂x_n |

    For a unique solution to the system to exist, the Jacobian must be nonzero at each iteration.

    Thus, the iterative process of solving a system of equations by Newton's method consists in determining, at each iteration, the increments Δx_1, Δx_2, ..., Δx_n to the values of the unknowns by solving the system of linear algebraic equations (4.3). The computation stops when all increments become small in absolute value: max_i |Δx_i| < ε.

    In Newton's method, a good choice of the initial approximation is also important for ensuring good convergence. Convergence deteriorates as the number of equations in the system increases.

    As an example, consider using Newton's method to solve a system of two equations:

    (∂F_1/∂x)·Δx + (∂F_1/∂y)·Δy = -F_1,
    (∂F_2/∂x)·Δx + (∂F_2/∂y)·Δy = -F_2,

    x = a + Δx, y = b + Δy.

    The quantities on the right-hand side are calculated at x = a, y = b.

    If the conditions |y - b| < ε and |x - a| < ε are satisfied within the allotted number of iterations M, then the values of x and y are output; otherwise x, y and M are output.



    Goal of the work: study methods for solving nonlinear equations with one unknown and test them in experimental work.

    Job objectives:

    1. Analyze specialized literature and choose the most rational methods for solving nonlinear equations, allowing all high school graduates to deeply study and assimilate this topic.
    2. Develop some aspects of a methodology for solving nonlinear equations using ICT.
    3. Explore methods for solving nonlinear equations:

    ‒ Step method

    ‒ Halving method

    ‒ Newton's method

    Introduction.

    Without mathematical literacy, it is impossible to successfully master methods for solving problems in physics, chemistry, biology and other subjects. The entire complex of natural sciences is built and developed on the basis of mathematical knowledge. For example, the study of a number of topical problems in mathematical physics leads to the need to solve nonlinear equations. The solution of nonlinear equations is necessary in nonlinear optics, plasma physics, superconductivity theory, and low-temperature physics. There is a sufficient amount of literature on this topic, but many textbooks and articles are difficult for a high school student to understand. This paper discusses methods for solving nonlinear equations that can be used to solve applied problems in physics and chemistry. An interesting aspect is the application of information technology to solving equations and problems in mathematics.

    Step method.

    Let it be necessary to solve a nonlinear equation of the form F(x)=0. Let's also assume that we are given a certain search interval. It is required to find the interval [a,b] of length h, containing the first root of the equation, starting from the left border of the search interval.

    Fig. 1. Step method

    There are several ways to solve such a problem. The step method is the simplest of the numerical methods for solving equations, but to achieve high accuracy the step must be significantly reduced, which greatly increases the calculation time. The algorithm for solving equations by this method consists of two stages.

    Stage I. Root separation.

    At this stage, sections are determined, each of which contains only one root of the equation. There are several options for implementing this stage:

    • We substitute the values of X (preferably with some fairly small step) and see where the function changes sign. If the function has changed its sign, there is a root between the previous and the current value of X (and if the function does not change the character of its increase/decrease, we can say that there is only one root in this interval).
    • Graphic method. We build a graph and evaluate on which intervals one root lies.
    • Let's explore the properties of a specific function.

    Stage II. Refinement of roots.

    At this stage, the values of the roots of the equation found earlier are refined. As a rule, iterative methods are used, for example the half-division (dichotomy) method or Newton's method.

    Half division method

    A fast and fairly simple numerical method for solving equations, based on the sequential narrowing of the interval containing the only root of the equation F(x) = 0 until the specified accuracy E is achieved. This method is usually used when solving quadratic equations and equations of higher degrees. However, this method has a significant drawback - if the segment [a,b] contains more than one root, then it will not be able to achieve good results.

    Fig. 2. Dichotomy method

    The algorithm for this method is as follows:

    ‒ Determine a new approximation of the root x in the middle of the segment [a; b]: x = (a + b)/2.

    ‒ Find the values of the function at the points a and x: F(a) and F(x).

    ‒ Check the condition F(a)·F(x) < 0. If it is satisfied, the root lies in [a; x], so set b = x; otherwise the root lies in [x; b], so set a = x.

    ‒ Go to step 1 and divide the segment in half again. Continue the algorithm until the condition |F(x)| < E is satisfied (a sketch of this procedure in code is given below).
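    A minimal Python sketch of the half-division procedure just described; the function name and the sample equation are illustrative, and the stop rule used here is the segment length rather than |F(x)|:

```python
def bisection(F, a, b, eps=1e-6):
    """Half-division (dichotomy): the segment [a, b] with F(a)*F(b) < 0 is
       repeatedly halved, keeping the half on which the sign change occurs."""
    if F(a) * F(b) > 0:
        raise ValueError("F(a) and F(b) must have opposite signs")
    while b - a > eps:
        x = (a + b) / 2.0
        if F(a) * F(x) <= 0:
            b = x            # the root is in [a, x]
        else:
            a = x            # the root is in [x, b]
    return (a + b) / 2.0

# Illustrative use: root of x^2 - 2 = 0 on [0, 2]
print(bisection(lambda x: x * x - 2.0, 0.0, 2.0))   # approx 1.414214
```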

    Newton's method

    The most accurate of the numerical methods considered; it is suitable for solving very complex equations, but is complicated by the need to calculate derivatives at each step. The idea of the method is that if x_n is some approximation to the root of the equation, then the next approximation is defined as the root of the tangent to the function f(x) drawn at the point x_n.

    The equation of the tangent to the function f(x) at the point x_n has the form:

    y = f(x_n) + f'(x_n)·(x - x_n).

    In the tangent equation we put y = 0 and x = x_(n+1).

    Then the algorithm of sequential calculations in Newton's method is as follows:

    x_(n+1) = x_n - f(x_n)/f'(x_n).

    The convergence of the tangent method is quadratic, the order of convergence is 2.

    Thus, the convergence of Newton's tangent method is very fast.

    Without any changes, the method generalizes to the complex case. If the root x_i is a root of multiplicity two or higher, then the order of convergence drops and becomes linear.

    The disadvantages of Newton's method include its local character: it is guaranteed to converge from an arbitrary starting approximation only if a certain condition on the function is satisfied everywhere; otherwise convergence occurs only in a certain neighborhood of the root.

    Newton's method (tangent method) is usually used when the equation f(x) = 0 has a root and the following conditions are met:

    1) the function y = f(x) is defined and continuous on the segment [a; b];

    2) f(a)·f(b) < 0 (the function takes values of different signs at the ends of the segment [a; b]);

    3) the derivatives f'(x) and f''(x) preserve their sign on the interval [a; b] (i.e. the function f(x) either increases or decreases on the segment [a; b], maintaining the direction of convexity).

    The meaning of the method is as follows: on the segment [a; b] a number x_0 is selected at which f(x_0) has the same sign as f''(x_0), i.e. the condition f(x_0)·f''(x_0) > 0 is satisfied. Thus, a point with abscissa x_0 is selected at which the tangent to the curve y = f(x) on the segment [a; b] intersects the axis Ox. As the point x_0 it is usually convenient to select one of the ends of the segment.

    Let's consider this algorithm using a specific example.

    Let us be given the increasing function y = f(x) = x^2 - 2, continuous on the segment (0; 2), with f'(x) = 2x > 0 and f''(x) = 2 > 0.

    In our case the tangent equation has the form: y - y_0 = 2x_0·(x - x_0). As the point x_0 we select the point B_1(b; f(b)) = (2; 2). Draw a tangent to the function y = f(x) at the point B_1 and denote the point of intersection of the tangent and the axis Ox by x_1. We get the equation of the first tangent: y - 2 = 2·2·(x - 2), i.e. y = 4x - 6. The point of intersection of the tangent with the axis Ox: x_1 = 1.5.

    Fig. 3. Construction of the first tangent to the graph of the function f(x)

    Then we find the point of intersection of the function y = f(x) and the perpendicular to the axis Ox drawn through the point x_1, and obtain the point B_2 = (1.5; 0.25). Again draw a tangent to the function y = f(x) at the point B_2 and denote the point of intersection of the tangent and Ox by x_2.

    The equation of the second tangent: y - 0.25 = 2·1.5·(x - 1.5), i.e. y = 3x - 4.25. The point of intersection of the tangent with the axis Ox: x_2 = 4.25/3 ≈ 1.4167.

    Then we find the point of intersection of the function y=f(x) and a perpendicular to the axis Ox through the point x 2, we get the point B 3 and so on.

    Fig. 4. Construction of the second tangent to the graph of the function f(x)

    The first approximation of the root is determined by the formula:

    x_1 = x_0 - f(x_0)/f'(x_0) = 2 - 2/4 = 1.5.

    The second approximation of the root is determined by the formula:

    x_2 = x_1 - f(x_1)/f'(x_1) = 1.5 - 0.25/3 ≈ 1.4167.

    The third approximation of the root is determined by the formula:

    x_3 = x_2 - f(x_2)/f'(x_2) ≈ 1.4142.

    Thus, the i-th approximation of the root is determined by the formula:

    x_(i+1) = x_i - f(x_i)/f'(x_i).

    Calculations are carried out until the decimal places needed in the answer coincide, or until the specified precision e is achieved, i.e. until the inequality |x_i - x_(i-1)| < e is satisfied.

    In our case, let us compare the approximation obtained at the third step with the real answer, √2 ≈ 1.414214. As can be seen, already at the third step the error is less than 0.000002.

    Solving an equation using CAD MathCAD

    For the simplest equations of the form f(x) = 0, the solution in MathCAD is found using the function root.

    root(f(x_1, x_2, ...), x_1, a, b) returns the value of x_1, belonging to the segment [a, b], at which the expression or function f(x) becomes 0. Both arguments of this function must be scalars. The function returns a scalar.

    Fig. 5. Solving a nonlinear equation in MathCAD (root function)

    If an error occurs as a result of applying this function, this may mean that the equation has no roots, or that the roots of the equation are located far from the initial approximation, or that the expression has local maxima and minima between the initial approximation and the roots.

    To establish the cause of the error, it is necessary to examine the graph of the function f(x). It will help to find out the presence of roots of the equation f(x) = 0 and, if they exist, then approximately determine their values. The more accurately the initial approximation of the root is chosen, the faster its exact value will be found.
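    For comparison, a rough Python analogue of this bracketed search (assuming SciPy is available; the sample equation cos x = x is mine, not taken from the figures) could look like this:

```python
import math
from scipy.optimize import brentq

# Analogue of MathCAD's root(f(x), x, a, b): brentq searches the bracketing
# interval [a, b] on which f changes sign.
f = lambda x: math.cos(x) - x
print(brentq(f, 0.0, 1.0))   # approx 0.739085
```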

    If the initial approximation is unknown, then it is advisable to use the function solve. Moreover, if the equation contains several variables, you need to specify after the solve keyword a list of variables with respect to which the equation is solved.

    Fig. 6. Solving a nonlinear equation in MathCAD (solve function)

    Conclusion

    The study examined both mathematical methods and the solution of equations using programming in the CAD system MathCAD. Different methods have their own advantages and disadvantages, and the choice of a particular method depends on the initial conditions of the given equation. Equations that are easily solved by factorization and other techniques known from school do not need more complex methods. Applied mathematics problems that are important for physics and chemistry and require complex computational operations are successfully solved, for example, using programming; Newton's method is well suited to them.

    To clarify the roots, you can use several methods for solving the same equation. It was this research that formed the basis of this work. At the same time, it is easy to see which method is most successful when solving each stage of the equation, and which method is better not to use at this stage.

    The studied material, on the one hand, helps to expand and deepen mathematical knowledge and instill interest in mathematics. On the other hand, it is important to be able to solve real mathematics problems for those who are planning to acquire technical and engineering professions. Therefore, this work is important for further education (for example, in a higher educational institution).


    Keywords: nonlinear equations, applied mathematics, CAD MathCAD, Newton's method, step method, dichotomy method.

    Annotation: The article is devoted to the study of methods for solving nonlinear equations, including with the use of the MathCAD computer-aided design system. The step method, the half-division method and Newton's method are considered, detailed algorithms for applying these methods are given, and a comparative analysis of these methods is carried out.
