Simple iteration method for solving systems of linear equations (SLAE). Numerical solution of systems of linear algebraic equations

The simple iteration method is based on replacing the original equation f(x) = 0 with an equivalent equation x = φ(x). (2.7)

Let an initial approximation to the root x = x0 be known. Substituting it into the right-hand side of equation (2.7), we obtain a new approximation x1 = φ(x0); continuing in the same way we get x2 = φ(x1), etc.:

x(n+1) = φ(x(n)), n = 0, 1, 2, ... . (2.8)


The iterative process does not converge to the root of the equation under all conditions. Let us take a closer look at this process. Figure 2.6 shows a graphical interpretation of one-sided convergent and divergent processes; Figure 2.7 shows two-sided convergent and divergent processes. A divergent process is characterized by rapid growth of the values of the argument and the function and by abnormal termination of the corresponding program.


With a two-sided process, cycling is possible, that is, endless repetition of the same function and argument values. Cycling separates a divergent process from a convergent one.

It is clear from the graphs that, for both one-sided and two-sided processes, convergence to the root is determined by the slope of the curve y = φ(x) near the root: the smaller the slope, the better the convergence. As is known, the tangent of the angle of inclination of a curve equals the derivative of the curve at the given point.

Therefore, the smaller |φ'(x)| is near the root, the faster the process converges.

In order for the iteration process to be convergent, the following inequality must be satisfied in the vicinity of the root:

|φ'(x)| < 1. (2.9)

The transition from equation (2.1) to equation (2.7) can be carried out in various ways, depending on the form of the function f(x). In making this transition, the function φ(x) must be constructed so that the convergence condition (2.9) is satisfied.

Let's consider one of the general algorithms for transition from equation (2.1) to equation (2.7).

Let us multiply both sides of equation (2.1) by an arbitrary constant b and add the unknown x to both sides. The roots of the original equation are unchanged:

x = x + b·f(x). (2.10)

Introducing the notation φ(x) = x + b·f(x), we pass from relation (2.10) to the iteration formula

x(n+1) = x(n) + b·f(x(n)). (2.11)


A suitable choice of the constant b ensures that the convergence condition (2.9) is fulfilled. The criterion for terminating the iterative process is condition (2.2). Figure 2.8 shows a graphical interpretation of the simple iteration method in the described representation (the scales along the X and Y axes are different).

If the function is chosen in the form φ(x) = x − f(x)/f'(x), then its derivative is φ'(x) = f(x)·f''(x)/[f'(x)]², which vanishes at the root. The highest speed of convergence is thus attained at φ'(x) = 0, i.e. for b = −1/f'(x), and the iteration formula (2.11) turns into Newton's formula. Thus, Newton's method has the highest rate of convergence of all iterative processes.
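The scheme (2.11) can be reproduced with a short script. The starting point x0 = 5 and the tolerance below are illustrative assumptions, not taken from the text; b = −0.2 = −1/f'(3) is the near-Newton value used later for f(x) = x² − x − 6.

```python
# Fixed-point iteration x_{n+1} = x_n + b*f(x_n) for f(x) = x^2 - x - 6.
# With b = -1/f'(3) = -0.2 the derivative of phi(x) = x + b*f(x) vanishes
# at the root x = 3, so convergence near that root is very fast.

def iterate(f, b, x0, eps=1e-3, max_iter=100):
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x + b * f(x)
        if abs(x_new - x) < eps:      # stopping condition |x_{n+1} - x_n| < eps
            return x_new, k
        x = x_new
    raise RuntimeError("divergent or too slowly convergent process")

f = lambda x: x**2 - x - 6
root, iters = iterate(f, b=-0.2, x0=5.0)
print(root, iters)   # root approaches 3 after a handful of iterations
```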

The software implementation of the simple iteration method takes the form of a subroutine, procedure Iteras (PROGRAM 2.1).


The entire procedure essentially consists of a single Repeat ... Until loop implementing formula (2.11), taking into account the condition for stopping the iterative process (formula (2.2)).

The procedure has built-in protection against endless looping: the number of passes is counted in the variable Niter. In practical classes one should run the program and observe how the choice of the coefficient b and of the initial approximation affects the root search. When the coefficient b is changed, the character of the iterative process for the function under study changes: it first becomes two-sided and then cycles (Fig. 2.9; the scales along the X and Y axes are different). A still larger value of |b| leads to a divergent process.

Comparison of methods for approximate solution of equations

A comparison of the methods described above for numerical solution of equations was carried out using a program that allows one to observe the process of finding the root in graphical form on the PC screen. The procedures included in this program and implementing the compared methods are given below (PROGRAM 2.1).

Figures 2.3-2.5, 2.8 and 2.9 are copies of the PC screen at the end of the iteration process.

In all cases, the quadratic equation x² − x − 6 = 0 was taken as the equation under study; it has the analytical solutions x1 = −2 and x2 = 3. The error tolerance and the initial approximations were taken the same for all methods. The results of searching for the root x = 3, presented in the figures, are as follows. The dichotomy method converges the slowest, in 22 iterations; the fastest is the simple iteration method with b = −0.2, in 5 iterations. There is no contradiction here with the statement that Newton's method is the fastest.

The derivative of the function under study at the point x = 3 gives b = −1/f'(3) = −0.2; that is, the calculation in this case was in effect carried out by Newton's method with the derivative fixed at the root of the equation. As the coefficient b is moved away from this value, the rate of convergence drops, and the steadily convergent process first begins to cycle and then becomes divergent.

Lecture. Iterative methods for solving systems of linear algebraic equations.

Condition for convergence of the iterative process. Jacobi method. Seidel method

Simple iteration method

Consider a system of linear algebraic equations

Ax = b.

To apply iterative methods, the system must be reduced to an equivalent form

x = Bx + d.

Then an initial approximation x(0) to the solution of the system of equations is chosen, and a sequence of approximations to the root is computed.

For the iterative process to converge, it is sufficient that the condition

||B|| < 1

be satisfied (a matrix norm). The criterion for ending the iterations depends on the iterative method used.

Jacobi method.

The simplest way to bring the system into a form convenient for iteration is as follows:

From the first equation of the system we express the unknown x1, from the second equation x2, and so on.

As a result, we obtain a system of equations with a matrix B in which the main diagonal holds zero elements, while the remaining elements are computed by the formulas

bij = −aij / aii, j ≠ i.

The components of the vector d are computed by the formulas

di = bi / aii.

The calculation formula of the simple iteration method is

x(n+1) = B·x(n) + d,

or, in coordinate notation,

xi(n+1) = (bi − Σ(j≠i) aij·xj(n)) / aii, i = 1, 2, ..., m.

The criterion for finishing iterations in the Jacobi method has the form

||x(n+1) − x(n)|| < ε·(1 − ||B||) / ||B||.

If ||B|| ≤ 0.5, then we can apply the simpler criterion for ending iterations

||x(n+1) − x(n)|| < ε.

Example 1. Solving a system of linear equations using the Jacobi method.

Let the system of equations be given:

It is required to find a solution to the system with accuracy

Let us reduce the system to a form convenient for iteration:

Let us choose an initial approximation, for example, x(0) = d, the vector of the right-hand side.

Then the first iteration looks like this:

The following approximations to the solution are obtained similarly.

Let us find the norm of the matrix B. We use the norm ||B||∞ = max over rows of the sum of |bij|. Since the sum of the moduli of the elements in each row is 0.2, ||B||∞ = 0.2 ≤ 0.5, so the criterion for ending iterations in this problem is ||x(n+1) − x(n)|| < ε.

Let's calculate the norms of vector differences:

Since ||x(4) − x(3)|| < ε, the specified accuracy was achieved at the fourth iteration.

Answer: x1 = 1.102, x2 = 0.991, x3 = 1.011
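A minimal sketch of the Jacobi iteration in Python. The 3×3 system below is illustrative only (chosen so that the row sums of |B| equal 0.2, as in Example 1); it is not the actual system of the example, whose coefficients are not reproduced in the text.

```python
# Jacobi method: x^(n+1) is computed entirely from x^(n).
def jacobi(A, b, eps=1e-3, max_iter=100):
    n = len(A)
    x = [b[i] / A[i][i] for i in range(n)]      # initial guess x^(0) = d
    for it in range(1, max_iter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # simplified stopping criterion, valid here because ||B|| <= 0.5
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new, it
        x = x_new
    raise RuntimeError("no convergence")

A = [[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]]
b = [12.0, 12.0, 12.0]                          # exact solution (1, 1, 1)
x, iters = jacobi(A, b)
print(x, iters)   # x approaches (1, 1, 1) in a few iterations
```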

Seidel method.

The method can be considered a modification of the Jacobi method. The main idea is that, when computing the (n+1)-th approximation to the unknown xi for i > 1, the already found (n+1)-th approximations to x1, x2, ..., x(i−1) are used, rather than the n-th approximations as in the Jacobi method.

The calculation formula of the method in coordinate notation looks like this:

The convergence conditions and the criterion for ending iterations can be taken the same as in the Jacobi method.
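A sketch of the Seidel update, differing from the Jacobi sketch only in that each newly computed component is used immediately within the same sweep. The diagonally dominant test system is again an illustrative assumption, not one from the text.

```python
# Seidel (Gauss-Seidel) method: x_i is overwritten in place, so the sum for
# row i already uses the updated components x_1 ... x_{i-1}.
def seidel(A, b, eps=1e-3, max_iter=100):
    n = len(A)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]         # uses fresh x_0..x_{i-1}
        if max(abs(x[i] - x_old[i]) for i in range(n)) < eps:
            return x, it
    raise RuntimeError("no convergence")

A = [[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]]
b = [12.0, 12.0, 12.0]                          # exact solution (1, 1, 1)
x, it_s = seidel(A, b)
print(x, it_s)   # typically needs fewer sweeps than Jacobi on the same system
```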

Example 2. Solving systems of linear equations using the Seidel method.

Let us consider in parallel the solution of 3 systems of equations:

Let us reduce the systems to a form convenient for iterations:

Note that the convergence condition ||B|| < 1 holds only for the first system. Let us compute the first 3 approximations to the solution in each case.

1st system:

The exact solution is x1 = 1.4, x2 = 0.2. The iterative process converges.

2nd system:

It can be seen that the iteration process diverges.

Exact solution: x1 = 1, x2 = 0.2.

3rd system:

It can be seen that the iteration process has gone in cycles.

Exact solution: x1 = 1, x2 = 2.

Let the matrix of the system of equations A be symmetric and positive definite. Then, for any choice of initial approximation, the Seidel method converges. No additional conditions are imposed on the smallness of the norm of a certain matrix.

Simple iteration method.

If A is a symmetric positive definite matrix, then the system of equations is often reduced to the equivalent form

x = x − τ(Ax − b), where τ is the iteration parameter.

The calculation formula of the simple iteration method in this case has the form

x(n+1) = x(n) − τ(A·x(n) − b),

and the parameter τ > 0 is chosen so as to minimize, as far as possible, the norm ||E − τA||.

Let λmin and λmax be the minimal and maximal eigenvalues of the matrix A. The optimal choice of the parameter is

τ = 2 / (λmin + λmax).

In this case ||E − τA|| takes its minimal value, equal to

ρ = (λmax − λmin) / (λmax + λmin).
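A sketch of this iteration for an assumed 2×2 symmetric positive definite matrix A = [[2, 1], [1, 2]], whose eigenvalues 1 and 3 (computed by hand) give τ = 2/(1 + 3) = 0.5 and a contraction factor ρ = (3 − 1)/(3 + 1) = 0.5 per step.

```python
# Simple (Richardson) iteration x^(n+1) = x^(n) - tau*(A x^(n) - b)
# with the optimal parameter tau = 2/(lambda_min + lambda_max).
def richardson(A, b, tau, eps=1e-6, max_iter=1000):
    n = len(A)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        x = [x[i] - tau * r[i] for i in range(n)]
        if max(abs(ri) for ri in r) < eps:
            return x, it
    raise RuntimeError("no convergence")

A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]                   # exact solution x = (1, 1)
x, iters = richardson(A, b, tau=0.5)
print(x, iters)                  # x approaches (1, 1)
```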

Example 3. Solving a system of linear equations by the simple iteration method (in MathCAD).

Let the system of equations Ax = b be given

To construct the iterative process, we find the eigenvalues of the matrix A:

— the built-in eigenvalue function is used to find them.

Let us calculate the iteration parameter and check the convergence condition.

The convergence condition is satisfied.

Let us take an initial approximation, the vector x0, set the accuracy to 0.001, and find successive approximations using the program below:

Exact solution

Comment. If the program returns the matrix rez, then all the iterations found can be viewed.

The advantage of iterative methods is their applicability to ill-conditioned systems and high-order systems, their self-correction and ease of implementation on a PC. To begin calculations, iterative methods require specifying some initial approximation to the desired solution.

It should be noted that the conditions and the rate of convergence of the iterative process depend significantly on the properties of the matrix A of the system and on the choice of initial approximations.

To apply the iteration method, the original system (2.1) or (2.2) must be reduced to the form

x = Gx + d, (2.26)

after which the iterative process is performed according to the recurrence formulas

x(k+1) = G·x(k) + d, k = 0, 1, 2, ... . (2.26A)

The matrix G and the vector d are obtained by transforming system (2.1).

For (2.26A) to converge it is necessary and sufficient that |λi(G)| < 1, where λi(G) are the eigenvalues of the matrix G. Convergence is also assured if ||G|| < 1, since |λi(G)| ≤ ||G|| for any matrix norm.

The symbol || ... || denotes the norm of a matrix. When determining its value, one most often checks one of two conditions:

||G||∞ = max over i of Σ(j) |gij| < 1 or ||G||1 = max over j of Σ(i) |gij| < 1, (2.27)

where i, j = 1, 2, ..., n. Convergence is also guaranteed if the original matrix A has diagonal dominance, i.e.

|aii| > Σ(j≠i) |aij|, i = 1, 2, ..., n. (2.28)

If (2.27) or (2.28) is satisfied, the iteration method converges for any initial approximation x(0). Most often x(0) is taken to be the zero vector or the unit vector, or the vector d from (2.26) itself.

There are many approaches to transforming the original system (2.2) with the matrix A to ensure the form (2.26) or satisfy the convergence conditions (2.27) and (2.28).

For example, (2.26) can be obtained as follows.

Let A = B + C, det B ≠ 0; then (B + C)x = b ⇒ Bx = −Cx + b ⇒ x = −B⁻¹Cx + B⁻¹b.

Putting −B⁻¹C = G and B⁻¹b = d, we obtain (2.26).

From the convergence conditions (2.27) and (2.28) it is clear that the representation A = B + C cannot be arbitrary.

If the matrix A satisfies conditions (2.28), then the lower triangular part of A can be chosen as the matrix B (with aii ≠ 0).

Another way is to multiply the system by a parameter α and add x to both sides:

Ax = b ⇒ αAx = αb ⇒ x = x + αAx − αb = (E + αA)x − αb.

By choosing the parameter α, we can ensure that ||G|| = ||E + αA|| < 1.

If diagonal dominance (2.28) holds, then the transformation to (2.26) can be performed by solving each i-th equation of system (2.1) for xi, which gives the recurrence formulas

xi(k+1) = (bi − Σ(j≠i) aij·xj(k)) / aii, i = 1, 2, ..., n. (2.28A)

If the matrix A has no diagonal dominance, it must be achieved by linear transformations of the equations that do not violate their equivalence.

As an example, consider the system

(2.29)

As can be seen, equations (1) and (2) lack diagonal dominance, while (3) has it, so we leave the third equation unchanged.

Let us achieve diagonal dominance in equation (1). Multiply (1) by α and (2) by β, add the two equations, and choose α and β in the result so that diagonal dominance is obtained:

(2α + 3β)x1 + (−1.8α + 2β)x2 + (0.4α − 1.1β)x3 = α.

Taking α = β = 5, we get 25x1 + x2 − 3.5x3 = 5.

To transform equation (2) so that it acquires dominance, multiply (1) by γ and (2) by δ and subtract (1) from (2). We get

(3δ − 2γ)x1 + (2δ + 1.8γ)x2 + (−1.1δ − 0.4γ)x3 = −γ.

Putting δ = 2, γ = 3, we get 0·x1 + 9.4x2 − 3.4x3 = −3. As a result, we obtain the system

(2.30)
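The row-combination arithmetic above can be verified mechanically. Rows 1 and 2 of (2.29) are taken here as (2, −1.8, 0.4 | 1) and (3, 2, −1.1 | 0), the coefficients implied by the combination formulas.

```python
# Verifying the two linear combinations used to create diagonal dominance.
row1 = [2.0, -1.8, 0.4, 1.0]     # equation (1): coefficients and right-hand side
row2 = [3.0, 2.0, -1.1, 0.0]     # equation (2)

def combine(c1, r1, c2, r2):
    """Return c1*r1 + c2*r2, element by element."""
    return [c1 * a + c2 * b for a, b in zip(r1, r2)]

# new equation (1): alpha*(1) + beta*(2) with alpha = beta = 5
new1 = combine(5.0, row1, 5.0, row2)
print(new1)                      # close to [25, 1, -3.5, 5]

# new equation (2): delta*(2) - gamma*(1) with delta = 2, gamma = 3
new2 = combine(2.0, row2, -3.0, row1)
print(new2)                      # close to [0, 9.4, -3.4, -3]
```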

This technique can be used to find solutions to a wide class of matrices.


Taking the vector x(0) = (0.2; −0.32; 0)ᵀ as the initial approximation, we solve this system by technique (2.26A):

x(k+1) = G·x(k) + d, k = 0, 1, 2, ... .

The calculation process stops when two successive approximations of the solution vector coincide to within the given accuracy, i.e.

||x(k+1) − x(k)|| ≤ ε.

An iterative procedure of the form (2.26A) is called the simple iteration method.

An estimate of the absolute error of the simple iteration method is

||x* − x(k)|| ≤ (||G|| / (1 − ||G||)) · ||x(k) − x(k−1)||,

where the symbol || ... || denotes a norm.

Example 2.1. Using a simple iteration method with an accuracy of e = 0.001, solve the system of linear equations:

The number of steps that give an answer accurate to e = 0.001 can be determined from the relation

≤ 0.001.

Let us estimate the convergence by formula (2.27). Here ||G||∞ = max(0.56; 0.61; 0.35; 0.61) = 0.61 < 1; ||d||∞ = 2.15. Convergence is therefore ensured.

As the initial approximation we take the vector of free terms, x(0) = (2.15; −0.83; 1.16; 0.44)ᵀ. Substituting these values into (2.26A):

Continuing the calculations, we enter the results into the table:

k    x1        x2        x3        x4
0    2.15     −0.83      1.16      0.44
1    2.9719   −1.0775    1.5093   −0.4326
2    3.3555   −1.0721    1.5075   −0.7317
3    3.5017   −1.0106    1.5015   −0.8111
4    3.5511   −0.9277    1.4944   −0.8321
5    3.5637   −0.9563    1.4834   −0.8298
6    3.5678   −0.9566    1.4890   −0.8332
7    3.5760   −0.9575    1.4889   −0.8356
8    3.5709   −0.9573    1.4890   −0.8362
9    3.5712   −0.9571    1.4889   −0.8364
10   3.5713   −0.9570    1.4890   −0.8364

Convergence to within thousandths occurs already at the 10th step.

Answer: x1 ≈ 3.571; x2 ≈ −0.957; x3 ≈ 1.489; x4 ≈ −0.836.

This solution can also be obtained using formulas (2.28A).

Example 2.2. To illustrate the algorithm based on formulas (2.28A), consider the solution of the system (two iterations only):

4x1 − x2 − x3 = 2,
x1 + 5x2 − 2x3 = 4, (2.31)
x1 + x2 + 4x3 = 6.

Let us transform the system to the form (2.26) according to (2.28A):

x1 = (2 + x2 + x3) / 4,
x2 = (4 − x1 + 2x3) / 5, (2.32)
x3 = (6 − x1 − x2) / 4.

Let us take the initial approximation x(0) = (0; 0; 0)ᵀ. Then for k = 0 we obviously obtain x(1) = (0.5; 0.8; 1.5)ᵀ. Substituting these values into (2.32), i.e., for k = 1, we get x(2) = (1.075; 1.3; 1.175)ᵀ.

The error ε2 = max|xi(2) − xi(1)| = max(0.575; 0.5; 0.325) = 0.575.
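The two iterations of Example 2.2 can be checked with a short script; the coefficients used here are those of system (2.31) as they appear in the block-by-block trace that follows.

```python
# Two Jacobi-style iterations by formulas (2.28A) for system (2.31):
#   4*x1 -   x2 -   x3 = 2
#     x1 + 5*x2 - 2*x3 = 4
#     x1 +   x2 + 4*x3 = 6
A = [[4.0, -1.0, -1.0], [1.0, 5.0, -2.0], [1.0, 1.0, 4.0]]
b = [2.0, 4.0, 6.0]

def step(A, b, x):
    """One sweep of x_i = (b_i - sum_{j != i} a_ij * x_j) / a_ii."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

x0 = [0.0, 0.0, 0.0]
x1 = step(A, b, x0)
print(x1)                                    # [0.5, 0.8, 1.5]
x2 = step(A, b, x1)
print(x2)                                    # close to [1.075, 1.3, 1.175]
print(max(abs(a - c) for a, c in zip(x2, x1)))   # error estimate, about 0.575
```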

A block diagram of the algorithm for finding a solution of the SLAE by the method of simple iterations according to the working formulas (2.28A) is shown in Fig. 2.4.

A special feature of the block diagram is the presence of the following blocks:

– block 13 – its purpose is discussed below;

– block 21 – displaying results on the screen;

– block 22 – check (indicator) of convergence.

Let us analyze the proposed scheme using the example of system (2.31) (n = 3, ω = 1, ε = 0.001):

A = (4 −1 −1; 1 5 −2; 1 1 4), b = (2; 4; 6)ᵀ.

Block 1. Enter the initial data A, b, ω, ε, n: n = 3, ω = 1, ε = 0.001.

Cycle I. Set the initial values of the vectors x0i and xi (i = 1, 2, 3).

Block 5. Reset the counter for the number of iterations.

Block 6. Reset the current error counter to zero.

In cycle II, the row numbers of the matrix A and of the vector b are stepped through.

Cycle II: i = 1: s = b1 = 2 (block 8).

Go to the nested cycle III, block 9, the counter of column numbers of the matrix A: j = 1.

Block 10: j = i, so we return to block 9 and increase j by one: j = 2.

In block 10, j ≠ i (2 ≠ 1), so we move to block 11.

Block 11: s = 2 − (−1)·x02 = 2 − (−1)·0 = 2; we go to block 9, where j is increased by one: j = 3.

In block 10 the condition j ≠ i is satisfied, so we move to block 11.

Block 11: s = 2 − (−1)·x03 = 2 − (−1)·0 = 2, after which we go to block 9, where j is increased by one (j = 4). Since j now exceeds n (n = 3), we finish the cycle and move to block 12.

Block 12: s = s / a11 = 2 / 4 = 0.5.

Block 13: ω = 1; s = s + 0 = 0.5.

Block 14: d = |xi − s| = |1 − 0.5| = 0.5.

Block 15: xi = 0.5 (i = 1).

Block 16. Checking the condition d > de: 0.5 > 0, so we go to block 17, where we assign de = 0.5 and return via the reference "A" to the next step of cycle II, block 7, where i is increased by one.

Cycle II: i = 2: s = b2 = 4 (block 8).

j = 1.

In block 10, j ≠ i (1 ≠ 2), so we move to block 11.

Block 11: s = 4 − 1·0 = 4; we go to block 9, where j is increased by one: j = 2.

In block 10 the condition is not satisfied (j = i), so we return to block 9, where j is increased by one: j = 3. By analogy we move on to block 11.

Block 11: s = 4 − (−2)·0 = 4, after which we finish cycle III and move to block 12.

Block 12: s = s / a22 = 4 / 5 = 0.8.

Block 13: ω = 1; s = s + 0 = 0.8.

Block 14: d = |1 − 0.8| = 0.2.

Block 15: xi = 0.8 (i = 2).

Block 16. Checking the condition d > de: 0.2 < 0.5; therefore we return via the reference "A" to the next step of cycle II, block 7.

Cycle II: i = 3: s = b3 = 6 (block 8).

Go to the nested cycle III, block 9: j = 1.

Block 11: s = 6 − 1·0 = 6; go to block 9: j = 2.

Via block 10 we move to block 11.

Block 11: s = 6 − 1·0 = 6. We finish cycle III and move to block 12.

Block 12: s = s / a33 = 6 / 4 = 1.5.

Block 13: s = 1.5.

Block 14: d = |1 − 1.5| = 0.5.

Block 15: xi = 1.5 (i = 3).

Via block 16 (including the references "A" and "C") we leave cycle II and move to block 18.

Block 18. Increase the number of iterations: it = it + 1 = 0 + 1 = 1.

In blocks 19 and 20 of cycle IV we replace the initial values x0i with the obtained values xi (i = 1, 2, 3).

Block 21. We print the intermediate values of the current iteration, in this case x(1) = (0.5; 0.8; 1.5)ᵀ, it = 1, de = 0.5.

We go to cycle II, block 7, and perform the considered calculations with the new initial values x0i (i = 1, 2, 3).

After this we obtain x1 = 1.075, x2 = 1.3, x3 = 1.175.

Hence, the Seidel method converges here.

According to formulas (2.33):

k   x1       x2       x3
0   0.19     0.97     −0.14
1   0.2207   1.0703   −0.1915
2   0.2354   1.0988   −0.2118
3   0.2424   1.1088   −0.2196
4   0.2454   1.1124   −0.2226
5   0.2467   1.1135   −0.2237
6   0.2472   1.1143   −0.2241
7   0.2474   1.1145   −0.2243
8   0.2475   1.1145   −0.2243

Answer: x1 = 0.248; x2 = 1.115; x3 = −0.224.

Comment. If both the simple iteration method and the Seidel method converge for the same system, then the Seidel method is preferable. However, in practice the convergence regions of these methods may differ: the simple iteration method may converge while the Seidel method diverges, and vice versa. For both methods, if ||G|| is close to unity, the convergence rate is very low.

To speed up convergence an artificial technique is used, the so-called relaxation method. Its essence is that the value xi(k) obtained by the iteration method is recalculated by the formula

xi(k) := ω·xi(k) + (1 − ω)·xi(k−1),

where ω is usually varied in the range 0 < ω ≤ 2 with some step (h = 0.1 or 0.2). The parameter ω is selected so that convergence of the method is achieved in the minimum number of iterations.

Relaxation is the gradual weakening of some state of a body after the factors that caused that state cease to act (a term from engineering physics).
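A sketch of one relaxed Seidel sweep. The blending formula xi := ω·x_new + (1 − ω)·x_old is the standard relaxation form assumed here; the test system (that of Example 2.2, exact solution (1, 1, 1)) and the value ω = 0.9 are illustrative choices.

```python
# One relaxed Seidel sweep: the plain Seidel value x_gs is blended with the
# previous value of the component, x_i := w*x_gs + (1 - w)*x_i, 0 < w <= 2.
def relaxed_sweep(A, b, x, w):
    n = len(A)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x_gs = (b[i] - s) / A[i][i]          # ordinary Seidel value
        x[i] = w * x_gs + (1.0 - w) * x[i]   # relaxed update
    return x

A = [[4.0, -1.0, -1.0], [1.0, 5.0, -2.0], [1.0, 1.0, 4.0]]
b = [2.0, 4.0, 6.0]                          # exact solution (1, 1, 1)
x = [0.0, 0.0, 0.0]
for _ in range(30):
    x = relaxed_sweep(A, b, x, w=0.9)
print(x)                                     # approaches (1, 1, 1)
```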

Example 2.4. Let us consider the result of the fifth iteration recalculated by the relaxation formula. We take ω = 1.5:

As you can see, the result of almost the seventh iteration was obtained.

The simple iteration method, also called the method of successive approximations, is a mathematical algorithm for finding the value of an unknown quantity by gradually refining it. The essence of the method is that, as the name implies, successive values are expressed one from another starting from an initial approximation, giving progressively more refined results. The method is used to find the value of a variable for a given function, as well as to solve systems of equations, both linear and nonlinear.

Let us consider how this method is implemented when solving SLAEs. The simple iteration method has the following algorithm:

1. Check the fulfillment of the convergence condition for the original matrix. Convergence theorem: if the original matrix of the system has diagonal dominance (i.e., in each row the element on the main diagonal is greater in absolute value than the sum of the absolute values of the remaining elements of that row), then the simple iteration method converges.

2. The matrix of the original system does not always have diagonal dominance. In such cases the system can be transformed: equations that satisfy the convergence condition are left untouched, and linear combinations are formed with those that do not, i.e., equations are multiplied, added, and subtracted from one another until the desired result is obtained.

If the resulting system has inconvenient coefficients on the main diagonal, then terms of the form ci·xi are added to both sides of such an equation, with signs coinciding with the signs of the diagonal elements.

3. Transformation of the resulting system to normal form:

x̄ = β̄ + α·x̄

This can be done in many ways, for example as follows: from the first equation express x1 in terms of the other unknowns, from the second x2, from the third x3, and so on. In doing so we use the formulas

αij = −(aij / aii), j ≠ i,

βi = bi / aii.

One should again make sure that the resulting normal-form system satisfies the convergence condition:

Σ(j=1..n) |αij| < 1, for i = 1, 2, ..., n.

4. We begin to apply, in fact, the method of successive approximations itself.

x(0) is the initial approximation; we express x(1) through it, then express x(2) through x(1). The general formula in matrix form is

x(n) = β̄ + α·x(n−1).

We calculate until the required accuracy is achieved:

max|xi(k) − xi(k+1)| ≤ ε.

So, let us put the simple iteration method into practice. Example: solve the SLAE

4.5x1 − 1.7x2 + 3.5x3 = 2
3.1x1 + 2.3x2 − 1.1x3 = 1
1.8x1 + 2.5x2 + 4.7x3 = 4

with accuracy ε = 10⁻³.

Let's see whether the diagonal elements predominate in modulus.

We see that only the third equation satisfies the convergence condition. Let us transform the first and second. Add the second to the first equation:

7.6x1 + 0.6x2 + 2.4x3 = 3

Subtract the first from the third:

−2.7x1 + 4.2x2 + 1.2x3 = 2

We converted the original system into an equivalent one:

7.6x1+0.6x2+2.4x3=3
-2.7x1+4.2x2+1.2x3=2
1.8x1+2.5x2+4.7x3=4

Now let's bring the system to its normal form:

x1=0.3947-0.0789x2-0.3158x3
x2=0.4762+0.6429x1-0.2857x3
x3= 0.8511-0.383x1-0.5319x2

We check the convergence of the iterative process:

0.0789 + 0.3158 = 0.3947 < 1
0.6429 + 0.2857 = 0.9286 < 1
0.383 + 0.5319 = 0.9149 < 1, i.e., the condition is met.

Initial guess: x(0) = (0.3947; 0.4762; 0.8511)ᵀ.

Substituting these values into the normal-form equations, we obtain:

x(1) = (0.08835; 0.486793; 0.446639)ᵀ.

Substituting the new values, we get:

x(2) = (0.215243; 0.405396; 0.558336)ᵀ.

We continue the calculations until we arrive at values satisfying the given condition; at the seventh iteration x(7) ≈ (0.188; 0.441; 0.544)ᵀ.

Let us check the correctness of the results obtained:

4.5·0.188 − 1.7·0.441 + 3.5·0.544 = 2.0003
3.1·0.188 + 2.3·0.441 − 1.1·0.544 = 0.9987
1.8·0.188 + 2.5·0.441 + 4.7·0.544 = 3.9977

Substituting the found values into the original equations reproduces the right-hand sides to within the specified accuracy.

As we can see, the simple iteration method gives fairly accurate results, but solving this system by hand required many iterations and cumbersome calculations.
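The hand computation above can be reproduced programmatically using the normal-form coefficients derived in the text. The stopping rule below is the max-norm criterion stated earlier; the exact iteration count at which it fires depends on that choice.

```python
# Successive approximations x^(n) = beta + alpha * x^(n-1) for the worked
# example, with the coefficients derived in the text.
beta = [0.3947, 0.4762, 0.8511]
alpha = [[0.0,    -0.0789, -0.3158],
         [0.6429,  0.0,    -0.2857],
         [-0.383, -0.5319,  0.0]]

def step(x):
    return [beta[i] + sum(alpha[i][j] * x[j] for j in range(3)) for i in range(3)]

x = beta[:]                      # initial guess x^(0)
for k in range(1, 100):
    x_new = step(x)
    if max(abs(a - c) for a, c in zip(x_new, x)) <= 1e-3:
        break
    x = x_new
print(k, x_new)   # the iterate approaches (0.188; 0.441; 0.544)
```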


