Conditional extrema and the Lagrange multiplier method

Brief theory

The method of Lagrange multipliers is a classical method for solving problems of mathematical programming (in particular, convex programming). Unfortunately, its practical application may run into significant computational difficulties, which narrows its area of use. We consider the Lagrange method here mainly because it is an apparatus actively used to justify various modern numerical methods that are widely used in practice. As for the Lagrange function and the Lagrange multipliers, they play an independent and extremely important role in theory and applications, and not only in mathematical programming.

Consider the classical optimization problem: find an extremum of the function

Z = f(x1, ..., xn)   (1)

subject to the constraints

gi(x1, ..., xn) = bi,  i = 1, ..., m.   (2)

Among the constraints of this problem there are no inequalities, no conditions of non-negativity or discreteness of the variables, and the functions f and gi are continuous and have partial derivatives of at least the second order.

The classical approach to solving the problem gives a system of equations (necessary conditions) that must be satisfied by a point providing the function f with a local extremum on the set of points satisfying constraints (2) (for a convex programming problem, the point found will simultaneously be a global extremum point).

Let us assume that function (1) has a local conditional extremum at the point x* and that the rank of the matrix of constraint gradients (∂gi/∂xj) is equal to m. Then the necessary conditions can be written in the form

∂L/∂xj = 0,  j = 1, ..., n;   ∂L/∂λi = 0,  i = 1, ..., m,   (3)

where

L(x1, ..., xn, λ1, ..., λm) = f(x1, ..., xn) + Σ λi (bi − gi(x1, ..., xn))   (4)

is the Lagrange function and λ1, ..., λm are the Lagrange multipliers.

There are also sufficient conditions under which a solution of the system of equations (3) determines an extremum point of the function f. This question is resolved by studying the sign of the second differential of the Lagrange function. However, the sufficient conditions are mainly of theoretical interest.

You can specify the following procedure for solving problem (1), (2) using the Lagrange multiplier method:

1) compose the Lagrange function (4);

2) find the partial derivatives of the Lagrange function with respect to all variables x1, ..., xn, λ1, ..., λm and set them equal to zero. This yields system (3), consisting of n + m equations. Solve the resulting system (if this turns out to be possible!) and thus find all stationary points of the Lagrange function;

3) from the stationary points taken without their λ-coordinates, select the points at which the function f has conditional local extrema in the presence of constraints (2). This selection is made, for example, using the sufficient conditions for a local extremum. Often the study is simplified if specific conditions of the problem are used.
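This procedure can be followed symbolically. Below is a minimal sketch in Python with sympy on a made-up illustrative problem (maximize f(x, y) = x·y subject to x + y = 10); the function, the constraint and all names are assumptions chosen only for illustration, not data from the text.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

f = x * y                      # objective function (illustrative)
g = x + y - 10                 # constraint written as g = 0 (illustrative)

# 1) compose the Lagrange function L = f + lam*(b - g(x)), here f - lam*(x + y - 10)
L = f + lam * (-g)

# 2) set all partial derivatives to zero and solve the resulting system
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)

# 3) inspect the stationary points (here a single point x = y = 5)
for pt in stationary:
    print(pt, ' f =', f.subs(pt))
```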

Example of problem solution

The task

The company produces two types of goods in given quantities. The cost function is determined by a given relation, and the market prices of these goods are given.

Determine at what output volumes the maximum profit is achieved and what it is equal to, given that total costs must not exceed the specified limit.


The solution of the problem

Economic and mathematical model of the problem

Profit function:

Cost restrictions:

We get the following economic and mathematical model:

In addition, according to the meaning of the task

Lagrange multiplier method

Let's compose the Lagrange function:

We find the 1st order partial derivatives:

Let's create and solve a system of equations:

Since then

Maximum profit:

Answer

Thus, it is necessary to produce the found number of units of goods of the 1st type and of goods of the 2nd type. In this case, the profit will be maximal and will amount to 270.

The essence of the Lagrange method is to reduce the conditional extremum problem to an unconditional extremum problem. Consider the nonlinear programming model:

Z = f(x1, ..., xn) → extr,   (5.1)

gi(x1, ..., xn) = bi,  i = 1, ..., m,   (5.2)

where f and gi are known functions and bi are given coefficients.

Note that in this formulation of the problem the constraints are specified by equalities and there is no non-negativity condition on the variables. In addition, we assume that the functions f and gi are continuous together with their first partial derivatives.

Let us transform conditions (5.2) so that zero stands on the left- or right-hand sides of the equalities:

bi − gi(x1, ..., xn) = 0,  i = 1, ..., m.   (5.3)

Let us compose the Lagrange function. It includes the objective function (5.1) and the left-hand sides of constraints (5.3), taken with the coefficients λ1, ..., λm respectively:

L(x1, ..., xn, λ1, ..., λm) = f(x1, ..., xn) + Σ λi (bi − gi(x1, ..., xn)).   (5.4)

There are as many Lagrange multipliers as there are constraints in the problem.

The extremum points of function (5.4) are the extremum points of the original problem and vice versa: the optimal plan of problem (5.1)-(5.2) is the global extremum point of the Lagrange function.

Indeed, let a solution x* of problem (5.1)-(5.2) be found; then conditions (5.3) are satisfied. Substituting the plan x* into function (5.4), we verify the validity of the equality

L(x*, λ) = f(x*).   (5.5)

Thus, in order to find the optimal plan of the original problem, it is necessary to investigate the Lagrange function for an extremum. The function has extreme values at the points where its partial derivatives are equal to zero. Such points are called stationary.

Let us determine the partial derivatives of function (5.4):

∂L/∂xj = ∂f/∂xj − Σ λi ∂gi/∂xj,  j = 1, ..., n,

∂L/∂λi = bi − gi(x1, ..., xn),  i = 1, ..., m.

Setting the derivatives equal to zero, we obtain a system of m + n equations in m + n unknowns:

∂f/∂xj − Σ λi ∂gi/∂xj = 0,  j = 1, ..., n,   (5.6)

bi − gi(x1, ..., xn) = 0,  i = 1, ..., m.   (5.7)

In the general case, system (5.6)-(5.7) will have several solutions, which include all the maxima and minima of the Lagrange function. In order to single out the global maximum or minimum, the values of the objective function are calculated at all points found. The largest of these values will be the global maximum and the smallest the global minimum. In some cases it is possible to use the sufficient conditions for a strict extremum of continuous functions (see Problem 5.2 below):

let the function f(x) be continuous and twice differentiable in some neighborhood of its stationary point x0 (i.e., f′(x0) = 0). Then:

a) if f″(x0) < 0,   (5.8)

then x0 is a point of strict maximum of the function f(x);

b) if f″(x0) > 0,   (5.9)

then x0 is a point of strict minimum of the function f(x);

c) if f″(x0) = 0,

then the question of the presence of an extremum remains open.

In addition, some solutions of system (5.6)-(5.7) may be negative, which is inconsistent with the economic meaning of the variables. In this case, the possibility of replacing the negative values with zero should be analyzed.

Economic meaning of the Lagrange multipliers. The optimal value of the multiplier λj* shows by how much the value of the criterion Z will change when resource j is increased or decreased by one unit, since λj* = ∂Z*/∂bj.
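This shadow-price interpretation is easy to check numerically. The sketch below uses a made-up problem (maximize f = −x² − y² + 10x + 8y subject to x + y = b) and verifies with sympy that the optimal multiplier coincides with the derivative of the optimal value with respect to the resource b; all names and data are illustrative assumptions.

```python
import sympy as sp

x, y, lam, b = sp.symbols('x y lam b', real=True)

# illustrative problem: maximize f subject to x + y = b
f = -x**2 - y**2 + 10*x + 8*y
g = b - (x + y)                      # constraint in the form b - g(x) = 0
L = f + lam * g

sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
Z_opt = sp.simplify(f.subs(sol))     # optimal value as a function of the resource b

# the optimal multiplier equals the derivative of the optimal value with respect to b
print(sp.simplify(sol[lam] - sp.diff(Z_opt, b)))   # expected output: 0
```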

The Lagrange method can also be applied when the constraints are inequalities. In this case, finding the extremum of the function f(x1, ..., xn) under the conditions

gi(x1, ..., xn) ≤ bi,  i = 1, ..., m,

is performed in several stages:

1. Determine the stationary points of the objective function, for which the system of equations

∂f/∂xj = 0,  j = 1, ..., n,

is solved.

2. From the stationary points, select those whose coordinates satisfy the constraints of the problem.

3. Using the Lagrange method, solve the problem with equality constraints (5.1)-(5.2).

4. The points found in the second and third stages are examined for the global maximum: the values ​​of the objective function at these points are compared - the largest value corresponds to the optimal plan.
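Here is a minimal sketch of these stages in Python with sympy, on a made-up problem (maximize f = −(x − 2)² − (y − 3)² subject to x + y ≤ 4); the data and names are purely illustrative.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# illustrative problem: maximize f subject to x + y <= 4
f = -(x - 2)**2 - (y - 3)**2
g = x + y - 4

candidates = []

# stage 1: stationary points of the objective function itself
for pt in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True):
    # stage 2: keep only the points that satisfy the inequality constraint
    if g.subs(pt) <= 0:
        candidates.append(pt)

# stage 3: solve the problem with the constraint active (x + y = 4) by the Lagrange method
L = f + lam * (-g)
for pt in sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True):
    candidates.append({x: pt[x], y: pt[y]})

# stage 4: compare the objective values over all candidate points
best = max(candidates, key=lambda pt: f.subs(pt))
print(best, f.subs(best))
```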

Problem 5.1 Let us solve problem 1.3, considered in the first section, using the Lagrange method. The optimal distribution of water resources is described by a mathematical model

.

Let's compose the Lagrange function

Let's find the unconditional maximum of this function. To do this, we calculate the partial derivatives and equate them to zero

,

Thus, we obtained a system of linear equations of the form

The solution to the system of equations represents an optimal plan for the distribution of water resources across irrigated areas

, .

The quantities xi are measured in hundreds of thousands of cubic meters, and the multiplier λ is the amount of net income per one hundred thousand cubic meters of irrigation water. Therefore, the marginal price of 1 m³ of irrigation water is equal to λ/100 000 den. units.

The maximum additional net income from irrigation will be

-160·12.26² + 7600·12.26 - 130·8.55² + 5900·8.55 - 10·16.19² + 4000·16.19 = 172391.02 (den. units)
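The printed expression suggests net-income functions of the form b·x − a·x² with coefficient pairs (160, 7600), (130, 5900) and (10, 4000). Assuming additionally that the total water resource is x1 + x2 + x3 = 37 (the sum of the printed answers), the sketch below reproduces an allocation close to the one given; both the functional form and the total of 37 are inferred assumptions, not data taken from the original statement of Problem 1.3.

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', positive=True)

# inferred income functions b*x - a*x**2 and an inferred total resource of 37
Z = (7600*x1 - 160*x1**2) + (5900*x2 - 130*x2**2) + (4000*x3 - 10*x3**2)
constraint = 37 - (x1 + x2 + x3)

L = Z + lam * constraint
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, x3, lam)], [x1, x2, x3, lam], dict=True)[0]

print({v: round(float(sol[v]), 2) for v in (x1, x2, x3)})   # roughly x1=12.26, x2=8.55, x3=16.19
print('lambda =', round(float(sol[lam]), 2))                # marginal value of water
print('Z =', round(float(Z.subs(sol)), 2))
```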

Problem 5.2 Solve a nonlinear programming problem

Let us represent the constraint in the form:

.

Let's compose the Lagrange function and determine its partial derivatives

.

To determine the stationary points of the Lagrange function, its partial derivatives should be set equal to zero. As a result, we obtain a system of equations

.

From the first equation it follows

. (5.10)

Let us substitute this expression into the second equation:

,

which implies two solutions for :

and
. (5.11)

Substituting these solutions into the third equation, we get

,
.

Let us calculate the values of the Lagrange multiplier and of the unknown using expressions (5.10)-(5.11):

,
,
,
.

Thus, we got two extremum points:

;
.

In order to find out whether these points are maximum or minimum points, we use the sufficient conditions for a strict extremum (5.8)-(5.9). First, we substitute the expression for one of the unknowns, obtained from the constraint of the mathematical model, into the objective function:

,

. (5.12)

To check the conditions for a strict extremum, we should determine the sign of the second derivative of function (5.12) at the two extremum points found
and
.

,
;

.

Thus, the point (·)
is the minimum point of the original problem (
), and the point (·)
is the maximum point.

Optimal plan:

,
,
,

.

In today's lesson we will learn to find conditional (or, as they are also called, relative) extrema of functions of several variables; first of all, of course, we will talk about conditional extrema of functions of two and three variables, which occur in the vast majority of thematic problems.

What do you need to know and be able to do at this point? Despite the fact that this article is "on the outskirts" of the topic, not much is required to master the material successfully. By now you should be familiar with the basic surfaces of space, be able to find partial derivatives (at least at an average level) and, as merciless logic dictates, understand unconditional extrema. But even if your level of preparation is low, do not rush to leave: all the missing knowledge and skills can really be picked up along the way, without any hours of torment.

First, let's analyze the concept itself and at the same time do a quick review of the most common surfaces. So, what is a conditional extremum? ...The logic here is no less merciless =) A conditional extremum of a function is an extremum in the usual sense of the word, which is reached when a certain condition (or conditions) is satisfied.

Imagine an arbitrary "oblique" plane in a Cartesian coordinate system. There is no trace of any extremum here. But that is only for the time being. Consider an elliptic cylinder, for simplicity an endless round "pipe" parallel to the axis. Obviously, this "pipe" will "cut" an ellipse out of our plane, as a result of which there will be a maximum at its upper point and a minimum at its lower point. In other words, the function defining the plane reaches its extrema given that it is intersected by this circular cylinder. Exactly "given that"! A different elliptic cylinder intersecting this plane will almost certainly produce different minimum and maximum values.

If this is not very clear, the situation can be modeled realistically (though in reverse order): take an axe, go outside and cut down... no, Greenpeace will not forgive you afterwards; it is better to cut a drainpipe with a grinder =). The conditional minimum and the conditional maximum will depend on at what height and at what (non-horizontal) angle the cut is made.

The time has come to dress the reasoning in mathematical attire. Consider an elliptic paraboloid, which has an absolute minimum at a certain point. Now let us find the extremum subject to the given condition. This condition defines a plane parallel to the axis, which means it "cuts" a parabola out of the paraboloid. The vertex of this parabola will be the conditional minimum. Moreover, the plane does not pass through the origin of coordinates, so the point of the absolute minimum is irrelevant here. No picture provided? Then follow the links right away! You will need them many more times.

Question: how do we find this conditional extremum? The simplest way is to use the constraint equation (which is also called the condition, or coupling equation) to express one variable, for example, through the other, and substitute it into the function:

The result is a function of one variable that defines a parabola, the vertex of which is “calculated” with your eyes closed. Let's find critical points:

- critical point.

Next, the easiest thing is to use the second sufficient condition for an extremum:

In particular, this means that the function reaches a minimum at the point. It can be calculated directly, but we will take a more academic route. Let's find the "y" coordinate:
,

write down the conditional minimum point, make sure that it really lies in the plane (satisfies the coupling equation):

and calculate the conditional minimum of the function:
given that (this "given that" clause is required!!!).
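A minimal sketch of this substitution approach in Python with sympy, using made-up data (the paraboloid f = x² + y² and the condition y = 1); both the function and the condition are assumptions chosen only to illustrate the technique:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# illustrative data: an elliptic paraboloid and a plane-type condition y = 1
f = x**2 + y**2
condition = sp.Eq(y, 1)

# express y from the condition and substitute it into the function
y_expr = sp.solve(condition, y)[0]
phi = f.subs(y, y_expr)                    # a function of one variable

# critical point of the reduced function and the second-derivative test
x0 = sp.solve(sp.diff(phi, x), x)[0]
kind = 'min' if sp.diff(phi, x, 2).subs(x, x0) > 0 else 'max'

print('conditional', kind, 'at', (x0, y_expr.subs(x, x0)), 'value', phi.subs(x, x0))
```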

The considered method can without a shadow of a doubt be used in practice, but it has a number of disadvantages. Firstly, the geometry of the problem is far from always clear, and secondly, it is often inconvenient to express "x" or "y" from the constraint equation (if it is possible to express anything at all). And now we will consider a universal method for finding conditional extrema, called the Lagrange multiplier method:

Example 1

Find the conditional extrema of the function for the specified constraint equation on its arguments.

Do you recognize the surfaces? ;-) ...I'm glad to see your happy faces =)

By the way, from the formulation of this problem it becomes clear why the condition is called a constraint equation: the arguments of the function are tied together by an additional condition, that is, the extremum points found must necessarily belong to the circular cylinder.

Solution: at the first step you need to represent the constraint equation in the required form and compose the Lagrange function:
, where λ is the so-called Lagrange multiplier.

In our case and:

The algorithm for finding conditional extrema is very similar to the scheme for finding "ordinary" extrema. Let us find the partial derivatives of the Lagrange function, treating "lambda" as a constant:

Let's compose and solve the following system:

The tangle is unraveled as standard:
from the first equation we express ;
from the second equation we express .

Let us substitute these expressions into the constraint equation and simplify:

As a result, we obtain two stationary points. If , then:

if , then:

It is easy to see that the coordinates of both points satisfy the equation . Scrupulous people can also perform a full check: for this you need to substitute into the first and second equations of the system, and then do the same with the set . Everything must “come together”.

Let us check the fulfillment of the sufficient extremum condition for the found stationary points. I will discuss three approaches to solving this issue:

1) The first method is a geometric justification.

Let's calculate the values ​​of the function at stationary points:

Next, we write down a phrase with approximately the following content: the cross-section of the plane by the circular cylinder is an ellipse, at whose upper vertex the maximum is reached and at whose lower vertex the minimum. Thus, the larger value is the conditional maximum and the smaller value is the conditional minimum.

If possible, it is better to use this method: it is simple, and such a solution is accepted by instructors (a big plus is that you have shown an understanding of the geometric meaning of the problem). However, as already noted, it is not always clear what intersects what and where, and then analytical verification comes to the rescue:

2) The second method is based on the sign of the second-order differential. If it turns out that the second differential is negative at a stationary point, then the function reaches a maximum there, and if it is positive, then a minimum.

Let's find second order partial derivatives:

and create this differential:

When , this means that the function reaches its maximum at point ;
at , which means the function reaches a minimum at the point .

The method considered is very good, but it has the disadvantage that in some cases it is almost impossible to determine the sign of the 2nd differential (this usually happens if the corresponding terms have different signs). And then the "heavy artillery" comes to the rescue:

3) Let us differentiate the constraint equation with respect to "x" and "y":

and compose the following symmetrical matrix:

If at a stationary point the determinant is of one sign, then the function reaches there (attention!) a minimum, and if it is of the opposite sign, then a maximum.

Let's write the matrix for the value and the corresponding point:

Let us calculate its determinant:
, thus, the function has a maximum at point .

Likewise for value and point:

Thus, the function has a minimum at point .

Answer: given that :
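For reference, here is a sketch of this determinant check in Python with sympy on a made-up problem (f = x + 3y subject to x² + y² = 10, chosen purely for illustration). It uses the standard bordered-Hessian convention for two variables and one constraint: a positive determinant at a stationary point indicates a conditional maximum and a negative one a conditional minimum (this convention is an assumption and may be stated with a different sign rule elsewhere).

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# illustrative problem: f = x + 3y subject to x**2 + y**2 = 10
f = x + 3*y
g = x**2 + y**2 - 10
L = f + lam * (-g)

points = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)

# bordered matrix: first row/column from the constraint gradient, the rest from d2L
H = sp.Matrix([[0,              sp.diff(g, x),     sp.diff(g, y)],
               [sp.diff(g, x),  sp.diff(L, x, x),  sp.diff(L, x, y)],
               [sp.diff(g, y),  sp.diff(L, x, y),  sp.diff(L, y, y)]])

for pt in points:
    d = H.subs(pt).det()
    kind = 'maximum' if d > 0 else 'minimum'     # convention for 2 variables, 1 constraint
    print(pt, 'det =', d, '->', kind, ', f =', f.subs(pt))
```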

After a thorough analysis of the material, I simply cannot help but offer you a couple of typical tasks for self-test:

Example 2

Find the conditional extremum of the function if its arguments are related by the equation

Example 3

Find the extrema of the function given the condition

And again, I strongly recommend understanding the geometric essence of the tasks, especially in the last example, where analytical verification of the sufficient condition is no picnic. Recall what second-order curve the constraint equation defines and what surface this curve generates in space. Analyze along which curve the cylinder intersects the plane, and where on this curve there will be a minimum and where a maximum.

Solutions and answers at the end of the lesson.

The problem under consideration finds wide application in various fields, in particular (we will not go far) in geometry. Let us solve everyone's favorite problem about the half-liter bottle (see Example 7 of the article on extremum problems) in a second way:

Example 4

What should the dimensions of a cylindrical tin can be so that the least amount of material is used to make it, if the volume of the can is half a liter?

Solution: let the base radius and the height be variable, and compose the function of the total surface area of the can:
(area of the two lids + lateral surface area)
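The text is cut off at this point; a sketch of how this example could be finished by the Lagrange method is given below (V = 500 cm³ is taken from the "half-liter" wording; everything else in the code is an assumption for illustration).

```python
import sympy as sp

r, h, lam = sp.symbols('r h lam', positive=True)

V = 500  # cm^3, from the "half-liter" wording; units are illustrative
S = 2*sp.pi*r**2 + 2*sp.pi*r*h           # two lids + lateral surface
g = sp.pi*r**2*h - V                     # volume constraint g = 0

L = S + lam*(-g)
eqs = [sp.diff(L, v) for v in (r, h, lam)]
sol = sp.solve(eqs, [r, h, lam], dict=True)[0]

print('r =', sol[r], '≈', round(float(sol[r]), 2))
print('h =', sol[h], '≈', round(float(sol[h]), 2))
print('h / r =', sp.simplify(sol[h] / sol[r]))    # the optimal can has h = 2r
```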


Finding the polynomial means determining the values of its coefficients. To do this, using the interpolation conditions, one can form a system of linear algebraic equations (SLAE).

The determinant of this SLAE is usually called the Vandermonde determinant. The Vandermonde determinant is non-zero whenever xi ≠ xj for i ≠ j, that is, when there are no coinciding nodes in the interpolation table. Therefore, it can be argued that the SLAE has a solution and that this solution is unique. Having solved the SLAE and determined the unknown coefficients, one can construct the interpolation polynomial.

A polynomial satisfying the interpolation conditions is constructed, when interpolating by the Lagrange method, as a linear combination of polynomials of degree n:

These polynomials are usually called basis polynomials. For the Lagrange polynomial to satisfy the interpolation conditions, its basis polynomials must satisfy the following conditions:

lj(xi) = 1 for i = j, and lj(xi) = 0 for i ≠ j.

If these conditions are satisfied, then for any node we have:

Moreover, the fulfillment of the specified conditions for the basis polynomials means that the interpolation conditions are also satisfied.

Let us determine the type of basis polynomials based on the restrictions imposed on them.

1st condition: at .

2nd condition: .

Finally, for the basis polynomial we can write:

Then, substituting the resulting expression for the basis polynomials into the original polynomial, we obtain the final form of the Lagrange polynomial:

A particular case of the Lagrange polynomial, for n = 1, is usually called the linear interpolation formula:

.

The Lagrange polynomial taken for n = 2 is usually called the quadratic interpolation formula:

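A compact Python sketch of the construction described above; the node data are made up for illustration (samples of y = x²):

```python
def lagrange_basis(xs, j, x):
    """Basis polynomial l_j(x): equals 1 at xs[j] and 0 at every other node."""
    result = 1.0
    for i, xi in enumerate(xs):
        if i != j:
            result *= (x - xi) / (xs[j] - xi)
    return result

def lagrange_interpolate(xs, ys, x):
    """Value of the Lagrange polynomial built on the nodes (xs, ys) at the point x."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

# illustrative nodes: samples of y = x**2
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_interpolate(xs, ys, 1.5))   # 2.25, exact for a quadratic
```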

  • Consider a linear inhomogeneous differential equation of the first order:
    y′ + p(x)·y = q(x).    (1)
    There are three standard ways to solve this equation; here we consider the method of variation of the constant (the Lagrange method).

    Let's consider solving a first-order linear differential equation using the Lagrange method.

    Method of variation of constant (Lagrange)

    In the method of variation of the constant, we solve the equation in two steps. At the first stage, we simplify the original equation and solve the homogeneous equation. At the second stage, we replace the constant of integration obtained at the first stage with a function of x. Then we look for the general solution of the original equation.

    Consider the equation:
    (1)

    Step 1 Solution of the homogeneous equation

    We are looking for a solution to the homogeneous equation:

    This is a separable equation

    Separate the variables: multiply by dx and divide by y:

    Let's integrate:

    The integral over y is a table integral:

    Then

    Let's exponentiate:

    Let us replace the constant e^C with C and remove the modulus sign, which amounts to multiplication by a constant ±1, which we include in C:

    Step 2 Replace the constant C with the function

    Now let's replace the constant C with a function of x:
    C → u (x)
    That is, we will look for a solution to the original equation (1) as:
    (2)
    Finding the derivative.

    By the rule for differentiating a composite function:
    .
    According to the product differentiation rule:

    .
    Substitute into the original equation (1) :
    (1) ;

    .
    Two terms cancel:
    ;
    .
    Let's integrate:
    .
    Substitute in (2) :
    .
    As a result, we obtain a general solution to a first-order linear differential equation:
    .
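To illustrate the two steps on a concrete case, here is a sketch in Python with sympy for the made-up equation y′ + y = x (i.e., p(x) = 1 and q(x) = x, chosen only as an example):

```python
import sympy as sp

x, C = sp.symbols('x C')
u = sp.Function('u')

# illustrative equation (1): y' + y = x,  i.e. p(x) = 1, q(x) = x
p, q = sp.Integer(1), x

# step 1: general solution of the homogeneous equation y' + p*y = 0
y_h = C * sp.exp(-sp.integrate(p, x))

# step 2: replace the constant C by a function u(x) and substitute into (1)
y_trial = y_h.subs(C, u(x))
ode = sp.Eq(sp.diff(y_trial, x) + p * y_trial, q)
u_sol = sp.dsolve(ode, u(x))                  # remaining equation: u'(x)*exp(-x) = x
y_general = y_trial.subs(u(x), u_sol.rhs)

print(sp.simplify(y_general))                 # x - 1 + C1*exp(-x)
```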

    An example of solving a first-order linear differential equation by the Lagrange method

    Solve the equation

    Solution

    We solve the homogeneous equation:

    We separate the variables:

    Multiply by:

    Let's integrate:

    Tabular integrals:

    Let's exponentiate:

    Let us replace the constant e^C with C and remove the modulus signs:

    From here:

    Let's replace the constant C with a function of x:
    C → u (x)

    Finding the derivative:
    .
    Substitute into the original equation:
    ;
    ;
    Or:
    ;
    .
    Let's integrate:
    ;
    Solution of the equation:
    .
