The uniform distribution. Converting a uniformly distributed random variable into a normally distributed one

As an example of a continuous random variable, consider a random variable X uniformly distributed over the interval (a, b). We say that the random variable X is uniformly distributed on the interval (a, b) if its distribution density is constant on this interval and zero outside it:

f(x) = c for a < x < b, and f(x) = 0 otherwise.
From the normalization condition we determine the value of the constant c. The area under the distribution density curve must equal one, and in our case it is the area of a rectangle with base (b − a) and height c (Fig. 1).

Fig. 1. Uniform distribution density
From here we find the value of the constant c:

c·(b − a) = 1, hence c = 1/(b − a).

So, the density of a uniformly distributed random variable is

f(x) = 1/(b − a) for a < x < b, and f(x) = 0 otherwise.
Let us now find the distribution function by the formula F(x) = ∫ f(t) dt (taken from −∞ to x):
1) for x ≤ a: F(x) = 0;
2) for a < x ≤ b: F(x) = 0 + (x − a)/(b − a);
3) for x > b: F(x) = 0 + 1 + 0 = 1.
Thus,

F(x) = 0 for x ≤ a; F(x) = (x − a)/(b − a) for a < x ≤ b; F(x) = 1 for x > b.

The distribution function is continuous and does not decrease (Fig. 2).

Fig. 2. Distribution function of a uniformly distributed random variable

Let us find the mathematical expectation of a uniformly distributed random variable according to the formula M(X) = ∫ x·f(x) dx:

M(X) = (a + b)/2.

The variance of the uniform distribution is calculated by the formula D(X) = ∫ x²·f(x) dx − M(X)² and is equal to

D(X) = (b − a)²/12.
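As a sketch, these formulas can be coded directly; the function names below are illustrative, not from any library:

```javascript
// Uniform distribution on (a, b): density, distribution function,
// mathematical expectation and variance, coded from the formulas above.
function uniformPdf(x, a, b) { return (x > a && x < b) ? 1 / (b - a) : 0; }
function uniformCdf(x, a, b) {
    if (x <= a) return 0;
    if (x >= b) return 1;
    return (x - a) / (b - a);
}
function uniformMean(a, b) { return (a + b) / 2; }
function uniformVariance(a, b) { return (b - a) * (b - a) / 12; }

var pdfVal = uniformPdf(3, 2, 8);  // 1/(8-2) ≈ 0.167
var cdfVal = uniformCdf(5, 2, 8);  // (5-2)/(8-2) = 0.5
var m = uniformMean(2, 8);         // (2+8)/2 = 5
var d = uniformVariance(2, 8);     // 36/12 = 3
```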

Example #1. The scale division value of a measuring instrument is 0.2. Instrument readings are rounded to the nearest whole division. Find the probability that an error made during the reading is: a) less than 0.04; b) greater than 0.02.
Solution. The rounding error is a random variable X uniformly distributed over the interval between adjacent whole divisions. Consider the interval (0; 0.2) as such a division (see figure). Rounding can be carried out both towards the left boundary, 0, and towards the right one, 0.2, so an error less than or equal to 0.04 can occur near either boundary, which must be taken into account when calculating the probability:

P(0 < X < 0.04) = 0.04/0.2 = 0.2, P(0.16 < X < 0.2) = 0.04/0.2 = 0.2,

P = 0.2 + 0.2 = 0.4

For the second case, the error can exceed 0.02 at either division boundary as well; that is, X must be greater than 0.02 and less than 0.18.

Then the probability of such an error is:

P(0.02 < X < 0.18) = (0.18 − 0.02)/0.2 = 0.8.
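The whole calculation can be checked with a few lines of code (a sketch following the solution above):

```javascript
// Rounding-error sketch: the error X is uniform on (0, 0.2), the interval
// between adjacent divisions.
var h = 0.2;                       // scale division value
// a) error less than 0.04: possible near either boundary, 0 or 0.2
var pSmall = 0.04 / h + 0.04 / h;  // 0.2 + 0.2 = 0.4
// b) error greater than 0.02: X must fall in (0.02, 0.18)
var pLarge = (0.18 - 0.02) / h;    // 0.8
```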

Example #2. It was assumed that the stability of the economic situation in the country (the absence of wars, natural disasters, etc.) over the past 50 years can be judged by the nature of the distribution of the population by age: in a calm situation, it should be uniform. As a result of the study, the following data were obtained for one of the countries.

Is there any reason to believe that there was an unstable situation in the country?

We carry out the solution using a hypothesis-testing calculator. The table for calculating the indicators:

Group   | Midpoint x_i | Frequency f_i | x_i·f_i | Cumulative frequency S | |x_i − x̄|·f_i | (x_i − x̄)²·f_i | Relative frequency f_i/n
0 - 10  | 5            | 0.14          | 0.7     | 0.14                   | 5.32           | 202.16          | 0.14
10 - 20 | 15           | 0.09          | 1.35    | 0.23                   | 2.52           | 70.56           | 0.09
20 - 30 | 25           | 0.1           | 2.5     | 0.33                   | 1.8            | 32.4            | 0.1
30 - 40 | 35           | 0.08          | 2.8     | 0.41                   | 0.64           | 5.12            | 0.08
40 - 50 | 45           | 0.16          | 7.2     | 0.57                   | 0.32           | 0.64            | 0.16
50 - 60 | 55           | 0.13          | 7.15    | 0.7                    | 1.56           | 18.72           | 0.13
60 - 70 | 65           | 0.12          | 7.8     | 0.82                   | 2.64           | 58.08           | 0.12
70 - 80 | 75           | 0.18          | 13.5    | 1                      | 5.76           | 184.32          | 0.18
Total   |              | 1             | 43      |                        | 20.56          | 572             | 1
Indicators of the distribution center.
The weighted average:

x̄ = Σ(x_i·f_i) / Σf_i = 43/1 = 43.
Variation indicators.
Absolute indicators of variation.
The range of variation is the difference between the maximum and minimum values of the attribute of the primary series:

R = X_max − X_min = 70 − 0 = 70.
The variance characterizes the measure of spread around the mean value (a measure of dispersion, i.e. deviation from the mean):

D = Σ(x_i − x̄)²·f_i / Σf_i = 572.

The standard deviation:

σ = √D = √572 ≈ 23.92.

Each value of the series differs from the average value 43 on average by 23.92.
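These grouped-data indicators can be recomputed programmatically (a sketch; the arrays simply restate the table above):

```javascript
// Recomputing the grouped-data indicators from the table of Example #2.
var mids = [5, 15, 25, 35, 45, 55, 65, 75];             // interval midpoints
var freqs = [0.14, 0.09, 0.1, 0.08, 0.16, 0.13, 0.12, 0.18]; // relative frequencies

var mean = 0;
for (var i = 0; i < mids.length; i++) mean += mids[i] * freqs[i];   // 43

var variance = 0;
for (var j = 0; j < mids.length; j++) {
    variance += (mids[j] - mean) * (mids[j] - mean) * freqs[j];     // 572
}
var stdDev = Math.sqrt(variance);                                   // ≈ 23.92
```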
Testing hypotheses about the type of distribution.
Testing the hypothesis that the general population is distributed uniformly.
In order to test the hypothesis that X is distributed uniformly, i.e. according to the law f(x) = 1/(b − a) on the interval (a, b), it is necessary to:
1. Estimate the parameters a and b, the ends of the interval in which the possible values of X were observed, by the formulas (the * sign denotes parameter estimates):

a* = x̄ − √3·σ, b* = x̄ + √3·σ.
2. Find the probability density of the estimated distribution f(x) = 1/(b * - a *)
3. Find theoretical frequencies:
n1 = n·P1 = n·1/(b* − a*)·(x1 − a*)
n2 = n3 = … = n_{s−1} = n·1/(b* − a*)·(x_i − x_{i−1})
ns = n·1/(b* − a*)·(b* − x_{s−1})
4. Compare the empirical and theoretical frequencies using the Pearson test, taking the number of degrees of freedom k = s − 3, where s is the number of initial sampling intervals; if, however, small frequencies (and therefore the intervals themselves) were combined, then s is the number of intervals remaining after the combination.

Solution:
1. Find the estimates of the parameters a* and b* of the uniform distribution using the formulas:

a* = x̄ − √3·σ = 43 − √3·23.92 ≈ 1.58, b* = x̄ + √3·σ = 43 + √3·23.92 ≈ 84.42.
2. Find the density of the assumed uniform distribution:
f(x) = 1/(b * - a *) = 1/(84.42 - 1.58) = 0.0121
3. Find the theoretical frequencies:
n1 = n·f(x)·(x1 − a*) = 1·0.0121·(10 − 1.58) = 0.1
n8 = n·f(x)·(b* − x7) = 1·0.0121·(84.42 − 70) = 0.17
The remaining n_i are equal to:
n_i = n·f(x)·(x_i − x_{i−1}) = 1·0.0121·10 = 0.12

i     | n_i  | n*_i | n_i − n*_i | (n_i − n*_i)² | (n_i − n*_i)²/n*_i
1     | 0.14 | 0.1  | 0.0383     | 0.00147       | 0.0144
2     | 0.09 | 0.12 | -0.0307    | 0.000943      | 0.00781
3     | 0.1  | 0.12 | -0.0207    | 0.000429      | 0.00355
4     | 0.08 | 0.12 | -0.0407    | 0.00166       | 0.0137
5     | 0.16 | 0.12 | 0.0393     | 0.00154       | 0.0128
6     | 0.13 | 0.12 | 0.0093     | 8.6E-5        | 0.000716
7     | 0.12 | 0.12 | -0.000701  | 0             | 4.0E-6
Total | 1    |      |            |               | 0.0532
8     | 0.18 | 0.17 | 0.00589    | 3.5E-5        | 0.000199
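Steps 2 and 3 and the Pearson statistic can be recomputed as a sketch (here n = 1 because relative frequencies are used, and the variable names are illustrative):

```javascript
// Sketch of steps 2-4: theoretical frequencies and the Pearson statistic.
var observed = [0.14, 0.09, 0.1, 0.08, 0.16, 0.13, 0.12, 0.18];
var edges = [0, 10, 20, 30, 40, 50, 60, 70, 80];
var aStar = 1.58, bStar = 84.42;       // parameter estimates from step 1
var density = 1 / (bStar - aStar);     // f(x) = 1/(b* - a*) ≈ 0.0121

var chi2 = 0;
for (var i = 0; i < observed.length; i++) {
    // the outermost intervals are extended to a* and b*, as in step 3
    var lo = (i === 0) ? aStar : edges[i];
    var hi = (i === observed.length - 1) ? bStar : edges[i + 1];
    var expected = density * (hi - lo);   // theoretical frequency n*_i
    chi2 += Math.pow(observed[i] - expected, 2) / expected;
}
// chi2 ≈ 0.053, matching the table total
```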
Let us define the boundary of the critical region. Since the Pearson statistic measures the difference between the empirical and theoretical distributions, the larger its observed value K_obs, the stronger the argument against the main hypothesis, so the critical region for this statistic is always right-sided. Here K_obs = 0.0532 is far below the critical value of the chi-square distribution for k = 8 − 3 = 5 degrees of freedom (11.07 at the 0.05 significance level), so the hypothesis of uniformity is not rejected: the data give no grounds to believe that the situation in the country was unstable.

A random variable X is said to be uniformly distributed on a segment [a, b] if its probability density is constant on this segment and 0 outside it (i.e., the random variable X is concentrated on the segment [a, b], on which it has a constant density). According to this definition, the density of a random variable X uniformly distributed on the segment [a, b] looks like:

where c is some number. It is easy to find using the probability density property for a r.v. concentrated on the segment [a, b]:

∫ f(x) dx over [a, b] = c·(b − a) = 1.

Hence it follows that c = 1/(b − a). Therefore, the density of a random variable X uniformly distributed on the segment [a, b] looks like:

f(x) = 1/(b − a) for x ∈ [a, b], and f(x) = 0 otherwise.

One can judge the uniformity of the distribution of a continuous r.v. X from the following consideration: a continuous random variable has a uniform distribution on the segment [a, b] if it takes values only from this segment, and no number from this segment has an advantage over the other numbers of this segment in its chances of being the value of this random variable.

Random variables with a uniform distribution include such quantities as the waiting time for transport at a stop (with a constant interval of movement, the waiting time is uniformly distributed over this interval), the error of rounding a number to an integer (uniformly distributed on [−0.5, 0.5]), and others.

The distribution function F(x) of a random variable X uniformly distributed on [a, b] is found from the known probability density f(x) using the formula connecting them:

F(x) = ∫ f(t) dt, taken from −∞ to x.

As a result of the corresponding calculations, we obtain the following formula for the distribution function F(x) of a random variable X uniformly distributed on the segment [a, b]:

F(x) = 0 for x < a; F(x) = (x − a)/(b − a) for a ≤ x ≤ b; F(x) = 1 for x > b.

The figures show the graphs of the probability density f(x) and the distribution function F(x) of a random variable X uniformly distributed on the segment [a, b]:


The mathematical expectation, variance, standard deviation, mode and median of a random variable X uniformly distributed on the segment [a, b] are calculated from the probability density f(x) in the usual way (and quite simply, because of the simple form of f(x)). The result is the following formulas:

M(X) = (a + b)/2, D(X) = (b − a)²/12, σ(X) = (b − a)/(2√3), and the median equals (a + b)/2,

while the mode d(X) is any number of the interval [a, b].

Let us find the probability that a random variable X uniformly distributed on the segment [a, b] falls into an interval (α, β) lying entirely inside [a, b]. Taking into account the known form of the distribution function, we obtain:

P(α < X < β) = F(β) − F(α) = (β − α)/(b − a).

Thus, the probability that a random variable X uniformly distributed on [a, b] falls into an interval (α, β) lying entirely inside [a, b] does not depend on the position of this interval, but only on its length, and is directly proportional to that length.

Example. The bus interval is 10 minutes. What is the probability that a passenger arriving at a bus stop will wait less than 3 minutes for the bus? What is the average waiting time for a bus?
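A minimal sketch of the solution, assuming the waiting time is uniform on (0, 10) minutes:

```javascript
// Bus example sketch: the waiting time T is uniform on (0, 10) minutes.
var a = 0, b = 10;
var pLess3 = (3 - a) / (b - a);   // P(T < 3) = 3/10 = 0.3
var meanWait = (a + b) / 2;       // M(T) = 5 minutes
```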

Normal distribution

This distribution is most often encountered in practice and plays an exceptional role in probability theory and mathematical statistics and their applications, since so many random variables in natural science, economics, psychology, sociology, military sciences, and so on have such a distribution. This distribution is the limiting law, which is approached (under certain natural conditions) by many other laws of distribution. With the help of the normal distribution law, phenomena are also described that are subject to the action of many independent random factors of any nature and any law of their distribution. Let's move on to definitions.

A continuous random variable is said to be distributed according to the normal law (or Gauss's law) if its probability density has the form:

f(x) = (1/(σ√(2π))) · e^(−(x − a)²/(2σ²)),

where the numbers a and σ (σ > 0) are the parameters of this distribution.

As already mentioned, the Gauss law of distribution of random variables has numerous applications. According to this law, measurement errors by instruments, deviation from the center of the target during shooting, dimensions of manufactured parts, weight and height of people, annual precipitation, number of newborns, and much more are distributed.

The above formula for the probability density of a normally distributed random variable contains, as was said, two parameters A And σ , and therefore defines a family of functions that vary depending on the values ​​of these parameters. If we apply the usual methods of mathematical analysis of the study of functions and plotting to the probability density of a normal distribution, we can draw the following conclusions.


The function f(x) attains its maximum, equal to 1/(σ√(2π)), at the point x = a; its graph is symmetric about the vertical line x = a; and the points x = a − σ and x = a + σ are its inflection points.

Based on the information obtained, we build the graph of the probability density f(x) of the normal distribution (it is called the Gaussian curve; see figure).

Let us find out how changes in the parameters a and σ affect the shape of the Gaussian curve. It is obvious (this can be seen from the formula for the density of the normal distribution) that a change in the parameter a does not change the shape of the curve, but only shifts it to the right or left along the x-axis. The dependence on σ is more complicated. The study above shows how the maximum value and the coordinates of the inflection points depend on the parameter σ. In addition, it should be taken into account that for any parameters a and σ the area under the Gaussian curve remains equal to 1 (this is a general property of the probability density). From what has been said, it follows that as the parameter σ increases, the curve becomes flatter and stretches along the x-axis. The figure shows Gaussian curves for various values of the parameter σ (σ1 < σ < σ2) and the same value of the parameter a.

Let us find the probabilistic meaning of the parameters a and σ of the normal distribution. From the symmetry of the Gaussian curve with respect to the vertical line passing through the number a on the x-axis, it is already clear that the average value (i.e. the mathematical expectation M(X)) of a normally distributed random variable equals a. For the same reasons, the mode and the median must also equal a. Exact calculations by the corresponding formulas confirm this. If we substitute the above expression for f(x) into the formula for the variance

D(X) = ∫ (x − a)²·f(x) dx,

then after a (rather difficult) calculation of the integral we obtain the number σ² in the answer. Thus, for a random variable X distributed according to the normal law, the following main numerical characteristics are obtained:

M(X) = a, D(X) = σ², σ(X) = σ.

Therefore, the probabilistic meaning of the parameters a and σ of the normal distribution is as follows: if a r.v. X is distributed normally with parameters a and σ, then a is its average value and σ is its standard deviation.

Let us now find the distribution function F(x) for a random variable X distributed according to the normal law, using the above expression for the probability density f(x) and the formula

F(x) = ∫ f(t) dt, taken from −∞ to x.

When we substitute f(x), we obtain an integral that cannot be expressed in elementary functions. All that can be done to simplify the expression for F(x) is to represent this function in the form:

F(x) = 1/2 + Φ((x − a)/σ),

where Φ(x) is the so-called Laplace function, which looks like

Φ(x) = (1/√(2π)) · ∫ e^(−t²/2) dt, taken from 0 to x.

The integral in terms of which the Laplace function is expressed is also non-elementary (but for each x it can be calculated approximately with any predetermined accuracy). However, it is not required to calculate it, since at the end of any textbook on probability theory there is a table of the values of the function Φ(x) for a given x. In what follows, we will need the oddness property of the Laplace function: Φ(−x) = −Φ(x) for all numbers x.

Let us now find the probability that a normally distributed r.v. X takes a value from a given numerical interval (α, β). From the general properties of the distribution function, P(α < X < β) = F(β) − F(α). Substituting α and β into the above expression for F(x), we get

P(α < X < β) = Φ((β − a)/σ) − Φ((α − a)/σ).

As mentioned above, if the r.v. X is distributed normally with parameters a and σ, then its mean value is a and its standard deviation is σ. Therefore, the average deviation of the values of this r.v. in trials from the number a equals σ. But that is only the average deviation, so larger deviations are also possible. Let us find out how likely various deviations from the mean are. Let us find the probability that the value of a random variable X distributed according to the normal law deviates from its mean M(X) = a by less than some number δ, i.e. P(|X − a| < δ). We have:

P(|X − a| < δ) = P(a − δ < X < a + δ) = Φ(δ/σ) − Φ(−δ/σ) = 2Φ(δ/σ).

Thus,

P(|X − a| < δ) = 2Φ(δ/σ).

Substituting δ = 3σ into this equality, we obtain the probability that the value of the r.v. X (in one trial) deviates from the mean by less than three σ (with the average deviation, as we remember, equal to σ): P(|X − a| < 3σ) = 2Φ(3) ≈ 0.997 (the value Φ(3) is taken from the table of the Laplace function). It is almost 1! Then the probability of the opposite event, that the value deviates by at least 3σ, equals 1 − 0.997 = 0.003, which is very close to 0. Therefore, this event is "almost impossible": it happens very rarely (on average 3 times out of 1000). This reasoning is the rationale for the well-known "three sigma rule".

Three sigma rule. A normally distributed random variable in a single trial practically does not deviate from its average by more than 3σ.

Once again, we emphasize that we are talking about one trial. If there are many trials of a random variable, then it is quite possible that some of its values will stray further from the average than 3σ, as the following example confirms.

Example. What is the probability that after 100 trials of a normally distributed random variable X at least one of its values ​​will deviate from the mean by more than three times the standard deviation? What about 1000 trials?

Solution. Let the event A mean that in a trial of the random variable X its value deviated from the mean by more than 3σ. As has just been found out, the probability of this event is p = P(A) = 0.003. 100 such trials have been carried out. We need to find the probability that the event A happened at least once, i.e. occurred from 1 to 100 times. This is a typical Bernoulli-scheme problem with parameters n = 100 (number of independent trials), p = 0.003 (probability of the event A in one trial), q = 1 − p = 0.997. We need to find P100(1 ≤ k ≤ 100). In this case, of course, it is easier to first find the probability of the opposite event, P100(0), the probability that the event A never happened (i.e. happened 0 times). Considering the connection between the probabilities of an event and its opposite, we get:

P100(1 ≤ k ≤ 100) = 1 − P100(0) = 1 − 0.997^100 ≈ 1 − 0.74 = 0.26.

This is not so little: it may well happen (on average, in every fourth such series of trials). With 1000 trials, by the same scheme, the probability of at least one deviation further than 3σ equals 1 − 0.997^1000 ≈ 0.95. So one can confidently expect at least one such deviation.
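The same calculation in code (a sketch; `atLeastOne` is an illustrative helper):

```javascript
// Probability of at least one 3-sigma deviation in a series of n trials:
// 1 - q^n, the complement of "the event never happened".
function atLeastOne(p, n) { return 1 - Math.pow(1 - p, n); }

var p100 = atLeastOne(0.003, 100);    // ≈ 0.26
var p1000 = atLeastOne(0.003, 1000);  // ≈ 0.95
```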

Example. The height of men of a certain age group is normally distributed with mathematical expectation a and standard deviation σ. What proportion of suits of the k-th height group should be included in the total production for this age group if the k-th height group is determined by the following limits:

height 1: 158 - 164 cm; height 2: 164 - 170 cm; height 3: 170 - 176 cm; height 4: 176 - 182 cm.

Solution. Let us solve the problem for the following parameter values: a = 178, σ = 6, k = 3. Let the r.v. X be the height of a randomly selected man (by the condition, it is distributed normally with the given parameters). Find the probability that a randomly chosen man needs the 3rd height group. Using the oddness of the Laplace function Φ(x) and the table of its values:

P(170 < X < 176) = Φ((176 − 178)/6) − Φ((170 − 178)/6) = Φ(4/3) − Φ(1/3) ≈ 0.4082 − 0.1293 = 0.2789.

Therefore, in the total volume of production it is necessary to provide 0.2789·100% = 27.89% of suits of the 3rd height group.
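Without a table of the Laplace function at hand, the same probability can be approximated numerically; the sketch below integrates the standard normal density with Simpson's rule (the helper names are illustrative, and the small difference from 0.2789 comes from the rounding of the table values):

```javascript
// Numeric stand-in for the Laplace-function table: integrate the standard
// normal density phi(t) over [z1, z2] with Simpson's rule.
function phi(t) { return Math.exp(-t * t / 2) / Math.sqrt(2 * Math.PI); }

function stdNormalProb(z1, z2) {
    var n = 1000, h = (z2 - z1) / n, sum = phi(z1) + phi(z2);
    for (var i = 1; i < n; i++) sum += phi(z1 + i * h) * (i % 2 ? 4 : 2);
    return sum * h / 3;
}

var a = 178, sigma = 6;
// P(170 < X < 176) after standardizing both limits
var p = stdNormalProb((170 - a) / sigma, (176 - a) / sigma);  // ≈ 0.278
```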

The probability distribution of a continuous random variable X that takes all values from a segment [a, b] is called uniform if its probability density is constant on this segment and equal to zero outside it. Thus, the probability density of a continuous random variable X distributed uniformly on the segment [a, b] looks like:

f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.
Let us define the mathematical expectation, the variance and the standard deviation for a random variable with a uniform distribution:

M(X) = (a + b)/2, D(X) = (b − a)²/12, σ(X) = (b − a)/(2√3).

Example. All values of a uniformly distributed random variable lie on the segment [2, 8]. Find the probability that the random variable falls into the interval (3, 5).

a = 2, b = 8, P(3 < X < 5) = (5 − 3)/(8 − 2) = 1/3.

Binomial distribution

Let n trials be carried out, and let the probability of occurrence of an event A in each trial be p, independent of the outcomes of the other trials (independent trials). Since the probability of occurrence of the event A in one trial is p, the probability of its non-occurrence is q = 1 − p.

Let the event A occur m times in n trials. This complex event can be written as a product:

A·A·…·A·Ā·Ā·…·Ā (m factors A and n − m factors Ā).

Then the probability that the event A occurs m times in n trials is calculated by the formula:

P_n(m) = C(n, m)·p^m·q^(n−m).   (1)

Formula (1) is called the Bernoulli formula.

Let X be a random variable equal to the number of occurrences of the event A in n trials, which takes its values with the probabilities given by formula (1).

The resulting distribution law of the random variable is called the binomial distribution law:

X | 0       | 1       | … | m       | … | n
P | P_n(0)  | P_n(1)  | … | P_n(m)  | … | P_n(n)

The mathematical expectation, variance and standard deviation of a random variable distributed according to the binomial law are determined by the formulas:

M(X) = n·p, D(X) = n·p·q, σ(X) = √(n·p·q).

Example. Three shots are fired at a target, and the probability of a hit with each shot is 0.8. Consider the random variable X, the number of hits on the target. Find its distribution law, mathematical expectation, variance and standard deviation.

p = 0.8, q = 0.2, n = 3, M(X) = n·p = 2.4, D(X) = n·p·q = 0.48, σ(X) = √0.48 ≈ 0.69.

P3(0) = q³ = 0.2³ = 0.008 is the probability of 0 hits;

P3(1) = 3·p·q² = 3·0.8·0.04 = 0.096 is the probability of one hit;

P3(2) = 3·p²·q = 3·0.64·0.2 = 0.384 is the probability of two hits;

P3(3) = p³ = 0.8³ = 0.512 is the probability of three hits.

We get the distribution law:

X | 0     | 1     | 2     | 3
P | 0.008 | 0.096 | 0.384 | 0.512
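The same distribution law can be recomputed from the Bernoulli formula (a sketch; `binomialPmf` is an illustrative helper):

```javascript
// Distribution law for n = 3 shots with hit probability p = 0.8,
// recomputed from the Bernoulli formula P_n(m) = C(n,m) * p^m * q^(n-m).
function binomialPmf(n, m, p) {
    var c = 1;                                   // binomial coefficient C(n, m)
    for (var i = 1; i <= m; i++) c = c * (n - i + 1) / i;
    return c * Math.pow(p, m) * Math.pow(1 - p, n - m);
}

var law = [0, 1, 2, 3].map(function (m) { return binomialPmf(3, m, 0.8); });
// law ≈ [0.008, 0.096, 0.384, 0.512]
var mean = 3 * 0.8;           // np = 2.4
var variance = 3 * 0.8 * 0.2; // npq = 0.48
```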

Tasks

1. A coin is tossed 7 times. Find the probability that it lands coat-of-arms up 4 times.

2. A coin is tossed 8 times. Find the probability that the coat of arms will appear no more than three times.

3. The probability of hitting the target when firing from a gun p=0.6. Find the mathematical expectation of the total number of hits if 10 shots are fired.

4. Find the mathematical expectation of the number of lottery tickets that will win if 20 tickets are purchased, and the probability of winning for one ticket is 0.3.

The distribution function in this case, according to (5.7), takes the form:

F(x) = (1/(s√(2π))) · ∫ e^(−(t − m)²/(2s²)) dt, taken from −∞ to x,

where m is the mathematical expectation and s is the standard deviation.

The normal distribution is also called Gaussian, after the German mathematician Gauss. The fact that a random variable X has a normal distribution with parameters m and s is denoted as follows: X ~ N(m, s), where m = a = M(X) and s = σ.

Quite often in formulas the mathematical expectation is denoted by a. If a random variable is distributed according to the law N(0, 1), then it is called a normalized or standardized normal variable. Its distribution function has the form:

F0(x) = (1/√(2π)) · ∫ e^(−t²/2) dt, taken from −∞ to x.

The graph of the density of the normal distribution, called the normal curve or Gaussian curve, is shown in Fig. 5.4.

Fig. 5.4. Normal distribution density

Determining the numerical characteristics of a random variable by its density is considered on an example.

Example 6.

A continuous random variable is given by the distribution density f(x) = (1/(3√(2π))) · e^(−(x − 4)²/18).

Determine the type of distribution, find the mathematical expectation M(X) and the variance D(X).

Comparing the given distribution density with (5.16), we conclude that the normal distribution law with m = 4 and s = 3 is given. Therefore, the mathematical expectation is M(X) = 4 and the variance is D(X) = 9.

Standard deviation s=3.

The Laplace function, which has the form

Φ(x) = (1/√(2π)) · ∫ e^(−t²/2) dt, taken from 0 to x,

is related to the normal distribution function (5.17) by the relation:

F0(x) = Φ(x) + 0.5.

The Laplace function is odd:

Φ(−x) = −Φ(x).

The values of the Laplace function Φ(x) are tabulated and taken from the table according to the value of x (see Appendix 1).

The normal distribution of a continuous random variable plays an important role in the theory of probability and in the description of reality; it is very widespread in random natural phenomena. In practice, very often there are random variables that are formed precisely as a result of the summation of many random terms. In particular, the analysis of measurement errors shows that they are the sum of various kinds of errors. Practice shows that the probability distribution of measurement errors is close to the normal law.

Using the Laplace function, one can solve problems of calculating the probability of falling into a given interval and a given deviation of a normal random variable.

This problem has long been studied in detail, and the most widely used solution is the polar coordinate method, proposed by George Box, Mervyn Muller and George Marsaglia in 1958. This method allows you to get a pair of independent normally distributed random variables with mean 0 and variance 1 as follows:

Z0 = u·√(−2·ln(s)/s), Z1 = v·√(−2·ln(s)/s),

where Z0 and Z1 are the desired values, s = u² + v², and u and v are random variables uniformly distributed on the segment (−1, 1), chosen in such a way that the condition 0 < s < 1 is satisfied.
Many people use these formulas without even thinking, and many do not even suspect their existence, since they use ready-made implementations. But some have questions: "Where did this formula come from? And why do you get a pair of values at once?" In what follows, I will try to give a clear answer to these questions.


To begin with, let me remind you what the probability density, the distribution function of a random variable and the inverse function are. Suppose there is some random variable, the distribution of which is given by the density function f(x), which has the following form:

This means that the probability that the value of this random variable will be in the interval (A, B) is equal to the area of ​​the shaded area. And as a consequence, the area of ​​the entire shaded area must be equal to unity, since in any case the value of the random variable will fall into the domain of the function f.
The distribution function of a random variable is an integral of the density function. And in this case, its approximate form will be as follows:

Here the meaning is that the value of the random variable will be less than A with probability B. As a consequence, the function never decreases, and its values lie in the interval [0, 1].

An inverse function is a function that returns the argument of the original function when you pass the value of the original function into it. For example, for the function x² the inverse is the square-root function, for sin(x) it is arcsin(x), etc.

Since most pseudo-random number generators give only a uniform distribution at the output, it often becomes necessary to convert it to some other one. In this case, to a normal Gaussian:

The basis of all methods for transforming a uniform distribution into any other distribution is the inverse transformation method. It works as follows: a function inverse to the distribution function of the required distribution is found, and a random variable uniformly distributed on the segment (0, 1) is passed to it as an argument. At the output, we obtain a value with the required distribution. For clarity, here is the following picture.

Thus, the uniform segment is, as it were, smeared out in accordance with the new distribution, being projected onto the other axis through the inverse function. But the problem is that the integral of the density of the Gaussian distribution is not easy to calculate, so the scientists above had to cheat.
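For a distribution whose inverse distribution function is elementary, the method is a one-liner; here is a sketch for the exponential distribution, which will be needed below:

```javascript
// Inverse-transform sketch: a uniform u on (0, 1) becomes an exponential
// sample with rate lambda through the inverse CDF, x = -ln(1 - u) / lambda.
function exponentialFromUniform(u, lambda) {
    return -Math.log(1 - u) / lambda;
}

var lambda = 0.5;
var x = exponentialFromUniform(0.75, lambda);
// applying the exponential CDF F(x) = 1 - exp(-lambda*x) recovers the quantile
var back = 1 - Math.exp(-lambda * x);
```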

There is the chi-square distribution (Pearson distribution), which is the distribution of the sum of the squares of k independent standard normal random variables. And in the case k = 2, this distribution is exponential.

This means that if a point in a rectangular coordinate system has random X and Y coordinates distributed normally, then after converting these coordinates to the polar system (r, θ), the square of the radius (the distance from the origin to the point) will be distributed exponentially, since the square of the radius is the sum of the squares of the coordinates (by the Pythagorean theorem). The density of the distribution of such points on the plane will look like this:


Since it is the same in all directions, the angle θ will have a uniform distribution in the range from 0 to 2π. The converse is also true: if you specify a point in the polar coordinate system using two independent random variables (an angle distributed uniformly and a radius distributed exponentially), then the rectangular coordinates of this point will be independent normal random variables. And the exponential distribution is much easier to obtain from the uniform one, using the same inverse transformation method. This is the essence of the polar Box-Muller method.
Now let's get the formulas.

x = r·cos(θ), y = r·sin(θ)   (1)

To obtain r and θ, it is necessary to generate two random variables uniformly distributed on the segment (0, 1) (call them u and v), and the distribution of one of them (say, v) must be converted to exponential to obtain the radius. The exponential distribution function looks like this:

F(x) = 1 − e^(−λx).

Its inverse function:

F⁻¹(v) = −ln(1 − v)/λ.
Since the uniform distribution is symmetric, the transformation works in the same way with the function

F⁻¹(v) = −ln(v)/λ.

It follows from the chi-square distribution formula that λ = 0.5. We substitute λ and v into this function and obtain the square of the radius, and then the radius itself:

r² = −2·ln(v), r = √(−2·ln(v)).
We obtain the angle by stretching the unit segment to 2π:

θ = 2π·u.
Now we substitute r and θ into formulas (1) and get:

x = cos(2πu)·√(−2·ln(v)), y = sin(2πu)·√(−2·ln(v)).   (2)

These formulas are ready to use. X and Y will be independent and normally distributed with a variance of 1 and a mean of 0. To get a distribution with other characteristics, it is enough to multiply the result of the function by the standard deviation and add the mean.
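A sketch of formulas (2) in code (the basic, trigonometric form of the transform; fixed u and v are passed in here only to make the result checkable):

```javascript
// Basic (trigonometric) form of the Box-Muller transform: two uniforms
// u, v on (0, 1) give two independent standard normal values.
function boxMuller(u, v) {
    var r = Math.sqrt(-2.0 * Math.log(v)); // radius from the exponential r^2
    var theta = 2.0 * Math.PI * u;         // uniformly distributed angle
    return [r * Math.cos(theta), r * Math.sin(theta)];
}

var pair = boxMuller(0.25, 0.5);
// the squared radius must come back as -2*ln(v)
var s2 = pair[0] * pair[0] + pair[1] * pair[1];
```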
But it is possible to get rid of the trigonometric functions by specifying the angle not directly, but indirectly, through the rectangular coordinates of a random point in a circle. Through these coordinates one can calculate the length of the radius vector, and then find the cosine and sine by dividing the coordinates by it. How and why does this work?
We choose a random point uniformly distributed in the circle of unit radius and denote the square of the length of the radius vector of this point by the letter s:

s = u² + v².

The choice is made by assigning random rectangular coordinates u and v uniformly distributed in the interval (−1, 1) and discarding points that do not belong to the circle, as well as the central point, at which the angle of the radius vector is not defined; that is, the condition 0 < s < 1 must hold. Then, as in the case of the Gaussian distribution on the plane, the angle θ will be distributed uniformly. This is obvious: the number of points in each direction is the same, so every angle is equally probable. But there is also a less obvious fact: s will also have a uniform distribution. The obtained s and θ will be independent of each other. Therefore, we can use the value of s to obtain the exponential distribution without generating a third random variable. Now substitute s into formulas (2) instead of v, and instead of the trigonometric functions use their calculation by dividing each coordinate by the length of the radius vector, which in this case is the square root of s:

Z0 = u·√(−2·ln(s)/s), Z1 = v·√(−2·ln(s)/s).

We get the formulas from the beginning of the article. The disadvantage of this method is the rejection of points that do not fall inside the circle; that is, only 78.5% (π/4) of the generated random variables are used. On older computers, the absence of trigonometric functions was a big advantage. Now, when one processor instruction calculates sine and cosine together in an instant, I think these methods can still compete.

Personally, I have two more questions:

  • Why is the value of s uniformly distributed?
  • Why is the sum of squares of two normal random variables exponentially distributed?
Since s is the square of the radius (for simplicity, the radius is the length of the radius vector that specifies the position of a random point), let us first find out how the radii are distributed. Since the circle is filled uniformly, it is obvious that the number of points with radius r is proportional to the circumference of the circle of radius r. The circumference of a circle is proportional to the radius. This means that the distribution density of the radii increases uniformly from the center of the circle to its edges, and the density function has the form f(x) = 2x on the interval (0, 1); the coefficient 2 makes the area under the graph equal to one. When such a random variable is squared, its density becomes uniform, because the density of the transformed variable is obtained by dividing the original density by the derivative of the transformation function (here, by the derivative of x², which is 2x). Visually, it happens like this:

If a similar transformation is done for a normal random variable, the density function of its square turns out to resemble a hyperbola. And the addition of two squares of normal random variables is a much more complicated process, involving double integration. That the result is an exponential distribution, I personally can only verify by a practical method or accept as an axiom. And for those who are interested, I suggest getting to know the topic more closely by drawing knowledge from these books:

  • Wentzel E.S. Probability Theory
  • Knuth D.E. The Art of Computer Programming, Volume 2

In conclusion, I will give an example of the implementation of a normally distributed random number generator in JavaScript:

function Gauss() {
    this.ready = false;   // is a precomputed second value available?
    this.second = 0.0;
    this.next = function (mean, dev) {
        mean = mean === undefined ? 0.0 : mean;
        dev = dev === undefined ? 1.0 : dev;
        if (this.ready) {
            this.ready = false;
            return this.second * dev + mean;
        } else {
            var u, v, s;
            do {
                u = 2.0 * Math.random() - 1.0;
                v = 2.0 * Math.random() - 1.0;
                s = u * u + v * v;
            } while (s > 1.0 || s == 0.0);
            var r = Math.sqrt(-2.0 * Math.log(s) / s);
            this.second = r * u;   // save the second value of the pair
            this.ready = true;
            return r * v * dev + mean;
        }
    };
}

var g = new Gauss(); // create an object
var a = g.next();    // generate a pair of values and get the first one
var b = g.next();    // get the second
var c = g.next();    // generate a pair of values again and get the first one
The mean (mathematical expectation) and dev (standard deviation) parameters are optional. Note that the logarithm here is natural.
