Two-dimensional random variables. Conditional characteristics of the components of a two-dimensional random variable

A set of random variables X_1, X_2, ..., X_p defined on a common probability space (Ω, F, P) forms a p-dimensional random variable (X_1, X_2, ..., X_p). If an economic process is described by two random variables X_1 and X_2, then a two-dimensional random variable (X_1, X_2), or (X, Y), is obtained.

The distribution function of a system of two random variables (X, Y), considered as a function of the variables x and y, is the probability of the joint occurrence of the events {X < x} and {Y < y}:

F(x, y) = P(X < x, Y < y).

The values of the distribution function satisfy the inequality 0 ≤ F(x, y) ≤ 1.

From a geometric point of view, the distribution function F(x, y) determines the probability that the random point (X, Y) falls into the infinite quadrant with vertex at the point (x, y), i.e. lies below and to the left of that vertex (Fig. 9.1).

The probability that the values of (X, Y) fall into a half-strip (Figure 9.2) or a half-strip (Figure 9.3) is expressed by the formulas

P(x_1 ≤ X < x_2, Y < y) = F(x_2, y) − F(x_1, y),
P(X < x, y_1 ≤ Y < y_2) = F(x, y_2) − F(x, y_1),

respectively. The probability that the values of (X, Y) fall into a rectangle (Fig. 9.4) can be found by the formula

P(x_1 ≤ X < x_2, y_1 ≤ Y < y_2) = F(x_2, y_2) − F(x_1, y_2) − F(x_2, y_1) + F(x_1, y_1).

Fig.9.2 Fig.9.3 Fig.9.4

A two-dimensional random variable is called discrete if its components are discrete random variables.

The distribution law of a two-dimensional discrete random variable (X, Y) is the set of possible values (x_i, y_j) of the discrete random variables X and Y together with the corresponding probabilities p(x_i, y_j) = P(X = x_i, Y = y_j), each characterizing the probability that the component X takes the value x_i and, at the same time, the component Y takes the value y_j; moreover, Σ_i Σ_j p(x_i, y_j) = 1.

The distribution law of a two-dimensional discrete random variable (X, Y) is usually given in the form of a table (Table 9.1).

Table 9.1

Y \ X   x_1           x_2           ...   x_i           ...
y_1     p(x_1, y_1)   p(x_2, y_1)   ...   p(x_i, y_1)   ...
y_2     p(x_1, y_2)   p(x_2, y_2)   ...   p(x_i, y_2)   ...
...     ...           ...           ...   ...           ...
y_j     p(x_1, y_j)   p(x_2, y_j)   ...   p(x_i, y_j)   ...
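As a minimal computational sketch (the table below is hypothetical, not Table 9.1 itself), a joint distribution law can be stored as a dictionary keyed by the pairs (x_i, y_j); the normalization condition and the marginal distributions then follow directly:

```python
# A minimal sketch: a joint distribution p(x_i, y_j) stored as a dictionary.
# The numeric values below are illustrative only.
joint = {
    (0, 0): 0.10, (1, 0): 0.25, (2, 0): 0.15,
    (0, 1): 0.20, (1, 1): 0.20, (2, 1): 0.10,
}

# The probabilities of a distribution law must sum to 1.
assert abs(sum(joint.values()) - 1.0) < 1e-12

# Marginal (one-dimensional) distributions of X and Y:
p_x = {}
p_y = {}
for (x, y), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

print({x: round(p, 6) for x, p in p_x.items()})  # ≈ {0: 0.3, 1: 0.45, 2: 0.25}
print({y: round(p, 6) for y, p in p_y.items()})  # ≈ {0: 0.5, 1: 0.5}
```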

A two-dimensional random variable is called continuous if its components are continuous. The function p(x, y), equal to the limit of the ratio of the probability that the two-dimensional random variable (X, Y) falls into a rectangle with sides Δx and Δy to the area of this rectangle, as both sides of the rectangle tend to zero, is called the probability distribution density:

p(x, y) = lim (Δx→0, Δy→0) P(x ≤ X < x + Δx, y ≤ Y < y + Δy) / (Δx·Δy).

Knowing the distribution density, one can find the distribution function by the formula

F(x, y) = ∫_(−∞)^x ∫_(−∞)^y p(u, v) dv du.

At every point where the second-order mixed derivative of the distribution function exists, the probability distribution density can be found by the formula

p(x, y) = ∂²F(x, y) / ∂x∂y.

The probability that the random point (X, Y) falls into a region D is defined by the equality

P((X, Y) ∈ D) = ∬_D p(x, y) dx dy.

The probability that the random variable X takes a value X < x given that the random variable Y has taken a fixed value Y = y (the conditional distribution function) is calculated by the formula

F(x | y) = P(X < x | Y = y).




Likewise, F(y | x) = P(Y < y | X = x).

The conditional probability distribution densities of the components X and Y are calculated by the formulas

p(x | y) = p(x, y) / p_Y(y),    p(y | x) = p(x, y) / p_X(x),

where p_X(x) = ∫ p(x, y) dy and p_Y(y) = ∫ p(x, y) dx are the marginal densities.

The set of conditional probabilities p(x_1 | y_j), p(x_2 | y_j), ..., p(x_i | y_j), ... corresponding to the condition Y = y_j is called the conditional distribution of the component X given Y = y_j of the discrete two-dimensional random variable (X, Y), where

p(x_i | y_j) = p(x_i, y_j) / p(y_j).

Similarly, the conditional distribution of the component Y given X = x_i of the discrete two-dimensional random variable (X, Y) is the set of conditional probabilities corresponding to the condition X = x_i, where

p(y_j | x_i) = p(x_i, y_j) / p(x_i).
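A short sketch of the same rule in Python (again with a hypothetical joint table): dividing the probabilities of one column of the table by the corresponding marginal probability yields the conditional distribution law.

```python
# Conditional distribution of X given Y = y_fixed, from a joint table.
# The joint probabilities below are illustrative only.
joint = {
    (0, 0): 0.10, (1, 0): 0.25, (2, 0): 0.15,
    (0, 1): 0.20, (1, 1): 0.20, (2, 1): 0.10,
}

def conditional_x_given_y(joint, y_fixed):
    """Return {x: P(X = x | Y = y_fixed)}."""
    p_y = sum(p for (x, y), p in joint.items() if y == y_fixed)  # marginal p(y_fixed)
    return {x: p / p_y for (x, y), p in joint.items() if y == y_fixed}

print({x: round(v, 6) for x, v in conditional_x_given_y(joint, 0).items()})
# ≈ {0: 0.2, 1: 0.5, 2: 0.3}
```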

The initial moment of order k + s of a two-dimensional random variable (X, Y) is the mathematical expectation of the product X^k Y^s, i.e. M(X^k Y^s).

If X and Y are discrete random variables, then M(X^k Y^s) = Σ_i Σ_j x_i^k · y_j^s · p(x_i, y_j).

If X and Y are continuous random variables, then M(X^k Y^s) = ∬ x^k y^s p(x, y) dx dy.

The central moment of order k + s of a two-dimensional random variable (X, Y) is the mathematical expectation of the product (X − M(X))^k (Y − M(Y))^s, i.e. M[(X − M(X))^k (Y − M(Y))^s].

If the component variables are discrete, then the central moment equals Σ_i Σ_j (x_i − M(X))^k (y_j − M(Y))^s · p(x_i, y_j).

If the component variables are continuous, then it equals ∬ (x − M(X))^k (y − M(Y))^s p(x, y) dx dy,

where p(x, y) is the distribution density of the two-dimensional random variable (X, Y).

The conditional expectation of Y given X = x (respectively, of X given Y = y) is the expression

M(Y | X = x) = Σ_j y_j · p(y_j | x) – for a discrete random variable Y (X);

M(Y | X = x) = ∫ y · p(y | x) dy – for a continuous random variable Y (X).
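The discrete case can be sketched in a few lines of Python (the joint table is hypothetical); the function below evaluates the regression function ψ(x) = M(Y | X = x) at each possible x.

```python
# Conditional expectation M(Y | X = x) for a discrete joint distribution.
# Illustrative joint probabilities, not taken from the text.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.25, (1, 1): 0.20,
    (2, 0): 0.15, (2, 1): 0.10,
}

def conditional_expectation_y(joint, x_fixed):
    """M(Y | X = x_fixed) = sum over j of y_j * p(y_j | x_fixed)."""
    p_x = sum(p for (x, y), p in joint.items() if x == x_fixed)   # marginal p(x_fixed)
    return sum(y * p for (x, y), p in joint.items() if x == x_fixed) / p_x

# The regression function psi(x) = M(Y | X = x), evaluated at each x:
for x in (0, 1, 2):
    print(x, round(conditional_expectation_y(joint, x), 4))
# prints approximately: 0 0.6667, 1 0.4444, 2 0.4
```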

The mathematical expectations of the components X and Y of a two-dimensional random variable are calculated by the formulas

M(X) = Σ_i x_i p(x_i), M(Y) = Σ_j y_j p(y_j) in the discrete case, and M(X) = ∬ x p(x, y) dx dy, M(Y) = ∬ y p(x, y) dx dy in the continuous case.



The correlation moment (covariance) of the random variables X and Y forming the two-dimensional random variable (X, Y) is the mathematical expectation of the product of the deviations of these variables from their expectations:

K(X, Y) = M[(X − M(X))(Y − M(Y))].

The correlation moment of two independent random variables X and Y forming the two-dimensional random variable (X, Y) is equal to zero.

The correlation coefficient of the random variables X and Y forming the two-dimensional random variable (X, Y) is the ratio of the correlation moment to the product of the standard deviations of these variables:

r_XY = K(X, Y) / (σ_X · σ_Y).



The correlation coefficient characterizes the degree (tightness) of the linear correlation dependence between X and Y. Random variables for which r_XY = 0 are called uncorrelated.

The correlation coefficient satisfies the properties:

1. The correlation coefficient does not depend on the units of measurement of random variables.

2. The absolute value of the correlation coefficient does not exceed one: |r_XY| ≤ 1.

3. If |r_XY| = 1, then the components X and Y of the random variable (X, Y) are connected by a linear functional dependence: Y = aX + b.

4. If r_XY = 0, then the components X and Y of the two-dimensional random variable are uncorrelated.

5. If r_XY ≠ 0, then the components X and Y of the two-dimensional random variable are dependent.

The equations M(X | Y = y) = φ(y) and M(Y | X = x) = ψ(x) are called regression equations, and the lines defined by them are called regression lines.
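As an illustration of these numerical characteristics (with a hypothetical joint table), the sketch below computes M(X), M(Y), D(X), D(Y), K(X, Y) and r_XY through the initial moments, and checks property 2:

```python
from math import sqrt

# Covariance K(X, Y) and correlation coefficient r for a discrete (X, Y).
# Illustrative joint probabilities only.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.25, (1, 1): 0.20,
    (2, 0): 0.15, (2, 1): 0.10,
}

def expectation(joint, f):
    """M[f(X, Y)] = sum over the table of f(x, y) * p(x, y)."""
    return sum(f(x, y) * p for (x, y), p in joint.items())

mx = expectation(joint, lambda x, y: x)                 # M(X)
my = expectation(joint, lambda x, y: y)                 # M(Y)
dx = expectation(joint, lambda x, y: x * x) - mx ** 2   # D(X)
dy = expectation(joint, lambda x, y: y * y) - my ** 2   # D(Y)
k  = expectation(joint, lambda x, y: x * y) - mx * my   # K(X, Y) = M(XY) - M(X)M(Y)
r  = k / sqrt(dx * dy)                                  # correlation coefficient

print(round(k, 4), round(r, 4))   # ≈ -0.075 -0.2027 for this table
assert abs(r) <= 1                # property 2: |r| never exceeds 1
```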

Tasks

9.1. A two-dimensional discrete random variable (X, Y) is given by the distribution law:

Table 9.2

Y \ X   x_1    x_2    x_3    x_4
y_1     0.20   0.15   0.08   0.05
y_2     0.10   0.05   0.05   0.10
y_3     0.05   0.07   0.08   0.02

Find: a) the distribution laws of the components X and Y;

b) the conditional distribution law of Y given X = 1;

c) the distribution function.

Determine whether X and Y are independent. Calculate the required probability and the basic numerical characteristics M(X), M(Y), D(X), D(Y), K(X, Y) and the correlation coefficient.

Solution. a) The random variables X and Y are defined on a set of elementary outcomes of the form:

To the event (X = 1) there corresponds the set of outcomes whose first component equals 1: (1;0), (1;1), (1;2). These outcomes are mutually exclusive, so, by Kolmogorov's third axiom, the probability that X takes the value x_i equals

p(x_i) = Σ_j p(x_i, y_j).

Similarly, p(y_j) = Σ_i p(x_i, y_j).

Therefore, the marginal distribution of the component X can be given in the form of Table 9.3.

Table 9.3

b) The set of conditional probabilities p(0 | X=1), p(1 | X=1), p(2 | X=1) corresponding to the condition X = 1 is the conditional distribution of the component Y given X = 1. The probabilities of the values of Y given X = 1 are found by the formula

p(y_j | X=1) = p(1, y_j) / p(X=1).

Substituting the values of the corresponding probabilities, we obtain:

Thus, the conditional distribution of the component Y given X = 1 has the form:

Table 9.5

y_j            0      1      2
p(y_j | X=1)   0.48   0.30   0.22

Since the conditional and unconditional distribution laws do not coincide (see Tables 9.4 and 9.5), the quantities X and Y are dependent. This conclusion is confirmed by the fact that the equality

p(x_i, y_j) = p(x_i) · p(y_j)

does not hold for every pair of possible values of X and Y.

For example,

c) The distribution function F(x, y) of the two-dimensional random variable (X, Y) has the form

F(x, y) = Σ_(x_i < x) Σ_(y_j < y) p(x_i, y_j),

where the summation is performed over all points (x_i, y_j) for which the inequalities x_i < x and y_j < y are satisfied simultaneously. For the given distribution law we obtain:

It is more convenient to present the result in the form of Table 9.6.

Table 9.6

F(x, y)          x_1 < x ≤ x_2   x_2 < x ≤ x_3   x_3 < x ≤ x_4   x > x_4
y_1 < y ≤ y_2    0.20            0.35            0.43            0.48
y_2 < y ≤ y_3    0.30            0.50            0.63            0.78
y > y_3          0.35            0.62            0.83            1.00

(F(x, y) = 0 for x ≤ x_1 or y ≤ y_1.)

Using the formulas for the initial moments and the results of Tables 9.3 and 9.4, we calculate the mathematical expectations of the components X and Y:

The variances are calculated through the second initial moments, D(X) = M(X²) − (M(X))² and D(Y) = M(Y²) − (M(Y))², using the results of Tables 9.3 and 9.4:

To calculate the covariance K(X, Y) we use the analogous formula in terms of the initial moment: K(X, Y) = M(XY) − M(X)·M(Y).

The correlation coefficient is determined by the formula r_XY = K(X, Y) / (σ_X · σ_Y):

The desired probability is found as the probability that the random point falls into the region of the plane defined by the corresponding inequality:

9.2. The ship transmits an SOS message, which can be received by two radio stations. This signal can be received by one radio station independently of the other. The probability that the signal is received by the first radio station is 0.95; the probability that the signal is received by the second radio station is 0.85. Find the distribution law of a two-dimensional random variable characterizing the reception of a signal by two radio stations. Write a distribution function.

Solution. Let X be the indicator of the event that the signal is received by the first radio station, and Y the indicator of the event that the signal is received by the second radio station.

The set of values of X is {0, 1}:

X=1 – signal received by the first radio station;

X=0 – the signal was not received by the first radio station.

The set of values of Y is {0, 1}:

Y=1 – signal received by the second radio station,

Y=0 – the signal was not received by the second radio station.

The probability that the signal is received by neither the first nor the second radio station is

P(X = 0, Y = 0) = 0.05 · 0.15 = 0.0075.

The probability that the signal is received by the first radio station but not by the second:

P(X = 1, Y = 0) = 0.95 · 0.15 = 0.1425.

The probability that the signal is received by the second radio station but not by the first:

P(X = 0, Y = 1) = 0.05 · 0.85 = 0.0425.

The probability that the signal is received by both the first and the second radio station:

P(X = 1, Y = 1) = 0.95 · 0.85 = 0.8075.

Then the distribution law of the two-dimensional random variable is:

Y \ X   0        1
0       0.0075   0.1425
1       0.0425   0.8075

For each fixed point (x, y), the value F(x, y) is equal to the sum of the probabilities of those possible values of the random variable (X, Y) that lie to the left of and below the point (x, y), i.e. for which x_i < x and y_j < y.

Then the distribution function will look like:

9.3. Two firms produce the same product. Each, independently of the other, can decide to modernize production. The probability that the first firm made this decision is 0.6. The probability of making such a decision by the second firm is 0.65. Write the distribution law of a two-dimensional random variable characterizing the decision to modernize the production of two firms. Write a distribution function.

Answer: Distribution law:

Y \ X   0      1
0       0.14   0.21
1       0.26   0.39

For each fixed point (x, y), the value F(x, y) is equal to the sum of the probabilities of those possible values (x_i, y_j) for which x_i < x and y_j < y.

9.4. Piston rings for car engines are made on an automatic lathe. The thickness of a ring (random variable X) and the hole diameter (random variable Y) are measured. It is known that about 5% of all piston rings are defective; 3% of the rings are rejected only because of a non-standard hole diameter, 1% only because of non-standard thickness, and 1% on both grounds. Find: the joint distribution of the two-dimensional random variable (X, Y); the one-dimensional distributions of the components X and Y; the expectations of the components X and Y; the correlation moment and the correlation coefficient between the components X and Y of the two-dimensional random variable (X, Y).

Answer: Distribution law:

0,01 0,03
0,01 0,95


9.5. At a factory, defective output due to defect A amounts to 4%, and due to defect B to 3.5%. Standard (defect-free) output amounts to 96%. Determine what percentage of all products have defects of both types.

9.6. A random variable (X, Y) is distributed with constant density inside the square R whose vertices have coordinates (−2;0), (0;2), (2;0), (0;−2). Determine the distribution density of the random variable (X, Y) and the conditional distribution densities p(x | y), p(y | x).

Solution. We draw the given square (Fig. 9.5) in the xOy plane and determine the equations of the sides of the square ABCD using the equation of a straight line passing through two given points. Substituting the coordinates of the vertices A(−2;0) and B(0;2), we obtain the equation of the side AB: y = x + 2, or y − x = 2.

Similarly, we find the equations of the side BC: x + y = 2, the side CD: y = x − 2, and the side DA: x + y = −2.

The distribution surface of a random variable (X, Y) is a hemisphere of radius R centered at the origin. Find the probability distribution density.

Answer:

9.10. A discrete two-dimensional random variable is given:

0,25 0,10
0,15 0,05
0,32 0,13

Find: a) the conditional distribution law of X given Y = 10;

b) the conditional distribution law of Y given X = 10;

c) the mathematical expectations, the variances and the correlation coefficient.

9.11. A continuous two-dimensional random variable (X, Y) is uniformly distributed inside a right triangle with vertices O(0;0), A(0;8), B(8;0).

Find: a) probability distribution density;

Definition 2.7. A two-dimensional random variable is a pair of random variables (X, Y), or a point on the coordinate plane (Fig. 2.11).

Fig. 2.11.

A two-dimensional random variable is a special case of a multidimensional random variable, or random vector.

Definition 2.8. A random vector is a random function X(t) with a finite set of possible values of the argument t, whose value for any value of t is a random variable.

A two-dimensional random variable is called continuous if its coordinates are continuous, and discrete if its coordinates are discrete.

To specify the distribution law of a two-dimensional random variable means to establish a correspondence between its possible values and the probabilities of these values. By the way they are specified, random variables are divided into continuous and discrete, although there are general ways of specifying the distribution law of any random variable.

Discrete two-dimensional random variable

A discrete two-dimensional random variable is specified using a distribution table (Table 2.1).

Table 2.1

Distribution table (joint distribution) of the random variable (X, Y)

The table elements are defined by the formula p_ij = P(X = x_i, Y = y_j).

Properties of the distribution table elements: p_ij ≥ 0 and Σ_i Σ_j p_ij = 1.

The distribution over each coordinate is called one-dimensional or marginal:

p_i^(1) = P(X = x_i) is the marginal distribution of the random variable X;

p_j^(2) = P(Y = y_j) is the marginal distribution of the random variable Y.

The connection between the joint distribution of the random variables X and Y, given by the set of probabilities {p_ij}, i = 1, ..., n, j = 1, ..., m (the distribution table), and the marginal distributions: p_i^(1) = Σ_j p_ij.


Similarly, for the random variable Y: p_j^(2) = Σ_i p_ij.

Problem 2.14. Given:

Continuous 2D random variable

f(x, y) dx dy is the element of probability for a two-dimensional random variable (X, Y): the probability that (X, Y) falls into a rectangle with sides dx, dy as dx, dy → 0:

P(x ≤ X < x + dx, y ≤ Y < y + dy) ≈ f(x, y) dx dy.

f(x, y) is the distribution density of the two-dimensional random variable (X, Y). Specifying f(x, y) gives complete information about the distribution of the two-dimensional random variable.

The marginal distributions are specified as follows: for X, by the distribution density f_1(x) of the random variable X; for Y, by the distribution density f_2(y) of the random variable Y.

Setting the distribution law of a two-dimensional random variable by the distribution function

A universal way to specify the distribution law for a discrete or continuous two-dimensional random variable is the distribution function F(x, y).

Definition 2.9. The distribution function F(x, y) is the probability of the joint occurrence of the events {X < x} and {Y < y}, i.e. F(x_0, y_0) = P(X < x_0, Y < y_0). Geometrically, F(x_0, y_0) is the probability that the random point (X, Y), thrown onto the coordinate plane, falls into the infinite quadrant with vertex at the point M(x_0, y_0) (the shaded region in Fig. 2.12).

Fig. 2.12. Illustration of the distribution function F(x, y)

Properties of the function F(x, y):

  • 1) 0 ≤ F(x, y) ≤ 1;
  • 2) F(−∞, −∞) = F(x, −∞) = F(−∞, y) = 0; F(+∞, +∞) = 1;
  • 3) F(x, y) is non-decreasing in each argument;
  • 4) F(x, y) is continuous from the left in x and from below in y;
  • 5) consistency of the distributions:

F(x, +∞) = F_1(x) is the marginal distribution over X; F(+∞, y) = F_2(y) is the marginal distribution over Y. The connection between f(x, y) and F(x, y):

f(x, y) = ∂²F(x, y)/∂x∂y,   F(x, y) = ∫_(−∞)^x ∫_(−∞)^y f(u, v) du dv.

Relationship between the joint density and the marginal densities. Given f(x, y), the marginal distribution densities f_1(x), f_2(y) are obtained by

f_1(x) = ∫_(−∞)^(+∞) f(x, y) dy,   f_2(y) = ∫_(−∞)^(+∞) f(x, y) dx.
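As a numerical sketch of this relationship (not part of the original text), the marginal density can be recovered by integrating the joint density over the other variable; here the joint density of two independent standard normal components is used, so the computed marginal should coincide with the one-dimensional standard normal density:

```python
from math import exp, pi, sqrt

# Joint density of two independent standard normal components.
def f(x, y):
    return exp(-0.5 * (x * x + y * y)) / (2 * pi)

# Marginal density f1(x) = integral of f(x, y) dy, approximated by a midpoint sum.
def f1(x, a=-10.0, b=10.0, n=20000):
    h = (b - a) / n
    return sum(f(x, a + (k + 0.5) * h) for k in range(n)) * h

# Compare with the exact one-dimensional standard normal density.
for x in (0.0, 0.5, 1.0):
    exact = exp(-0.5 * x * x) / sqrt(2 * pi)
    print(x, round(f1(x), 6), round(exact, 6))  # the two columns agree to 6 decimals
```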


The case of independent coordinates of a two-dimensional random variable

Definition 2.10. Random variables X and Y are independent if any events associated with each of these random variables are independent. From the definition of independent random variables it follows that:

  • 1) p_ij = p_i^(1) p_j^(2);
  • 2) F(x, y) = F_1(x) F_2(y).

It turns out that for independent random variables X and Y the following also holds:

3) f(x, y) = f_1(x) f_2(y).

Let us prove that for independent random variables X and Y conditions 2) and 3) are equivalent. Proof. a) Let 2) hold, i.e. F(x, y) = F_1(x) F_2(y);

at the same time F(x, y) = ∫_(−∞)^x ∫_(−∞)^y f(u, v) du dv, so differentiating F(x, y) = F_1(x) F_2(y) with respect to x and y gives f(x, y) = f_1(x) f_2(y), i.e. 3) follows;

b) now let 3) hold; then

F(x, y) = ∫_(−∞)^x ∫_(−∞)^y f_1(u) f_2(v) du dv = ∫_(−∞)^x f_1(u) du · ∫_(−∞)^y f_2(v) dv = F_1(x) F_2(y),


i.e. 2) holds.

Let's consider tasks.

Problem 2.15. The distribution is given by the following table:

We build marginal distributions:

We get P(X = 3, Y = 4) = 0.17 ≠ P(X = 3) · P(Y = 4) = 0.1485, hence the random variables X and Y are dependent.

Distribution function:


Problem 2.16. The distribution is given by the following table:

We get p_11 = 0.2 · 0.3 = 0.06; p_12 = 0.2 · 0.7 = 0.14; p_21 = 0.8 · 0.3 = 0.24; p_22 = 0.8 · 0.7 = 0.56, so the random variables X and Y are independent.
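A small sketch of this independence check; since the joint table of Problem 2.16 is not reproduced above, the sketch assumes it consists exactly of the four products computed in the solution:

```python
# Independence check: p_ij should equal the product of the marginals for all i, j.
joint = {(1, 1): 0.06, (1, 2): 0.14, (2, 1): 0.24, (2, 2): 0.56}  # assumed table

p_x = {}
p_y = {}
for (i, j), p in joint.items():
    p_x[i] = p_x.get(i, 0.0) + p
    p_y[j] = p_y.get(j, 0.0) + p

independent = all(abs(p - p_x[i] * p_y[j]) < 1e-9 for (i, j), p in joint.items())
print({k: round(v, 6) for k, v in p_x.items()},
      {k: round(v, 6) for k, v in p_y.items()},
      independent)   # marginals ≈ {1: 0.2, 2: 0.8}, {1: 0.3, 2: 0.7}; True
```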

Problem 2.17. Given f(x, y) = (1/π) exp[−0.5(x² + 2xy + 5y²)]. Find f_1(x) and f_2(y).

Solution

(calculate yourself).

Quite often, when studying random variables, one has to deal with two, three, or even more random variables. For example, a two-dimensional random variable $\left(X,\ Y\right)$ can describe the hit point of a projectile, where the random variables $X,\ Y$ are the abscissa and the ordinate, respectively. The performance of a randomly selected student during an examination session is characterized by an $n$-dimensional random variable $\left(X_1,\ X_2,\ \dots ,\ X_n\right)$, where the random variables $X_1,\ X_2,\ \dots ,\ X_n$ are the grades recorded in the grade book in the various subjects.

The set of $n$ random variables $\left(X_1,\ X_2,\ \dots ,\ X_n\right)$ is called a random vector. We restrict ourselves to the case $\left(X,\ Y\right)$.

Let $X$ be a discrete random variable with possible values $x_1,x_2,\ \dots ,\ x_n$, and $Y$ be a discrete random variable with possible values $y_1,y_2,\ \dots ,\ y_n$.

Then a discrete two-dimensional random variable $\left(X,\ Y\right)$ can take the values $\left(x_i,\ y_j\right)$ with probabilities $p_{ij}=P\left(\left(X=x_i\right)\left(Y=y_j\right)\right)=P\left(X=x_i\right)P\left(Y=y_j|X=x_i\right)$. Here $P\left(Y=y_j|X=x_i\right)$ is the conditional probability that the random variable $Y$ takes the value $y_j$ given that the random variable $X$ takes the value $x_i$.

The probability that the random variable $X$ takes the value $x_i$ is $p_i=\sum_j{p_{ij}}$. The probability that the random variable $Y$ takes the value $y_j$ is $q_j=\sum_i{p_{ij}}$.

$$P\left(X=x_i|Y=y_j\right)=\frac{P\left(\left(X=x_i\right)\left(Y=y_j\right)\right)}{P\left(Y=y_j\right)}=\frac{p_{ij}}{q_j}.$$

$$P\left(Y=y_j|X=x_i\right)=\frac{P\left(\left(X=x_i\right)\left(Y=y_j\right)\right)}{P\left(X=x_i\right)}=\frac{p_{ij}}{p_i}.$$

Example 1 . The distribution of a two-dimensional random variable is given:

$\begin{array}{|c|c|c|}
\hline
X\backslash Y & 2 & 3 \\
\hline
-1 & 0.15 & 0.25 \\
\hline
0 & 0.28 & 0.13 \\
\hline
1 & 0.09 & 0.1 \\
\hline
\end{array}$

Let us define the distribution laws for the random variables $X$ and $Y$. Let us find the conditional distributions of the random variable $X$ under the condition $Y=2$ and the random variable $Y$ under the condition $X=0$.

Let's fill in the following table:

$\begin{array}{|c|c|c|c|c|}
\hline
X\backslash Y & 2 & 3 & p_i & p_{ij}/q_1 \\
\hline
-1 & 0.15 & 0.25 & 0.4 & 0.29 \\
\hline
0 & 0.28 & 0.13 & 0.41 & 0.54 \\
\hline
1 & 0.09 & 0.1 & 0.19 & 0.17 \\
\hline
q_j & 0.52 & 0.48 & 1 & \\
\hline
p_{ij}/p_2 & 0.68 & 0.32 & & \\
\hline
\end{array}$

Let us explain how the table is filled. The values in the first three columns of the first four rows are taken from the problem statement. The sum of the numbers in the 2nd and 3rd columns of the 2nd (3rd) row is written in the 4th column of the 2nd (3rd) row. The sum of the numbers in the 2nd and 3rd columns of the 4th row is written in the 4th column of the 4th row.

The sum of the numbers in the 2nd, 3rd and 4th rows of the 2nd (3rd) column is written in the 5th row of the 2nd (3rd) column. Each number in the 2nd column is divided by $q_1=0.52$, the result is rounded to two decimal places and written in the 5th column. The numbers in the 2nd and 3rd columns of the 3rd row are divided by $p_2=0.41$, the result is rounded to two decimal places and written in the last row.

Then the law of distribution of the random variable $X$ has the following form.

$\begin{array}{|c|c|c|c|}
\hline
X & -1 & 0 & 1 \\
\hline
p_i & 0.4 & 0.41 & 0.19 \\
\hline
\end{array}$

The law of distribution of the random variable $Y$.

$\begin{array}{|c|c|c|}
\hline
Y & 2 & 3 \\
\hline
q_j & 0.52 & 0.48 \\
\hline
\end{array}$

The conditional distribution of the random variable $X$ under the condition $Y=2$ has the following form.

$\begin{array}{|c|c|c|c|}
\hline
X & -1 & 0 & 1 \\
\hline
p_{ij}/q_1 & 0.29 & 0.54 & 0.17 \\
\hline
\end{array}$

The conditional distribution of the random variable $Y$ under the condition $X=0$ has the following form.

$\begin{array}{|c|c|c|}
\hline
Y & 2 & 3 \\
\hline
p_{ij}/p_2 & 0.68 & 0.32 \\
\hline
\end{array}$
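For comparison, a short Python sketch reproduces the computations of Example 1: the marginals $p_i$, $q_j$ and the two conditional distributions, rounded to two decimal places as in the tables above.

```python
# Example 1: marginal and conditional distributions of (X, Y).
joint = {
    (-1, 2): 0.15, (-1, 3): 0.25,
    ( 0, 2): 0.28, ( 0, 3): 0.13,
    ( 1, 2): 0.09, ( 1, 3): 0.10,
}

p = {x: sum(v for (xi, y), v in joint.items() if xi == x) for x in (-1, 0, 1)}   # p_i
q = {y: sum(v for (x, yj), v in joint.items() if yj == y) for y in (2, 3)}       # q_j
print(p)  # ≈ {-1: 0.4, 0: 0.41, 1: 0.19}
print(q)  # ≈ {2: 0.52, 3: 0.48}

# Conditional distribution of X given Y = 2 and of Y given X = 0.
cond_x_given_y2 = {x: round(joint[(x, 2)] / q[2], 2) for x in (-1, 0, 1)}
cond_y_given_x0 = {y: round(joint[(0, y)] / p[0], 2) for y in (2, 3)}
print(cond_x_given_y2)  # {-1: 0.29, 0: 0.54, 1: 0.17}
print(cond_y_given_x0)  # {2: 0.68, 3: 0.32}
```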

Example 2. There are six pencils, two of which are red. The pencils are placed in two boxes: $2$ pencils are put into the first box and two into the second. $X$ is the number of red pencils in the first box, and $Y$ is the number in the second. Write the distribution law for the system of random variables $(X,\ Y)$.

Let the discrete random variable $X$ be the number of red pencils in the first box, and the discrete random variable $Y$ be the number of red pencils in the second box. The possible values of the random variables $X,\ Y$ are $X:0,\ 1,\ 2$ and $Y:0,\ 1,\ 2$, respectively. Then the discrete two-dimensional random variable $\left(X,\ Y\right)$ can take the values $\left(x,\ y\right)$ with probabilities $P=P\left(\left(X=x\right)\left(Y=y\right)\right)=P\left(X=x\right)\cdot P\left(Y=y|X=x\right)$, where $P\left(Y=y|X=x\right)$ is the conditional probability that the random variable $Y$ takes the value $y$ given that the random variable $X$ takes the value $x$. We represent the correspondence between the values $\left(x,\ y\right)$ and the probabilities $P\left(\left(X=x\right)\left(Y=y\right)\right)$ in the following table.

$\begin{array}{|c|c|c|c|}
\hline
X\backslash Y & 0 & 1 & 2 \\
\hline
0 & \frac{1}{15} & \frac{4}{15} & \frac{1}{15} \\
\hline
1 & \frac{4}{15} & \frac{4}{15} & 0 \\
\hline
2 & \frac{1}{15} & 0 & 0 \\
\hline
\end{array}$

The rows of the table correspond to the values of $X$ and the columns to the values of $Y$; the probabilities $P\left(\left(X=x\right)\left(Y=y\right)\right)$ stand at the intersection of the corresponding row and column. Let us calculate these probabilities using the classical definition of probability and the multiplication theorem for probabilities of dependent events.

$$P\left(\left(X=0\right)\left(Y=0\right)\right)=\frac{C^2_4}{C^2_6}\cdot \frac{C^2_2}{C^2_4}=\frac{6}{15}\cdot \frac{1}{6}=\frac{1}{15};$$

$$P\left(\left(X=0\right)\left(Y=1\right)\right)=\frac{C^2_4}{C^2_6}\cdot \frac{C^1_2\cdot C^1_2}{C^2_4}=\frac{6}{15}\cdot \frac{2\cdot 2}{6}=\frac{4}{15};$$

$$P\left(\left(X=0\right)\left(Y=2\right)\right)=\frac{C^2_4}{C^2_6}\cdot \frac{C^2_2}{C^2_4}=\frac{6}{15}\cdot \frac{1}{6}=\frac{1}{15};$$

$$P\left(\left(X=1\right)\left(Y=0\right)\right)=\frac{C^1_2\cdot C^1_4}{C^2_6}\cdot \frac{C^2_3}{C^2_4}=\frac{2\cdot 4}{15}\cdot \frac{3}{6}=\frac{4}{15};$$

$$P\left(\left(X=1\right)\left(Y=1\right)\right)=\frac{C^1_2\cdot C^1_4}{C^2_6}\cdot \frac{C^1_1\cdot C^1_3}{C^2_4}=\frac{2\cdot 4}{15}\cdot \frac{1\cdot 3}{6}=\frac{4}{15};$$

$$P\left(\left(X=2\right)\left(Y=0\right)\right)=\frac{C^2_2}{C^2_6}\cdot \frac{C^2_4}{C^2_4}=\frac{1}{15}\cdot 1=\frac{1}{15}.$$

Since in the distribution law (the resulting table) the entire set of events forms a complete group of events, the sum of the probabilities should be equal to 1. Let's check this:

$$\sum_{i,\ j}{p_{ij}}=\frac{1}{15}+\frac{4}{15}+\frac{1}{15}+\frac{4}{15}+\frac{4}{15}+\frac{1}{15}=1.$$
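The same probabilities can be checked with a few lines of Python using binomial coefficients (math.comb); the sketch rebuilds the table of Example 2 from the classical definition of probability and verifies that the probabilities sum to 1.

```python
from fractions import Fraction
from math import comb

# Example 2: 6 pencils, 2 red; 2 pencils go to the first box, then 2 to the second.
# X, Y = number of red pencils in the first and in the second box.
def p_first(x):                   # P(X = x): choose x red of 2 and 2 - x non-red of 4
    return Fraction(comb(2, x) * comb(4, 2 - x), comb(6, 2))

def p_second_given_first(y, x):   # P(Y = y | X = x), drawing 2 of the remaining 4 pencils
    return Fraction(comb(2 - x, y) * comb(2 + x, 2 - y), comb(4, 2))

table = {(x, y): p_first(x) * p_second_given_first(y, x)
         for x in range(3) for y in range(3 - x)}  # y cannot exceed the remaining red pencils
print(table[(0, 0)], table[(1, 1)], table[(2, 0)])  # 1/15 4/15 1/15
print(sum(table.values()))                           # 1
```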

Distribution function of a two-dimensional random variable

The distribution function of a two-dimensional random variable $\left(X,\ Y\right)$ is the function $F\left(x,\ y\right)$ which, for any real numbers $x$ and $y$, equals the probability of the joint occurrence of the two events $\left\{X < x\right\}$ and $\left\{Y < y\right\}$. Thus, by definition,

$$F\left(x,\ y\right)=P\left\{X < x,\ Y < y\right\}.$$

For a discrete two-dimensional random variable, the distribution function is found by summing all the probabilities $p_{ij}$ for which $x_i < x,\ y_j < y$, that is,

$$F\left(x,\ y\right)=\sum_{x_i < x}{\sum_{y_j < y}{p_{ij}}}.$$

Properties of the distribution function of a two-dimensional random variable.

1 . The distribution function $F\left(x,\ y\right)$ is bounded, that is, $0\le F\left(x,\ y\right)\le 1$.

2 . $F\left(x,\ y\right)$ is non-decreasing in each of its arguments with the other fixed, i.e. $F\left(x_2,\ y\right)\ge F\left(x_1,\ y\right)$ for $x_2>x_1$, $F\left(x,\ y_2\right)\ge F\left(x,\ y_1\right)$ for $y_2>y_1$.

3 . If at least one of the arguments takes the value $-\infty $, then the distribution function will be equal to zero, i.e. $F\left(-\infty ,\ y\right)=F\left(x,\ -\infty \right ),\ F\left(-\infty ,\ -\infty \right)=0$.

4 . If both arguments take the value $+\infty $, then the distribution function will be equal to $1$, i.e. $F\left(+\infty ,\ +\infty \right)=1$.

5 . In the case when exactly one of the arguments takes the value $+\infty $, the distribution function $F\left(x,\ y\right)$ becomes the distribution function of the random variable corresponding to the other argument, i.e. $F\left(x,\ +\infty \right)=F_1\left(x\right)=F_X\left(x\right),\ F\left(+\infty ,\ y\right)=F_2\left(y\right)=F_Y\left(y\right)$.

6 . $F\left(x,\ y\right)$ is left continuous for each of its arguments, i.e.

$$\lim_{x\to x_0-0} F\left(x,\ y\right)=F\left(x_0,\ y\right),\qquad \lim_{y\to y_0-0} F\left(x,\ y\right)=F\left(x,\ y_0\right).$$

Example 3 . Let a discrete two-dimensional random variable $\left(X,\ Y\right)$ be given by a distribution series.

$\begin{array}{|c|c|c|}
\hline
X\backslash Y & 0 & 1 \\
\hline
0 & \frac{1}{6} & \frac{2}{6} \\
\hline
1 & \frac{2}{6} & \frac{1}{6} \\
\hline
\end{array}$

Then the distribution function:

$F(x,y)=\left\{\begin{matrix}
0, & \text{for } x\le 0,\ y\le 0 \\
0, & \text{for } x\le 0,\ 0< y\le 1 \\
0, & \text{for } x\le 0,\ y>1 \\
0, & \text{for } 0< x\le 1,\ y\le 0 \\
\frac{1}{6}, & \text{for } 0< x\le 1,\ 0 < y\le 1 \\
\frac{1}{6}+\frac{2}{6}=\frac{1}{2}, & \text{for } 0< x\le 1,\ y>1 \\
0, & \text{for } x>1,\ y\le 0 \\
\frac{1}{6}+\frac{2}{6}=\frac{1}{2}, & \text{for } x>1,\ 0< y\le 1 \\
\frac{1}{6}+\frac{2}{6}+\frac{2}{6}+\frac{1}{6}=1, & \text{for } x>1,\ y>1 \\
\end{matrix}\right.$
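A small sketch that evaluates the same $F(x,\ y)$ by direct summation over $x_i < x$, $y_j < y$ and prints its value at one test point in each region of the piecewise formula above:

```python
from fractions import Fraction

# Example 3: joint law of (X, Y) and its distribution function F(x, y).
joint = {(0, 0): Fraction(1, 6), (0, 1): Fraction(2, 6),
         (1, 0): Fraction(2, 6), (1, 1): Fraction(1, 6)}

def F(x, y):
    """F(x, y) = sum of p(xi, yj) over xi < x and yj < y."""
    return sum(p for (xi, yj), p in joint.items() if xi < x and yj < y)

# One test point inside each region of the piecewise formula:
for x in (-1, 0.5, 2):
    for y in (-1, 0.5, 2):
        print(f"F({x}, {y}) = {F(x, y)}")
# F is 0 when x <= 0 or y <= 0, 1/6 for 0 < x, y <= 1,
# 1/2 on the two half-strips, and 1 for x > 1, y > 1.
```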