Joint probability distribution


For bivariate distributions of continuous random variables, the analogue of the two-dimensional array of probabilities is a region of the x–y (Cartesian) plane; above that region, over the area of interest, sits a surface representing the probability density function of the bivariate distribution. A bivariate distribution gives the probabilities of joint outcomes when a scenario involves two random variables; for discrete variables it can be presented in list or table form. For the bivariate normal distribution, two pairs of random variables that are associated with the same multivariate transform have the same joint PDF, since the multivariate transform completely determines the joint PDF; this fact can be used to show that jointly normal, uncorrelated random variables are independent. In what follows we extend the definition of the conditional probability of events in order to find the conditional probability distribution of a random variable X given a value of Y, and we investigate one particular joint probability distribution, the bivariate normal, in detail.

Figure: Many sample observations (black) are shown from a joint probability distribution; the marginal densities are shown as well.

Given random variables X, Y, ..., that are defined on a probability space, the joint probability distribution for X, Y, ... is a probability distribution that gives the probability that each of X, Y, ... falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution.

The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables.

  • 1 Examples
  • 2 Joint cumulative distribution function
  • 3 Joint density function or mass function
  • 4 Additional properties

Examples

Draws from an urn

Suppose each of two urns contains twice as many red balls as blue balls, and no others, and suppose one ball is randomly selected from each urn, with the two draws independent of each other. Let A and B be discrete random variables associated with the outcomes of the draw from the first urn and second urn respectively. The probability of drawing a red ball from either of the urns is 2/3, and the probability of drawing a blue ball is 1/3. We can present the joint probability distribution as the following table:

            A = Red             A = Blue            P(B)
B = Red     (2/3)(2/3) = 4/9    (1/3)(2/3) = 2/9    4/9 + 2/9 = 2/3
B = Blue    (2/3)(1/3) = 2/9    (1/3)(1/3) = 1/9    2/9 + 1/9 = 1/3
P(A)        4/9 + 2/9 = 2/3     2/9 + 1/9 = 1/3

Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. In any one cell the probability of a particular combination occurring is (since the draws are independent) the product of the probability of the specified result for A and the probability of the specified result for B. The probabilities in these four cells sum to 1, as is always true for probability distributions.

Moreover, the final row and the final column give the marginal probability distribution for A and the marginal probability distribution for B respectively. For example, the first cell of the final row gives the sum of the probabilities for A being red, regardless of which possibility for B occurs, as 2/3. Thus the marginal probability distribution for A gives A's probabilities unconditional on B, in a margin of the table.
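The whole table can be reproduced mechanically. Here is a minimal sketch in Python (the dictionary layout and variable names are our own, not from the article): build the joint cells as products of marginals, then recover the marginals again as row and column sums.

```python
from fractions import Fraction

# Marginal distribution of a single draw: red with probability 2/3, blue 1/3.
marginal = {"red": Fraction(2, 3), "blue": Fraction(1, 3)}

# Independent draws: each joint cell is the product of the two marginals.
joint = {(a, b): marginal[a] * marginal[b]
         for a in marginal for b in marginal}

# Recover the marginals as row/column sums (the margins of the table).
p_A = {a: sum(joint[(a, b)] for b in marginal) for a in marginal}
p_B = {b: sum(joint[(a, b)] for a in marginal) for b in marginal}

assert sum(joint.values()) == 1      # the four inner cells sum to 1
assert p_A == marginal and p_B == marginal
print(joint[("red", "blue")])        # (2/3)*(1/3) = 2/9
```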

Coin flips

Consider the flip of two fair coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. Each coin flip is a Bernoulli trial and has a Bernoulli distribution. If a coin displays 'heads' then the associated random variable takes the value 1, and it takes the value 0 otherwise. The probability of each of these outcomes is 1/2, so the marginal (unconditional) mass functions are

$P(A) = 1/2 \quad \text{for} \quad A \in \{0,1\};$
$P(B) = 1/2 \quad \text{for} \quad B \in \{0,1\}.$

The joint probability mass function of A and B defines probabilities for each pair of outcomes. All possible outcomes are

$(A=0, B=0), \quad (A=0, B=1), \quad (A=1, B=0), \quad (A=1, B=1).$

Since each outcome is equally likely, the joint probability mass function becomes

$P(A, B) = 1/4 \quad \text{for} \quad A, B \in \{0,1\}.$

Since the coin flips are independent, the joint probability mass function is the product of the marginals:

$P(A, B) = P(A)\,P(B) \quad \text{for} \quad A, B \in \{0,1\}.$

Roll of a die

Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3, or 5) and B = 0 otherwise.

     1   2   3   4   5   6
A    0   1   0   1   0   1
B    0   1   1   0   1   0

Then, the joint distribution of A and B, expressed as a probability mass function, is


$P(A=0, B=0) = P\{1\} = \tfrac{1}{6}, \qquad P(A=1, B=0) = P\{4, 6\} = \tfrac{2}{6},$
$P(A=0, B=1) = P\{3, 5\} = \tfrac{2}{6}, \qquad P(A=1, B=1) = P\{2\} = \tfrac{1}{6}.$

These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1.
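The same mass function can be obtained by brute-force enumeration of the six equally likely faces. A minimal sketch (the indicator helpers A and B mirror the definitions above):

```python
from collections import Counter
from fractions import Fraction

A = lambda n: 1 if n % 2 == 0 else 0       # indicator: n is even
B = lambda n: 1 if n in (2, 3, 5) else 0   # indicator: n is prime

# Each face 1..6 carries probability 1/6; accumulate mass per (A, B) pair.
joint = Counter()
for n in range(1, 7):
    joint[(A(n), B(n))] += Fraction(1, 6)

# Matches the four probabilities above: 1/6, 2/6, 2/6, 1/6.
print(dict(joint))
assert sum(joint.values()) == 1
```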

Bivariate normal distribution

Figure: bivariate normal joint density.

The multivariate normal distribution, which is a continuous distribution, is the most commonly encountered distribution in statistics. When there are specifically two random variables, this is the bivariate normal distribution, shown in the graph, with the possible values of the two variables plotted in two of the dimensions and the value of the density function for any pair of such values plotted in the third dimension. The probability that the two variables together fall in any region of their two dimensions is given by the volume under the density function above that region.
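This volume interpretation can be checked numerically. A sketch using SciPy's multivariate_normal; the mean, covariance, and the rectangle [0, 1] x [0, 1] are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

# An illustrative bivariate normal: zero means, unit variances, correlation 0.5.
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])

# Volume under the density over [0, 1] x [0, 1], by a simple grid sum ...
xs, ys = np.linspace(0, 1, 200), np.linspace(0, 1, 200)
X, Y = np.meshgrid(xs, ys)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
volume = rv.pdf(np.dstack((X, Y))).sum() * dx * dy

# ... and by inclusion-exclusion on the joint CDF over the same rectangle.
exact = rv.cdf([1, 1]) - rv.cdf([1, 0]) - rv.cdf([0, 1]) + rv.cdf([0, 0])
print(volume, exact)  # the two estimates agree to a few decimal places
```

The inclusion-exclusion line anticipates the joint cumulative distribution function defined in the next section.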

Joint cumulative distribution function

For a pair of random variables X, Y, the joint cumulative distribution function (CDF) F_{X,Y} is given by[1]: p. 89

$F_{X,Y}(x,y) = \operatorname{P}(X \leq x,\, Y \leq y)$

(Eq.1)

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.

For N random variables X_1, ..., X_N, the joint CDF F_{X_1,...,X_N} is given by

$F_{X_1,\ldots,X_N}(x_1,\ldots,x_N) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N)$

(Eq.2)

Interpreting the N random variables as a random vector X = (X_1, ..., X_N)^T yields a shorter notation:

$F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N)$
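Eq.1 translates directly into an empirical estimate: the fraction of sample pairs with X_i <= x and Y_i <= y. A minimal sketch on simulated data (the correlated-normal sample is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_samples = rng.normal(size=n)                    # samples of X
y_samples = 0.5 * x_samples + rng.normal(size=n)  # correlated with X

def empirical_joint_cdf(x, y):
    """Fraction of sample pairs with X <= x and Y <= y (Eq.1)."""
    return np.mean((x_samples <= x) & (y_samples <= y))

print(empirical_joint_cdf(0.0, 0.0))  # estimates P(X <= 0, Y <= 0)
```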

Joint density function or mass function

Discrete case

The joint probability mass function of two discrete random variables X, Y is:

$p_{X,Y}(x,y) = \mathrm{P}(X = x \text{ and } Y = y)$

(Eq.3)

or, written in terms of conditional distributions,

$p_{X,Y}(x,y) = \mathrm{P}(Y = y \mid X = x) \cdot \mathrm{P}(X = x) = \mathrm{P}(X = x \mid Y = y) \cdot \mathrm{P}(Y = y)$

where P(Y = y | X = x) is the probability of Y = y given that X = x.

The generalization of the preceding two-variable case is the joint probability distribution of n discrete random variables X_1, X_2, ..., X_n, which is:

$p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \mathrm{P}(X_1 = x_1 \text{ and } \dots \text{ and } X_n = x_n)$

(Eq.4)

or equivalently

$p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \mathrm{P}(X_1 = x_1) \cdot \mathrm{P}(X_2 = x_2 \mid X_1 = x_1) \cdot \mathrm{P}(X_3 = x_3 \mid X_1 = x_1, X_2 = x_2) \cdots \mathrm{P}(X_n = x_n \mid X_1 = x_1, X_2 = x_2, \dots, X_{n-1} = x_{n-1}).$

This identity is known as the chain rule of probability.
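A minimal sketch of the chain rule for three binary variables; every number in the conditional tables below is invented, chosen only so that each table is a valid distribution:

```python
# P(X1), P(X2 | X1), and P(X3 | X1, X2) as small made-up lookup tables.
p_x1 = {0: 0.4, 1: 0.6}
p_x2_given = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_x3_given = {(x1, x2): {0: 0.5, 1: 0.5}
              for x1 in (0, 1) for x2 in (0, 1)}

def joint(x1, x2, x3):
    """Chain rule: P(x1) * P(x2 | x1) * P(x3 | x1, x2)."""
    return p_x1[x1] * p_x2_given[x1][x2] * p_x3_given[(x1, x2)][x3]

# The eight joint probabilities sum to 1, as the next paragraph requires.
total = sum(joint(a, b, c)
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
assert abs(total - 1.0) < 1e-12
```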

Since these are probabilities, we have in the two-variable case

$\sum_i \sum_j \mathrm{P}(X = x_i \text{ and } Y = y_j) = 1,$

which generalizes for n discrete random variables X_1, X_2, ..., X_n to

$\sum_i \sum_j \cdots \sum_k \mathrm{P}(X_1 = x_{1i}, X_2 = x_{2j}, \dots, X_n = x_{nk}) = 1.$

Continuous case

The joint probability density function f_{X,Y}(x,y) for two continuous random variables is defined as the derivative of the joint cumulative distribution function (see Eq.1):

$f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\, \partial y}$

(Eq.5)

This is equal to:

$f_{X,Y}(x,y) = f_{Y \mid X}(y \mid x)\, f_X(x) = f_{X \mid Y}(x \mid y)\, f_Y(y)$

where f_{Y|X}(y | x) and f_{X|Y}(x | y) are the conditional distributions of Y given X = x and of X given Y = y respectively, and f_X(x) and f_Y(y) are the marginal distributions for X and Y respectively.


The definition extends naturally to more than two random variables:

$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \frac{\partial^n F_{X_1,\ldots,X_n}(x_1,\ldots,x_n)}{\partial x_1 \cdots \partial x_n}$

(Eq.6)

Again, since these are probability distributions, one has

$\int_x \int_y f_{X,Y}(x,y)\, dy\, dx = 1$

respectively

$\int_{x_1} \cdots \int_{x_n} f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\, dx_1 \cdots dx_n = 1$
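Both normalization conditions can be verified numerically for a concrete density. A sketch with two independent Exp(1) variables (the choice of density is ours, purely for illustration):

```python
import numpy as np
from scipy import integrate

def f_xy(y, x):
    """Joint density of two independent Exp(1) variables on [0, inf)^2.
    dblquad passes the inner integration variable (y) first."""
    return np.exp(-x) * np.exp(-y)

total, _err = integrate.dblquad(f_xy, 0, np.inf,
                                lambda x: 0, lambda x: np.inf)
print(total)  # ~1.0, as required of a joint density
```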

Mixed case

The 'mixed joint density' may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each type we have

$f_{X,Y}(x,y) = f_{X \mid Y}(x \mid y)\, \mathrm{P}(Y = y) = \mathrm{P}(Y = y \mid X = x)\, f_X(x).$

One example of a situation in which one may wish to find the cumulative distribution of one random variable which is continuous and another random variable which is discrete arises when one wishes to use a logistic regression in predicting the probability of a binary outcome Y conditional on the value of a continuously distributed variable X. One must use the 'mixed' joint density when finding the cumulative distribution of this binary outcome because the input variables (X, Y) were initially defined in such a way that one could not collectively assign them either a probability density function or a probability mass function. Formally, f_{X,Y}(x,y) is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Either of these two decompositions can then be used to recover the joint cumulative distribution function:

$F_{X,Y}(x,y) = \sum_{t \leq y} \int_{s=-\infty}^{x} f_{X,Y}(s,t)\, ds.$

The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables.
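A sketch of the logistic-regression situation described above, with all parameters invented: X is standard normal, Y is binary with P(Y=1 | X=x) given by a logistic curve, and the joint CDF is recovered by summing over t <= y and integrating over s <= x as in the formula above:

```python
import numpy as np
from scipy import stats, integrate

def p_y_given_x(y, x, b0=-0.5, b1=2.0):
    """Logistic model P(Y=y | X=x) for y in {0, 1}; b0, b1 are made up."""
    p1 = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    return p1 if y == 1 else 1.0 - p1

def mixed_density(x, y):
    """f_{X,Y}(x, y) = P(Y=y | X=x) * f_X(x), with X ~ N(0, 1)."""
    return p_y_given_x(y, x) * stats.norm.pdf(x)

def joint_cdf(x, y):
    """F_{X,Y}(x, y): sum over t <= y, integrate s from -inf to x."""
    return sum(integrate.quad(mixed_density, -np.inf, x, args=(t,))[0]
               for t in (0, 1) if t <= y)

print(joint_cdf(np.inf, 1))  # total probability, ~1.0
```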

Additional properties

Joint distribution for independent variables

In general, two random variables X and Y are independent if and only if the joint cumulative distribution function satisfies

$F_{X,Y}(x,y) = F_X(x) \cdot F_Y(y)$

Two discrete random variables X and Y are independent if and only if the joint probability mass function satisfies

$P(X = x \text{ and } Y = y) = P(X = x) \cdot P(Y = y)$

for all x and y.

As the number of independent random events grows, the joint probability that all of them occur decreases rapidly to zero, exponentially in the number of events: for example, the probability that 20 independent flips of a fair coin all land heads is (1/2)^20, roughly one in a million.


Similarly, two absolutely continuous random variables are independent if and only if

$f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$

for all x and y. This means that acquiring any information about the value of one or more of the random variables leads to a conditional distribution of any other variable that is identical to its unconditional (marginal) distribution; thus no variable provides any information about any other variable.
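The factorization criterion also suggests a direct empirical check for discrete data: compare joint relative frequencies with products of marginal ones. A minimal sketch on variables that are independent by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.integers(0, 2, size=n)   # independent of y by construction
y = rng.integers(0, 3, size=n)

for a in range(2):
    for b in range(3):
        joint = np.mean((x == a) & (y == b))
        product = np.mean(x == a) * np.mean(y == b)
        # For independent variables these agree up to sampling noise.
        assert abs(joint - product) < 0.01
print("empirical joint ~ product of marginals for every cell")
```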

Joint distribution for conditionally dependent variables

If a subset A of the variables X_1, ..., X_n is conditionally dependent given another subset B of these variables, then the probability mass function of the joint distribution, P(X_1, ..., X_n), is equal to P(B) · P(A | B). Therefore, it can be efficiently represented by the lower-dimensional probability distributions P(B) and P(A | B). Such conditional independence relations can be represented with a Bayesian network or with copula functions.

Important named distributions

Named joint distributions that arise frequently in statistics include the multivariate normal distribution, the multivariate stable distribution, the multinomial distribution, the negative multinomial distribution, the multivariate hypergeometric distribution, and the elliptical distribution.


References


  1. ^ Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.

External links

  • Hazewinkel, Michiel, ed. (2001) [1994], 'Joint distribution', Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
  • Hazewinkel, Michiel, ed. (2001) [1994], 'Multi-dimensional distribution', Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
  • 'Joint continuous density function'. PlanetMath.