Introduction
As we know,
\( f(x,y)=\frac{xy}{36} \) for \( x=1,2,3;\ y=1,2,3 \)
is a probability function of a discrete random variable, and
\( f(x,y)=\frac{3}{5}x(x+y) \) for \( 0 \le x \le 1,\ 0 \le y \le 2 \)
is a probability function of a continuous random variable.
Among the various types of probability functions (discrete or continuous), some particularly important continuous probability functions are:
- gamma probability function
- beta probability function
- normal probability function
- exponential probability function
- chi-square probability function
Gamma Probability Function
In mathematics, the gamma function (represented by \(\Gamma\)) is one commonly used extension of the factorial function.
For any positive integer \( \alpha \), the gamma function satisfies
\( \Gamma(\alpha)=(\alpha-1)! \)
First derived by Daniel Bernoulli, and extending beyond \( \alpha \in \mathbb{Z}^+ \), \( \Gamma(\alpha) \) is defined for every \( \alpha > 0 \) via a convergent improper integral:
\( \Gamma(\alpha)=\int_0^\infty e^{-x}x^{\alpha-1}dx \) for \( \alpha > 0 \)
This is also known as the Euler integral of the second kind. (Euler's integral of the first kind is the beta function.)
Special Tips
The gamma function \( \Gamma(\alpha) \) can be seen as the solution to an interpolation problem:
find a smooth curve through the points \( y=(x-1)! \) for \( x \in \mathbb{Z}^+ \)
which satisfies
\( \Gamma(1)=1 \) and
\( \Gamma(\alpha)=(\alpha-1)! \) for \( \alpha \in \mathbb{Z}^+ \).
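As a quick numerical illustration of this interpolation property, here is a minimal sketch in Python (`math.gamma` is the standard-library gamma function):

```python
import math

# Gamma(alpha) = (alpha - 1)! at positive integers
for alpha in range(1, 6):
    print(alpha, math.gamma(alpha), math.factorial(alpha - 1))

# Gamma also interpolates smoothly between the integers:
print(math.gamma(2.5))  # ~1.3293, between 1! = 1 and 2! = 2
```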
Proof of Recurrence Relation
Prove that \( \Gamma(\alpha+1)=\alpha\Gamma(\alpha) \)
We know that
\( \Gamma(\alpha)=\int_0^\infty e^{-x}x^{\alpha-1}dx \) for \( \alpha > 0 \) ... (A)
Thus
\( \Gamma(\alpha+1)=\int_0^\infty e^{-x}x^{\alpha}dx \) for \( \alpha > 0 \) ... (B)
Using integration by parts on (B), we can write
\( \Gamma(\alpha+1)= \left [ \frac{e^{-x}}{-1}x^\alpha \right ]_0^\infty - \int_0^\infty \frac{e^{-x}}{-1} \alpha x^{\alpha-1} dx \)
Since \( e^{-x}x^\alpha \to 0 \) both as \( x \to \infty \) and as \( x \to 0 \), the bracketed term vanishes, so
\( \Gamma(\alpha+1)= \alpha \int_0^\infty e^{-x}x^{\alpha-1} dx \)
or
\( \Gamma(\alpha+1)= \alpha \Gamma(\alpha) \)
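The recurrence can also be verified numerically from the integral definition; a minimal sketch using `scipy.integrate.quad` (the value α = 3.7 is just an illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

def gamma_integral(alpha):
    # Euler integral of the second kind: integral of e^(-x) x^(alpha-1) over (0, inf)
    value, _ = quad(lambda x: np.exp(-x) * x ** (alpha - 1), 0, np.inf)
    return value

alpha = 3.7
print(gamma_integral(alpha + 1))      # Gamma(alpha + 1)
print(alpha * gamma_integral(alpha))  # alpha * Gamma(alpha): same number
```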
Important Properties
- \( \Gamma (\alpha +1) = \alpha\Gamma (\alpha )\)
- \( \Gamma (\alpha ) \Gamma (1-\alpha)= \frac{\pi}{\sin \alpha \pi} \) for non-integer \( \alpha \)
- \( \Gamma \left (\frac{1}{2} \right ) = \sqrt{\pi}\)
- \( \Gamma (\alpha ) = \begin{cases} \frac{\Gamma (\alpha+1 )}{\alpha} & \alpha \ne 0 \\ (\alpha-1 )!& \alpha \text{ is positive integer } \\ \infty & \alpha =0 \end{cases} \)
Definition for Gamma Function
The gamma function is defined by:
\( \Gamma(\alpha)=\int_0^\infty e^{-x}x^{\alpha-1}dx \) for \( \alpha > 0 \)
For positive real numbers \( \alpha \) and \( \beta \), it can equivalently be written with a scale parameter \( \beta \):
\( \Gamma(\alpha)=\int_0^\infty \frac{e^{-x/\beta}x^{\alpha-1}}{\beta^\alpha} dx \) for \( \alpha > 0 ,\ \beta > 0\)
Thus we have
\(\int_0^\infty e^{-x/\beta}x^{\alpha-1} dx = \Gamma(\alpha) \beta^\alpha\)
We can also write
\(\int_0^\infty \frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx = 1\)
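This identity is the workhorse for all the gamma-distribution integrals below, so a quick numerical sanity check may help; a sketch with illustrative values of α and β:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

alpha, beta = 2.5, 1.8  # illustrative values
lhs, _ = quad(lambda x: np.exp(-x / beta) * x ** (alpha - 1), 0, np.inf)
print(lhs, G(alpha) * beta ** alpha)  # the two numbers agree
```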
Gamma Probability Distribution
A random variable X is said to have a gamma distribution if its probability function is
\( f(x) =\frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} \) for \( x > 0 \)
where
\( \Gamma(\alpha)=\int_0^\infty \frac{e^{-x/\beta}x^{\alpha-1}}{\beta^\alpha} dx \) for \( \alpha > 0 ,\ \beta > 0\)
Here, the gamma distribution
- represents a continuous distribution defined over \( (0,\infty) \), parametrized by two positive real numbers \( \alpha \) (called the "shape parameter") and \( \beta \) (called the "scale parameter")
- predicts the wait time until the \( \alpha \)-th event occurs
- \( \alpha \): the number of events we are waiting for
- \( \beta \): the average time between events of the Poisson process (the reciprocal of the event rate)
A few more things to note (see the simulation sketch after this list):
- the Poisson, exponential, and gamma distributions model different aspects of the same process: the Poisson process
- the Poisson distribution models the number of events in a fixed interval
- the exponential distribution predicts the wait time until the very first event
- the gamma distribution predicts the wait time until the \( \alpha \)-th event
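A simulation sketch of this relationship (assuming NumPy/SciPy; the rate and shape values are illustrative, not from the text): summing α independent exponential inter-arrival times gives the waiting time until the α-th event, which matches a Gamma(α, β = 1/rate) distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rate, k, n = 0.8, 5, 100_000  # events per minute; wait for the 5th event

# Waiting time for the k-th event = sum of k exponential inter-arrival times
waits = rng.exponential(scale=1 / rate, size=(n, k)).sum(axis=1)

gamma_wait = stats.gamma(a=k, scale=1 / rate)
print(waits.mean(), k / rate)                    # empirical mean vs alpha * beta
print(np.mean(waits <= 10), gamma_wait.cdf(10))  # empirical vs Gamma CDF
```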
pdf Theorem for Gamma Distribution
The total probability (T) for Gamma distribution is
\(T=\int_0^\infty f(x) dx \)
or
\(T=\int_0^\infty \frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(T=\frac{1}{\Gamma(\alpha) \beta^\alpha} \Gamma(\alpha) \beta^\alpha \)
or
\(T=1\)
Mean of Gamma Distribution
Let X be a random variable having a gamma distribution with probability function
\(f(x)=\frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} \) for \( x > 0 \)
Then, the mean of the gamma distribution is
\(E(X)=\int_0^\infty x. f(x) dx \)
or
\(E(X)=\int_0^\infty x \frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(E(X)=\int_0^\infty \frac{e^{-x/\beta}x^{\alpha}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(E(X)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \int_0^\infty e^{-x/\beta}x^{\alpha} dx \)
or
\(E(X)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \Gamma(\alpha+1) \beta^{\alpha+1} \)
or
\(E(X)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \alpha \Gamma(\alpha) \beta^{\alpha+1} \)
or
\(E(X)= \alpha\beta \)
Variance of Gamma Distribution
Let X be a random variable having a gamma distribution with probability function
\(f(x)=\frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} \) for \( x > 0 \)
Then, the mean of the gamma distribution is
\(E(X)= \alpha \beta \) ... (A)
Also, the \(E(X^2)\) of the gamma distribution is
\(E(X^2)=\int_0^\infty x^2. f(x) dx \)
or
\(E(X^2)=\int_0^\infty x^2 \frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(E(X^2)=\int_0^\infty \frac{e^{-x/\beta}x^{\alpha+1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(E(X^2)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \int_0^\infty e^{-x/\beta}x^{\alpha+1} dx \)
or
\(E(X^2)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \Gamma(\alpha+2) \beta^{\alpha+2} \)
or
\(E(X^2)= \frac{1}{\Gamma(\alpha) \beta^\alpha} \alpha (\alpha+1) \Gamma(\alpha) \beta^{\alpha+2} \)
or
\(E(X^2)= \alpha (\alpha+1) \beta^2 \) ... (B)
Now, using (A) and (B), the variance of gamma distribution is
\(V(X)= E(X^2)-E(X)^2 \)
or
\(V(X)= \alpha (\alpha+1) \beta^2-(\alpha \beta)^2 \)
or
\(V(X)= \alpha \beta^2 \)
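A check of both moments against `scipy.stats.gamma`, which uses the same shape–scale parametrization (`a` = α, `scale` = β); the parameter values are illustrative:

```python
from scipy import stats

alpha, beta = 3.0, 2.0  # illustrative shape and scale
mean, var = stats.gamma.stats(a=alpha, scale=beta, moments="mv")
print(mean, alpha * beta)      # both 6.0
print(var, alpha * beta ** 2)  # both 12.0
```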
MGF of Gamma Distribution
Let X be a random variable having a gamma distribution with probability function
\(f(x)=\frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} \) for \( x > 0 \)
Then, the MGF of the gamma distribution is
\(M_X(t)=\int_0^\infty e^{tx} f(x) dx \)
or
\(M_X(t)=\int_0^\infty e^{tx} \frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(M_X(t)=\int_0^\infty \frac{e^{-x \frac{1-t \beta}{\beta} }x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} dx \)
or
\(M_X(t)=\frac{1}{\Gamma(\alpha) \beta^\alpha} \int_0^\infty e^{-x \frac{1-t \beta}{\beta} }x^{\alpha-1} dx \)
or
\(M_X(t)=\frac{1}{\Gamma(\alpha) \beta^\alpha} \Gamma(\alpha) \left( \frac{\beta}{1-t \beta} \right )^\alpha \)
or
\(M_X(t)=(1-t \beta)^{-\alpha} \) for \( t < \frac{1}{\beta} \)
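A numerical check of the MGF formula (a sketch; note the requirement t < 1/β for the integral to converge, and the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

alpha, beta, t = 2.0, 1.5, 0.3  # illustrative; needs t < 1/beta ~ 0.667
pdf = lambda x: np.exp(-x / beta) * x ** (alpha - 1) / (G(alpha) * beta ** alpha)
mgf, _ = quad(lambda x: np.exp(t * x) * pdf(x), 0, np.inf)
print(mgf, (1 - t * beta) ** -alpha)  # the two numbers agree
```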
MGF of Gamma Distribution with one parameter
The MGF of the gamma distribution with the single parameter \( \alpha \) (that is, with \( \beta = 1 \)) is
\(M_X(t)=(1-t)^{-\alpha} \)
Also note that, as in the Poisson distribution, the mean and variance of the one-parameter gamma distribution are equal (both equal \( \alpha \)). However, the Poisson distribution is discrete while the gamma distribution is continuous.
Examples
- In the morning rush hour, customers enter a coffee shop at a rate of 8 customers every 10 minutes. The time between customer arrivals follows an exponential distribution, and the waiting time until the \( \alpha \)-th arrival follows a gamma distribution.
- Find the probability of at least 40 customers arriving in 45 minutes.
- Find the average waiting time until the 40th customer arrives.
- Find the probability that the time until the 40th customer arrives is at least 1 hour.
- CDF[GammaDistribution[40, 1.25], 45] = 0.273696
- Mean[GammaDistribution[40, 1.25]] = 50 min
- Probability[x >= 60, GammaDistribution[40, 1.25]] = 0.107276 (time is measured in minutes, so "1 hour" is x = 60)
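The same three answers can be reproduced with `scipy.stats.gamma` (a sketch; `a` is the shape α = 40 and `scale` is β = 1.25 minutes):

```python
from scipy import stats

# Rate 8 customers / 10 min => mean inter-arrival time beta = 1.25 min
T40 = stats.gamma(a=40, scale=1.25)  # waiting time until the 40th arrival

print(T40.cdf(45))  # P(40th arrival within 45 min) ~ 0.2737
print(T40.mean())   # average wait until the 40th customer: 50 min
print(T40.sf(60))   # P(wait at least 1 hour) ~ 0.1073
```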
- I went to a shop and joined a line with two people ahead of me. One is being served and the other is waiting. Their service times S1 and S2 are independent exponential random variables with a mean of 2 minutes (thus the mean service rate is 0.5/minute). What is the probability that I wait more than 5 minutes in the queue?
Solution
Please note that
- \( \alpha \): the number of events we are waiting for — here \( \alpha = 2 \) service completions
- the service rate is 0.5/minute, so the scale parameter is \( \beta = 1/0.5 = 2 \) minutes
My waiting time \( T = S_1 + S_2 \) therefore has a gamma distribution with \( \alpha = 2 \) and \( \beta = 2 \). The event \( T > 5 \) is the same as "fewer than 2 service completions occur in 5 minutes", so we can sum Poisson probabilities with mean \( \lambda = 0.5 \times 5 = 2.5 \):
Probability = P(N=0) + P(N=1) = \( e^{-2.5}(1+2.5) \) = 0.2873
So there is a less-than-30% chance that I wait more than 5 minutes in the queue.
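A sketch reproducing this answer both ways, via the Poisson count and via the gamma waiting time:

```python
from scipy import stats

rate, t = 0.5, 5.0  # completions per minute; time window in minutes
lam = rate * t      # Poisson mean over 5 minutes = 2.5

# Fewer than 2 service completions in 5 minutes
p = stats.poisson.pmf(0, lam) + stats.poisson.pmf(1, lam)
print(p)  # 0.2873

# Equivalently: waiting time for the 2nd completion exceeds 5 minutes
print(stats.gamma(a=2, scale=1 / rate).sf(5.0))  # same 0.2873
```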
Comparison chart

| | Exponential \( (\alpha=1,\ \beta=\theta) \) | Gamma \( (\alpha,\ \beta) \) | Chi-square \( (\alpha=\nu/2,\ \beta=2) \) |
| --- | --- | --- | --- |
| Probability function | \(\frac{1}{\theta} e^{-x/\theta}\) | \(\frac{e^{-x/\beta}x^{\alpha-1}}{\Gamma(\alpha) \beta^\alpha} \) | \( \frac{e^{-x/2}x^{\nu/2-1}}{\Gamma(\nu/2) 2^{\nu/2}} \) |
| Mean | \(\theta \) | \(\alpha \beta \) | \( \nu \) |
| Variance | \(\theta^2\) | \(\alpha \beta^2 \) | \( 2 \nu \) |
| MGF | \((1-t \theta)^{-1} \) | \((1-t \beta)^{-\alpha} \) | \( (1-2t )^{-\nu/2} \) |
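The first and last columns really are special cases of the middle one, as a quick `scipy.stats` check shows (a sketch; the θ and ν values are illustrative):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 8.0, 50)
theta, nu = 2.0, 6  # illustrative parameter values

# Exponential(theta) is Gamma(alpha=1, beta=theta)
print(np.allclose(stats.expon.pdf(x, scale=theta),
                  stats.gamma.pdf(x, a=1, scale=theta)))   # True

# Chi-square(nu) is Gamma(alpha=nu/2, beta=2)
print(np.allclose(stats.chi2.pdf(x, df=nu),
                  stats.gamma.pdf(x, a=nu / 2, scale=2)))  # True
```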
Beta Probability Function
Beta Function
In mathematics, the beta function, also called the Eulerian integral of the first kind, is defined by
\(\beta(m,n)=\int_0^1 x^{m-1} (1-x)^{n-1} dx\)
The beta function was studied by Euler and Legendre and was given its name by Jacques Binet; its symbol Β is the Greek capital letter beta (written here as \(\beta\)).
Properties of Beta Function
- \(\beta(m,n)=\beta(n,m) \)
Solution
Given that
\( \beta ( m, n)=\int_0^1 x^{m-1}(1-x)^{n-1}dx\)
Replace x=1-y then dx=-dy
Then y=1 when x=0 and y=0 when x=1
Now
\( \beta ( m, n)=-\int_1^0 (1-y)^{m-1}y^{n-1}dy \)
or \( \beta ( m, n)=\int_0^1 y^{n-1} (1-y)^{m-1}dy \)
or \( \beta ( m, n)= \beta ( n, m) \)
- Alternate form of the beta function: \(\beta(m,n)= \int_0^\infty \frac{x^{n-1}}{(x+1)^{m+n} } dx \)
Solution
Given that
\( \beta ( m, n)=\int_0^1 x^{m-1}(1-x)^{n-1}dx\)
Replace \( x=\frac{y}{y+1} \), that is, \( y= \frac{x}{1-x} \).
Then \(1-x=\frac{1}{1+y} \) and \( dx=\left ( \frac{1}{1+y} \right )^2 dy \)
Now
If x=0 then y=0
If x=1 then y=\( \infty\)
Hence, we have
\( \beta ( m, n)=\int_0^1 x^{m-1} (1-x)^{n-1} dx\)
or \( \beta(m,n)=\int_0^\infty \left ( \frac{y}{y+1} \right)^{m-1} \left ( \frac{1}{y+1} \right)^{n-1} \left ( \frac{1}{y+1} \right)^2 dy \)
or \( \beta(m,n)=\int_0^\infty \frac{y^{m-1}}{(y+1)^{m+n} } dy \)
Thus, using the principle of \(\beta(m,n)=\beta(n,m) \), we write
\( \beta(m,n)=\int_0^\infty \frac{x^{n-1}}{(x+1)^{m+n} } dx \)
- \(\beta(m,n)= \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}\)
Solution
Given that
\( \Gamma(m)=\int_0^\infty e^{-\beta} \beta^{m-1} d \beta \) ... (i)
(this is the usual integral \( \Gamma(m)=\int_0^\infty e^{-x} x^{m-1} dx \) with the dummy variable renamed to \( \beta \))
Also, since the scaled form gives \( \Gamma(n)=\int_0^\infty \frac{e^{-x/\theta} x^{n-1}}{\theta^n} dx \) for any \( \theta > 0 \), putting \( \theta=\frac{1}{\beta} \) yields
\( \Gamma(n)=\int_0^\infty e^{-x \beta} x^{n-1} \beta^n dx \) ... (ii)
Thus, multiplying (i) and (ii) we get
\( \Gamma(m)\Gamma(n)=\int_0^\infty \left[\int_0^\infty e^{-\beta (x+1)} \beta^{m+n-1} d \beta \right ] x^{n-1} dx \)
or \( \Gamma(m)\Gamma(n)=\int_0^\infty \frac{\Gamma(m+n)}{(1+x)^{m+n}} x^{n-1} dx \)
or \( \Gamma(m)\Gamma(n)= \Gamma(m+n) \int_0^\infty \frac{x^{n-1} }{(1+x)^{m+n}} dx \)
or \( \Gamma(m)\Gamma(n)= \Gamma(m+n) \beta(m,n)\)
or \( \beta(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}\)
- For positive integers m and n: \(\beta(m,n)= \frac{m+n}{mn} \frac{1}{\begin{pmatrix} m+n \\ m \end{pmatrix}}\)
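These properties are easy to check numerically; a minimal sketch (the parameter values are illustrative):

```python
from math import comb, gamma
from scipy.integrate import quad
from scipy.special import beta as B

# B(m, n) from the defining integral vs Gamma(m)Gamma(n)/Gamma(m+n)
m, n = 3.2, 1.7  # illustrative positive reals
val, _ = quad(lambda x: x ** (m - 1) * (1 - x) ** (n - 1), 0, 1)
print(val, B(m, n), gamma(m) * gamma(n) / gamma(m + n))  # all agree

# Integer case: B(m, n) = (m + n)/(m n) * 1/C(m + n, m)
m, n = 4, 6
print(B(m, n), (m + n) / (m * n) / comb(m + n, m))  # both ~0.0019841
```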
Beta Probability Function: Definition
A random variable X is said to have a beta probability function if its probability function is
\( f(x)=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} x^{m-1} (1-x)^{n-1} \) for \( 0 \le x \le 1 \)
It represents a probability distribution defined over [0,1] and parametrized by two positive values
m, n known as "shape parameters", which determine the "fatness" of the left and right tails of the probability function. The pdf is a power function of the variable x and of its reflection (1 − x).
pdf Theorem for Beta Distribution
The total probability T for Beta distribution is
\(T=\int_0^1 f(x) dx \)
or
\(T=\int_0^1 \frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} x^{m-1} (1-x)^{n-1} dx \)
or
\(T=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} \int_0^1 x^{m-1} (1-x)^{n-1} dx \)
or
\(T=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} \beta(m,n) \)
or
\(T=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)} \)
or \(T=1\)
Geometry of Beta Distribution
The beta distribution models a probability; therefore, its domain is bounded between 0 and 1. The beta distribution comes into play when we look at it through the lens of the binomial distribution.
For example, see a table given below.
| Distribution | Probability function | Role of p |
| --- | --- | --- |
| Binomial | \(f(x)=\begin{pmatrix} n\\x \end{pmatrix} p^x (1-p)^{n-x} \) | p is a probability (a parameter) |
| Beta | \(f(p)=\frac{1}{\beta(m,n)} p^{m -1} (1-p)^{n -1} \) | p is the random variable |
The difference between the binomial and the beta distribution is that the former models the number of successes (x), while the latter models the probability (p) of success.
In other words, the probability is a parameter in binomial; in the Beta distribution, the probability is a random variable.
Interpretation of m, n
- Think of m−1 as the number of successes and n−1 as the number of failures, just like the x and n−x terms in the binomial probability.
- As m becomes larger (more successes), the bulk of the probability distribution shifts towards the right, whereas an increase in n (more failures) moves the distribution towards the left.
- The distribution narrows if both m and n increase, for we are more certain.
- With m−1 successes and n−1 failures, Beta(2,2) means we got 1 success and 1 failure, so it makes sense that the probability of success is highest at 0.5.
Also,
Beta(1,1) means we observed zero successes and zero failures. Then the probability of success should be the same throughout [0,1]; indeed, the pdf is the horizontal line \( f(p)=1 \).
Mean of Beta Distribution
Let X be a random variable having a beta distribution with parameters m and n; then \( E(X)=\frac{m}{m+n} \)
Solution
Let X be a random variable having a beta distribution with parameters m and n; then the expected value of X is
\( E(X)=\int_0^1 x\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} x^{m-1}(1-x)^{n-1}dx \)
or
\( E(X)= \frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} \int_0^1x^m(1-x)^{n-1}dx\)
or
\( E(X)=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)}\frac{\Gamma(m+1)\Gamma(n)}{\Gamma(m+n+1)}\)
or
\( E(X)=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)}\frac{m\Gamma(m)\Gamma(n)}{(m+n)\Gamma(m+n)}\)
or
\( E(X)=\frac{m}{m+n} \)
Variance of Beta Distribution
Let X be a random variable having a beta distribution with parameters m and n; then
\( V(X)=\frac{mn}{(m+n)^2(m+n+1)} \)
Solution
Let X be a random variable having a beta distribution with parameters m and n; then the expected value of X is
\( E(X)=\frac{m}{m+n} \)
Also, the expected value of \(X^2\) is
\( E(X^2)=\int_0^1 x^2 \frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} x^{m-1}(1-x)^{n-1}dx \)
or
\( E(X^2)= \frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)} \int_0^1x^{m+1}(1-x)^{n-1}dx\)
or
\( E(X^2)=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)}\frac{\Gamma(m+2)\Gamma(n)}{\Gamma(m+n+2)}\)
or
\( E(X^2)=\frac{\Gamma(m+n)}{\Gamma(m)\Gamma(n)}\frac{m (m+1)\Gamma(m)\Gamma(n)}{(m+n)(m+n+1)\Gamma(m+n)}\)
or
\( E(X^2)=\frac{m(m+1)}{(m+n)(m+n+1)} \)
Now, the variance of beta distribution is
\( V(X)= E(X^2)-E(X)^2 \)
or
\( V(X)= \frac{m(m+1)}{(m+n)(m+n+1)} -\left ( \frac{m}{(m+n)} \right )^2 \)
or
\( V(X)= \frac{mn}{(m+n)^2(m+n+1)} \)
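A check of both formulas against `scipy.stats.beta` (a sketch; the shape values are illustrative):

```python
from scipy import stats

m, n = 2.0, 5.0  # illustrative shape parameters
mean, var = stats.beta.stats(m, n, moments="mv")
print(mean, m / (m + n))                          # both ~0.2857
print(var, m * n / ((m + n) ** 2 * (m + n + 1)))  # both ~0.0255
```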
Examples
- Cloud duration approximately follows a beta distribution with parameters 0.3 and 0.4 for a particular location.
- Find the probability that cloud duration will be longer than half a day
- Find the average cloud duration for a day.
- Find the probability of having exactly 20 days in a month with cloud duration less than 10%
Solution
- Probability[x > 0.5, BetaDistribution[0.3, 0.4]] = 0.421508
- Mean[BetaDistribution[0.3, 0.4]] = 0.428571
- p = Probability[x < 0.1, BetaDistribution[0.3, 0.4]]
Probability[k = 20, BinomialDistribution[30, p]] = 0.000178284
- In a basket there are balls, and the proportion of defective balls follows a beta distribution with m=2 and n=5. Compute the probability that the proportion of defective balls in the basket is between 20% and 30%.
Solution
\( P(0.2 \le X \le 0.3)= \displaystyle \int_{0.2}^{0.3} \frac{x^{2-1}(1−x)^{5−1}}{\beta(2,5)} dx =0.235185 \)
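Both examples can be reproduced with `scipy.stats` (a sketch mirroring the quoted values):

```python
from scipy import stats

# Cloud-duration example: Beta(0.3, 0.4)
cloud = stats.beta(0.3, 0.4)
print(cloud.sf(0.5))  # P(duration > half a day) ~ 0.4215
print(cloud.mean())   # average duration ~ 0.4286

p = cloud.cdf(0.1)    # P(duration < 10%)
print(stats.binom.pmf(20, 30, p))  # exactly 20 of 30 days ~ 0.000178

# Defective-ball example: proportion defective ~ Beta(2, 5)
frac = stats.beta(2, 5)
print(frac.cdf(0.3) - frac.cdf(0.2))  # P(0.2 <= X <= 0.3) ~ 0.2352
```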
Normal Probability Function
A random variable X is said to have normal distribution if its probability density function is given by
\( f(x)=n\left( x;\mu ,\sigma \right)=\frac{1}{\sigma \sqrt{2\pi }}{e^{-\frac{1}{2}{{\left( \frac{x-\mu }{\sigma } \right)}^{2}}}}\) for \( -\infty < x < \infty \)
where \( \mu, \sigma \) are parameters
Before starting calculations on the normal distribution, we review some important integrals, given below:
- \( \beta \left( \frac{1}{2},\frac{1}{2} \right)=\pi \)
Proof (1)
From the definition of beta function, we have
\( \beta ( \alpha ,\beta )=\int\limits_0^1 x^{\alpha-1}(1-x)^{\beta -1} dx\)
If we put \( x=\sin^2 \theta \) , then \( 1-x=\cos^2 \theta \) and \( dx=2 \sin \theta \cos \theta d \theta \)
Hence
\( \beta ( \alpha ,\beta )=\int\limits_0^{\frac{\pi }{2}} ( \sin^2 \theta )^{\alpha -1}( \cos ^2 \theta )^{\beta -1}2 \sin \theta \cos \theta d \theta \)
or \( \beta ( \alpha ,\beta )=\int\limits_0^{\frac{\pi }{2}}( \sin \theta )^{2\alpha -2}( \cos \theta )^{2\beta -2}2\sin \theta \cos \theta d\theta \)
or \( \beta ( \alpha ,\beta )=2\int\limits_0^{\frac{\pi }{2}}( \sin \theta )^{2\alpha -1}( \cos \theta )^{2\beta -1}d\theta \)
Now, we put,\( \alpha =\frac{1}{2},\beta =\frac{1}{2}\) , then we have
\( \beta \left( \frac{1}{2},\frac{1}{2} \right)=2\int\limits_0^{\frac{\pi }{2}}( \sin \theta )^{2\frac{1}{2}-1}( \cos \theta )^{2\frac{1}{2}-1}d\theta \)
or \( \beta \left( \frac{1}{2},\frac{1}{2} \right)=2\int\limits_0^{\frac{\pi }{2}}( \sin \theta )^0( \cos \theta )^0 d\theta \)
or \( \beta \left( \frac{1}{2},\frac{1}{2} \right)=2\int\limits_0^{\frac{\pi }{2}}d\theta \)
Since \( \int\limits_0^{\frac{\pi }{2}}d\theta =\left[ \theta \right]_0^{\frac{\pi }{2}}=\frac{\pi }{2}-0=\frac{\pi }{2}\)
we have
\( \beta \left( \frac{1}{2},\frac{1}{2} \right)=2\frac{\pi }{2}\)
or \( \beta \left( \frac{1}{2},\frac{1}{2} \right)=\pi \)
- \( \Gamma \left( \frac{1}{2} \right)=\sqrt{\pi }\)
Proof (2)
\( \Gamma \left( \frac{1}{2} \right)=\sqrt{\pi }\)
From the relation between gamma and beta function, we have
\( \beta ( \alpha ,\beta )=\frac{\Gamma (\alpha)\Gamma \left( \beta \right)}{\Gamma \left( \alpha +\beta \right)}\)
If we put,\( \alpha =\frac{1}{2},\beta =\frac{1}{2}\) , then we have
\( \beta \left( \frac{1}{2},\frac{1}{2} \right)=\frac{\Gamma \left( \frac{1}{2} \right)\Gamma \left( \frac{1}{2} \right)}{\Gamma \left( \frac{1}{2}+\frac{1}{2} \right)}\)
or \( \beta \left( \frac{1}{2},\frac{1}{2} \right)=\frac{\Gamma \left( \frac{1}{2} \right)\Gamma \left( \frac{1}{2} \right)}{\Gamma \left( 1 \right)}\)
Since, \( \Gamma \left( 1 \right)=1\) , we have
\( \beta \left( \frac{1}{2},\frac{1}{2} \right)=\Gamma \left( \frac{1}{2} \right)\Gamma \left( \frac{1}{2} \right)\)
or \( \Gamma \left( \frac{1}{2} \right)=\sqrt{\beta \left( \frac{1}{2},\frac{1}{2} \right)}\)
or \( \Gamma \left( \frac{1}{2} \right)=\sqrt{\pi }\)
- \( \int\limits_{0}^{\infty } e^{-x} x^{-\frac{1}{2}} dx=\sqrt{\pi }\)
Proof
By the definition, we can write
\( \Gamma (\alpha)=\displaystyle \int_0^{\infty} e^{-x} x^{\alpha -1} dx \)
Putting \( \alpha =\frac{1}{2}\), we get
\( \Gamma (\frac{1}{2})=\displaystyle \int_0^{\infty} e^{-x} x^{\frac{1}{2} -1} dx \)
or \( \Gamma (\frac{1}{2})=\displaystyle \int_0^{\infty} e^{-x} x^{-\frac{1}{2} } dx \)
or \( \sqrt{\pi }=\displaystyle \int_0^{\infty} e^{-x} x^{-\frac{1}{2} } dx \)
or \( \displaystyle \int_0^{\infty} e^{-x} x^{-\frac{1}{2} } dx=\sqrt{\pi } \)
- \( \displaystyle \int_{-\infty }^{\infty }{{e^{-\frac{1}{2}x^2}}}dx=\sqrt{2\pi } \)
Proof
Given that
\( I=\displaystyle \int_{-\infty }^{\infty } e^{-\frac{1}{2}x^2} dx \)
Since the integrand is even, \( I=2\displaystyle \int_0^{\infty } e^{-\frac{1}{2}x^2} dx \)
Keeping \( \frac{1}{2}x^2=z\), we get \( x=\sqrt{2z}\) and \( dx=\frac{dz}{\sqrt{2z}}\)
Thus we have
\( I=2\displaystyle \int_0^{\infty }e^{-z} \frac{dz}{\sqrt{2z}} \)
or \( I=\frac{2}{\sqrt{2}} \displaystyle \int_0^{\infty }e^{-z} z^{-\frac{1}{2} } dz \)
or \( I=\sqrt{2} \times \sqrt{\pi } \)
or \( I=\sqrt{2\pi } \)
Properties of the normal curve
- Symmetrical
- Bell shaped
- Unimodal
- Mean = Median = Mode
- Extends infinitely in both directions
- \(P(- \infty \le X \le \mu)=P(\mu \le X \le \infty ) =0.5 \)
- \(P( \mu - c \le X \le \mu)=P(\mu \le X \le \mu + c ) \)
Meaning of μ and σ
The comparison figures (not shown) contrast normal curves with:
- equal mean, different standard deviation (μ = 0; σ = 1 and 1.2)
- different mean, equal standard deviation (μ = −1 and 1; σ = 1)
- different mean, different standard deviation (μ = −1 and 1; σ = 1 and 1.2)
Useful integrals
- \( \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} z^2} dz =\sqrt{2 \pi}\)
- \( \displaystyle \int_{- \infty}^{\infty} z. e^{ - \frac{1}{2} z^2} dz =0\)
- \( \displaystyle \int_{- \infty}^{\infty} z^2. e^{ - \frac{1}{2} z^2} dz =\sqrt{2 \pi}\)
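All three integrals can be confirmed with `scipy.integrate.quad` (a minimal sketch):

```python
import numpy as np
from scipy.integrate import quad

f = lambda z: np.exp(-0.5 * z ** 2)
print(quad(f, -np.inf, np.inf)[0], np.sqrt(2 * np.pi))    # equal
print(quad(lambda z: z * f(z), -np.inf, np.inf)[0])       # ~0 (odd integrand)
print(quad(lambda z: z ** 2 * f(z), -np.inf, np.inf)[0])  # sqrt(2*pi) again
```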
pdf Theorem
Let X be a random variable having a normal distribution with parameters μ and σ; then show that the total area under the normal curve is 1.
Solution
Let X be a random variable having a normal distribution with parameters μ and σ. Then
\( f(x) =\frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2}\)
The total area under the normal curve is
\( \displaystyle \int_{- \infty}^{\infty} f(x) dx \)
or \( \displaystyle \int_{- \infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
Substituting \( \frac{x- \mu}{\sigma}=z\), so that \(x=\mu + \sigma z \) and \( dx= \sigma dz\), we get
\( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} z^2} \sigma dz \)
or \( \frac{1}{ \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} z^2} dz \)
or \( \frac{1}{ \sqrt{2 \pi}} \sqrt{2 \pi} \)
or \( 1 \)
Mean of Normal Probability Distribution
Let X be a random variable having a normal distribution with parameters μ and σ. Then the mean of the distribution is
\( E(x)\)
or \( \displaystyle \int_{- \infty}^{\infty} x. f(x) dx \)
or \( \displaystyle \int_{- \infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} x. e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} x. e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
Substituting \( \frac{x- \mu}{\sigma}=z\), so that \(x=\mu + \sigma z \) and \( dx= \sigma dz\), we get
\( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} (\mu + \sigma z) e^{ - \frac{1}{2} z^2} \sigma dz \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [ \displaystyle \int_{- \infty}^{\infty} \mu e^{ - \frac{1}{2} z^2} dz + \int_{- \infty}^{\infty} (\sigma z) e^{ - \frac{1}{2} z^2} dz \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [\mu \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} z^2} dz + \sigma \int_{- \infty}^{\infty} z. e^{ - \frac{1}{2} z^2} dz \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [\mu . \sqrt{2 \pi} + \sigma .0 \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [\mu . \sqrt{2 \pi} \right ] \)
or \( \mu \)
Variance of Normal Probability Distribution
Let X be a random variable having a normal distribution with parameters μ and σ. Then the variance of the distribution is
\( V(X)=E(X^2)-E(X)^2\)
Therefore
\( E(X^2)\)
or \( \displaystyle \int_{- \infty}^{\infty} x^2. f(x) dx \)
or \( \displaystyle \int_{- \infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} x^2. e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} x^2. e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
Substituting \( \frac{x- \mu}{\sigma}=z\), so that \(x=\mu + \sigma z \) and \( dx= \sigma dz\), we get
\( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} (\mu + \sigma z)^2 e^{ - \frac{1}{2} z^2} \sigma dz \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [ \displaystyle \int_{- \infty}^{\infty} \mu ^2 e^{ - \frac{1}{2} z^2} dz +\displaystyle \int_{- \infty}^{\infty} 2 \mu \sigma z e^{ - \frac{1}{2} z^2} dz+ \int_{- \infty}^{\infty} (\sigma z)^2 e^{ - \frac{1}{2} z^2} dz \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [\mu ^2 \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} z^2} dz +2 \mu \sigma \displaystyle \int_{- \infty}^{\infty} z. e^{ - \frac{1}{2} z^2} dz+ \sigma ^2 \int_{- \infty}^{\infty} z^2. e^{ - \frac{1}{2} z^2} dz \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} \left [\mu ^2 . \sqrt{2 \pi} +2 \mu \sigma .0+ \sigma ^2 .\sqrt{2 \pi} \right ] \)
or \( \frac{1}{ \sqrt{2 \pi}} . \sqrt{2 \pi} \left [\mu ^2 +\sigma ^2 \right ] \)
or \( \mu ^2 +\sigma ^2 \)
Hence, the variance is
\( V(X)=E(X^2)-E(X)^2\)
or \( V(X)=\mu ^2 +\sigma ^2 -(\mu) ^2 \)
or \( V(X)=\sigma ^2\)
MGF of Normal Probability Distribution
Let X be a random variable having a normal distribution with parameters μ and σ. Then the MGF of the distribution is
\( E(e^{tx})\)
or \( \displaystyle \int_{- \infty}^{\infty} e^{tx}. f(x) dx \)
or \( \displaystyle \int_{- \infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} e^{tx}. e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2} \left ( \frac{x- \mu}{\sigma} \right )^2+tx} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} \left [(x- \mu)^2-2tx \sigma ^2 \right ]} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} \left [ \left \{ (x- \mu - t \sigma ^2)^2- t^2 \sigma ^4 +2 tx \sigma ^2 -2 t \mu \sigma ^2 \right \} -2t x \sigma ^2\right ]} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} \left [ (x- \mu - t \sigma ^2)^2- t^2 \sigma ^4 -2 t \mu \sigma ^2 \right ]} dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} (x- \mu - t \sigma ^2)^2 } . e^{ - \frac{1}{2 \sigma ^2} (- t^2 \sigma ^4 -2 t \mu \sigma ^2 ) } dx \)
or \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} (x- \mu - t \sigma ^2)^2 } . e^{ \mu t + \frac{1}{2} \sigma ^2 t^2 } dx \)
or \( e^{ \mu t + \frac{1}{2} \sigma ^2 t^2 } \) \( \frac{1}{\sigma \sqrt{2 \pi}} \displaystyle \int_{- \infty}^{\infty} e^{ - \frac{1}{2 \sigma ^2} (x- \mu - t \sigma ^2)^2 } . dx \)
or \( e^{ \mu t + \frac{1}{2} \sigma ^2 t^2 } \times 1 \) (the remaining integral is the total area under a normal curve with mean \( \mu + t \sigma^2 \) and standard deviation σ, which is 1)
or \( M_X(t)= e^{ \mu t + \frac{1}{2} \sigma ^2 t^2 } \)
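A numerical check of the result (a sketch; the μ, σ, and t values are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, sigma, t = 1.0, 2.0, 0.4  # illustrative values
mgf, _ = quad(lambda x: np.exp(t * x) * stats.norm.pdf(x, mu, sigma),
              -np.inf, np.inf)
print(mgf, np.exp(mu * t + 0.5 * sigma ** 2 * t ** 2))  # the two agree
```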
Standard normal distribution
The standard normal distribution is the particular case of the normal distribution with \( \mu=0, \sigma =1\). Therefore, a random variable X is said to have a standard normal distribution if its probability density function is given by
\( f(x) =\frac{1}{\sqrt{2 \pi}} e^{ - \frac{1}{2} x^2}\)
Here
- \(E(X)=0\)
- \(V(X)=1\)
- \(M_x(t)=e^{\frac{1}{2} t^2}\)
Theorem
If X is a random variable having a binomial distribution with parameters n and p, then the limiting form of the binomial distribution is the normal distribution.
Proof
Let X be a random variable having a binomial distribution with parameters n and p. Then
Mean of X is \(np\)
Variance of X is \(npq\)
Therefore,
\( z=\frac{x-np}{\sqrt{npq}} \) is the standardized binomial variable (mean 0, variance 1)
Now, after a bit of calculation, we can show that
\( \displaystyle \lim_{n \to \infty} M_z(t)=e^{\frac{1}{2} t^2} \)
This shows that the limiting form of the MGF of Z as \( n \to \infty \) is the MGF of the standard normal distribution. Since the MGF determines the pdf uniquely, the limiting form of Z as \( n \to \infty \) is the standard normal distribution, and hence the limiting form of X is the normal distribution. This completes the proof.
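A sketch illustrating the limit: for large n the exact binomial CDF is already close to the CDF of the approximating normal (the n, p, and evaluation points are illustrative):

```python
import numpy as np
from scipy import stats

n, p = 2000, 0.3
q = 1 - p
x = np.arange(550, 651, 25)           # points around the mean np = 600
z = (x - n * p) / np.sqrt(n * p * q)  # standardized values

print(stats.binom.cdf(x, n, p))  # exact binomial probabilities
print(stats.norm.cdf(z))         # normal approximation: very close
```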