Geometric distribution

Probability mass function: [plot: Geometric pmf.svg]

Cumulative distribution function: [plot: Geometric cdf.svg]

The two conventions are summarized below.

Number of trials X, supported on $k \in \{1,2,3,\dots\}$:

  • Parameters: success probability $0<p\leq 1$ (real)
  • PMF: $(1-p)^{k-1}p$
  • CDF: $1-(1-p)^{\lfloor k\rfloor}$ if $k\geq 1$; $0$ if $k<1$
  • Mean: $\frac{1}{p}$
  • Median: $\left\lceil \frac{-1}{\log_{2}(1-p)}\right\rceil$ (not unique if $-1/\log_{2}(1-p)$ is an integer)
  • Mode: $1$
  • Variance: $\frac{1-p}{p^{2}}$
  • Skewness: $\frac{2-p}{\sqrt{1-p}}$
  • Excess kurtosis: $6+\frac{p^{2}}{1-p}$
  • Entropy: $\frac{-(1-p)\log_{2}(1-p)-p\log_{2}p}{p}$
  • MGF: $\frac{pe^{t}}{1-(1-p)e^{t}}$ for $t<-\ln(1-p)$
  • CF: $\frac{pe^{it}}{1-(1-p)e^{it}}$

Number of failures Y, supported on $k \in \{0,1,2,3,\dots\}$:

  • Parameters: success probability $0<p\leq 1$ (real)
  • PMF: $(1-p)^{k}p$
  • CDF: $1-(1-p)^{\lfloor k\rfloor +1}$ if $k\geq 0$; $0$ if $k<0$
  • Mean: $\frac{1-p}{p}$
  • Median: $\left\lceil \frac{-1}{\log_{2}(1-p)}\right\rceil -1$ (not unique if $-1/\log_{2}(1-p)$ is an integer)
  • Mode: $0$
  • Variance: $\frac{1-p}{p^{2}}$
  • Skewness: $\frac{2-p}{\sqrt{1-p}}$
  • Excess kurtosis: $6+\frac{p^{2}}{1-p}$
  • Entropy: $\frac{-(1-p)\log_{2}(1-p)-p\log_{2}p}{p}$
  • MGF: $\frac{p}{1-(1-p)e^{t}}$ for $t<-\ln(1-p)$
  • CF: $\frac{p}{1-(1-p)e^{it}}$

In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

  • The probability distribution of the number X of Bernoulli trials needed to get one success, supported on the set $\{1,2,3,\ldots\}$;
  • The probability distribution of the number Y = X − 1 of failures before the first success, supported on the set $\{0,1,2,\ldots\}$.

Which of these is called the geometric distribution is a matter of convention and convenience.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of the number X); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the kth trial is the first success is

$$\Pr(X=k)=(1-p)^{k-1}p$$

for k = 1, 2, 3, 4, ....

The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

$$\Pr(Y=k)=\Pr(X=k+1)=(1-p)^{k}p$$

for k = 0, 1, 2, 3, ....

In either case, the sequence of probabilities is a geometric sequence.

For example, suppose an ordinary die is thrown repeatedly until the first time a "1" appears. The probability distribution of the number of times it is thrown is supported on the infinite set { 1, 2, 3, ... } and is a geometric distribution with p = 1/6.
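
As a quick numerical illustration, a minimal sketch in R (R's built-in dgeom, described under Computational methods below, counts failures rather than trials, so Pr(X = k) corresponds to dgeom(k - 1, p)):

p <- 1/6
k <- 1:6
(1 - p)^(k - 1) * p   # direct formula: 0.1667, 0.1389, 0.1157, ...
dgeom(k - 1, p)       # same values from R's geometric PMF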

The geometric distribution is denoted by Geo(p) where 0 < p ≤ 1. [1]

Definitions

Consider a sequence of trials, where each trial has only two possible outcomes (designated failure and success). The probability of success is assumed to be the same for each trial. In such a sequence of trials, the geometric distribution is useful to model the number of failures before the first success since the experiment can have an indefinite number of trials until success, unlike the binomial distribution which has a set number of trials. The distribution gives the probability that there are zero failures before the first success, one failure before the first success, two failures before the first success, and so on.

Assumptions: When is the geometric distribution an appropriate model?

The geometric distribution is an appropriate model if the following assumptions are true.

  • The phenomenon being modeled is a sequence of independent trials.
  • There are only two possible outcomes for each trial, often designated success or failure.
  • The probability of success, p, is the same for every trial.

If these conditions are true, then the geometric random variable Y is the count of the number of failures before the first success. The possible number of failures before the first success is 0, 1, 2, 3, and so on. In the graphs above, this formulation is shown on the right.

An alternative formulation is that the geometric random variable X is the total number of trials up to and including the first success, and the number of failures is X − 1. In the graphs above, this formulation is shown on the left.

Probability outcomes examples

The general formula to calculate the probability of k failures before the first success, where the probability of success is p and the probability of failure is q = 1 − p, is

$$\Pr(Y=k)=q^{k}\,p$$

for k = 0, 1, 2, 3, ....

E1) A doctor is seeking an antidepressant for a newly diagnosed patient. Suppose that, of the available anti-depressant drugs, the probability that any particular drug will be effective for a particular patient is p = 0.6. What is the probability that the first drug found to be effective for this patient is the first drug tried, the second drug tried, and so on? What is the expected number of drugs that will be tried to find one that is effective?

The probability that the first drug works. There are zero failures before the first success. Y = 0 failures. The probability Pr(zero failures before first success) is simply the probability that the first drug works.

$$\Pr(Y=0)=q^{0}p=0.4^{0}\times 0.6=1\times 0.6=0.6.$$

The probability that the first drug fails, but the second drug works. There is one failure before the first success. Y = 1 failure. The probability for this sequence of events is Pr(first drug fails) × Pr(second drug succeeds), which is given by

$$\Pr(Y=1)=q^{1}p=0.4^{1}\times 0.6=0.4\times 0.6=0.24.$$

The probability that the first drug fails, the second drug fails, but the third drug works. There are two failures before the first success. Y = 2 failures. The probability for this sequence of events is Pr(first drug fails) × Pr(second drug fails) × Pr(third drug succeeds)

$$\Pr(Y=2)=q^{2}p=0.4^{2}\times 0.6=0.096.$$
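
These three probabilities can be checked directly in R (a minimal sketch; dgeom is described under Computational methods below):

p <- 0.6
q <- 1 - p
q^(0:2) * p      # 0.600 0.240 0.096, matching the calculations above
dgeom(0:2, p)    # identical values from the built-in geometric PMF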

E2) A newlywed couple plans to have children and will continue until the first girl. What is the probability that there are zero boys before the first girl, one boy before the first girl, two boys before the first girl, and so on?

The probability of having a girl (success) is p = 0.5 and the probability of having a boy (failure) is q = 1 − p = 0.5.

The probability of no boys before the first girl is

$$\Pr(Y=0)=q^{0}p=0.5^{0}\times 0.5=1\times 0.5=0.5.$$

The probability of one boy before the first girl is

$$\Pr(Y=1)=q^{1}p=0.5^{1}\times 0.5=0.5\times 0.5=0.25.$$

The probability of two boys before the first girl is

$$\Pr(Y=2)=q^{2}p=0.5^{2}\times 0.5=0.125.$$

and so on.

Properties

Moments and cumulants

The expected value and variance of a geometrically distributed random variable X (the number of independent trials up to and including the first success) are:

$$\operatorname{E}(X)=\frac{1}{p},\qquad \operatorname{var}(X)=\frac{1-p}{p^{2}}.$$

Similarly, the expected value and variance of the geometrically distributed random variable Y = X − 1 (see the definition of the distribution $\Pr(Y=k)$ above) are:

$$\operatorname{E}(Y)=\operatorname{E}(X-1)=\operatorname{E}(X)-1=\frac{1-p}{p},\qquad \operatorname{var}(Y)=\frac{1-p}{p^{2}}.$$
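
A quick simulation sketch in R illustrates these formulas (rgeom uses the failure-count convention; the value of p and the sample size are arbitrary choices); the sample moments should be close to the theoretical values:

set.seed(1)                 # for reproducibility
p <- 0.25
y <- rgeom(1e6, p)          # failures before the first success
c(mean(y), (1 - p) / p)     # both approximately 3
c(var(y), (1 - p) / p^2)    # both approximately 12

Since X = Y + 1, adding 1 to the sample mean recovers E(X) = 1/p = 4, while the variance is unchanged.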

Proof

That the expected value is (1 − p)/p can be shown in the following way. Let Y be as above. Then

$$\begin{aligned}\mathrm{E}(Y)&=\sum_{k=0}^{\infty}(1-p)^{k}p\cdot k\\&=p\sum_{k=0}^{\infty}(1-p)^{k}k\\&=p(1-p)\sum_{k=0}^{\infty}(1-p)^{k-1}\cdot k\\&=p(1-p)\left[\frac{d}{dp}\left(-\sum_{k=0}^{\infty}(1-p)^{k}\right)\right]\\&=p(1-p)\,\frac{d}{dp}\left(-\frac{1}{p}\right)=\frac{1-p}{p}.\end{aligned}$$

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.

Let μ = (1 − p)/p be the expected value of Y. Then the cumulants $\kappa_{n}$ of the probability distribution of Y satisfy the recursion

$$\kappa_{n+1}=\mu(\mu+1)\frac{d\kappa_{n}}{d\mu}.$$

For example, starting from $\kappa_{1}=\mu$, the recursion gives $\kappa_{2}=\mu(\mu+1)=\frac{1-p}{p^{2}}$, which is the variance found above.

Expected value examples

E3) A patient is waiting for a suitable matching kidney donor for a transplant. If the probability that a randomly selected donor is a suitable match is p = 0.1, what is the expected number of donors who will be tested before a matching donor is found?

With p = 0.1, the mean number of failures before the first success is E(Y) = (1 − p)/p = (1 − 0.1)/0.1 = 9.

For the alternative formulation, where X is the number of trials up to and including the first success, the expected value is E(X) = 1/p = 1/0.1 = 10.

For example 1 above, with p = 0.6, the mean number of failures before the first success is E(Y) = (1 − p)/p = (1 − 0.6)/0.6 ≈ 0.67.

Higher-order moments

The moments for the number of failures before the first success are given by

$$\mathrm{E}(Y^{n})=\sum_{k=0}^{\infty}(1-p)^{k}p\cdot k^{n}=p\,\operatorname{Li}_{-n}(1-p),$$

where $\operatorname{Li}_{-n}(1-p)$ is the polylogarithm function.
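
As a sanity check, the n = 2 case can be compared with the identity E(Y²) = var(Y) + E(Y)², using the mean and variance given above. A minimal sketch in R (p and the truncation point 2000 are arbitrary choices; the tail beyond 2000 is negligible here):

p <- 0.3
k <- 0:2000
sum(k^2 * dgeom(k, p))              # truncated sum for E(Y^2), about 13.22
(1 - p) / p^2 + ((1 - p) / p)^2     # var(Y) + E(Y)^2, same value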

General properties

  • The probability-generating functions of X and Y are, respectively,
$$G_{X}(s)=\frac{sp}{1-s(1-p)},\qquad G_{Y}(s)=\frac{p}{1-s(1-p)},\qquad |s|<(1-p)^{-1}.$$
  • Like its continuous analogue (the exponential distribution), the geometric distribution is memoryless. That means that if you intend to repeat an experiment until the first success, then, given that the first success has not yet occurred, the conditional probability distribution of the number of additional trials does not depend on how many failures have been observed. The die one throws or the coin one tosses does not have a "memory" of these failures. The geometric distribution (the one that counts trials rather than failures) is the only memoryless discrete distribution. Formally,
$$\Pr\{X>m+n\mid X>n\}=\Pr\{X>m\}.$$ [2]
(A numerical check of this identity appears in the R sketch after this list.)
  • Among all discrete probability distributions supported on {1, 2, 3, ...} with given expected value μ, the geometric distribution X with parameter p = 1/μ is the one with the largest entropy.[3]
  • The geometric distribution of the number Y of failures before the first success is infinitely divisible, i.e., for any positive integer n, there exist independent identically distributed random variables $Y_1,\ldots,Y_n$ whose sum has the same distribution that Y has. These will not be geometrically distributed unless n = 1; they follow a negative binomial distribution.
  • The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables. For example, the hundreds digit D has this probability distribution:
$$\Pr(D=d)=\frac{q^{100d}}{1+q^{100}+q^{200}+\cdots+q^{900}},$$
where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
  • Golomb coding is the optimal prefix code for the geometric discrete distribution.[4]
  • The sum of two independent Geo(p) distributed random variables is not a geometric distribution. [1]
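
The memorylessness property stated in the list above can be checked numerically. A minimal sketch in R (pgeom works on the failure count Y = X − 1, so Pr(X > k) = Pr(Y > k − 1); the values of p, m, and n are arbitrary choices):

p <- 0.3; m <- 4; n <- 7
pr_X_gt <- function(k) pgeom(k - 1, p, lower.tail = FALSE)  # Pr(X > k) = (1-p)^k
pr_X_gt(m + n) / pr_X_gt(n)   # Pr(X > m + n | X > n)
pr_X_gt(m)                    # identical, as memorylessness requires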

Related distributions

  • The geometric distribution Y is a special case of the negative binomial distribution, with r = 1. More generally, if $Y_1,\ldots,Y_r$ are independent geometrically distributed variables with parameter p, then the sum
$$Z=\sum_{m=1}^{r}Y_{m}$$
follows a negative binomial distribution with parameters r and p.[5]
  • The geometric distribution is a special case of discrete compound Poisson distribution.
  • If $Y_1,\ldots,Y_r$ are independent geometrically distributed variables (with possibly different success parameters $p_m$), then their minimum
$$W=\min_{m\in\{1,\ldots,r\}}Y_{m}$$
is also geometrically distributed, with parameter $p=1-\prod_{m}(1-p_{m})$.[6]
  • Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable $X_k$ has a Poisson distribution with expected value $r^{k}/k$. Then
$$\sum_{k=1}^{\infty}k\,X_{k}$$
has a geometric distribution taking values in the set {0, 1, 2, ...}, with expected value r/(1 − r).
  • The exponential distribution is the continuous analogue of the geometric distribution. If X is an exponentially distributed random variable with parameter λ, then
$$Y=\lfloor X\rfloor,$$
where $\lfloor\,\cdot\,\rfloor$ is the floor (or greatest integer) function, is a geometrically distributed random variable with parameter p = 1 − e^{−λ} (thus λ = −ln(1 − p)[7]) and taking values in the set {0, 1, 2, ...}. This can be used to generate geometrically distributed pseudorandom numbers by first generating exponentially distributed pseudorandom numbers from a uniform pseudorandom number generator: $\lfloor \ln(U)/\ln(1-p)\rfloor$ is geometrically distributed with parameter p if U is uniformly distributed in [0, 1] (see the R sketch after this list).
  • If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since
$$\begin{aligned}\Pr(X/n>a)=\Pr(X>na)&=(1-p)^{na}=\left(1-\frac{1}{n}\right)^{na}=\left[\left(1-\frac{1}{n}\right)^{n}\right]^{a}\\&\to [e^{-1}]^{a}=e^{-a}\text{ as }n\to\infty.\end{aligned}$$

More generally, if p = λ/n, where λ is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate λ:

$$\lim_{n\to\infty}\Pr(X>nx)=\lim_{n\to\infty}(1-\lambda/n)^{nx}=e^{-\lambda x},$$

therefore the distribution function of X/n converges to $1-e^{-\lambda x}$, which is that of an exponential random variable.
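
The floor-of-exponential construction above gives a simple generator for geometric pseudorandom numbers. A minimal sketch in R (p and the sample size are arbitrary choices; rgeom, which uses the same failure-count convention, serves as a reference):

set.seed(42)
p <- 0.4
u <- runif(1e5)                     # U ~ Uniform(0, 1)
y1 <- floor(log(u) / log(1 - p))    # generator derived above
y2 <- rgeom(1e5, p)                 # built-in reference generator
c(mean(y1), mean(y2), (1 - p) / p)  # all approximately 1.5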

Statistical inference

Parameter estimation

For both variants of the geometric distribution, the parameter p can be estimated by equating the expected value with the sample mean. This is the method of moments, which in this case happens to yield maximum likelihood estimates of p.[8] [9]

Specifically, for the first variant let $k = k_1,\ldots,k_n$ be a sample where $k_i \geq 1$ for $i = 1,\ldots,n$. Then p can be estimated as

$$\widehat{p}=\left(\frac{1}{n}\sum_{i=1}^{n}k_{i}\right)^{-1}=\frac{n}{\sum_{i=1}^{n}k_{i}}.$$
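
For instance, a minimal sketch in R (the sample below is a hypothetical set of observed trial counts, each at least 1):

k <- c(2, 1, 4, 1, 3, 2, 1, 1, 2, 3)  # hypothetical sample of trial counts
p_hat <- length(k) / sum(k)           # n / sum(k_i)
p_hat                                 # 0.5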

In Bayesian inference, the Beta distribution is the conjugate prior distribution for the parameter p. If this parameter is given a Beta(α,β) prior, then the posterior distribution is

$$p\sim\mathrm{Beta}\!\left(\alpha+n,\ \beta+\sum_{i=1}^{n}(k_{i}-1)\right).$$

The posterior mean E[p] approaches the maximum likelihood estimate $\widehat{p}$ as α and β approach zero.

In the alternative case, let $k_1,\ldots,k_n$ be a sample where $k_i \geq 0$ for $i = 1,\ldots,n$. Then p can be estimated as

$$\widehat{p}=\left(1+\frac{1}{n}\sum_{i=1}^{n}k_{i}\right)^{-1}=\frac{n}{\sum_{i=1}^{n}k_{i}+n}.$$

The posterior distribution of p given a Beta(α,β) prior is[10] [11]

$$p\sim\mathrm{Beta}\!\left(\alpha+n,\ \beta+\sum_{i=1}^{n}k_{i}\right).$$

Again the posterior mean E[p] approaches the maximum likelihood estimate $\widehat{p}$ as α and β approach zero.
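
A minimal sketch of this conjugate update in R for the failure-count convention (the Beta(1, 1) prior and the sample are hypothetical choices):

alpha <- 1; beta <- 1                  # uniform prior on p
k <- c(1, 0, 3, 0, 2, 1, 0, 0, 1, 2)   # hypothetical failure counts
alpha_post <- alpha + length(k)        # alpha + n = 11
beta_post  <- beta + sum(k)            # beta + sum(k_i) = 11
alpha_post / (alpha_post + beta_post)  # posterior mean of p: 0.5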

For either maximum likelihood estimate $\widehat{p}$, the bias is equal to

$$b\equiv\operatorname{E}\!\left[\,\widehat{p}_{\mathrm{mle}}-p\,\right]=\frac{p\,(1-p)}{n},$$

which yields the bias-corrected maximum likelihood estimator

$$\widehat{p}_{\mathrm{mle}}^{\,*}=\widehat{p}_{\mathrm{mle}}-\widehat{b}.$$
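
In practice the unknown p in the bias formula is replaced by the estimate itself. A minimal plug-in sketch in R (reusing the hypothetical sample from the estimation example above):

k <- c(2, 1, 4, 1, 3, 2, 1, 1, 2, 3)
n <- length(k)
p_mle <- n / sum(k)               # 0.5
b_hat <- p_mle * (1 - p_mle) / n  # estimated bias: 0.025
p_mle - b_hat                     # bias-corrected estimate: 0.475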

Computational methods

Geometric distribution using R

The R function dgeom(k, prob) calculates the probability that there are k failures before the first success, where the argument prob is the probability of success on each trial.

For example,

dgeom(0, 0.6) = 0.6

dgeom(1, 0.6) = 0.24

R uses the convention that k is the number of failures, so that the number of trials up to and including the first success is k + 1.

The following R code creates a graph of the geometric distribution from Y = 0 to 10, with p = 0.6.

Y <- 0:10
plot(Y, dgeom(Y, 0.6), type = "h", ylim = c(0, 1),
     main = "Geometric distribution for p = 0.6",
     ylab = "Pr(Y = y)",
     xlab = "y = number of failures before first success")

Geometric distribution using Excel

The geometric distribution, for the number of failures before the first success, is a special case of the negative binomial distribution, for the number of failures before s successes.

The Excel function NEGBINOMDIST(number_f, number_s, probability_s) calculates the probability of k = number_f failures before s = number_s successes where p = probability_s is the probability of success on each trial. For the geometric distribution, let number_s = 1 success.

For example,

=NEGBINOMDIST(0, 1, 0.6) = 0.6
=NEGBINOMDIST(1, 1, 0.6) = 0.24

Like R, Excel uses the convention that k is the number of failures, so that the number of trials up to and including the first success is k + 1.

See also

  • Hypergeometric distribution
  • Coupon collector's problem
  • Compound Poisson distribution
  • Negative binomial distribution

References

  1. Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. pp. 48–50, 61–62, 152. ISBN 9781852338961. OCLC 262680588.
  2. Guntuboyina, Aditya. "Fall 2018 Statistics 201A (Introduction to Probability at an Advanced Level) – All Lecture Notes" (PDF).
  3. Park, Sung Y.; Bera, Anil K. (June 2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics. 150 (2): 219–230. doi:10.1016/j.jeconom.2008.12.014.
  4. Gallager, R.; van Voorhis, D. (March 1975). "Optimal source codes for geometrically distributed integer alphabets (Corresp.)". IEEE Transactions on Information Theory. 21 (2): 228–230. doi:10.1109/TIT.1975.1055357. ISSN 0018-9448.
  5. Pitman, Jim (1993). Probability. Springer. p. 372.
  6. Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David (1 June 1995). "On the minimum of independent geometrically distributed random variables". Statistics & Probability Letters. 23 (4): 313–326. doi:10.1016/0167-7152(94)00130-Z. S2CID 1505801.
  7. "Wolfram|Alpha: Computational Knowledge Engine". www.wolframalpha.com.
  8. Casella, George; Berger, Roger L. (2002). Statistical Inference (2nd ed.). pp. 312–315. ISBN 0-534-24312-6.
  9. "MLE Examples: Exponential and Geometric Distributions – Rhea". www.projectrhea.org. Retrieved 2019-11-17.
  10. "3. Conjugate families of distributions" (PDF).
  11. "Conjugate prior". Wikipedia. 2019-10-03. Retrieved 2019-11-17.

External links

  • Geometric distribution on MathWorld.

Source: https://en.wikipedia.org/wiki/Geometric_distribution
