Consistent Estimate
a type of statistical estimate of a parameter of a probability distribution. A consistent estimate has the property that, as the number of observations increases, the probability that the estimate deviates from the estimated parameter by more than some assigned number approaches 0. For a more precise definition, let X1, X2, . . . , Xn be independent observation results whose distribution depends on an unknown parameter θ, and, for every n, let the function Tn = Tn(X1, . . . , Xn) be an estimate of θ constructed from the first n observations. The sequence of estimates (Tn) is then said to be consistent if, for every number ε > 0 and for any permissible value of θ,
P{|Tn – θ| > ε} → 0
as n → ∞. In other words, the estimate is consistent if Tn converges in probability to θ.
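The definition can be illustrated numerically. In the sketch below (my example, not from the article), Tn is the sample mean of n Bernoulli trials with success probability θ, and the deviation probability P{|Tn – θ| > ε} is approximated by Monte Carlo for growing n; it shrinks toward 0, as consistency requires.

```python
import random

# Illustration (assumptions mine): the sample mean of Bernoulli(theta)
# trials is a consistent estimate of theta.  We approximate
# P(|T_n - theta| > eps) by simulation for several n and watch it fall.
random.seed(0)
theta, eps, trials = 0.3, 0.05, 2000

def deviation_prob(n):
    """Fraction of simulated samples whose mean misses theta by more than eps."""
    bad = 0
    for _ in range(trials):
        t_n = sum(random.random() < theta for _ in range(n)) / n
        if abs(t_n - theta) > eps:
            bad += 1
    return bad / trials

probs = [deviation_prob(n) for n in (10, 100, 1000)]
print(probs)  # decreasing toward 0 as n grows
```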
Any unbiased estimate Tn of θ (or, more generally, any estimate with ETn → θ) whose variance approaches 0 with increasing n is a consistent estimate of θ by virtue of Chebyshev’s inequality
P(|Tn – θ| > ε) ≤ DTn/ε²
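Chebyshev’s bound can be checked directly. In this sketch (my own choice of distribution, not from the article), Tn is the sample mean of n Uniform(0, 1) observations, so θ = 1/2 and DTn = (1/12)/n; the simulated deviation probability stays below the Chebyshev bound DTn/ε².

```python
import random

# Sketch (assumptions mine): numerically verify Chebyshev's inequality
# P(|T_n - theta| > eps) <= D T_n / eps^2 for the sample mean of
# Uniform(0, 1) data, where theta = 1/2 and D T_n = (1/12)/n.
random.seed(1)
n, eps, trials = 50, 0.1, 5000
var_tn = (1 / 12) / n          # variance of the sample mean
bound = var_tn / eps ** 2      # Chebyshev's upper bound

hits = sum(
    abs(sum(random.random() for _ in range(n)) / n - 0.5) > eps
    for _ in range(trials)
)
empirical = hits / trials
print(empirical, bound)  # empirical probability is well under the bound
```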
Thus, the sample mean
X̄ = (X1 + . . . + Xn)/n
and the sample variance
s² = (1/n) Σ (Xi – X̄)²
are consistent estimates of the mathematical expectation and variance, respectively, of a normal distribution.
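A quick simulation (my example, not from the article) shows both estimates landing near the true parameters for a large sample from a normal distribution with mean 2 and variance 9:

```python
import random

# Sketch (assumptions mine): for large n, the sample mean and sample
# variance of N(mu, sigma^2) data are close to mu and sigma^2.
random.seed(2)
mu, sigma, n = 2.0, 3.0, 200_000
xs = [random.gauss(mu, sigma) for _ in range(n)]

xbar = sum(xs) / n                          # sample mean
s2 = sum((x - xbar) ** 2 for x in xs) / n   # sample variance (1/n form)
print(xbar, s2)  # near mu = 2.0 and sigma^2 = 9.0
```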
Although consistency is a desirable characteristic of every statistical estimate, it pertains only to the asymptotic properties of the estimate. In practical applications with finite sample sizes, the consistency of an estimate does not by itself mean the estimate is a good one. Criteria exist for selecting, from among various consistent estimates of a parameter, the estimate with the most desirable properties.
The concept of a consistent estimate was first proposed by the British statistician R. A. Fisher in 1922.
REFERENCES
Cramér, H. Matematicheskie metody statistiki. Moscow, 1975. (Translated from English.)
Rao, C. R. Lineinye statisticheskie metody i ikh primeneniia. Moscow, 1968. (Translated from English.)
A. V. PROKHOROV