SSJ
V. 1.2.5.

umontreal.iro.lecuyer.gof
Class FDist

java.lang.Object
  extended by umontreal.iro.lecuyer.gof.FDist

public class FDist
extends Object

This class provides methods to compute (or approximate) the distribution functions of various types of goodness-of-fit test statistics. All the methods in this class return F(x) for some probability distribution. Recall that the distribution function of a continuous random variable X with density f is

F(x) = P[X <= x] = ∫_{-∞}^{x} f(s) ds,

while that of a discrete random variable X with mass function f over the set of integers is

F(x) = P[X <= x] = ∑_{s=-∞}^{x} f(s).

Most distributions are implemented only in standardized form here, i.e., with the location parameter set to 0 and the scale parameter set to 1. To shift the distribution by x0 and rescale by c, it suffices to replace x by (x - x0)/c in the argument when calling the function.
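
As a small illustration of this convention (a sketch only; the class name and the values of x0, c, x and the sample size below are arbitrary and serve purely to show the argument transformation):

    import umontreal.iro.lecuyer.gof.FDist;

    public class RescaleExample {
       public static void main(String[] args) {
          // Hypothetical location x0 and scale c of the non-standardized distribution.
          double x0 = 0.0, c = 2.0;
          double x = 0.4;                                        // evaluation point
          double F = FDist.kolmogorovSmirnov(100, (x - x0) / c); // pass (x - x0)/c instead of x
          System.out.println(F);
       }
    }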


Method Summary
static double andersonDarling(int N, double x)
          Returns P[A_N^2 <= x], where A_N^2 is the Anderson-Darling statistic for a sample of independent uniforms over (0, 1).
static double cramerVonMises(int N, double x)
          Returns an approximation of P[W_N^2 <= x], where W_N^2 is the Cramér-von Mises statistic for a sample of independent uniforms over (0, 1).
static double kolmogorovSmirnov(int N, double x)
          Returns p(x) = P[D_N <= x], where D_N = max(D_N^+, D_N^-) is the two-sided Kolmogorov-Smirnov statistic for a sample of size N.
static double kolmogorovSmirnovPlus(int N, double x)
          Returns p(x) = P[D_N^+ <= x], the distribution function of the positive Kolmogorov-Smirnov statistic.
static double kolmogorovSmirnovPlusJumpOne(int N, double a, double x)
          Similar to kolmogorovSmirnovPlus, but for the case where the distribution function F has a jump of size a at a given point x0, is zero to the left of x0, and is continuous to the right of x0.
static double scan(int N, double d, int m)
          Returns F(m), the distribution function of the scan statistic with parameters N and d, evaluated at m.
static double watsonG(int N, double x)
          Returns an approximation of P[G_N <= x], where G_N is the Watson statistic defined in watsonU, for a sample of independent uniforms over (0, 1).
static double watsonU(int N, double x)
          Returns P[U^2 <= x], where U^2 is the Watson statistic in the limit as N -> ∞, for a sample of independent uniforms over (0, 1).
 
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Method Detail

kolmogorovSmirnovPlus

public static double kolmogorovSmirnovPlus(int N,
                                           double x)
Returns p(x) = P[D_N^+ <= x], the distribution function of the positive Kolmogorov-Smirnov statistic.

D_N^+ = sup_{-∞ < s < ∞} [hat(F)_N(s) - F(s)]^+

is the positive Kolmogorov-Smirnov statistic for a sample of size N whose empirical distribution function is hat(F)_N, under the hypothesis that the observations follow a continuous distribution function F. (Recall that x^+ represents max(0, x), the positive part of x.) The statistic

D_N^- = sup_{-∞ < s < ∞} [F(s) - hat(F)_N(s)]^+

has the same distribution as D_N^+. Methods for computing these statistics are available in class GofStat. The relative error on p(x) = P[D_N^+ <= x] is always less than 10^{-5}, and the relative error on 1 - p(x) is less than 10^{-1} when 1 - p(x) > 10^{-10}. The absolute error on 1 - p(x) is less than 10^{-11} when 1 - p(x) < 10^{-10}.

Parameters:
N - sample size
x - positive or negative Kolmogorov-Smirnov statistic
Returns:
the distribution function of the statistic evaluated at x
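
For example, the following sketch turns an observed value of D_N^+ into a right-tail p-value. The class name, the sample size and the value of dPlus are illustrative; in practice the statistic would be computed with class GofStat.

    import umontreal.iro.lecuyer.gof.FDist;

    public class KSPlusPValue {
       public static void main(String[] args) {
          int N = 100;                // sample size (illustrative)
          double dPlus = 0.12;        // observed D_N^+, e.g. computed with class GofStat
          double cdf = FDist.kolmogorovSmirnovPlus(N, dPlus);   // P[D_N^+ <= dPlus]
          System.out.println("p-value = " + (1.0 - cdf));       // right-tail p-value
       }
    }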

kolmogorovSmirnov

public static double kolmogorovSmirnov(int N,
                                       double x)
Returns p(x) = P[D_N <= x], where D_N = max(D_N^+, D_N^-) is the two-sided Kolmogorov-Smirnov statistic for a sample of size N. The implemented approximation improves as N increases or as x moves away from 0. The error on p(x) is less than approximately 1 percent for N > 100. For N = 1, the method returns the exact value P[D_1 <= x] = 2x - 1 for 1/2 <= x <= 1.

Warning: for 1 < N < 10 or for x in the lower tail, the approximation is poor; nevertheless, the precision is at least 1 decimal digit nearly everywhere.

Parameters:
N - sample size
x - Kolmogorov-Smirnov statistic
Returns:
the distribution function of the statistic evaluated at x
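
The exact case N = 1 provides a quick sanity check; in the sketch below (class name and x are arbitrary, with x in [1/2, 1]), both printed values should agree with 2x - 1 = 0.6.

    import umontreal.iro.lecuyer.gof.FDist;

    public class KSExactN1 {
       public static void main(String[] args) {
          double x = 0.8;                               // any value in [1/2, 1]
          double p = FDist.kolmogorovSmirnov(1, x);     // exact case N = 1
          System.out.println(p + "  vs  " + (2.0 * x - 1.0));
       }
    }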

kolmogorovSmirnovPlusJumpOne

public static double kolmogorovSmirnovPlusJumpOne(int N,
                                                  double a,
                                                  double x)
Similar to kolmogorovSmirnovPlus, but for the case where the distribution function F has a jump of size a at a given point x0, is zero to the left of x0, and is continuous to the right of x0. Restriction: 0 < a < 1.

Parameters:
N - sample size
a - size of the jump
x - positive or negative Kolmogorov-Smirnov statistic
Returns:
the distribution function of the statistic evaluated at x
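
A minimal usage sketch, with arbitrary illustrative values of N, a and x (the class name is also arbitrary):

    import umontreal.iro.lecuyer.gof.FDist;

    public class KSPlusJumpOneExample {
       public static void main(String[] args) {
          int N = 50;           // sample size (illustrative)
          double a = 0.25;      // size of the jump at x0; must satisfy 0 < a < 1
          double x = 0.10;      // observed statistic (illustrative)
          System.out.println(FDist.kolmogorovSmirnovPlusJumpOne(N, a, x));
       }
    }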

cramerVonMises

public static double cramerVonMises(int N,
                                    double x)
Returns an approximation of P[W_N^2 <= x], where W_N^2 is the Cramér-von Mises statistic for a sample of independent uniforms over (0, 1). The approximation is based on the distribution function of W^2 = lim_{N -> ∞} W_N^2. For N = 10, 20, 40, the error is less than 0.002, 0.001, and 0.0005, respectively, while for N >= 100 it is less than 0.0005. For N -> ∞, we estimate that the method returns at least 6 decimal digits of precision. For N = 1, the method computes the exact distribution: P(W_1^2 <= x) = 2 (x - 1/12)^{1/2} for 1/12 <= x <= 1/3.

Parameters:
N - sample size
x - Cramér-von Mises statistic
Returns:
the distribution function of the statistic evaluated at x
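
The exact case N = 1 again gives a simple check; in the sketch below (class name and x are arbitrary, with x in [1/12, 1/3]), both printed values should be close to 2 sqrt(x - 1/12) ≈ 0.8165.

    import umontreal.iro.lecuyer.gof.FDist;

    public class CramerVonMisesN1 {
       public static void main(String[] args) {
          double x = 0.25;                                 // must lie in [1/12, 1/3]
          double exact = 2.0 * Math.sqrt(x - 1.0 / 12.0);  // exact distribution for N = 1
          System.out.println(FDist.cramerVonMises(1, x) + "  vs  " + exact);
       }
    }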

watsonU

public static double watsonU(int N,
                             double x)
Returns P[U^2 <= x], where U^2 is the Watson statistic in the limit as N -> ∞, for a sample of independent uniforms over (0, 1). Only this limiting distribution (N -> ∞) is implemented. It is given by

P(U^2 <= x) = 1 + 2 ∑_{j=1}^{∞} (-1)^j e^{-2 j^2 π^2 x}.

This sum converges extremely fast except for small x, where alternating successive terms give rise to numerical instability. With the Poisson summation formula, however, the sum can be transformed to

P(U^2 <= x) = (2/(πx))^{1/2} ∑_{j=0}^{∞} e^{-(2j+1)^2/(8x)},

which can be used for small x. The absolute difference between the returned value and P[U_N^2 <= x] is estimated to be less than 0.01 for N >= 8. For the trivial case N = 1, the method always returns 0.5.

Parameters:
N - sample size
x - Watson statistic
Returns:
the distribution function of the statistic evaluated at x
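
As an illustration, the first series above can be evaluated directly and compared with the value returned by watsonU. The class name and the values of x and N below are arbitrary, and the comparison assumes that, as described above, the method returns the same limiting distribution for any N > 1.

    import umontreal.iro.lecuyer.gof.FDist;

    public class WatsonULimit {
       public static void main(String[] args) {
          double x = 0.2;                  // illustrative value of the statistic
          double sum = 0.0;
          int sign = -1;                   // (-1)^j for j = 1, 2, ...
          for (int j = 1; j <= 20; j++) {  // 20 terms are more than enough at x = 0.2
             sum += sign * Math.exp(-2.0 * j * j * Math.PI * Math.PI * x);
             sign = -sign;
          }
          double series = 1.0 + 2.0 * sum;
          System.out.println(series + "  vs  " + FDist.watsonU(64, x));
       }
    }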

watsonG

public static double watsonG(int N,
                             double x)
Returns an approximation of P[G_N <= x], where G_N is the Watson statistic defined in watsonU, for a sample of independent uniforms over (0, 1). The approximation is computed in a similar way as for cramerVonMises. To implement this method, a table of the values of g(x) = lim_{N -> ∞} P[G_N <= x] and of its derivative was first computed by numerical integration. For x <= 1.5, the method uses this table with cubic spline interpolation. For x > 1.5, it uses the empirical curve g(x) = 1 - e^{19 - 20x}. A correction of order 1/N^{1/2}, obtained empirically from 10^7 simulation runs with N = 256 and also implemented as an interpolation table with an exponential tail, is then added. The absolute error is estimated to be less than 0.01, 0.005, 0.002, 0.0008, 0.0005, 0.0005, and 0.0005 for N = 16, 32, 64, 128, 256, 512, and 1024, respectively.

Parameters:
N - sample size
x - Watson statistic
Returns:
the distribution function of the statistic evaluated at x
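
As a rough illustration of the tail behaviour described above (the class name and the values of x and N are arbitrary), the value returned for large N should lie close to the limiting curve g(x) = 1 - e^{19 - 20x}, up to the correction of order 1/N^{1/2}.

    import umontreal.iro.lecuyer.gof.FDist;

    public class WatsonGTail {
       public static void main(String[] args) {
          double x = 1.6;                                  // in the x > 1.5 tail region
          double limit = 1.0 - Math.exp(19.0 - 20.0 * x);  // limiting curve g(x)
          System.out.println(FDist.watsonG(1024, x) + "  vs  " + limit);
       }
    }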

andersonDarling

public static double andersonDarling(int N,
                                     double x)
Returns P[A_N^2 <= x], where A_N^2 is the Anderson-Darling statistic for a sample of independent uniforms over (0, 1). The approximation is computed similarly as for cramerVonMises. To implement this method, an interpolation table of the values of g(x) = lim_{N -> ∞} P[A_N^2 <= x] was first computed by numerical integration, and a linear correction in 1/N, obtained by simulation, was then added. For x <= 5.0, the method approximates g_N(x) = P[A_N^2 <= x] by interpolation. For x > 5.0 (the tail of the distribution), it uses the empirical curve g_N(x) = 1 - e^{-1.06x - 0.56} - e^{-1.06x - 1.03}/N, which includes an empirical correction in 1/N. The absolute error on g_N(x) is estimated to be less than 0.001 for N > 6. For N = 2, 3, 4, 6, it is estimated to be less than 0.04, 0.01, 0.005, and 0.002, respectively. For N = 1, the method returns the exact value, g_1(x) = (1 - 4 e^{-x-1})^{1/2} for x >= ln(4) - 1.

Parameters:
N - sample size
x - Anderson-Darling statistic
Returns:
the distribution function of the statistic evaluated at x
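
The exact case N = 1 provides a simple check; in the sketch below (class name and x are arbitrary, subject to x >= ln(4) - 1), both printed values should be close to sqrt(1 - 4 e^{-2}) ≈ 0.677.

    import umontreal.iro.lecuyer.gof.FDist;

    public class AndersonDarlingN1 {
       public static void main(String[] args) {
          double x = 1.0;                                            // must satisfy x >= ln(4) - 1
          double exact = Math.sqrt(1.0 - 4.0 * Math.exp(-x - 1.0));  // exact value for N = 1
          System.out.println(FDist.andersonDarling(1, x) + "  vs  " + exact);
       }
    }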

scan

public static double scan(int N,
                          double d,
                          int m)
Returns F(m), the distribution function of the scan statistic with parameters N and d, evaluated at m. For a description of this statistic and its distribution, see scan, which computes its complementary distribution bar(F)(m) = 1 - F(m - 1).

Parameters:
N - sample size ( >= 2)
d - length of the test interval (∈(0, 1))
m - scan statistic
Returns:
the distribution function of the statistic evaluated at m
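
A minimal usage sketch with arbitrary illustrative parameters (the class name is also arbitrary):

    import umontreal.iro.lecuyer.gof.FDist;

    public class ScanExample {
       public static void main(String[] args) {
          int N = 20;           // sample size; must be >= 2
          double d = 0.05;      // length of the test interval, in (0, 1)
          int m = 4;            // point at which F is evaluated
          System.out.println(FDist.scan(N, d, m));   // F(4) for the scan statistic
       }
    }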


To submit a bug or ask questions, send an e-mail to Pierre L'Ecuyer.