Abstract
Background
The spatial and space-time scan statistics are commonly applied for the detection of geographical disease clusters. Monte Carlo hypothesis testing is typically used to test whether these clusters are statistically significant, as there is no known way to calculate the null distribution analytically. In Monte Carlo hypothesis testing, simulated random data are generated multiple times under the null hypothesis, and the p-value is r/(R + 1), where R is the number of simulated random replicates of the data and r is the rank of the test statistic from the real data compared to the same test statistic calculated from each of the random data sets. A drawback of this powerful technique is that each additional digit of p-value precision requires ten times as many replicated data sets, and the additional processing can lead to excessive run times.
Results
We propose a new method for obtaining more precise p-values with a given number of replicates. The collection of test statistics from the random replicates is used to estimate the true distribution of the test statistic under the null hypothesis by fitting a continuous distribution to these observations. The choice of distribution is critical, and for the spatial and space-time scan statistics the extreme value Gumbel distribution performs very well, while the gamma, normal and lognormal distributions perform poorly. From the fitted Gumbel distribution, we show that it is possible to estimate the analytical p-value with great precision even when the test statistic is far out in the tail, beyond any of the test statistics observed in the simulated replicates. In addition, Gumbel-based rejection probabilities have smaller variability than Monte Carlo-based rejection probabilities, suggesting that the proposed approach may yield greater power than the true Monte Carlo hypothesis test for a given number of replicates.
Conclusions
For large data sets, it is often advantageous to replace computer-intensive Monte Carlo hypothesis testing with this new method of fitting a Gumbel distribution to random data sets generated under the null, in order to reduce computation time and to obtain much more precise p-values and slightly higher statistical power.
Background
Introduction
Geographic cluster detection and evaluation are important in disease surveillance. One frequently used method for cluster detection is the spatial scan statistic [1-3] and the related space-time scan statistic [4]. This method has been used to study the geography of infectious diseases such as malaria [5], vector-borne diseases such as West Nile Virus [6], many different forms of cancer [7-11], low birth weight [12], syndromic surveillance [13-17], and bovine spongiform encephalopathy [18], among many other diseases.
The spatial scan statistic is found by moving a scanning window across the geographical region of interest, generating a large collection of window locations and sizes that meet predefined criteria. A likelihood ratio is calculated for the data corresponding to each window location and size, and the spatial scan statistic is the maximum of these likelihood ratios. The window location and size with the maximum likelihood ratio is the most likely cluster; that is, the cluster that is least likely to have occurred by chance [1,2]. Except for the simplest scenarios, there is no known closed-form theoretical distribution for the spatial scan statistic. Therefore, p-values for scan statistics are usually obtained using Monte Carlo hypothesis testing [19].
In Monte Carlo hypothesis testing, a large number of random replicates of the observed data are generated under the null hypothesis. Monte Carlo p-values are asymptotically equivalent to p-values from exact permutation tests as the number of random replicates increases, but the key property of Monte Carlo hypothesis testing p-values is that they maintain the correct alpha level, exactly, as long as the number of replicates plus one is a multiple of 1/α [19-21]. Monte Carlo hypothesis testing can therefore be useful when theoretical distributions are unknown and the number of permutations prohibits a full enumeration. One major drawback of the approach is that small p-values can only be obtained through a very large number of Monte Carlo replicates, which may be computer-intensive and time consuming. For the spatial and space-time scan statistics, Monte Carlo hypothesis testing requires the calculation of the likelihood ratio for each location and size of the scanning window, for each replicated data set. Thus, the approach can be computer-intensive for very large data sets.
In disease surveillance, the space-time scan statistic is sometimes calculated on a daily basis, to continuously monitor a disease in near real-time [13,22]. Detected clusters may then be reported to local, state, or federal public health officials for potential investigation. Using a conventional 0.05 α-level would on average result in one false rejection of the null hypothesis every 20 days. Because of limited resources, health officials cannot investigate a large number of false alarms [13,22]. To control the number of false rejections at a more tenable level, one might instead use an α-level of 1/365 = 0.00274 or 1/3650 = 0.000274, corresponding to one expected false positive every year or every ten years, respectively, for daily analyses. So, instead of 999 replicates for an alpha level of 0.05, we may want to use 99,999 replicates or more for an alpha level of 0.000274, keeping approximately the same ratio. If multiple diseases are under surveillance, this may impose a high computational burden, with millions of random replicates to be simulated each day when Monte Carlo hypothesis testing is used.
In this article, we propose a way to perform hypothesis testing for very small alpha levels with fewer calculations. The approach we take is to find a distribution that closely approximates the distribution of the test statistics generated under the null hypothesis, which themselves reflect the distribution of the scan statistic under the null. To do this, we generate a relatively small number of random simulated replicates under the null hypothesis. We then use them to estimate parameters for a distribution with a well-characterized functional form. If this distribution fits the sample distribution well, we can use it as an estimate of the distribution of the spatial or space-time scan statistic under the null and use it to generate arbitrarily small p-values. Because we are interested in small p-values, it is particularly important that the estimate is good in the tail of the distribution.
We note that although this paper is focused on the spatial and space-time scan statistics, the general methodology that we propose in this article can easily be applied to other test statistics that rely on Monte Carlo hypothesis testing.
Scan statistics
The spatial scan statistic is used to identify potentially unusual clustering of events on a map. Events may, for example, be cases of disease incidence, prevalence or mortality. Suppose that there are p geographical coordinate pairs marked on a map, each representing a region. The analysis is conditioned on the total number of events, and under the null hypothesis, each event is independently and randomly located in a region with probability proportional to the population in the region, or to some covariate-adjusted population-based denominator. How best to adjust for covariates is a critical issue that we do not consider in this paper.
To detect clusters, we look at all unique subsets of events that lie within a collection of scanning windows. Although a scanning window of any shape may be used, we use circles throughout this paper. Consider all circles, C_{i,r}, where i = 1,..., p indicates the coordinates around which a circle may be centered, and r indicates its radius, which ranges from 0 to some pre-specified maximum. Based on the observed and expected number of events inside and outside the circle, the likelihood ratio is calculated for each distinct circle [1,2]. The circle with the maximum likelihood ratio is the most likely cluster, that is, the cluster that is least likely to have occurred by chance. For computational simplicity, the logarithm of the likelihood ratio is typically used instead of the ratio itself, and the log-likelihood ratio associated with this circle is defined as the scan statistic. Likelihoods can be calculated under different probability models, such as binomial or Poisson.
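For the Poisson model conditioned on the total number of cases, the log-likelihood ratio for a single window has a simple closed form based on the observed and expected counts inside the window [1]. The following is a minimal sketch of that calculation; the function names and the representation of candidate windows as (observed, expected) pairs are our own illustrative choices, not SaTScan's implementation.

```python
import math

def poisson_llr(c, e, C):
    """Poisson log-likelihood ratio for a window with c observed and e
    expected cases, out of C total cases; zero unless the window has
    more cases than expected (we scan for high-rate clusters)."""
    if c <= e or c == 0:
        return 0.0
    inside = c * math.log(c / e)
    outside = (C - c) * math.log((C - c) / (C - e)) if c < C else 0.0
    return inside + outside

def scan_statistic(windows, C):
    """The scan statistic: the maximum LLR over all candidate windows,
    each given as an (observed, expected) pair."""
    return max(poisson_llr(c, e, C) for c, e in windows)
```

A window with fewer cases than expected contributes zero, so the maximum is driven by the window with the most surprising excess of cases.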
The space-time scan statistic is analogously used to identify clusters in regions of space and time. Envisioning each discrete moment of time as a separate map, and the set of times as a stack of maps, the circles mentioned above can extend through the maps, forming cylinders that are the potential clusters. The cylinder with the maximum likelihood ratio is the most likely cluster, and its log-likelihood ratio is the space-time scan statistic. In space-time models, we consider Poisson as well as space-time permutation-based probability models [4,23].
For this study, we used the SaTScan™ [24] statistical software program, which calculates the scan statistic and implements Monte Carlo hypothesis testing to calculate a p-value. SaTScan™ allows the user to vary many parameters, including the maximum cluster size, the probability model, and the number of Monte Carlo replicates.
Monte Carlo hypothesis testing
When the underlying distribution of the test statistic is unknown, it is not possible to calculate a standard analytical p-value. When it is still possible to generate data under the null hypothesis, however, Monte Carlo hypothesis testing can be used to calculate Monte Carlo-based p-values, as proposed by Dwass [19]. To do this, one first calculates the test statistic from the real data. Then, a large number of random data sets are generated according to the null hypothesis, and the test statistic is calculated for each of these data sets. If one creates R random replicates of the data and r − 1 of those replicates have a test statistic greater than or equal to the test statistic from the real data, so that r is the rank of the test statistic from the real data among all R + 1 statistics, then the Monte Carlo-based p-value of the observed test statistic is r/(1 + R). If the test statistic from the real data set is among the highest 5 percent from the random data sets, then we can reject the null hypothesis at the α = 0.05 level of statistical significance.
As pointed out by several statisticians [19-21,25], a nice feature of Monte Carlo hypothesis testing is that the correct α-level can be maintained exactly. This is done simply by choosing R so that α(1 + R) is an integer. For example, if α = 0.05, then the probability of rejecting the null hypothesis is exactly 0.05 when R = 19, 99, 999, or 9999 random replicates are used. Following Barnard [20], suppose R = 19. Under the null hypothesis, the one real and 19 random data sets are generated in exactly the same way, so they are all generated from the same probability distribution. This, in turn, means that the ordering of the 20 test statistics is completely random, so that any single one is equally likely to be the highest, 2^{nd} highest, 3^{rd} highest, and so on, as well as equally likely to be the lowest. Hence, under the null hypothesis, the probability that the test statistic from the real data set has the highest value is 1/20 = 0.05, exactly. If it does have the highest test statistic, the Monte Carlo-based p-value is p = r/(1 + R) = 1/(1 + 19) = 1/20 = 0.05, and since p ≤ α = 0.05, the null hypothesis is rejected.
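As a concrete illustration, the rank-based p-value described above can be computed as follows (a minimal sketch; the function name is our own):

```python
def monte_carlo_pvalue(observed_stat, null_stats):
    """Monte Carlo-based p-value r/(R + 1), where R is the number of
    replicates and r is the rank of the observed statistic among all
    R + 1 test statistics (rank 1 = highest)."""
    R = len(null_stats)
    r = 1 + sum(1 for t in null_stats if t >= observed_stat)
    return r / (R + 1)
```

With R = 19 replicates and an observed statistic larger than all of them, this returns 1/20 = 0.05, matching the example above; the p-value can never fall below 1/(R + 1).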
Since the correct alpha level is maintained exactly whether R is small or large, one may think that the choice does not matter, but that is not the case, as fewer replicates means lower statistical power [25-27]. Hence, more replicates are always better. For α = 0.05, 999 replicates give very good power, but for smaller alpha levels, an increasingly higher number is needed [21,25].
One drawback of Monte Carlo hypothesis testing is that the p-value can never be smaller than 1/(1 + R). For example, with R = 999, the p-value is never less than 0.001. In most applications, with a 0.01 or 0.05 α-level, that is not a problem, as it is not necessary to differentiate between p-values of, say, 0.001 and 0.00001, and a relatively small number of replicates will be sufficient. However, in the context of daily analyses in real-time disease surveillance, a cluster with p ≤ 0.05 will occur by chance once every 20 days, on average. That is too often; the goal is to detect clusters of disease that are very unusual, and only the most unusual clusters will be investigated further. P-values on the order of 0.0001 or even smaller may be required before an investigation is launched. These p-values require at least 9999 Monte Carlo replicates, and even more are needed to ensure good statistical power [21,25]. The number of Monte Carlo replicates required is determined by the desired precision of the p-value, and each additional decimal place requires 10 times the number of Monte Carlo replicates and hence about 10 times the computing time.
The Gumbel distribution
The spatial scan statistic described above is the maximum value taken over many circle locations and sizes, so the collection of these statistics generated from the Monte Carlo replicates is a distribution of maximum values. Since our technique involves finding a distribution that closely matches the distribution of the replicated statistics, it is natural to consider one of the extreme value distributions as a candidate to approximate the desired distribution. The Gumbel distribution is a distribution of extreme values, either maxima or minima. Here, we limit ourselves to distributions of maxima, since the scan statistic is a maximum. The cdf for the Gumbel distribution of maxima is

F(x) = exp(−exp(−(x − μ)/β))

and the pdf is

f(x) = (1/β) exp(−(x − μ)/β) exp(−exp(−(x − μ)/β)).

Both μ and β can be estimated from a sample of observations using method of moments estimators as follows:

β̂ = s√6/π and μ̂ = x̄ − γβ̂,

where x̄ is the sample mean, s is the sample standard deviation, and γ ≈ 0.5772 is the Euler-Mascheroni constant [28,29].
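A minimal sketch of these method of moments estimators and the resulting upper-tail p-value, following the formulas above (the function names are our own):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel(sample):
    """Method of moments estimates (mu, beta) for the Gumbel (maxima)
    distribution: beta = s*sqrt(6)/pi, mu = mean - gamma*beta."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    beta = s * math.sqrt(6) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_pvalue(t, mu, beta):
    """Upper-tail area P(T >= t) = 1 - exp(-exp(-(t - mu)/beta))."""
    return 1.0 - math.exp(-math.exp(-(t - mu) / beta))
```

Fitting these two parameters to the replicate statistics and evaluating the upper tail at the observed statistic yields a p-value that is not bounded below by 1/(R + 1).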
Methods
To evaluate whether it is possible to obtain approximate small p-values with only a limited number of Monte Carlo replicates, we performed computer simulations fitting different probability distributions to the sample test statistics from the random data sets generated under the null. For our baseline setup, we use a map of 245 counties and county equivalents in the Northeast United States, with each county represented by its census-defined centroid [24]. Under the null hypothesis, the number of cases in each county is Poisson distributed. Conditioning on a total of 600 cases, the cases were randomly and independently assigned to a county with probability proportional to the 1994 female population in that county [30]. The maximum circle size of the scan statistic was set to 50% of the population.
First, we generated 100,000,000 Monte Carlo replicates of the data under the null hypothesis. The maximum log-likelihood ratio among all distinct circles is the statistic reported from each replicate. These 100,000,000 statistics generated our "gold standard" distribution of log-likelihood ratios, which we treat as if it were the actual distribution of the statistic under the null. Using this distribution, we find the 'true' log-likelihood ratio corresponding to a given α-level by finding the log-likelihood ratio for which the rank divided by 100,000,000 gives the desired α-level. For example, the log-likelihood ratio with a rank of 1,000,000 corresponds to an α-level of 0.01, since 1,000,000/100,000,000 = 0.01.
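The rank-based lookup just described can be sketched as follows, assuming the replicate statistics have been sorted in decreasing order (the function name is our own illustrative choice):

```python
def critical_value(null_stats_desc, alpha):
    """Critical value at level alpha from an empirical null distribution
    given in decreasing order: the statistic whose rank, divided by the
    number of replicates, equals alpha."""
    k = int(round(alpha * len(null_stats_desc)))
    return null_stats_desc[k - 1]
```

With the 100,000,000 gold-standard statistics and alpha = 0.01, this picks out the statistic of rank 1,000,000, as in the example above.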
Using the same parameter settings, we also generated sets of 999 Monte Carlo replicates of the data. We used the 999 maximum log-likelihood ratios obtained from the Monte Carlo replicates to fit normal, gamma, lognormal and Gumbel distributions to the data, to see if any of them would approximate the true distribution of these log-likelihood ratios. The first three were chosen because they are three of the most commonly used continuous distributions, without any deeper rationale. The extreme value Gumbel distribution was chosen because the test statistic is a maximum taken over many possible circles.
For the normal, gamma, and lognormal distributions we used maximum likelihood parameter estimates, and for the Gumbel distribution we used the method of moments estimators. For each set of 999 replicates, we get a different set of parameter estimates for each distribution. Figure 1 shows an example of a histogram from one set of 999 log-likelihood ratios with the estimated normal, lognormal, gamma, and Gumbel distributions. Even if the choice of distribution is the correct one, there is some error in the parameter estimates that affects the ability of the approach to accurately estimate the p-value. While we used the method of moments estimators for the Gumbel distribution, which are very easy to compute, it is possible that maximum likelihood estimates would have given slightly better results. The approach hinges on the ability of the estimated parameters to approximate the far tail of the true distribution of the statistic under the null, using only a relatively small number of randomly generated data sets. If the tail of the proposed distribution corresponds well to the far tail of the "gold standard" empirical distribution based on 100,000,000 random data sets, then the approach will be successful.
Figure 1. Histogram of 999 log-likelihood ratios from one set of Monte Carlo replicates with the estimated normal, lognormal, gamma, and Gumbel distributions.
The idea now is to use the fitted distribution function to obtain a p-value. The p-value is calculated by finding the area under this distribution to the right of the observed test statistic. For this to work, it is important that the right tail of this function is similar to the right tail of the true distribution, which is represented by the gold standard distribution from the 100,000,000 replicates. To check this, we used the cumulative distribution function (cdf) of each fitted distribution to find the critical value of the log-likelihood ratio corresponding to the nominal α-level. We then ranked each critical value among the 100,000,000 log-likelihood ratios in the gold standard distribution to find the true probability of rejecting the null at that critical value. We call this the rejection probability, and for the test to be unbiased, the expected value of this rejection probability must equal the nominal (desired) α-level. For each type of distribution, we did this 1000 times, which resulted in 1000 critical values and, therefore, 1000 rejection probabilities. The average of these rejection probabilities is an estimate of the true (actual) α-level, which is then compared with the nominal α-level.
Here we present a formal description of this process; a schematic diagram is shown in Figure 2. Let f_d(x) be the probability density function (pdf) of the distribution, d, obtained by using the log-likelihood ratios generated from the Monte Carlo replicates to estimate the parameters for d. Here we use d = normal, lognormal, gamma, and Gumbel. Let F_d(x) be the associated cdf. For the nominal α-level, α*, and each distribution, we first find the critical value x_{d,α*} for which F_d(x_{d,α*}) = 1 − α*; that is, we find x_{d,α*} = F_d^{−1}(1 − α*). Note that x_{d,α*} is the value for which the area under f_d(x) and to the right of x_{d,α*} is α*. Now, let f(x) be the true pdf of the log-likelihood ratios and let F(x) be the corresponding cdf. Then α_d = 1 − F(x_{d,α*}) is the rejection probability associated with x_{d,α*} from distribution d. Of course, F(x) is unknown, and here we use the observed 100,000,000 replicates as a proxy. Effectively, α_d = #{i : llr_i ≥ x_{d,α*}}/100,000,000, where llr_i is the i^{th} log-likelihood ratio in the gold-standard distribution. The average of the 1000 α_d's is our estimate of the true α-level, when the nominal α-level is α*. Note that the α-level found using Monte Carlo hypothesis testing, which we denote α_{MC}, is proven theoretically to be correct, so that α_{MC} = α*.
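For the Gumbel case, this two-step calculation of the rejection probability α_d can be sketched as follows (a simplified illustration with our own function names; in the actual experiments the gold standard contains 100,000,000 statistics):

```python
import math

def gumbel_inv_cdf(q, mu, beta):
    """Quantile function of the Gumbel (maxima) distribution."""
    return mu - beta * math.log(-math.log(q))

def rejection_probability(mu, beta, gold_standard, alpha_star):
    """Find x_crit, the critical value with fitted tail area alpha_star,
    then return alpha_d: its 'true' tail area in the gold-standard
    empirical null distribution."""
    x_crit = gumbel_inv_cdf(1.0 - alpha_star, mu, beta)
    return sum(1 for llr in gold_standard if llr >= x_crit) / len(gold_standard)
```

Averaging this quantity over many independently fitted replicate sets estimates the true α-level attained at the nominal level α*.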
Figure 2. Schematic for finding the rejection probability. First find the critical value x_{d,α*} from the fitted distribution, then use that value to find α_d, the 'true' probability of rejecting the null hypothesis in the 'gold standard' pdf obtained from the 100,000,000 replicates under the null.
In addition to the baseline setup, we repeated the same experiment with different numbers of cases, different maps, different probability models, different maximum circle sizes, and different numbers of Monte Carlo replicates. We also repeated the experiment adding a temporal dimension. The combinations that we used are summarized in Table 1. United States 3-digit zip code populations were obtained from the 1990 United States Census [31]. For each set of conditions in Table 1, we performed the experiment using 99, 999, and 9999 Monte Carlo replicates to generate the fitted distribution. We chose 5 nominal α-levels (α* = 0.05, 0.01, 0.001, 0.0001, 0.00001). All scan statistics were calculated using the SaTScan™ software.
Table 1. Combinations of settings used; bold indicates baseline settings.
Results
α-levels
The most important evaluation criterion in classical hypothesis testing is to ensure that the α-level (type I error) is correct. For the baseline experimental setup, each histogram in Figure 3 shows 1000 rejection probabilities, one for each set of 999 Monte Carlo replicates generated under the null, for each fitted distribution. The mean of these rejection probabilities is an estimate of the true α-level achieved through this process. Note that in order to maintain the correct α-level, it is enough for the mean of the distribution to equal α, while the variance around α is irrelevant. Therefore, to maintain the correct α-level, it is sufficient that the rejection probabilities are centered around the desired α-level. Among the distributions assessed here, the rejection probabilities from the Gumbel approximation are centered around the nominal α-levels; for the other distributions, the rejection probabilities are centered to the right of the nominal α-level. Thus, using any of these distributions other than the Gumbel distribution to approximate the underlying distribution of the spatial scan statistic results in anticonservatively biased α-levels, or in other words, p-values that are too small. This can also be seen in Figure 4, where we plot the ratio of the estimated true α-level to the nominal α-level for each distribution. Figure 4 shows that the Gumbel approximation has very little bias; when 999 or 9999 Monte Carlo replicates were used, the slight bias is conservative, whereas the bias from all other distributions is large and anticonservative.
Figure 3. Histograms of the rejection probabilities obtained from each of the fitted distributions. The vertical gray lines indicate the nominal α-level. For α = 0.05, α = 0.01, and α = 0.001, the scales are the same within those columns, with the scale marked at the bottom of each column. For α = 0.0001 and α = 0.00001, the scale is different for the normal distribution than for the other three distributions, as indicated at the bottom of the histograms.
Figure 4. Ratio of estimated α-levels to nominal α-levels for the four distributions and different numbers of Monte Carlo replicates used to estimate the parameters of each distribution.
The above results are all based on the baseline experimental setting. For the other settings, the true α-levels are presented in Tables 2 and 3 for the Gumbel distribution, showing the robustness of this method. Overall, the Gumbel approximation works quite well. The α-levels are slightly conservative whenever 999 or 9999 Monte Carlo replicates were used, and slightly anticonservative with 99 replicates. The bias becomes increasingly conservative with an increasing number of replicates, and this conservatism is more pronounced in the space-time settings. The worst spatial results using the Gumbel distribution were for the very extreme scenario with only 6 disease cases in all of the northeastern United States, where there is a non-negligible conservative bias. This is probably due to the extremely small sample size, and can be viewed as a worst-case scenario. When the maximum scanning window size was limited to contain at most one county, there is a non-negligible anticonservative bias. In this one-county experiment, every single county is evaluated as a potential cluster, but two or more counties are never combined to form a cluster. We are still taking a maximum when calculating the test statistic, but it is a maximum over only 245 rather than tens of thousands of potential clusters. This may explain why the Gumbel extreme value distribution does not work as well in this scenario.
Table 2. Estimated α-levels for the Gumbel approximation for different parameters, corresponding to five nominal α-levels.
Table 3. Estimated α-levels for the Gumbel approximation for different parameters for the space-time scan, corresponding to five nominal α-levels.
We also evaluated the other three distributions using all of the settings, and the bias was similar to, and as bad as, the results shown in Figures 3 and 4 (data not shown).
Statistical power
Statistical power is another important evaluation criterion. While the variance in the rejection probabilities depicted in Figure 3 does not influence the α-level, a larger variance will slightly reduce the statistical power of the test, as implied by the proof of Jöckel [27]. We informally compared the power of the Gumbel approximation to the power obtained from Monte Carlo hypothesis testing by looking at the variance of the rejection probabilities used to calculate α-levels. Using the parameter set from the baseline experimental settings, Figure 5 shows the same type of histograms of the rejection probabilities as in Figure 3, but only for the Gumbel approximation and Monte Carlo hypothesis testing, and using varying numbers of Monte Carlo replicates. The figure suggests that the variance in the rejection probabilities from the Gumbel approximation is smaller than the variance in the rejection probabilities from Monte Carlo hypothesis testing. Numerically, the ratio of the standard deviation of the rejection probabilities from the Gumbel approximation to the standard deviation of the rejection probabilities from Monte Carlo hypothesis testing is less than 1 when the same number of replicates is used for both methods, indicating that the scan statistic has greater power when the Gumbel approximation is used than with traditional Monte Carlo hypothesis testing. When we use 9999 replicates for Monte Carlo hypothesis testing and only 999 replicates for the Gumbel approximation, this ratio is about 1; the same is true if 999 replicates are used for Monte Carlo hypothesis testing and 99 replicates for the Gumbel approximation. This suggests that in this example, 10 times as many replicates are required to get about the same power with Monte Carlo hypothesis testing as with the Gumbel approximation. Approximately the same relationship was observed in the space-time and other spatial settings.
Figure 5. Histograms of the rejection probabilities obtained using the Gumbel approximation and from Monte Carlo hypothesis testing based on 99, 999, or 9,999 Monte Carlo replicates.
Discussion
We have shown that the Gumbel distribution can be used to obtain approximate p-values for the spatial and space-time scan statistics with great accuracy in the far tail of the distribution. This can be done using far less computation than required by the traditional method based on Monte Carlo hypothesis testing. As a rule of thumb, we suggest using at least 999 random Monte Carlo replicates to estimate the parameters of the Gumbel distribution when possible, but the approach also works with a smaller number of replicates.
A key question is then when to use Monte Carlo hypothesis testing versus Gumbel-based p-values. If the primary interest is in 0.05 and 0.01 alpha levels, or if the data set is small enough that it is easy to generate and calculate the test statistic for hundreds of thousands of simulated replicates, then traditional Monte Carlo hypothesis testing works well, and the benefit of Gumbel-based p-values is at most marginal. However, there are several instances in which the Gumbel approximation offers a clear advantage.
If the same number of replicates is used, then the Gumbel approximation has higher power than Monte Carlo hypothesis testing. When the number of replicates multiplied by the desired alpha level is large, the difference in power is marginal, but when it is small, there is a clear advantage for the Gumbel approximation. More specifically, the Gumbel approximation with one-tenth the number of replicates used by Monte Carlo hypothesis testing provides approximately the same statistical power, while using one-tenth of the computing time. Although there is some bias with the Gumbel approximation, the bias is small and, in most cases, conservative.
The most important benefit of the Gumbel approximation is its ability to calculate very small p-values with a modest number of simulated replicates. For example, as shown in Figure 4, p-values on the order of 0.00001 can be conservatively calculated with only 999 random replicates using the Gumbel approximation, while it would require more than 99,999 replicates to obtain the same precision from Monte Carlo hypothesis testing.
The attempts to calculate p-values with the normal, lognormal and gamma distributions all resulted in anticonservatively biased α-levels. The bias from these approximations was so large that we do not recommend their use for approximating p-values for spatial or space-time scan statistics.
The circular purely spatial scan statistic and the space-time scan statistic are only two examples of the many types of scan statistics. Other types include the elliptically shaped spatial scan statistic [32], nonparametric irregularly shaped spatial scan statistics [33-35], as well as spatial and space-time scan statistics for ordinal [36] and exponential data [37,38]. While we have not tested the Gumbel approximation for other types of scan statistics, these statistics are all maxima, and generating p-values for any of them relies on Monte Carlo hypothesis testing. It would be reasonable, then, to evaluate whether p-values for these other scan statistics could also be approximated with the Gumbel distribution.
The method used here, fitting a distribution to the statistics obtained from the Monte Carlo replicates, can be applied to any other application in which Monte Carlo hypothesis testing is used and where very small p-values are required or where computing time is limited. There is no reason to expect the Gumbel distribution to work well in all situations, however. In this particular example it makes sense intuitively, because the scan statistic generated in each replicate is a maximum over many circles and the Gumbel distribution is a distribution of maxima. Other applications may lend themselves naturally to a different choice of distribution.
To summarize, in applications in which the precision of small p-values is not important, we suggest using Monte Carlo hypothesis testing to obtain the p-values for the spatial scan statistic. In applications in which the precision of p-values is important, or where each replicate takes a long time to compute, Gumbel-based p-values are often advantageous for reasons of both computational speed and statistical power. To facilitate their use, Gumbel-based p-values have been added to version 9 of the freely available SaTScan software, which can be downloaded from http://www.satscan.org.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AA programmed the simulations, analyzed the simulation results, and drafted the manuscript. KK and MK conceived of the idea, provided guidance, and helped draft the manuscript. All authors read and approved the final manuscript.
Acknowledgements
This research was funded by grant #RO1CA095979 from the National Cancer Institute and by Models of Infectious Disease Agent Study (MIDAS) grant #U01GM076672 from the National Institute of General Medical Sciences.
References

1. Kulldorff M: A Spatial Scan Statistic. Commun Statist - Theory Meth 1997, 26(6):1481-1496.
2. Kulldorff M: Prospective time periodic geographical disease surveillance using a scan statistic. J R Statist Soc A 2001, 164(1):61-72.
3. Naus JI: Clustering of Points in Two Dimensions. Biometrika 1965, 52:263-267.
4. Kulldorff M, et al.: A Space-Time Permutation Scan Statistic for Disease Outbreak Detection. PLoS Med 2005, 2(3):e59.
5. Gaudart J, et al.: Oblique decision trees for spatial pattern detection: optimal algorithm and application to malaria risk. BMC Medical Research Methodology 2005, 5(1):22.
6. Gosselin P, et al.: The Integrated System for Public Health Monitoring of West Nile Virus (ISPHM-WNV): a real-time GIS for surveillance and decision-making. International Journal of Health Geographics 2005, 4(1):21.
7. Fukuda Y, et al.: Variations in societal characteristics of spatial disease clusters: examples of colon, lung and breast cancer in Japan. International Journal of Health Geographics 2005, 4(1):16.
8. Ozonoff A, et al.: Cluster detection methods applied to the Upper Cape Cod cancer data. Environmental Health: A Global Access Science Source 2005, 4(1):19.
9. Sheehan TJ, DeChello L: A space-time analysis of the proportion of late stage breast cancer in Massachusetts, 1988 to 1997. International Journal of Health Geographics 2005, 4(1):15.
10. DeChello L, Sheehan TJ: Spatial analysis of colorectal cancer incidence and proportion of late-stage in Massachusetts residents: 1995-1998. International Journal of Health Geographics 2007, 6(1):20.
11. Klassen A, Kulldorff M, Curriero F: Geographical clustering of prostate cancer grade and stage at diagnosis, before and after adjustment for risk factors. International Journal of Health Geographics 2005, 4(1):1.
12. Ozdenerol E, et al.: Comparison of spatial scan statistic and spatial filtering in estimating low birth weight clusters. International Journal of Health Geographics 2005, 4(1):19.
13. Kleinman KP, et al.: A model-adjusted space-time scan statistic with an application to syndromic surveillance. Epidemiol Infect 2005, 133:409-419.
14. Nordin JD, et al.: Simulated Anthrax Attacks and Syndromic Surveillance. Emerging Infectious Diseases 2005, 11(9):1394-1398.
15. Yih WK, et al.: Ambulatory-Care Diagnoses as Potential Indicators of Outbreaks of Gastrointestinal Illness - Minnesota. MMWR 2005, 54(Supplement):157-162.
16. Besculides M, et al.: Evaluation of School Absenteeism Data for Early Outbreak Detection - New York City, 2001-2002.
17. Heffernan R, et al.: Syndromic Surveillance in Public Health Practice, New York City. Emerging Infectious Diseases 2004, 10(5):858-864.
18. Sheridan HA, et al.: A temporal-spatial analysis of bovine spongiform encephalopathy in Irish cattle herds, from 1996 to 2000. Canadian Journal of Veterinary Research 2005, 69(1):19-25.
19. Dwass M: Modified randomization tests for nonparametric hypothesis. Ann Math Statist 1957, 28:181-187.
20. Barnard G: New Methods of Quality-Control. Journal of the Royal Statistical Society, Series A (General) 1963, 126(2):255-258.
21. Besag J, Diggle P: Simple Monte Carlo Tests for Spatial Pattern. Applied Statistics 1977, 26(3):327-333.
22. Kleinman K, Lazarus R, Platt R: A Generalized Linear Mixed Models Approach for Detecting Incident Clusters of Disease in Small Areas, with an Application to Biological Terrorism. Am J Epidemiol 2004, 159(3):217-224.
23. Kulldorff M, et al.: Evaluating cluster alarms: a space-time scan statistic and brain cancer in Los Alamos, New Mexico. Am J Public Health 1998, 88(9):1377-1380.
24. Kulldorff M, Information Management Services, Inc: SaTScan™ v5.1: Software for the spatial and space-time scan statistics. 2005.
25. Marriott FHC: Barnard's Monte Carlo Tests: How Many Simulations? Journal of the Royal Statistical Society, Series C (Applied Statistics) 1979, 28(1):75-77.
26. Hope A: A Simplified Monte Carlo Significance Test Procedure. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 1968, 30(3):582-598.
27. Jöckel KH: Finite Sample Properties and Asymptotic Efficiency of Monte Carlo Tests.
28. Gumbel EJ: Statistics of Extremes. Mineola, NY: Dover; 2004:375.
29. Coles S: An Introduction to Statistical Modeling of Extreme Values. Springer Series in Statistics. London: Springer-Verlag; 2001:208.
30. Kulldorff M, et al.: Breast Cancer Clusters in the Northeast United States: A Geographic Analysis. Am J Epidemiol 1997, 146(2):161-170.
31. U.S. Census Bureau [http://www.census.gov/]
32. Kulldorff M, et al.: An elliptic spatial scan statistic. Statistics in Medicine 2006, 25(22):3929-3943.
33. Duczmal L, Assunção R: A simulated annealing strategy for the detection of arbitrarily shaped spatial clusters. Computational Statistics & Data Analysis 2004, 45(2):269-286.
34. Tango T, Takahashi K: A flexibly shaped spatial scan statistic for detecting clusters. International Journal of Health Geographics 2005, 4(1):11.
35. Assunção R, et al.: Fast detection of arbitrarily shaped disease clusters. Statistics in Medicine 2006, 25(5):723-742.
36. Jung I, Kulldorff M, Klassen AC: A spatial scan statistic for ordinal data. Statistics in Medicine 2007, 26(7):1594-1607.
37. Huang L, Kulldorff M, Gregario D: A Spatial Scan Statistic for Survival Data. Biometrics 2007, 63:109-118.
38. Cook AJ, Gold DR, Li Y: Spatial Cluster Detection for Censored Outcome Data. Biometrics 2007, 63:540-549.