An ''F''-test is any statistical test in which the test statistic has an ''F''-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact ''F''-tests mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Sir Ronald A. Fisher, who initially developed the statistic as the variance ratio in the 1920s.

==Common examples of ''F''-tests==
Common examples of ''F''-tests include tests of the following hypotheses:
* The hypothesis that the means of a given set of normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known ''F''-test, and it plays an important role in the analysis of variance (ANOVA); see the sketch below.
* The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
* The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other; the usual test statistic is given below.

In addition, some statistical procedures, such as Scheffé's method for multiple comparisons adjustment in linear models, also use ''F''-tests.
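As an illustration of the first case, the following is a minimal sketch of a one-way ANOVA ''F''-test using SciPy; the three groups and their values are hypothetical, chosen only for demonstration.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Three groups assumed drawn from normal populations with a common
# standard deviation; the null hypothesis is that the population
# means are equal. (Illustrative data, not from the article.)
group_a = np.array([4.8, 5.1, 5.3, 4.9, 5.0])
group_b = np.array([5.5, 5.7, 5.4, 5.8, 5.6])
group_c = np.array([5.0, 5.2, 4.9, 5.1, 5.3])

# scipy.stats.f_oneway computes the ratio of between-group to
# within-group mean squares, which has an F-distribution under
# the null hypothesis.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
</syntaxhighlight>

A large ''F'' statistic (and correspondingly small p-value) indicates that the variation between group means is large relative to the variation within groups, giving evidence against equality of the means.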
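For the third case, with two nested linear models fitted by least squares to <math>n</math> observations, the test statistic is built from the residual sums of squares of the two fits. If the simpler model has <math>p_1</math> parameters and residual sum of squares <math>\mathrm{RSS}_1</math>, and the fuller model has <math>p_2 > p_1</math> parameters and residual sum of squares <math>\mathrm{RSS}_2</math>, then

:<math>F = \frac{(\mathrm{RSS}_1 - \mathrm{RSS}_2)/(p_2 - p_1)}{\mathrm{RSS}_2/(n - p_2)},</math>

which has an ''F''-distribution with <math>(p_2 - p_1,\; n - p_2)</math> degrees of freedom under the null hypothesis that the simpler model is adequate.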