In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment. Instrumental variable methods allow consistent estimation when the explanatory variables (covariates) are correlated with the error terms of a regression relationship. Such correlation may occur when the dependent variable causes at least one of the covariates ("reverse" causation), when there are relevant explanatory variables which are omitted from the model, or when the covariates are subject to measurement error. In this situation, ordinary linear regression generally produces biased and inconsistent estimates. However, if an ''instrument'' is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditional on the other covariates.

In linear models, there are two main requirements for using an IV:
* The instrument must be correlated with the endogenous explanatory variables, conditional on the other covariates.
* The instrument cannot be correlated with the error term in the explanatory equation (conditional on the other covariates); that is, the instrument cannot suffer from the same problem as the original predicting variable.

==Definitions==
The theory of instrumental variables was first derived by Philip G. Wright, possibly in co-authorship with his son Sewall Wright, in his 1928 book ''The Tariff on Animal and Vegetable Oils''. Traditionally, an instrumental variable is defined as a variable ''Z'' that is correlated with the independent variable ''X'' and uncorrelated with the "error term" ''U'' in the equation

: <math>Y = \alpha + \beta X + U.</math>

However, this definition suffers from ambiguities in concepts such as "error term" and "independent variable," and has led to confusion as to the meaning of the equation itself, which was wrongly labeled "regression."

General definitions of instrumental variables, using counterfactual and graphical formalism, were given by Pearl (2000; p. 248). The graphical definition requires that ''Z'' satisfy the following conditions:

: <math>(Z \perp\!\!\!\perp Y)_{G_{\overline{X}}} \qquad (Z \not\!\perp\!\!\!\perp X)_{G}</math>

where <math>\perp\!\!\!\perp</math> stands for ''d''-separation (see Bayesian network) and <math>G_{\overline{X}}</math> is the graph in which all arrows entering ''X'' are cut off. The counterfactual definition requires that ''Z'' satisfy

: <math>Z \perp\!\!\!\perp Y_x \qquad Z \not\!\perp\!\!\!\perp X</math>

where ''Y''<sub>''x''</sub> stands for the value that ''Y'' would attain had ''X'' been ''x'' and <math>\perp\!\!\!\perp</math> stands for independence.

If there are additional covariates ''W'', then the above definitions are modified so that ''Z'' qualifies as an instrument if the given criteria hold conditional on ''W''. The essence of Pearl's definition is:
# The equations of interest are "structural," not "regression."
# The error term ''U'' stands for all exogenous factors that affect ''Y'' when ''X'' is held constant.
# The instrument ''Z'' should be independent of ''U''.
# The instrument ''Z'' should not affect ''Y'' when ''X'' is held constant (exclusion restriction).
# The instrument ''Z'' should not be independent of ''X''.

These conditions do not rely on a specific functional form of the equations and are therefore applicable to nonlinear equations, where ''U'' can be non-additive (see Non-parametric analysis). They are also applicable to a system of multiple equations, in which ''X'' (and other factors) affect ''Y'' through several intermediate variables.
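The role of these conditions can be illustrated with a small simulation. The sketch below is illustrative and not from the source; the variable names, coefficients, and sample size are assumptions. It generates data from the structural equation ''Y'' = α + β''X'' + ''U'' in which an unobserved confounder makes ''X'' correlated with ''U'', and compares the ordinary least squares slope with the simple IV (Wald) estimator Cov(''Z'', ''Y'')/Cov(''Z'', ''X'').

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: Y = alpha + beta*X + U, with true beta = 2.0 (assumed values).
alpha, beta = 1.0, 2.0

# Unobserved confounder C affects both X and U, so X is correlated with U.
c = rng.normal(size=n)
z = rng.normal(size=n)                    # instrument: affects X, independent of C
x = 0.8 * z + 1.0 * c + rng.normal(size=n)
u = 1.5 * c + rng.normal(size=n)          # "error term" absorbing the confounder
y = alpha + beta * x + u

# OLS slope Cov(X, Y) / Var(X): biased because Cov(X, U) != 0.
beta_ols = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]

# IV (Wald) estimator Cov(Z, Y) / Cov(Z, X): consistent under the conditions above.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true beta = {beta:.2f}, OLS = {beta_ols:.2f}, IV = {beta_iv:.2f}")
</syntaxhighlight>

In this sketch the OLS slope converges to a value above the true β, while the IV ratio converges to β; if ''Z'' also affected ''Y'' directly, or shared a cause with ''Y'', conditions 3–4 would fail and the IV estimate would no longer be consistent.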
Note that an instrumental variable need not be a cause of ''X''; a proxy of such a cause may also be used, provided it satisfies conditions 1–5. Note also that the exclusion restriction (condition 4) is redundant; it follows from conditions 2 and 3.
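The proxy remark can be illustrated with a variant of the simulation above (again a sketch with assumed names and coefficients, not taken from the source). Here ''Z'' does not enter the equation for ''X'' at all; it is a noisy measurement of an unobserved cause ''W'' of ''X'', yet the IV estimator still recovers the structural coefficient because ''Z'' remains correlated with ''X'' and independent of ''U''.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
alpha, beta = 1.0, 2.0          # assumed structural parameters

c = rng.normal(size=n)          # unobserved confounder of X and Y
w = rng.normal(size=n)          # unobserved cause of X
z = w + 0.5 * rng.normal(size=n)  # observed proxy of W; not itself a cause of X
x = 1.0 * w + 1.0 * c + rng.normal(size=n)
u = 1.5 * c + rng.normal(size=n)
y = alpha + beta * x + u

# IV (Wald) estimator using the proxy as instrument.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(f"true beta = {beta:.2f}, IV via proxy instrument = {beta_iv:.2f}")
</syntaxhighlight>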