The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required. Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.

The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.

== Description ==
Given ''m'' functions '''r''' = (''r''<sub>1</sub>, …, ''r''<sub>''m''</sub>) (often called residuals) of ''n'' variables ''β'' = (''β''<sub>1</sub>, …, ''β''<sub>''n''</sub>), with ''m'' ≥ ''n'', the Gauss–Newton algorithm iteratively finds the minimum of the sum of squares (Björck 1996)

: <math>S(\boldsymbol\beta) = \sum_{i=1}^m r_i(\boldsymbol\beta)^2.</math>

Starting with an initial guess <math>\boldsymbol\beta^{(0)}</math> for the minimum, the method proceeds by the iterations

: <math>\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right),</math>

where, if '''r''' and ''β'' are column vectors, the entries of the Jacobian matrix are

: <math>\left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i\left(\boldsymbol\beta^{(s)}\right)}{\partial \beta_j},</math>

and the symbol <math>{}^\mathsf{T}</math> denotes the matrix transpose.

If ''m'' = ''n'', the iteration simplifies to

: <math>\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf{J_r}^{-1} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right),</math>

which is a direct generalization of Newton's method in one dimension.

In data fitting, where the goal is to find the parameters ''β'' such that a given model function ''y'' = ''f''(''x'', ''β'') best fits some data points (''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), the functions ''r''<sub>''i''</sub> are the residuals

: <math>r_i(\boldsymbol\beta) = y_i - f\left(x_i, \boldsymbol\beta\right).</math>

Then, the Gauss–Newton method can be expressed in terms of the Jacobian <math>\mathbf{J_f}</math> of the function ''f'' as

: <math>\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right),</math>

where the minus sign of the general iteration becomes a plus because <math>\mathbf{J_r} = -\mathbf{J_f}</math> for these residuals.
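To make the iteration concrete, here is a minimal NumPy sketch of the update step, applied to the data-fitting formulation above. It is an illustration under stated assumptions, not part of the original article: the function names (`gauss_newton`, `residuals`, `jacobian`), the tolerance and iteration limit, and the synthetic data are all hypothetical.

```python
import numpy as np

def gauss_newton(r, jac, beta0, tol=1e-8, max_iter=50):
    """Minimize sum(r(beta)**2) by Gauss-Newton iteration.

    r:     maps beta (n,) to the residual vector (m,)
    jac:   maps beta (n,) to the (m, n) Jacobian of r
    beta0: initial guess (n,)
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        res = r(beta)
        J = jac(beta)
        # Solve the linearized least-squares problem min ||J @ delta - res||^2,
        # equivalent to the normal equations (J^T J) delta = J^T res but
        # numerically safer than forming J^T J explicitly.
        delta, *_ = np.linalg.lstsq(J, res, rcond=None)
        beta = beta - delta          # the Gauss-Newton step
        if np.linalg.norm(delta) < tol:
            break
    return beta

# Example: fit the model y = b0 * exp(b1 * x) to synthetic data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 2.7, 3.6, 4.9, 6.6])

def residuals(beta):
    # r_i = y_i - f(x_i, beta), as in the data-fitting formulation above
    return y - beta[0] * np.exp(beta[1] * x)

def jacobian(beta):
    # (J_r)_{ij} = d r_i / d beta_j; the minus sign comes from r = y - f
    df_db0 = np.exp(beta[1] * x)
    df_db1 = beta[0] * x * np.exp(beta[1] * x)
    return -np.column_stack([df_db0, df_db1])

beta_hat = gauss_newton(residuals, jacobian, beta0=[1.0, 0.5])
```

Using `np.linalg.lstsq` rather than an explicit matrix inverse keeps each step well defined even when the Jacobian is poorly conditioned, while computing exactly the step the iteration formula prescribes.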