Gauss–Newton algorithm
The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise for instance in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.
== Description ==
Given ''m'' functions r = (''r''₁, …, ''r''ₘ) (often called residuals) of ''n'' variables ''β'' = (''β''₁, …, ''β''ₙ), with ''m'' ≥ ''n'', the Gauss–Newton algorithm iteratively finds the minimum of the sum of squares〔Björck (1996)〕
: S(\boldsymbol \beta) = \sum_{i=1}^m r_i^2(\boldsymbol \beta).
Starting with an initial guess \boldsymbol \beta^{(0)} for the minimum, the method proceeds by the iterations
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)})
where, if r and ''β'' are column vectors, the entries of the Jacobian matrix are
: (\mathbf{J_r})_{ij} = \frac{\partial r_i(\boldsymbol \beta)}{\partial \beta_j},
and the symbol ^\mathsf{T} denotes the matrix transpose.
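As an illustration, the update above can be written in a few lines of NumPy. The sketch below is a minimal, assumed implementation (the function names, the fixed iteration count, and the lack of a convergence test are simplifying choices, not part of the algorithm's definition):
 import numpy as np
 def gauss_newton(r, jac, beta0, n_iter=20):
     # Minimize S(beta) = sum of r_i(beta)^2 by the Gauss-Newton iteration.
     # r    : callable returning the residual vector (length m)
     # jac  : callable returning the m-by-n Jacobian J_r of r
     # beta0: initial guess beta^(0) (length n)
     beta = np.asarray(beta0, dtype=float)
     for _ in range(n_iter):
         J = jac(beta)
         # Solve the normal equations (J^T J) delta = J^T r(beta)
         # instead of forming the matrix inverse explicitly.
         delta = np.linalg.solve(J.T @ J, J.T @ r(beta))
         beta = beta - delta
     return beta
Solving the normal equations with a linear solver, rather than computing \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} explicitly, is the numerically preferable choice.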
If ''m'' = ''n'', the iteration simplifies to
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)})
which is a direct generalization of Newton's method in one dimension.
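For instance, taking ''m'' = ''n'' = 1 with the residual ''r''(''β'') = ''β''² − 2 (a hypothetical example, not drawn from the source text), the iteration above is exactly Newton's method converging to √2:
 # Newton's method recovered as the m = n case:
 # r(beta) = beta^2 - 2, Jacobian J_r = dr/dbeta = 2*beta.
 beta = 1.0                                    # initial guess beta^(0)
 for _ in range(6):
     beta = beta - (beta**2 - 2.0) / (2.0 * beta)
 print(beta)                                   # 1.41421356..., i.e. sqrt(2)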
In data fitting, where the goal is to find the parameters ''β'' such that a given model function ''y'' = ''f''(''x'', ''β'') best fits some data points (''x''ᵢ, ''y''ᵢ), the functions ''r''ᵢ are the residuals
: r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).
Then, the Gauss–Newton method can be expressed in terms of the Jacobian \mathbf{J_f} of the function ''f'' as
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)}).
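As a concrete illustration of this data-fitting form, the sketch below fits the model ''f''(''x'', ''β'') = ''β''₁ exp(''β''₂ ''x''); the sample data, the initial guess, and the iteration count are assumptions chosen only to demonstrate the update:
 import numpy as np
 x = np.array([0.0, 1.0, 2.0, 3.0])
 y = 2.0 * np.exp(0.5 * x)        # illustrative data from beta = (2, 0.5)
 def f(x, beta):
     return beta[0] * np.exp(beta[1] * x)
 def jac_f(x, beta):
     # m-by-2 Jacobian of the model f (not of the residuals r = y - f);
     # columns are df/dbeta_1 and df/dbeta_2.
     e = np.exp(beta[1] * x)
     return np.column_stack([e, beta[0] * x * e])
 beta = np.array([1.0, 0.4])      # initial guess beta^(0)
 for _ in range(10):
     J = jac_f(x, beta)
     res = y - f(x, beta)         # residuals r(beta)
     # beta^(s+1) = beta^(s) + (J_f^T J_f)^{-1} J_f^T r(beta^(s))
     beta = beta + np.linalg.solve(J.T @ J, J.T @ res)
 print(beta)                      # approaches [2.0, 0.5]
Because the residuals are defined as r = y − f, the Jacobian of r is −\mathbf{J_f}, which is why this update adds the correction term where the general formula subtracts it.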
