Bcpnn

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem: node activations represent probabilities ("confidences") in the presence of input features or categories, synaptic weights are based on estimated correlations, and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH. The basic network is a feedforward neural network with continuous activation, which can be extended to include spiking units and hypercolumns representing mutually exclusive or interval-coded features. The network has been used for classification tasks and data mining, for example for the discovery of adverse drug reactions. The units can also be connected as a recurrent neural network (losing the strict interpretation of their activations as probabilities)〔Lansner, A., "A recurrent Bayesian ANN capable of extracting prototypes from unlabeled and noisy examples", in Artificial Neural Networks, 1991, Espoo, Finland. Elsevier, Amsterdam.〕, making it a possible abstract model of biological neural networks and memory.〔Anders Sandberg, "Bayesian Attractor Neural Network Models of Memory", Ph.D. dissertation, Stockholm University, Department of Numerical Analysis and Computer Science, June 2003, TRITA-NA-0310, ISBN 91-7265-684-0.〕
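The feedforward case described above can be sketched as a small classifier. This is a minimal illustration under stated assumptions, not the authors' reference implementation: binary input features, one unit per class, weights taken as the log of the co-occurrence ratio P(x,y)/(P(x)P(y)) estimated from smoothed counts, so that summing weighted activations plus a log-prior bias yields (after normalization) posterior class probabilities, naive-Bayes style. The class name, parameters, and smoothing constant are illustrative choices.

```python
import numpy as np

class BCPNNClassifier:
    """Sketch of a feedforward BCPNN-style classifier (assumed simplification)."""

    def __init__(self, n_features, n_classes, alpha=1.0):
        # Laplace-smoothed counters for P(x), P(y), and P(x, y)
        self.alpha = alpha
        self.cxy = np.full((n_features, n_classes), alpha)
        self.cx = np.full(n_features, alpha)
        self.cy = np.full(n_classes, alpha)
        self.n = alpha

    def train(self, X, y):
        # Accumulate co-occurrence statistics from binary feature vectors
        for xi, yi in zip(X, y):
            self.n += 1
            self.cy[yi] += 1
            self.cx += xi
            self.cxy[:, yi] += xi

    def predict_proba(self, x):
        px = self.cx / self.n
        py = self.cy / self.n
        pxy = self.cxy / self.n
        # Weights based on estimated correlations between feature and class
        w = np.log(pxy / np.outer(px, py))
        bias = np.log(py)
        # Spread of activation = accumulation of log-posterior support
        support = bias + x @ w
        p = np.exp(support - support.max())
        return p / p.sum()  # normalized posterior "confidences"
```

For example, after training on samples where feature 0 co-occurs with class 0 and feature 1 with class 1, `predict_proba([1, 0])` assigns most of its probability mass to class 0.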
Excerpt source: the free encyclopedia Wikipedia.