# Artificial neurons


An **artificial neuron** is a mathematical function conceived as a crude model, or abstraction, of a biological neuron. Artificial neurons are the constitutive units of an artificial neural network. Depending on the specific model used, an artificial neuron may be called a **semi-linear unit**, **Nv neuron**, **binary neuron**, **linear threshold function**, or **McCulloch–Pitts (MCP) neuron**. The artificial neuron receives one or more inputs (representing the dendrites) and sums them to produce an output (representing the biological neuron's axon). Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. Transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable, and bounded.

The artificial neuron transfer function should not be confused with a linear system's transfer function.

## Basic structure

For a given artificial neuron, let there be *m* + 1 inputs with signals *x*_{0} through *x*_{m} and weights *w*_{0} through *w*_{m}. Usually, the *x*_{0} input is assigned the value +1, which makes it a *bias* input with *w*_{k0} = *b*_{k}. This leaves only *m* actual inputs to the neuron: from *x*_{1} to *x*_{m}.

The output of the *k*th neuron is:

$$y_k = \varphi\left(\sum_{j=0}^{m} w_{kj} x_j\right)$$

where $\varphi$ (phi) is the transfer function.

The output is analogous to the axon of a biological neuron, and its value propagates, through a synapse, to the inputs of the next layer. It may also exit the system, possibly as part of an output vector.

The artificial neuron has no learning process as such: it cannot adjust its own parameters; the weights, and accordingly the threshold value, are calculated by an external training procedure.
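As a concrete illustration of this structure, here is a minimal Python sketch of a single neuron's computation; the weights, inputs, and choice of a logistic transfer function are illustrative assumptions, not values from the article:

```python
import math

def neuron_output(weights, inputs, transfer):
    """Output of a single artificial neuron: y_k = phi(sum_j w_kj * x_j).

    weights[0] is the bias weight b_k; a constant input x_0 = +1 is
    prepended so the bias is handled like any other weight.
    """
    x = [1.0] + list(inputs)                      # x_0 = +1 (bias input)
    u = sum(w * xi for w, xi in zip(weights, x))  # weighted sum of inputs
    return transfer(u)                            # apply the transfer function

def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

# Illustrative weights (bias first) and inputs:
print(neuron_output([-0.5, 0.8, 0.2], [1.0, 0.0], logistic))  # ~0.57
```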

## History

The first artificial neuron was the Threshold Logic Unit (TLU), first proposed by Warren McCulloch and Walter Pitts in 1943. As a transfer function, it employed a threshold, equivalent to using the Heaviside step function. Initially, only a simple model was considered, with binary inputs and outputs, some restrictions on the possible weights, and a more flexible threshold value. From the beginning it was noticed that any boolean function could be implemented by networks of such devices, which is easily seen from the fact that one can implement the AND and OR functions and combine them in disjunctive or conjunctive normal form.

Researchers also soon realized that cyclic networks, with feedback paths through neurons, could define dynamical systems with memory, but most of the research concentrated (and still does) on strictly feed-forward networks, because they present fewer difficulties.

One important and pioneering artificial neural network that used the linear threshold function was the perceptron, developed by Frank Rosenblatt. This model already considered more flexible weight values in the neurons, and was used in machines with adaptive capabilities. The representation of the threshold values as a bias term was introduced by Widrow in 1960.

In the late 1980s, when research on neural networks regained strength, neurons with more continuous transfer functions started to be considered. The possibility of differentiating the activation function allows the direct use of gradient descent and other optimization algorithms for the adjustment of the weights. Neural networks also started to be used as a general function approximation model.

## Types of transfer functions

The transfer function of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a *linear* transfer function has an equivalent single-layer network; a non-linear function is therefore necessary to gain the advantages of a multi-layer network.
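To see why, compose two linear (affine) layers with weight matrices $W_1, W_2$ and bias vectors $b_1, b_2$:

$$y = W_2(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2),$$

which is again a single affine map, so the two layers collapse into one with weights $W_2 W_1$ and bias $W_2 b_1 + b_2$.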

Below, *u* refers in all cases to the weighted sum of all the inputs to the neuron, i.e. for *n* inputs,

$$u = \sum_{i=1}^{n} w_i x_i = \mathbf{w}^{\mathsf{T}}\mathbf{x},$$

where **w** is a vector of *synaptic weights* and **x** is a vector of inputs.

### Step function

The output *y* of this transfer function is binary, depending on whether the input meets a specified threshold, *θ*. The "signal" is sent, i.e. the output is set to one, if the activation meets the threshold:

$$y = \begin{cases} 1 & \text{if } u \ge \theta, \\ 0 & \text{if } u < \theta. \end{cases}$$

This function is used in perceptrons and often shows up in many other models. It performs a division of the input space by a hyperplane. It is especially useful in the last layer of a network intended to perform binary classification of the inputs. It can be approximated by other sigmoidal functions by assigning large values to the weights.
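A minimal sketch of such a threshold neuron in Python; the threshold and weights are arbitrary values chosen only for illustration:

```python
def step(u, theta=0.5):
    """Step transfer function: output 1 iff the activation meets the threshold."""
    return 1 if u >= theta else 0

# The neuron fires exactly when the input falls on one side of the
# hyperplane w . x = theta:
weights = [0.4, 0.4]
print(step(sum(w * x for w, x in zip(weights, [1, 1]))))  # 1 (0.8 >= 0.5)
print(step(sum(w * x for w, x in zip(weights, [1, 0]))))  # 0 (0.4 <  0.5)
```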

### Linear combination

In this case, the output unit is simply the weighted sum of its inputs plus a *bias* term. A number of such linear neurons perform a linear transformation of the input vector. This is usually more useful in the first layers of a network. A number of analysis tools exist based on linear models, such as harmonic analysis, and they can all be used in neural networks with this linear neuron. The bias term allows us to make affine transformations to the data.

See: Linear transformation, Harmonic analysis, Linear filter, Wavelet, Principal component analysis, Independent component analysis, Deconvolution.
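As a sketch of the affine transformation such a layer performs (the matrices here are illustrative; NumPy is used only for brevity):

```python
import numpy as np

def linear_layer(W, b, x):
    """A layer of linear neurons is an affine transformation: y = W x + b."""
    return W @ x + b

W = np.array([[1.0, -1.0],
              [0.5,  2.0]])   # one row of weights per linear neuron
b = np.array([0.1, -0.2])     # bias terms enabling the affine shift
print(linear_layer(W, b, np.array([2.0, 1.0])))  # -> [1.1 2.8]
```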

### Sigmoid

A fairly simple non-linear function, a Sigmoid function such as the logistic function also has an easily calculated derivative, which can be important when calculating the weight updates in the network. It thus makes the network more easily manipulable mathematically, and was attractive to early computer scientists who needed to minimize the computational load of their simulations. It is commonly seen in multilayer perceptrons using a backpropagation algorithm.

See: Sigmoid function
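The easily calculated derivative mentioned above is σ′(*u*) = σ(*u*)(1 − σ(*u*)), which lets backpropagation reuse the already-computed forward value; a minimal sketch:

```python
import math

def sigmoid(u):
    """Logistic sigmoid transfer function."""
    return 1.0 / (1.0 + math.exp(-u))

def sigmoid_derivative(u):
    """Derivative reuses the forward value: sigma'(u) = sigma(u) * (1 - sigma(u))."""
    s = sigmoid(u)
    return s * (1.0 - s)

print(sigmoid(0.0), sigmoid_derivative(0.0))  # 0.5 0.25
```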

## Pseudocode algorithm

The following is a simple pseudocode implementation of a single TLU which takes boolean inputs (true or false), and returns a single boolean output when activated. An object-oriented model is used. No method of training is defined, since several exist. If a purely functional model were used, the class TLU below would be replaced with a function TLU with input parameters threshold, weights, and inputs that returned a boolean value.

```
class TLU defined as:
    data member threshold : number
    data member weights : list of numbers of size X

    function member fire(inputs : list of booleans of size X) : boolean defined as:
        variable T : number
        T ← 0
        for each i in 1 to X:
            if inputs(i) is true:
                T ← T + weights(i)
            end if
        end for each
        if T > threshold:
            return true
        else:
            return false
        end if
    end function
end class
```
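A runnable Python transcription of the same unit; the AND-gate threshold and weights below are one valid illustrative choice, not values from the article:

```python
class TLU:
    """Threshold Logic Unit over boolean inputs, mirroring the pseudocode above."""

    def __init__(self, threshold, weights):
        self.threshold = threshold
        self.weights = weights

    def fire(self, inputs):
        """Sum the weights of the active (True) inputs; fire iff the sum exceeds the threshold."""
        T = sum(w for w, active in zip(self.weights, inputs) if active)
        return T > self.threshold

# A TLU computing AND of two inputs:
and_gate = TLU(threshold=1.5, weights=[1.0, 1.0])
print(and_gate.fire([True, True]))   # True  (2.0 > 1.5)
print(and_gate.fire([True, False]))  # False (1.0 > 1.5 fails)
```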

## Spreadsheet example

Each row shows one training step. Column key, from the original sheet: **TH** = threshold, **LR** = learning rate, **X1**, **X2** = sensor values, **Z** = desired output, **w1**, **w2** = initial weights, **C1** = X1 × w1, **C2** = X2 × w2, **S** = C1 + C2, **N** = network output = IF(S > TH, 1, 0), **E** = error = Z − N, **R** = correction = LR × E, **W1** = R + w1, **W2** = R + w2 (final weights).

| TH | LR | X1 | X2 | Z | w1 | w2 | C1 | C2 | S | N | E | R | W1 | W2 |
|----|----|----|----|---|----|----|----|----|---|---|---|---|----|----|
| 0.5 | 0.2 | 0 | 0 | 0 | 0.1 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.3 |
| 0.5 | 0.2 | 0 | 1 | 1 | 0.1 | 0.3 | 0 | 0.3 | 0.3 | 0 | 1 | 0.2 | 0.3 | 0.5 |
| 0.5 | 0.2 | 1 | 0 | 1 | 0.3 | 0.5 | 0.3 | 0 | 0.3 | 0 | 1 | 0.2 | 0.5 | 0.7 |
| 0.5 | 0.2 | 1 | 1 | 1 | 0.5 | 0.7 | 0.5 | 0.7 | 1.2 | 1 | 0 | 0 | 0.5 | 0.7 |
| 0.5 | 0.2 | 0 | 0 | 0 | 0.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.7 |
| 0.5 | 0.2 | 0 | 1 | 1 | 0.5 | 0.7 | 0 | 0.7 | 0.7 | 1 | 0 | 0 | 0.5 | 0.7 |
| 0.5 | 0.2 | 1 | 0 | 1 | 0.5 | 0.7 | 0.5 | 0 | 0.5 | 0 | 1 | 0.2 | 0.7 | 0.9 |
| 0.5 | 0.2 | 1 | 1 | 1 | 0.7 | 0.9 | 0.7 | 0.9 | 1.6 | 1 | 0 | 0 | 0.7 | 0.9 |
| 0.5 | 0.2 | 0 | 0 | 0 | 0.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0.7 | 0.9 |
| 0.5 | 0.2 | 0 | 1 | 1 | 0.7 | 0.9 | 0 | 0.9 | 0.9 | 1 | 0 | 0 | 0.7 | 0.9 |
| 0.5 | 0.2 | 1 | 0 | 1 | 0.7 | 0.9 | 0.7 | 0 | 0.7 | 1 | 0 | 0 | 0.7 | 0.9 |
| 0.5 | 0.2 | 1 | 1 | 1 | 0.7 | 0.9 | 0.7 | 0.9 | 1.6 | 1 | 0 | 0 | 0.7 | 0.9 |

Supervised neural network training for an OR gate.

Note: each row's initial weights (w1, w2) equal the final weights (W1, W2) of the previous row.
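The same run can be reproduced with a short Python loop. This sketch follows the spreadsheet's exact update rule, in which the correction R is added to both weights on every row:

```python
def train_or_gate(th=0.5, lr=0.2, w=(0.1, 0.3), epochs=3):
    """Train an OR gate following the spreadsheet's rule:
    S = X1*w1 + X2*w2, N = IF(S > TH, 1, 0), E = Z - N, R = LR * E,
    with the correction R added to both weights."""
    w1, w2 = w
    samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), z in samples:
            s = x1 * w1 + x2 * w2        # S = C1 + C2
            n = 1 if s > th else 0       # N = IF(S > TH, 1, 0)
            r = lr * (z - n)             # R = LR * E
            # Round to keep the decimals exact, as in the spreadsheet:
            w1, w2 = round(w1 + r, 10), round(w2 + r, 10)
    return w1, w2

print(train_or_gate())  # (0.7, 0.9): the final weights in the table
```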

## Limitations

Artificial neurons of simple types, such as the McCulloch–Pitts model, are sometimes characterized as "caricature models", in that they are intended to reflect one or more neurophysiological observations, but without regard to realism.^{[1]}


## References

1. Hoppensteadt, F. C. and Izhikevich, E. M. (1997). *Weakly Connected Neural Networks*. Springer.

## Further reading

- McCulloch, W. and Pitts, W. (1943). *A logical calculus of the ideas immanent in nervous activity.* Bulletin of Mathematical Biophysics, 5:115–133.
- Samardak, A. S., Nogaret, A., Janson, N. B., Balanov, A. G., Farrer, I. and Ritchie, D. A. (2009). "Noise-Controlled Signal Transmission in a Multithread Semiconductor Neuron". Phys. Rev. Lett. 102: 226802. [1]

## External links

- [2] A good general overview

This page uses Creative Commons Licensed content from Wikipedia.