BCM theory
BCM theory, BCM synaptic modification, or the BCM rule, named after Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. Owing to its successful experimental predictions, the theory is arguably among the most accurate models of synaptic plasticity to date.

Development

In 1949, Donald Hebb proposed a working mechanism for memory and computational adaptation in the brain called Hebbian learning, often summarized by the maxim that cells that fire together, wire together. This rule formed the basis of the modern neural-network model of the brain, theoretically capable of Turing-complete computation, and thus became a standard materialist model for the mind.

However, Hebb's rule has problems: it provides no mechanism for connections to weaken and no upper bound on how strong they can become. In other words, the model is unstable, both theoretically and computationally. Later modifications gradually improved Hebb's rule, normalizing it and allowing for synaptic decay, in which no activity or unsynchronized activity between neurons results in a loss of connection strength. New biological evidence brought this activity to a peak in the 1970s, when theorists formalized various approximations in the theory, such as using firing frequency instead of potential to determine neuron excitation, and assuming ideal and, more importantly, linear synaptic integration of signals. That is, there is no unexpected behavior in summing input currents to determine whether or not a cell will fire.
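The instability is easy to see numerically. A minimal sketch (with an arbitrary input pattern and learning rate, chosen only for illustration) shows the weight norm under the plain Hebbian update growing without bound:

```python
import numpy as np

# Plain Hebbian rule: dm/dt = c * d, with postsynaptic output c = m . d.
# With any persistent input, the weights can only grow.
d = np.array([1.0, 0.5])      # fixed input pattern (illustrative)
m = np.array([0.1, 0.1])      # initial synaptic weights
dt = 0.1                      # step size (illustrative)

norms = []
for _ in range(200):
    c = float(m @ d)          # postsynaptic output
    m = m + dt * c * d        # Hebbian update: strengthen co-active synapses
    norms.append(float(np.linalg.norm(m)))

# The weight norm increases at every step: the rule has no decay term
# and no upper bound, so the dynamics diverge.
print(norms[0], norms[-1])
```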

These approximations resulted in the basic form of BCM below in 1979, but the final step came in the form of mathematical analysis to prove stability and computational analysis to prove applicability, culminating in Bienenstock, Cooper, and Munro's 1982 paper.

Since then, experiments have shown evidence for BCM behavior in both the visual cortex and the hippocampus, the latter of which plays an important role in the formation and storage of memories. Both of these areas are well-studied experimentally, but both theory and experiment have yet to establish conclusive synaptic behavior in other areas of the brain. Furthermore, a biological mechanism for synaptic plasticity in BCM has yet to be established.[1]

Theory

The basic BCM rule takes the form

\frac{d m_j(t)}{d t}=\phi(c(t))d_j(t)-\epsilon m_j(t),

where m_j is the synaptic weight of the jth synapse, d_j is that synapse's input current, c(t)=\textbf{m}(t)\cdot\textbf{d}(t) is the postsynaptic output (the weighted sum of the presynaptic inputs), \phi is the postsynaptic activation function that changes sign at some output threshold \theta_M, and \epsilon is the (often negligible) time constant of uniform decay of all synapses.
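As a sketch of how the rule can be applied in discrete time, the following forward-Euler step implements the equation above. The particular activation function and parameter values are illustrative choices, not from the original paper:

```python
import numpy as np

def bcm_step(m, d, phi, dt=0.01, eps=0.0):
    """One forward-Euler step of dm_j/dt = phi(c) * d_j - eps * m_j,
    where c = m . d is the postsynaptic output."""
    c = float(m @ d)
    return m + dt * (phi(c) * d - eps * m)

# Toy activation that changes sign at a fixed threshold theta_M = 1.0
# (in the full theory the threshold is variable; see below):
theta_M = 1.0
phi = lambda c: c * (c - theta_M)

m = np.array([0.5, 0.5])
d = np.array([1.0, 0.0])      # only the first synapse is active
m_next = bcm_step(m, d, phi)  # c = 0.5 < theta_M, so the active synapse weakens
```

Because the output c falls below the threshold, the active synapse is depressed while the inactive one is unchanged.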

This model is merely a modified form of the Hebbian learning rule, \dot{m_j}=c d_j, and requires a suitable choice of activation function, or rather of the output threshold, to avoid the Hebbian problems of instability. This threshold was derived rigorously in the BCM paper: noting that c(t)=\textbf{m}(t)\cdot\textbf{d}(t) and approximating the average output as \bar{c}(t) \approx \textbf{m}(t)\cdot\bar{\textbf{d}}(t), it is sufficient for stable learning that

\sgn\phi(c,\bar{c})=\sgn\left(c-\left(\frac{\bar{c}}{c_0}\right)^p\bar{c}\right) ~~ \textrm{for} ~~ c>0, and
\phi(0,\bar{c})=0 ~~ \textrm{for all} ~~ \bar{c},

or equivalently, that the threshold \theta_M(\bar{c}) = (\bar{c}/c_0)^p\bar{c}, where p and c_0 are fixed positive constants.[2]

When implemented, the theory is often taken such that

\phi(c,\bar{c})=c(c-\theta_M) ~~ \textrm{and} ~~ \theta_M=\langle c^2 \rangle = \frac{1}{\tau}\int_{-\infty}^t c^2(t^\prime)e^{-(t-t^\prime)/\tau}d t^\prime,

where angle brackets are a time average and \tau is the time constant of selectivity.
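A minimal simulation of this implementation shows how the sliding threshold stabilizes the dynamics. The learning and averaging rates below are illustrative choices, not values from the literature (the threshold must adapt faster than the weights); with a single repeated input pattern the output settles where c equals the threshold, instead of growing without bound as under Hebb's rule:

```python
import numpy as np

# Discrete-time BCM with phi(c) = c * (c - theta_M) and a sliding
# threshold theta_M tracking <c^2> as an exponential moving average.
d = np.array([1.0, 0.5])      # single fixed input pattern (illustrative)
m = np.array([0.2, 0.2])      # initial synaptic weights
theta_M = 0.0
eta, alpha = 0.002, 0.05      # weight learning rate, threshold tracking rate

cs = []
for _ in range(20000):
    c = float(m @ d)                          # postsynaptic output
    m = m + eta * c * (c - theta_M) * d       # BCM weight update
    theta_M += alpha * (c**2 - theta_M)       # sliding threshold <c^2>
    cs.append(c)

# With one repeated pattern, the dynamics settle where
# c = theta_M = <c^2>, i.e. c -> 1: the moving threshold bounds the
# growth that Hebb's rule cannot.
print(round(cs[-1], 3))
```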

The model has drawbacks, as it requires both long-term potentiation and long-term depression, or increases and decreases in synaptic strength, something which has not been observed in all cortical systems. Further, it requires a variable activation threshold and depends strongly on stability of the selected fixed points c_0 and p. However, the model's strength is that it incorporates all these requirements from independently-derived rules of stability, such as normalizability and a decay function with time proportional to the square of the output.[3]

Experiment

The first major experimental confirmation of BCM came in 1992, in investigations of LTP and LTD in the hippocampus. The data showed qualitative agreement with the final form of the BCM activation function.[4] This experiment was later replicated in the visual cortex, which BCM was originally designed to model.[5] This work provided further evidence of the necessity of a variable threshold function for stability in Hebbian-type learning (BCM or otherwise).

Experimental evidence remained non-specific to BCM until Rittenhouse et al. confirmed BCM's prediction of synapse modification in the visual cortex when one eye is selectively closed. Specifically,

\log\left(\frac{m_{\rm closed}(t)}{m_{\rm closed}(0)}\right) \sim -\overline{n^2}t,

where \overline{n^2} describes the variance in spontaneous activity or noise in the closed eye and t is the time since closure. The experiments agreed with the general shape of this prediction and provided an explanation for the dynamics of monocular versus binocular eye closure.[6] The experimental results are far from conclusive, but so far they have favored BCM over competing theories of plasticity.
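The prediction above amounts to exponential decay of the closed-eye weight at a rate set by the noise variance. A sketch, with an illustrative proportionality constant that is not from the paper:

```python
import numpy as np

def closed_eye_weight(m0, noise_var, t, k=1.0):
    """Predicted closed-eye synaptic weight at time t after closure:
    log(m(t)/m(0)) ~ -noise_var * t, so m(t) = m(0) * exp(-k * noise_var * t).
    The constant k is an illustrative placeholder."""
    return m0 * np.exp(-k * noise_var * t)

m0 = 1.0
low_noise = closed_eye_weight(m0, noise_var=0.1, t=5.0)
high_noise = closed_eye_weight(m0, noise_var=0.4, t=5.0)
# Higher spontaneous noise in the deprived eye drives faster depression.
```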

Applications

While the algorithm of BCM is too complicated for large-scale parallel distributed processing, it has been put to use in lateral networks with some success.[7] Furthermore, some existing computational network learning algorithms have been made to correspond to BCM learning.[8]

References

  1. Cooper, L.N. (2000). Memories and memory: A physicist's approach to the brain. International Journal of Modern Physics A 15 (26): 4069–4082.
  2. Bienenstock, Elie L., Leon Cooper, Paul Munro (January 1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience 2 (1): 32–48.
  3. Intrator, Nathan The BCM theory of synaptic plasticity. Neural Computation. School of Computer Science, Tel-Aviv University. URL accessed on 2007-11-11.
  4. Dudek, Serena M., Mark Bear (1992). Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade. Proc. Natl. Acad. Sci. 89 (10): 4363–4367.
  5. Kirkwood, Alfredo, Marc G. Rioult, Mark F. Bear (1996). Experience-dependent modification of synaptic plasticity in rat visual cortex. Nature 381: 526–528.
  6. Rittenhouse, Cynthia D., Harel Z. Shouval, Michael A. Paradiso, Mark F. Bear (1999). Monocular deprivation induces homosynaptic long-term depression in visual cortex. Nature 397: 347.
  7. Intrator, Nathan BCM Learning Rule, Comp Issues. Neural Computation. School of Computer Science, Tel-Aviv University. URL accessed on 2007-11-11.
  8. Baras, Dorit, Ron Meir (2007). Reinforcement Learning, Spike-Time-Dependent Plasticity, and the BCM Rule. Neural Computation 19 (19): 2245–2279.

This page uses Creative Commons Licensed content from Wikipedia.