A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures are a subset of general agent architectures. The term architecture implies an approach that attempts to model not only behavior, but also the structural properties of the modelled system.

Characterization

Common among researchers on cognitive architectures is the belief that understanding (human, animal or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system should take: some researchers assume that it will necessarily be a symbolic computational system, whereas others argue for alternative models such as connectionist or dynamical systems. Cognitive architectures can be characterized by certain properties or goals, as follows, though there is no general agreement on all aspects:

  1. Implementation not just of various aspects of cognitive behavior but of cognition as a whole (holism, e.g. Unified theory of cognition). This is in contrast to cognitive models, which focus on a particular competence, such as a kind of problem solving or a kind of learning.
  2. The architecture often tries to reproduce the behavior of the modelled system (typically a human), in such a way that the timing behavior (reaction times) of the architecture and of the modelled cognitive system can be compared in detail (see the sketch after this list). Other cognitive limitations are often modeled as well, e.g. limited working memory, attention, or effects of cognitive load.
  3. Robust behavior in the face of error, the unexpected, and the unknown. (see Graceful degradation).
  4. Learning (not for all cognitive architectures)
  5. Parameter-free: The system does not depend on parameter tuning (in contrast to Artificial neural networks) (not for all cognitive architectures)
  6. Some early theories such as Soar and ACT-R originally focused only on the 'internal' information processing of an intelligent agent, including tasks such as reasoning, planning, problem solving, and concept learning. More recently many architectures (including Soar, ACT-R, PreAct, ICARUS, CLARION, FORR) have expanded to include perception, action, and also affective states and processes including motivation, attitudes, and emotions.
  7. On some theories the architecture may be composed of different kinds of sub-architectures (often described as 'layers' or 'levels') where the layers may be distinguished by types of function, types of mechanism and representation used, types of information manipulated, or possibly evolutionary origin. These are hybrid architectures (e.g., CLARION).
  8. Some theories allow different architectural components to be active concurrently, whereas others assume a switching mechanism that selects one component or module at a time, depending on the current task. Concurrency is normally required for an architecture for an animal or robot that has multiple sensors and effectors in a complex and dynamic environment, but not in all robotic paradigms.
  9. Most theories assume that an architecture is fixed and only the information stored in various subsystems can change over time (e.g. Langley et al., below), whereas others allow architectures to grow, e.g. by acquiring new subsystems or new links between subsystems (e.g. Minsky and Sloman, below).
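
To make point 2 above concrete, here is a minimal Python sketch of how an architecture's timing predictions might be compared with measured human reaction times. The serial-stage equation, the 50 ms cost per cognitive step, and the "observed" data are illustrative placeholders, not values from any particular architecture or experiment.

    # Toy comparison of model-predicted and observed reaction times.
    # All numbers here are hypothetical placeholders for illustration.

    def predicted_rt(n_steps, encode_ms=85.0, step_ms=50.0, motor_ms=210.0):
        """Reaction time (ms) predicted by a toy serial-stage model."""
        return encode_ms + n_steps * step_ms + motor_ms

    # Hypothetical observed mean RTs (ms) for tasks assumed to need 1..4 steps.
    observed = {1: 340.0, 2: 395.0, 3: 460.0, 4: 500.0}

    for steps, human_rt in observed.items():
        model_rt = predicted_rt(steps)
        print(f"steps={steps}: model={model_rt:.0f} ms, "
              f"human={human_rt:.0f} ms, error={model_rt - human_rt:+.0f} ms")
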

It is important to note that cognitive architectures don't have to follow a top-down approach to cognition (cf. Top-down and bottom-up design).


Distinctions

Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as in, e.g., the Information Processing Language (for example Soar, based on the unified theory of cognition, or similarly ACT-R). Many of these architectures rest on the analogy that the mind works like a computer. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). A further distinction is whether the architecture is centralized, with a neural correlate of a processor at its core, or decentralized (distributed). The decentralized flavor became popular in the mid-1980s under the names parallel distributed processing and connectionism, a prime example being neural networks. A further design issue is the choice between a holistic and an atomistic, or (more concretely) modular, structure. By analogy, this extends to issues of knowledge representation.
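
As an illustration of the rule-based, symbolic style, the following Python sketch runs a toy match-select-apply cycle over a small working memory. The rules, the memory format, and the conflict-resolution policy are invented for this example and do not reproduce Soar, ACT-R, or any other real architecture.

    # Toy production system: repeatedly match rules against working memory,
    # select one, and apply its conclusions. Purely illustrative.

    working_memory = {("goal", "make-tea"), ("have", "kettle")}

    # Each rule: (name, set of condition facts, set of facts to add)
    rules = [
        ("boil-water", {("goal", "make-tea"), ("have", "kettle")},    {("have", "hot-water")}),
        ("steep-tea",  {("goal", "make-tea"), ("have", "hot-water")}, {("have", "tea")}),
    ]

    fired = set()
    while True:
        # Match: rules whose conditions hold and that have not fired yet.
        matches = [r for r in rules if r[1] <= working_memory and r[0] not in fired]
        if not matches:
            break
        # Select: trivial conflict resolution -- take the first match.
        name, _, additions = matches[0]
        # Apply: add the rule's conclusions to working memory.
        working_memory |= additions
        fired.add(name)
        print(f"fired {name}; working memory now contains {len(working_memory)} facts")
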

In traditional AI, intelligence is often programmed from above: the programmer is the creator, who makes something and imbues it with intelligence. Biologically inspired computing, on the other hand, sometimes takes a more bottom-up, decentralized approach; bio-inspired techniques often involve specifying a set of simple generic rules or a set of simple nodes, from whose interaction the overall behavior emerges. The hope is to build up complexity until the end result is something markedly complex (see complex systems).
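
As a minimal illustration of this bottom-up idea, the Python sketch below applies one trivial local rule (elementary cellular automaton rule 110) to a row of cells; the global pattern that emerges is far richer than anything written into the rule itself. It is chosen only to show how simple generic rules plus interaction can yield emergent complexity; it is not a cognitive model.

    # Elementary cellular automaton, rule 110: each cell's next state depends
    # only on itself and its two neighbours, yet complex structures emerge.

    RULE = 110
    width, steps = 64, 24
    cells = [0] * width
    cells[width // 2] = 1  # single active cell as the seed

    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> ((cells[(i - 1) % width] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
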

Some significant cognitive architectures

Main article: Comparison of cognitive architectures

See also

External links

This page uses Creative Commons Licensed content from Wikipedia (view authors).