Psychology Wiki

David Premack


David Premack (born October 26, 1925) is emeritus professor of psychology at the University of Pennsylvania. He was educated at the University of Minnesota at a propitious time: logical positivism was in full bloom, and the departments of Psychology and Philosophy were closely allied. Herbert Feigl, Wilfrid Sellars, and Paul Meehl led the philosophy seminars. Group Dynamics, led by Leon Festinger and Stanley Schachter, was at its intellectual peak.


Premack started in primate research in 1954 at the Yerkes Primate Biology Laboratory at Orange Park outside Jacksonville, Florida. His first two chimpanzee subjects, Sarah and Gussie, started at the University of Missouri and traveled with him to the University of California, Santa Barbara, and then to the University of Pennsylvania, where he had up to nine chimpanzee subjects.

Premack's first publication (1959) presented a new theory of reinforcement. It argued that the more probable response in any pair of responses can reinforce the less probable response, demonstrating that reinforcement is a relative, not an absolute, property.[1]

This theory predicts six conditions, all of which have been confirmed:

  1. Reinforcement is a relative property. Responses A, B, and C have a descending rank order of probability. A will therefore reinforce both B and C; C will reinforce neither. This alone might suggest that reinforcement is an absolute property. B, however, corrects that view: B will reinforce C but not A. B is thus both a reinforcer and not a reinforcer. Reinforcement is therefore a relative property.[2]
  2. Reinforcement is a reversible property. When drinking is more probable than running, drinking reinforces running. When the probabilities are reversed, running reinforces drinking.[3]
  3. Historically, consummatory responses such as eating and drinking have served exclusively as reinforcers, but consummatory responses are, like any other response, themselves subject to reinforcement.[1][3]
  4. Reinforcement and punishment, traditionally contrasted as opposites, are in fact equivalent except for sign. If response A leads contingently to B, and B is more probable than A, A will increase in frequency (reinforcement); conversely, if A leads contingently to B, and B is less probable than A, A will decrease in frequency (punishment). The major contrast is not between reward and punishment; but between reward and punishment as contrasted with freedom. Freedom is the condition in which stimuli are freely (not contingently) available to an individual.[4][5]
  5. When motorized running is more probable than lever pressing but less probable than drinking, running reinforces lever pressing and punishes drinking. In other words, the same response can be both a reinforcer and a punisher, at the same time and for the same individual.[5][6]
  6. The equivalence of reinforcement and punishment is further suggested by this fact: rats are either sensitive to both reinforcement and punishment, or insensitive to both; they are never sensitive to one but insensitive to the other.[6]
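Conditions 1, 4, and 5 above can be sketched numerically. In the sketch below, the baseline probabilities and the function name are illustrative assumptions, not Premack's notation; the rule itself (sign of the probability difference between contingent and instrumental responses) is from the text:

```python
# Sketch of Premack's relativity claim: illustrative baseline
# probabilities for three responses, in descending order.
p = {"A": 0.6, "B": 0.3, "C": 0.1}

def contingency_effect(instrumental, contingent):
    """If the contingent response is more probable than the
    instrumental one, the instrumental response increases in
    frequency (reinforcement); if less probable, it decreases
    (punishment). Only the sign of the difference matters."""
    if p[contingent] > p[instrumental]:
        return "reinforcement"
    if p[contingent] < p[instrumental]:
        return "punishment"
    return "no effect"

# A reinforces both B and C; C reinforces neither.
assert contingency_effect("B", "A") == "reinforcement"
assert contingency_effect("C", "A") == "reinforcement"
assert contingency_effect("B", "C") == "punishment"

# B is simultaneously a reinforcer (of C) and a punisher (of A):
# reinforcement is relative, not absolute.
assert contingency_effect("C", "B") == "reinforcement"
assert contingency_effect("A", "B") == "punishment"
```

The same comparison function yields both reinforcement and punishment, which is the point of condition 4: they differ only in sign.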

Premack introduced the concept of Theory of Mind, with Guy Woodruff, in 1978.[7]

Premack's analysis of same/different led him and his associates to show that chimpanzees can do analogies. Sameness/difference is not a relation between objects (e.g., A same A, A different B) or properties; it is a relation between relations. For example, consider the relation between AA and BB, and between CD and EF, on the one hand, and between AA and CD on the other. AA and BB are both instances of same; the relation between them is "same." CD and EF are both instances of different; the relation between them is therefore also "same." AA is an instance of same and CD an instance of different; the relation between them is "different." This analysis set the stage for teaching chimpanzees the word "same" for AA and "different" for CD. When taught these words, chimpanzees spontaneously formed simple analogies between physically similar relations (e.g., small circle is to large circle as small triangle is to large triangle) and functionally similar relations (e.g., key is to lock as can opener is to can).[8]

A nonverbal method for testing causal inference designed by Premack made it possible to show that both young children and chimpanzees are capable of causal inference.[9]

Premack demonstrated that chimpanzees can reassemble a disassembled face, placing the eyes, nose, and mouth in the appropriate places. In addition he showed that chimpanzees are capable of symbolic behavior. After viewing themselves in a mirror while wearing, on different occasions, a hat, glasses, and a necklace, and then being given the picture of a face 48 hours later, the chimpanzees applied clay to the top of the head (hat), to the eyes (glasses), and to the throat (necklace), respectively.[10]

Premack further argued that young children divide the world into two kinds of objects, those that move only when acted upon by other objects, and those that are self-propelled and move on their own.

He argued that infants attribute intentionality to self-propelled objects that show goal-directed action. Further, infants attribute value to the interaction of intentional objects: positive value to gentle actions, such as one object caressing another, and negative value to harsh actions, such as one object hitting another. In addition, infants assign positive value when one object helps another to achieve its goal, and negative value when one object hinders another from achieving its goal. Finally, he and Ann Premack argued that infants equate caressing with helping (despite their physical dissimilarity) and equate hitting with hindering (despite their physical dissimilarity).[11][12]

The preeminence of evolutionary theory has given unjustified credence to Darwin's claim that there are no major differences in the intelligence of animals and humans. The claim is distinctly false. Human competence is domain general, capable of serving indeterminately many goals; animal competence is a narrow adaptation, serving only one goal. For instance, humans teach all possible activities (different ones in different cultures), whereas meerkats and cats, two of very few animals that teach at all, each teach one activity: how to eat dangerous food such as scorpions in the one case, and how to stalk mice in the other.[13] What explains the domain-generality of human competence? Human competence is composed of an interweaving of multiple evolutionarily independent components; animal competence is composed of a single evolutionary component.[14]

Premack debated the nature of linguistic performance in apes with Jean Piaget and Noam Chomsky at the Centre Royaumont pour une Science de l'Homme, during one of the last moments when Jacques Monod could participate in intellectual debates shortly before his death.

Premack is best known within the field of behavioral psychology for formulating what is known as the Premack principle.

Refutation of Response Deprivation

Taken from correspondence between Premack and John Staddon from Duke University, dated March 16, 1979:

Dear John:

A long time ago I promised to explain what is wrong with response deprivation, and now in doing so I see that my hesitation has come from the fact that the point is so simple I'm reluctant to write it out. But since people as able as you are misled by response deprivation (RD), I advance into the obvious.

Take this example. Parameters are such that if left to its own resources a rat will drink for 300 seconds and run for only 100. We arrange a contingency such that the rat must drink to run. On the face of it, it may seem that the probability view must predict no reinforcement. But that is not true, and in fact no judgment could be made without additional information.

We arrange a contingency in which the rat must drink for 75 units in order to run for 5. These are not the only values that will illustrate my point; any values that will meet the response deprivation condition will illustrate my point; the ones I choose are simply convenient. I outline the trial-by-trial consequence of this 75/5 contingency below.

  Trial 1:  drink 300 - 75 = 225   run 100 - 5 = 95
  Trial 2:  drink 225 - 75 = 150   run  95 - 5 = 90
  Trial 3:  drink 150 - 75 = 75    run  90 - 5 = 85

We see that by trial three, running has become the more probable response, and hence that, from the probability view, running should reinforce drinking. The probability view predicts in addition (1) the trial from which the reinforcement effect should begin, and (2) the ordinal magnitude of the reinforcement effect.

All else equal, the latter should be proportional to the duration of the contingent response that is unexpended, or residual, at the time when the contingent response becomes more probable than the instrumental one. In the present example, that residual magnitude is 85 seconds. If the instrumental requirement were reduced, say from 75 to 50 units of drinking, the residual running would fall to 75 seconds, and it would fall still lower--to 45 seconds--if the instrumental requirement were reduced further, to 25. And the reinforcement produced by the contingency should decline accordingly. All of this is a matter of simple arithmetic, and I leave it to your quantitative skills to state the matter more formally if you like.

But whether stated formally or informally, the main point is quite simple: it is impossible to realize the response deprivation condition without assuring that the contingent response, though less probable than the instrumental one at the start of the session, becomes more probable before the session ends. The response deprivation condition is no more than a way of using conditioning parameters--instrumental requirement relative to contingent allotment--to arrange a within-session reversal of the response probabilities. Hence, any confirmation of the response deprivation prediction is of necessity a confirmation of the probability view. (In addition, I can show cases where the reverse is not true, and hence that the two positions are not equivalent; but that goes beyond what I want to show here. Here it is sufficient to show merely that, contrary to the impression given by response deprivation, there are no cases in which less probable responses reinforce more probable ones.)

Incidentally, that reversals can be made in probability, with a consequent reversal in what will reinforce what, is of course no novelty. We showed that in the case of run and drink, and in a second, evidently less well-known case described in the enclosed reprint (this is not a reversal between two responses but a reversal in the magnitude of the effect that two responses have relative to a common one). The reversal engendered by response deprivation differs only in that it occurs within-, rather than between-, sessions, and of course uses conditioning parameters rather than maintenance ones. How the reversal works is clear enough: the initially more probable event is drained off at a sufficiently greater rate than the less probable one, so that after some point in the session, it becomes the less probable event.

I warned you that it was obvious.


David Premack
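The drain arithmetic in the letter above can be reproduced in a short simulation. This is a sketch under the letter's stated parameters (300 s of drinking, 100 s of running per session), not Premack's own formalization; the function name is an illustrative choice:

```python
# Sketch of the within-session drain from Premack's 75/5 example.
def inversion(instrumental, contingent=5, drink=300, run=100):
    """Return (trial, residual run time) at the point where running
    first becomes more probable (more time remaining) than drinking."""
    trial = 0
    while drink > 0 and run > 0:
        trial += 1
        drink -= instrumental   # instrumental requirement expended
        run -= contingent       # contingent allotment expended
        if run > drink:         # inversion: running now more probable
            return trial, run
    return None                 # a response drains before any inversion

for req in (75, 50, 25):
    print(req, inversion(req))
# 75 s of drinking per trial inverts on trial 3 with 85 s of running
# residual; 50 s leaves a 75 s residual; 25 s leaves a 45 s residual,
# matching the residual magnitudes (85, 75, 45) in the letter.
```

As the letter argues, any parameters satisfying the response-deprivation condition produce such an inversion before the session ends, and the residual contingent time shrinks as the instrumental requirement shrinks.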

Ten days later (March 26, 1979), Premack made further clarifications on his argument in a follow-up letter to Staddon:

Dear John:

I'm not too pleased with my recent letter to you, and decided that that "obvious" point can be made clearer. Return to our original example, viz., total expected duration of drink 300 seconds, that of run 100 seconds, and in all cases a contingency such that the rat must drink (for some predetermined duration) in order to run (for some predetermined duration). Let's call these two durations instrumental and contingent time respectively. In my first letter contingent time was 5 seconds and instrumental time was varied--25, 50, 75 seconds. I'll use the same values again, doing the arithmetic out in the open again below.


The important point for our purposes is that an inversion--between the probabilities of the instrumental and contingent responses--occurs on trials 3, 6, and 12 in the three cases respectively. We can use these differences to test the following simple prediction from the probability view: if the contingency is discontinued before the inversion, no reinforcement will take place, irrespective of the total frequency of contingent trials. Thus, if the contingency is discontinued after trials 2, 5, and 11 in cases 1, 2, and 3, no reinforcement should occur.

To show this in an interesting way, we might do the following. Arrange three sessions for case 3, 11 trials per session, a total of 33 trials; seven sessions for case 2, 5 trials per session, a total of 35 trials; and sixteen sessions for case 1, 2 trials per session, a total of 32 trials, making the same prediction of no reinforcement in each case.

From the probability point of view we can predict the trial from which reinforcement should occur, or that if the contingency is discontinued before the inversion no reinforcement should occur; these are two ways of saying very nearly the same thing. But the latter may be a little clearer than the former, or the two together clearer than either alone.


David Premack
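The session arrangement in this second letter reduces to a few lines of arithmetic. In the sketch below, the inversion trials and session counts are taken directly from the letter; the variable names are illustrative:

```python
# Sanity check on the session arithmetic in the second letter.
# Each tuple: (inversion trial, trials per session, sessions).
cases = [(3, 2, 16), (6, 5, 7), (12, 11, 3)]

for inversion_trial, per_session, sessions in cases:
    # every session stops one trial short of the inversion trial
    assert per_session == inversion_trial - 1
    print(sessions * per_session)   # total contingent trials
# prints 32, 35, 33: roughly equal totals, yet no reinforcement is
# predicted in any case, because every session ends before the inversion.
```

The design thus dissociates the total number of contingent trials from the occurrence of the inversion, which is what makes it a test of the probability view.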



Book Chapters

  • Premack, A.J. and Premack, D.B. (1983) The Mind of an Ape.
  • Premack, D.B. (2002) Original Intelligence: The Architecture of the Human Mind.


Papers

  • Premack, D.B. & Woodruff, G. (1978) "Does the chimpanzee have a theory of mind?" Behavioral and Brain Sciences 1: 515-526.
  • Premack, D.B. (1976) Intelligence in Apes and Man.
  • Premack, D.B. (1985) "'Gavagai!' or the Future History of the Animal Language Controversy." Cognition 19: 207-296.
  • Premack, D.B. (1971) "Language in chimpanzees?" Science 172: 808-822.
  • Premack, A.J. and Premack, D.B. (1972) "Teaching language to an ape." Scientific American 227: 92-99.
  • Premack, D.B. (1959) "Toward empirical behaviour laws: Part I. Positive reinforcement." Psychological Review 66: 219-233.
  • Premack, D.B. (1962) "Reversibility of the reinforcement relation." Science 136: 255-257.
  • Premack, D.B. (1983) "Animal cognition." Annual Review of Psychology 34: 351-362.


This page uses Creative Commons licensed content from Wikipedia.
