# Theory of conjoint measurement


The theory of conjoint measurement (also known as conjoint measurement or additive conjoint measurement) is a general, formal theory of continuous quantity. It was independently discovered by the French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey (Luce & Tukey 1964).

The theory concerns the situation where at least two natural attributes, A and X, non-interactively relate to a third attribute, P. It is not required that A, X or P are known to be quantities. Via specific relations between the levels of P, it can be established that P, A and X are continuous quantities. Hence the theory of conjoint measurement can be used to quantify attributes in empirical circumstances where it is not possible to combine the levels of the attributes using a side-by-side operation or concatenation. The quantification of psychological attributes such as attitudes, cognitive abilities and utility is therefore logically plausible. This means that the scientific measurement of psychological attributes is possible. That is, like physical quantities, a magnitude of a psychological quantity may possibly be expressed as the product of a real number and a unit magnitude.

Application of the theory of conjoint measurement in psychology, however, has been limited. It has been argued that this is due to the high level of formal mathematics involved (e.g., Cliff 1992) and that the theory cannot account for the "noisy" data typically discovered in psychological research (e.g., Perline, Wright & Wainer 1979). It has been argued that the Rasch model is a stochastic variant of the theory of conjoint measurement (e.g., Brogden 1977; Embretson & Reise 2000; Fischer 1995; Keats 1967; Kline 1998; Scheiblechner 1999), however, this has been disputed (e.g., Karabatsos, 2001; Kyngdon, 2008). Order restricted methods for conducting probabilistic tests of the cancellation axioms of conjoint measurement have been developed in the past decade (e.g., Karabatsos, 2001; Davis-Stober, 2009).

The theory of conjoint measurement is (different but) related to conjoint analysis, which is a statistical-experiments methodology employed in marketing to estimate the parameters of additive utility functions. Different multi-attribute stimuli are presented to respondents, and different methods are used to measure their preferences about the presented stimuli. The coefficients of the utility function are estimated using alternative regression-based tools.

## Historical overview

In the 1930s, the British Association for the Advancement of Science established the Ferguson Committee to investigate the possibility of psychological attributes being measured scientifically. The British physicist and measurement theorist Norman Robert Campbell was an influential member of the committee. In its Final Report (Ferguson, et al., 1940), Campbell and the Committee concluded that because psychological attributes were not capable of sustaining concatenation operations, such attributes could not be continuous quantities. Therefore, they could not be measured scientifically. This had important ramifications for psychology, the most significant of these being the creation in 1946 of the operational theory of measurement by Harvard psychologist Stanley Smith Stevens. Stevens' non-scientific theory of measurement is widely held as definitive in psychology and the behavioural sciences generally (Michell 1999).

Whilst the German mathematician Otto Hölder (1901) anticipated features of the theory of conjoint measurement, it was not until the publication of Luce & Tukey's seminal 1964 paper that the theory received its first complete exposition. Luce & Tukey's presentation was algebraic and is therefore considered more general than Debreu's (1960) topological work, the latter being a special case of the former (Luce & Suppes 2002). In the first article of the inaugural issue of the Journal of Mathematical Psychology, Luce & Tukey 1964 proved that via the theory of conjoint measurement, attributes not capable of concatenation could be quantified. N.R. Campbell and the Ferguson Committee were thus proven wrong. That a given psychological attribute is a continuous quantity is a logically coherent and empirically testable hypothesis.

Appearing in the next issue of the same journal were important papers by Dana Scott (1964), who proposed a hierarchy of cancellation conditions for the indirect testing of the solvability and Archimedean axioms, and David Krantz (1964) who connected the Luce & Tukey work to that of Hölder (1901).

Work soon focused on extending the theory of conjoint measurement to involve more than just two attributes. Krantz 1968 and Amos Tversky (1967) developed what became known as polynomial conjoint measurement, with Krantz 1968 providing a schema with which to construct conjoint measurement structures of three or more attributes. Later, the theory of conjoint measurement (in its two variable, polynomial and n-component forms) received a thorough and highly technical treatment with the publication of the first volume of Foundations of Measurement, which Krantz, Luce, Tversky and philosopher Patrick Suppes cowrote (Krantz et al. 1971).

Shortly after the publication of Krantz et al. (1971), work focused upon developing an "error theory" for the theory of conjoint measurement. Studies were conducted into the number of conjoint arrays that supported only single cancellation and both single and double cancellation (Arbuckle & Larimer 1976; McClelland 1977). Later enumeration studies focused on polynomial conjoint measurement (Karabatsos & Ullrich 2002; Ullrich & Wilson 1993). These studies found that it is highly unlikely that the axioms of the theory of conjoint measurement are satisfied at random, provided that more than three levels of at least one of the component attributes have been identified.

Joel Michell (1988) later identified that the "no test" class of tests of the double cancellation axiom was empty. Any instance of double cancellation is thus either an acceptance or a rejection of the axiom. Michell also wrote at this time a non-technical introduction to the theory of conjoint measurement (Michell 1990) which also contained a schema for deriving higher order cancellation conditions based upon Scott's (1964) work. Using Michell's schema, Ben Richards (Kyngdon & Richards, 2007) discovered that some instances of the triple cancellation axiom are "incoherent" as they contradict the single cancellation axiom. Moreover, he identified many instances of triple cancellation which are trivially true if double cancellation is supported.

The axioms of the theory of conjoint measurement are not stochastic; and given the ordinal constraints placed on data by the cancellation axioms, order restricted inference methodology must be used (Iverson & Falmagne 1985). George Karabatsos and his associates (Karabatsos, 2001; Karabatsos & Sheu 2004) developed a Bayesian Markov chain Monte Carlo methodology for psychometric applications. Karabatsos & Ullrich 2002 demonstrated how this framework could be extended to polynomial conjoint structures. Karabatsos (2005) generalised this work with his multinomial Dirichlet framework, which enabled the probabilistic testing of many non-stochastic theories of mathematical psychology. More recently, Clintin Davis-Stober (2009) developed a frequentist framework for order restricted inference that can also be used to test the cancellation axioms.

Perhaps the most notable use of the theory of conjoint measurement (Kyngdon, 2011) was in the prospect theory proposed by the Israeli-American psychologists Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1979). Prospect theory was a theory of decision making under risk and uncertainty which accounted for choice behaviour such as the Allais Paradox. David Krantz wrote the formal proof to prospect theory using the theory of conjoint measurement. In 2002, Kahneman received the Nobel Memorial Prize in Economics for prospect theory (Birnbaum, 2008).

## Measurement and quantification

### The classical / standard definition of measurement

In physics and metrology, the standard definition of measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind (de Boer, 1994/95; Emerson, 2008). For example, the statement "Peter's hallway is 4 m long" expresses a measurement of a hitherto unknown length magnitude (the hallway's length) as the ratio of that length to the unit (the metre in this case). The number "4" is a real number in the strict mathematical sense of this term.

For some other quantities, it is easier, or has been the convention, to estimate ratios between attribute differences. Consider temperature, for example. In familiar everyday instances, temperature is measured using instruments calibrated in either the Fahrenheit or Celsius scales. What such instruments really measure are the magnitudes of temperature differences. For example, Anders Celsius defined the unit of the Celsius scale to be 1/100th of the difference in temperature between the freezing and boiling points of water at sea level. A midday temperature measurement of 20 degrees Celsius is simply the ratio of the difference between the midday temperature and the freezing point of water to the Celsius unit.

Formally expressed, a scientific measurement is:

$Q = r \times [Q]$

where Q is the magnitude of the quantity, r is a real number and [Q] is a unit magnitude of the same kind.

This classical/standard definition of measurement does not take into account that measurement in one physical realm is affected by other physical realms, as demonstrated by the Heisenberg uncertainty principle and by Einstein's theories of special and general relativity. For instance, we know from the gas laws that a measurement of volume is affected by temperature and pressure: a gallon of gasoline measured in winter will have expanded in volume by summer, and vice versa. The definition of the degree Celsius is itself based upon the boiling temperature of water at sea level, yet this is rarely accounted for in everyday measurements of temperature. We also know from Einstein's theories that length is not constant for an object in motion, and all objects in the universe are in varying states of motion; similarly for time. On this argument, no measurement (physical or psychological) can strictly be the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind.

### Extensive and intensive quantity

Length is a quantity for which natural concatenation operations exist. That is, we can combine lengths of rigid steel rods, for example, in a side-by-side fashion such that the additive relations between lengths are readily observed. If we have four 1 m lengths of such rods, we can place them end to end to produce a length of 4 m. Quantities capable of concatenation are known as extensive quantities and include mass, time, electrical resistance and plane angle. These are known as base quantities in physics and metrology.

Temperature is a quantity for which there is an absence of concatenation operations. We cannot pour a volume of water of temperature 40 degrees Celsius into another bucket of water at 20 degrees Celsius and expect to have a volume of water with a temperature of 60 degrees Celsius. Temperature is therefore an intensive quantity.

Psychological attributes, like temperature, are considered to be intensive as no way of concatenating such attributes has been found. But this is not to say that such attributes are not quantifiable. The theory of conjoint measurement provides a theoretical means of doing this.

### Theory

Consider two natural attributes A and X. It is not known whether either A or X is a continuous quantity, or whether both are. Let a, b and c represent three independent, identifiable levels of A; and let x, y and z represent three independent, identifiable levels of X. A third attribute, P, consists of the nine ordered pairs of levels of A and X. That is, (a, x), (a, y), ..., (c, z) (see Figure 1). The quantification of A, X and P depends upon the behaviour of the relation holding upon the levels of P. These relations are presented as axioms in the theory of conjoint measurement.

#### Single cancellation or independence axiom

The single cancellation axiom is as follows. The relation upon P satisfies single cancellation if and only if for all a and b in A, and for all x and w in X, (a, x) > (b, x) implies (a, w) > (b, w); and similarly, for all x and y in X, and for all a and d in A, (a, x) > (a, y) implies (d, x) > (d, y). What this means is that if any two levels a, b are ordered, then this order holds irrespective of each and every level of X. The same holds for any two levels x and y of X with respect to each and every level of A.

Single cancellation is so-called because a single common factor of two levels of P cancels out to leave the same ordinal relationship holding on the remaining elements. For example, a cancels out of the inequality (a, x) > (a, y) as it is common to both sides, leaving x > y. Krantz, et al., (1971) originally called this axiom independence, as the ordinal relation between two levels of an attribute is independent of any and all levels of the other attribute. However, given that the term independence causes confusion with statistical concepts of independence, single cancellation is the preferable term. Figure One is a graphical representation of one instance of single cancellation.

Satisfaction of the single cancellation axiom is necessary, but not sufficient, for the quantification of attributes A and X. It only demonstrates that the levels of A, X and P are ordered. Informally, single cancellation does not sufficiently constrain the order upon the levels of P to quantify A and X. For example, consider the ordered pairs (a, x), (b, x) and (b, y). If single cancellation holds then (a, x) > (b, x) and (b, x) > (b, y). Hence via transitivity (a, x) > (b, y). The relation between these latter two ordered pairs, informally a left-leaning diagonal, is determined by the satisfaction of the single cancellation axiom, as are all the "left leaning diagonal" relations upon P.
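Single cancellation on a finite conjoint array can be checked mechanically. The sketch below is illustrative only: it assumes a hypothetical 3 × 3 array whose entries numerically encode the order on P (the numbers are made up, not data from any study cited here), and tests whether the ordering of the rows is the same in every column and the ordering of the columns the same in every row:

```python
# Hypothetical 3x3 conjoint array: rows are levels a, b, c of A and
# columns are levels x, y, z of X. Entries numerically encode the
# observed order on P (a larger entry means a higher level of P).
P = [[5.0, 7.0, 9.0],
     [3.0, 5.0, 7.0],
     [1.0, 3.0, 5.0]]

def rank_order(values):
    """Indices of `values` sorted ascending, i.e. its ordinal pattern."""
    return tuple(sorted(range(len(values)), key=lambda i: values[i]))

def single_cancellation(P):
    """True iff the ordering of the rows is identical in every column
    and the ordering of the columns is identical in every row."""
    n, m = len(P), len(P[0])
    col_patterns = {rank_order([P[i][j] for i in range(n)]) for j in range(m)}
    row_patterns = {rank_order(P[i]) for i in range(n)}
    return len(col_patterns) == 1 and len(row_patterns) == 1

print(single_cancellation(P))  # True
```

An array in which the row order flips between columns would return False.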

#### Double cancellation axiom

Single cancellation does not determine the order of the "right-leaning diagonal" relations upon P. Even though by transitivity and single cancellation it was established that (a, x) > (b, y), the relationship between (a, y) and (b, x) remains undetermined. It could be that either (b, x) > (a, y) or (a, y) > (b, x) and such ambiguity cannot remain unresolved.

The double cancellation axiom concerns a class of such relations upon P in which the common terms of two antecedent inequalities cancel out to produce a third inequality. Consider the instance of double cancellation graphically represented by Figure Two. The antecedent inequalities of this particular instance of double cancellation are:

$\left(a, y\right) > \left(b, x\right)$

and

$\left(b, z\right)> \left(c, y\right)$.

Given that:

$\left(a, y\right) > \left(b, x\right)$

is true if and only if $a + y > b + x$; and

$\left(b, z\right) > \left(c, y\right)$

is true if and only if $b + z > c + y$, it follows that:

$a + y + b + z > b + x + c + y$.

Cancelling the common terms results in:

$\left(a, z\right)> \left(c, x\right)$.

Hence double cancellation can only obtain when A and X are quantities.

Double cancellation is satisfied if and only if the consequent inequality does not contradict the antecedent inequalities. For example, if the consequent inequality above was:

$\left(a, z\right)< \left(c, x\right)$, or alternatively,

$\left(a, z\right)= \left(c, x\right)$,

then double cancellation would be violated (Michell 1988) and it could not be concluded that A and X are quantities.
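For three fixed levels of each attribute, the Luce–Tukey instance shown in Figure Two can be tested directly. The sketch below is a hypothetical illustration (numeric coding of the order on P, as above, with made-up values): it reports a violation only when both antecedent inequalities hold and the consequent fails.

```python
def double_cancellation(P):
    """Test the Luce-Tukey instance of double cancellation on a 3x3
    array P whose rows are a, b, c and whose columns are x, y, z:
    if (a, y) > (b, x) and (b, z) > (c, y), then (a, z) > (c, x)."""
    a, b, c = 0, 1, 2
    x, y, z = 0, 1, 2
    if P[a][y] > P[b][x] and P[b][z] > P[c][y]:
        return P[a][z] > P[c][x]
    return True  # antecedents do not hold; this instance imposes no constraint

# An additive array satisfies the instance ...
P_add = [[5.0, 7.0, 9.0],
         [3.0, 5.0, 7.0],
         [1.0, 3.0, 5.0]]
print(double_cancellation(P_add))  # True

# ... but an array can satisfy both antecedents yet fail the consequent.
P_bad = [[3.0, 5.0, 1.0],
         [4.0, 3.0, 5.0],
         [2.0, 4.0, 3.0]]
print(double_cancellation(P_bad))  # False
```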

Double cancellation concerns the behaviour of the "right leaning diagonal" relations on P, as these are not logically entailed by single cancellation. Michell (2009) discovered that when the levels of A and X approach infinity, the number of right leaning diagonal relations is half of the number of total relations upon P. Hence if A and X are quantities, half of the number of relations upon P are due to ordinal relations upon A and X and half are due to additive relations upon A and X (Michell 2009).

The number of instances of double cancellation is contingent upon the number of levels identified for both A and X. If there are n levels of A and m of X, then the number of instances of double cancellation is n! × m!. Therefore, if n = m = 3, then 3! × 3! = 6 × 6 = 36 instances in total of double cancellation. However, all but 6 of these instances are trivially true if single cancellation is true, and if any one of these 6 instances is true, then all of them are true. One such instance is that shown in Figure Two. Michell (1988) calls this a Luce–Tukey instance of double cancellation. If single cancellation has first been tested upon a set of data and is established, then only the Luce–Tukey instances of double cancellation need to be tested. For n levels of A and m of X, the number of Luce–Tukey double cancellation instances is $\tbinom{n}{3}\tbinom{m}{3}$. For example, if n = m = 4, then there are 16 such instances. If n = m = 5, then there are 100. The greater the number of levels in both A and X, the less probable it is that the cancellation axioms are satisfied at random (Arbuckle & Larimer 1976; McClelland 1977), and the more stringent a test of quantity the application of conjoint measurement becomes.
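The counts quoted above follow from choosing three levels of A and three levels of X; a quick sketch confirms the arithmetic:

```python
from math import comb

def luce_tukey_instances(n, m):
    # A Luce-Tukey instance is fixed by a choice of three levels of A
    # and three levels of X, so there are C(n, 3) * C(m, 3) of them.
    return comb(n, 3) * comb(m, 3)

print(luce_tukey_instances(3, 3))  # 1
print(luce_tukey_instances(4, 4))  # 16
print(luce_tukey_instances(5, 5))  # 100
```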

#### Solvability and Archimedean axioms

The single and double cancellation axioms by themselves are not sufficient to establish continuous quantity. Other conditions must also be introduced to ensure continuity. These are the solvability and Archimedean conditions.

Solvability means that given any three of a, b, x and y, the fourth exists such that the equation (a, x) = (b, y) is solved, hence the name of the condition. Solvability is essentially the requirement that each level of P has an element in A and an element in X. Solvability reveals something about the levels of A and X: they are either dense like the real numbers or equally spaced like the integers (Krantz et al. 1971).

The Archimedean condition is as follows. Let I be a set of consecutive integers, either finite or infinite, positive or negative. The levels of A form a standard sequence if and only if there exist x and y in X where $x \neq y$, and for all integers i and i + 1 in I:

$\left(a_i, x\right) = \left(a_{i+1}, y\right)$.

What this basically means is that if x is greater than y, for example, there are levels of A which can be found that make two relevant ordered pairs, levels of P, equal.
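Under an additive representation the construction is transparent: solving $(a_i, x) = (a_{i+1}, y)$ for successive levels forces the $a_i$ to be equally spaced by the difference between x and y. A small sketch with illustrative numbers only:

```python
# Under an additive representation, (a_i, x) = (a_{i+1}, y) means
# a_i + x = a_{i+1} + y, so a_{i+1} = a_i + (x - y): the standard
# sequence a_1, a_2, ... is equally spaced by the difference x - y.
x, y = 5.0, 3.0      # two levels of X with x != y (illustrative values)
a = [0.0]            # an arbitrary starting level of A
for _ in range(4):
    a.append(a[-1] + (x - y))
print(a)  # [0.0, 2.0, 4.0, 6.0, 8.0]
```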

The Archimedean condition asserts that there is no infinitely great level of P, and hence no greatest level of either A or X. This condition is a definition of continuity given by the ancient Greek mathematician Archimedes, who wrote that "Further, of unequal lines, unequal surfaces, and unequal solids, the greater exceeds the less by such a magnitude as, when added to itself, can be made to exceed any assigned magnitude among those which are comparable with one another" (On the Sphere and Cylinder, Book I, Assumption 5). Archimedes recognised that for any two magnitudes of a continuous quantity, one being lesser than the other, the lesser could be multiplied by a whole number such that it equalled the greater magnitude. Euclid stated the Archimedean condition as an axiom in Book V of the Elements, in which Euclid presented his theory of continuous quantity and measurement.

As they involve infinitistic concepts, the solvability and Archimedean axioms are not amenable to direct testing in any finite empirical situation. But this does not entail that these axioms cannot be empirically tested at all. Scott's (1964) finite set of cancellation conditions can be used to indirectly test these axioms, the extent of such testing being empirically determined. For example, if both A and X possess three levels, the highest order cancellation axiom within Scott's (1964) hierarchy that indirectly tests solvability and Archimedeaness is double cancellation. With four levels it is triple cancellation (Figure 3). If such tests are satisfied, the construction of standard sequences in differences upon A and X is possible. Hence these attributes may be dense as per the real numbers or equally spaced as per the integers (Krantz et al. 1971). In other words, A and X are continuous quantities.

## Relation to the scientific definition of measurement

Satisfaction of the conditions of conjoint measurement means that measurements of the levels of A and X can be expressed as either ratios between magnitudes or ratios between magnitude differences. It is most commonly interpreted as the latter, given that most behavioural scientists consider that their tests and surveys "measure" attributes on so-called "interval scales" (Kline 1998). That is, they believe tests do not identify absolute zero levels of psychological attributes.

Formally, if P, A and X form an additive conjoint structure, then there exist functions from A and X into the real numbers such that for a and b in A and x and y in X:

$\left(a, x \right)\succsim\left(b, y \right)\iff \phi_A \left(a\right) + \phi_X \left(x\right)\geqslant\phi_A \left(b\right) + \phi_X \left(y\right)$.

If $\phi'_A \,$ and $\phi'_X \,$ are two other real valued functions satisfying the above expression, there exist $\alpha > 0, \beta_A \,$ and $\beta_X \,$ real valued constants satisfying:

$\phi'_A = \alpha \phi_A + \beta_A \,$ and $\phi'_X = \alpha \phi_X + \beta_X \,$.

That is, $\phi'_A, \phi_A, \phi'_X \,$ and $\phi_X \,$ are measurements of A and X unique up to affine transformation (i.e. each is an interval scale in Stevens' (1946) parlance). The mathematical proof of this result is given in Krantz et al. (1971, pp. 261–266).
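The uniqueness claim can be illustrated numerically. The sketch below uses arbitrary illustrative values (not from the sources cited): rescaling both functions by a common $\alpha > 0$ and shifting each by its own constant leaves the induced order on the nine levels of P unchanged.

```python
# One additive representation of A and X (arbitrary illustrative values) ...
phi_A = {'a': 3.0, 'b': 2.0, 'c': 1.0}
phi_X = {'x': 0.5, 'y': 1.6, 'z': 2.9}

# ... and an affine rescaling phi' = alpha * phi + beta with alpha > 0.
alpha, beta_A, beta_X = 2.0, 10.0, -4.0
phi_A2 = {k: alpha * v + beta_A for k, v in phi_A.items()}
phi_X2 = {k: alpha * v + beta_X for k, v in phi_X.items()}

# Both representations order the nine levels of P identically.
pairs = [(i, j) for i in phi_A for j in phi_X]
order1 = sorted(pairs, key=lambda p: phi_A[p[0]] + phi_X[p[1]])
order2 = sorted(pairs, key=lambda p: phi_A2[p[0]] + phi_X2[p[1]])
print(order1 == order2)  # True
```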

This means that the levels of A and X are magnitude differences measured relative to some kind of unit difference. Each level of P is a difference between the levels of A and X. However, it is not clear from the literature how a unit could be defined within an additive conjoint context. van der Ven (1980) proposed a scaling method for conjoint structures, but he also did not discuss the unit.

The theory of conjoint measurement, however, is not restricted to the quantification of differences. If each level of P is a product of a level of A and a level of X, then P is a different quantity whose measurement is expressed as a magnitude of A per unit magnitude of X. For example, if A consists of masses and X consists of volumes, then P consists of densities measured as mass per unit of volume. In such cases, it would appear that one level of A and one level of X must be identified as a tentative unit prior to the application of conjoint measurement.

If each level of P is the sum of a level of A and a level of X, then P is the same quantity as A and X. For example, if A and X are lengths, then so must be P. All three must therefore be expressed in the same unit. In such cases, it would appear that a level of either A or X must be tentatively identified as the unit. Hence it would seem that application of conjoint measurement requires some prior descriptive theory of the relevant natural system.

## Applications of Conjoint Measurement

Empirical applications of the theory of conjoint measurement have been sparse (Cliff 1992; Michell 2009).

Levelt, Riemersma & Bunt 1972 applied the theory to the psychophysics of binaural loudness. They found that the double cancellation axiom was rejected. Gigerenzer & Strube 1983 conducted a similar investigation and replicated Levelt et al.'s (1972) findings.

Michell 1990 applied the theory to L.L. Thurstone's (1927) theory of paired comparisons, multidimensional scaling and Coombs' (1964) theory of unidimensional unfolding. He found support for the cancellation axioms only with Coombs' (1964) theory. However, the statistical techniques employed by Michell (1990) in testing Thurstone's theory and multidimensional scaling did not take into consideration the ordinal constraints imposed by the cancellation axioms (van der Linden 1994).

Johnson (2001), Kyngdon (2006), Michell (1994) and Sherman (1993) tested the cancellation axioms upon the interstimulus midpoint orders obtained by the use of Coombs' (1964) theory of unidimensional unfolding. In each study Coombs' theory was applied to a set of six statements. These authors found that the axioms were satisfied; however, these were applications biased towards a positive result. With six stimuli, the probability of an interstimulus midpoint order satisfying the double cancellation axioms at random is .5874 (Michell, 1994). This is not an unlikely event. Kyngdon & Richards (2007) employed eight statements and found that the interstimulus midpoint orders rejected the double cancellation condition.

Perline, Wright & Wainer 1979 applied conjoint measurement to item response data from a convict parole questionnaire and to intelligence test data gathered from Danish troops. They found considerable violation of the cancellation axioms in the parole questionnaire data, but not in the intelligence test data. Moreover, they recorded the supposed "no-test" instances of double cancellation. Interpreted correctly as instances in support of double cancellation (Michell, 1988), the results of Perline, Wright & Wainer 1979 are better than they believed.

Stankov & Cregan 1993 applied conjoint measurement to performance on sequence completion tasks. The columns of their conjoint arrays (X) were defined by the demand placed upon working memory capacity through increasing numbers of working memory place keepers in letter series completion tasks. The rows were defined by levels of motivation (A), which consisted of the different amounts of time available for completing the test. Their data (P) consisted of completion times and the average number of series correct. They found support for the cancellation axioms; however, their study was biased by the small size of the conjoint arrays (3 × 3 in size) and by statistical techniques that did not take into consideration the ordinal restrictions imposed by the cancellation axioms.

Kyngdon (2011) used Karabatsos' (2001) order restricted inference framework to test a conjoint matrix of reading item response proportions (P), where examinee reading ability comprised the rows of the conjoint array (A) and the difficulty of the reading items formed the columns of the array (X). The levels of reading ability were identified via raw total test score and the levels of reading item difficulty were identified by the Lexile Framework for Reading (Stenner et al. 2006). Kyngdon found that satisfaction of the cancellation axioms was obtained only through permutation of the matrix in a manner inconsistent with the putative Lexile measures of item difficulty. Kyngdon also tested simulated ability test response data using polynomial conjoint measurement. The data were generated using Humphry's extended frame of reference Rasch model (Humphry & Andrich 2008). He found support for distributive, single and double cancellation consistent with a distributive polynomial conjoint structure in three variables (Krantz & Tversky 1971).

## References

• (1976). The number of two-way tables satisfying certain additivity axioms. Journal of Mathematical Psychology 12: 89–100.
• (2008). New paradoxes of risky decision making. Psychological Review 115 (2): 463–501.
• (1977). The Rasch model, the law of comparative judgement and additive conjoint measurement. Psychometrika 42 (4): 631–634.
• (1992). Abstract measurement theory and the revolution that never happened. Psychological Science 3 (3): 186–190.
• (1964) A Theory of Data, New York: Wiley.
• (2009). Analysis of multinomial models under inequality constraints: applications to measurement theory. Journal of Mathematical Psychology 53 (1): 1–13.
• (1960) "Topological methods in cardinal utility theory" Mathematical Methods in the Social Sciences, 16–26, Stanford University Press.
• (2000) Item response theory for psychologists, Erlbaum.
• (2008). On quantity calculus and units of measurement. Metrologia 45 (2): 134–138.
• (1995) "Derivations of the Rasch model" Rasch models: Foundations, recent developments, and applications, 15–38, New York: Springer.
• (1983). Are there limits to binaural additivity of loudness? Journal of Experimental Psychology: Human Perception and Performance 9: 126–136.
• (1988). Two-group classification and latent trait theory: scores with monotone likelihood ratio. Psychometrika 53 (3): 383–392.
• (1901). Die Axiome der Quantität und die Lehre vom Mass. Berichte über die Verhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-Physikalische Klasse 53: 1–46. (Part 1 translated in: (1996). The axioms of quantity and the theory of measurement. Journal of Mathematical Psychology 40 (3): 235–252.)
• (2008). Understanding the unit in the Rasch model. Journal of Applied Measurement 9 (3): 249–264.
• (1985). Statistical issues in measurement. Mathematical Social Sciences 10 (2): 131–153.
• (2001). Controlling the effect of stimulus context change on attitude statements using Michell's binary tree procedure. Australian Journal of Psychology 53: 23–28.
• (1979). Prospect theory: an analysis of decision under risk. Econometrica 47 (2): 263–291.
• (2001). The Rasch model, additive conjoint measurement, and new models of probabilistic measurement theory. Journal of Applied Measurement 2 (4): 389–423.
• (2005). The exchangeable multinomial model as an approach for testing axioms of choice and measurement. Journal of Mathematical Psychology 49 (1): 51–69.
• (2004). Bayesian order constrained inference for dichotomous models of unidimensional non-parametric item response theory. Applied Psychological Measurement 28 (2): 110–125.
• (2002). Enumerating and testing conjoint measurement models. Mathematical Social Sciences 43 (3): 485–504.
• (1964). Conjoint measurement: the Luce–Tukey axiomatisation and some extensions. Journal of Mathematical Psychology 1 (2): 248–277.
• (1968) "A survey of measurement theory" Mathematics of the Decision Sciences: Part 2, 314–350, Providence, Rhode Island: American Mathematical Society.
• (1967). Test theory. Annual Review of Psychology 18: 217–238.
• (1998) The New Psychometrics: Science, psychology and measurement, London: Routledge.
• (1971) Foundations of Measurement, Vol. I: Additive and polynomial representations, New York: Academic Press.
• (1971). Conjoint measurement analysis of composition rules in psychology. Psychological Review 78 (2): 151–169.
• (2006). An empirical study into the theory of unidimensional unfolding. Journal of Applied Measurement 7 (4): 369–393.
• (2008). The Rasch model from the perspective of the representational theory of measurement. Theory & Psychology 18: 89–109.
• (2011). DOI:10.1348/2044-8317.002004.
• (2007). Attitudes, order and quantity: deterministic and direct probabilistic tests of unidimensional unfolding. Journal of Applied Measurement 8 (1): 1–34.
• (1972). Binaural additivity of loudness. British Journal of Mathematical and Statistical Psychology 25 (1): 51–68.
• (2002) "Representational measurement theory" Stevens' handbook of experimental psychology: Vol. 4. Methodology in experimental psychology, 3rd, 1–41, New York: Wiley.
• (1964). Simultaneous conjoint measurement: a new scale type of fundamental measurement. Journal of Mathematical Psychology 1 (1): 1–27.
• (1977). A note on Arbuckle and Larimer: the number of two way tables satisfying certain additivity axioms. Journal of Mathematical Psychology 15 (3): 292–295.
• (1994). Measuring dimensions of belief by unidimensional unfolding. Journal of Mathematical Psychology 38 (2): 224–273.
• (1988). Some problems in testing the double cancellation condition in conjoint measurement. Journal of Mathematical Psychology 32 (4): 466–473.
• (1990) An Introduction to the Logic of Psychological Measurement, Hillsdale NJ: Erlbaum.
• (2009). The psychometricians' fallacy: Too clever by half? British Journal of Mathematical and Statistical Psychology 62 (1): 41–55.
• (1979). The Rasch model as additive conjoint measurement. Applied Psychological Measurement 3 (2): 237–255.
• (1999). Additive conjoint isotonic probabilistic models (ADISOP). Psychometrika 64 (3): 295–316.
• (1964). Measurement models and linear inequalities. Journal of Mathematical Psychology 1 (2): 233–247.
• (1994). The effect of change in context in Coombs's unfolding theory. Australian Journal of Psychology 46 (1): 41–47.
• (1993). Quantitative and qualitative properties of an intelligence test: series completion. Learning and Individual Differences 5 (2): 137–169.
• (2006). How accurate are Lexile text measures? Journal of Applied Measurement 7 (3): 307–322.
• (1946). On the theory of scales of measurement. Science 103 (2684): 667–680.
• (2009) Luce's challenge: Quantitative models and statistical methodology.
• (1927). A law of comparative judgement. Psychological Review 34 (4): 278–286.
• (1967). A general theory of polynomial conjoint measurement. Journal of Mathematical Psychology 4: 1–20.
• (1993). A note on the exact number of two and three way tables satisfying conjoint measurement and additivity axioms. Journal of Mathematical Psychology 37 (4): 624–628.
• (1994). Review of Michell (1990). Psychometrika 59 (1): 139–142.
• (1980) Introduction to Scaling, New York: Wiley.