Occam's razor (sometimes spelled Ockham's razor) is a principle attributed to the 14th-century English logician and Franciscan friar William of Ockham. The principle states that the explanation of any phenomenon should make as few assumptions as possible, eliminating, or "shaving off," those that make no difference in the observable predictions of the explanatory hypothesis or theory. The principle is often expressed in Latin as the lex parsimoniae ("law of parsimony" or "law of succinctness"):
entia non sunt multiplicanda praeter necessitatem,
which translates to:
entities should not be multiplied beyond necessity.
This is often paraphrased as "All things being equal, the simplest solution tends to be the best one." In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest hypothetical entities. It is in this sense that Occam's razor is usually understood.
William Ockham (c. 1285–1349) … is remembered as an influential nominalist, but his popular fame as a great logician rests chiefly on the maxim known as Occam's razor: Entia non sunt multiplicanda praeter necessitatem, or "Entities should not be multiplied unnecessarily." The term razor refers to the act of shaving away unnecessary assumptions to get at the simplest explanation. No doubt this represents correctly the general tendency of his philosophy, but it has not so far been found in any of his writings. His nearest pronouncement seems to be Numquam ponenda est pluralitas sine necessitate, which occurs in his theological work on the Sentences of Peter Lombard (Quaestiones et decisiones in quattuor libros Sententiarum Petri Lombardi (ed. Lugd., 1495), i, dist. 27, qu. 2, K). In his Summa Totius Logicae, i. 12, Ockham cites the principle of economy, Frustra fit per plura quod potest fieri per pauciora ("It is futile to do with more what can be done with fewer").
— Thorburn, 1918, pp. 352-3; Kneale and Kneale, 1962, p. 243.
The origins of what has come to be known as Occam's razor are traceable to the works of earlier philosophers such as John Duns Scotus (1265–1308), Thomas Aquinas (c. 1225–1274), and even Aristotle (384–322 BC) (Charlesworth 1956). The term "Ockham's razor" first appeared in 1852 in the works of Sir William Rowan Hamilton (1805–1865), long after Ockham's death circa 1349. Ockham did not invent this "razor," so its association with him may be due to the frequency and effectiveness with which he used it (Ariew 1976). And though he stated the principle in various ways, the most popular version was written not by Ockham himself but by John Ponce of Cork in 1639 (Thorburn 1918).
The most-cited version of the razor to be found in Ockham's work is Numquam ponenda est pluralitas sine necessitate, or "Plurality ought never be posited without necessity."
Aesthetic and practical considerations
Prior to the 20th century, it was a commonly held belief that nature itself was simple and that simpler theories about nature were thus more likely to be true. This notion was deeply rooted in the aesthetic value that simplicity holds for human thought, and the justifications presented for it often drew from theology. Thomas Aquinas made this argument in the 13th century, writing: "If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices" (Pegis 1945).
The common form of the razor, used to distinguish between equally explanatory theories, can be supported by appeals to the practical value of simplicity. Theories exist to give accurate explanations of phenomena, and simplicity is a valuable aspect of an explanation because it makes the explanation easier to understand and work with. Thus, if two theories are equally accurate and neither appears more probable than the other, the simple one is to be preferred over the complicated one, because simplicity is practical. In computer science, for instance, a simpler algorithm is easier to analyze, implement, and verify, and tractability itself can be affected, as the comparative study of sorting algorithms illustrates.
One way a theory or a principle could be justified is empirically; that is to say, if simpler theories were to have a better record of turning out to be correct than more complex ones, that would corroborate Occam's razor. However, this type of justification has several complications.
First of all, even assuming that simpler theories have been more successful, this observation provides little insight into exactly why this is, and thus leaves open the possibility that the factor behind the success of these theories was not their simplicity but rather something that causally correlates with it (see Correlation vs. Causation). Second, Occam's Razor is not a theory; it is a heuristic maxim for choosing among theories, and attempting to choose between it and some alternative as if they were theories of the regular sort invokes circular logic. We rely on the razor when we justify induction; by attempting to in turn rely on induction when we justify the razor, we are begging the question.
There are many different ways of making inductive inferences from past data concerning the success of different theories throughout the history of science; inferring that "simpler theories are, other things being equal, generally better than more complex ones" is just one way of many, and only seems more plausible to us because we are already assuming the razor to be true (see e.g. Swinburne 1997). Inductive justification for Occam's razor being a dead-end game, we have the choice of either accepting it as an article of faith based on pragmatist considerations or attempting deductive justification.
Karl Popper argues that a preference for simple theories need not appeal to practical or aesthetic considerations. Our preference for simplicity may be justified by his falsifiability criterion: We prefer simpler theories to more complex ones "because their empirical content is greater; and because they are better testable" (Popper 1992). In other words, a simple theory applies to more cases than a more complex one, and is thus more easily refuted.
The philosopher of science Elliott Sober once argued along the same lines as Popper, tying simplicity to "informativeness": the simpler theory is the more informative theory, in the sense that less information is required to answer one's questions (Sober 1975). He has since rejected this account of simplicity, purportedly because it fails to provide an epistemic justification for simplicity. He now holds that simplicity considerations (and considerations of parsimony in particular) do not count unless they reflect something more fundamental. Philosophers, he suggests, may have made the error of hypostatizing simplicity (i.e. endowing it with a sui generis existence), when it has meaning only when embedded in a specific context (Sober 1992). If we fail to justify simplicity considerations on the basis of the context in which we make use of them, we may have no non-circular justification:
Just as the question "why be rational?" may have no non-circular answer, the same may be true of the question "why should simplicity be considered in evaluating the plausibility of hypotheses?"
— Sober 2001
Jerrold Katz has outlined a deductive justification of Occam's razor:
If a hypothesis, H, explains the same evidence as a hypothesis G, but does so by postulating more entities than G, then, other things being equal, the evidence has to bear greater weight in the case of H than in the case of G, and hence the amount of support it gives H is proportionately less than it gives G.
— Katz 1998
Richard Swinburne argues for simplicity on logical grounds:
[...] other things being equal -- the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth.
— Swinburne 1997
He maintains that we have an innate bias towards simplicity and that simplicity considerations are part and parcel of common sense. Since our choice of theory cannot be determined by data (see Underdetermination and Quine-Duhem thesis), we must rely on some criterion to determine which theory to use. Since it is absurd to have no logical method by which to settle on one hypothesis amongst an infinite number of equally data-compliant hypotheses, we should choose the simplest theory.
...either science is irrational [in the way it judges theories and predictions probable] or the principle of simplicity is a fundamental synthetic a priori truth.
— Swinburne 1997
Science by Razor alone?
The aforementioned problem of underdetermination poses a serious obstacle to applications of the scientific method. The primary activity of science — formulating theories and selecting the most promising ones — is impossible without a way of choosing among an arbitrarily large number of theories, all of which fit with the evidence equally well. If any one principle could single-handedly reduce all these infinite possibilities to find the one best theory, at first glance one might deduce that the whole of scientific method simply follows from it, and thus that it alone would be sufficient to power the whole process of hypothesis formulation and rejection scientists undertake.
It is true that Occam's razor has become a basic tool for those who follow the scientific method, and it is by far the most commonly invoked tool for justifying one underdetermined theory over another (if not the only one). However, there is more to the scientific method than analyzing data. Processes of collecting data, pre-existing mind frames, well-accepted hypotheses, axioms that may or may not actually correspond with reality, and the vague nature of scientific community consensus all play a significant role in scientific inquiry, perhaps more significant in practice than many of the finer points of inductive logic (Thomas Kuhn rejected induction as the main driving force of the scientific method altogether, in favor of paradigm shifts). Moreover, the common statement that "the simplest explanation tends to be the best" cannot be properly evaluated for scientific purposes unless it is sharpened into a particular form with a significant degree of formal precision; it is certainly possible to formulate ground rules for the procedure and operation of such a razor that turn out to be utterly useless, or sorely lacking, when tackling a particular set of data (see below, "Probability Theory and Statistics").
Occam's razor is not equivalent to the idea that "perfection is simplicity". Albert Einstein probably had this in mind when he wrote in 1933 that "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience", often paraphrased as "Theories should be as simple as possible, but no simpler." It often happens that the best explanation is considerably more complicated than the simplest available one, because the more complicated explanation, despite its surface complexity, requires fewer assumptions. In light of this, the popular rephrasing of the razor, "The simplest explanation is the best one", can lead to gross oversimplification when the word simple is taken at face value.
There are two senses in which Occam's razor can be seen at work in the history of science. One is ontological reduction by elimination and the other is by intertheoretic competition.
In the former case the following are examples of reduction by elimination: The impetus of Aristotelian Physics, the angelic motors of medieval celestial mechanics, the four humors of ancient and medieval medicine, demonic possession as an explanation of mental illness, phlogiston theory from premodern chemistry, and vital spirits of premodern biology.
In the latter case there are three examples from the history of science where the simpler of two competing theories, each of which explains all the observed phenomena, has been chosen over its ontologically bloated competitor: the Copernican heliocentric model of celestial mechanics over the Ptolemaic geocentric model, the mechanical theory of heat over the caloric theory, and Einstein's special theory of relativity over the luminiferous aether theory of electromagnetism.
- In the first example, the Copernican model is said to have been chosen over the Ptolemaic due to its greater simplicity. The Ptolemaic model, in order to explain the apparent retrograde motion of the planets against the background stars, posited the existence of epicycles within the planetary orbits. The Copernican model (as expanded by Kepler) was able to account for this motion by displacing the Earth from the center of the solar system, replacing it with the sun as the orbital focus of planetary motions, and replacing the circular orbits of the Ptolemaic model with elliptical ones. In addition, the Copernican model excluded any mention of the crystalline spheres in which the planets were thought to be embedded according to the Ptolemaic model. In a single stroke the Copernican model roughly halved the ontology of astronomy.
- According to the caloric theory of heat, heat is a weightless substance that can travel from one object to another. It was while studying cannon boring that Count Rumford made observations that conflicted with the caloric theory, and he formulated his mechanical theory to replace it. The mechanical theory eliminated caloric and was ontologically simpler than its predecessor.
- During the 19th century, physicists believed that light required a medium of transmission, much as sound waves do. It was hypothesized that a universal aether was such a medium, and much effort was expended to detect it. In one of the most famous negative experiments in the history of science, the Michelson-Morley experiment failed to find any evidence of its existence. When Einstein then constructed his special theory of relativity without any reference to the aether, this became the accepted view, providing another example of a theory chosen in part for its greater ontological simplicity.
Biologists and philosophers of biology use Occam's razor in two contexts, both in evolutionary biology: the units-of-selection controversy and systematics. George C. Williams in his book Adaptation and Natural Selection (1966) argues that the best way to explain altruism among animals is based on low-level (i.e. individual) selection as opposed to high-level group selection. Altruism is defined as behavior that is beneficial to the group but not to the individual, and group selection is thought by some to be the evolutionary mechanism that selects for altruistic traits. Others posit individual selection as the mechanism, which explains altruism solely in terms of the behaviors of individual organisms acting in their own self-interest without regard to the group. The basis for Williams's contention is that of the two, individual selection is the more parsimonious theory. In making it he is invoking a variant of Occam's razor known as Lloyd Morgan's Canon: "In no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development" (Morgan 1903).
However, more recent work by biologists, such as Richard Dawkins's The Selfish Gene, has argued that Williams's view is not the simplest and most basic. Dawkins argues that evolution works through the genes propagated in the most copies coming to determine the development of the species; that is, natural selection turns out to select specific genes, and this is the fundamental underlying principle that automatically yields individual and group selection as emergent features of evolution.
Zoology provides an example. Musk oxen, when threatened by wolves, will form a circle with the males on the outside and the females and young on the inside. This is an example of behavior by the males that seems altruistic: the behavior is disadvantageous to them individually but beneficial to the group as a whole, and it was thus seen by some to support the group-selection theory.
However, a much better explanation immediately offers itself once one considers that natural selection works on genes. If the male musk ox runs off, leaving his offspring to the wolves, his genes will not be propagated. If, however, he takes up the fight, his genes will live on in his offspring. The "stay-and-fight" gene thus prevails. This is an example of kin selection. An underlying general principle thus offers a much simpler explanation, without resorting to special principles such as group selection.
Systematics is the branch of biology that attempts to establish genealogical relationships among organisms. It is also concerned with their classification. There are three primary camps in systematics: cladists, pheneticists, and evolutionary taxonomists. Cladists hold that genealogy alone should determine classification; pheneticists contend that overall similarity, rather than propinquity of descent, is the determining criterion; and evolutionary taxonomists claim that both genealogy and similarity count in classification.
It is among the cladists that Occam's razor is to be found, although their term for it is cladistic parsimony. Cladistic parsimony (or maximum parsimony) is a method of phylogenetic inference used in the construction of cladograms. Cladograms are branching, tree-like structures used to represent lines of descent based on one or more evolutionary changes. Cladistic parsimony is used to select the hypothesis (or hypotheses) that requires the fewest evolutionary changes. For some types of tree it will consistently produce the wrong results regardless of how much data is collected (this is called long branch attraction). For a full treatment of cladistic parsimony, see Elliott Sober's Reconstructing the Past: Parsimony, Evolution, and Inference (1988). For a discussion of both uses of Occam's razor in biology, see Elliott Sober's article "Let's Razor Ockham's Razor" (1990).
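The change-counting step that cladistic parsimony relies on can be sketched with Fitch's algorithm, which computes the minimum number of state changes a fixed tree topology requires for a single character. The four-taxon trees and the character states below are invented purely for illustration:

```python
# Fitch's small-parsimony algorithm: count the minimum number of
# state changes a fixed tree topology requires for one character.
# A tree is a nested pair of subtrees; a leaf is an observed state.

def fitch(tree):
    """Return (possible_states, change_count) for a node."""
    if not isinstance(tree, tuple):        # leaf: its observed state
        return {tree}, 0
    left_states, left_changes = fitch(tree[0])
    right_states, right_changes = fitch(tree[1])
    common = left_states & right_states
    if common:                             # subtrees agree: no extra change
        return common, left_changes + right_changes
    return left_states | right_states, left_changes + right_changes + 1

# Hypothetical character states ('A'/'G') at four taxa on two rival trees.
tree1 = (('A', 'A'), ('G', 'G'))           # groups like states together
tree2 = (('A', 'G'), ('A', 'G'))           # mixes the states

print(fitch(tree1)[1])   # 1 evolutionary change required
print(fitch(tree2)[1])   # 2 evolutionary changes required
```

Cladistic parsimony would prefer tree1 here, since it requires the fewer evolutionary changes.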
Other methods for inferring evolutionary relationships use parsimony in a more traditional way. Likelihood methods for phylogeny use parsimony as they do for all likelihood tests, with hypotheses requiring few differing parameters (i.e., numbers of different rates of character change or different frequencies of character state transitions) being treated as null hypotheses relative to hypotheses requiring many differing parameters. Thus, complex hypotheses must predict data much better than do simple hypotheses before researchers reject the simple hypotheses. Recent advances employ information theory, a close cousin of likelihood, which uses Occam's Razor in the same way.
Francis Crick has commented on potential limitations of Occam's razor in biology. He advances the argument that because biological systems are the products of (an on-going) natural selection, the mechanisms are not necessarily optimal in an obvious sense. He cautions: "While Ockham's razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research."
When discussing Occam's razor in contemporary medicine, doctors and philosophers of medicine speak of diagnostic parsimony. Diagnostic parsimony advocates that when diagnosing a given injury, ailment, illness, or disease, a doctor should strive to look for the fewest possible causes that will account for all the symptoms. While diagnostic parsimony might often be beneficial, credence should also be given to the counter-argument now known as Hickam's dictum, which, put succinctly, states: "Patients can have as many diseases as they damn well please." It is often statistically more likely that a patient has several common diseases than a single rarer disease that explains their myriad symptoms. Also, independently of statistical likelihood, some patients do in fact turn out to have multiple diseases, which by common sense nullifies the approach of insisting on explaining any given collection of symptoms with one disease. These misgivings emerge from simple probability theory, which is already taken into account in many modern variations of the razor, and from the fact that the loss function is much greater in medicine than in most of general science (namely, loss of a person's health and potentially life), so it is better to test and pursue all reasonable theories even if one of them appears the most likely.
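The statistical point behind Hickam's dictum can be made concrete with invented prior probabilities (the figures below are purely illustrative, not clinical data): two common conditions can jointly be more probable than one rare condition that explains the same symptoms.

```python
# Invented, purely illustrative prior probabilities (not clinical data).
p_rare = 1 / 100_000        # a single rare disease explaining every symptom
p_common = 1 / 100          # each of two common, independent diseases

p_two_common = p_common * p_common   # both common diseases present at once

# Hickam's dictum in numbers: the two-disease explanation is more probable.
print(p_two_common > p_rare)   # True
```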
Diagnostic parsimony and the counter-balance it finds in Hickam's dictum have very important implications in medical practice. Any set of symptoms could be indicative of a range of possible diseases and disease combinations; though at no point is a diagnosis rejected or accepted just on the basis of one disease appearing more likely than another, the continuous flow of hypothesis formulation, testing and modification benefits greatly from estimates regarding which diseases (or sets of diseases) are relatively more likely to be responsible for a set of symptoms, given the patient's environment, habits, medical history and so on. For example, if a hypothetical patient's immediately apparent symptoms include fatigue and cirrhosis, and they test negative for hepatitis C, their doctor might formulate a working hypothesis that the cirrhosis was caused by a drinking problem, and then seek symptoms and perform tests to formulate and rule out hypotheses as to what has been causing the fatigue. But if the doctor were to further discover that the patient's breath inexplicably smells of garlic and that they are suffering from pulmonary edema, they might decide to test for the relatively rare condition of selenium poisoning.
Prior to effective anti-retroviral therapy for HIV, it was frequently stated that the most obvious implication of Occam's razor, that of cutting down the number of postulated diseases to a minimum, does not apply to patients with AIDS, as they frequently did have multiple infectious processes going on at the same time. While a higher probability of multiple diseases certainly reduces the degree to which this kind of analysis is useful, it does not go all the way to invalidating it altogether: even in such a patient, it would make more sense to first test a theory postulating three diseases as the cause of the symptoms than a theory postulating seven.
Philosophy of mind
Probably the first person to make use of the principle was Ockham himself. He writes: "The source of many errors in philosophy is the claim that a distinct signified thing always corresponds to a distinct word in such a way that there are as many distinct entities being signified as there are distinct names or words doing the signifying" (Summula Philosophiae Naturalis III, chap. 7; see also Summa Totius Logicae Bk I, C.51). We are apt to suppose that a word like "paternity" signifies some "distinct entity", because we suppose that each distinct word signifies a distinct entity. This leads to all sorts of absurdities, such as "a column is to the right by to-the-rightness", "God is creating by creation, is good by goodness, is just by justice, is powerful by power", "an accident inheres by inherence", "a subject is subjected by subjection", "a suitable thing is suitable by suitability", "a chimera is nothing by nothingness", "a blind thing is blind by blindness", "a body is mobile by mobility". We should say instead that a man is a father because he has a son (Summa C.51).
Another application of the principle is to be found in the work of George Berkeley (1685–1753). Berkeley was an idealist who believed that all of reality could be explained in terms of the mind alone. He famously invoked Occam's razor against Idealism's metaphysical competitor, materialism, claiming that matter was not required by his metaphysic and was thus eliminable.
In 20th-century philosophy of mind, Occam's razor found a champion in J. J. C. Smart, who in his article "Sensations and Brain Processes" (1959) claimed Occam's razor as the basis for his preference of the mind-brain identity theory over mind-body dualism. Dualists claim that there are two kinds of substances in the universe: physical (including the body) and mental, which is nonphysical. In contrast, identity theorists claim that everything is physical, including consciousness, and that there is nothing nonphysical. The basis for the materialist claim is that of the two competing theories, dualism and mind-brain identity, the identity theory is the simpler, since it commits to fewer entities. Smart was criticized for his use of the razor and ultimately retracted his advocacy of it in this context.
Many scientists, however, claim that this is exactly reversed. Erwin Schrödinger wrote that "Consciousness is the singular for which there is no plural," thus placing consciousness first and everything, including the physical universe, within the realm of consciousness. Dr. Amit Goswami, a physics teacher and author of numerous books, including The Self Aware Universe: How Consciousness Creates the Material World, argues that "consciousness is the ground of all being."
Paul Churchland (1984) cites Occam's razor as the first line of attack against dualism, but admits that by itself it is inconclusive. The deciding factor for Churchland is the greater explanatory prowess of a materialist position in the Philosophy of Mind as informed by findings in neurobiology.
Dale Jacquette (1994) claims that Occam's razor is the rationale behind eliminativism and reductionism in the philosophy of mind. Eliminativism is the thesis that the ontology of folk psychology, including such entities as "pain", "joy", "desire", and "fear", is eliminable in favor of an ontology of a completed neuroscience.
Probability Theory and Statistics
One intuitive justification of Occam's razor's admonition against unnecessary hypotheses is a direct result of basic probability theory. By definition, all assumptions introduce possibilities for error; if an assumption does not improve the accuracy of a theory, its only effect is to increase the probability that the overall theory is wrong.
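A quick numerical sketch of this point, using an arbitrary illustrative reliability figure: if each independent assumption holds with some probability, every assumption that adds no accuracy only lowers the probability that the whole theory is right.

```python
# A theory resting on n independent assumptions, each holding with
# probability p, is entirely correct with probability p**n, which only
# shrinks as further assumptions are piled on.
p = 0.95   # illustrative per-assumption reliability (an invented figure)

for n in (1, 3, 10):
    print(n, round(p ** n, 3))
# 1 0.95
# 3 0.857
# 10 0.599
```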
There are various papers in scholarly journals deriving formal versions of Occam's razor from probability theory and applying it in statistical inference, and also of various criteria for penalizing complexity in statistical inference. Recent papers have suggested a connection between Occam's razor and Kolmogorov complexity.
One of the problems with the original formulation of the principle is that it only applies to models with the same explanatory power (i.e. prefer the simplest of equally good models). A more general form of Occam's razor can be derived from Bayesian model comparison and Bayes factors, which can be used to compare models that don't fit the data equally well. These methods can sometimes optimally balance the complexity and power of a model. Generally the exact Ockham factor is intractable but approximations such as Akaike Information Criterion, Bayesian Information Criterion, Variational Bayes and Laplace Approximation are used. Many artificial intelligence researchers are now employing such techniques.
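As a sketch of how one such penalized criterion trades goodness of fit against complexity, the Akaike Information Criterion can be written as n*ln(RSS/n) + 2k, with RSS the residual sum of squares and k the number of free parameters; the fit results and parameter counts below are invented for illustration.

```python
import math

# Akaike Information Criterion: AIC = n*ln(RSS/n) + 2k. Lower is better:
# the 2k term penalizes each extra free parameter.
def aic(n, rss, k):
    return n * math.log(rss / n) + 2 * k

# Invented fit results: the complex model fits slightly better (lower RSS)...
n = 100
aic_simple = aic(n, rss=25.0, k=2)     # e.g. a straight-line model
aic_complex = aic(n, rss=24.0, k=10)   # e.g. a 9th-degree polynomial

# ...but not enough to pay for its eight extra parameters.
print(aic_simple < aic_complex)   # True: the criterion favors the simple model
```

This is the sense in which such criteria generalize the razor: a more complex model is accepted only when its improvement in fit outweighs the penalty on its extra parameters.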
William H. Jefferys and James O. Berger (1991) generalise and quantify the original formulation's "assumptions" concept as the degree to which a proposition is unnecessarily accommodating to possible observable data. The model they propose balances the precision of a theory's predictions against their sharpness: theories that make their correct predictions sharply are preferred over theories that would have accommodated a wide range of other possible results. This, again, reflects the mathematical relationship between key concepts in Bayesian inference (namely marginal probability, conditional probability, and posterior probability).
The statistical view leads to a more rigorous formulation of the razor than previous philosophical discussions. In particular, it shows that 'simplicity' must first be defined in some way before the razor may be used, and that this definition will always be subjective. For example, in the Kolmogorov-Chaitin minimum description length approach, the subject must pick a Turing machine whose basic operations represent what that subject takes 'simplicity' to be. However, one could always choose a Turing machine with a single simple operation that happened to construct one's entire theory, which would hence score highly under the razor. This has led to two opposing views of the objectivity of Occam's razor.
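A rough, commonly used stand-in for this idea can be sketched with a general-purpose compressor: compressed size under one fixed scheme (here DEFLATE, via Python's zlib) is a machine-relative upper bound on description length, a proxy for, not an implementation of, Kolmogorov complexity. The data below are invented for illustration.

```python
import zlib

# Description length relative to one fixed "machine" (the DEFLATE format):
# structured data admits a short description; patternless data does not.
regular = b"0123456789" * 100               # 1000 bytes of obvious structure
irregular = str(2 ** 3321)[:1000].encode()  # 1000 decimal digits, little pattern

print(len(zlib.compress(regular)) < len(zlib.compress(irregular)))  # True
```

The choice of compressor plays exactly the role of the chosen Turing machine: the measured 'simplicity' is relative to it, which is the subjectivity at issue here.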
The Turing machine can be thought of as embodying a Bayesian prior belief over the space of rival theories. Hence Occam's razor is not an objective comparison method, and merely reflects the subject's prior beliefs. One's choice of exactly which razor to use is culturally relative.
The minimum instruction set of a Universal Turing machine requires approximately the same length description across different formulations, and is small compared to the Kolmogorov complexity of most practical theories. For instance John Tromp's minimal universal interpreters, based on the Lambda Calculus and Combinatory logic are 210 and 272 bits respectively. Marcus Hutter has used this consistency to define a "natural" Turing machine of small size as the proper basis for excluding arbitrarily complex instruction sets in the formulation of razors.
One possible conclusion from combining these concepts, Kolmogorov complexity and Occam's razor, is that an ideal data compressor would also be a scientific explanation/formulation generator. A compressed logarithm table would be output as the logarithm formula and the data set's intervals; a compressed space-time graph of a falling object would be output as the gravity formula for the objects involved; and so forth. Though such a compressor does not seem practical to build in the foreseeable future, the implications of its conception would be dramatic.
The principle is most often expressed as Entia non sunt multiplicanda praeter necessitatem, or "Entities should not be multiplied beyond necessity", but this sentence was written by later authors and is not found in Ockham's surviving writings. This also applies to non est ponenda pluralitas sine necessitate, which translates literally into English as "pluralities ought not be posited without necessity". It has inspired numerous expressions including "parsimony of postulates", the "principle of simplicity", the "KISS principle" (Keep It Simple, Stupid), and, in some medical schools, "When you hear hoofbeats, think horses, not zebras".
Other common restatements are:
Entities are not to be multiplied without necessity.
The simplest answer is usually the correct answer.
A restatement of Occam's razor, in more formal terms, is provided by information theory in the form of minimum message length (MML). Tests of Occam's razor on decision tree models, which initially appeared critical, have been shown to actually work fine when revisited using MML. Other criticisms of Occam's razor and MML (e.g., a binary cut-point segmentation problem) have again been rectified when, crucially, an inefficient coding scheme is made more efficient.
The formulation "When deciding between two models which make equivalent predictions, choose the simpler one" makes the point that a simpler model which does not make equivalent predictions is not among the models to which this criterion applies in the first place.
Leonardo da Vinci (1452–1519) lived after Ockham's time and used a variant of Occam's razor. His variant short-circuits the need for sophistication by equating it with simplicity:
Simplicity is the ultimate sophistication.
Occam's razor is now usually stated as follows:
Of two equivalent theories or explanations, all other things being equal, the simpler one is to be preferred.
As this is ambiguous, Isaac Newton's version may be better:
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
In the spirit of Occam's razor itself, the rule is sometimes stated as:
The simplest explanation is usually the best.
Another common statement of it is:
The simplest explanation that covers all the facts.
This is an over-simplification, or at least somewhat misleading; see "In science", above.
This rephrasing has several faults, the worst being that Occam's razor is only meant to be used to choose between two scientific theories that are otherwise equally predictive. The second problem with the "simplest is best" formulation is that Occam's razor never claims to choose the 'best' theory; it only proposes simplicity as the deciding factor between two otherwise equal theories. It is possible that, given more information, the more complex theory will turn out to be correct the majority of the time. Occam's razor makes no explicit claim as to whether this will happen, but prompts us to use the simpler theory until we have reason to do otherwise.
The earliest versions of the razor clearly imply that if a more complex theory is "necessary" then it need not be invalid. Perhaps a better way to state it is: "a correct theory of phenomena is only as complex as is necessary — and no more so — to explain said phenomena."
Controversial aspects of the Razor
Occam's razor is not an embargo against positing any kind of entity, nor a recommendation of the simplest theory come what may. (The simplest theory of all would be something like "only I exist" or "nothing exists".) Simpler theories are preferable only when other things are equal, and the other things in question include the evidential support for the theory. Thus, according to the principle, a simpler but less correct theory should not be preferred over a more complex but more correct one.
For instance, classical physics is simpler than the theories that superseded it, but it should not be preferred over them, because it is demonstrably wrong in some respects. The first requirement of a theory is that it works: its predictions are correct and it has not been falsified. Occam's razor is used to adjudicate between theories that have already passed these tests and are, moreover, equally well supported by the evidence.
Another contentious aspect of the Razor is that a theory can become more complex in terms of its structure (or syntax), while its ontology (or semantics) becomes simpler, or vice versa. The theory of relativity is often given as an example.
Occam's razor has met some opposition from people who consider it too extreme or rash. Walter of Chatton was a contemporary of William of Ockham (1287–1347) who took exception to Occam's razor and Ockham's use of it. In response he devised his own anti-razor: "If three things are not enough to verify an affirmative proposition about things, a fourth must be added, and so on." Although a number of philosophers have formulated similar anti-razors since Chatton's time, his anti-razor has never matched the success of Occam's razor.
Anti-razors have also been created by Gottfried Wilhelm Leibniz (1646–1716), Immanuel Kant (1724–1804), and Karl Menger (1902–1985). Leibniz's version took the form of a principle of plenitude, as Arthur Lovejoy called it, the idea being that God created the most varied and populous of possible worlds. Kant felt a need to moderate the effects of Occam's razor and thus created his own counter-razor: "The variety of beings should not rashly be diminished." Karl Menger found mathematicians to be too parsimonious with regard to variables, so he formulated his Law Against Miserliness, which took one of two forms: "Entities must not be reduced to the point of inadequacy" and "It is vain to do with fewer what requires more." See Ockham's Razor and Chatton's Anti-Razor (1984) by Armand Maurer. A less serious, but (some might say) even more extreme anti-razor is 'Pataphysics, the "science of imaginary solutions" invented by Alfred Jarry (1873–1907). Perhaps the ultimate in anti-reductionism, 'Pataphysics seeks no less than to view each event in the universe as completely unique, subject to no laws but its own. Variations on this theme were subsequently explored by the Argentine writer Jorge Luis Borges in his story/mock-essay Tlön, Uqbar, Orbis Tertius. There is also Crabtree's Bludgeon, which takes the cynical view that "no set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated."
- ↑ "But Ockham's razor does not say that the more simple a hypothesis, the better." Skeptic's Dictionary (http://www.skepdic.com/occam.html)
- ↑ "when you have two competing theories which make exactly the same predictions, the one that is simpler is the better." Usenet Physics FAQ
- ↑ "Today, we think of the principle of parsimony as a heuristic device. We don't assume that the simpler theory is correct and the more complex one false. We know from experience that more often than not the theory that requires more complicated machinations is wrong. Until proved otherwise, the more complex theory competing with a simpler explanation should be put on the back burner, but not thrown onto the trash heap of history until proven false." The Skeptic's Dictionary
- ↑ "While these two facets of simplicity are frequently conflated, it is important to treat them as distinct. One reason for doing so is that considerations of parsimony and of elegance typically pull in different directions. Postulating extra entities may allow a theory to be formulated more simply, while reducing the ontology of a theory may only be possible at the price of making it syntactically more complex." Stanford Encyclopedia of Philosophy
- Ariew, Roger (1976). Ockham's Razor: A Historical and Philosophical Analysis of Ockham's Principle of Parsimony, Champaign-Urbana, University of Illinois.
- Charlesworth, M. J. (1956). Aristotle's Razor. Philosophical Studies (Ireland) 6: 105–112.
- Churchland, Paul M. (1984). Matter and Consciousness, Cambridge, Massachusetts: MIT Press. ISBN 0-262-53050-3.
- Crick, Francis H. C. (1988). What Mad Pursuit: A Personal View of Scientific Discovery, New York, New York: Basic Books. ISBN 0-465-09138-5.
- Dawkins, Richard (1990). The Selfish Gene, Oxford University Press. ISBN 0-465-09138-5.
- Duda, Richard O.; Peter E. Hart, David G. Stork (2000). Pattern Classification, 2nd edition, 487-489, Wiley-Interscience. ISBN 0-471-05669-3.
- Epstein, Robert (1984). The Principle of Parsimony and Some Applications in Psychology. Journal of Mind Behavior 5: 119–130.
- Hoffmann, Ronald, Vladimir I. Minkin, Barry K. Carpenter (1997). Ockham's Razor and Chemistry. HYLE—International Journal for the Philosophy of Chemistry 3: 3–28.
- Jacquette, Dale (1994). Philosophy of Mind, 34–36, Englewood Cliffs, New Jersey: Prentice Hall. ISBN 0-13-030933-8.
- Jaynes, Edwin Thompson (1994). "Model Comparison and Robustness" Probability Theory: The Logic of Science.
- Jefferys, William H.; Berger, James O. (1991). Ockham's Razor and Bayesian Statistics (preprint available as "Sharpening Occam's Razor on a Bayesian Strop"). American Scientist 80: 64–72.
- Katz, Jerrold (1998). Realistic Rationalism, MIT Press.
- Kneale, William; Martha Kneale (1962). The Development of Logic, 243, London: Oxford University Press. ISBN 0-19-824183-6.
- MacKay, David J. C. (2003). Information Theory, Inference and Learning Algorithms, Cambridge University Press. ISBN 0-521-64298-1.
- Maurer, A. (1984). Ockham's Razor and Chatton's Anti-Razor. Medieval Studies 46: 463–475.
- McDonald, William (2005). Søren Kierkegaard. Stanford Encyclopedia of Philosophy. URL accessed on 2006-04-14.
- Menger, Karl (1960). A Counterpart of Ockham's Razor in Pure and Applied Mathematics: Ontological Uses. Synthese 12: 415.
- Morgan, C. Lloyd (1903). "Other Minds than Ours" An Introduction to Comparative Psychology, 2nd edition, 59, London: W. Scott. URL accessed 2006-04-15.
- Nolan, D. (1997). Quantitative Parsimony. British Journal for the Philosophy of Science 48 (3): 329–343.
- Pegis, A. C., translator (1945). Basic Writings of St. Thomas Aquinas, 129, New York: Random House.
- Popper, Karl (1992). "7. Simplicity" The Logic of Scientific Discovery, 2nd edition, 121-132, London: Routledge.
- Rodríguez-Fernández, J. L. (1999). Ockham's Razor. Endeavour 23: 121–125.
- Schmitt, Gavin C. (2005). Ockham's Razor Suggests Atheism. URL accessed on 2006-04-15.
- Smart, J. J. C. (1959). Sensations and Brain Processes. Philosophical Review 68: 141–156.
- Sober, Elliott (1975). Simplicity, Oxford: Oxford University Press.
- Sober, Elliott (1981). The Principle of Parsimony. British Journal for the Philosophy of Science 32: 145–156.
- Sober, Elliott (1990). "Let's Razor Ockham's Razor" Dudley Knowles Explanation and its Limits, 73-94, Cambridge: Cambridge University Press. ISBN 0-521-39598-4.
- Sober, Elliott (2001). What is the Problem of Simplicity?. URL accessed on 2006-04-15.
- Swinburne, Richard (1997). Simplicity as Evidence for Truth, Milwaukee, Wisconsin: Marquette University Press.
- Thorburn, W. M. (1918). The Myth of Occam's Razor. Mind 27 (107): 345-353.
- Williams, George C. (1966). Adaptation and natural selection: A Critique of some Current Evolutionary Thought, Princeton, New Jersey: Princeton University Press. ISBN 0-691-02357-3.
- What is Occam's Razor? This essay distinguishes Occam's Razor (used for theories with identical predictions) from the Principle of Parsimony (which can be applied to theories with different predictions).
- Skeptic's Dictionary: Occam's Razor
- Ockham's Razor, an essay at The Galilean Library on the historical and philosophical implications by Paul Newall.
- The Razor in the Toolbox: The history, use, and abuse of Occam’s Razor. - By Robert Novella
- NIPS 2001 Workshop "Foundations of Occam's Razor and parsimony in learning"
- "We Must Choose The Simplest Physical Theory: Levin-Li-Vitányi Theorem And Its Potential Physical Applications"
- Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay, includes an introductory chapter on the automatic Occam's razor that is embodied by Bayesian model comparison.
- "Message Length as an Effective Ockham's Razor in Decision Tree Induction", by S. Needham and D. Dowe, Proc. 8th International Workshop on AI and Statistics (2001), pp. 253–260. (Shows how Ockham's razor works well when interpreted as minimum message length (MML).) On the efficiency and reliability of coding schemes, see also pp. 272–273 of (Comley and Dowe, Chapter 11, MIT Press, 2005).
- Lloyd's MML pages describe how Minimum Message Length induction extends Ockham's razor for differing hypotheses.
- An extensive bibliography of publications related to Occam's Razor.
- Occam's sword at Wikinfo (http://www.wikinfo.org/index.php/Occam%27s_sword)
- Simplicity at Stanford Encyclopedia of Philosophy
- ABC Radio National program in which speakers are allowed to expound at length on topics without the moderation of an interviewer. (Podcast available.)
- HolyHell.net has an interactive decision-making page which employs Occam's Razor. Although satirical, it explains the concept relatively well.
- 'The Myth of Occam's Razor' - Thorburn's original 1918 paper.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|