
History and Philosophy of Science



Objectives

Introduction

The concept of a "science" of social behavior is relatively new, although, as we shall see, in some forms it dates back many centuries. The style of "social science" for which the OPOSSEM project was designed, however, goes back only about five decades, when the widespread availability of computers in governments and large universities, along with data sets that could be easily processed by computers, made statistical analysis routine. That analysis, however, built on earlier developments in social scientific research going back to the late 19th century, and concepts of "scientific" research generally that went back to the beginning of the 17th century.

One of the problems in dealing with social behavior scientifically is that "science" is not a requirement for understanding most of the social behavior that we encounter on a day-to-day basis. Humans are social animals, and a degree of understanding of some forms of social behavior is present even in very young infants. As children mature, they deal with increasingly complex forms of social behavior: first within the family, then in interactions with other children, then in the structured behaviors of schools, sports, and other activities, and finally in the fantastically complex social hierarchies of high school and, usually, some introduction to a work environment. Along the way, typically beginning around the age of 14, they begin to observe political and economic behavior in the larger society and to formulate attitudes about it, for example identifying as politically liberal or conservative.

All of this can be done in the course of normal development without any appeal to a "social science," though in fact individuals gain a great deal of unsystematic knowledge through formal schooling and through various forms of entertainment. But formal scientific study is not a requirement for becoming a fully functioning member of society. In contrast, one would not expect to design an apartment building without a knowledge of physics and mechanics, or to work as an industrial chemist without having formally studied chemistry. That said, even those fields were not "scientific" until relatively recently: the great cathedrals of medieval Europe were built without a systematic knowledge of mechanics or gravity, and the Romans devised cements that have allowed structures to survive for 2,000 years.

The concept of a systematic scientific approach to the study of human behavior dates, for the most part, to the past 150 years or so, generally coinciding with the increases in communication brought about by industrialization and rising literacy, as well as the spectacular technological changes which occurred as a consequence of the application of scientific principles to natural systems. As the scientific method was applied to increasingly complex phenomena, the possibility of extending it to human behavior arose repeatedly, though it took a great deal of time before those methods were successfully adapted, and approaches are still evolving.

The philosophy of the social sciences varies between disciplines; in particular, economics and psychology developed mathematical traditions (and, for psychology, experimental traditions) considerably earlier than did political science and sociology. The discussion here is focused primarily on political science and sociology, which for the most part have developed in parallel.


Science: A Brief History

Scholasticism to the Enlightenment

To a limited extent, scientific approaches to social behavior can be traced in the West back to the Greeks, specifically to Aristotle (384 BCE – 322 BCE) and his Politics, which provides a very systematic categorization of political institutions.<ref>http://plato.stanford.edu/entries/aristotle-politics/</ref> In addition, Aristotle or his students may have undertaken a systematic empirical collection of the constitutions of the Greek city-states as background research for this effort, though that work is now almost entirely lost. For the next two thousand or so years, however, systematic theorizing was generally ignored in favor of narrative history and philosophy, although late in this period one finds theoretical treatments of social and political behavior in the works of Ibn Khaldun (1332-1406) and Machiavelli (1469-1527).

The scientific approach in its modern form emerges during the first half of the 17th century, beginning with the work of the English philosopher and politician Francis Bacon (1561-1626).<ref>http://plato.stanford.edu/entries/francis-bacon/</ref> Bacon's Novum Organum ("The New Instrument") presented a relatively complete framework for what would become the modern scientific method. In complete contrast to the then-prevailing scholastic approach, which valued ancient authorities, rhetorical technique, and elegant theory over empirical observation,<ref>Though not, it seems, to the point of endlessly debating "how many angels can dance on a pinhead", which appears to be a fabrication from the early modern period specifically designed to discredit the scholastic approach; [1][2]</ref> Bacon emphasized the importance of empirical observation. Bacon also proposed that science should be both public—rather than retained as private secrets—and used for the public good, and he even proposed government financing of scientific inquiry (a proposal which was not successful).

On the European continent, the foundations of the modern scientific method were further extended by the philosopher René Descartes (1596-1650).<ref>http://plato.stanford.edu/entries/descartes/, http://www.renedescartes.com/</ref> Descartes's contributions are wide-ranging: he has been characterized as the "father of modern philosophy" and made very substantial contributions to mathematics, notably the use of "Cartesian" X-Y coordinates to display the mathematical relationships found throughout this book. From the perspective of the development of science, however, three contributions should be noted. First, Descartes focused on the importance of deductive arguments from first principles. Second, like Bacon, he opposed the scholastic emphasis on appeals to authority, though unlike Bacon he was also skeptical about depending solely on observations, which could be flawed. Third, Descartes emphasized the importance of systematic methods, rather than intuition, in the development of knowledge.

Baconian induction and Cartesian deduction combined in the greatest accomplishment of 17th century science, the work of Isaac Newton (1643-1727), particularly his law of universal gravitation. This was based on a combination of the meticulous empirical observations of astronomers such as Tycho Brahe, which had brought into question the classical Earth-centered Ptolemaic system of astronomy, and the empirical regularities described by Kepler's laws of planetary motion. Newton's deductive approach was able, in a single equation, to account for behaviors as diverse as the motions of the planets around the sun and, yes, an apple falling from a tree. This work—along with Newton's other scientific accomplishments in areas such as optics, mechanics, and the invention of calculus—was considered a triumph of the new scientific approaches and the application of reason, and as such provided the foundations for the Age of Enlightenment.
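That single equation, for reference, is the familiar inverse-square law, which gives the attractive force between any two masses $m_1$ and $m_2$ separated by a distance $r$:

$$F = G \frac{m_1 m_2}{r^2}$$

where $G$ is the gravitational constant; the same expression governs both a planetary orbit and a falling apple.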

And the Scholastics? Well, they persisted. For quite a long time. In the universities. Scholastic influence was so strong that the Enlightenment occurred, for the most part, outside of the existing academic structures. For example, the Royal Society of London for Improving Natural Knowledge was founded in 1660, and the French Academy of Sciences in 1666, in part to provide an alternative venue to the universities where the new scientific methods, specifically those of Bacon, could be sympathetically discussed, and similar institutions would develop across Europe.<ref>The new methods were not completely excluded, particularly those involving mathematics: Newton's genius was rewarded by granting him the Lucasian Professorship in Mathematics, though that richly endowed chair had been established only a few years earlier, and its stipulation that the holder not be active in the Church may have been an effort by a wealthy individual to break down the established conservatism of the academy. But generally it is the relative absence of academic research in scientific developments from 1600 to the late 19th century that strikes a contemporary observer as distinct from our time.</ref> Enlightenment ideas also gained a strong foothold in the university systems of Scotland and Germany. But the university system had largely been developed, from the 12th century onward, by scholastics, and they successfully defended it against the inroads of the scientific approach until the late 19th century, when Wilhelm von Humboldt's German model of the research university spread to newly industrializing areas such as the United States and Japan.<ref>And scholasticism, it could be argued, survives to this very day in the norms of high school debate: those nerds and nerdettes so mocked for their reams of citations—yes, that's you, political science majors—are the intellectual descendants, across nearly a thousand years, of the likes of Peter Abelard. Though not, we would hope, necessarily susceptible to all of Abelard's various and legendary personality quirks.</ref>


Enlightenment philosophers chose a short history of scientific predecessors—principally Galileo, Boyle, and Newton—as the guides and guarantors of their applications of the singular concept of the natural world and the philosophical concept of natural law to every physical and social field of the day. In this respect, the lessons of history and the social structures built upon them could be discarded.<ref>Cassels, Alan. Ideology and International Relations in the Modern World. p. 2.</ref> It was Newton's conception of the universe, based upon natural and rationally understandable laws, that became one of the seeds for Enlightenment ideology.<ref>"Although it was just one of the many factors in the Enlightenment, the success of Newtonian physics in providing a mathematical description of an ordered world clearly played a big part in the flowering of this movement in the eighteenth century." John Gribbin (2002) Science: A History 1543–2001, p. 241</ref> John Locke and Voltaire applied concepts of natural law to political systems, advocating intrinsic rights; the physiocrats (a group of economists who believed that the wealth of nations derived solely from the value of agricultural land and land development) and Adam Smith applied natural conceptions of psychology and self-interest to economic systems; and sociologists criticized the current social order for trying to fit history into natural models of progress. Roger Chartier describes it as follows:

This movement [from the intellectual to the cultural/social] implies casting doubt on two ideas: first, that practices can be deduced from the discourses that authorize or justify them; second, that it is possible to translate into the terms of an explicit ideology the latent meaning of social mechanisms.<ref>Roger Chartier, The Cultural Origins of the French Revolution (1991), 18.</ref>

The Nineteenth Century

Pre-WWII Social Science

The Behavioral Revolution

The Development of Statistics in the Social Sciences

The association of "statistics" with the social sciences goes back to the origins of the term: "statistics" originally referred to the systematic collection of information of interest to "states", particularly demographic and economic data. The Nuova Cronica, a 14th-century history of Florence by the Florentine banker and official Giovanni Villani, includes much statistical information on population, ordinances, commerce and trade, education, and religious facilities, and has been described as the first introduction of statistics as a positive element in history,<ref>Villani, Giovanni. Encyclopædia Britannica. Encyclopædia Britannica 2006 Ultimate Reference Suite DVD. Retrieved on 2008-03-04.</ref> though neither the term nor the concept of statistics as a specific field yet existed. The term statistics is ultimately derived from the Latin statisticum collegium ("council of state") and the Italian word statista ("statesman" or "politician"). The German Statistik, first introduced by Gottfried Achenwall (1749), originally designated the analysis of data about the state, signifying the "science of state" (then called political arithmetic in English). It acquired the meaning of the collection and classification of data generally in the early 19th century, and was introduced into English in 1791 by Sir John Sinclair when he published the first of 21 volumes titled Statistical Account of Scotland.<ref>Ball, Philip (2004). Critical Mass. Farrar, Straus and Giroux. p. 53. ISBN 0374530416.</ref>

By the middle of the 19th century, the issue of public health in the rapidly expanding cities of industrializing Great Britain provided early examples of applied statistics. Florence Nightingale, who gained fame for her innovations in nursing during the Crimean War, was also an early innovator in the collection and use of statistics, and became a pioneer in the visual presentation of information and statistical graphics.<ref>Lewi, Paul J. (2006). Speaking of Graphics. http://www.datascope.be/sog.htm.</ref> In 1854, the London physician John Snow famously used a map of individual cholera cases to illustrate how these clustered around a pump on Broad Street, and made solid use of statistics to illustrate the connection between the quality of the source of water and the incidence of cholera. Snow's study is considered a major event in the history of public health and geography, and can be regarded as the founding event of the science of epidemiology.

The mathematical methods of statistics emerged from probability theory, which can be dated to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) treated the subject as a branch of mathematics. The relation between applied statistics and probability theory developed rather late, however, as the original applications of probability theory generally focused on gambling, a core interest of the aristocratic patrons of many early mathematicians. By 1800, however, astronomy made use of probability models and statistical theories, particularly the method of least squares, which was invented by Adrien-Marie Legendre and Carl Friedrich Gauss; Gauss was also responsible for the discovery—or at least the popularization<ref>The normal distribution had been anticipated by de Moivre in 1738; both Laplace and the Irish-American mathematician Robert Adrain worked on it simultaneously with, and in Adrain's case independently of, Gauss. See [3]</ref>—of the normal ("Gaussian") distribution and the method of maximum likelihood. In the modern era, the work of Andrey Kolmogorov has been instrumental in formulating the fundamental model of probability theory, which is now used throughout the theoretical field of "mathematical statistics".
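For reference, the two ideas credited to Legendre and Gauss can be stated compactly in modern notation. The method of least squares chooses the fitted line that minimizes the sum of squared deviations between the observations $y_i$ and the fitted values:

$$\min_{\beta_0,\,\beta_1} \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_i \right)^2$$

and the normal ("Gaussian") distribution has the familiar bell-shaped density with mean $\mu$ and standard deviation $\sigma$:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Both reappear throughout applied statistics.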

A theory of statistical inference was developed by the American chemist and philosopher Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics. In one study, Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights,<ref name="smalldiff">Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences 3: pp. 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm.</ref><ref name="telepathy">Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis 79 (A Special Issue on Artifact and Experiment): pp. 427–451. JSTOR 234674. MR 1013489. http://www.jstor.org/stable/234674.</ref><ref name="stigler">Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education 101 (1): pp. 60–70.</ref><ref name="dehue">Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis 88 (4): pp. 653–673.</ref> and he contributed the first English-language publication on an optimal design for regression analysis.<ref>Peirce, C. S. (1876). "Note on the Theory of the Economy of Research". Coast Survey Report: pp. 197–201; actually published 1879, NOAA PDF Eprint. Reprinted in Collected Papers 7, paragraphs 139–157, also in Writings 4, pp. 72–78, and in Peirce, C. S. (July–August 1967). "Note on the Theory of the Economy of Research". Operations Research 15 (4): pp. 643–648. doi:10.1287/opre.15.4.643. http://www.jstor.org/stable/168276.</ref>

By the late 19th century, statistics were increasingly used in industrial and particularly agricultural research. In the first two decades of the 20th century, the collection of "frequentist" methods, including confidence intervals and significance tests, was developed by Ronald A. Fisher, Jerzy Neyman, and Egon Pearson. By the end of the 1920s, these had largely displaced the earlier "Bayesian" approach,<ref>The word Bayesian appeared in the 1930s, and by the 1960s it became the term preferred by those dissatisfied with the limitations of frequentist statistics.</ref><ref name="Miller Earliest Uses">Jeff Miller, "Earliest Known Uses of Some of the Words of Mathematics (B)"</ref> which, as we will discuss below, provided a more common-sensical approach to the interpretation of data, but which was not computationally tractable given the technology of the time.

The use of statistics to study social behavior increased substantially in response to the increased complexity of industrialized economies, particularly in dealing with the Great Depression and the industrial mobilization for World War II. The collection of a wide variety of information by governments became increasingly widespread, assisted by technological innovations such as the "punch card," which allowed large amounts of information to be recorded as holes in paper cards that could then be mechanically sorted and counted. In addition, the mass media began the systematic practice of election forecasting, with Literary Digest starting a mail survey to predict the presidential election in 1916. In 1936 the Literary Digest poll famously, and incorrectly, predicted that Franklin Roosevelt would lose the election: the poll suffered from sampling bias, because it drew heavily on lists such as telephone directories at a time when only relatively wealthy people could afford telephones. Twelve years later, polling errors led the Chicago Daily Tribune, a pro-Republican newspaper, to famously print the banner headline DEWEY DEFEATS TRUMAN (Truman in fact led Dewey by almost five percentage points in the popular vote and by a 3-to-2 margin in the Electoral College). The forecasting community responded to these errors, and over time its methods gradually improved.
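The Literary Digest failure is easy to reproduce in miniature. The following sketch, with purely illustrative numbers rather than the actual 1936 figures, shows how a sampling frame that over-represents one group yields a large but badly biased sample:

```python
import numpy as np

rng = np.random.default_rng(1936)

# Hypothetical electorate: 70% lower-income (strongly pro-Roosevelt),
# 30% higher-income (mostly anti-Roosevelt). All numbers are illustrative.
N = 1_000_000
high_income = rng.random(N) < 0.30
p_support = np.where(high_income, 0.35, 0.70)  # Pr(vote for Roosevelt)
votes = rng.random(N) < p_support
print(f"True support: {votes.mean():.3f}")     # about 0.59 -- Roosevelt wins

# Biased frame: households with telephones, mostly higher-income.
in_frame = rng.random(N) < np.where(high_income, 0.60, 0.05)

# Even an enormous sample drawn from this frame predicts the wrong winner.
sample = rng.choice(np.flatnonzero(in_frame), size=100_000, replace=False)
print(f"Frame estimate: {votes[sample].mean():.3f}")  # about 0.41 -- wrong winner
```

No increase in sample size fixes this; only a representative frame (or reweighting to match the population) does.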

The ability to analyze such information using complex mathematical techniques was limited by the fact that calculations could not be automated beyond the level of mechanical calculators. This situation began to change in the 1960s with the spread of digital computers in government and research universities. Much of the original funding for digital computing had come from the military during WWII, where computers were used both for codebreaking and to automate the tedious process of calculating artillery tables. However, by 1951—only five years after the development of the first general-purpose computer, ENIAC—the very first installation of a commercial computer, the Univac I, occurred at the U.S. Census Bureau.<ref>http://www.census.gov/history/www/census_then_now/census_facilities/bowie_computer_center.html</ref> As the cost of computers declined and their capacity increased, statistical packages were developed to automate the once time-consuming process of statistical calculation, and by the end of the 1960s these were widespread; for example, the statistical package SPSS ("Statistical Package for the Social Sciences") was released in its first version in 1968, after being developed by Norman H. Nie and C. Hadlai Hull; at the time Nie was a political science graduate student at Stanford University.

The availability of computing power, in turn, increased the demand for quantitative data, leading for example to the founding of the Inter-university Consortium for Political and Social Research in 1962. Statistical computation benefited tremendously from "Moore's Law"—the approximate doubling of computing power every 18 months—as computers decreased in size from room-sized monsters with their own electrical and cooling systems to devices that fit comfortably in a backpack, while their capacity increased exponentially, to the point where a contemporary smart phone probably has greater computing power than the combined capacity of North American research universities in 1965.<ref>For example, in 1965 the central research computer at Indiana University/Bloomington was a Control Data 3600 with 384 kilobytes of memory, an experimental "drum" storage holding 1 megabyte (when it worked), and a single processor running at 500,000 cycles per second. It filled a room about the size of a 50-person classroom [4] and cleverly used the water of an Olympic-sized swimming pool for cooling. In 2011, an iPhone 4 had 512 megabytes (512,000 kilobytes) of random access memory supplemented with at least 32 gigabytes (32,000,000 kilobytes) of solid-state "disk" memory, and multiple processors running at 1 billion cycles per second handling the programs and operating system, graphics, two cameras and, yes, a phone. It weighs less than 5 ounces (140 grams).</ref><ref>Even if most of that capacity is devoted to viewing pictures of kittens playing with string and catapulting exploding birds at green pigs.</ref> That computing power has made possible statistical techniques that were simply out of reach in an earlier era due to their computational requirements, which has led, for example, to a renewed interest in Bayesian analysis.

The availability of machine-readable data has also increased dramatically in the past decade due to the World Wide Web. Many organizations and governments make data available on web sites, and research data sets are now generally available immediately for downloading, replacing the earlier process of sending these using physical media such as punch cards and magnetic tape. Finally, with advances in automated natural language processing, the web itself is a subject of statistical analysis, for example in monitoring social media such as blogs, Twitter and Facebook for trends in political interests and attitudes.

Key Concepts

Systematic Methodology

Scientific Realism

Scientific realism is, at the most general level, the view that the world described by science is the real world, as it is, independent of what we might take it to be. Within the philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of unobservable entities apparently talked about by scientific theories. Generally, scientific realists assert that one can make reliable claims about unobservables (viz., that they have the same ontological status as observables).

Main features of scientific realism

Scientific realism involves two basic positions. First, it is a set of claims about the features of an ideal scientific theory; an ideal theory is the sort of theory science aims to produce. Second, it is the commitment that science will eventually produce theories very much like an ideal theory and that science has done pretty well thus far in some domains. It is important to note that one might be a scientific realist regarding some sciences while not being a realist regarding others. For example, one might hold realist attitudes toward physics, chemistry and biology, and not toward economics, psychology and sociology.

According to scientific realism, an ideal scientific theory has the following features:

  • The claims the theory makes are either true or false, depending on whether the entities talked about by the theory exist and are correctly described by the theory. This is the semantic commitment of scientific realism.
  • The entities described by the scientific theory exist objectively and mind-independently. This is the metaphysical commitment of scientific realism.
  • There are reasons to believe some significant portion of what the theory says. This is the epistemological commitment.

Combining the first and the second claim entails that an ideal scientific theory says definite things about genuinely existing entities. The third claim says that we have reasons to believe that the things said about these entities are true.

Scientific realism usually holds that science makes progress, i.e. scientific theories usually get successively better, or, rather, answer more and more questions. For this reason, many people, scientific realist or otherwise, hold that realism should make sense of the progress of science in terms of theories being successively more like the ideal theory that scientific realists describe.

Characteristic claims

The following claims are typical of those held by scientific realists. Due to the wide disagreements over the nature of science's success and the role of realism in its success, a scientific realist would agree with some but not all of the following positions.<ref>Jarrett Leplin (1984), Scientific Realism, University of California Press, p. 1, ISBN 0-520-05155-6, http://books.google.com/books?id=UFCpopYlB9EC&lpg=PA189&pg=PA1#v=onepage&f=false </ref>

  • The best scientific theories are at least partially true.
  • The best theories do not employ central terms that are non-referring expressions.
  • To say that a theory is approximately true is sufficient explanation of the degree of its predictive success.
  • The approximate truth of a theory is the only explanation of its predictive success.
  • Even if a theory employs expressions that do not have a reference, a scientific theory may be approximately true.
  • Scientific theories are in a historical process of progress towards a true account of the physical world.
  • Scientific theories make genuine, existential claims.
  • Theoretical claims of scientific theories should be read literally and are definitively either true or false.
  • The degree of the predictive success of a theory is evidence of the referential success of its central terms.
  • The goal of science is an account of the physical world that is literally true. Science has been successful because this is the goal that it has been making progress towards.

History of scientific realism

Scientific realism is related to much older philosophical positions including rationalism and realism. However, it is a thesis about science developed in the twentieth century. Portraying scientific realism in terms of its ancient, medieval, and early modern cousins is at best misleading.

Scientific realism is developed largely as a reaction to logical positivism. Logical positivism was the first philosophy of science in the twentieth century and the forerunner of scientific realism, holding that a sharp distinction can be drawn between observational terms and theoretical terms, the latter capable of semantic analysis in observational and logical terms.

Logical positivism encountered difficulties with:

  • The verification theory of meaning (for which see Hempel (1950)).
  • Troubles with the analytic-synthetic distinction (for which see Quine (1950)).
  • The theory-ladenness of observation (for which see Kuhn (1970) and Quine (1960)).
  • Difficulties moving from the observationality of terms to observationality of sentences (for which see Putnam (1962)).
  • The vagueness of the observational-theoretical distinction (for which see Maxwell (1962)).

These difficulties for logical positivism suggest, but do not entail, scientific realism, and they led to the development of realism as a philosophy of science.

Realism became the dominant philosophy of science after positivism. Bas van Fraassen developed constructive empiricism as an alternative to realism. Responses to van Fraassen have sharpened realist positions and led to some revisions of scientific realism.

Arguments for and against scientific realism

One of the main arguments for scientific realism centers on the notion that scientific knowledge is progressive in nature, and that it is able to predict phenomena successfully. Many realists (e.g., Ernan McMullin, Richard Boyd) think the operational success of a theory lends credence to the idea that its more unobservable aspects exist, because those aspects were how the theory derived its predictions. For example, a scientific realist would argue that science must derive some ontological support for atoms from the outstanding phenomenological success of all the theories using them.

Arguments for scientific realism often appeal to abductive reasoning or "inference to the best explanation". Scientific realists point to the success of scientific theories in predicting and explaining a variety of phenomena, and argue that from this we can infer that our scientific theories (or at least the best ones) provide true descriptions of the world, or approximately so.

On the other hand, pessimistic induction, one of the main arguments against realism, argues that the history of science contains many theories once regarded as empirically successful but which are now believed to be false. Additionally, the history of science contains many empirically successful theories whose unobservable terms are not believed to genuinely refer. For example, the effluvial theory of static electricity is an empirically successful theory whose central unobservable terms have been replaced by later theories. Realists reply that replacement of particular realist theories with better ones is to be expected due to the progressive nature of scientific knowledge, and when such replacements occur only superfluous unobservables are dropped. For example, Albert Einstein's theory of special relativity showed that the concept of the luminiferous ether could be dropped because it had contributed nothing to the success of the theories of mechanics and electromagnetism. On the other hand, when theory replacement occurs, a well-supported concept, such as the concept of atoms, is not dropped but is incorporated into the new theory in some form.

Social constructivists, for their part, might argue against scientific realism that it is unable to account for the rapid change that occurs in scientific knowledge during periods of revolution, and that the success of theories is only a part of their construction. However, these arguments ignore the fact that many scientists have not been realists: during what is perhaps the most notable example of revolution in science—the development of quantum mechanics in the 1920s—the dominant philosophy of science was logical positivism. The alternative realist Bohm interpretation and many-worlds interpretation of quantum mechanics do not make such a revolutionary break with the concepts of classical physics.

Another argument against scientific realism, deriving from the underdetermination problem, is not so historically motivated as these others. It claims that observational data can in principle be explained by multiple theories that are mutually incompatible. Realists might counter that there have been few actual cases of underdetermination in the history of science: usually the requirement of explaining the data is so exacting that scientists are lucky to find even one theory that fulfills it. Furthermore, if we take the underdetermination argument seriously, it implies that we can know about only what we have directly observed. For example, we could not theorize that dinosaurs once lived based on the fossil evidence, because other theories (e.g., that the fossils are clever hoaxes) can account for the same data. Realists claim that, in addition to empirical adequacy, there are other criteria for theory choice, such as parsimony.

Inductive and Deductive Methods

Positivism

Positivism asserts that the only authentic knowledge is that which is based on sense experience and positive verification. As an approach to the philosophy of science deriving from Enlightenment thinkers such as Henri de Saint-Simon and Pierre-Simon Laplace, positivism was systematized by Auguste Comte, who saw the scientific method as replacing metaphysics in the history of thought, observing the circular dependence of theory and observation in science. Sociological positivism was later reformulated by Émile Durkheim as a foundation for social research. At the turn of the 20th century, the first wave of German sociologists, including Max Weber and Georg Simmel, rejected the doctrine, thus founding the antipositivist tradition in sociology. Later antipositivists and critical theorists have associated positivism with "scientism": science as ideology.

In the early 20th century, logical positivism—a descendant of Comte's basic thesis but an independent movement—sprang up in Vienna and grew to become one of the dominant schools in Anglo-American philosophy and the analytic tradition. Logical positivists (or "neopositivists") rejected metaphysical speculation and attempted to reduce statements and propositions to pure logic. Critiques of this approach by philosophers such as Karl Popper, Willard Van Orman Quine and Thomas Kuhn have been highly influential, and led to the development of postpositivism. In psychology, the positivist movement was influential in the development of behaviorism and operationalism. In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; thus its common meaning with philosophical positivism is somewhat attenuated, and in recent generations it generally emphasizes the authority of human political structures as opposed to a "scientific" view of law.

In contemporary social science, strong accounts of positivism have long since fallen out of favor. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favor of methodological debates concerning clarity, replicability, reliability and validity.<ref name="Gartell">Gartell, David, and Gartell, John. 1996. "Positivism in sociological practice: 1967-1990". Canadian Review of Sociology, Vol. 33 No. 2.</ref> This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.<ref name="Boudon">Boudon, Raymond. 1991. "Review: What Middle-Range Theories are". Contemporary Sociology, Vol. 20 Num. 4 pp 519-522.</ref> Other new movements, such as critical realism, have emerged to reconcile the overarching aims of social science with various so-called 'postmodern' critiques.

Replication

The Hypothetico-Deductive Method

The hypothetico-deductive model or method, first so named by William Whewell,<ref>William Whewell (1837) History of the Inductive Sciences</ref><ref>William Whewell (1840), Philosophy of the Inductive Sciences</ref> is a proposed description of the scientific method. According to it, scientific inquiry proceeds by formulating a hypothesis in a form that could conceivably be falsified by a test on observable data. A test that could and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. A test that could but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.

"From the long tradition of empiricism we have inherited the hypothetico-deductive model of scientific research."

—Brody, Thomas A. (1993), The Philosophy Behind Physics (Luis De La Peña and Peter E. Hodgson, eds.), Springer Verlag, p. 86, ISBN 0-387-55914-0.

Qualification of corroborating evidence is sometimes raised as philosophically problematic. The raven paradox is a famous example. The hypothesis that 'all ravens are black' would appear to be corroborated by observations of only black ravens. However, 'all ravens are black' is logically equivalent to 'all non-black things are non-ravens' (this is the contraposition form of the original implication). 'This is a green tree' is an observation of a non-black thing that is a non-raven and therefore corroborates 'all non-black things are non-ravens'. It appears to follow that the observation 'this is a green tree' is corroborating evidence for the hypothesis 'all ravens are black'. Attempted resolutions may distinguish:

  • non-falsifying observations as to strong, moderate, or weak corroborations
  • investigations that do or do not provide a potentially falsifying test of the hypothesis.<ref>John N.W. Watkins (1984), Science and Skepticism, p. 319.</ref>
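Whatever the resolution, the logical pivot of the paradox is ordinary contraposition: a conditional and its contrapositive are equivalent, so evidence for one is evidence for the other. In symbols:

$$\forall x\,\big(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\big) \;\equiv\; \forall x\,\big(\lnot\mathrm{Black}(x) \rightarrow \lnot\mathrm{Raven}(x)\big)$$

A green tree is a non-black non-raven, so it satisfies the right-hand formulation and hence, by the equivalence, appears to corroborate the claim about ravens.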

Corroboration is related to the problem of induction, which arises because a general case (a hypothesis) cannot be logically deduced from any series of specific observations. That is, any observation can be seen as corroboration of any hypothesis if the hypothesis is sufficiently restricted. The argument has also been taken as showing that observations are theory-laden, and thus it is not possible to make truly independent observations. One response is that a problem may be sufficiently narrowed (or axiomatized) as to take everything except the problem (or axiom) of interest as unproblematic for the purpose at hand.<ref>Karl R. Popper (1963), Conjectures and Refutations, pp. 238-39.</ref>

Evidence contrary to a hypothesis is itself philosophically problematic. Such evidence is called a falsification of the hypothesis. However, under the theory of confirmation holism it is always possible to save a given hypothesis from falsification. This is so because any falsifying observation is embedded in a theoretical background, which can be modified in order to save the hypothesis. Popper acknowledged this but maintained that a critical approach respecting methodological rules that avoided such immunizing stratagems is conducive to the progress of science.<ref>Karl R. Popper (1979, Rev. ed.), Objective Knowledge, pp. 30, 360.</ref>

Despite the philosophical questions raised, the hypothetico-deductive model remains perhaps the best understood theory of scientific method. This is an example of an algorithmic statement of the hypothetico-deductive method:<ref>Peter Godfrey-Smith (2003) Theory and Reality, p. 236.</ref>

  1. Gather data (observations about something that is unknown, unexplained, or new).
  2. Hypothesize an explanation for those observations.
  3. Deduce a consequence of that explanation (a prediction). Formulate an experiment to see if the predicted consequence is observed.
  4. Wait for corroboration. If there is corroboration, go to step 3. If not, the hypothesis is falsified. Go to step 2.
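For readers who think procedurally, the four steps can be rendered as a deliberately schematic loop. Everything here is a hypothetical placeholder: `observe`, `propose`, `predict`, and `run_test` stand in for whatever the substantive research problem supplies, and, like the algorithm above, the loop never terminates, since corroboration only sends the researcher back for further tests.

```python
def hypothetico_deductive(observe, propose, predict, run_test):
    """Schematic of the four-step method above; placeholders, not a real API."""
    data = observe()                      # step 1: gather data
    hypothesis = propose(data)            # step 2: hypothesize an explanation
    while True:
        prediction = predict(hypothesis)  # step 3: deduce a testable consequence
        if run_test(prediction):
            continue                      # step 4: corroborated -- test again (go to 3)
        hypothesis = propose(data)        # falsified -- revise (go to 2)
```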

Falsification

Are all swans white? The classical view of the philosophy of science is that it is the goal of science to "prove" such hypotheses or induce them from observational data. This seems hardly possible, since it would require us to infer a general rule from a number of individual cases, which is logically inadmissible. However, if we find one single black swan, logic allows us to conclude that the statement that all swans are white is false. Falsificationism thus strives for questioning, for falsification, of hypotheses instead of proving them.

Falsifiability or refutability is the logical possibility that an assertion can be contradicted by an observation or the outcome of a physical experiment. That something is "falsifiable" does not mean it is false; rather, that if it is false, then some observation or experiment will produce a reproducible result that is in conflict with it.

For example, the claim "atoms do exist" is unfalsifiable: even if all observations made so far have failed to produce an atom, it is still possible that the next observation will. In the same way, "all men are mortal" is unfalsifiable: even if someone is observed who has not died so far, he could still die in the next instant. "All men are immortal," by contrast, is falsifiable by the presentation of just one dead man. Not all statements that are falsifiable in principle are falsifiable in practice. For example, "it will be raining here in one million years" is theoretically falsifiable, but not practically so.
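The asymmetry these examples exploit can be stated formally. A universal claim is refuted by a single counterexample, while an existential claim can never be refuted by any finite run of failed observations:

$$\forall x\,\big(\mathrm{Man}(x) \rightarrow \lnot\mathrm{Dies}(x)\big) \quad\text{is falsified by a single } a \text{ with } \mathrm{Man}(a) \wedge \mathrm{Dies}(a),$$

whereas $\exists x\,\mathrm{Atom}(x)$ is logically compatible with any finite set of observations, however long atoms fail to turn up.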

The concept was made popular by Karl Popper, who, in his philosophical criticism of the popular positivist view of the scientific method, concluded that a hypothesis, proposition, or theory talks about the observable only if it is falsifiable. Popper, however, stressed that unfalsifiable statements are still important in science, and are often implied by falsifiable theories. For example, while "all men are mortal" is unfalsifiable, it is a logical consequence of the falsifiable theory that "every man dies before he reaches the age of 150 years". Similarly, the ancient metaphysical and unfalsifiable idea of the existence of atoms has led to corresponding falsifiable modern theories. Popper invented the notion of metaphysical research programs to name such unfalsifiable ideas. In contrast to positivism, which held that statements are senseless if they cannot be verified or falsified, Popper claimed that falsifiability is merely a special case of the more general notion of criticizability, even though he admitted that empirical refutation is one of the most effective methods by which theories can be criticized.

Falsifiability is an important concept within the creation-evolution controversy, where proponents of both sides claim that Popper developed falsifiability to mark ideas as unscientific or pseudoscientific, and use it to argue against the views of the other side. The question of what can and cannot legitimately be called science is of major importance in this debate, because US law says that only science may be taught in public school science classes. Thus the controversy raises the issue of whether, on the one hand, creationist ideas, or at least some of them, or at least in some form, may be legitimately called science, and, on the other hand, whether evolution itself may be legitimately called science. Falsifiability has even been used in court decisions in this context as a key deciding factor to distinguish genuine science from the nonscientific.

Causality

In 1747, while serving as surgeon on HM Bark Salisbury, James Lind carried out a controlled experiment to develop a cure for scurvy.<ref name="ADC1997">Dunn, Peter (January 1997). "James Lind (1716-94) of Edinburgh and the treatment of scurvy". Archive of Disease in Childhood Foetal Neonatal (United Kingdom: British Medical Journal Publishing Group) 76 (1): 64–65. doi:10.1136/fn.76.1.F64. PMC 1720613. PMID 9059193. http://fn.bmj.com/cgi/content/full/76/1/F64. Retrieved 2009-01-17.</ref> In this study his subjects' cases "were as similar as I could have them"; that is, he imposed strict entry requirements to reduce extraneous variation. The men were paired, which provided blocking. From a modern perspective, the main thing that is missing is randomized allocation of subjects to treatments.
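The missing modern step is mechanically simple. The sketch below uses Lind's six treatments, but the sailor labels and the assignment procedure are illustrative additions, not a reconstruction of what Lind actually did:

```python
import random

# Lind's six treatments, two sailors per treatment (his pairing = "blocking").
treatments = ["cider", "elixir of vitriol", "vinegar",
              "sea water", "oranges and lemons", "purgative mixture"]
sailors = [f"sailor_{i}" for i in range(1, 13)]  # hypothetical labels

random.seed(1747)           # fixed seed so the allocation is reproducible
random.shuffle(sailors)     # the step Lind lacked: random allocation

allocation = {t: sailors[2 * i:2 * i + 2] for i, t in enumerate(treatments)}
for treatment, pair in allocation.items():
    print(f"{treatment}: {pair}")
```

Randomizing which sailors receive which treatment guards against systematic differences (for example, the sickest men all ending up in one group) biasing the comparison.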

Complexity and Simplicity

Occam's Razor

Occam's razor (or Ockham's razor), often expressed in Latin as the lex parsimoniae and translated as the law of parsimony, law of economy, or law of succinctness, is a principle that generally recommends selecting the competing hypothesis that makes the fewest new assumptions when the hypotheses are equal in other respects; for instance, if all the hypotheses can sufficiently explain the observed data.

The principle is often inaccurately summarized as "the simplest explanation is most likely the correct one." This summary is misleading, however, since the principle is actually focused on shifting the burden of proof in discussions.<ref>"The aim of appeals to simplicity in such contexts seem to be more about shifting the burden of proof, and less about refuting the less simple theory outright.." Alan Baker, Simplicity, Stanford Encyclopedia of Philosophy, (2004),http://plato.stanford.edu/entries/simplicity/</ref> That is, the razor is a principle that suggests we should tend towards simpler theories (see justifications section below) until we can trade some simplicity for increased explanatory power. Contrary to the popular summary, the simplest available theory is sometimes a less accurate explanation. Philosophers also add that the exact meaning of "simplest" can be nuanced in the first place.<ref>"In analyzing simplicity, it can be difficult to keep its two facets—elegance and parsimony—apart. Principles such as Occam's razor are frequently stated in a way which is ambiguous between the two notions...While these two facets of simplicity are frequently conflated, it is important to treat them as distinct. One reason for doing so is that considerations of parsimony and of elegance typically pull in different directions." Alan Baker, Simplicity, Stanford Encyclopedia of Philosophy, (2004),http://plato.stanford.edu/entries/simplicity/</ref>

In science, Occam's razor is used as a heuristic (rule of thumb) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In physics, parsimony was an important heuristic in the formulation of special relativity by Albert Einstein,<ref name="fn_(102)">Einstein, Albert (1905), "Does the Inertia of a Body Depend Upon Its Energy Content?" (in German), Annalen der Physik, pp. 639–41.</ref><ref name="fn_(103)">L Nash, The Nature of the Natural Sciences, Boston: Little, Brown (1963).</ref> the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler,<ref name="fn_(104)">de Maupertuis, PLM (1744) (in French), Mémoires de l'Académie Royale, p. 423.</ref> and the development of quantum mechanics by Ludwig Boltzmann, Max Planck, Werner Heisenberg and Louis de Broglie.<ref>de Broglie, L (1925) (in French), Annales de Physique, pp. 22–128.</ref> In chemistry, Occam's razor is often an important heuristic when developing a model of a reaction mechanism.<ref name="fn_(107)">RA Jackson, Mechanism: An Introduction to the Study of Organic Reactions, Clarendon, Oxford, 1972.</ref><ref name="fn_(108)">BK Carpenter, Determination of Organic Reaction Mechanism, Wiley-Interscience, New York, 1984.</ref> However, while it is useful as a heuristic in developing models of reaction mechanisms, it has been shown to fail as a criterion for selecting among published models. In this context, Einstein himself expressed a certain caution when he formulated Einstein's Constraint: "Everything should be kept as simple as possible, but no simpler." Elsewhere, Einstein harks back to the theological roots of the razor with his famous put-down: "The Good Lord may be subtle, but he is not malicious."

In the scientific method, parsimony is an epistemological, metaphysical or heuristic preference, not an irrefutable principle of logic, and certainly not a scientific result.<ref name= autogenerated1>Sober, Eliot (1994), "Let’s Razor Occam’s Razor", in Knowles, Dudley, Explanation and Its Limits, Cambridge University Press, pp. 73–93 .</ref> As a logical principle, Occam's razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often supports more complex theories than existing data. Science tends to prefer the simplest explanation that is consistent with the data available at a given time, but history shows that these simplest explanations often yield to complexities as new data become available. Science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than favoring one theory over another based merely on philosophical principles.

When scientists use the idea of parsimony, it only has meaning in a very specific context of inquiry. A number of background assumptions are required for parsimony to connect with plausibility in a particular research problem. The reasonableness of parsimony in one research context may have nothing to do with its reasonableness in another. It is a mistake to think that there is a single global principle that spans diverse subject matter.

As a methodological principle, the demand for simplicity suggested by Occam’s razor cannot be generally sustained. Occam’s razor cannot help toward a rational decision between competing explanations of the same empirical facts. One problem in formulating an explicit general principle is that complexity and simplicity are perspective notions whose meaning depends on the context of application and the user’s prior understanding. In the absence of an objective criterion for simplicity and complexity, Occam’s razor itself does not support an objective epistemology.

The problem of deciding between competing explanations for empirical facts cannot be solved by formal tools. Simplicity principles can be useful heuristics in formulating hypotheses, but they do not make a contribution to the selection of theories. A theory that is compatible with one person’s world view will be considered simple, clear, logical, and evident, whereas what is contrary to that world view will quickly be rejected as an overly complex explanation with senseless additional hypotheses. Occam’s razor, in this way, becomes a “mirror of prejudice.” It has been suggested that Occam’s razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. There is little empirical evidence that the world is actually simple or that simple accounts are more likely than complex ones to be true.<ref name="fn_(120)">Science, 263, 641–646 (1994)</ref>

Most of the time, Occam's razor is a conservative tool, cutting out crazy, complicated constructions and assuring that hypotheses are grounded in the science of the day, thus yielding 'normal' science: models of explanation and prediction. There are, however, notable exceptions where Occam's razor turns a conservative scientist into a reluctant revolutionary. For example, Max Planck, interpolating between the Wien and Jeans radiation laws, used an Occam's razor logic to formulate the quantum hypothesis, and he resisted that hypothesis even as it became more obvious that it was correct.

However, on many occasions Occam's razor has stifled or delayed scientific progress. For example, appeals to simplicity were used to deny the phenomena of meteorites, ball lightning, continental drift, and reverse transcriptase. Such appeals originally favored proteins over DNA as the carrier of genetic information, since proteins provided the simpler explanation. Theories that reach far beyond the available data are rare, but general relativity provides one example. In hindsight, one can argue that it is simpler to consider DNA the carrier of genetic information, because it uses a smaller number of building blocks (four nitrogenous bases). However, during the time that proteins were the favored genetic medium, it seemed the more complex hypothesis to confer genetic information on DNA rather than on proteins.

One can also argue (again in hindsight) for atomic building blocks for matter, because atomism provides a simpler explanation for the observed reversibility of both mixing and chemical reactions as the simple separation and rearrangement of atomic building blocks. At the time, however, the atomic theory was considered more complex because it posited the existence of invisible particles that had not been directly detected. Ernst Mach and the logical positivists rejected the atomic theory of John Dalton until the reality of atoms became more evident in Brownian motion, as explained by Albert Einstein.<ref>Ernst Mach, The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/ernst-mach/</ref>

There are many examples where Occam’s razor would have picked the wrong theory given the available data. Simplicity principles are useful philosophical preferences for choosing a more likely theory from among several possibilities that are each consistent with available data. A single instance of Occam’s razor picking a wrong theory falsifies the razor as a general principle.

If multiple models of natural law make exactly the same testable predictions, they are equivalent and there is no need for parsimony to choose one that is preferred. For example, Newtonian, Hamiltonian, and Lagrangian classical mechanics are equivalent. Physicists have no interest in using Occam’s razor to say the other two are wrong. Likewise, there is no demand for simplicity principles to arbitrate between wave and matrix formulations of quantum mechanics. Science often does not demand arbitration or selection criteria between models which make the same testable predictions.

Michael Lee and others<ref>Lee, M. S. Y. (2002): Divergent evolution, hierarchy and cladistics. Zool. Scripta 31(2): 217–219.</ref> provide cases where a parsimonious approach does not guarantee a correct conclusion and, if based on incorrect working hypotheses or interpretations of incomplete data, may even strongly support a false conclusion. Lee warns: "When parsimony ceases to be a guideline and is instead elevated to an ex cathedra pronouncement, parsimony analysis ceases to be science."

Reductionist Models

Complexity, Chaos and Emergent Properties

Conclusion

References

<references group=""></references>

Discussion questions

Problems

Glossary

  • [[Def: ]]
  • [[Def: ]]
  • [[Def: ]]