Philosophy of science


Objectives

Introduction

The concept of a "science" of social behavior is relatively new, although, as we shall see, in some forms it dates back many centuries. The style of "social science" for which the OPOSSEM project was designed, however, goes back only about five decades, when the widespread availability of computers in governments and large universities, along with data sets that could be easily processed by computers, made statistical analysis routine. That analysis, however, built on earlier developments in social scientific research going back to the late 19th century, and concepts of "scientific" research generally that went back to the beginning of the 17th century.

One of the problems in dealing with social behavior scientifically is that "science" is not a requirement for understanding most of the social behavior that we encounter on a day-to-day basis. Humans are social animals, and a degree of understanding of some forms of social behavior is present even in very young infants. As children mature, they deal with increasingly complex forms of social behavior: first within the family, then in interactions with other children, then in structured settings such as schools, sports, and other activities, and finally in the fantastically complex social hierarchies of high school, usually along with some introduction to a work environment. Along the way—typically beginning around the age of 14—they begin to observe political and economic behavior in the larger society and to formulate attitudes about it, for example by identifying as politically liberal or conservative.

All of this can be done in the course of normal development without any appeal to a "social science", though in fact individuals gain a great deal of unsystematic knowledge through formal schooling and through various forms of entertainment. But formal scientific study is not a requirement for becoming a fully functioning member of society. In contrast, one would not expect to design an apartment building without a knowledge of physics and mechanics, or to work as an industrial chemist without having formally studied chemistry. That said, even those fields were not "scientific" until relatively recently: the great cathedrals of medieval Europe were built without a systematic knowledge of mechanics or gravity, and the Romans devised cements that have allowed structures to survive for 2,000 years.

The concept of a systematic scientific approach to the study of human behavior dates, for the most part, to the past 150 years or so, generally coinciding with the increases in communication brought about by industrialization and rising literacy, as well as the spectacular technological changes that occurred as a consequence of applying scientific principles to natural systems. As the scientific method was applied to increasingly complex phenomena, the possibility of extending it to human behavior arose repeatedly, though it took a great deal of time before those methods were successfully adapted, and approaches are still evolving.

The philosophy of the social sciences varies between disciplines; in particular, economics and psychology developed mathematical traditions (and, for psychology, experimental traditions) considerably earlier than did political science and sociology. The discussion here is focused primarily on the latter two disciplines, political science and sociology, which for the most part have developed in parallel.


A Brief History

Scholasticism to the Enlightenment

To a limited extent, scientific approaches to social behavior can be traced in the West back to the Greeks, specifically to Aristotle, notably his Politics, which provides a very systematic categorization of political institutions. Aristotle or his students may have undertaken a systematic empirical collection of the constitutions of the Greek city-states in order to develop this, though that work is now largely lost. For the next two thousand or so years, however, systematic theorizing was generally ignored in favor of narrative history, although late in this period one finds theoretical treatments in the works of Ibn Khaldun and Machiavelli.

For the most part, however, the study of social behavior prior to the 19th century took the form of philosophical work. In addition, from the beginnings of the university system in the high Middle Ages until about the middle of the 18th century, most of this philosophical work used the method of scholasticism, which emphasized the importance of ancient authorities, rhetorical technique, and elegant theory over the importance of empirical observation.<ref>Though not, it seems, to the point of endlessly debating "how many angels can dance on a pinhead", which appears to be a fabrication from the early modern period specifically designed to discredit the scholastic approach.</ref> The scholastic approach began to be challenged at the beginning of the 17th century, in particular with the work of the English philosopher and politician Francis Bacon (1561-1626), whose book Novum Organum (The New Instrument) presented a relatively complete framework for what would become the modern scientific method. In complete contrast to the scholastics, Bacon emphasized the importance of empirical observation over the received theoretical wisdom of established authorities. Bacon also proposed that science should be both public—rather than retained as private secrets—and used for the public good, and he even proposed government financing of scientific inquiry (a proposal which was not successful).


The Nineteenth Century

Pre-WWII Social Science

The Development of Modern Statistics

The association of "statistics" with the social sciences goes back to the origins of the term: "statistics" originally referred to the systematic collection of information of interest to "states", particularly demographic and economic data. In the early 19th century, however, the meaning of the term broadened to refer more generally to the collection, summary, and analysis of data. By the middle of the 19th century, the issue of public health in the rapidly expanding cities of industrializing Great Britain provided early examples of applied statistics. Florence Nightingale, who gained fame for her innovations in nursing during the Crimean War, was also an early innovator in the collection and use of statistics, and became a pioneer in the visual presentation of information and statistical graphics.<ref>Lewi, Paul J. (2006). Speaking of Graphics. http://www.datascope.be/sog.htm.</ref> In 1854, the London physician John Snow famously used a map of individual cholera cases to illustrate how they clustered around a pump on Broad Street, and made solid use of statistics to illustrate the connection between the quality of the water source and the incidence of cholera. Snow's study is considered a major event in the history of public health and of geography, and can be regarded as the founding event of the science of epidemiology.

At the same time that these applications of statistics were developing, advances were being made in the theoretical field of "mathematical statistics", which combines the mathematical theory of probability—whose initial results were developed in the 17th and 18th centuries, particularly in the analysis of games of chance (gambling)—with statistical inference. The relation between statistics and probability theory developed rather late, however. By 1800, astronomy used probability models and statistical theories, particularly the method of least squares, which was invented by Legendre and Gauss.
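In its simplest modern textbook form (this formulation is standard, not Legendre's or Gauss's original notation), the method of least squares chooses the parameter values that minimize the sum of squared differences between the observed values and the values predicted by the model; for a straight line fit to observations <math>(x_i, y_i)</math>,

<math>(\hat{\alpha}, \hat{\beta}) \;=\; \underset{\alpha,\beta}{\operatorname{arg\,min}} \;\sum_{i=1}^{n} \bigl(y_i - \alpha - \beta x_i\bigr)^2 .</math>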

By the late 19th century, statistics were increasingly used in industrial and particularly agricultural research. In the first decades of the 20th century, a collection of "frequentist" methods, including confidence intervals and significance tests, was developed by Ronald A. Fisher, Jerzy Neyman and Egon Pearson. By the end of the 1920s, these had largely displaced the earlier "Bayesian" approach<ref>The word Bayesian appeared in the 1930s, and by the 1960s it became the term preferred by those dissatisfied with the limitations of frequentist statistics.</ref><ref name="Miller Earliest Uses">Jeff Miller, "Earliest Known Uses of Some of the Words of Mathematics (B)"</ref>, which, as we will discuss below, provided a more common-sensical approach to the interpretation of data but was not computationally tractable given the technology of the time.
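The practical difference between the two traditions can be illustrated with a small sketch. The survey numbers below are hypothetical and the code is purely illustrative: it places a conventional frequentist 95% confidence interval (normal approximation) next to a Bayesian 95% credible interval from a Beta posterior under a uniform prior, computed with the scipy library.

<syntaxhighlight lang="python">
# Illustrative sketch only: hypothetical survey of n respondents, k of whom say "yes".
import math
from scipy import stats

n, k = 400, 220
p_hat = k / n

# Frequentist: 95% confidence interval via the normal approximation
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: uniform Beta(1, 1) prior updated to a Beta(1 + k, 1 + n - k) posterior,
# summarized by a 95% credible interval
posterior = stats.beta(1 + k, 1 + n - k)
cred = (posterior.ppf(0.025), posterior.ppf(0.975))

print(f"point estimate         : {p_hat:.3f}")
print(f"95% confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible interval  : ({cred[0]:.3f}, {cred[1]:.3f})")
</syntaxhighlight>

With a flat prior and a moderately large sample the two intervals are numerically very similar; the interpretive difference is that the credible interval is a direct probability statement about the unknown proportion, which is the "common-sensical" reading referred to above.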

The use of statistics to study social behavior increased substantially in response to the increased complexity of industrialized economies, particularly in dealing with the Great Depression and the industrial mobilization for World War II. The collection of a wide variety of information by governments became increasingly widespread and was assisted by technological innovations, notably the "punch card", which allowed large amounts of information to be recorded as holes in paper cards that could then be mechanically sorted and counted. In addition, the mass media began the systematic practice of election forecasting, with Literary Digest starting a mail survey to predict the presidential election in 1916. The Literary Digest poll was subject to sampling bias: in 1936 it famously and incorrectly predicted that Franklin Roosevelt would lose the election, because its sample was drawn largely from sources such as telephone directories and automobile registration lists, and at that time only relatively wealthy people could afford telephones and automobiles. Twelve years later, polling errors led the Chicago Daily Tribune, a pro-Republican newspaper, to famously print the banner headline DEWEY DEFEATS TRUMAN (Truman in fact led Dewey by almost five percentage points in the popular vote and won the Electoral College by a wide margin). The forecasting community responded to these errors, and over time its methods gradually improved.
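The mechanism behind the Literary Digest failure, sampling from a frame that is correlated with the outcome, can be illustrated with a small simulation. This is a hypothetical sketch: the population size, the share of "wealthy" voters, and the support rates below are invented for illustration and are not the 1936 figures.

<syntaxhighlight lang="python">
# Illustrative sketch: even an enormous sample drawn from a biased frame
# misses the population value. All numbers are hypothetical.
import random

random.seed(1)
N = 100_000
population = []
for _ in range(N):
    wealthy = random.random() < 0.30                        # 30% of voters are "wealthy"
    # in this toy example, wealthy voters are less likely to support the incumbent
    supports_incumbent = random.random() < (0.35 if wealthy else 0.70)
    population.append((wealthy, supports_incumbent))

true_share = sum(s for _, s in population) / N

# Sampling frame: only wealthy voters (e.g., telephone and automobile owners)
frame = [s for wealthy, s in population if wealthy]
biased_sample = random.sample(frame, 20_000)                # large, but biased
biased_share = sum(biased_sample) / len(biased_sample)

print(f"true support in population : {true_share:.3f}")
print(f"estimate from biased frame : {biased_share:.3f}")
</syntaxhighlight>

No increase in the size of the sample repairs the problem, because the error comes from who can be sampled at all rather than from sampling variability.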

The ability to analyze such information using complex mathematical techniques was limited by the fact that calculations could not be automated beyond the level of mechanical calculators. This situation began to change in the 1960s with the spread of digital computers in government and research universities. While much of the original funding for digital computing had come from the military, by 1951 an early computer, the Univac I, had been installed at the U.S. Census Bureau.<ref>http://www.census.gov/history/www/census_then_now/census_facilities/bowie_computer_center.html</ref> As the cost of computers declined and their capacity increased, statistical packages were developed to automate the once time-consuming process of statistical calculation, and by the late 1960s these were becoming widespread; for example, the statistical package SPSS ("Statistical Package for the Social Sciences") was released in its first version in 1968, after being developed by Norman H. Nie and C. Hadlai Hull; at the time Nie was a political science graduate student at Stanford University.

The availability of computing power, in turn, increased the demand for quantitative data, leading for example to the founding of the Inter-University Consortium for Political and Social Research in 1962. Statistical computation benefited tremendously from "Moore's Law"—the approximate doubling of computing power every 18 months—as computers both decreased in size, from room-sized monsters with their own electrical and cooling systems to devices that fit comfortably in a backpack, and increased exponentially in capacity, to the point where a contemporary smart phone probably has greater computing power than the combined capacity of North American research universities in 1965.<ref>For example, in 1965 the central research computer at Indiana University/Bloomington was a Control Data 3600 with 384 kilobytes of memory, an experimental "drum" storage holding 1 megabyte (when it worked), and a single processor running at 500,000 cycles per second. It filled a room about the size of a 50-person classroom and cleverly used the water of an Olympic-sized swimming pool for cooling. In 2011, an iPhone 4 has 512 megabytes (512,000 kilobytes) of random access memory supplemented with at least 32 gigabytes (32,000,000 kilobytes) of solid-state "disk" memory, and multiple processors running at 1 billion cycles per second handling the programs and operating system, graphics, two cameras and, yes, a phone. It weighs less than 5 ounces (140 grams).</ref><ref>Even if most of that capacity is devoted to viewing pictures of kittens playing with string and catapulting exploding birds at green pigs.</ref> That computing power has made possible statistical techniques that were simply infeasible in an earlier era because of their computational requirements, which has led, for example, to a renewed interest in Bayesian analysis.
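Taking the 18-month doubling figure at face value, the implied growth in computing power over the 46 years separating the two machines described in the footnote (1965 to 2011) is roughly

<math>2^{46/1.5} \approx 2^{30.7} \approx 1.7 \times 10^{9},</math>

that is, on the order of a billion-fold. This is only a back-of-the-envelope illustration, since the doubling period itself has varied over time.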

The availability of machine-readable data has also increased dramatically in the past decade due to the World Wide Web. Many organizations and governments make data available on web sites, and research data sets are now generally available immediately for downloading, replacing the earlier process of sending these using physical media such as punch cards and magnetic tape. Finally, with advances in automated natural language processing, the web itself is a subject of statistical analysis, for example in monitoring social media such as blogs, Twitter and Facebook for trends in political interests and attitudes.

The Behavioral Revolution

Key Concepts

Systematic Methodology

Scientific Realism

Scientific realism is, at the most general level, the view that the world described by science is the real world, as it is, independent of what we might take it to be. Within philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of unobservable entities apparently talked about by scientific theories. Generally, those who are scientific realists assert that one can make reliable claims about unobservables (viz., that they have the same ontological status as observables).

Main features of scientific realism

Scientific realism involves two basic positions. First, it is a set of claims about the features of an ideal scientific theory; an ideal theory is the sort of theory science aims to produce. Second, it is the commitment that science will eventually produce theories very much like an ideal theory and that science has done pretty well thus far in some domains. It is important to note that one might be a scientific realist regarding some sciences while not being a realist regarding others. For example, one might hold realist attitudes toward physics, chemistry and biology, and not toward economics, psychology and sociology.

According to scientific realism, an ideal scientific theory has the following features:

  • The claims the theory makes are either true or false, depending on whether the entities talked about by the theory exist and are correctly described by the theory. This is the semantic commitment of scientific realism.
  • The entities described by the scientific theory exist objectively and mind-independently. This is the metaphysical commitment of scientific realism.
  • There are reasons to believe some significant portion of what the theory says. This is the epistemological commitment.

Combining the first and the second claim entails that an ideal scientific theory says definite things about genuinely existing entities. The third claim says that we have reasons to believe that the things said about these entities are true.

Scientific realism usually holds that science makes progress, i.e. scientific theories usually get successively better, or, rather, answer more and more questions. For this reason, many people, scientific realist or otherwise, hold that realism should make sense of the progress of science in terms of theories being successively more like the ideal theory that scientific realists describe.

Characteristic claims

The following claims are typical of those held by scientific realists. Due to the wide disagreements over the nature of science's success and the role of realism in its success, a scientific realist would agree with some but not all of the following positions.<ref>Jarrett Leplin (1984), Scientific Realism, University of California Press, p. 1, ISBN 0-520-05155-6, http://books.google.com/books?id=UFCpopYlB9EC&lpg=PA189&pg=PA1#v=onepage&f=false </ref>

  • The best scientific theories are at least partially true.
  • The best theories do not employ central terms that are non-referring expressions.
  • To say that a theory is approximately true is sufficient explanation of the degree of its predictive success.
  • The approximate truth of a theory is the only explanation of its predictive success.
  • Even if a theory employs expressions that do not have a reference, a scientific theory may be approximately true.
  • Scientific theories are in a historical process of progress towards a true account of the physical world.
  • Scientific theories make genuine existential claims.
  • Theoretical claims of scientific theories should be read literally and are definitively either true or false.
  • The degree of the predictive success of a theory is evidence of the referential success of its central terms.
  • The goal of science is an account of the physical world that is literally true. Science has been successful because this is the goal that it has been making progress towards.

History of scientific realism

Scientific realism is related to much older philosophical positions including rationalism and realism. However, it is a thesis about science developed in the twentieth century. Portraying scientific realism in terms of its ancient, medieval, and early modern cousins is at best misleading.

Scientific realism was developed largely as a reaction to logical positivism. Logical positivism was the first philosophy of science in the twentieth century and the forerunner of scientific realism, holding that a sharp distinction can be drawn between observational terms and theoretical terms, the latter capable of semantic analysis in observational and logical terms.

Logical positivism encountered difficulties with:

  • The verification theory of meaning (for which see Hempel (1950)).
  • Troubles with the analytic-synthetic distinction (for which see Quine (1950)).
  • The theory-ladenness of observation (for which see Kuhn (1970) and Quine (1960)).
  • Difficulties moving from the observationality of terms to observationality of sentences (for which see Putnam (1962)).
  • The vagueness of the observational-theoretical distinction (for which see Maxwell (1962)).

These difficulties for logical positivism suggest, but do not entail, scientific realism, and led to the development of realism as a philosophy of science.

Realism became the dominant philosophy of science after positivism. Bas van Fraassen developed constructive empiricism as an alternative to realism. Responses to van Fraassen have sharpened realist positions and led to some revisions of scientific realism.

Arguments for and against scientific realism

One of the main arguments for scientific realism centers on the notion that scientific knowledge is progressive in nature, and that it is able to predict phenomena successfully. Many realists (e.g., Ernan McMullin, Richard Boyd) think the operational success of a theory lends credence to the idea that its more unobservable aspects exist, because those unobservables are what the theory used to derive its predictions. For example, a scientific realist would argue that science must derive some ontological support for atoms from the outstanding phenomenological success of all the theories using them.

Arguments for scientific realism often appeal to abductive reasoning or "inference to the best explanation". Scientific realists point to the success of scientific theories in predicting and explaining a variety of phenomena, and argue that from this we can infer that our scientific theories (or at least the best ones) provide true descriptions of the world, or approximately so.

On the other hand, pessimistic induction, one of the main arguments against realism, argues that the history of science contains many theories once regarded as empirically successful but which are now believed to be false. Additionally, the history of science contains many empirically successful theories whose unobservable terms are not believed to genuinely refer. For example, the effluvial theory of static electricity is an empirically successful theory whose central unobservable terms have been replaced by later theories. Realists reply that replacement of particular realist theories with better ones is to be expected due to the progressive nature of scientific knowledge, and when such replacements occur only superfluous unobservables are dropped. For example, Albert Einstein's theory of special relativity showed that the concept of the luminiferous ether could be dropped because it had contributed nothing to the success of the theories of mechanics and electromagnetism. On the other hand, when theory replacement occurs, a well-supported concept, such as the concept of atoms, is not dropped but is incorporated into the new theory in some form.

Also against scientific realism, social constructivists might argue that scientific realism is unable to account for the rapid change that occurs in scientific knowledge during periods of revolution. Constructivists may also argue that the success of theories is only a part of the construction. However, these arguments ignore the fact that many scientists are not realists. In fact, during what is perhaps the most notable example of revolution in science—the development of quantum mechanics in the 1920s—the dominant philosophy of science was logical positivism. The alternative realist Bohm interpretation and many-worlds interpretation of quantum mechanics do not make such a revolutionary break with the concepts of classical physics.

Another argument against scientific realism, deriving from the underdetermination problem, is not so historically motivated as these others. It claims that observational data can in principle be explained by multiple theories that are mutually incompatible. Realists might counter by saying that there have been few actual cases of underdetermination in the history of science. Usually the requirement of explaining the data is so exacting that scientists are lucky to find even one theory that fulfills it. Furthermore, if we take the underdetermination argument seriously, it implies that we can know about only what we have directly observed. For example, we could not theorize that dinosaurs once lived based on the fossil evidence because other theories (e.g., that the fossils are clever hoaxes) can account for the same data. Realists claim that, in addition to empirical adequacy, there are other criteria for theory choice, such as parsimony.

Inductive and Deductive Methods

Positivism

Positivism asserts that the only authentic knowledge is that which is based on sense experience and positive verification. Developing an approach to the philosophy of science that drew on Enlightenment thinkers such as Henri de Saint-Simon and Pierre-Simon Laplace, Auguste Comte saw the scientific method as replacing metaphysics in the history of thought, observing the circular dependence of theory and observation in science. Sociological positivism was later reformulated by Émile Durkheim as a foundation to social research. At the turn of the 20th century the first wave of German sociologists, including Max Weber and Georg Simmel, rejected the doctrine, thus founding the antipositivist tradition in sociology. Later antipositivists and critical theorists have associated positivism with "scientism": science as ideology.

In the early 20th century, logical positivism—a descendant of Comte's basic thesis but an independent movement—sprang up in Vienna and grew to become one of the dominant schools in Anglo-American philosophy and the analytic tradition. Logical positivists (or 'neopositivists') reject metaphysical speculation and attempt to reduce statements and propositions to pure logic. Critiques of this approach by philosophers such as Karl Popper, Willard Van Orman Quine and Thomas Kuhn have been highly influential, and led to the development of postpositivism. In psychology, the positivist movement was influential in the development of behaviorism and operationalism. In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; its connection with philosophical positivism is thus somewhat attenuated, and in recent generations it has generally emphasized the authority of human political structures as opposed to a "scientific" view of law.

In contemporary social science, strong accounts of positivism have long since fallen out of favor. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favor of methodological debates concerning clarity, replicability, reliability and validity.<ref name="Gartell">Gartell, David, and Gartell, John. 1996. "Positivism in sociological practice: 1967-1990". Canadian Review of Sociology, Vol. 33 No. 2.</ref> This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.<ref name="Boudon">Boudon, Raymond. 1991. "Review: What Middle-Range Theories are". Contemporary Sociology, Vol. 20 Num. 4 pp 519-522.</ref> Other new movements, such as critical realism, have emerged to reconcile the overarching aims of social science with various so-called 'postmodern' critiques.

Replication

The Hypothetical-Deductive Method

The hypothetico-deductive model or method, first so-named by William Whewell,<ref>William Whewell (1837) History of the Inductive Sciences</ref><ref>William Whewell (1840), Philosophy of the Inductive Sciences</ref> is a proposed description of scientific method. According to it, scientific inquiry proceeds by formulating a hypothesis in a form that could conceivably be falsified by a test on observable data. A test that could and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. A test that could but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.

"From the long tradition of empiricism we have inherited the hypothetico-deductive model of scientific research."

—Thomas A. Brody (1993), The Philosophy Behind Physics (Luis De La Peña and Peter E. Hodgson, eds.), Springer Verlag, ISBN 0-387-55914-0, p. 86.

Qualification of corroborating evidence is sometimes raised as philosophically problematic. The raven paradox is a famous example. The hypothesis that 'all ravens are black' would appear to be corroborated by observations of only black ravens. However, 'all ravens are black' is logically equivalent to 'all non-black things are non-ravens' (this is the contraposition of the original implication, written out formally after the list below). 'This is a green tree' is an observation of a non-black thing that is a non-raven, and therefore corroborates 'all non-black things are non-ravens'. It appears to follow that the observation 'this is a green tree' is corroborating evidence for the hypothesis 'all ravens are black'. Attempted resolutions may distinguish:

  • non-falsifying observations as to strong, moderate, or weak corroborations
  • investigations that do or do not provide a potentially falsifying test of the hypothesis.<ref>John N.W. Watkins (1984), Science and Skepticism, p. 319.</ref>
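Written in standard first-order notation (this formalization is not in the original text, but is the usual way of stating the equivalence), the contraposition that drives the paradox is

<math>\forall x\,\bigl(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\bigr) \;\Longleftrightarrow\; \forall x\,\bigl(\lnot\mathrm{Black}(x) \rightarrow \lnot\mathrm{Raven}(x)\bigr),</math>

so any object satisfying the right-hand side, such as a green tree, appears to confirm the left-hand side as well.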

Corroboration is related to the problem of induction, which arises because a general case (a hypothesis) cannot be logically deduced from any series of specific observations. That is, any observation can be seen as corroboration of any hypothesis if the hypothesis is sufficiently restricted. The argument has also been taken as showing that all observations are theory-laden, and thus it is not possible to make truly independent observations. One response is that a problem may be sufficiently narrowed (or axiomatized) as to take everything except the problem (or axiom) of interest as unproblematic for the purpose at hand.<ref>Karl R. Popper (1963), Conjectures and Refutations, pp. 238-39.</ref>

Evidence contrary to a hypothesis is itself philosophically problematic. Such evidence is called a falsification of the hypothesis. However, under the theory of confirmation holism it is always possible to save a given hypothesis from falsification. This is so because any falsifying observation is embedded in a theoretical background, which can be modified in order to save the hypothesis. Popper acknowledged this but maintained that a critical approach respecting methodological rules that avoided such immunizing stratagems is conducive to the progress of science.<ref>Karl R. Popper (1979, Rev. ed.), Objective Knowledge, pp. 30, 360.</ref>

Despite the philosophical questions raised, the hypothetico-deductive model remains perhaps the best understood theory of scientific method. This is an example of an algorithmic statement of the hypothetico-deductive method:<ref>Peter Godfrey-Smith (2003) Theory and Reality, p. 236.</ref>

  1. Gather data (observations about something that is unknown, unexplained, or new).
  2. Hypothesize an explanation for those observations.
  3. Deduce a consequence of that explanation (a prediction). Formulate an experiment to see if the predicted consequence is observed.
  4. Wait for corroboration. If there is corroboration, go to step 3. If not, the hypothesis is falsified. Go to step 2.
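Read as a loop, the four steps have a simple control structure. The following is a minimal sketch of that control flow only; the helper functions (gather_data, propose_hypothesis, deduce_prediction, prediction_holds) are hypothetical placeholders standing in for real scientific work, not a working inference procedure.

<syntaxhighlight lang="python">
# Minimal sketch of the hypothetico-deductive loop in steps 1-4 above.
# The four helper functions are placeholders for actual scientific work.
def hypothetico_deductive_loop(gather_data, propose_hypothesis,
                               deduce_prediction, prediction_holds,
                               max_rounds=10):
    data = gather_data()                                   # step 1: gather data
    hypothesis = propose_hypothesis(data)                  # step 2: hypothesize an explanation
    for _ in range(max_rounds):
        prediction = deduce_prediction(hypothesis)         # step 3: deduce a consequence
        if prediction_holds(prediction, gather_data()):    # step 4: corroborated?
            continue                                       #   yes: deduce and test again (step 3)
        hypothesis = propose_hypothesis(gather_data())     #   no: falsified, return to step 2
    return hypothesis                                      # best-corroborated hypothesis so far
</syntaxhighlight>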

Falsification

Are all swans white? The classical view of the philosophy of science is that it is the goal of science to "prove" such hypotheses or induce them from observational data. This seems hardly possible, since it would require us to infer a general rule from a number of individual cases, which is logically inadmissible. However, if we find one single black swan, logic allows us to conclude that the statement that all swans are white is false. Falsificationism thus strives for questioning, for falsification, of hypotheses instead of proving them.

Falsifiability or refutability is the logical possibility that an assertion can be contradicted by an observation or the outcome of a physical experiment. That something is "falsifiable" does not mean it is false; rather, that if it is false, then some observation or experiment will produce a reproducible result that is in conflict with it.

For example, the claim "atoms do exist" is unfalsifiable: even if all observations made so far have not produced an atom, it is still possible that the next observation will. In the same way, "all men are mortal" is unfalsifiable: even if someone is observed who has not died so far, he could still die in the next instant. "All men are immortal," by contrast, is falsifiable, by the presentation of just one dead man. Not all statements that are falsifiable in principle are falsifiable in practice. For example, "it will be raining here in one million years" is theoretically falsifiable, but not practically so.

The concept was made popular by Karl Popper, who, in his philosophical criticism of the popular positivist view of the scientific method, concluded that a hypothesis, proposition, or theory talks about the observable only if it is falsifiable. Popper, however, stressed that unfalsifiable statements are still important in science and are often implied by falsifiable theories. For example, while "all men are mortal" is unfalsifiable, it is a logical consequence of the falsifiable theory that "every man dies before he reaches the age of 150 years". Similarly, the ancient metaphysical and unfalsifiable idea of the existence of atoms has led to corresponding falsifiable modern theories. Popper invented the notion of metaphysical research programs to name such unfalsifiable ideas. In contrast to positivism, which held that statements are senseless if they cannot be verified or falsified, Popper claimed that falsifiability is merely a special case of the more general notion of criticizability, even though he admitted that empirical refutation is one of the most effective methods by which theories can be criticized.

Falsifiability is an important concept within the creation-evolution controversy, where proponents of both sides claim that Popper developed falsifiability to demarcate ideas as unscientific or pseudoscientific, and use it to make arguments against the views of the respective other side. The question of what can and cannot legitimately be called science is of major importance in this debate because US law permits only science to be taught in public school science classes. Thus the controversy raises the issue of whether, on the one hand, creationist ideas, or at least some of them, or at least in some form, may legitimately be called science, and, on the other hand, whether evolution itself may legitimately be called science. Falsifiability has even been used in court decisions in this context as a key factor in distinguishing genuine science from the nonscientific.


Causality

Conclusion

References

<references group=""></references>

Discussion questions

Problems

Glossary

  • [[Def: ]]
  • [[Def: ]]
  • [[Def: ]]