Quantitative Methods




Why Political Science students should want to study quantitative methods: numbers are political

Many students are initially apprehensive about taking a course in quantitative methods, finding it uninteresting at best and terrifying at worst. After all, quantitative analysis evokes frightening terms like numbers, statistics, and chi-square. In fact, many of you became political scientists expressly because you were uninterested in the harder sciences. If you had wanted to become a physicist or even an economist, you would have done so.

Well, the truth is, statistical studies and a reliance on quantitative methods more generally can be dry, arcane, and even a little bit frightening. However, quantitative analysis can also be so much more.

So why should a Political Science student want to study quantitative methods?

In the first place, having a basic understanding of quantitative analysis is necessary. You cannot avoid numbers even if you want to; they are all around us. Never mind the academic journals in Political Science: the popular media is saturated with opinion polls, government data on economic growth, and numerical ‘proof’ that one variable causes another. And all of this gives the numbers a certain type of power.

Numbers are powerful in several ways. In the first place, and in a way similar to the qualitative methods that we have already explored, quantitative methods have the power to inform. Quantitative analysis can reveal the underlying patterns of a complex social world. On this basis it has the power to recommend rational courses of action for citizens and policy makers.

But the power of numbers doesn’t stop at their power to inform. Numbers also have the power to obfuscate, to deceive, to normalize, and to control.

So why study quantitative methods? Numbers have a social power over you whether you like it or not.

Thankfully, and despite the novelty of some of the techniques that we will be exploring, your study of qualitative methods has prepared you well. In fact, any good quantitative analysis should begin with a strong reflection on that which is non-quantitative – on the politics that are embedded in the numbers prior to any statistical operation. Surveys, experiments, interviews – the techniques that produce the raw data upon which statistical operations work their magic – are governed by the choices and assumptions of their designers, and these choices leave an indelible mark on the final numerical results. Accordingly, at least in a discipline like Political Science, numbers are oftentimes no less social and political than campaign promises.

Quantitative data has a life and vital force beyond the spreadsheets and tables. Some of this becomes obvious when we reflect upon how numbers are produced. Take Gross Domestic Product. GDP is one of the core numbers used in statistical analysis – do democracies grow faster than autocracies? Does a particular policy correspond with an increase in GDP? A great deal of social scientific analysis involving higher-level statistical comparisons is based upon this number. So ask yourself a question: where does a significant social number like Gross Domestic Product come from? Did it just sprout up from the ground like a mushroom, waiting to be discovered by a fortunate gastronome? No, it is a social product. Social statistics like GDP (or crime rates, or unemployment rates) always have a specific moment of emergence in history. They are always parented by real human beings. That much is incontestable.

What is contestable is the impact of all of the political and ideological assumptions on the number itself. Is the social DNA of the parent embedded in the genes of the statistical progeny?

Getting back to GDP, the World Bank, the IMF, the World Trade Organization, all the key capitalist states, and most academic studies promote this figure as the best (and in some respects the only) measure of poverty, inequality, and ultimately development. GDP is the starting point for a great deal of economic and political analysis.

Nevertheless, there is now an extensive literature arguing that the number is pregnant with social assumptions.

Gross Domestic Product, like any social statistic, is a filter. It is a number that simplifies a complex reality. It condenses a virtually infinite number of transactions into a single sum that includes only certain pieces of the whole. Why does this matter? Think of the story of the blind men and the elephant.<ref>Template:Cite book</ref> Asked to describe what an elephant is, each one touches a different part – one a tusk, one the trunk, one an ear, one a leg – and as a result each has a different idea of the essential nature of the creature. “A whole elephant is difficult to summarize”.<ref>Template:Cite book</ref> The same thing is true for “the economy.” We never actually see or touch “the economy”. At best we experience elements of it – employment, consumption, hunger. At best we can only measure and sum together pieces of the whole. This means that, as in the case of the elephant, we may not be getting the whole picture.

Now, some selectivity is necessarily the case for any summary indicator (it is, after all, a summary indicator). What is significant here is that the filtering process is rarely random or arbitrary: there are often certain systemic biases in the number.

For example, in her book Counting for Nothing, Marilyn Waring has argued that GDP is systematically patriarchal.<ref>Template:Cite book</ref> Why? In her words: “overwhelmingly, those experiences that are economically visible and included can be summarized as what men do.”<ref>Template:Cite book</ref> The numbers reflect a socially entrenched disregard for women and the types of roles that they have traditionally played.

It is not as if the unpaid household labour that GDP excludes is economically insignificant. And it is not as if it cannot be measured. To give just one estimate: “If unpaid activities (1995) were valued at prevailing wages, they would amount to $16 Trillion…of this $11 Trillion (70%) represents women’s work”.<ref>Template:Cite book</ref>

Or consider the work of Joseph Stiglitz, the Nobel Prize-winning economist and former World Bank chief economist, who has also written extensively on this topic.<ref>Template:Cite book</ref> He has criticized GDP for, among other things, the way it treats the environment. Within the existing quantitative framework, an environmental disaster like an oil spill can increase GDP even while it destroys the ocean – provided that more is spent on the clean-up than is lost to tourism and commercial fishing. Oil spills translate into growth. Or consider that clear-cutting a forest will increase GDP – right up until the last tree has been chopped down and the entire forestry sector collapses. GDP growth, then, says absolutely nothing about sustainability.<ref>Template:Cite web</ref>

So there is a case to be made for seeing GDP (and by extension any social indicator) as partial. Social data, regardless of how it is ultimately quantified and regardless of the degree of numerical precision, is always based upon political assumptions. And those assumptions and biases are replicated to the extent that these numbers guide our evaluations and inform our policy choices.

In the same light, consider the effects of those choices. The numbers can indirectly construct that which they are supposedly disinterestedly measuring. Indicators and benchmarks can discipline and normalize, as policy makers orient policy towards maximizing their performance vis-à-vis those benchmarks.<ref>Template:Cite book</ref>

Numbers, in other words, are political. So we need to be appropriately skeptical about quantitative evidence, in the same way that we should be appropriately skeptical about qualitative evidence. Quantity does not necessarily mean quality (in the normal sense of the term). Quantification can be partial, instrumental, political, and occasionally flat out wrong.

So what to do?

Knowing how quantitative evidence is produced, and knowing how to produce it and use it ourselves, is a good start. In learning how to do quantitative analysis, you will be better able to differentiate good from bad statistics – to judge what they are and are not saying.

So we shouldn’t abandon statistics, even though we need to think critically about them before we accept or use them. Skepticism should not obscure the powerful emancipatory capabilities of quantitative analysis. After all, one of the best ways of revealing the limitations of a quantitative analysis is through the use of more quantitative analysis. As an example, consider the American ‘war on terror’ launched after 9/11. Statistics can provide a particular sense of scale to this very costly policy choice (costly in terms of dollars and lives). John Mueller, for example, notes that, outside of 9/11, more Americans drowned in toilets than were killed by acts of international terrorism on American soil.<ref>Template:Cite journal</ref> Having to justify the rationality of the global war on terror in light of this type of statistic can change the discussion.

There are other examples of what statistics can do. Consider inferential statistics: have you ever wondered how pollsters can predict election results, often to the exact percentage point, before an election takes place? Or how researchers arrive at a precise measure of how much one variable is impacting another?
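A small simulation can show why polling works at all. The sketch below is illustrative rather than drawn from the text: the “true” level of support and the poll size are invented numbers, but the logic – repeated random samples cluster tightly around the population value – is the core of statistical inference.

```python
# A sketch (with invented numbers) of why a random sample can predict an
# election: repeated samples of the same size cluster tightly around the
# true population proportion, within a predictable margin of error.
import random

random.seed(42)

TRUE_SUPPORT = 0.52   # hypothetical true share of voters backing a candidate
SAMPLE_SIZE = 1000    # a typical national poll size

def poll(n, p):
    """Simulate polling n randomly chosen voters; return the sample proportion."""
    return sum(random.random() < p for _ in range(n)) / n

estimates = [poll(SAMPLE_SIZE, TRUE_SUPPORT) for _ in range(200)]

# With n = 1000, the 95% margin of error is roughly 1.96 * sqrt(p*(1-p)/n),
# i.e. about +/- 3 percentage points.
within_3_points = sum(abs(e - TRUE_SUPPORT) <= 0.03 for e in estimates)
print(f"{within_3_points} of 200 simulated polls landed within 3 points")
```

Notice that accuracy comes from random selection and sample size, not from polling a large fraction of the population – which is precisely why a 1,000-person poll can speak for millions of voters.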

Another Introduction to Quantitative Methodology

“Statistics are socially constructed: the products of social activities. There’s a tendency in our culture to believe that statistics – that numbers – are little nuggets of truth. That we can come upon them and pick them up very much the way a rock collector picks up stones. A better metaphor would be to suggest that statistics are like jewels; that is, they have to be selected, they have to be cut, they have to be polished, and they have to be placed in settings so that they can be viewed from particular angles.”<ref>Template:Cite journal</ref>

Numbers in the form of data are the vehicles upon which quantitative methodology operates. Quantitative methodology uses statistics and logical inference on numerically measured concepts to discover patterns and regularities in political phenomena. It is frequently contrasted with qualitative research, but the basic logical structure of the discovery process remains the same: each field of investigation uses logical inference to make determinations about the existing political world. Quantitative methodology emphasizes measurement and the development of mathematical models to mimic the interplay of political concepts within the world.

Quantitative research rests on the basic assumption that we can objectively and accurately measure the concepts we are investigating. For many political phenomena this may appear to be a matter-of-fact exercise, but this simplicity usually hides a complex array of research questions that must be considered. As an illustrative example, suppose we survey a subset of a population to ascertain the percentage of citizens within that sample who consider themselves ‘liberal’ as opposed to ‘conservative.’ This measurement provides us with a simple summary statistic for the sample, say “35% of the sample is liberal,” that can then be used to make a judgment about the population, namely “35% of the population is liberal.”

Underneath this measurement reside a number of important judgments that are masked by the brevity of the statistical statement. First, what is the population we are generalizing to? How was the sample determined? Could each member of the population have been included? Was the sample randomly generated? How do we know the individuals in the sample accurately represent the individuals in the population? Did every member of the sample answer the question? If not, was the group that answered different from the group that did not? Second, how is ‘liberal’ or ‘conservative’ defined? Do the individuals sampled understand these terms? Even if they do, do they understand them in the same way? Third, how do we know whether those sampled answered honestly? Did those who answered fully consider the question?

These questions reveal the complexity of quantitative measurement. As a result, an extremely important component of quantitative research is the development of objective numerical measures of every concept. Too often we take the numbers produced as objective without a clear understanding of the process by which they were developed. As the quote that begins this section notes, while the statistic itself may be simple, the researchers who created it used an interpretive process that was inevitably influenced by value-laden decisions.

With this in mind, let’s return to the basic understanding of quantitative methodology. Quantitative research refers to the systematic empirical investigation of social phenomena via statistical, mathematical, or computational techniques. Its objective is to develop and test theories and hypotheses through mathematical models in order to understand political phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and the mathematical expression of quantitative relationships. Quantitative data is data in numerical form: statistics, percentages, counts, and so on. Statistical techniques are then applied to this ostensibly value-neutral, objective, numerical data to make descriptive or causal inferences about the population from which the data is drawn. The researcher’s traditional goal is a statistical analysis that yields an unbiased result which can be generalized to some larger population. Quantitative researchers most often follow a positivist approach in a sequential manner.
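The leap from “35% of the sample” to “35% of the population” can itself be quantified. The sketch below computes the standard margin of error for a sample proportion; the sample size of 1,000 is an assumption for illustration (the text does not specify one), and the calculation presumes a simple random sample – exactly the condition the questions above put in doubt.

```python
# Margin of error for the hypothetical "35% liberal" statistic, assuming
# a simple random sample. n = 1000 is an illustrative choice, not from the text.
import math

p_hat = 0.35   # sample proportion who self-identify as liberal
n = 1000       # assumed sample size

# Standard error of a sample proportion, and the 95% margin of error
# (1.96 is the z-value that cuts off 2.5% in each tail of a normal curve).
se = math.sqrt(p_hat * (1 - p_hat) / n)
moe = 1.96 * se

low, high = p_hat - moe, p_hat + moe
print(f"95% confidence interval: {low:.3f} to {high:.3f}")
```

For these numbers the margin of error works out to roughly three percentage points, so the honest claim is “somewhere between about 32% and 38% of the population is liberal” – and even that holds only if the sample really was random and the respondents understood the question the same way.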
Quantitative research begins with the formulation of a research question, from which a hypothesis or series of hypotheses is developed. Next, the researcher identifies, defines, and operationalizes the variables under consideration, creates a data set containing a numerical measurement of each variable, and selects the appropriate mathematical procedures with which to analyze the data. Once this statistical analysis is conducted, the researcher makes inferences from the analysis to the population from which the data has been drawn.

Quantitative methodology developed to deal with the problems inherent in more explicitly subjective investigative methods. Its objective is to develop descriptive and causal inferences that accurately describe or predict phenomena within the social environment. However, several limitations challenge the ability of researchers to reach this objective. As the surveying example above shows, at its root every measurement contains a subjective element: the terms used to define our measures are developed in a value-laden social environment, and judgment is inherent in many aspects of the analysis process.

In addition, in almost every research endeavour a balance must be struck between parsimony and accuracy in modeling the political phenomena under investigation. If we seek to determine the causes of poverty, for example, the number of relevant factors is likely vast. But to conclude that a political phenomenon is determined by an extremely large collection of factors is to reduce the usefulness of the result. Good quantitative research therefore seeks to limit the variables under consideration to a manageable number that allows for meaningful interpretation.

Clear, stable definitions of concepts also remain elusive. The term ‘liberal’ has a general definitional meaning that can be looked up, but the term itself is a social construct for each of the individuals within a study. In layman’s terms, what I understand as a ‘liberal’ may be slightly or significantly different from what the other individuals in the study understand. Even if the term is clearly delineated within the survey, the words within that definition are social constructs as well.

Finally, the quantitative method implicitly assumes that all relevant concepts can be measured numerically and that the measurement of each concept is equally valid. This is likely not the case. While it is easy to quantify how much money an individual contributes to a political candidate, it is much more difficult to measure the value that individual places on the actual election of that candidate. A number of sophisticated techniques have been developed to measure concepts that are not easily quantifiable, but these techniques remain substitutes for a direct measure of the concept in question.
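The sequential workflow described above (question, hypothesis, operationalization, analysis, inference) can be made concrete with the chi-square test mentioned at the start of the chapter. The data below are entirely invented for illustration; the test itself simply asks whether the pattern in a cross-tabulation is too strong to be sampling noise.

```python
# An invented illustration of the hypothesis-testing workflow: do two groups
# in a fictional sample differ in self-identified ideology?
# H0 (null hypothesis): ideology is independent of group membership.

# Observed counts: rows = group, columns = (liberal, conservative)
observed = [[180, 220],   # group A
            [120, 280]]   # group B

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts under independence, then the chi-square statistic:
# sum over cells of (observed - expected)^2 / expected.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

# For a 2x2 table (1 degree of freedom) the 5% critical value is 3.841.
reject_h0 = chi_square > 3.841
print(f"chi-square = {chi_square:.2f}, reject H0: {reject_h0}")
```

Note what the test does and does not say: a large chi-square licenses the inference that the association is unlikely to be chance, but it says nothing about whether ‘liberal’ was operationalized sensibly in the first place – which is exactly the prior, qualitative question this section insists upon.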









<references group=""></references>
An analogy used to great effect by Best, Joel, More Damned Lies and Statistics: How Numbers Confuse Public Issues, London: University of California Press, 2004.

Best, 2004

Discussion questions


