I’ve been thinking about a problem of methodology over the past couple of weeks. This problem pits two classic opponents against each other: anecdotes versus data. Data has become one of the buzzwords of the twenty-first century, as our ability to build worlds out of information has become formidable. Nathan Jurgenson phrased this development well in an essay for The New Inquiry titled “View from Nowhere”:
“As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.”
Big Data, as Jurgenson suggests, has fit neatly into a large gap in the human experience: a sense that we must be capable of knowing everything so long as we have perfect epistemology. Big Data further benefits from a huge amount of financial backing, as it has led to a number of empirical verifications and has radically transformed how we operate within society (think about the respective roles tweets and telegrams played in politics and society between the turn of the last century and this one).
However, we need to separate shifts in evidence from shifts in methodology when we speak about the renewed emphasis on “rationalism” as the saving grace for the world. Lest you think I am overstating the position a number of “rationalists” take in regard to their capacity for saving individuals, I bring you Sam Frank’s piece for Harper’s Magazine, “Power and Paranoia in Silicon Valley,” on Less Wrong and new-age rationalism more generally:
“Our society was sick–root, branch and memeplex– and rationality was the only cure.” –Michael Vassar
“This is the New Enlightenment. Old project’s finished. We actually have science now, now we can have the next part of the Enlightenment Project.” –Eliezer Yudkowsky
Both Michael Vassar and Eliezer Yudkowsky are deeply involved with the Less Wrong movement to bring the grace of rationality to the common people. But this mode of “rationality” happens to coincide with the big data studies of the start-up and disruptive-innovation scenes. More specifically, it stems from a set of practices that can be traced back to the mid-twentieth century.
To think about this in the context of big data, let’s take a look at another source. The University of Chicago’s Aims of Education speeches are often used to discuss the liberal arts’ commitment to undergraduate education. Bernard Harcourt gave the speech in 2011, under the title “Questioning the Authority of Truth.” Its focus was on how methods come in and out of vogue, rather than on the evidence behind such shifts. One of his examples captures our methodological interest perfectly:
“But that had been a long time ago, and by 1961, McNamara was decidedly a civilian bureaucrat taking over a military organization. Now what McNamara was especially good at was statistical control. During the war it was called “operations research.” It was the technical use of data and statistics to master military weapons systems.”
This was the shift from traditional modes of war research to the more empirically grounded “operations research,” which would in turn evolve into “cost-benefit analysis.” McNamara’s contribution was to introduce studies and efficiency to questions that had traditionally been left to military veterans. Classic war texts and philosophies were out; McNamara’s innovation was to recast the question as one of studies and statistics; in a word, empirics. As Harcourt summarized:
“No need for political wrangling, for value judgments, for practical experience, no need for Aristotelian values of phronesis nor for Machiavellian notions of virtu. The right answer would emerge from the machine-model that evaluates cost and benefits.”
(An interesting note: Harcourt’s language of the machine-model seems to echo a number of German thinkers on the problem of uniformity in mass culture: Heidegger’s Dasein and Adorno’s writing on mass culture, the latter even referring to the Enlightenment as “mass deception.” But this is another story.)
From the 1950s onward, economics had a serious impact on field after field. In history, for example, what made Robert Fogel and Stanley Engerman’s work on slavery so valuable, and so controversial, was its use of economic methods on historical sources, in a field that had long been shielded from empirical methodologies. Critiques of the work tend to come in two veins: from people concerned with the political and social ramifications of a conclusion like “slavery was better than Northern wage-labor,” and from people who asked whether it was sensible to build a systematic study out of questions posed to the historical record, to subjects who could no longer speak. Critiques of the former sort translate into the focus on identity politics in contemporary environments; those of the latter sort, into a question about the limits of empirical methodologies. Frank described this limitation as well:
“His [Vassar’s] rationalism seemed so limited to me, so incomplete. ‘It is unfortunate,’ he said, ‘that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.’ That hadn’t been what I meant. I had meant rationalism as itself a failure of the imagination.”
This is an extremely important distinction to make between the “cultural heritage of the elites” and rationalism “as a failure of the imagination.” The critique Frank makes here goes to the heart of assessments of Eurocentrism. The argument is not that the Scientific Method stops working in non-Western areas (and in fact, the Islamic Golden Age would have a lot to say about the creation of the Scientific Method). The argument is that there is no way to deploy something like the Scientific Method in areas that are not strictly empirical and still get meaningful results.
To think about this lack of imagination, let’s use a simple example drawn from my own experience working in a couple of sociological and political science data settings: a group of individuals who have never been impoverished decide they’d like to run a study on poverty. The specific interest is irrelevant; it’s enough that they want to focus on a question about an experience beyond their own. They start formulating questions:
- What is poverty? Who do we survey?
- What is the best way to contact participants?
- What is the best way to phrase the questions for our data?
Each of these questions is initially answered through discussion and assessment, with an eye toward making the data useful. So, for the first question, the group might use the federal definition of poverty, borrow phrasing from traditional sociology, and use a number of methods both on- and offline to contact participants. So far, so good.
They send out the questions and get back answers: the data. Now it’s time to clean the data and put it into a usable form! They come up with a series of protocols to keep the input uniform and to resolve any unexpected complications, and they set to work.
One member of the group runs into such a complication: it appears a participant was unaware they couldn’t circle two answers to a question. After discussing the problem, everyone arrives at a satisfactory answer, and they move on.
A second complication arises: someone seems to have misunderstood a question entirely, as their answer does not make sense. The researchers decide to omit the entry.
A third complication shows up: a number of participants have misinterpreted a word, making the data a little more difficult to read. The researchers decide to drop the question, lest they misinterpret the answers and end up with less meaningful results.
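To make the point concrete, here is a minimal, hypothetical sketch of what such a cleaning protocol looks like once it hardens into code. The survey fields, valid answers, and rules below are all invented for illustration; the point is only that each “complication” becomes a judgment call silently baked into the dataset.

```python
# Hypothetical survey responses; field names and answer codes are invented.
raw_responses = [
    {"id": 1, "answer": ["A"]},        # a clean response
    {"id": 2, "answer": ["A", "B"]},   # participant circled two answers
    {"id": 3, "answer": ["Z"]},        # answer not on the survey at all
]

VALID_ANSWERS = {"A", "B", "C", "D"}

def clean(response):
    """Apply the group's protocols; return a cleaned row, or None to omit it."""
    answers = response["answer"]
    # Complication 1: two circled answers. The group decided (say) to keep
    # only the first -- a trade-off the published data will never show.
    if len(answers) > 1:
        answers = answers[:1]
    # Complication 2: a nonsensical answer. The group decided to omit the entry.
    if answers[0] not in VALID_ANSWERS:
        return None
    return {"id": response["id"], "answer": answers[0]}

cleaned = [row for row in (clean(r) for r in raw_responses) if row is not None]
# Two rows survive: one was silently truncated, one was silently dropped.
print(cleaned)
```

Every branch in `clean` is one of those discussions made permanent: the resulting table looks tidy, but the decisions that shaped it, made by people who may never have lived the experience being measured, are invisible in the output.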
This process goes on until the data are clean and sent along for publication, free of omissions and confusion. But is this actually a useful model of what the participants experience? Certainly an index can be created (if 10% of the data are bad, the whole thing is moot), but this is only a stand-in for the fact that, in any given social science data context, decisions and trade-offs must be made. And when those decisions are made by people with no experience in the areas they’d like to research, it becomes incredibly difficult to know whether the data are meaningful. This is not a critique of the process of “data collection” per se. It is a critique of the uneven distribution of this process between subject and theorist, and of how that uneven distribution is more harmful to both than the theorist would like to admit.
The social theorist would like to collect and publish data she believes will help bring more people (to keep with the example, people in poverty) into the discussions about “rationality.” But the balance of probability suggests the starting social theorist has little experience in those areas. It becomes a vicious circle in which less-meaningful data are published for results that end up reinforcing the very structures she’d meant to dismantle.
The other element of Frank’s quote was the question of “cultural heritage.” Where Vassar is content to leave the issue of cultural development to the ether, good historians and anthropologists would probably balk at the sentiment. Together, the quotes from Yudkowsky and Vassar offer an insight into the projected world in which new-age rationalism and the salvation complexes of sites like Less Wrong become a little clearer. Cultural formations like “Enlightenment,” “Society,” and “cultural heritage” become uniform, homogenized, transcendental entities that need to be spread throughout the world.
When I use “transcendental,” I mean it in terms of both location and time. To understand this a little more explicitly, let me use an example Yudkowsky builds in Chapter 7 of his famous Harry Potter and the Methods of Rationality:
“I wonder how difficult it would be to just make a list of all the top blood purists and kill them.
They’d tried exactly that during the French Revolution, more or less–make a list of all the enemies of progress and remove everything above the neck–and it hadn’t worked out well from what Harry recalled. Maybe he just needed to dust off some of those history books his father had bought him and see if what had gone wrong with the French Revolution was something easy to fix.”
Let’s think about the assumptions Harry is making here:
1. There is an objective understanding of the French Revolution.
2. This objective understanding is attainable, and can be extended to figuring out what went wrong.
3. This “solution” as to what went wrong in the context of late-18th-century France is applicable to late-20th-century Wizarding England.
Points 1 and 2 form a combined rabbit hole of historical objectivity I’ve ventured down before on this blog. What we’re interested in is point 3, which is the crux of most transcendental discussions. The argument is that something connects the sensibilities and mentalities of late-18th-century Parisians and late-20th-century English wizards, such that a coup like the French Revolution might be possible for blood purists. Of course, Harry rejects the option in the abstract, but because he abhors its violence, not because of its shaky historical grounds.
For Rational Harry Potter (and, by extension, the “rational” mentality), the Enlightenment has introduced a set of beliefs and ideas that have homogenized culture to the point that when we issue peer-reviewed studies or double-blind exams about not-strictly-empirical issues, we get meaningful results. This is a critique that goes beyond the particulars of postcolonial theory; it’s a question about whether replication is possible across temporal difference. We might imagine a survey taken today and tomorrow would yield similar results. Not so much for a survey taken today and on this same day in 1015 CE.
Of course, what we mean by “Enlightenment” itself could come under fire. When we speak of scientific classification systems, for example, can we so confidently say that all the wrong-headed aspects of biological determinism have been done away with on the grounds of solid empirical evidence? Some “rationalists” seem to think so, but a cursory glance at how biological and cultural determinism have done a little dance together over the past century of research into poverty shows precisely how persuasive wrong lines of research can be. And this says nothing of the usual set of postcolonial inquiries into what it means to project the Western mentality beyond its origins.
Yet even outside of academia, we still see all sorts of wars being fought over ideologies long dead. We can’t exactly hold theorists responsible for the influence their ideas exert, even hundreds of years after they die, but it is still an important part of the conversation when we speak about the long-term results of implementing systems of thought. Not to mention the lack of cultural nuance such implementations often display when attempting to ingrain themselves in the mentalities of people who exist beyond their influence. Historically, this has been the legacy of colonialism itself. Arguments even for internal sorts of colonialism have come from both conservative and liberal perspectives on this front (for example, what to do about anti-vax families and populations in the United States?).
I’m not trying to cast rationality as some ultimate tool of evil, but its ambitions need to be seriously scaled back until it has a proper set of analyses for dealing with the historical question.