Banal Cartography: A Critique of Quantitative Content Analysis in Contemporary Cartographic Research
Keywords: quantitative content analysis, map research, map analysis, critique
Abstract. This is a critique – a rebuke of a method that I helped promote and grow within the cartographic discipline.
During this era of big-data fetishism, cartographers (including this author) have been searching for ways to analyze maps that are more quantitative than previous, descriptive methods. This discipline-specific shift is part of a much larger, well-documented swing in the sciences away from qualitative analysis (observation, interviewing, and descriptive evaluation) to quantitative data analysis (eye tracking, mouse-click watching, and statistical evaluation). To garner broad research appeal today (i.e., grant funding and publication), cartographic researchers often need to embrace some sort of statistical analysis.
It is argued here, however, that the results of this positivist trend are not all positive.
Enter Quantitative Content Analysis (QCA). In less than ten years, QCA has gone from an esoteric research technique borrowed from the social sciences to a sure-fire method for pushing out numbers-driven cartographic publications.
I argue that QCA, a method that once promised to help bridge the art-versus-science dichotomy in the mapping sciences, is utterly failing the discipline. The cumulative result of contemporary QCA studies, both well and poorly done, is banal cartography. Banal cartography is defined here as map research that is largely insignificant and unoriginal, shedding little to no insight into maps that was not already discernible via qualitative observation.
There are three broad reasons QCA is failing.
First, QCA is frequently used for the wrong reason. All foundational literature on QCA notes that it should only be used to help answer pre-existing research questions of significance. A review of twenty-first-century QCA research in cartography makes it obvious that this is rarely why the method is chosen. Instead, it is often used in cartography to create large amounts of numeric data from which researchers can harvest answers to post hoc research questions of dubious merit. This approach nullifies the legitimacy of QCA.
Second, QCA simply sucks the soul out of cartographic research. The research yields descriptive statistics – when we’re lucky! – that do nothing more than describe a sample of maps that is rarely, if ever, random. The journal articles read like fantasy football statistics about teams and players no one has ever heard of.
Instead of allowing us to analyze maps for what they are – a communication device in a particular social context – researchers using QCA typically break maps down into a set of binary codes of 1s and 0s.
Map has a north arrow? Check (1); Map has a title? No (0); ad nauseam.
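To make the reduction concrete, here is a minimal, hypothetical sketch of what such a coding scheme produces; the maps, element names, and values below are invented for illustration, not drawn from any actual study:

    # A sketch of a QCA-style binary coding table (all values invented).
    # Each map is reduced to the presence (1) or absence (0) of elements.
    maps = {
        "map_01": {"north_arrow": 1, "title": 1, "scale_bar": 0, "legend": 1},
        "map_02": {"north_arrow": 0, "title": 1, "scale_bar": 1, "legend": 0},
        "map_03": {"north_arrow": 1, "title": 0, "scale_bar": 0, "legend": 1},
    }

    # The typical "result": descriptive statistics over the sample.
    for element in ["north_arrow", "title", "scale_bar", "legend"]:
        share = sum(m[element] for m in maps.values()) / len(maps)
        print(f"{element}: present in {share:.0%} of the sample")

Everything the map communicates beyond the presence or absence of these elements is lost at the moment of coding.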
Ironically, the quantification at the heart of QCA works to undermine our understanding of the complexity of maps. QCA merely provides a sum of a map’s, or a group of maps’, parts. We know that maps are always more complex than the elements comprising them. In this regard, QCA adds to a cartographer’s understanding of maps what counting the brush strokes in a piece of fine art adds to an art historian’s understanding of the painting. With QCA, we are literally taking a visual communication and trying to force it into a data table. What a godawful thing to do!
Third, cartographers are often sloppy at content analysis, making it unlikely that most of the (typically inane) results could ever be replicated. If the results can’t be reliably replicated, what’s the point of stripping maps down into numbers and squashing them into spreadsheets? After all, one of the main benefits of content analysis is its supposed replicability.
Content analysis is brutal. I often quip to my students that I wouldn’t wish the method on my worst enemies. Developing useful codes takes hours, days, and even months of trial and error. Finding a sample of maps that is robust, non-homogeneous, and not too systematically sampled is a chore. Then actually doing the analysis? Please see the first sentence of this paragraph for a synopsis.
And that is the summary for just one researcher. Content analysis is supposed to be replicable: one must find a second researcher willing to memorize the arcane coding scheme developed by the first and then go through the same arduous process. Human error and sleep-deprived cheating exist in almost all studies. (Few researchers would openly admit this, but humans are involved in processing massive amounts of visual data, some of them paid little, if anything, to do it. Of course the work is fallible!)
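To ground the replicability complaint: agreement between coders is conventionally assessed with a chance-corrected statistic such as Cohen’s kappa. A minimal sketch, with invented codes for a single binary variable, of how that check works:

    # Cohen's kappa for two coders' binary codes (all data invented).
    def cohens_kappa(coder_a, coder_b):
        n = len(coder_a)
        # Observed agreement: share of maps on which the coders match.
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Expected chance agreement, from each coder's marginal rates.
        p_a, p_b = sum(coder_a) / n, sum(coder_b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    # "Does this map have a north arrow?" as coded by two tired researchers.
    coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
    print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.58

Conventional benchmarks treat kappa values below roughly 0.8 as questionable reliability, a threshold that arcane codebooks and exhausted coders make easy to miss.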
Finally, after all of this work, what researchers discover is rarely a diamond in the rough; more typically, it is a lump of coal. Of course, in science this is what is supposed to happen – if you aren’t failing to prove things most of the time, you aren’t doing science. In reality, though, after spending months, years, and tons of assistant money on coding large datasets, you cannot afford to end up with nothing.
And so we come full circle, back to the original problem. New research questions are asked post hoc (one of the biggest sins in QCA).
Questions like:
Was there variation in the dimensionality of bar charts accompanying Average Annual Precipitation maps in Goode’s World Atlas? Result: Wow! They went three-dimensional for two editions in the 1990s, even against the sage advice of Edward Tufte? We can write about this! (Never mind that, given the context, the change may have had nothing to do with cartographic decision-making and everything to do with a new intern hired to create the graphics.)

QCA has a place in cartography, but it is time we call a spade a spade. Many of the studies using this method are done poorly, are of minimal relevance, and probably provide no knowledge or insight we could not obtain more reliably via other means. I am not critiquing others alone; some of my previous research is guilty of this as well, and I never felt quite right about it.

Artificial intelligence applied to maps will help alleviate much of the human error and allow us to ask more interesting questions about large samples of maps in the future. It may not alleviate the issues discussed in reason two, however. And until cartographic researchers stop creating QCA datasets simply to harvest them for publications, the problem of banal cartography will continue for the foreseeable future. If nothing else, hopefully this abstract helps fuel a debate in the methodology sections of those future papers.