Commentary: RAND Counterinsurgency Scorecard

RAND recently released "Counterinsurgency Scorecard: Afghanistan in Early 2011 Relative to the Insurgencies of the Past 30 Years." I read it with considerable interest, as both the topic (insurgency) and the methodology (a Delphi study) are areas I follow closely; the use of experts to attempt to understand a domain of knowledge is of particular interest to me. The report is an update to previous work, looking at the counterinsurgency in Afghanistan at midcourse. As such, credence can be given to the pattern of responses, but, as the paper itself discusses, the assessment should not be read as firmly placing the current situation in the win or loss column.

The RAND study, by Christopher Paul, identifies factors associated with success and factors associated with failure, based on 30 insurgencies occurring between 1978 and 2008 that were studied using the same methods. The study scores 15 good factors and 12 bad factors to predict the outcome of an insurgency (insurgent win or government win).
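
To make the scorecard arithmetic concrete, here is a minimal sketch in Python of how such a tally could work, assuming a simple net score (good factors present minus bad factors present) compared against a threshold. The factor counts come from the report; the threshold value and the outcome bands are illustrative assumptions of mine, not RAND's actual worksheet.

    GOOD_FACTORS = 15   # factors historically associated with government wins
    BAD_FACTORS = 12    # factors historically associated with insurgent wins

    def scorecard(good_present, bad_present, threshold=5):
        """Return a rough outcome call from a net factor score (threshold assumed)."""
        if not 0 <= good_present <= GOOD_FACTORS:
            raise ValueError("good_present out of range")
        if not 0 <= bad_present <= BAD_FACTORS:
            raise ValueError("bad_present out of range")
        net = good_present - bad_present
        if net >= threshold:
            return f"net {net:+d}: pattern matches past government wins"
        if net < 0:
            return f"net {net:+d}: pattern matches past insurgent wins"
        return f"net {net:+d}: maybe, between the historical win and loss bands"

    # Example: a case assessed as having 7 good and 4 bad factors present.
    print(scorecard(7, 4))   # net +3: maybe

A case that clears the threshold looks like a historical government win, a negative net score looks like a historical insurgent win, and everything in between lands in the "maybe" band, which is where the report places Afghanistan.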

If you just want the political fodder, here is the spoiler so you can get on to reading something else: the Afghanistan insurgency falls in the "maybe" category. A few reasons for this result become obvious from the methodology of the study. First, none of the previous 30 insurgencies involved the United States, and I am willing to bet that the experts identified for the exercise were predominantly United States citizens (expertise in Afghanistan being a requirement for selection as an expert, p. 14; specifically, having authored current books, reports, studies, and articles on Afghanistan). Second, there is a significant concern about the number of experts who completed the protocol: of 27 original experts, only 11 finished the entire exercise, which means each expert's opinion carried roughly 9 percent of the total weight. Finally, this is a midcourse check using methods that evaluated historical (even if recent) examples against a current, ongoing conflict. Any midcourse change would thus affect the result, and any current bias inherent in the responses has not been weeded out through the process of time and reflection.

Thinking about the methods, experts are not merely defined as people who have written about a topic. Experts should have advanced problem-solving skills, knowledge of the domain, advanced skills in knowledge organization, instant or automatized reactions, and practical ability within the domain of a problem, as discussed by Tynjala (1999) in summarizing Sternberg (1994). Even experts who exhibit all of these attributes can perform poorly on domain-specific tasks. In less structured knowledge domains the use of experts is almost irrelevant, carrying about as much weight as the general populace's responses, yet there is a strong tendency to give expert opinion in such domains too much credibility. This was illustrated quite well in a comparison of undergraduate students' diagnostic capability versus that of trained psychologists (Johnson, 1988, p. 211). With a low number of respondents, the RAND study's credibility is further jeopardized, as poor or biased experts can heavily skew the aggregate. Conversely, in well-structured knowledge domains an expert panel can quickly surface biases and the processes by which knowledge is applied.
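
To illustrate how much one outlier can move a small panel, consider a toy calculation (the ratings below are invented for the example, not drawn from the study): the same extreme rating shifts the mean of an 11-respondent panel far more than it would have shifted the original pool of 27.

    def mean(xs):
        return sum(xs) / len(xs)

    # Toy ratings on a 1-5 scale: moderate experts plus one extreme outlier.
    outlier = [5.0]
    small_panel = [3.0] * 10 + outlier   # 11 respondents, as completed the study
    large_panel = [3.0] * 26 + outlier   # 27 respondents, the original pool

    print(round(mean(small_panel), 2))   # 3.18: the outlier shifts the mean ~0.18
    print(round(mean(large_panel), 2))   # 3.07: the same outlier shifts it ~0.07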

It is worth noting that the study was set up in such a way as to ensure that groupthink was limited and that discussion among the experts occurred only in areas where there was significant divergence of opinion. This allowed taxonomical and ontological issues to be hashed out, and it is a strength of the study. I am mildly confused about whether the respondents worked with the study coordinator or among themselves for the discussion portion. If they worked among themselves with the discussion open to all, it might be considered a strength (with the inherent groupthink issues); if they worked only with the study coordinator, it could also be considered a strength (common definitions and issues being coordinated). It would appear that the study included several points of discussion, as described on page 15 (specifically, 50 single-spaced pages of comments in aggregate from the 11 participants).
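
As a minimal sketch of how a coordinator might decide which items go back for a discussion round, the function below flags factors whose ratings fall below an agreement cutoff. Everything here is an assumption for illustration: the present/absent rating scheme, the 0.8 cutoff, and the factor names are mine, not the study's actual rule.

    from collections import Counter

    def flag_for_discussion(ratings, agreement=0.8):
        """Flag factors whose most common rating falls below an agreement cutoff."""
        divergent = []
        for factor, votes in ratings.items():
            top_share = Counter(votes).most_common(1)[0][1] / len(votes)
            if top_share < agreement:
                divergent.append(factor)
        return divergent

    # Example with 11 experts: only the second, split factor goes to discussion.
    ratings = {
        "government realistic about victory": ["present"] * 10 + ["absent"],
        "insurgent sanctuary neutralized": ["present"] * 6 + ["absent"] * 5,
    }
    print(flag_for_discussion(ratings))  # ['insurgent sanctuary neutralized']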

The study, with its resulting "maybe" on a win or loss in Afghanistan, is still a fairly robust look at the history of insurgencies and a relatively strong look at the current conflict from that point of view. The points used as win and loss criteria carry less political baggage than many other criteria that could have been chosen. It will be interesting to see how the assessment holds up over time, and it is interesting that a descriptive model is here being used in a predictive capacity.

Works Cited

Johnson, E. J. (1988). Expertise and decision under uncertainty: Performance and process. In M. T. H. Chi, R. Glaser, & M. J. Farr (Eds.), The nature of expertise (pp. 209-228). Hillsdale, NJ: Lawrence Erlbaum Associates.

Sternberg, R. (1994). Cognitive conceptions of expertise. International Journal of Expert Systems, 7(1), 1-12.

Tynjala, P. (1999). Towards expert knowledge? A comparison between a constructivist and a traditional learning environment in the university. International Journal of Educational Research, 31(5), 357-442.

Caveat: Please see ATTRIBUTION before quoting this article or citing the author's affiliation.
