GSA Today

Article: pp. 4–10

What do you think this is? “Conceptual uncertainty” in geoscience interpretation

C.E. Bond (a), A.D. Gibbs (b), Z.K. Shipton (c), S. Jones (d)

(a) Dept. of Geographical and Earth Sciences, University of Glasgow, Glasgow G12 8QQ, UK, and Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK, clare@mve.com
(b) Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK
(c) Dept. of Geographical and Earth Sciences, University of Glasgow, Glasgow G12 8QQ, UK
(d) Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK

Interpretations of seismic images are used to analyze sub-surface geology and form the basis for many exploration and extraction decisions, but the uncertainty that arises from human bias in seismic data interpretation has not previously been quantified. All geological data sets are spatially limited and have limited resolution. Geoscientists who interpret such data sets must, therefore, rely upon their previous experience and apply a limited set of geological concepts. We have documented the range of interpretations of a single data set, and in doing so have quantified the “conceptual uncertainty” inherent in seismic interpretation. In this experiment, 412 interpretations of a synthetic seismic image were analyzed. Only 21% of the participants interpreted the “correct” tectonic setting of the original model, and only 23% highlighted the three main fault strands in the image. These results illustrate that conceptual uncertainty exists, which in turn explains the large range of interpretations that can result from a single data set. We consider the role of prior knowledge in biasing individuals in their interpretation of the synthetic seismic section, and our results demonstrate that conceptual uncertainty has a critical influence on resource exploration and other areas of geoscience. Practices should be developed to minimize the effects of conceptual uncertainty, and it should be accounted for in risk analysis.

Received: March 28, 2007; Accepted: August 16, 2007

DOI: 10.1130/GSAT01711A.1

INTRODUCTION

Geoscientists are required to make predictions from geological data that are often sparsely distributed or incomplete. For example, boreholes and seismic surveys sample limited volumes and have a limited resolution. Geoscientists use these data to produce geological framework models (3-D representations of stratigraphic horizons and fault planes) and to determine properties such as lithology and permeability. Components of these framework models will always be characterized by some uncertainty due to the inherent incompleteness of geological data sets. Quantifying this uncertainty is important because geological framework models are often used as the basis for assessments and decisions that have important social and commercial implications (e.g., resource extraction, ground-water supply, CO2 and nuclear waste storage, solute transport, and earthquake and other geological hazard predictions).

Quantification of uncertainty in geological framework models in petroleum geoscience has concentrated on such parameters as the petrophysical properties of reservoirs (e.g., Egermann and Lenormand, 2005) and the resolution and processing of seismic data (e.g., Jones et al., 2007). In petrophysical models, predictions of reservoir permeability are based on water saturation data (Aguilera, 2004), pressure measurements, log data, well cores (Yan et al., 2006), and the like. These data are used to predict heterogeneities in reservoir properties and to calculate uncertainty parameters for flow simulations. Geostatistics, particularly in reservoir modeling, is widely used to aid in reservoir forecasting, uncertainty calculations, and risk analysis for decision making. Understanding the limitations in geostatistics is critical if it is to be used as a decision-making tool (Deutsch, 2006). Defining the limitations of geostatistics and uncertainty in interpretation is important for (1) acknowledging and assessing possible alternatives to a single interpretation; (2) highlighting areas within an interpretation that are less well constrained; (3) propagating uncertainties into further modeling or risk assessments; (4) combining and rationalizing different, seemingly inconsistent data sets and/or types; and (5) educating management, politicians, and the general public about scientific uncertainty.

We have called the range of concepts that geoscientists could apply to a single data set conceptual uncertainty. Geoscientists use their training and experience (i.e., their prior knowledge) to apply a concept (or rarely, to generate a new one) to data to construct an interpretation and, ultimately, to produce a framework model. We suggest that the initial geological framework model is a fundamental source of uncertainty because it is dependent on the tectonic paradigm or concept used in its construction. We argue that conceptual uncertainty can be more important than the uncertainty inherent in the positioning of horizons or fault planes in a framework model or in the subsequent populating of these features with petrophysical properties.

In this study, we have attempted to quantify conceptual uncertainty from 412 interpretations of a single synthetic seismic data set. In collating the interpretations of a large number of geoscientists with different backgrounds and experience, we have effectively constrained the range of concepts that could be applied to the synthetic seismic. In effect, we have defined the “conceptual uncertainty space” of that data set. We have examined the role of prior knowledge in seismic data interpretation and have highlighted examples in which the prior knowledge of individual geoscientists appears to have affected their interpretational choices and final outcome. In particular, we looked at examples of how expertise in particular tectonic settings, length of experience, and type of training and interpretational techniques used may have affected interpretational behavior. We also considered the influence of the broader contextual information a geoscientist uses in his or her interpretation. In the Discussion section of this paper, we make suggestions for more rigorous studies of conceptual uncertainty and discuss the significance for interpretation and prediction in geoscience.

Use of prior knowledge is the main method by which scientific disciplines progress and evolve (e.g., Levi-Strauss, 1966; Kuhn, 1962), and commonly described human biases form part of the way we use prior knowledge to interpret data (e.g., Frodeman, 1995). Cognitive bias commonly results from using heuristics, or rules of thumb, based on experience (prior knowledge). In cases of interpretational uncertainty, bias from prior knowledge is well documented in other disciplines, such as economics, where approaches such as elicitation theory are used to mitigate it. Baddeley et al. (2004) noted that heuristics are often used when making quick decisions or when data are difficult to process or limited in extent. Curtis and Wood (2004, and references therein) have provided examples and discussions of the use of prior knowledge in geoscience. However, few studies of the effects of prior knowledge in geoscience have been undertaken, the exception being Rankey and Mitchell (2003), who documented the variation in interpretation of a single data set by six seismic interpreters.

EXPERIMENT DESIGN AND PROCEDURE

To document the range of potential interpretations from a single data set and to test whether prior knowledge is important for interpretational outcomes, we asked geoscientists to produce a single interpretation of a seismic section. Rather than ask geoscientists to interpret a real section for which the “answer” is unknown, we created a synthetic seismic section from a 2-D geological model. Forward modeling enabled us to produce a geological model from an initial layer-cake stratigraphy so we could define the model input parameters and evolution, allowing us to compare interpretations against a “correct” answer. The synthetic seismic section (in two-way time) was printed as an A4 color plate (Fig. 1) with a series of questions on the reverse.1 In 2005 and 2006, we asked 412 geoscientists (participants) to make interpretations (answers) at conferences, workshops, and universities. These events took place in Europe, North America, and the Middle East. We questioned participants on factors we thought might have influenced their interpretations, including the participant's educational level, length of experience, background expertise, and perception of his or her ability in structural geology and seismic interpretation. If you would like to try the interpretation experiment for yourself, bear in mind that Figure 2 and the next paragraph contain the “answer” to the synthetic seismic section.

Figure 1.

What do you think this is? The uninterpreted synthetic seismic section used for this study. The synthetic seismic and questionnaire given to participants can also be found in the GSA Data Repository (DR1; see text footnote 1).

Figure 2.

Production of the geological framework model. (A–D) Stages of the forward modeling in 2DMove (the model used the inclined shear algorithm, and isostasy and compaction were assumed not to have first-order effects). Details of the forward modeling are found in the figure and in the GSA Data Repository (DR2; see text footnote 1). (E) Examples of interpretations of the synthetic seismic image, proposed prior to the experiment.

Figure 2.

Continued.

We forward modeled an inverted growth fault (i.e., extension followed by thrusting on a single structure; Figs. 2A–2D). The model was designed so that a number of realistic interpretations could be made from the single synthetic seismic data set (Fig. 2E). Details of the model and synthetic seismic generation can be found in the GSA Data Repository (see footnote 1). In the experiment, participants were deliberately given little information about the seismic section and its generation. However, had the participants read the information on the reverse of the seismic image carefully, they would have learned that the section they were being asked to interpret was synthetic: the introduction to the questionnaire included the sentence, “The section overleaf has been created by forward modeling using known assumptions.” Few of the participants who engaged in conversation about the exercise appeared to appreciate this fact.

We wanted to test the range of concepts that would come out of a simple interpretation exercise. We therefore did not ask the participants how they would have tested their interpretations. In a real geological situation, once a preliminary model or hypothesis has been generated, it is generally tested by collecting further data or by checking the validity, for instance, by restoring the section. In our study, the participants were only asked to produce a single interpretation and were given no further information than that on the questionnaire. This precluded participants from forming multiple models that could then be compared and rejected based on such testing.

RESULTS

We sorted the returned interpretations into the following tectonic setting categories: extension, thrust (shortening), inversion, strike-slip, diapirism (salt or mud), other (geological setting identified but not included in the list), and unclear (tectonic setting could not be identified). Many answers contained more than one of these tectonic setting elements; in these cases, the dominant or main tectonic setting was chosen. Strict selection criteria (horizon offsets, arrows or labels to define fault motion, and/or written annotations) were used to categorize the answers to reduce our own bias in the sorting procedure. Because of these strict sorting criteria, 32% of participants' answers had to be classified as unclear. One reason for the small number (2%) of strike-slip answers returned is that out-of-plane displacement cannot be seen in a vertical seismic section. None of the participants annotated in- or out-of-plane movements, even those who explicitly stated they had a strike-slip interpretation. For the “apparently” strike-slip answers, we considered the geometric arrangement of faults, but only classified the answer as strike-slip in cases where there was no ambiguity. The 5% of answers in the “other” category fell outside the range of the common tectonic settings categories we had chosen.
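The sorting step described above is, at heart, a tallying exercise. As a minimal sketch (the labels below are invented, not the study's actual 412 categorized answers), category percentages can be computed as:

```python
from collections import Counter

# Hypothetical category labels for ten answers; the real study sorted
# 412 questionnaires by hand using strict annotation criteria.
answers = ["thrust", "unclear", "extension", "thrust", "inversion",
           "unclear", "diapirism", "thrust", "other", "strike-slip"]

counts = Counter(answers)
total = len(answers)
percentages = {setting: 100.0 * n / total for setting, n in counts.items()}

print(percentages["thrust"])  # 30.0 for this toy sample
```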

The range of tectonic settings implied by the interpretations is summarized in Figure 3. The answers span all five tectonic concept categories (inversion, strike-slip, extension, shortening [thrust], and diapirism). Participants also applied nontectonic concepts in their interpretations, such as carbonate reefs and sequence stratigraphy (5%), further extending the conceptual uncertainty for the data set. Only 10% of answers showed the “correct” inversion model, as produced in the forward model. This increases to 21% if we include participants who showed both extension and thrusting in their interpretation. In the rest of this article, inversion is classified as extension and thrusting anywhere in the section, for instance, extension on one fault and shortening on another. Across all the tectonic setting expertise groups, the most common interpretational answer (26%) was a thrust (shortening) tectonic setting. The results show that 79% of the participants applied the wrong concept in their interpretation and that the most common answer was not the “correct” scenario created in the forward model.

Figure 3.

Pie chart showing the relative proportions of interpretations by tectonic setting. Thrust-based interpretations were the most common, accounting for 26% of the total. Inversion of a single structure, as in the forward model, accounted for only 10% of answers. Overall, inversion within the section was recognized by 21% of participants.

The features most commonly singled out in the seismic image were areas of high- or low-intensity signals. High-intensity features would normally indicate an acoustic impedance contrast and therefore a geological feature or change in rock property (e.g., Brown, 1986). The two main fault segments, areas of high-intensity signals on the image, were highlighted by 69% and 68% of participants, respectively. The next most common feature interpreted, highlighted by 62% of participants, was an area of almost no data (Fig. 4). In the original model, this area of no data is a small fault splay in the hanging wall to one of the main faults. Many participants annotated diapirism or a gas chimney in this area.

Figure 4.

The uninterpreted synthetic seismic section with the three most frequently interpreted features annotated (inset). Percentages give the proportion of participants who highlighted each feature.

In the following sections, we look at examples in which the prior knowledge of individual geoscientists appears to have directly influenced the concepts they applied in their interpretations of the seismic image. We show examples of how expertise, experience, and training influenced the concepts applied and hence the final interpretation. Finally, we consider the reliance of interpreters on a breadth of geological and geographical information to support their interpretations of a data set.

Expertise (Tectonic Setting)

Participants were provided with a list of tectonic settings (extension, inversion, thrust, salt, strike-slip, and other) and asked to indicate their dominant field of expertise. Some participants indicated more than one expertise category; the following analyses do not include these participants. Twenty-nine percent of the participants who indicated thrust tectonics as their dominant field of expertise interpreted the section as thrust faults, while 27% of participants with some other expertise also produced a thrust fault answer. Of the participants with dominant expertise in inversion, 25% produced an inversion interpretation, whereas of those without inversion expertise 20% produced an inversion interpretation. Participants with dominant expertise in extension and diapirism were more likely to produce an answer that matched their expertise than participants with some other dominant expertise (extension expertise 10% as compared to 3% other expertise; diapirism expertise 13% as compared to 7% other expertise). The only group in which the dominant expertise negatively correlated with interpretational outcome, when compared to other geoscientists, was strike-slip (strike-slip expertise 0% as compared to 3% other).

Examples can be found for all settings in which participants with a specific expertise appear to have allowed it to dominate their interpretations. Figure 5 shows two examples from students who described their expertise as salt tectonics and sequence stratigraphy, respectively, and who produced interpretations that appear to be based directly on that expertise. In Figure 5B, a master's student in sequence stratigraphy has used classic sequence stratigraphy interpretation techniques (maximum flooding surfaces, onlaps, and truncations) to interpret a reef build-up. In Figure 5A, a Ph.D. student in salt tectonics shows doming associated with salt mobilization. Although these participants have honored the data, they have chosen to interpret it in a way that fits their dominant expertise and knowledge. By applying these dominant concepts to the data set, they have produced an “incorrect” interpretation. In other examples, interpreters have not honored the data, perhaps due to inexperience in seismic interpretation.

Figure 5.

(A and B) Digitized examples of the interpretations of two students with experience in salt tectonics and sequence stratigraphy, respectively; (C and D) two different participants, each with 15+ years of experience, interpreted the same structures as thrust and extension faults. Their fields of expertise were thrust and extension tectonics, respectively.

Tectonic setting expertise seems to have influenced the concepts some participants brought to their interpretation. However, the percentage differences between the expertise categories are not all statistically significant, and the examples of interpretations that match dominant expertise are not seen across the group as a whole. The results suggest that one or more other factors also influenced the concepts applied to a data set by an individual. It is important to note that we asked for the participants' dominant expertise rather than their breadth of expertise, and, additionally, we did not take into consideration how proficient each participant may have been in seismic interpretation.
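One simple way to check whether a difference such as 29% versus 27% could be statistically significant is a two-proportion z-test. The group sizes below are hypothetical (the paper reports only percentages, not counts per expertise group), chosen so that the two rates match the reported values:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 15 of 52 thrust experts gave thrust answers (~29%)
# versus 97 of 360 other participants (~27%).
z = two_proportion_z(15, 52, 97, 360)
print(abs(z) < 1.96)  # True: not significant at the 5% level
```

At these plausible group sizes a two-percentage-point gap is far inside sampling noise, which is consistent with the caution expressed above about statistical significance.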

Experience (Length)

To evaluate how length of experience affects conceptual uncertainty, interpreters were asked to select their length of technical experience from the categories none, student, 0–5 years, 5–10 years, 10–15 years, and 15+ years. In the study group, students were just as likely to produce an “incorrect” answer (76%) as participants with 15+ years of experience (76%). Like the student examples in Figures 5A and 5B, two professionals with the same level of experience (15+ years) interpreted the seismic image at a petroleum industry conference and produced answers that matched their dominant tectonic setting expertise (Figs. 5C and 5D). Both marked the same features in the top part of the section as faults. The first interpreter, with dominant expertise in thrust tectonics, interpreted the features as thrust faults. The second interpreter, with extensional expertise, marked the features as extensional faults. Neither of these features is a fault in the original model (Fig. 2D). These results indicate that participants with more years of experience did not necessarily produce more “right” answers.

Interpretational Techniques

We classified the answers according to the interpretational techniques applied to analyze and interpret the data. We defined five technique classifications from the interpretations: (1) identification of features, in which participants highlighted features such as faults, gas chimneys, unconformities, etc., by drawing along them; (2) identification of horizons, in which participants drew along horizon reflectors and/or identified sediment packages; (3) drawing “sticks,” in which participants simply drew straight lines on the seismic section; (4) annotation, in which participants used arrows and writing to annotate features and horizons; and (5) sketches and/or writing, in which participants wrote a description of their interpretation of the seismic section or drew sketches to show the evolution of their interpretation through time. Examples of the different classifications can be seen in Figure 6. The different styles affect the identification of specific features (e.g., participants whose interpretational style included feature identification were ∼30% more likely to identify the main fault strands than participants who only identified horizons). Table 1 groups the participants by the number of techniques they used to complete their answer. The participants who used the most techniques were the most likely to produce the “correct” interpretational answer; both participants who used four of the five techniques produced the “correct” answer.

Figure 6.

Digitized answers showing the range in interpretational styles of the participants. Each example corresponds to one of the technique classifications: features, annotations, sticks, sketches and writing, and horizons.

TABLE 1.

NUMBER OF TECHNIQUES PARTICIPANTS USED IN INTERPRETATIONS AND PERCENTAGE OF PARTICIPANTS IN THESE SUBGROUPS WHO MADE A “CORRECT” INTERPRETATION
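The grouping behind Table 1 amounts to tallying a correctness rate per technique count. A minimal sketch (the (techniques, correct) pairs below are invented stand-ins, not the study's answers):

```python
from collections import defaultdict

# Hypothetical (n_techniques, correct?) pairs; values are illustrative only.
answers = [(1, False), (1, False), (2, False), (2, True),
           (3, True), (3, False), (3, True), (4, True), (4, True)]

by_count = defaultdict(lambda: [0, 0])  # n_techniques -> [n_correct, n_total]
for n_techniques, correct in answers:
    by_count[n_techniques][0] += int(correct)
    by_count[n_techniques][1] += 1

percent_correct = {n: 100.0 * c / t for n, (c, t) in by_count.items()}
print(percent_correct[4])  # 100.0 for this toy sample
```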

DISCUSSION

We have documented the breadth in conceptual uncertainty for a single data set. The interpretation produced most often was a thrust-based interpretation rather than the “correct” forward modeled scenario of inversion. There are several non-unique and geologically sound solutions to the data set (Fig. 2E); the small percentage (21%) of interpretational answers that matched the forward model, and the range of concepts applied to the data set, are therefore perhaps not surprising.

Observations of participants' interpretations suggest that they used a range of prior knowledge to undertake the interpretation exercise. In some cases, but significantly not all, prior knowledge based on dominant tectonic setting expertise appears to have biased the concepts participants applied to the data set (Fig. 5).

These observations contrast with those of Rankey and Mitchell (2003), who concluded that interpretations are likely to be based on previous experience and preconceived notions. Our results suggest that other factors, such as an individual's training and the techniques used to interpret the section, may have more influence on interpretational outcome than tectonic expertise. How we define prior knowledge is important when comparing our results to those of other workers.

It is interesting to note that participants with more experience (measured as number of years of experience) did not necessarily produce more “correct” answers. This suggests that type of experience is more significant than length of experience alone. How participants defined their own length of experience was not constrained (i.e., did participants who were two years into a Ph.D. count themselves as students or as having two years of post-degree experience?). Similarly, we asked participants for their dominant expertise rather than their breadth of expertise, and a participant with expertise in more than one tectonic setting may be better able to distinguish between likely interpretations. These initial results suggest that more than one controlling factor influences conceptual uncertainty; a full multivariate statistical analysis is therefore required to establish significant relationships.
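Such a multivariate analysis could, for instance, regress interpretational outcome on several candidate factors at once. The sketch below fits a logistic regression by stochastic gradient descent on invented data (the feature encoding and every value are our assumptions, not the study's records):

```python
import math

# Hypothetical encodings per participant:
# (expertise matches model?, number of techniques used, experience bracket)
# mapped to whether the interpretation was "correct". All values are invented.
data = [
    ((1, 3, 2), 1), ((0, 1, 0), 0), ((1, 4, 3), 1), ((0, 2, 1), 0),
    ((0, 3, 2), 1), ((1, 1, 3), 0), ((0, 4, 0), 1), ((0, 1, 2), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(data)
print(w[1] > 0)  # True: in this toy data, using more techniques predicts "correct"
```

With real responses, the relative sizes of the fitted coefficients would indicate which factors carry independent predictive weight rather than merely co-varying with each other.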

Participants used a range of interpretational techniques that led to different styles of answers. Our results show that the greater the number of techniques used by individual participants, the greater their chances of producing a “correct” interpretation. We believe that the number of techniques used serves as a proxy for the intensity with which each participant queried the data. Those who used the most techniques may have scrutinized the data more thoroughly than participants who used fewer techniques. However, some techniques, such as feature identification, also appear to be more effective than others at identifying key elements within the seismic section.

The effect of the techniques employed and the interpretational style applied to the data set has implications for training and education.

Interpretations of the synthetic seismic image focused on areas of high- and low-intensity signals. Areas of low-intensity responses in seismic images are often caused by disruption of layering due to diapirism or gas percolation through the overlying strata (e.g., Bouriak and Akhemtjanov, 1998; Veerayya et al., 1998), and many participants marked such features in an area of poor data quality, even though this part of the section was not crucial to the overall tectonic interpretation. Sixty-two percent of participants focused their interpretations on an area of no data (i.e., an area of high uncertainty). Annotating gas or diapirism in this area, a direct hydrocarbon indicator, could be critical in a commercial situation. The need to interpret the part of the synthetic seismic image with the least data perhaps says something about human nature, but it also suggests that participants were drawn to anomalous areas with the highest and lowest intensity data.

In the following, we consider the influences of different types of prior knowledge and bias for our study in the context of definitions from psychology. In cognitive psychology, biases are commonly divided into types. The most relevant bias types for this study are described here, but see Krueger and Funder (2004) for a full discussion of bias types and their origins. Availability bias occurs when interpreters use the model or interpretation that is most dominant in their minds. For example, a geoscientist interpreting a new data set having just spent six months looking at fold and thrust belts will have the concepts for fold and thrust belt terrains most readily available in his or her mind. Anchoring bias is the failure to adjust from experts' beliefs, dominant approaches, or initial ideas. In this case, interpreters may know that a seismic section is from, for example, the Gulf of Mexico, and will therefore have the concept of salt tectonics in their minds because this is the accepted interpretational concept for the Gulf of Mexico area. Interpreters will not consider other concepts in their interpretations. Confirmation bias involves actively seeking out opinions and facts that support one's own beliefs or hypotheses. For example, when a geoscientist believes that the seismic section is from an extensional terrain, he or she will identify features that support this belief and ignore information that does not corroborate or correspond to an extensional interpretation.

Examples of bias based on dominant tectonic setting expertise can be found at all levels of experience. Individual participants with 15+ years of experience anecdotally show evidence of availability and anchoring bias in the same way students do. Participants do, however, require some experience to undertake the exercise, because an interpreter has to be able to apply relevant knowledge and concepts to the data to produce a realistic interpretation. Many participants asked “where in the world” the seismic section was from. Participants were effectively asking for confirmation, provided by such context, for their interpretations. Alternatively, they may have been seeking a starting point on which to base their interpretations. Typically, when interpreting geological data, the geographical location, and hence broad tectonic setting, of the data is known, and interpreters use this prior information to aid their interpretations. Therefore, an anchoring bias may operate because interpreters expect to see a particular type of structure in a given setting.

We suggest that the synthetic seismic image may have been effectively biased toward a thrust tectonic setting interpretation because this setting received the highest number (26%) of answers. Conversely, the 2-D seismic section was negatively biased toward a strike-slip interpretation, the tectonic setting category with the lowest number (2%) of interpretational answers. As discussed earlier, many answers that we classified as strike-slip may have fallen into the unclear category due to the selection criteria used to categorize the results. This suggests that we may be seeing elements of both confirmation bias and disconfirmation bias (the use of features as evidence against a particular hypothesis or model) within the participant group: participants confirming thrust features, but disconfirming strike-slip features.

Interpreting geological data is generally an under-constrained problem, requiring knowledge of geological analogues and an ability to apply these to new problems and areas. Frodeman (1995) set geology apart from classical sciences, such as physics, because of the scientific reasoning required in geological science. Frodeman argued that such scientific reasoning skills will become increasingly crucial for issues like global warming, assessing uncertainty and risks in hazard prediction, solute transport, and resource management. In earth and environmental science, scientific uncertainty has an important impact on public policy formation. Pollack (2007) argued that scientific uncertainty should not be seen as a barrier to public policy development but as an opportunity for creative and competitive solutions that can be continuously developed. Assessing uncertainty and risk requires accurate geological framework models from which predictions can be made. Therefore, as geoscientists, acknowledging and evaluating conceptual uncertainty must be a critical factor in maximizing the effectiveness of the geological reasoning process and hence for informing public policy. Understanding more about the factors affecting the concepts that geoscientists apply to information-limited data sets will improve our predictions and the assessment of risk associated with those predictions.

CONCLUSIONS

Conceptual uncertainty is likely to be a major risk factor for sciences in which decision making is based on the interpretation of data sets containing limited information. Our experiment has quantified the range in conceptual uncertainty for a single data set and shown that conceptual uncertainty can have a large effect on interpretational outcome. The interpretational answers of participants in our study show evidence for bias due to their prior knowledge. A range of factors affects how an individual's prior knowledge, and hence concepts, are applied to data sets. These factors include an interpreter's tectonic expertise and/or breadth of expertise, length of experience, and the techniques used to interpret a section. Distinguishing between these factors and putting practices in place to elicit information intelligently while mitigating the unconscious misuse of prior knowledge is a key challenge. Conceptual uncertainty, once quantified, can be used in combination with petrophysical models and other uncertainty calculations to improve the predictability of petroleum and other geological systems and their properties. How an individual geoscientist's prior knowledge may influence his or her interpretation, and hence the collective conceptual uncertainty for a data set, has important implications for training, team building, risk analysis, and decision making. Our results emphasize that a geological interpretation is a model that needs testing.

Acknowledgments

This work was supported by Midland Valley Exploration Ltd and the Scottish Executive SCORE scheme. Midland Valley's 2DMove software was used for forward modeling. Mike Goodwin and GX Technology are thanked for creating the synthetic seismic image using GXII software. The work could not have been completed without the support of individuals within the geoscience community who took part in the interpretation exercise. Andrew Curtis, Glen Stockmal, and Andy Calvert provided thorough and constructive reviews.

REFERENCES CITED

  1. Aguilera R. 2004. Integration of geology, petrophysics, and reservoir engineering for characterization of carbonate reservoirs through Pickett plots. AAPG Bulletin, v. 88, p. 433–446, doi: 10.1306/12010303071.
  2. Baddeley M.C., Curtis A., Wood R. 2004. An introduction to prior information derived from probabilistic judgements: elicitation of knowledge, cognitive bias and herding. In Curtis A., Wood R., eds., Geological Prior Information: Informing Science and Engineering. London: Geological Society Special Publication 239, p. 15–27.
  3. Bouriak S.V., Akhemtjanov A.M. 1998. Origin of gas hydrate accumulation on the continental slope of the Crimea from geophysical studies. In Henriet J.-P., Mienert J., eds., Gas Hydrates: Relevance to World Margin Stability and Climatic Change. London: Geological Society Special Publication 137, p. 215–222.
  4. Brown A.R. 1986. Interpretation of 3D Seismic Data, 6th ed. AAPG Memoir 42, 541 p.
  5. Curtis A., Wood R., eds. 2004. Geological Prior Information: Informing Science and Engineering. London: Geological Society Special Publication 239, 229 p.
  6. Deutsch C.V. 2006. What in the reservoir is geostatistics good for? Journal of Canadian Petroleum Technology, v. 45, no. 4, p. 14–20.
  7. Egermann P., Lenormand R. 2005. A new methodology to evaluate the impact of localized heterogeneity on petrophysical parameters (k(r), P-c) applied to carbonate rocks. Petrophysics, v. 46, no. 5, p. 335–345.
  8. Frodeman R. 1995. Geological reasoning: Geology as an interpretive and historical science. Geological Society of America Bulletin, v. 107, no. 8, p. 960–968, doi: 10.1130/0016-7606(1995)107<0960:grgaai>2.3.CO;2.
  9. Jones G.D., Barton P.J., Singh S.C. 2007. Velocity images from stacking depth-slowness seismic wavefields. Geophysical Journal International, v. 168, p. 583–592, doi: 10.1111/j.1365-246X.2006.03055.x.
  10. Kuhn T.S. 1962. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press, 240 p.
  11. Krueger J.I., Funder D.C. 2004. Towards a balanced social psychology: Causes, consequences and cures for the problem-seeking behaviour and cognition. The Behavioral and Brain Sciences, v. 27, p. 313–327, doi: 10.1017/S0140525X04000081.
  12. Levi-Strauss C. 1966. The Savage Mind. Chicago: The University of Chicago Press, 310 p.
  13. Pollack H.N. 2007. Scientific uncertainty and public policy: Moving on without all the answers. GSA Today, v. 17, p. 28–29, doi: 10.1130/GSAT01703GW.1.
  14. Rankey E.C., Mitchell J.C. 2003. That's why it's called interpretation: Impact of horizon uncertainty on seismic attribute analysis. The Leading Edge, v. 22, p. 820–828, doi: 10.1190/1.1614152.
  15. Veerayya M., Karisiddaiah S.M., Vora K.H., Wagle B.G., Almeida F. 1998. Detection of gas-charged sediments and gas hydrate horizons along the western continental margin of India. In Henriet J.-P., Mienert J., eds., Gas Hydrates: Relevance to World Margin Stability and Climatic Change. London: Geological Society Special Publication 137, p. 239–253.
  16. Yan J., Tucker M., Lui T. 2006. Reservoir description from well-log and reservoir engineering: An example from Triassic reservoirs in Northwest China. Petroleum Science and Technology, v. 24, p. 1417–1430, doi: 10.1080/10916460600904385.

1GSA Data Repository item 2007280, Uninterpreted seismic section and example questionnaire (DR1) and geological model details and synthetic seismic generation (DR2), is available at www.geosociety.org/pubs/ft2007.htm. You can also obtain a copy by writing to editing@geosociety.org.


Thursday, October 25, 2007

Kyoto protocol: A Dodo?

Time to ditch Kyoto

Nature 449, 973-975 (25 October 2007) | doi:10.1038/449973a; Published online 24 October 2007

Gwyn Prins1 & Steve Rayner2

  1. Gwyn Prins is at the London School of Economics Mackinder Centre for the Study of Long Wave Events, London WC2A 2AE, UK.
  2. Steve Rayner is at the James Martin Institute for Science and Civilization, University of Oxford, Oxford OX1 1HP, UK.

Climate policy after 2012, when the Kyoto treaty expires, needs a radical rethink. More of the same won't do, argue Gwyn Prins and Steve Rayner.

The Kyoto Protocol is a symbolically important expression of governments' concern about climate change. But as an instrument for achieving emissions reductions, it has failed [1]. It has produced no demonstrable reductions in emissions or even in anticipated emissions growth. And it pays no more than token attention to the needs of societies to adapt to existing climate change. The impending United Nations Climate Change Conference being held in Bali in December — to decide international policy after 2012 — needs to radically rethink climate policy.

Kyoto's supporters often blame non-signatory governments, especially the United States and Australia, for its woes. But the Kyoto Protocol was always the wrong tool for the nature of the job. Kyoto was constructed by quickly borrowing from past treaty regimes dealing with stratospheric ozone depletion, acid rain from sulphur emissions and nuclear weapons. Drawing on these plausible but partial analogies, Kyoto's architects assumed that climate change would be best attacked directly through global emissions controls, treating tonnes of carbon dioxide like stockpiles of nuclear weapons to be reduced via mutually verifiable targets and timetables. Unfortunately, this borrowing simply failed to accommodate the complexity of the climate-change issue [2].

Kyoto has failed in several ways, not just in its lack of success in slowing global warming, but also because it has stifled discussion of alternative policy approaches that could both combat climate change and adapt to its unavoidable consequences. As Kyoto became a litmus test of political correctness, those who were concerned about climate change, but sceptical of the top-down approach adopted by the protocol, were sternly admonished that "Kyoto is the only game in town". We are anxious that the same mistake is not repeated in the current round of negotiations.

Already, in the post-Kyoto discussions, we are witnessing that well-documented human response to failure, especially where political or emotional capital is involved, which is to insist on more of what is not working: in this case more stringent targets and timetables, involving more countries. The next round of negotiations needs to open up new approaches, not to close them down as Kyoto did.

Economic theory recognizes the futility of throwing good money after bad. In politics, however, sunk costs are often seen as political capital or as an investment of reputation and status. So we acknowledge that those advocating the Kyoto regime will be reluctant to embrace alternatives because it means admitting that their chosen climate policy has failed and will continue to fail. But the rational thing to do in the face of a bad investment is to cut your losses and try something different.

No silver bullet

Influenced by three major policy initiatives of the 1980s, the Kyoto strategy is elegant but misguided. Ozone depletion, acid rain and nuclear arms control are difficult problems, but compared to climate change they are relatively simple. Ozone depletion could be prevented by controlling a small suite of artificial gases, for which technical substitutes could be found. Acid rain was mainly caused by a single activity in a single industrial sector (power generation) and nuclear arms reductions were achieved by governments agreeing to a timetable for mutually verifiable reductions in warheads. None of this applies to global warming.

In practice, Kyoto depends on the top-down creation of a global market in carbon dioxide by allowing countries to buy and sell their agreed allowances of emissions. But there is little sign of a stable global carbon price emerging in the next 5–10 years. Even if such a price were to be established, it is likely to be modest — sufficient only to stimulate efficiency gains [3]. Without a significant increase in publicly funded research and development (R&D) for clean energy technology and changes to innovation policies, there will be considerable delay before innovation catches up with this modest price signal.

On present trends, for another 20 years, the world will continue installing carbon-intensive infrastructure, such as coal power plants, with a 50-year lifetime. If climate change is as serious a threat to planetary well-being as we have long believed it to be, it is time to interrupt this cycle.

Climate change is not amenable to an elegant solution because it is not a discrete problem. It is better understood as a symptom of a particular development path and its globally interlaced supply-system of fossil energy. Together they form a complex nexus of mutually reinforcing, intertwined patterns of human behaviour, physical materials and the resulting technology. It is impossible to change such complex systems in desired ways by focusing on just one thing.

Social scientists understand how path-dependent systems arise [4]; but no one has yet determined how to deliberately unlock them. When change does occur it is usually initiated by quite unexpected factors. When single-shot solutions such as Kyoto are attempted, they often produce quite unintended, often negative consequences. The many loopholes that have enabled profiteers to make money from the Clean Development Mechanism, without materially affecting emissions, are examples [5]. Therefore, there can be no silver bullet — in this case the top-down creation of a global carbon market — to bring about the desired end.

Climate villains? Protesters have called for the United States and Australia to ratify the Kyoto Protocol. (Photo: L. LOMBARDI/ZUMA PRESS/NEWSCOM)

But could there be silver buckshot? Could we assemble a portfolio of approaches that would move us in the right direction, even though we cannot predict which specific ones might stimulate the necessary fundamental change? If so, what would such a portfolio look like? We believe that a radical rethink of climate policy should possess at least five central elements.

Focus mitigation efforts on the big emitters

The notion that emissions mitigation is a global commons problem, requiring consensus among more than 170 countries, lies at the heart of the Kyoto approach. Engaging all of the world's governments has the ring of idealistic symmetry (matching global threat with universal response), but the more parties there are to any negotiation, the lower the common denominator for agreement — as has been the case under Kyoto.

The G8+5 Climate Change Dialogue, established in 2006 to convene the leaders of the top 13 polluters, was a belated recognition of the error of involving too many parties, each with dramatically different stakes and agendas. In September, the United States convened the top 16 polluters. Such initiatives are summarily dismissed by Kyoto's true believers, who see them as diversions rather than necessary first steps. However, these approaches begin to recognize the reality that fewer than 20 countries are responsible for about 80% of the world's emissions. In the early stages of emissions mitigation policy, the other 150 countries only get in the way.
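The concentration of emissions among a small group of large emitters can be sketched in a few lines. The figures below are entirely made up for illustration (the article gives no country-level data); sorting emitters by output and accumulating until 80% of the total is covered shows how small the decisive group can be:

```python
# Hypothetical annual emissions (arbitrary units): a handful of large
# emitters plus a long tail of small ones. Illustration only.
emissions = [6000, 5500, 1700, 1600, 1300, 800, 650, 600, 550, 500,
             450, 420, 400, 380, 350, 330, 300, 280, 260, 250]
emissions += [20] * 150  # ~150 small emitters

total = sum(emissions)
running = 0.0
count = 0
for e in sorted(emissions, reverse=True):  # largest emitters first
    running += e
    count += 1
    if running / total >= 0.80:
        break
print(f"{count} emitters cover {running / total:.0%} of the total")
```

With this (invented) distribution, well under 20 of the 170 parties account for 80% of emissions, which is the negotiating logic behind the G8+5 and top-16 groupings.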

Allow genuine emissions markets to evolve from the bottom up

Theoretically, the simplest way to establish a price signal would be through a carbon tax. However, past experience with Britain's fuel price escalator (1993–99) and US President Bill Clinton's attempt to introduce a modest 4.3-cent-per-gallon hike in gasoline tax, shows there are serious political obstacles to establishing a level of tax sufficiently high to encourage energy efficiency, let alone to stimulate serious investments in innovation.

An alternative price-based approach to market failure is cap-and-trade. But to work, such schemes must be built — like all genuine markets — from the bottom up. The cap shapes the market by signalling the social goal as simply as possible: in this case, reduction of anthropogenic impact on the environment. The market does the rest. But, in trying to introduce, from the top down, a global market in all greenhouse gases and all sources and sinks, the Kyoto approach tries to do too much, too soon, especially in the absence of binding legal frameworks to enforce contracts among parties who are not bound by other strong ties.

There is no precedent for imposing a global trading system from above. First, lessons need to be learned from regional experiments with trading. The European Union Emission Trading Scheme confined itself to trading only in carbon, but then made the fatal error of allowing governments unrestricted powers to allocate allowances instead of auctioning a limited supply, leading to a collapse in the price. The Chicago Climate Exchange is successfully trading a basket of gases, but participation is voluntary. Eventually, different trading systems could evolve and link up as sensible standard practices emerge, giving rise to a global market. But in the final analysis, cap-and-trade cannot deliver the escape velocity required to get investment in technological innovation into orbit, in time. That calls for something else.

Put public investment in energy R&D on a wartime footing

We stare at stark divergences of trends. On the one hand, the International Energy Agency predicts a doubling of global energy demand from present levels in the next 25 years. On the other, since 1980 there has been a worldwide reduction of 40% in government budgets for energy R&D [6]. Without huge investment in R&D, the technologies upon which a viable emissions reduction strategy depends will not be available in time to disrupt a new cycle of carbon-intensive infrastructure.

So investment in energy R&D should be placed on a wartime footing. This is a cause that embraces the political spectrum, including Kyoto supporters. In 1992 former US Vice-President Al Gore called for a 'strategic environment initiative' as part of his vision for a 'global Marshall Plan'. The conservative American Enterprise Institute in Washington DC also supports primary research on sustainable new energy technologies. In 2006, Lord Rees, the president of Britain's Royal Society, suggested that major public investment in R&D should be kick-started by a global investment in energy technologies research on the scale of the Manhattan Project [7].

It seems reasonable to expect the world's leading economies and emitters to devote as much money to this challenge as they currently spend on military research — in the case of the United States, about $80 billion per year. Such investment would provide a more promising foundation for decarbonization of the global energy system than the current approach.

Increase spending on adaptation

For the best part of a decade, discussion of adaptation was regarded by most participants in climate policy-making as tantamount to betrayal [8]. Even though it was widely recognized by the end of the 1980s that the existing stock of atmospheric greenhouse gases was likely to lead to some inevitable warming, the policy community suppressed discussion of adaptation out of fear that it would blunt the arguments for greenhouse-gas mitigation.

Today, although the taboo on discussing adaptation is lifting, the allocation of effort remains skewed. The (unmet) commitment of international resources to the multilateral Adaptation Fund under the United Nations Climate Change Convention is $1.5 billion, derived in part from a tax on the Clean Development Mechanism. Funds for mitigation, however, come from many sources and total at least $19 billion. We believe that global adaptation efforts need to be funded at comparable scales to those we advocate for investment in technology R&D.

The treaty negotiated at Kyoto in 1997 has been ratified by 172 nations. Has it made a difference? (Photos: T. YAMANAKA/AFP/GETTY; J. SUTTON-HIBBERT/ZUMA PRESS/NEWSCOM)

Many climate activists seem to assume that slowing greenhouse-gas emissions has logical and ethical priority over adapting to climate impacts. But the ethical issues cut both ways. Current emissions reductions will mainly benefit future generations, whereas the momentum already in the climate system drives the near-term. Faced with imminent warming, adaptation has a faster response time, a closer coupling with innovation and incentive structures, and thereby confers more protection more quickly to more people. It is not clear to us that the interests of millions of people in poorer countries who depend on marginal ecosystems are best served by an exclusive preoccupation with mitigation. Indeed, such a narrow focus is likely to be a fatal error. Mitigation and adaptation must go hand in hand.

Work the problem at appropriate scales

Climate change is a multi-level governance problem. Some commentators recognized early on that it is not just, or even primarily, a matter for negotiation among nation states9. However, national governments have been slow to recognize this. Global responses to climate policy can learn from the US system of federalism that encourages small-scale policy experiments at the state or local-government levels as well as with the philanthropic and private sectors. When state or local policies succeed, such experiments can be implemented at the federal level, and often with the enthusiastic support of national politicians.

David Victor at the Council on Foreign Relations and his colleagues have proposed exactly this approach to climate policy, suggesting that a "global federalism of climate policy" is emerging from the rubble of the Kyoto Protocol [9]. Rather than the top-down universalism embodied in Kyoto, countries would choose policies that suit their particular circumstances. Ironically, this 'policies and measures' approach was being pursued before the emergence of the Kyoto regime [10]. However, it has been largely neglected in the post-Kyoto process. Although a bottom-up approach may seem painfully slow and sprawling, it may be the only way to build credible institutions that markets endorse. The agenda for the Bali conference should focus on this and on the scale-up of energy R&D rather than on drafting a 'bigger and better' version of Kyoto.

The silver buckshot approach

Sometimes the best line of attack is not head-on. Indirect measures can deliver much more: these range from informational instruments, such as labelling of consumer products; market instruments, such as emissions trading; and market stimuli, such as procurement programmes for clean technologies; to a few command-and-control mechanisms, such as technology standards [11]. The benefit of this approach is that it focuses on what governments, firms and households actually do to reduce their emissions, in contrast to the directive target setting that has characterized international discussions since the late 1980s.

Because no one can know beforehand the exact consequences of any portfolio of policy measures, with a bottom-up approach, governments would focus on navigation, on maintaining course and momentum towards the goal of fundamental technological change, rather than on compliance with precise targets for emissions reductions. The flexibility of this inelegant approach would allow early mitigation efforts to serve as policy experiments from which lessons could be learned about what works, when and where [12]. Thus cooperation, competition and control could all be brought to bear on the problem.

Does the Kyoto bandwagon have too much political momentum? We hope not. It will take courage for a policy community that has invested much in boosting Kyoto to radically rethink climate policy and adopt a bottom-up 'social learning' approach. But finding a face-saving way to do so is imperative. Not least, this is because today there is strong public support for climate action; but continued policy failure 'spun' as a story of success could lead to public withdrawal of trust and consent for action, whatever form it takes.

References

  1. Victor, D. The Collapse of the Kyoto Protocol and the Struggle to Slow Global Warming (Princeton Univ. Press, New Jersey, 2001).
  2. Prins, G. & Rayner, S. The Wrong Trousers: Radically Rethinking Climate Policy. Joint Working Paper (James Martin Inst./Mackinder Centre, in the press).
  3. British Petroleum Statistical Review of World Energy, June 2007; http://www.bp.com/statisticalreview
  4. Arthur, W. B. Econ. J. 99, 116–131 (1989).
  5. Wara, M. Nature 445, 595–596 (2007).
  6. Rayner, S. The International Challenge of Climate Change: UK Leadership in the G8 and EU. Memorandum to the Environmental Audit Committee, House of Commons (24 November 2004).
  7. Rees, M. Science 313, 591 (2006).
  8. Pielke, R. A., Prins, G., Rayner, S. & Sarewitz, D. Nature 445, 597–598 (2007).
  9. Rayner, S. & Malone, E. L. Ten Suggestions for Policy Makers. In Human Choice and Climate Change: An International Assessment, Vol. 4, What Have We Learned? (eds Rayner, S. & Malone, E.) 109–138 (Battelle Press, Columbus, Ohio, 1998).
  10. Victor, D. G., House, J. C. & Joy, S. Science 311, 336 (2006).
  11. US Department of Energy. A Compendium of Options for Government Policy to Encourage Private Sector Responses to Potential Climate Change. Report to the Congress of the United States (1989).
  12. Verweij, M. et al. Public Administration 84, 817–843 (2006).

Afghan Geology: Riches Unmined

Published online 22 October 2007 | Nature 449, 968-971 (2007) | doi:10.1038/449968a

News Feature

Geology: Mine games

Under the rubble of war-torn Afghanistan lie natural resources worth billions. Rex Dalton reports from Kabul on the scientists risking their lives to see them developed for the good of the country.


In a canyon just outside Kabul, the rocky terrain is strewn with debris symbolizing the troubled past and tenuous future of war-torn Afghanistan.

Exploratory cores, drilled decades ago by Soviets probing for minerals, are scattered across a landscape peppered with landmines. A line of bomb craters crosses the basin, which was home to a terrorist training camp until late 2001, when US B-52s swept overhead, dropping bunker-busters in retaliation for the terrorist attacks of 11 September. Among other things, the Americans destroyed a building that had been used to store geological cores, later turned into an ammunition dump.

Below this rubble lies a potential economic and social boon for the troubled nation — a massive copper deposit estimated to be worth US$30 billion at today's high prices. The deposit, called Aynak, has never been developed into a viable mine, but international corporations are now competing to win a major mining concession there. What happens at Aynak could eventually serve as a model for developing Afghanistan's other natural resources, ranging from mineral wealth to reserves of coal and petroleum.

But concerns about the Aynak bidding process have set off a behind-the-scenes scramble among consulting scientists, diplomats and aid agency officials to try to ensure its success. In June, the top World Bank geological consultant to the Afghanistan government sent a report to the office of President Hamid Karzai that sharply criticized how the country's ministry of mines was handling the competition. The consultant, James Yeager, called for new analysis of the bids, more emphasis on social and economic benefits and a stronger analytical role by the inter-ministerial council that administers the process. The government is soon expected to announce two finalists for the concession, narrowing the field from the current five.

Because he raised the alarm, Yeager thinks that he was targeted for assassination. A capped beer bottle of hydrochloric acid was slipped into the refrigerator of his heavily guarded apartment in Kabul; he stopped just short of drinking it. Yeager did not renew his World Bank contract and instead returned home to Denver, Colorado, joining numerous other consulting scientists leaving Afghanistan at a time when their experience is sorely needed. Meanwhile, researchers who remain there face a range of threats, from kidnapping to landmines to booby traps.

Nevertheless, some are optimistic that Afghanistan's natural resources can be developed in a stable and sustainable manner. Officials at the World Bank, for instance, say that the inter-ministerial council has started to strengthen its role in the Aynak bidding process by asserting power over the ministry of mines and shifting the competition onto a steadier course. Using the phrase that has been a byword for conflict in the region since the days of Rudyard Kipling's Kim, World Bank mining engineer Michael Stanley says: “We are into a new phase of the Great Game.”

Afghanistan is a key player in the game because of its panoply of geological riches, created as the Indian subcontinent rams into Asia, and thrust into the air and exposed in the Hindu Kush mountain range. Coal, rare industrial metals and precious stones abound at various points along the range. The northern provinces of the country also have oil and gas reserves.

Back to work

Abandoned shell casings litter the exploration tunnels around the Aynak copper deposit. (Photo: J. YEAGER)

For centuries, Aynak has been known for its copper, used for weapons, tools and trade along the Silk Road. Before withdrawing from Afghanistan in 1989, the Soviets drilled countless cores to assess the deposit, now estimated to hold 240 million tonnes of the metal. Work halted for years during Taliban rule, but after 2001, reconstruction teams started to identify the country's assets. In the United States, the Bush administration encouraged Afghan expatriates to help develop their homeland; some scientists who had fled the country returned (see 'Science after the Taliban').

Work in post-Taliban Afghanistan wasn't easy. In Kabul, the Afghanistan Geological Survey building had been reduced to a shell, pockmarked by rocket blasts. Its equipment, samples and library were destroyed; anything burnable had been used for fuel. US officials helped to reconstruct the building, spending at least $6.2 million to modernize the facilities with computers, labs, sample storage racks and a library housing old, rare and sometimes bullet-scarred reference volumes. The UK government also chipped in with US$8 million and a three-year contract for scientists from the British Geological Survey (BGS) to analyse natural resources.

Although Aynak is only about 35 kilometres southeast of Kabul, the BGS scientists were forbidden to go there without military protection. Even when they did manage to get there, they couldn't sample for minerals in the scattered cores. The cores had been blasted apart by bombs aimed at the old mining tunnels, which had been suspected of housing Osama bin Laden.

Back in Kabul, the team managed to patch together a detailed picture of the copper deposit from surviving Soviet core reports. Courageous staff from the Afghanistan Geological Survey had hidden the 20-year-old documents during the Taliban regime. “The Taliban would have killed them if they had found the reports,” says Antony Benham, a mineral specialist at the BGS.

Working with these formerly hidden records, BGS scientists plugged in data from the cores to create a computerized model of copper distribution at Aynak. “It was a remarkable job,” says geologist Richard Ellison, the official in charge of the agency's contract, which ended on 1 September. The BGS is now negotiating for a new contract with the World Bank.
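The article does not describe how the BGS model works. As a purely illustrative sketch, inverse-distance-weighted (IDW) interpolation is one common way to estimate grades between drill cores; all coordinates and grades below are invented for the example:

```python
import math

# Toy drill-core samples: (x, y, depth in metres, Cu grade in %).
# Values are made up; the real Soviet core data are not published here.
cores = [
    (0.0,   0.0,   100.0, 1.8),
    (500.0, 0.0,   120.0, 2.4),
    (0.0,   500.0, 90.0,  1.1),
    (500.0, 500.0, 110.0, 2.9),
]

def idw_grade(x, y, z, samples, power=2.0):
    """Estimate grade at (x, y, z) as a distance-weighted mean of samples."""
    num = den = 0.0
    for sx, sy, sz, g in samples:
        d = math.dist((x, y, z), (sx, sy, sz))
        if d < 1e-9:              # query point coincides with a sample
            return g
        w = 1.0 / d ** power      # nearer samples get more weight
        num += w * g
        den += w
    return num / den

# Grade estimate at the centre of the block, between the four cores:
print(f"estimated grade: {idw_grade(250, 250, 105, cores):.2f}% Cu")
```

A production block model would use geostatistical methods (variograms, kriging) over thousands of samples, but the core idea is the same: fill the gaps between sparse drill holes with spatially weighted estimates.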

Power problems

The firm that eventually wins the Aynak concession will face many difficult tasks, but perhaps the most daunting will be to secure electricity for mining and smelting equipment. The villages around Aynak have only generators as a source of power, and building an enormous copper production facility will require lots of power from coal-fired plants. But before hundreds of millions of dollars are invested in power plants and mining facilities, coal supplies must be located, assessed and graded for development.

To evaluate coal resources in Afghanistan, the US Geological Survey (USGS) sent in a task force led by geologist John SanFilipo. SanFilipo had earned an international reputation for assessing coal in dangerous environments, particularly in adjacent Pakistan. There, he discovered one of Asia's largest coal reserves, the Thar deposit in the Sindh province of southeastern Pakistan. But much of Thar's coal is difficult to mine because of political difficulties in the country.

SanFilipo and his colleagues faced similar challenges in assessing Afghanistan's coal resources. The Soviets and officials from the Afghanistan Geological Survey had previously found a massive coal band running across much of northern Afghanistan, with mining centred around the town of Pul-I-Khumri, northwest of Kabul. Another known coal band runs along the country's southeastern border with Pakistan, in the Katawaz Basin.

Historically, this coal has been tapped by artisanal methods — in mines called 'dog holes', dug by locals who use donkeys for underground hauling. With little shoring or proper ventilation, the tunnels regularly collapse, killing villagers. In these warrens, the geologists went coal-hunting, sometimes using old ways. Yeager, for instance, carried a bird to test air quality. “When the bird died, I left,” he says.

Details on the locations of the coal deposit were extremely sketchy, and map conditions worse. “The Afghans had squirrelled the maps away during the Taliban days,” says SanFilipo. “When I got there, they had brought them back to the Afghanistan Survey building, where they were piled like junk. We set out to organize, scan and digitize them for a permanent record.”

The quality of Afghanistan's coal deposits varies greatly. Some contain coal that burns hot and clean; other coals are more problematic, rich in sulphur and fluorine, and emitting noxious gases when burnt. Just north of Aynak lies a coal deposit called Chalaw. The coal there could fire a power plant for the Aynak mine, officials say, but it is rich in fluorine — which requires added measures to limit pollution from power plants, and protective venting if burned inside houses.

Another challenge is to find coal that is not buried so deep that it can't be extracted. “The critical step is to determine where coal is not at the surface, but is still easily minable,” says SanFilipo. Geologists thus search for a relatively flat area where coal is just below a weathered surface.

Dating the coal is also crucial, because coals of the same age will tend to be of the same quality. Older geological maps show coal reserves that might have been dated incorrectly. “We want to create a stratigraphic picture of coal deposits across the entire country,” says SanFilipo. The USGS team uses palaeobotanical clues such as pollen to date the coal. It relies on scientists such as Rahman Ashraf, a palaeobotanist who fled Afghanistan after the Soviet invasion but now serves as special adviser to President Karzai. “It was a dream that I could return to work in my country,” says Ashraf, who has also been appointed chancellor of Kabul University.

Road blocks

The power lines of Afghan politics run through nearly all attempts to characterize the country's natural resources. In one case, sources allege that SanFilipo was blocked from returning to Afghanistan for coal exploration by the actions of another USGS geologist — Afghanistan-born Said Mirzad.

Originally trained in France, Mirzad was director of the Afghanistan Geological Survey before the Soviet invasion in 1979. After that he ran computer services for a small USGS office in San Diego, California. After the terrorist attacks of 11 September, Mirzad's Afghan friendships vaulted him to the USGS headquarters in Reston, Virginia, to help coordinate resource development in Afghanistan.

Mirzad has deep and historic connections in Afghanistan, where his brother-in-law is the minister of defence. Mirzad is also the mentor of the minister of mines, Mohamad Ibrahim Adel, who was one of those criticized for the handling of the Aynak copper bidding competition. And Mirzad has powerful allies in Washington DC; both the US state and defence departments awarded him medals for outstanding service in 2005.


In Afghanistan, Mirzad has aided multiple projects, such as an airborne geological assessment he urged the Karzai government to fund after aid agencies declined. But some also see him as an obstructionist. Beginning in early 2005, SanFilipo attempted unsuccessfully to return to Afghanistan to continue his fieldwork and geological map inventory. His repeated requests to US officials in Kabul for clearance to return were denied, keeping him out of the country for 15 months. He was finally allowed to return three times in 2006, but not since then. “A geologist must go out in the field to see,” says Ashraf, praising Yeager and SanFilipo's expeditions.

Sources say that denials for SanFilipo's travel to Afghanistan were traced to Mirzad, who was in Kabul advising Zalmay Khalilzad, then the US ambassador to Afghanistan. Khalilzad is arguably the Bush administration's most-favoured Afghan and has since been appointed as the US ambassador to the United Nations. Mirzad's historic friendships also extend to the presidential palace in Afghanistan: he used to play bridge with President Karzai's father.

Mirzad, though, denies hindering SanFilipo's work in any way. “This is all gossip,” he says. “There is not a shred of evidence.” But neither he nor the USGS officials could explain why SanFilipo was refused access to Afghanistan during the time in question.

In October 2006, SanFilipo lectured at the annual meeting of the Geological Society of America in Philadelphia, Pennsylvania, on the poor state of mining in Afghanistan. Not long afterwards, he was removed as the project leader for the USGS effort. Since the meeting, he has declined to discuss the issue publicly.

These events set back coal exploration in Afghanistan substantially, say several sources in Afghanistan and the United States, who requested anonymity so they may continue to help the country without reprisals. “It is unforgivable what has happened, a disaster,” says Mary Louise Vitelli, a US attorney in Kabul who has worked extensively in war-torn regions. “Guys like SanFilipo are rare; he produces quality analysis under difficult circumstances.”

Researchers such as Antony Benham (left) and Bob McIntosh (right) are helping to revitalize research at the Afghanistan Geological Survey building (far left). A. BENHAM

And some scientists with long-term experience in the subcontinent saw the tapping of Mirzad for a reconstruction role as counterproductive — as were other selections by the Bush administration in Afghanistan and Iraq. Jack Shroder, a geologist at the University of Nebraska in Omaha, has worked in Afghanistan for 35 years, conducting glacial, mapping and global-positioning-system studies. He has been integrally involved in the American Institute of Afghanistan Studies, a multidisciplinary organization that fosters research. But Shroder says that he and his fellow institute leaders were never consulted about the Bush administration's science policy for Afghanistan. “We were the boots-on-the-ground guys — in and out of Afghanistan before the terrorist attacks,” he says. “They completely ignored us; they think academics are all left-wingers.”

Shroder also says that he has repeatedly encountered difficulties dealing with Mirzad, whom he calls a hard-core nationalist. “He didn't want foreigners to get access to maps, even if they were helping,” says Shroder. But Mirzad expressed surprise that he would be seen as an obstructionist. “I believe the only thing that can save Afghanistan is its indigenous wealth. I am completely behind that,” he says.

USGS managers of international programmes, such as Asia project chief Jack Medlin, praise Mirzad for fighting to secure funds for the agency to work in Afghanistan. Even so, the USGS wanted $12 million a year for five years to develop resources in Afghanistan, but scrapes by with about $9 million a year.

On 13 November, the USGS is scheduled to release a status report on minerals in Afghanistan, after some delay. The main coal report isn't to be released until next year, and it will be short of data, as few USGS scientists have travelled to Afghanistan this year. Expatriate Afghan geologist Shah Wali Faryad, now of Charles University in Prague, repeatedly invited USGS scientists to attend a conference on geological opportunities on 15–16 October in Kabul, but the agency didn't respond. Medlin cites security issues as the reason.

Competitive streak

As the coal debacle simmers in the background, bigger questions arise about the Aynak copper project. Nine corporations originally sought the concession, which includes an option on the nearby Darband deposit. By June, the field of contenders had been narrowed to five firms, all mining heavyweights: Strikeforce, part of Russia's largest private employer, the Basic Element Group; China Metallurgical Group, a Chinese government-owned conglomerate based in Beijing; London-based Kazakhmys Consortium, which mines and processes copper in Kazakhstan; Hunter Dickinson of Vancouver, Canada, which mines minerals internationally; and Phelps Dodge, a leading US copper mining firm based in Phoenix, Arizona. An informed source says that a few months ago the favourite of the ministry of mines' technical group was the China Metallurgical Group, with Hunter Dickinson a distant second.

In his critique of the process, Yeager wrote that Afghan expertise wasn't being used to its fullest extent, and that officials controlled by Adel, the mining minister, had too much influence in the process. No economists, attorneys, environmentalists or foreign-affairs specialists had been involved in the technical analysis, he asserted, which violates the laws Afghanistan implemented after the Taliban were ousted. Yeager also noted the importance of the bidders' track records: the top-ranked company has come under fire for poor environmental records in mining in nations other than its native China.

Yeager also contends that the strategic implications of selecting either an Eastern or Western firm have not been addressed. If Afghanistan were to choose a Russian, Kazakh or Chinese bid, Yeager wrote, firms from Western nations might not seek other mineral concessions in the region in the future, fearing that Afghanistan's neighbours may have undue influence.

But Adel counters that the tender bids have been “very strong, and everyone is happy with the progress.” He adds that he has not seen Yeager's report, but considered its transmission to Karzai's office “a breach” of the adviser's duties. “He is not directly responsible for the bidding,” says Adel.

For environmental specialist Daud Saba, a human development adviser to President Karzai, the difficulties with Aynak have been particularly painful. Developing such a rich natural resource should be spearheaded by the country's leading scientists, he feels. “It breaks my heart when I see what is happening,” he says. And unless Afghanistan puts resource development on a steady course, many more hearts may also be broken by the opportunities lost.

Rex Dalton is a US West Coast correspondent for Nature. This article is part of the Global Theme on Poverty and Human Development, organized by the Council of Science Editors. All articles from the Nature Publishing Group are available free at http://www.nature.com/povhumdev. The content from all participating journals can be found at http://www.councilscienceeditors.org/globalthemeissue.cfm

Comments


  • Turn this article on its head, and it starts to make sense. Mirzad is an Afghan nationalist (and why not?). SanFilipo and Yeager have been trying to monopolise the exploitation of Afghan minerals for US corporations. Yeager's unsolicited letter: "If Afghanistan were to choose a Russian, Kazakh or Chinese bid, Yeager wrote, firms from Western nations might not seek other mineral concessions in the region" says it all. Mining conditions in Afghanistan may be closer to those that Chinese or Kazakh firms are used to than US ones. And their standards — of environmental protection, worker safety etc. — may not be ones westerners like; but they may suit the Afghans. The licensing decision process may have been corrupt; at least it is being decided in Kabul, not Washington.

    • 23 Oct, 2007
    • Posted by: Giles Cattermole
  • "The villages around Aynak have only generators as a source of power, and building an enormous copper production facility will require lots of power from coal-fired plants." With global warming being the problem that it is, why are they so intent on using coal to generate electricity? Sun and wind are plentiful in Afghanistan. Or is there something different about electricity generated from coal that makes it more suited to copper smelting?

    • 23 Oct, 2007
    • Posted by: Zoltan Toth