Earth News This Week

Friday, October 26, 2007

Geology interpretation

GSA Today

Article: pp. 4–10

What do you think this is? “Conceptual uncertainty” in geoscience interpretation

C.E. Bond^a, A.D. Gibbs^b, Z.K. Shipton^c, S. Jones^d

a Dept. of Geographical and Earth Sciences, University of Glasgow, Glasgow G12 8QQ, UK, and Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK, clare@mve.com
b Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK
c Dept. of Geographical and Earth Sciences, University of Glasgow, Glasgow G12 8QQ, UK
d Midland Valley Exploration Ltd., 144 West George Street, Glasgow G2 2HG, UK

Interpretations of seismic images are used to analyze sub-surface geology and form the basis for many exploration and extraction decisions, but the uncertainty that arises from human bias in seismic data interpretation has not previously been quantified. All geological data sets are spatially limited and have limited resolution. Geoscientists who interpret such data sets must, therefore, rely upon their previous experience and apply a limited set of geological concepts. We have documented the range of interpretations of a single data set, and in doing so have quantified the “conceptual uncertainty” inherent in seismic interpretation. In this experiment, 412 interpretations of a synthetic seismic image were analyzed. Only 21% of the participants interpreted the “correct” tectonic setting of the original model, and only 23% highlighted the three main fault strands in the image. These results illustrate that conceptual uncertainty exists, which in turn explains the large range of interpretations that can result from a single data set. We consider the role of prior knowledge in biasing individuals in their interpretation of the synthetic seismic section, and our results demonstrate that conceptual uncertainty has a critical influence on resource exploration and other areas of geoscience. Practices should be developed to minimize the effects of conceptual uncertainty, and it should be accounted for in risk analysis.

Received: March 28, 2007; Accepted: August 16, 2007

DOI: 10.1130/GSAT01711A.1

INTRODUCTION

Geoscientists are required to make predictions from geological data that are often sparsely distributed or incomplete. For example, boreholes and seismic surveys sample limited volumes and have limited resolution. Geoscientists use these data to produce geological framework models (3-D representations of stratigraphic horizons and fault planes) and to determine properties such as lithology and permeability. Components of these framework models will always be characterized by some uncertainty due to the inherent incompleteness of geological data sets. Quantifying this uncertainty is important because geological framework models are often used as the basis for assessments and decisions that have important social and commercial implications (e.g., resource extraction, ground-water supply, CO2 and nuclear waste storage, solute transport, and earthquake and other geological hazard predictions).

Quantification of uncertainty in geological framework models in petroleum geoscience has concentrated on such parameters as the petrophysical properties of reservoirs (e.g., Egermann and Lenormand, 2005) and the resolution and processing of seismic data (e.g., Jones et al., 2007). In petrophysical models, predictions of reservoir permeability are based on water saturation data (Aguilera, 2004), pressure measurements, log data, well cores (Yan et al., 2006), and the like. These data are used to predict heterogeneities in reservoir properties and to calculate uncertainty parameters for flow simulations. Geostatistics, particularly in reservoir modeling, is widely used to aid in reservoir forecasting, uncertainty calculations, and risk analysis for decision making. Understanding the limitations of geostatistics is critical if it is to be used as a decision-making tool (Deutsch, 2006). Defining the limitations of geostatistics and uncertainty in interpretation is important for (1) acknowledging and assessing possible alternatives to a single interpretation; (2) highlighting areas within an interpretation that are less well constrained; (3) propagating uncertainties into further modeling or risk assessments; (4) combining and rationalizing different, seemingly inconsistent data sets and/or types; and (5) educating management, politicians, and the general public about scientific uncertainty.
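
To make point (3) concrete, the sketch below shows one common way parameter uncertainties are propagated into a volumetric estimate: Monte Carlo sampling. This is an illustrative Python sketch only; all input distributions (area, thickness, porosity, water saturation) are hypothetical values invented for the example, not data from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo realizations

# Hypothetical input distributions (illustrative values only)
area = rng.normal(4.0e6, 0.4e6, n)       # reservoir area, m^2
thickness = rng.normal(25.0, 5.0, n)     # net pay thickness, m
porosity = rng.normal(0.18, 0.03, n)     # porosity, fraction
sw = rng.normal(0.35, 0.05, n)           # water saturation, fraction

# In-place hydrocarbon volume: gross rock volume * porosity * (1 - Sw)
hciip = area * thickness * porosity * (1.0 - sw)

p10, p50, p90 = np.percentile(hciip, [10, 50, 90])
print(f"P10 {p10:.3e}  P50 {p50:.3e}  P90 {p90:.3e}  m^3")
```

Note that a workflow like this only propagates parameter uncertainty within a fixed framework model; it says nothing about the conceptual uncertainty in the choice of that model, which is the subject of this paper.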

We use the term conceptual uncertainty for the range of concepts that geoscientists could apply to a single data set. Geoscientists use their training and experience (i.e., their prior knowledge) to apply a concept (or, rarely, to generate a new one) to data to construct an interpretation and, ultimately, to produce a framework model. We suggest that the initial geological framework model is a fundamental source of uncertainty because it is dependent on the tectonic paradigm or concept used in its construction. We argue that conceptual uncertainty can be more important than the uncertainty inherent in the positioning of horizons or fault planes in a framework model or in the subsequent populating of these features with petrophysical properties.

In this study, we have attempted to quantify conceptual uncertainty from 412 interpretations of a single synthetic seismic data set. In collating the interpretations of a large number of geoscientists with different backgrounds and experience, we have effectively constrained the range of concepts that could be applied to the synthetic seismic. In effect, we have defined the “conceptual uncertainty space” of that data set. We have examined the role of prior knowledge in seismic data interpretation and have highlighted examples in which the prior knowledge of individual geoscientists appears to have affected their interpretational choices and final outcome. In particular, we looked at examples of how expertise in particular tectonic settings, length of experience, and type of training and interpretational techniques used may have affected interpretational behavior. We also considered the influence of the broader contextual information a geoscientist uses in his or her interpretation. In the Discussion section of this paper, we make suggestions for more rigorous studies of conceptual uncertainty and discuss the significance for interpretation and prediction in geoscience.

Use of prior knowledge is the main method by which scientific disciplines progress and evolve (e.g., Kuhn, 1962; Levi-Strauss, 1966), and commonly described human biases form part of the way we use prior knowledge to interpret data (e.g., Frodeman, 1995). Cognitive bias commonly results from using heuristics, or rules of thumb, based on experience (prior knowledge). Bias from prior knowledge in cases of interpretational uncertainty is well documented in other disciplines, such as economics, where theories such as elicitation theory are used to mitigate it. Baddeley et al. (2004) noted that heuristics are often used when making quick decisions or when data are difficult to process or limited in extent. Curtis and Wood (2004 and references therein) provided examples and discussion of the use of prior knowledge in geoscience. However, few studies of the effects of prior knowledge in geoscience have been undertaken, a notable exception being Rankey and Mitchell (2003), who documented the variation in interpretations of a single data set by six seismic interpreters.

EXPERIMENT DESIGN AND PROCEDURE

To document the range of potential interpretations from a single data set and to test whether prior knowledge is important for interpretational outcomes, we asked geoscientists to produce a single interpretation of a seismic section. Rather than ask geoscientists to interpret a real section for which the “answer” is unknown, we created a synthetic seismic section from a 2-D geological model. Forward modeling enabled us to produce a geological model from an initial layer-cake stratigraphy so we could define the model input parameters and evolution, allowing us to compare interpretations against a “correct” answer. The synthetic seismic section (in two-way time) was printed as an A4 color plate (Fig. 1) with a series of questions on the reverse.1 In 2005 and 2006, we asked 412 geoscientists (participants) to make interpretations (answers) at conferences, workshops, and universities. These events took place in Europe, North America, and the Middle East. We questioned participants on factors we thought might have influenced their interpretations, including the participant's educational level, length of experience, background expertise, and perception of his or her ability in structural geology and seismic interpretation. If you would like to try the interpretation experiment for yourself, bear in mind that Figure 2 and the next paragraph contain the “answer” to the synthetic seismic section.

Figure 1.

What do you think this is? The uninterpreted synthetic seismic section used for this study. The synthetic seismic and questionnaire given to participants can also be found in the GSA Data Repository (DR1; see text footnote 1).

Figure 2.

Production of the geological framework model. (A–D) Stages of the forward modeling in 2DMove (the model used the inclined shear algorithm; isostasy and compaction were assumed not to have first-order effects). Details of the forward modeling are given in the figure and in the GSA Data Repository (DR2; see text footnote 1). (E) Examples of interpretations of the synthetic seismic image, proposed prior to the experiment.

We forward modeled an inverted growth fault (i.e., extension followed by thrusting on a single structure; Figs. 2A–2D). The model was designed so that a number of realistic interpretations could be made from the single synthetic seismic data set (Fig. 2E). Details of the model and synthetic seismic generation can be found in the GSA Data Repository (see footnote 1). In the experiment, participants were deliberately given little information about the seismic section and its generation. However, if the participants had taken the time to carefully read the information on the reverse of the seismic image, they would have learned that the seismic section they were being asked to interpret was synthetic. The introduction to the questionnaire included the sentence, “The section overleaf has been created by forward modeling using known assumptions.” However, few of the participants who engaged in conversation about the exercise appeared to appreciate this fact.
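
A generic sense of how a synthetic seismic trace is produced can be sketched with the standard 1-D convolutional model: compute reflection coefficients from an acoustic impedance model and convolve them with a wavelet. The Python sketch below is illustrative only; the impedance values are hypothetical, and this is not the 2DMove/GXII workflow used to generate Figure 1 (those details are in the Data Repository).

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet with peak frequency f (Hz) sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002  # two-way-time sample interval, s

# Hypothetical 1-D acoustic impedance model (density * velocity)
impedance = np.concatenate([
    np.full(100, 4.0e6),   # e.g., shale
    np.full(80, 6.5e6),    # e.g., sandstone
    np.full(120, 5.5e6),   # e.g., shale
])

# Normal-incidence reflection coefficients: R = (Z2 - Z1) / (Z2 + Z1)
rc = np.zeros_like(impedance)
rc[1:] = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# Convolve reflectivity with the wavelet to obtain a synthetic trace
trace = np.convolve(rc, ricker(30.0, dt), mode="same")
print(f"{trace.size} samples, peak amplitude {np.abs(trace).max():.3f}")
```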

We wanted to test the range of concepts that would come out of a simple interpretation exercise. We therefore did not ask the participants how they would have tested their interpretations. In a real geological situation, once a preliminary model or hypothesis has been generated, it is generally tested by collecting further data or by checking the validity, for instance, by restoring the section. In our study, the participants were only asked to produce a single interpretation and were given no further information than that on the questionnaire. This precluded participants from forming multiple models that could then be compared and rejected based on such testing.

RESULTS

We sorted the returned interpretations into the following tectonic setting categories: extension, thrust (shortening), inversion, strike-slip, diapirism (salt or mud), other (geological setting identified but not included in the list), and unclear (tectonic setting could not be identified). Many answers contained more than one of these tectonic setting elements; in these cases, the dominant or main tectonic setting was chosen. Strict selection criteria (horizon offsets, arrows or labels to define fault motion, and/or written annotations) were used to categorize the answers to reduce our own bias in the sorting procedure. Because of these strict sorting criteria, 32% of participants' answers had to be classified as unclear. One reason for the small number (2%) of strike-slip answers returned is that out-of-plane displacement cannot be seen in a vertical seismic section. None of the participants annotated in- or out-of-plane movements, even those who explicitly stated they had a strike-slip interpretation. For the “apparently” strike-slip answers, we considered the geometric arrangement of faults, but only classified the answer as strike-slip in cases where there was no ambiguity. The 5% of answers in the “other” category fell outside the range of the common tectonic settings categories we had chosen.
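
The sorting procedure amounts to reducing each answer to a single dominant setting, with answers failing the strict criteria falling into "unclear." A minimal Python sketch of the tallying logic (the answers list is hypothetical; the real sorting was done by hand on the paper questionnaires):

```python
from collections import Counter

CATEGORIES = {"extension", "thrust", "inversion", "strike-slip",
              "diapirism", "other"}

def dominant_setting(elements):
    """Reduce a multi-element answer to its dominant tectonic setting.

    'elements' lists the settings identified in an answer, dominant
    first; answers failing the strict criteria arrive as an empty list.
    """
    return elements[0] if elements and elements[0] in CATEGORIES else "unclear"

# Hypothetical answers standing in for the 412 questionnaires
answers = [["inversion"], ["thrust", "extension"], [], ["thrust"]]
counts = Counter(dominant_setting(a) for a in answers)
total = sum(counts.values())
for setting, n in counts.most_common():
    print(f"{setting:12s} {100 * n / total:5.1f}%")
```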

The range of tectonic settings implied by the interpretations is summarized in Figure 3. The answers span all five tectonic concept categories (inversion, strike-slip, extension, shortening [thrust], and diapirism). Participants also applied nontectonic concepts in their interpretations, such as carbonate reefs and sequence stratigraphy concepts (5%), further extending the conceptual uncertainty for the data set. Only 10% of answers showed the “correct” inversion model, as produced in the forward model. This increases to 21% if we include participants who showed both extension and thrusting in their interpretation. In the rest of this article, inversion is classified as extension and thrusting anywhere in the section, for instance, extension on one fault and shortening on another. Across all the tectonic setting expertise groups, the most common interpretational answer (26%) was a thrust (shortening) tectonic setting. These results show that 79% of the participants applied the wrong concept in their interpretation and that the most common answer was not the “correct” scenario created in the forward model.

Figure 3.

Pie chart showing the relative proportions of interpretations by tectonic setting. Thrust-based interpretations were the most common, accounting for 26% of the total. Inversion of a single structure, as in the forward model, accounted for only 10% of answers. Overall, inversion within the section was recognized by 21% of participants.

The features most commonly singled out in the seismic image were areas of high- or low-intensity signals. High-intensity features would normally indicate an acoustic impedance contrast and therefore a geological feature or change in rock property (e.g., Brown, 1986). The two main fault segments, areas of high-intensity signals on the image, were highlighted by 69% and 68% of participants, respectively. The next most common feature interpreted, highlighted by 62% of participants, was an area of almost no data (Fig. 4). In the original model, this area of no data is a small fault splay in the hanging wall of one of the main faults. Many participants annotated diapirism or a gas chimney in this area.
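
For reference, the quantity behind this statement is the standard normal-incidence reflection coefficient, which relates the impedance contrast across an interface to the strength of the reflected signal:

```latex
R = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad Z_i = \rho_i v_i
```

where $\rho_i$ is bulk density and $v_i$ is P-wave velocity; the larger the impedance contrast, the higher the reflection amplitude.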

Figure 4.

The uninterpreted synthetic seismic section with the three most frequently interpreted features annotated (inset). Percentages refer to the proportion of participants who highlighted each feature.

In the following sections, we look at examples in which the prior knowledge of individual geoscientists appears to have directly influenced the concepts they applied in their interpretations of the seismic image. We show examples of how expertise, experience, and training influenced the concepts applied and hence the final interpretation. Finally, we consider the reliance of interpreters on a breadth of geological and geographical information to support their interpretations of a data set.

Expertise (Tectonic Setting)

Participants were provided with a list of tectonic settings (extension, inversion, thrust, salt, strike-slip, and other) and asked to indicate their dominant field of expertise. Some participants indicated more than one expertise category; the following analyses do not include these participants. Twenty-nine percent of the participants who indicated thrust tectonics as their dominant field of expertise interpreted the section as thrust faults, while 27% of participants with some other expertise also produced a thrust fault answer. Of the participants with dominant expertise in inversion, 25% produced an inversion interpretation, whereas of those without inversion expertise 20% produced an inversion interpretation. Participants with dominant expertise in extension and diapirism were more likely to produce an answer that matched their expertise than participants with some other dominant expertise (extension expertise 10% as compared to 3% other expertise; diapirism expertise 13% as compared to 7% other expertise). The only group in which the dominant expertise negatively correlated with interpretational outcome, when compared to other geoscientists, was strike-slip (strike-slip expertise 0% as compared to 3% other).

Examples can be found for all settings in which those with a specific expertise appear to have allowed it to dominate their interpretations. Figure 5 shows two examples from students who described their expertise as salt tectonics and sequence stratigraphy, respectively, and who produced interpretations that appear to be based directly on that expertise. In Figure 5B, a master's student in sequence stratigraphy has used classic sequence stratigraphy interpretation techniques (maximum flooding surfaces, onlaps, and truncations) to interpret a reef build-up. In Figure 5A, a Ph.D. student in salt tectonics shows doming associated with salt mobilization. Although these participants have honored the data, they have chosen to interpret them in a way that fits their dominant expertise and knowledge. By applying these dominant concepts to the data set, they have produced an “incorrect” interpretation. In other examples, interpreters have not honored the data, perhaps due to inexperience in seismic interpretation.

Figure 5.

(A and B) Digitized examples of the interpretations of two students with experience in salt tectonics and sequence stratigraphy, respectively. (C and D) Two participants, each with 15+ years of experience, interpreted the same structures as thrust faults and extensional faults; their fields of expertise were thrust and extension tectonics, respectively.

Tectonic setting expertise seems to have influenced the concepts some participants brought to their interpretation. However, the percentage differences between the expertise categories are not all statistically significant, and the examples of interpretations that match dominant expertise are not seen across the group as a whole. The results suggest that one or more other factors also influenced the concepts applied to a data set by an individual. It is important to note that we asked for the participants' dominant expertise rather than their breadth of expertise, and, additionally, we did not take into consideration how proficient each participant may have been in seismic interpretation.
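
To illustrate the kind of significance check implied here, one could run a chi-square test on a contingency table of expertise against answer. The counts below are hypothetical, back-calculated from example percentages (the paper reports percentages, not group sizes), so the sketch demonstrates the method rather than the study's actual statistics:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: inversion experts (25% of 60 gave inversion answers)
# versus all other participants (20% of 300 gave inversion answers)
table = np.array([
    [15, 45],    # inversion expertise: inversion answers, other answers
    [60, 240],   # other expertise:     inversion answers, other answers
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # here p >> 0.05: not significant
```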

Experience (Length)

To evaluate how length of experience affects conceptual uncertainty, participants were asked to indicate their length of technical experience from the categories none, student, 0–5 years, 5–10 years, 10–15 years, and 15+ years. In the study group, students were just as likely to produce an “incorrect” answer (76%) as participants with 15+ years of experience (76%). Like the student examples in Figures 5A and 5B, two professionals with the same level of experience (15+ years) interpreted the seismic image at a petroleum industry conference and produced answers that matched their dominant tectonic setting expertise (Figs. 5C and 5D). Both marked the same features in the top part of the section as faults. The first interpreter, with dominant expertise in thrust tectonics, interpreted the features as thrust faults. The second interpreter, with extensional expertise, marked the features as extensional faults. Neither feature is a fault in the original model (Fig. 2D). These results indicate that participants with more years of experience did not necessarily produce more “right” answers.

Interpretational Techniques

We classified the answers according to the interpretational techniques applied to analyze and interpret the data. We defined five technique classifications from the interpretations: (1) identification of features, in which participants highlighted features such as faults, gas chimneys, and unconformities by drawing along them; (2) identification of horizons, in which participants drew along horizon reflectors and/or identified sediment packages; (3) drawing “sticks,” in which participants simply drew straight lines on the seismic section; (4) annotation, in which participants used arrows and writing to annotate features and horizons; and (5) sketches and/or writing, in which participants wrote a description of their interpretation of the seismic section or drew sketches to show the evolution of their interpretation through time. Examples of the different classifications can be seen in Figure 6. The different styles have an effect on the identification of specific features (e.g., participants whose interpretational style included feature identification were ∼30% more likely to identify the main fault strands than participants who identified only horizons). Table 1 groups the participants by the number of techniques they used to complete their answer. The participants who used the most techniques were the most likely to arrive at the “correct” interpretation; both participants who used four of the five possible techniques produced the “correct” answer.

Figure 6.

Digitized answers showing the range in interpretational styles of the participants. Each example corresponds to one of the technique classifications: features, annotations, sticks, sketches and writing, and horizons.

TABLE 1.

NUMBER OF TECHNIQUES PARTICIPANTS USED IN INTERPRETATIONS AND PERCENTAGE OF PARTICIPANTS IN THESE SUBGROUPS WHO MADE A “CORRECT” INTERPRETATION
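
The grouping behind Table 1 amounts to a tally of correct answers by technique count. The Python sketch below shows that tally on hypothetical placeholder records (the real questionnaire data are not published):

```python
from collections import defaultdict

# Hypothetical records: (number of techniques used, answer was "correct")
records = [(1, False), (1, False), (2, False), (2, True),
           (3, True), (3, False), (4, True), (4, True)]

by_count = defaultdict(lambda: [0, 0])  # n_techniques -> [n_correct, n_total]
for n_techniques, correct in records:
    by_count[n_techniques][1] += 1
    by_count[n_techniques][0] += int(correct)

for n in sorted(by_count):
    n_correct, n_total = by_count[n]
    print(f"{n} technique(s): {100 * n_correct / n_total:3.0f}% correct (n={n_total})")
```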

DISCUSSION

We have documented the breadth of conceptual uncertainty for a single data set. The interpretation produced most often was a thrust-based interpretation rather than the “correct” forward modeled scenario of inversion. There are several non-unique and geologically sound solutions to the data set (Fig. 2E); therefore, the small percentage (21%) of interpretational answers that matched the forward model, and the wide range of concepts applied to the data set, are perhaps not surprising.

Observations of participants' interpretations suggest that they used a range of prior knowledge to undertake the interpretation exercise. In some cases, but significantly not all, prior knowledge based on dominant tectonic setting expertise appears to have biased the concepts participants applied to the data set (Fig. 5).

These observations contrast with those of Rankey and Mitchell (2003), who concluded that interpretations are likely to be based on previous experience and preconceived notions. Our results suggest that other factors, such as an individual's training and the techniques used to interpret the section, may have more influence on interpretational outcome than tectonic expertise. How we define prior knowledge is therefore important when comparing our results to those of other workers.

It is interesting to note that participants with more experience (measured as number of years) did not necessarily produce more “correct” answers. This suggests that the type of experience is more significant than length of experience alone. How participants defined their own length of experience was not constrained (i.e., did participants who were two years into a Ph.D. count themselves as students or as having two years of post-degree experience?). Similarly, we asked participants for their dominant expertise rather than their breadth of expertise, and a participant with expertise in more than one tectonic setting may be better able to distinguish between likely interpretations. These initial results suggest that more than one controlling factor influences conceptual uncertainty; a full multivariate statistical analysis is therefore required to establish significant relationships.
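
One form such a multivariate analysis could take is a logistic regression of interpretational success on expertise, experience, and technique count. The sketch below uses simulated stand-in data (the study's raw records are not published), so the fitted coefficients are meaningless; the point is only the shape of the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # roughly the size of the study group

# Simulated per-participant factors standing in for the questionnaire data
df = pd.DataFrame({
    "expertise": rng.choice(["extension", "thrust", "inversion",
                             "salt", "strike_slip", "other"], n),
    "years": rng.choice([0.0, 2.5, 7.5, 12.5, 17.5], n),  # band midpoints
    "n_techniques": rng.integers(1, 5, n),
})

# Simulate an outcome that depends mainly on technique count
logit = -2.5 + 0.6 * df["n_techniques"]
df["correct"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Multivariate logistic regression: which factors predict a correct answer?
fit = smf.logit("correct ~ C(expertise) + years + n_techniques", data=df).fit()
print(fit.summary())
```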

Participants used a range of interpretational techniques that led to different styles of answers. Our results show that the greater the number of techniques used by individual participants, the greater their chances of producing a “correct” interpretation. We believe that the number of techniques used serves as a proxy for the intensity with which each participant queried the data. Those who used the most techniques may have scrutinized the data more thoroughly than participants who used fewer techniques. However, some techniques, such as feature identification, also appear to be more effective than others at identifying key elements within the seismic section.

The effects of the techniques employed and the interpretational styles applied to the data set have implications for training and education.

Interpretations of the synthetic seismic image focused on areas of high- and low-intensity signals. Areas of low-intensity response in seismic images are often caused by disruption of layering due to diapirism or gas percolation through the overlying strata (e.g., Bouriak and Akhemtjanov, 1998; Veerayya et al., 1998), and many participants marked such features in an area of poor data quality, even though this part of the section was not crucial to the overall tectonic interpretation. Sixty-two percent of participants focused their interpretations on this area of no data (i.e., an area of high uncertainty). Annotating gas (a direct hydrocarbon indicator) or diapirism in this area could be critical in a commercial situation. The urge to interpret the part of the synthetic seismic image with the least data perhaps says something about human nature, but it also suggests that participants were drawn to anomalous areas with the highest- and lowest-intensity data.

In the following, we consider the influences of different types of prior knowledge and bias for our study in the context of definitions from psychology. In cognitive psychology, biases are commonly divided into types. The most relevant bias types for this study are described here, but see Krueger and Funder (2004) for a full discussion of bias types and their origins. Availability bias occurs when interpreters use the model or interpretation that is most dominant in their minds. For example, a geoscientist interpreting a new data set having just spent six months looking at fold and thrust belts will have the concepts for fold and thrust belt terrains most readily available in his or her mind. Anchoring bias is the failure to adjust away from experts' beliefs, dominant approaches, or initial ideas. In this case, interpreters may know that a seismic section is from, for example, the Gulf of Mexico, and will therefore have the concept of salt tectonics in mind because this is the accepted interpretational concept for the Gulf of Mexico area; they will not consider other concepts in their interpretations. Confirmation bias involves actively seeking out opinions and facts that support one's own beliefs or hypotheses. For example, a geoscientist who believes that the seismic section is from an extensional terrain will identify features that support this belief and ignore information that does not corroborate an extensional interpretation.
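
These bias types can be given a simple Bayesian reading: when the data barely discriminate between competing concepts (as here, where several geologically sound interpretations exist), the interpreter's prior dominates the posterior. The toy Python sketch below, with all numbers purely illustrative, shows a thrust-heavy prior (availability or anchoring bias) swamping a nearly flat likelihood:

```python
import numpy as np

settings = ["extension", "thrust", "inversion", "strike-slip", "diapirism"]

# Likelihood of the observed image under each concept: nearly ambiguous
# by design, weakly favoring the "correct" inversion model
likelihood = np.array([0.20, 0.21, 0.23, 0.16, 0.20])

def posterior(prior):
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

flat_prior = np.full(5, 0.2)                              # no dominant expertise
biased_prior = np.array([0.10, 0.60, 0.10, 0.10, 0.10])   # thrust-heavy prior

for name, prior in [("flat", flat_prior), ("thrust-biased", biased_prior)]:
    post = posterior(prior)
    best = settings[int(np.argmax(post))]
    print(f"{name:14s} -> most probable: {best} (P = {post.max():.2f})")
```

With the flat prior, the posterior weakly favors inversion; with the biased prior, the same ambiguous evidence yields a confident thrust interpretation.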

Examples of bias based on dominant tectonic setting expertise can be found at all levels of experience. Individual participants with 15+ years of experience anecdotally show evidence of availability and anchoring bias in the same way students do. Participants do, however, require some experience to undertake the exercise, because an interpreter has to be able to apply relevant knowledge and concepts to the data to produce a realistic interpretation. Many participants asked “where in the world” the seismic section was from. Participants were effectively asking for confirmation, provided by such context, for their interpretations. Alternatively, they may have been seeking a starting point on which to base their interpretations. Typically, when interpreting geological data, the geographical location, and hence broad tectonic setting, of the data is known, and interpreters use this prior information to aid their interpretations. An anchoring bias may therefore operate because interpreters expect to see a particular type of structure in a given setting.

We suggest that the synthetic seismic image may have been effectively biased toward a thrust tectonic setting interpretation because this setting received the highest proportion (26%) of answers. Conversely, the 2-D seismic section was negatively biased toward a strike-slip interpretation, the tectonic setting category with the lowest proportion (2%) of interpretational answers. As discussed earlier, many answers intended as strike-slip interpretations may have fallen into the unclear category because of the strict selection criteria used to categorize the results. This suggests that we may be seeing elements of both confirmation bias and disconfirmation bias (the use of features as evidence against a particular hypothesis or model) within the participant group: participants confirming thrust features but disconfirming strike-slip features.

Interpreting geological data is generally an under-constrained problem, requiring knowledge of geological analogues and an ability to apply these to new problems and areas. Frodeman (1995) set geology apart from classical sciences, such as physics, because of the scientific reasoning required in geological science. Frodeman argued that such scientific reasoning skills will become increasingly crucial for issues such as global warming, assessing uncertainty and risks in hazard prediction, solute transport, and resource management. In earth and environmental science, scientific uncertainty has an important impact on public policy formation. Pollack (2007) argued that scientific uncertainty should not be seen as a barrier to public policy development but as an opportunity for creative and competitive solutions that can be continuously developed. Assessing uncertainty and risk requires accurate geological framework models from which predictions can be made. Geoscientists must therefore acknowledge and evaluate conceptual uncertainty as a critical factor in maximizing the effectiveness of the geological reasoning process and, hence, in informing public policy. Understanding more about the factors affecting the concepts that geoscientists apply to information-limited data sets will improve our predictions and the assessment of risk associated with those predictions.

CONCLUSIONS

Conceptual uncertainty is likely to be a major risk factor for sciences in which decision making is based on the interpretation of data sets containing limited information. Our experiment has quantified the range in conceptual uncertainty for a single data set and shown that conceptual uncertainty can have a large effect on interpretational outcome. The interpretational answers of participants in our study show evidence of bias due to their prior knowledge. A range of factors affects how an individual's prior knowledge, and hence concepts, are applied to data sets; these factors include an interpreter's tectonic expertise and/or breadth of expertise, length of experience, and the techniques used to interpret a section. Distinguishing between these factors and putting practices in place to elicit intelligent information while mitigating the unconscious negative use of prior knowledge is a key challenge. Conceptual uncertainty, once quantified, can be used in combination with petrophysical models and other uncertainty calculations to increase the predictability of petroleum and other geological systems and their properties. How an individual geoscientist's prior knowledge may influence his or her interpretation, and hence affect the collective conceptual uncertainty for a data set, has important implications for training, team building, risk analysis, and decision making. Our results emphasize that a geological interpretation is a model that needs testing.

Acknowledgments

This work was supported by Midland Valley Exploration Ltd and the Scottish Executive SCORE scheme. Midland Valley's 2DMove software was used for forward modeling. Mike Goodwin and GX Technology are thanked for creating the synthetic seismic image using GXII software. The work could not have been completed without the support of individuals within the geoscience community who took part in the interpretation exercise. Andrew Curtis, Glen Stockmal, and Andy Calvert provided thorough and constructive reviews.

REFERENCES CITED

  1. Aguilera, R., 2004, Integration of geology, petrophysics, and reservoir engineering for characterization of carbonate reservoirs through Pickett plots: AAPG Bulletin, v. 88, p. 433–446, doi: 10.1306/12010303071.
  2. Baddeley, M.C., Curtis, A., and Wood, R., 2004, An introduction to prior information derived from probabilistic judgements: elicitation of knowledge, cognitive bias and herding, in Curtis, A., and Wood, R., eds., Geological Prior Information: Informing Science and Engineering: London, Geological Society Special Publication 239, p. 15–27.
  3. Bouriak, S.V., and Akhemtjanov, A.M., 1998, Origin of gas hydrate accumulation on the continental slope of the Crimea from geophysical studies, in Henriet, J.-P., and Mienert, J., eds., Gas Hydrates: Relevance to World Margin Stability and Climatic Change: London, Geological Society Special Publication 137, p. 215–222.
  4. Brown, A.R., 1986, Interpretation of 3D Seismic Data (6th ed.): AAPG Memoir 42, 541 p.
  5. Curtis, A., and Wood, R., eds., 2004, Geological Prior Information: Informing Science and Engineering: London, Geological Society Special Publication 239, 229 p.
  6. Deutsch, C.V., 2006, What in the reservoir is geostatistics good for?: Journal of Canadian Petroleum Technology, v. 45, no. 4, p. 14–20.
  7. Egermann, P., and Lenormand, R., 2005, A new methodology to evaluate the impact of localized heterogeneity on petrophysical parameters (kr, Pc) applied to carbonate rocks: Petrophysics, v. 46, no. 5, p. 335–345.
  8. Frodeman, R., 1995, Geological reasoning: Geology as an interpretive and historical science: Geological Society of America Bulletin, v. 107, no. 8, p. 960–968, doi: 10.1130/0016-7606(1995)107<0960:GRGAAI>2.3.CO;2.
  9. Jones, G.D., Barton, P.J., and Singh, S.C., 2007, Velocity images from stacking depth-slowness seismic wavefields: Geophysical Journal International, v. 168, p. 583–592, doi: 10.1111/j.1365-246X.2006.03055.x.
  10. Krueger, J.I., and Funder, D.C., 2004, Towards a balanced social psychology: Causes, consequences and cures for the problem-seeking behaviour and cognition: The Behavioral and Brain Sciences, v. 27, p. 313–327, doi: 10.1017/S0140525X04000081.
  11. Kuhn, T.S., 1962, The Structure of Scientific Revolutions: Chicago, The University of Chicago Press, 240 p.
  12. Levi-Strauss, C., 1966, The Savage Mind: Chicago, The University of Chicago Press, 310 p.
  13. Pollack, H.N., 2007, Scientific uncertainty and public policy: Moving on without all the answers: GSA Today, v. 17, p. 28–29, doi: 10.1130/GSAT01703GW.1.
  14. Rankey, E.C., and Mitchell, J.C., 2003, That's why it's called interpretation: Impact of horizon uncertainty on seismic attribute analysis: The Leading Edge, v. 22, p. 820–828, doi: 10.1190/1.1614152.
  15. Veerayya, M., Karisiddaiah, S.M., Vora, K.H., Wagle, B.G., and Almeida, F., 1998, Detection of gas-charged sediments and gas hydrate horizons along the western continental margin of India, in Henriet, J.-P., and Mienert, J., eds., Gas Hydrates: Relevance to World Margin Stability and Climatic Change: London, Geological Society Special Publication 137, p. 239–253.
  16. Yan, J., Tucker, M., and Lui, T., 2006, Reservoir description from well-log and reservoir engineering: An example from Triassic reservoirs in Northwest China: Petroleum Science and Technology, v. 24, p. 1417–1430, doi: 10.1080/10916460600904385.

1GSA Data Repository item 2007280, Uninterpreted seismic section and example questionnaire (DR1) and geological model details and synthetic seismic generation (DR2), is available at www.geosociety.org/pubs/ft2007.htm. You can also obtain a copy by writing to editing@geosociety.org.

