Reduce uncertainty in geomodelling by asking these essential questions
There are several known challenges to incorporating analogues into reservoir modelling workflows, yet the whole industry is talking about the importance of relying on past knowledge to help us predict the future quality and production capacity of hydrocarbon assets. The primary concerns when using analogues include: 1) the quality of the analogue; 2) its representativeness of the reservoir in question; 3) applying the analogue appropriately; 4) understanding its impact on the reservoir model; and 5) consistency in methodology.
To build a reservoir model, you must start with some critical datasets and an understanding of the structural evolution and depositional environment of the reservoir. The concern is knowing whether the datasets you have are wholly or partially representative of the reservoir, whether there is enough data to make a reasonable assessment, and whether your methodology is based on science rather than intuition alone. The questions below are meant to help geologists and petroleum engineers improve the accuracy of their models by questioning the foundational assumptions underpinning the model.
The 5 Essential Questions
- Do I have enough of the right kind of data?
- If I find I am lacking data or my data isn’t formatted in a way I can query easily, how do I expand my resource base?
- A. You could commission an expensive overhaul of your company's existing databases so the information is provided in a usable format to support your questions. But the likelihood of that exercise being completed in time for you to provide your recommendation is slim, to say the least.
- B. You could scour the internet, university archives, SPE or AAPG papers, or elsewhere for technical papers containing appropriate analogues for your particular reservoir type. Again, the issue becomes time – how much time can you afford to spend researching when a deadline for the field development paperwork is looming?
- C. You could commission new fieldwork to gather the appropriate analogue data, or approve studies such as reprocessing of seismic data, routine core analysis, special core analysis, or analysis of fluid samples. However, this may prove economically challenging, and there is no guarantee you would get the results back in time to incorporate them into the model. For example, a typical lab test to calculate relative permeability can take several months to a year before you receive the results. Or perhaps you work in an organization that can afford to drill a wildcat or exploratory well, in which case you would have access to new well and log data.
However, in a world of $50/bbl oil, options A–C are rather unrealistic. You could perhaps use option B effectively if you had a large team at your disposal, with junior support to do the data collection, but that too is somewhat of a pipe dream. So your last option, D, becomes a much more attractive solution.
- D. You could purchase third-party datasets containing analogues of the type of information you are seeking, in a format that makes sense for your queries. Several such databases exist, from academically sponsored consortium projects to commercially available products such as Ava Clastics. Many provide a wealth of knowledge that has been validated by industry experts and, in some cases, such as Ava Clastics, expressed in a way that enables benchmarking of your assumptions against hundreds of analogues.
- How can I apply my findings and test the results?
- How do I ensure the geologist and engineer are modelling at the same scale and that our understanding of the geological constraints is consistent?
- But what about models I am currently working on? How can I incorporate analogues without starting over?
Before getting to work building a reservoir model, you will likely start by examining the available 'fresh' data for the region, for example seismic surveys, geological studies, gravmag reports, and anything else that might give you some anchor points to lay a foundation. Or maybe you have an older model of the target you can dust off? The point is that you begin building a big-picture understanding of the area so you can ensure your model is moored to reality.
Then you might gather and reference data from other nearby wells for some correlative insight. You might glean valuable information from the well logs, core analysis reports, or production data that begins to give you an understanding of the reservoir's properties and potential behaviour. But you will need more detail to fill in the gaps. This is when you will often begin searching the company archives.
Most oil companies have treasure troves of data at their disposal. However, much of the information has not been synthesised or tagged with metadata in a manner that makes searching these repositories intuitive. Furthermore, they typically offer a limited view of geographically similar projects rather than supporting project-to-project comparisons.
For example, you may be able to search the data by basin- or play-type-specific criteria, e.g. all data available for the Bakken or within a county, but you might be less successful if you wanted to run a worldwide search of all the shallow-marine assets at a depth of x with a thickness of y. Or maybe your company organises subsurface data on a well-by-well basis. Imagine combing through each well's data one by one to see if you can find some meaningful analogues.
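To make the contrast concrete, here is a minimal sketch of the kind of worldwide, play-type-specific query you would like to run against a well-organised analogue table. The field names, records, and `query` helper are all invented for illustration; real databases obviously differ.

```python
# Hypothetical analogue records; every name and value here is invented.
analogues = [
    {"name": "Asset A", "environment": "shallow-marine", "depth_m": 2100, "thickness_m": 45},
    {"name": "Asset B", "environment": "fluvial",        "depth_m": 1800, "thickness_m": 30},
    {"name": "Asset C", "environment": "shallow-marine", "depth_m": 2600, "thickness_m": 60},
]

def query(records, environment, min_depth, max_depth, min_thickness):
    """Return analogues matching a play-type, depth, and thickness search."""
    return [
        r for r in records
        if r["environment"] == environment
        and min_depth <= r["depth_m"] <= max_depth
        and r["thickness_m"] >= min_thickness
    ]

matches = query(analogues, "shallow-marine", 2000, 3000, 40)
print([m["name"] for m in matches])  # both shallow-marine assets in range
```

A one-line filter like this is the point: when the data is tagged with consistent metadata, the search takes seconds instead of a well-by-well trawl.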
So how do you access the right data in the least amount of time for the reservoir you're trying to model? This brings us to question #2.
Regardless of which combination of the above methods you adopt, the ultimate goal is to benchmark your findings against an aggregate so you can better eliminate bias and reduce uncertainty in your geomodel. It's very easy to fall into the trap of using one or two analogues as anchors for your model; to see how doing so can compromise the viability of your results, read this whitepaper describing what happens when experts rely on too few data points to interpret well data.
There are several methods you can choose to incorporate analogue data into your reservoir model - some are faster than others.
Although not particularly time efficient, you could, using a spreadsheet for example, summarise the data supporting the particular parameter in question and then analyse the findings to come up with an approximation and a range of uncertainties. This assumes you have collected a large enough sample to reduce bias and be truly representative of the reservoir. You would then need to manually input those parameters into the appropriate modelling algorithm for the particular facies, geobody, interval, stratigraphy, etc. in your model. This process can be laborious, especially if your model has many zones and the input parameters vary by zone.
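The spreadsheet approach boils down to aggregating analogue values for one parameter and deriving a central estimate plus an uncertainty range. A minimal sketch, with invented net-to-gross values and a hand-rolled percentile helper:

```python
def percentile(values, p):
    """Linear-interpolated percentile (p in 0-100) of a list of numbers."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Invented analogue values for one parameter (net-to-gross fractions).
net_to_gross = [0.42, 0.55, 0.61, 0.48, 0.70, 0.52, 0.58, 0.65, 0.45, 0.50]

# P10/P50/P90 give a central estimate and an uncertainty range to carry
# into the modelling algorithm for the relevant facies or zone.
p10 = percentile(net_to_gross, 10)
p50 = percentile(net_to_gross, 50)
p90 = percentile(net_to_gross, 90)
print(f"P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f}")
```

Even this toy version makes the caveat in the text obvious: with only a handful of analogues, the P10–P90 spread says more about your sample than about the reservoir.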
Some third-party databases offer the functionality to aggregate the appropriate analogues based on your specified search criteria and will automatically generate acceptable ranges for your parameter. For example, with Ava Clastics, you can incorporate your own assumptions to test their plausibility against the database, and you can even add your own analogues or expertise so your company's proprietary information can also be expressed. What's unique about Ava Clastics is that it will not only automatically transform the analogues into parameters, but will also express them in algorithms appropriate for immediate use in the Petrel* E&P software platform. This whole process takes only about 10-15 minutes, which means you can test multiple scenarios rapidly and deploy them directly to your Petrel model. There are also databases that will let you test the relationship of one parameter against another, such as porosity and resistivity, where you could cross-plot the queried analogues and see where your values lie in comparison.
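The cross-plot idea can be sketched without drawing anything: check where one reservoir's porosity/resistivity pair sits relative to the cloud of queried analogues. All values below are invented, and the min/max "envelope" is a deliberately crude stand-in for eyeballing a real cross-plot.

```python
# Invented analogue pairs: (porosity fraction, resistivity in ohm-m).
analogues = [
    (0.12, 18.0), (0.15, 14.0), (0.18, 11.0), (0.21, 9.0),
    (0.24, 7.5), (0.27, 6.0), (0.30, 5.0),
]

def within_envelope(point, cloud):
    """True if the point falls inside the min/max envelope of the analogue
    cloud on both axes - a crude proxy for eyeballing a cross-plot."""
    xs = [p[0] for p in cloud]
    ys = [p[1] for p in cloud]
    x, y = point
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

print(within_envelope((0.20, 10.0), analogues))  # inside the analogue spread
print(within_envelope((0.40, 2.0), analogues))   # an outlier worth re-examining
```

A value outside the envelope isn't automatically wrong, but it is exactly the kind of assumption the cross-plot exercise is meant to surface for a second look.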
The above image represents three different scenarios tested using Ava Clastics. An asset team can rapidly test the impact on the reservoir model if they have uncertainty in a particular parameter; in this example, the depositional environment was the variable being analysed.
In either approach, the point is that you need a large enough sample of analogues so you can quantitatively assess where your assumptions lie in relation to the aggregate, and so you can select an appropriate range of probabilities for the parameter.
This is always a tricky question. The geologist and reservoir engineer might be modelling at the same scale, but the relevant geology impacting the production calculations might be at a smaller scale than the reservoir model or simulator can handle. This is a common problem in modelling, particularly when representing tighter, more heterogeneous reservoirs. It's a major catalyst for new technology development such as digital rock physics, which impresses upon geomodellers and petroleum engineers the importance of factoring in these fine-grained details. It's also a chief concern for operators looking to maximise their secondary and tertiary recovery programs. Facies modelling similarly occurs at a small scale and can be challenging to express in the reservoir model. In all cases, it is remarkable how little (if any) guidance or general 'rules of thumb' are available to advise on the selection of modelling scale, which is primarily expressed as grid cell dimensions.
But let’s assume for a moment that you start off modelling at the same scale. Do all members of the team have the same understanding of the various components and geobodies that make up the reservoir, and their importance? Do all the team members agree the cell dimensions can capture the heterogeneity of the reservoir, or is the reservoir engineer going to upscale away all that detail? If not, you might be in for a frustrating ride of miscommunication as the geologist is drawing pictures on a whiteboard and the reservoir engineer is trying to decipher what on earth he or she is saying.
One way to avoid these difficult conversations is to use guides and documentation such as the Survival Guide to Fluvial Modelling or the embedded references displayed in some analogue databases. These reference materials are designed to eliminate confusion and to support multi-disciplinary collaboration in building a common understanding of the reservoir. This is important because the focus then becomes what is making the reservoir behave in a particular way, or how the reservoir will produce. Another important step is to develop a quantitative justification for cell dimensions. For example, the Ava Clastics grid cell size calculator embedded in the program helps geoscientists establish the maximum lateral and vertical cell dimensions required to capture the heterogeneity the analogue indicates. This means both the geologist and the engineer understand the critical controlling elements of the reservoir, and these can then be modelled in appropriate detail and scale. It also helps the team reduce inefficiencies due to miscommunication.
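The internals of the Ava Clastics calculator aren't described here, but the quantitative justification it provides can be sketched with a commonly used heuristic (an assumption on my part, not the tool's documented method): resolve the smallest geologically significant geobody with at least a few cells in each direction.

```python
def max_cell_size(body_width_m, body_length_m, body_thickness_m, cells_per_body=3):
    """Maximum (dx, dy, dz) cell dimensions in metres, assuming the smallest
    significant geobody must be resolved by `cells_per_body` cells per axis.
    The 3-cells-per-body default is an illustrative rule of thumb."""
    return (
        body_width_m / cells_per_body,
        body_length_m / cells_per_body,
        body_thickness_m / cells_per_body,
    )

# e.g. a channel belt 300 m wide, 900 m long, 6 m thick (invented numbers)
dx, dy, dz = max_cell_size(300, 900, 6)
print(dx, dy, dz)  # 100.0 300.0 2.0
```

Even a back-of-the-envelope calculation like this gives the geologist and the reservoir engineer a shared, defensible number to argue about, rather than a whiteboard sketch.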
Within Ava Clastics, the asset team can review the set of analogues used to produce the parameters, and they can see example diagrams of the architectural and facies elements, as well as sample well and sedimentary log data, all of which can be exported into the reservoir model. The reservoir engineer can not only see which parameters were selected and which algorithms were used to express the analogues, but can also use the reference diagrams as a means of understanding the various geological layers.
In many cases, once you have a database or a set of reference analogues, you can benchmark your parameters fairly easily. Again, the critical thing is to ensure your sample size is representative of your reservoir conditions. One or two analogues aren’t likely to tell you much about the quality of your assumptions, but as we all know – some data is better than no data at all.
Some databases offer the ability to upload your existing model directly so you can see visually how your assumptions stack up against the 'norms'. This is an easy process that doesn't take much time but can really improve your modelling accuracy. If you see that your parameters are outliers, you may want to reconsider them or test some more plausible ranges.
In lieu of uploading your full model, you might also find some applications that will allow you to key in your parameters to test them against a filtered dataset. This can also give you an understanding of how accurately you may have represented the reservoir.
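Keying a parameter in against a filtered analogue set amounts to a percentile-rank check. A minimal sketch, with invented porosity values and an assumed P10–P90 acceptance band (the band is my illustrative choice, not a standard):

```python
def percentile_rank(values, x):
    """Percentage of analogue values at or below x."""
    return 100.0 * sum(v <= x for v in values) / len(values)

# Invented porosity fractions from a filtered analogue set.
analogue_porosity = [0.14, 0.16, 0.17, 0.19, 0.20, 0.21, 0.23, 0.25, 0.26, 0.28]

my_porosity = 0.31
rank = percentile_rank(analogue_porosity, my_porosity)
# Flag values outside an assumed P10-P90 band for a second look.
flagged = rank < 10 or rank > 90
print(f"rank={rank:.0f}%  flagged={flagged}")
```

Here the assumed value sits above every analogue, so it gets flagged; that doesn't make it wrong, but it does mean the range deserves re-examination before it anchors the model.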
Once your model is built, it's always a good idea to compare and contrast it against other similar models.