There are several known challenges to incorporating analogues into reservoir modelling workflows, yet the industry broadly agrees on the importance of using past knowledge to help predict the future quality and production capacity of hydrocarbon assets. The primary concerns when using analogues include: 1) the quality of the analogue; 2) its representativeness of the reservoir in question; 3) applying the analogue appropriately; 4) understanding its impact on the reservoir model; and 5) consistency of methodology.
To build a reservoir model, you must start with some critical datasets and an understanding of the structural evolution and depositional environment of the reservoir. The concern is knowing whether the datasets you have are wholly or partially representative of the reservoir, whether there is enough data to make a reasonable assessment, and whether your methodology is based on science rather than intuition alone. The questions below are meant to help geologists and petroleum engineers improve the accuracy of their models by questioning the foundational assumptions underpinning them.
The 5 Essential Questions
- Do I have enough of the right kind of data?
- If I find I am lacking data or my data isn’t formatted in a way I can query easily, how do I expand my resource base?
- A) You could commission an expensive overhaul of your company’s existing databases so the information is provided in a usable format to support your questions. But the likelihood of that exercise being completed in time for you to deliver your recommendation is slim, to say the least.
- B) You could scour the internet, university archives, SPE or AAPG papers, or elsewhere for technical papers containing appropriate analogues for your particular reservoir type. Again, the issue becomes time: how much time can you afford to spend researching when a deadline for the field development paperwork is looming?
- C) You could commission new fieldwork to gather the appropriate analogue data, or approve studies such as reprocessing of seismic data, routine core analysis, special core analysis, or analysis of fluid samples. However, this may prove economically challenging, and there is no guarantee you would get the results back in time to incorporate them into the model. For example, a typical laboratory test to calculate relative permeability can take several months to a year before you receive the results. Or perhaps you work in an organisation that can afford to drill a wildcat or exploratory well, in which case you would have access to new well and log data.
However, in a world of $50/bbl oil, options A–C are rather unrealistic. You could perhaps use option B effectively if you had a large team at your disposal, with junior support to do the data collection, but that too is somewhat of a pipe dream. So your last option, D, becomes a much more attractive solution.
- D) You could purchase third-party datasets containing analogues of the type of information you are seeking, in a format that makes sense for your queries. There are several such databases, from academically sponsored consortium projects to commercially available products such as Ava Clastics. Many of these databases provide a wealth of knowledge that has been validated by industry experts and, in some cases, such as Ava Clastics, expressed in a way that enables benchmarking of your assumptions against hundreds of analogues.
- How can I apply my findings and test the results?
- How do I ensure the geologist and engineer are modelling at the same scale and that our understanding of the geological constraints is consistent?
- But what about models I am currently working on? How can I incorporate analogues without starting over?
Before setting to work on a reservoir model, you will likely start by examining the available ‘fresh’ data for that particular region, for example seismic surveys, geological studies, gravity-magnetic (grav-mag) reports, and anything else that might give you some anchor points to lay a foundation. Or perhaps you have an older model of the target you can dust off. The point is that you begin building a big-picture understanding of the area so you can ensure your model is moored to reality.
Then you might gather and reference data from other nearby wells for some correlative insight. You may be able to glean valuable information from well logs, core analysis reports, or production data that begins to build your understanding of the reservoir properties and the reservoir’s potential behaviour. But you will need more detail to fill in the gaps. This is when you will often begin searching the company archives.
Most oil companies have treasure troves of data at their disposal. However, much of that information has not been synthesised or tagged with metadata in a manner that makes searching these repositories intuitive. Furthermore, such archives typically offer a view limited to geographically similar projects rather than supporting project-to-project comparisons.
For example, you may be able to search the data on basin- or play-type-specific criteria, e.g. all data available for the Bakken or within a county, but you might be less successful if you wanted to run a worldwide search of all shallow-marine assets at a depth of x with a thickness of y. Or perhaps your company organises subsurface data on a well-by-well basis. Imagine combing through each well’s data one by one to see if you can find some meaningful analogues.
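To make the contrast concrete, here is a minimal sketch of the kind of structured query that becomes possible once analogue records carry consistent metadata. The field names, records, and `find_analogues` helper are all invented for illustration; they do not come from any real database.

```python
# Hypothetical analogue records with consistent metadata tags.
# All names and values are invented for this sketch.
analogues = [
    {"name": "Field A", "setting": "shallow-marine", "depth_m": 2100, "thickness_m": 45},
    {"name": "Field B", "setting": "fluvial",        "depth_m": 1800, "thickness_m": 30},
    {"name": "Field C", "setting": "shallow-marine", "depth_m": 2300, "thickness_m": 60},
]

def find_analogues(records, setting, min_depth, max_depth):
    """Return records matching a depositional setting within a depth window."""
    return [r for r in records
            if r["setting"] == setting and min_depth <= r["depth_m"] <= max_depth]

matches = find_analogues(analogues, "shallow-marine", 2000, 2500)
print([m["name"] for m in matches])  # → ['Field A', 'Field C']
```

Without that metadata, the equivalent of this three-line filter is a manual, well-by-well trawl through the archives.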
So how do you access the right data, in the least amount of time, for the reservoir you’re trying to model? This brings up question #2.
Regardless of which combination of the above methods you adopt, the ultimate goal is to benchmark your findings against an aggregate so you can better eliminate bias and reduce uncertainty in your geomodel. It is easy to fall into the trap of using one or two analogues as anchors for your model, but you can see how doing so can negatively impact the viability of your results by reading this whitepaper, which describes what happens when experts rely on too few data points to interpret well data.
There are several methods you can choose to incorporate analogue data into your reservoir model - some are faster than others.
Although not particularly time efficient, you could, using a spreadsheet for example, summarise the data that supports the particular parameter in question and then analyse the findings to arrive at an approximation and a range of uncertainty. This assumes you have collected a large enough sample to reduce bias and be truly representative of the reservoir. You would then need to manually input those parameters into the appropriate modelling algorithm for the particular facies, geobody, interval, stratigraphy, etc. in your model. This process can be laborious, especially if your model has many zones and the input parameters vary by zone.
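The spreadsheet-style summarisation above can be sketched in a few lines. This is an illustration only: the net-to-gross values are invented, and treating the first and last decile cut points as P10/P90 is a simple approximation, not a prescribed method.

```python
# Illustrative sketch (invented values) of summarising analogue measurements
# of a single parameter, here net-to-gross ratio, into a best estimate and
# an uncertainty range for input to a modelling algorithm.
import statistics

# Net-to-gross values from a hypothetical set of analogue reservoirs
net_to_gross = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.63, 0.52, 0.68, 0.57]

best_estimate = statistics.mean(net_to_gross)

# Nine cut points dividing the sample into ten equal groups; the first and
# last approximate the P10 and P90 of the distribution.
q = statistics.quantiles(net_to_gross, n=10, method="inclusive")
p10, p90 = q[0], q[-1]

print(f"best estimate: {best_estimate:.2f}, range: {p10:.2f}-{p90:.2f}")
```

Even this small example shows why the manual route is laborious: with many zones, each needing its own sample, summary, and range, the bookkeeping multiplies quickly.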
Some third-party databases offer the functionality to aggregate the appropriate analogues based on your specified search criteria and will automatically generate acceptable ranges for your parameter. For example, with Ava Clastics, you can incorporate your own assumptions to test their practicality against the database, and you can even add your own analogues or expertise so that your company’s proprietary information is also expressed. What is unique about Ava Clastics is that it will not only automatically transform the analogues into parameters, but also express them in algorithms appropriate for immediate use in the Petrel* E&P software platform. The whole process takes only about 10-15 minutes, which means you can test multiple scenarios rapidly and deploy them directly to your Petrel model. There are also databases that allow you to test the relationship of one parameter against another, such as porosity and resistivity, where you could cross-plot the queried analogues and see where your values lie in comparison.
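The cross-plot check described above amounts to asking whether your estimated values fall inside the cloud of queried analogues. The sketch below uses a simple bounding-box test as a text-based stand-in for a visual cross-plot; the (porosity, resistivity) pairs and the envelope logic are invented for illustration.

```python
# Hypothetical (porosity fraction, resistivity ohm-m) pairs returned by an
# analogue query; values are invented for this sketch.
analogue_points = [(0.18, 12.0), (0.22, 9.5), (0.15, 15.0), (0.20, 11.0),
                   (0.25, 8.0), (0.17, 13.5), (0.21, 10.2), (0.19, 12.5)]

# Your own estimate for the reservoir being modelled (also invented)
my_point = (0.23, 9.0)

por_values = sorted(p for p, _ in analogue_points)
res_values = sorted(r for _, r in analogue_points)

def within_range(value, values):
    """True if value lies between the min and max of the analogue sample."""
    return values[0] <= value <= values[-1]

in_cloud = (within_range(my_point[0], por_values)
            and within_range(my_point[1], res_values))
print("within analogue envelope" if in_cloud else "outside analogue envelope")
```

A real cross-plot also reveals the trend between the two parameters, not just the extremes, so a value inside the bounding box can still sit off-trend; the plot remains the more informative check.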