Discover how dGB Earth Sciences leverages machine learning to transform 2D seismic lines into pseudo-3D volumes. In this webinar, David Markus (Head of AI at dGB) and co-authors present workflows, case studies, and results from projects with Conrad Asia Energy. Learn about structurally guided interpolation, bandwidth extension, and practical applications that reduce uncertainty, enhance subsurface clarity, and support efficient 3D seismic planning.

#OpendTect #SeismicInterpretation #Pseudo3D #Geophysics #MachineLearningGeoscience #SubsurfaceImaging #dGBEarthSciences #SeismicDataTransformation

Duration: 26:13

--- Transcript ---

Welcome to the dGB Earth Sciences webinar titled Transforming 2D Seismic into Pseudo 3D. My name is David Markus; I'm the head of AI globally for dGB Earth Sciences. I'd like to thank my co-authors Kristoffer Rimaila and Paul de Groot from dGB, and Radhi Muammar from Conrad Asia Energy, whose company kindly allowed us to present the images shown in this webinar.

These images also appear in the February First Break publication of a similar title, so feel free after the webinar to look up the First Break article in February's issue. For those who are unfamiliar with dGB, in case you're joining our webinar series for the first time: we're a global technology company founded in 1995. We provide seismic interpretation solutions, including producing OpendTect, the only open-source seismic interpretation system that also supports advanced commercial geoscience plugins. We have offices in the Netherlands, Kuala Lumpur, the USA, India, Ho Chi Minh City, and Jakarta.

We've always been a machine learning company, even before the founding of dGB, starting with research and development at TNO in 1991, including prototypes of the multi-layer perceptron UVQ for pseudo wells. In 1995 dGB was started; the first commercial product was a chimney cube, and the product was called dTect. Then in 2003 the open-source version of OpendTect was first released, and that included the first commercial plugin, which was the neural network plugin.

In 2020, on the back of all the advances in deep learning, we began development of and released our first commercial machine learning plugin, and that still includes the neural network plugin. I only mention this because we talk about machine learning in this webinar, and I wanted to give the audience a little background on how much experience we have on this topic. So let's talk a little bit about the value of seismic data. We should all know what 2D seismic is: in a nutshell, we use straight receiver lines to capture subsurface structures, visualizing major features. Compared to 3D it's faster and cheaper, and requires less permitting, surveying, and processing time.

2D still remains crucial for companies in assessing new permits, exploring new basins, and providing foundational insights that guide decision-making processes. But of course 3D is more valuable: 3D imaging provides continuous subsurface information, while 2D seismic offers only strips of data. The advantages include more data, clearer subsurface views, detailed structural and stratigraphic features, higher success rates, and thus longer well lifespans. 3D can be considered more cost effective when we count mobilization time, detail, and reduced uncertainty.

But even though per unit of data it may be more cost effective, it is still much more expensive than 2D, and so optimizing 3D data acquisition is crucial for maximizing savings and profitability. That brings us to the topic of today's webinar, which is moving from 2D seismic to something called pseudo 3D.

This enhances 2D seismic data by converting it to 3D using machine learning, available migrated stacks, and other 3D vintages. So let's talk a little bit about the process of transforming 2D lines into 3D seismic volumes, leveraging machine learning as a tool that can aid us, not necessarily replace all of the steps in the process, but something we can utilize to the best of our abilities.

So why would we want to use machine learning? Well, machine learning can identify complex relationships in data of varying accuracies and scales, and it's applicable to many processes that are of interest in upstream exploration. These models can capture arbitrary dependencies, but of course we need to be careful to avoid overfitting, and they can still miss the globally optimal solution.

So we must be careful when training our models. One example of a machine learning model that people might be familiar with is the generative adversarial network (GAN). It basically consists of two neural networks contesting with each other in a zero-sum game framework: the generator network creates fake inputs, and the discriminator tries to distinguish between real and fake inputs. GANs are often used for super resolution or for generating novel outputs based on just a description of the inputs.
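
As a minimal illustration of that two-network game (not the model used in this work), a GAN training step in PyTorch might look like the sketch below; the tiny fully connected networks and dimensions are placeholders.

```python
# Minimal GAN training-step sketch (illustrative only; not dGB's model).
# The fully connected generator/discriminator are placeholder architectures.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 256

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    n = real_batch.size(0)
    # Discriminator: learn to separate real samples from generated ("fake") ones.
    fake = generator(torch.randn(n, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator (the zero-sum game).
    loss_g = bce(discriminator(generator(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```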

Here's an example of a GAN applied to seismic imaging; in this case it's supersampling the bin spacing to increase fault imaging quality, and thus it's an application of something akin to super resolution. But it's not really applicable to what we're talking about in 2D, and let's have a look at why.

2D mapping of spatial features provides gross outlines, but it lacks detail, especially in between the lines, making it hard to identify smaller features as they intersect fewer lines. In the natural image world, of course, machine learning models can be trained on thousands or millions of faces, enabling them to generalize from this data and predict what the samples between the lines should look like, regardless of resolution.

But seismic is different from a natural image, in this case an image of my daughter: if we just did an interpolation of those 2D seismic lines, we would have no idea that those eyes should exist. A neural network trained on human faces could generalize and assume that those eyes exist there, but we cannot do that kind of thing with 2D seismic, and let's have a look at why.

Unlike human faces with common features, subsurface data lacks predictable relationships between those features, making it harder to infer unseen details from the 2D lines. If we could somehow infer the features in between, we could perform the ideal super-resolution upscaling, but for 2D seismic we can't know details like this for certain, because our training data doesn't contain enough information to predict those kinds of features. So what does this have to do with seismic? Well, pixel size in seismic could refer to X and Y bin spacing on time slices.

Interpolating for super resolution in that sense ignores the vertical data in the Z direction, while seismic traces show event correlations revealing features like stratigraphy and faults, so we need a method that respects seismic data's unique characteristics. So perhaps it's a good idea to look at some ideas tried before by others in terms of making pseudo-3D volumes.

This is a traditional, meaning non-machine-learning-enhanced, pseudo-3D example from Abu Dhabi; it's an example of merging post-stack 2D and 3D data. The pre-processing included things that made a lot of sense: amplitude rescaling, time gain, frequency balancing, phase corrections, and bulk time shifts for misties. Now, the authors admit that they did very basic pre-processing, you know, the phase corrections being sign flips and just simple bulk shifts for the misties.

They then did horizon-guided interpolation, which was an innovation, but they did it on a very coarse grid, and then they tried to upscale the data to 3D resolution using kriging to fill in the gaps. Their post-processing was limited to smoothing, as far as we could tell. Some key points of note in the study: without structurally consistent guided interpolation, grid-line cancellation and attenuation occur, resulting in smooth, detail-lacking interpolation that doesn't look like real seismic data. And a bulk shift to tie the water bottom, or to some seismic datum, is really insufficient; misties tend to add up, especially as we get deeper.

The structurally consistent interpolation cannot do a good job without those ties being handled everywhere, and vertical gaps caused by misties between the lines, as well as subvertical gaps from misties, can also result in zeroed areas. Handling the edges of the data and handling misties are really important steps, and we'll discuss that later when we get to our workflow.

Another pseudo-3D workflow without machine learning is a pre-stack pseudo-3D workflow, where seismic gathers are de-migrated, the data is matched and merged, globally adjusted for phase, time, and frequency misties, and dips are corrected; the publication doesn't specify how they do that. Horizons are projected using a least-squares algorithm to create gathers at each bin, and finally the gathers are re-migrated to produce the final outputs.

Now, although 3D migration more accurately positions events, this process is time-intensive and requires high processing power, something akin to 3D processing. So now let's look at how we can use machine learning to aid us in the pseudo-3D creation process. We want to use widely available post-stack data, as opposed to pre-stack data, for cost effectiveness and quick results without reprocessing; we want to apply machine learning to enhance those results and speed up workflows; and we aim for fidelity that's close to real 3D seismic.

Then of course we want to leverage the pseudo 3D to guide 3D seismic planning and support economic decisions. Looking at the data requirements, we need one or more vintages of 2D seismic; it can be full stack, and we can also use multiple angle or offset stacks and do them each independently. For the structurally guided interpolation we very much need interpreted horizons.

At the very least we need a water bottom (or, on land, a near-surface datum) and as many horizons as possible, because we want to capture as many geologic structural features as we can. Then, of course, optionally we could have 3D stacks, so we can ground-truth our work, and also wells for tying the seismic at the very end. The pseudo-3D process consists of five steps, although I'm not sure what happened on this slide.

Our new workflow builds on earlier work on machine learning, seismic-driven interpolation methods and consists of four main tasks: harmonization, interpolation, post-interpolation processing, and spectral enhancement. Harmonization balances the amplitudes, phase, and frequencies; it's essential for multi-vintage legacy data, but even in single-vintage projects we can correct overprocessed data, such as aggressive whitening and bandwidth extension that could actually detract from the ability to get a good final image.

Horizon-guided interpolation generates a first-pass pseudo-3D volume, and depending on the line spacing we can use either a machine learning model as the interpolator or more conventional structurally consistent interpolation algorithms. Post-interpolation processing removes artifacts from the first-pass 3D volume, and in the final step we apply a machine learning model to enhance the bandwidth and resolution, spectrally enhancing the interpolated data while maintaining amplitude variations laterally and vertically.

Here's an example from the Conrad data where three vintages of data were spectrally and amplitude balanced. In this case what we do is bring things to the lowest common denominator first. The 2D survey here had very high resolution, with a very high spectral cutoff frequency; I think the data was usable to something around 120 hertz, as it was really well processed. The two 3D surveys were much more limited, so we had to combine them in a way that made it appear they were all processed the same way.
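
As a rough sketch of that lowest-common-denominator balancing (illustrative only, not dGB's implementation), each vintage could be band-limited to an assumed common high-cut and rescaled to a common RMS amplitude:

```python
# Rough sketch of spectral and amplitude balancing to a common bandwidth.
# Assumes traces as a (n_traces, n_samples) array; the 60 Hz common high-cut
# below is an assumed value, not taken from the Conrad project.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def balance_to_common_band(traces, dt, f_high, target_rms=1.0):
    """Low-pass each trace to the common high-cut and scale to a target RMS."""
    nyquist = 0.5 / dt
    sos = butter(4, f_high / nyquist, btype="low", output="sos")
    filtered = sosfiltfilt(sos, traces, axis=-1)
    rms = np.sqrt(np.mean(filtered ** 2, axis=-1, keepdims=True)) + 1e-12
    return filtered / rms * target_rms

# Example: the high-resolution 2D lines (usable to ~120 Hz) and the 3D vintages
# are all limited to the same assumed common high-cut.
dt = 0.002                                   # 2 ms sampling
lines_2d = np.random.randn(100, 1500)        # placeholder data
balanced_2d = balance_to_common_band(lines_2d, dt, f_high=60.0)
```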

So we did a spectral and amplitude balance. As I mentioned before, mistie analysis and compensation is very important: being able to adjust the input seismic, warping it not just in a believable manner, but in a way that doesn't disrupt the frequency spectrum of the seismic data and doesn't pull the wavelets apart. We think we've solved that. Here's an example of a mistie analysis that's been done globally, with corrections applied.
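
For illustration, the local part of a mistie analysis, measuring the time shift between two lines at an intersection by cross-correlation, could look like this; the global, spectrum-preserving warping described above is not shown:

```python
# Minimal sketch of measuring a time mistie at a line intersection by
# cross-correlating the two traces; illustrative only, not the global solver.
import numpy as np
from scipy.signal import correlate

def mistie_delay(trace_ref, trace_other, dt):
    """Return how much trace_other lags trace_ref, in seconds."""
    xcorr = correlate(trace_other, trace_ref, mode="full")
    lag = np.argmax(xcorr) - (len(trace_ref) - 1)
    return lag * dt

# Synthetic check: a trace delayed by 4 samples (8 ms at 2 ms sampling).
dt = 0.002
t = np.arange(1000) * dt
ref = np.exp(-((t - 1.0) ** 2) / 0.01) * np.sin(2 * np.pi * 30 * t)
delayed = np.roll(ref, 4)
print(mistie_delay(ref, delayed, dt))  # ~ +0.008 s: shift the line up by this to tie
```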

Although there are still some areas, especially up shallow, that are not perfectly handled, the target in this case was deeper, so we didn't really care if the very near surface wasn't perfect. Once we have these horizons, we can build what we call a medium-frequency model of the subsurface with structurally conformal interpolation.

This means creating conformal 3D horizons from the corrected 2D horizons, and we can use several tools that are available to us inside our software OpendTect: inversion-based flattening, dip-steered tracking, geological constraints, and optionally correlation QC in the Wheeler domain, so that we can focus on a structurally consistent medium-frequency model that doesn't have high-resolution noise. This model enables fast 3D interpretation updates.
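
To make the idea concrete, here is a bare-bones sketch of interpolating between two 2D traces along proportional slices between a top and base horizon, rather than along constant time; this is illustrative only, and not OpendTect's structurally consistent interpolation:

```python
# Sketch of horizon-guided ("structurally conformal") interpolation between
# two 2D traces. Amplitudes are blended along proportional (stratal) slices
# between a top and base horizon; samples outside the horizon pair are zeroed.
import numpy as np

def conformal_interp(trace_a, top_a, base_a, trace_b, top_b, base_b,
                     top_x, base_x, w, dt, n_layers=200):
    """Interpolate a trace at a location between lines A and B.

    top_*/base_* are horizon times (s) at A, B and the target location X
    (the latter taken from the interpolated 3D horizons); w is the distance
    weight toward B (0..1).
    """
    t_a = np.arange(len(trace_a)) * dt
    t_b = np.arange(len(trace_b)) * dt
    frac = np.linspace(0.0, 1.0, n_layers)            # stratal coordinate
    amp_a = np.interp(top_a + frac * (base_a - top_a), t_a, trace_a)
    amp_b = np.interp(top_b + frac * (base_b - top_b), t_b, trace_b)
    amp_x = (1 - w) * amp_a + w * amp_b                # blend along stratal slices
    # Map the blended stratal samples back to the time axis at the target bin.
    t_x = top_x + frac * (base_x - top_x)
    t_out = np.arange(len(trace_a)) * dt
    return np.interp(t_out, t_x, amp_x, left=0.0, right=0.0)
```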

We can already start to pick 3D horizons on this, extract attributes, RMS in a window around the target and things like this (a simple sketch of that follows below), and of course we can do additional post-processing. So even at this stage we already have a very usable product that either used machine learning or did not: if we used machine learning in the interpolation then it's there, and if we used one of our methods of structurally consistent interpolation without machine learning, then we haven't used any yet.
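
Here's what that RMS-in-a-window attribute extraction could look like in a few lines (illustrative only; OpendTect computes such attributes natively), assuming an amplitude cube and a horizon already sampled on the 3D grid:

```python
# Sketch of extracting an RMS amplitude attribute in a window around a horizon.
# Assumes a (n_il, n_xl, n_t) amplitude cube and a horizon given in samples.
import numpy as np

def rms_around_horizon(cube, horizon_samples, half_window=10):
    """RMS amplitude in +/- half_window samples around the horizon."""
    n_il, n_xl, n_t = cube.shape
    out = np.zeros((n_il, n_xl))
    for i in range(n_il):
        for j in range(n_xl):
            k = int(horizon_samples[i, j])
            lo, hi = max(k - half_window, 0), min(k + half_window + 1, n_t)
            out[i, j] = np.sqrt(np.mean(cube[i, j, lo:hi] ** 2))
    return out
```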

So let's think about what we could do with machine learning in the next step, knowing that we've reduced the frequency content of the spectrum in the balancing process. Keeping an eye on what happens if we didn't do structurally conformable interpolation, here's an example of trying to interpolate without it and then with it, and we can see inside the red circles the large differences that would be, I guess, ungeological; I want to say unphysical,

but I guess ungeological is closer to the truth. So we sort of prove to ourselves that we need structurally conformal interpolation. Once we do that and look at an example of the medium-frequency interpolated model that results from this structurally guided interpolation, we can see that it already looks like very usable seismic, but we remind ourselves that the spectrum is now limited compared to the original source data of our best 2D lines.

So what we'd like to do is perform a type of super resolution for bandwidth extension. In this case, in the GAN sense, the low-resolution image is created from a high-resolution one using blur and noise, and we want to learn to map from the low-resolution to the high-resolution image using these degraded inputs and a high-resolution ground truth.

For our seismic data, the high-frequency images are predicted from these inherently blurred, lower-frequency images, which are our medium-frequency model. So everywhere there is a 2D line, where we have our best high-resolution seismic in 2D, we can extract from the medium-frequency model essentially a blurred image of the same section, and we can use that to train.

Now, we could train machine learning models on data that wasn't part of this area, but we shouldn't, we should never do that: we don't want to introduce geology from somewhere else that doesn't fit. In this case, if we had very limited input data we could create misleading patterns in the high-resolution image, and of course if we used a pre-trained model from another basin we would also introduce hallucinations, perhaps geological features that should not exist.

So, using various degradation functions and augmentations to enhance the model's generalization, we can create diverse low-resolution, or medium-frequency, pseudo-3D images, and then we can extract those and learn to infer the bandwidth-extended patches across the pseudo-3D volume.
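
Here's a hedged sketch of how such degraded training pairs might be built; the blur width and noise level are arbitrary placeholders, not the degradation functions actually used:

```python
# Sketch of building training pairs for bandwidth extension: the "low resolution"
# input is a degraded (blurred, slightly noisy) version of the high-resolution
# 2D line, extracted only where 2D lines exist in this survey. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_training_pair(patch_hi, blur_sigma=2.0, noise_level=0.02, rng=None):
    """Degrade a high-resolution (trace, time) patch into a medium-frequency-like input."""
    rng = np.random.default_rng() if rng is None else rng
    patch_lo = gaussian_filter(patch_hi, sigma=(0.0, blur_sigma))       # blur along time
    patch_lo = patch_lo + noise_level * patch_hi.std() * rng.standard_normal(patch_hi.shape)
    return patch_lo, patch_hi                                           # (input, target)
```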

On the left here we see a spectrum which is similar to our spectrally balanced spectrum; it's only calculated on what's inside the window, so it's a bit noisy, not as smooth as it could be. In the same bandwidth-extended image we now have a spectrum with a 20 dB down point at, in this case, 100 hertz, so we can see quite a lot of spectral broadening already.
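
The spectral QC mentioned here, reading off the frequency where the windowed spectrum falls 20 dB below its peak, can be sketched as follows (assuming the windowed traces have already been extracted):

```python
# Sketch of the spectral QC: mean amplitude spectrum inside a window and the
# frequency at which it drops a given number of dB below its peak.
import numpy as np

def db_down_frequency(traces, dt, db_down=20.0):
    """Frequency (Hz) at which the mean amplitude spectrum is db_down below its peak."""
    spec = np.abs(np.fft.rfft(traces, axis=-1)).mean(axis=0)
    freqs = np.fft.rfftfreq(traces.shape[-1], d=dt)
    spec_db = 20 * np.log10(spec / spec.max() + 1e-12)
    peak = np.argmax(spec_db)
    below = np.where(spec_db[peak:] <= -db_down)[0]
    return freqs[peak + below[0]] if below.size else freqs[-1]
```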

We also see a lot more detail in this image. Of course, we could argue that we're only looking at the intersection between the pseudo 3D, that is, the medium-frequency pseudo 3D of the blurred patches, and the 2D line, so what happens in between? If we compare at the original line first, in the left panel we have the medium-frequency pseudo 3D, on the right we have the original 2D line, and in the middle we have the bandwidth-extended pseudo 3D.

You can now see a lot of features, in the shallow area and throughout below the unconformity; we have a lot more frequency content and a lot more structurally consistent seismic that's predicted using the original 2D line. But that's all fine and good because we're at a 2D line, so let's see what happens when we go between two 2D lines.

The image on the right stays the same, because we don't have a ground truth between the two 2D lines in this case. Looking at the medium-frequency pseudo 3D in the left panel and then the bandwidth-extended pseudo 3D in the middle panel, we can still see the increased resolution and bandwidth, and if we go back and forth between the two we see how much better the bandwidth-extended pseudo 3D is compared to the already usable medium-frequency pseudo 3D.

If we look at a well transect through that pseudo 3D, through several wells that were drilled in the area, we create a transect as we would in a 3D volume and we get a very good result. When I presented this at SEAPEX, somebody in the audience called out and said, yeah, this is all great because it's all pretty flat, so the interpolation works just fine.

So I submit to you that the process of structurally conformal interpolation allows us to deal with complicated structure. These are all arbitrary lines, between about 30 and 50 degrees, say 45 degrees, off the 2D grid axis, and we do have some artifacts that may be due to the lack of data and to the grid spacing.

Although Conrad asked me if I could create some magical filter to remove these and make the data look like it didn't have these artifacts, I pushed back and said, actually, you know, we need to know where the data is breaking down and where we have areas of uncertainty. So yes, we could apply some dip-steered median filter or some other process to make this data not have those kinds of gaps.

But those kinds of gaps signal to us that maybe we need to look at the survey grid and understand what our uncertainty is. Of course, we could even create an attribute using OpendTect which tells us what the spacing is between the 2D lines, and we can apply that as a semi-transparent attribute, so that just by looking at the data we also know what our uncertainty is.
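
A bare-bones version of such a line-spacing/confidence attribute could simply be the distance from every bin to the nearest original 2D trace location; a sketch (illustrative only, OpendTect has its own tools for this):

```python
# Sketch of an uncertainty attribute: distance from each pseudo-3D bin to the
# nearest original 2D trace location, suitable for semi-transparent co-rendering.
import numpy as np
from scipy.spatial import cKDTree

def distance_to_nearest_line(bin_xy, line_trace_xy):
    """bin_xy: (n_bins, 2) grid coordinates; line_trace_xy: (n_traces, 2)."""
    tree = cKDTree(line_trace_xy)
    dist, _ = tree.query(bin_xy)
    return dist  # distance from each bin to the nearest 2D trace
```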

If we look at this as a volume rendering of the amplitudes, we see that this structurally consistent interpolation with the bandwidth extension provides us very realistic seismic data, and we believe that we have not introduced fake features: we have only trained the model to learn from the 2D seismic what the features should look like in this area, and we've created a bandwidth-extended version of the seismic using only the medium-frequency, structurally conformable interpolated data as input, and only training data from this dataset, no other dataset.

So we think we're doing the best job that we possibly can, and to show that, we actually did a spectral decomposition on the output volume. Although I can't show the overlapping 3D volumes, we can be certain that where we see channels in areas of the pseudo 3D, we see the same channels inside the 3D volumes, of course at much higher resolution and detail.
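
As a simple stand-in for that spectral decomposition QC (not the actual algorithm used), an iso-frequency amplitude could be computed per trace with a short-time Fourier transform and then mapped along a horizon:

```python
# Sketch of a basic spectral decomposition of a single trace: amplitude at one
# target frequency versus time, via a short-time Fourier transform.
import numpy as np
from scipy.signal import stft

def iso_frequency_amplitude(trace, dt, target_freq, window_samples=64):
    """Amplitude of the trace at target_freq (Hz) as a function of time."""
    freqs, times, Z = stft(trace, fs=1.0 / dt, nperseg=window_samples)
    idx = np.argmin(np.abs(freqs - target_freq))
    return times, np.abs(Z[idx])
```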

Our client project contact, Radhi Muammar, has told us, and also highlighted at SEAPEX in Jakarta, that there are channels in this pseudo 3D whose connectivity they couldn't infer from only the 2D lines; there was too much uncertainty. The geologists at Conrad believe that this pseudo 3D proves their hypothesis in an area of this channel system where the channels extend and would potentially add prospective area there. So this is, we think, an example where transforming legacy 2D data into 3D post-stack images can enhance structural clarity and help with interpretation efficiency.

It provides a much more detailed subsurface model that, in this case for Conrad, improves targeting, reduces uncertainty, and allows for efficient 3D survey planning. Of course, additional post-processing, generation of attributes, and even seismic inversion could provide added value to the pseudo-3D process. In the end we find that, where it is feasible, pseudo 3D should be used as a standard process in 2D acquisition and processing.