Within the last couple of years, deep learning techniques, represented by convolutional neural networks (CNNs), have been applied to fault detection problems on seismic data with impressive results. As is true for all supervised learning techniques, the performance of a CNN fault detector depends heavily on the training data, and post-classification regularization may greatly improve the result. Sometimes, a pure CNN-based fault detector that works perfectly on synthetic data may not perform well on field data. In this study, we investigate a fault detection workflow using both CNN and directional smoothing/sharpening. Applying both to a realistic synthetic fault model based on the SEAM (SEG Advanced Modeling) model and to field data from the Great South Basin, offshore New Zealand, we demonstrate that the proposed fault detection workflow can perform well on challenging synthetic and field data.
Benefiting from their high flexibility in network architecture, convolutional neural networks (CNNs) are a supervised learning technique that can be designed to solve many challenging problems in exploration geophysics. Among these problems, detection of particular seismic facies of interest might be the most straightforward application of CNNs. The first published study applying CNNs to seismic data may be Waldeland and Solberg (2017), in which the authors used a CNN model to classify salt versus non-salt features in a seismic volume. At about the same time, Araya-Polo et al. (2017) and Huang et al. (2017) reported success in fault detection using CNN models.
From a computer vision perspective, faults in seismic data are a special group of edges. CNNs have been applied to more general edge detection problems with great success (El-Sayed et al., 2013; Xie and Tu, 2015). However, faults in seismic data are fundamentally different from edges in images used in the computer vision domain. The regions separated by edges in a traditional computer vision image are relatively homogeneous, whereas in seismic data such regions are defined by patterns of reflectors. Moreover, not all edges in seismic data are faults. In practice, although providing excellent fault images, traditional edge detection attributes such as coherence (Marfurt et al., 1999) are also sensitive to stratigraphic edges such as unconformities, channel banks, and karst collapses. Wu and Hale (2016) proposed a brilliant workflow for automatically extracting fault surfaces, in which a crucial step is computing the fault likelihood. CNN-based fault detection methods can serve as an alternative approach to generate such fault likelihood volumes, and the fault strike and dip can then be computed from the fault likelihood.
One drawback of supervised machine learning-based fault detection is its brute-force nature: instead of detecting faults following geological/geophysical principles, the detection depends purely on the training data. In reality, we will never have training data that covers all possible appearances of faults in seismic data, nor are our data noise-free. Therefore, although the raw output from the CNN classifier may adequately represent faults in synthetic data with simple structure and low noise, some post-processing steps are needed for the result to be useful on field data. Building on the traditional coherence attribute, Qi et al. (2017) introduced an image processing-based workflow to skeletonize faults. In this study, we regularize the raw output from a CNN fault detector with an image processing workflow built on Qi et al. (2017) to improve the fault images.
We use both realistic synthetic data and field data to investigate the effectiveness of the proposed workflow. The synthetic data should ideally be a good approximation of field data while providing full control over the parameter set. We build our synthetic data on the SEAM model (Fehler and Larner, 2008) by taking sub-volumes from the impedance model and inserting faults. After verifying the performance on the synthetic data, we then move on to field data acquired from the Great South Basin, offshore New Zealand, where extensive faulting occurs. Results on both synthetic and field data show the great potential of the proposed fault detection workflow, which provides very clean fault images.
The proposed workflow starts with a CNN classifier that produces a raw image of faults. In this study, we adopt a 3D patch-based CNN model that classifies each seismic sample using samples within a 3D window. An example of the CNN architecture used in this study is provided in Figure 1. A basic patch-based CNN model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. Given a 3D patch of seismic amplitudes, the CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which behave similarly to a traditional multilayer perceptron network. The output from the network is a single value representing the facies label of the seismic sample centered in the 3D patch. In this study, the label is binary, representing "fault" or "non-fault".
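As an illustration, a 3D patch-based binary classifier of this kind can be sketched in TensorFlow (the library used in this study) as follows. The layer counts, filter sizes, and 32-sample patch dimension below are illustrative assumptions, not the exact architecture of Figure 1:

```python
# Hypothetical sketch of a 3D patch-based CNN fault classifier.
# Patch size (32x32x32) and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_patch_classifier(patch_shape=(32, 32, 32, 1)):
    model = models.Sequential([
        layers.Input(shape=patch_shape),
        # Convolutional + pooling layers extract high-level abstractions
        # of the amplitude patch (analogous to seismic attributes)
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        # Fully-connected layers classify the extracted features,
        # much like a traditional multilayer perceptron
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        # Single sigmoid output: P(fault) for the patch's center sample
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Sliding this classifier over every sample of the volume yields the raw fault probability image that the subsequent regularization steps refine.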
We then use a suite of image processing techniques to improve the quality of the fault images. First, we use a directional Laplacian of Gaussian (LoG) filter (Machado et al., 2016) to enhance lineaments that are at a high angle to the layering reflectors and suppress anomalies close to the reflector dip, while computing the dip, azimuth, and dip magnitude of the faults. Using these estimates, we then apply a skeletonization step, redistributing the fault anomalies within a fault damage zone onto the most likely fault plane. We then apply thresholding to generate a binary fault image. Optionally, if the result is still noisy, we can continue with a median filter to reduce random noise and iteratively perform the directional LoG filtering and skeletonization until a desirable result is achieved. Figure 2 summarizes the regularization workflow.
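A simplified version of this regularization chain can be sketched with SciPy. Note the simplification: the directional LoG of Machado et al. (2016) steers the filter along the estimated fault orientation, whereas the sketch below substitutes an isotropic LoG; the threshold and filter sizes are also illustrative assumptions:

```python
# Simplified sketch of the regularization chain: LoG sharpening,
# optional median filtering, then thresholding to a binary fault image.
# An isotropic LoG stands in for the directional LoG of Machado et al.
import numpy as np
from scipy import ndimage

def regularize_fault_probability(prob, sigma=1.0, threshold=0.5,
                                 median_size=3):
    """prob: 3D volume of raw CNN fault probabilities in [0, 1]."""
    # Negated LoG responds positively on thin ridges such as fault zones
    log = -ndimage.gaussian_laplace(prob, sigma=sigma)
    log = np.clip(log, 0.0, None)  # keep only positive (ridge) responses
    # Optional median filter suppresses isolated random noise
    smoothed = ndimage.median_filter(log, size=median_size)
    # Thresholding (relative to the peak response) yields the binary image
    return (smoothed > threshold * smoothed.max()).astype(np.uint8)
```

In the full workflow this step would be interleaved with skeletonization and, if needed, repeated until the fault image is sufficiently clean.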
We first test the proposed workflow on synthetic data built on the SEAM model. To make the model a good approximation of real field data, we select a portion of the SEAM model where stacked channels and turbidites exist. We then randomly insert faults into the impedance model and convolve with a 40 Hz Ricker wavelet to generate seismic volumes. The parameters used in the random generation of five reverse faults in the 3D volume are provided in Table 1. Figure 3a shows one line from the generated synthetic data with faults highlighted in red. In this model, we observe strong layer deformation with amplitude changes along reflectors due to the turbidites. Such synthetic data are therefore quite challenging for a fault detection algorithm because of the presence of other types of discontinuities.
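The forward-modeling step above, converting the faulted impedance model into seismic traces, can be sketched as follows. The sampling interval and wavelet length are illustrative assumptions; only the 40 Hz peak frequency comes from the text:

```python
# Minimal sketch of the synthetic modeling step: convert an impedance
# trace to normal-incidence reflectivity, then convolve with a 40 Hz
# Ricker wavelet. dt and wavelet length are assumed values.
import numpy as np

def ricker(f=40.0, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def impedance_to_seismic(imp, f=40.0, dt=0.002):
    """imp: 1D acoustic impedance trace (one column of the model)."""
    # Reflection coefficients at the layer boundaries
    rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])
    # Convolutional model: seismic trace = reflectivity * wavelet
    return np.convolve(rc, ricker(f, dt), mode="same")
```

Applying this trace-by-trace to the faulted SEAM impedance sub-volume produces the synthetic seismic volume used for training and testing.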
We randomly use 20% of the samples on the fault planes and approximately the same number of non-fault samples to train the CNN model. The total number of training samples is about 350,000, which represents less than 1% of the total samples in the seismic volume. Figure 3b shows the raw output from the CNN fault detector on the same line shown in Figure 3a. We observe that instead of sticks, faults appear as small zones. Also, as expected, there are some misclassifications where the data are quite challenging. We then perform the regularization steps, excluding the optional steps in Figure 2. Figure 3c shows the result after the directional LoG filter and skeletonization. Notice that these two steps have cleaned up much of the noise, and the faults are now thinner and more continuous. Finally, we apply thresholding to generate a fault map in which faults are labeled "1" and everywhere else "0" (Figure 3d). Figure 4 shows the fault detection result on a less challenging line, where the result is nearly perfect.
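The balanced sampling described above, 20% of the fault-plane samples plus an equal number of non-fault samples, can be sketched as a simple index-selection routine (the function name and fixed seed are hypothetical conveniences):

```python
# Hypothetical sketch of balanced training-sample selection:
# draw a fraction of the fault-plane samples and an equal number
# of non-fault samples, both at random.
import numpy as np

def sample_training_indices(fault_mask, fraction=0.2, seed=0):
    """fault_mask: 3D boolean volume, True on fault planes."""
    rng = np.random.default_rng(seed)
    fault_idx = np.flatnonzero(fault_mask)
    nonfault_idx = np.flatnonzero(~fault_mask)
    n = int(fraction * fault_idx.size)
    picked_fault = rng.choice(fault_idx, size=n, replace=False)
    picked_nonfault = rng.choice(nonfault_idx, size=n, replace=False)
    return picked_fault, picked_nonfault
```

Keeping the two classes balanced matters here because fault samples are a tiny minority of the volume; training on all samples would bias the classifier toward "non-fault".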
| Fault attribute | Value range |
| --- | --- |
| Dip angle (degrees) | -15 to 15 |
| Strike angle (degrees) | -25 to 25 |
| Displacement (m) | 25 to 75 |

Table 1. Parameter ranges used in generating faults in the synthetic model.
Field Data Test
We further verify the proposed workflow on field data from the Great South Basin, offshore New Zealand. The seismic data contain extensive faulting with complex geometry, as well as other types of coherence anomalies, as shown in Figure 5. In this case, we manually picked training data on five seismic lines for regions representing fault and non-fault. An example line is given in Figure 6. Although the training data cover only a very limited portion of the whole volume, we try to include the most representative samples for the two classes. On the field data, we use the whole regularization workflow, including the optional steps. Figure 7 gives the final output from the proposed workflow, together with the result from using coherence in lieu of the raw CNN output in the workflow. We observe that the result from CNN plus regularization gives clean fault planes with very limited noise from other types of discontinuities.
In this study, we introduce a fault detection workflow using both CNN-based classification and image processing regularization. We are able to train a CNN classifier that is sensitive only to faults, which greatly reduces the mixing between faults and other discontinuities in the produced fault images. To improve the resolution and further suppress non-fault features in the raw fault images, we then use an image processing-based regularization workflow to enhance the fault planes. The proposed workflow shows great potential on both challenging synthetic data and field data.
The authors thank Geophysical Insights for permission to publish this work. We thank New Zealand Petroleum and Minerals for providing the Great South Basin seismic data to the public. The CNN fault detector used in this study is implemented in TensorFlow, an open source library from Google. The authors also thank Gary Jones at Geophysical Insights for valuable discussions on the SEAM model.