Encore Episode: Deep Learning

Did you know that the concept of deep learning goes way back to the 1950s? However, it is only in recent years that this technology has created a tremendous amount of buzz (and for good reason!). A subset of machine learning, deep learning is inspired by the structure of the human brain, making it fascinating to learn about. In this episode, Lois Houston and Nikita Abraham interview Senior Principal OCI Instructor Hemant Gahankari about deep learning concepts, including how Convolutional Neural Networks work, and help you get your deep learning basics right.

Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X (formerly Twitter): https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00

The world of artificial intelligence is vast and ever-changing. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

00:33

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

00:47

Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

Nikita: Hi everyone!

Lois: Today, we’re going to focus on the basics of deep learning with our Senior Principal OCI Instructor, Hemant Gahankari.

Nikita: Hi Hemant! Thanks for being with us today. So, to get started, what is deep learning?

01:14

Hemant: Hi Niki and hi Lois. So, deep learning is a subset of machine learning that focuses on training Artificial Neural Networks, abbreviated as ANN, to solve a task at hand. Say, for example, image classification. A very important quality of the ANN is that it can process raw data like pixels of an image and extract patterns from it. These patterns are treated as features to predict the outcomes.

Let us say we have a set of handwritten images of digits 0 to 9. As we know, everyone writes the digits in a slightly different way. So how do we train a machine to identify a handwritten digit? For this, we use ANN.

ANN accepts image pixels as inputs, extracts patterns like edges and curves and so on, and correlates these patterns to predict an outcome. That is, in this case, which digit the image contains.

02:17

Lois: Ok, so what you’re saying is given a bunch of pixels, ANN is able to process pixel data, learn an internal representation of the data, and predict outcomes. That’s so cool! So, why do we need deep learning?

Hemant: We need to specify features while we train a machine learning algorithm. With deep learning, features are automatically extracted from the data. An internal representation of features and their combinations is built by deep learning algorithms to predict outcomes. This may not be feasible manually. Deep learning algorithms can make use of parallel computations. For this, data is usually split into small batches and processed in parallel. So these algorithms can process large amounts of data in a short time to learn the features and their combinations. This leads to scalability and performance. In short, deep learning complements machine learning algorithms for complex data for which features cannot be described easily.

03:21

Nikita: What can you tell us about the origins of deep learning?

Hemant: Some of the deep learning concepts like the artificial neuron, perceptron, and multilayer perceptron existed as early as the 1950s. One of the most important concepts, using backpropagation for training ANNs, came in the 1980s. In the 1990s, convolutional neural networks were introduced for image analysis tasks. Starting in 2000, GPUs were introduced. And from 2010 onwards, GPUs became cheaper and widely available. This fueled the widespread adoption of deep learning uses like computer vision, natural language processing, speech recognition, text translation, and so on.

In 2012, major networks like AlexNet and the Deep Q-Network were built. From 2016 onward, generative use cases of deep learning also started to come up. Today, deep learning is widely adopted for a variety of use cases, including large language models and many other types of generative models.

04:32

Lois: Hemant, what are various applications of deep learning algorithms?

Hemant: Deep learning algorithms are targeted at a variety of data and applications. For data, we have images, videos, text, and audio. For images, applications can be image classification, object detection, and segmentation. For textual data, applications include translating text or detecting the sentiment of a text. For audio, the applications can be music generation, speech to text, and so on.

05:05

Lois: It's important that we select the right deep learning algorithm based on the data and application, right? So how do we do that?

Hemant: For image tasks like image classification, object detection, image segmentation, or facial recognition, CNN is a suitable architecture. For text, we have a choice of the latest transformers or LSTM or even RNN. For generative tasks like text summarization or question answering, transformers are a good choice. For generating images or text-to-image generation, transformers, GANs, or diffusion models are available choices.
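
As a quick reference, the guidance above can be captured as a simple lookup table. The dictionary below only reorganizes the choices Hemant lists; it is not part of the episode.

```python
# Task family -> suitable deep learning architectures, per the guidance above.
architecture_choices = {
    "image classification / object detection / segmentation / face recognition": ["CNN"],
    "text tasks": ["Transformer", "LSTM", "RNN"],
    "generative text (summarization, question answering)": ["Transformer"],
    "image generation / text-to-image": ["Transformer", "GAN", "Diffusion model"],
}

for task, models in architecture_choices.items():
    print(f"{task}: {', '.join(models)}")
```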

05:45

Nikita: Let’s dive a little deeper into Artificial Neural Networks. Can you tell us more about them, Hemant?

Hemant: Artificial Neural Networks are inspired by the human brain. They are made up of interconnected nodes called neurons.

Nikita: And how are inputs processed by a neuron?

Hemant: In an ANN, we assign weights to the connections between neurons. Weighted inputs are added up, and if the sum crosses a specified threshold, the neuron is fired. The outputs of one layer of neurons become the inputs to the next layer.
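
To make the weighted-sum-and-threshold idea concrete, here is a minimal NumPy sketch; the input, weight, and threshold values are illustrative, not from the episode.

```python
import numpy as np

def neuron_fires(inputs, weights, threshold):
    # Weighted inputs are added up; the neuron "fires" (outputs 1)
    # only if the sum crosses the threshold.
    weighted_sum = np.dot(inputs, weights)
    return 1 if weighted_sum > threshold else 0

# Illustrative values only.
inputs = np.array([0.5, 0.3, 0.9])
weights = np.array([0.4, 0.7, 0.2])
print(neuron_fires(inputs, weights, threshold=0.5))  # prints 1, since 0.59 > 0.5
```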

06:16

Lois: Hemant, tell us about the building blocks of ANN so we understand this better.

Hemant: So the first building block is layers. We have an input layer, an output layer, and multiple hidden layers. The input layer and output layer are mandatory, and the hidden layers are optional. The layers consist of neurons. Neurons are computational units which accept an input and produce an output. Weights determine the strength of the connection between neurons. A connection could be between an input and a neuron, or between a neuron and another neuron. Activation functions work on the weighted sum of inputs to the neuron and produce an output. An additional input to the neuron that allows a certain degree of flexibility is called a bias.

07:05

Nikita: I think we’ve got the components of ANN straight, but maybe you should give us an example. You mentioned this example earlier, of needing to train an ANN to recognize handwritten digits from images. How would we go about that?

Hemant: For that, we have to collect a large number of digit images, and we need to train the ANN using these images. So, in this case, the images consist of 28 by 28 pixels, which act as the input layer. For the output, we have 10 neurons which represent digits 0 to 9. And we have multiple hidden layers. So, for example, we have two hidden layers consisting of 16 neurons each.

The hidden layers are responsible for capturing the internal representation of the raw images. And the output layer is responsible for producing the desired outcomes. So, in this case, the desired outcome is the prediction of whether the digit is 0 or 1 or up to digit 9.
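
As a rough sketch, the network described here could be written in Keras as follows. The layer sizes (28 by 28 inputs, two hidden layers of 16 neurons, 10 outputs) follow the episode; everything else, such as the ReLU and softmax activations, is an assumption for illustration.

```python
import tensorflow as tf

# 28x28 pixel images are flattened into 784 inputs; two hidden layers
# of 16 neurons each; 10 output neurons, one per digit 0-9.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer 1 (assumed ReLU)
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer 2 (assumed ReLU)
    tf.keras.layers.Dense(10, activation="softmax"), # output: one probability per digit
])
model.summary()
```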

So how do we train this particular ANN? The first thing we use is the backpropagation algorithm. During training, we show an image to the ANN. Let’s say it is an image of digit 2. So we expect the output neuron for digit 2 to fire. But in reality, let’s say the output neuron for digit 6 fired.

08:28

Lois: So, then, what do we do?

Hemant: We know that there is an error, so we correct the error. We adjust the weights of the connections between neurons based on a calculation, which we call the backpropagation algorithm. By showing thousands of images and adjusting the weights iteratively, the ANN is able to predict correct outcomes for most of the input images. This process of adjusting weights through backpropagation is called model training.
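
Continuing the hypothetical Keras sketch above, training by backpropagation is handled by compile and fit; the optimizer, loss, epoch count, and batch size below are assumptions for illustration.

```python
# Load the handwritten digit images (MNIST) and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# fit() repeatedly shows images to the network and adjusts the weights
# via backpropagation, which is the model training described above.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
model.evaluate(x_test, y_test)
```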

09:01

Do you have an idea for a new course or learning opportunity? We’d love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects!

Visit mylearn.oracle.com to get started.

09:22

Nikita: Welcome back! Let’s move on to CNN. Hemant, what is a Convolutional Neural Network?

Hemant: A CNN is a type of deep learning model specifically designed for processing and analyzing grid-like data, such as images and videos. In an ANN, the input image is converted to a single-dimensional array and given as input to the network. But that does not work well with image data, because image data is inherently two dimensional. A CNN works better with two-dimensional data. The role of the CNN is to reduce the images into a form which is easier to process, without losing the features which are critical for getting a good prediction.

10:10

Lois: A CNN has different layers, right? Could you tell us a bit about them?

Hemant: The first one is the input layer. The input layer is followed by feature extraction layers, which are a combination and repetition of a convolutional layer with ReLU activation and a pooling layer. This is followed by the classification layers. These are the fully connected output layers, where the classification occurs as output classes. The class with the highest probability is the predicted class. And finally, we have the dropout layer. This layer is a regularization technique used to prevent overfitting in the network.
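
Those layers might be stacked like this in Keras. The filter counts, kernel sizes, dense width, and dropout rate are illustrative assumptions, and the 10-class softmax output assumes an image classification task like the digit example.

```python
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                # input layer (image kept as a 2D grid)
    # Feature extraction: convolution with ReLU activation, then pooling, repeated.
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # Classification: fully connected layers, with dropout for regularization.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
cnn.summary()
```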

10:51

Nikita: And what are the top applications of CNN?

Hemant: One of the most widely used applications of CNNs is image classification. For example, classifying whether an image contains a specific object, say, a cat or a dog.

CNNs are also used for object detection tasks. The goal here is to draw bounding boxes around objects in an image. CNNs can perform pixel-level segmentation, where each pixel in the image is labeled to represent different objects or regions. CNNs are employed for face recognition tasks as well, identifying and verifying individuals based on facial features.

CNNs are widely used in medical image analysis, helping with tasks like tumor detection, diagnosis, and classification of various medical conditions. CNNs play an important role in the development of self-driving cars, helping them to recognize and understand the road traffic signs, pedestrians, and other vehicles.

12:02

Nikita: Hemant, let’s talk about sequence models. What are they and what are they used for?

Hemant: Sequence models are used to solve problems, where the input data is in the form of sequences. The sequences are ordered lists of data points or events.

The goal in sequence models is to find patterns and dependencies within the data and make predictions, classifications, or even generate new sequences.

12:31

Lois: Can you give us some examples of sequence models?

Hemant: Some common examples of sequence models: in natural language processing, deep learning models are used for tasks such as machine translation, sentiment analysis, or text generation. In speech recognition, deep learning models are used to convert recorded audio into text.

Deep learning models can generate new music or create original compositions. Even sequences of hand gestures are interpreted by deep learning models for applications like sign language recognition. In fields like finance or weather prediction, time series data is used to predict future values.

13:15

Nikita: Which deep learning models can be used to work with sequence data?

Hemant: Recurrent Neural Networks, abbreviated as RNNs, are a class of neural network architectures specifically designed to handle sequential data. Unlike traditional feedforward neural networks, RNNs have a feedback loop that allows information to persist across different timesteps.

The key feature of RNNs is their ability to maintain an internal state, often referred to as a hidden state or memory, which is updated as the network processes each element in the input sequence. The hidden state is used as input to the network at the next time step, allowing the model to capture dependencies and patterns in the data that are spread across time.
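
A bare-bones NumPy sketch of that recurrent update, with random toy weights and a tanh activation chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Weight matrices for the current input and for the recurrent (feedback) connection.
W_x = rng.normal(size=(hidden_size, input_size)) * 0.1
W_h = rng.normal(size=(hidden_size, hidden_size)) * 0.1
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The hidden state from the previous time step is fed back in,
    # so information persists across the sequence.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

sequence = rng.normal(size=(5, input_size))  # a toy sequence with 5 time steps
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)
print(h.shape)  # (8,) -- the final hidden state summarizes the whole sequence
```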

14:07

Nikita: Are there various types of RNNs?

Hemant: There are different types of RNN architectures based on application.

One of them is one-to-one. This is like a feedforward neural network and is not suited for sequential data. A one-to-many model produces multiple output values for one input value. Music generation or sequence generation are some applications using this architecture.

A many-to-one model produces one output value after receiving multiple input values. An example is sentiment analysis based on a review. A many-to-many model produces multiple output values for multiple input values. Examples are machine translation and named entity recognition.
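
As one illustration of the many-to-one case, a sentiment classifier over token sequences could be sketched in Keras like this; the vocabulary size, embedding size, and layer widths are assumptions.

```python
import tensorflow as tf

# Many-to-one: a whole review (a sequence of token IDs) goes in,
# a single sentiment probability comes out.
sentiment_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=32),  # token IDs -> dense vectors
    tf.keras.layers.SimpleRNN(32),                              # read the sequence, keep the final state
    tf.keras.layers.Dense(1, activation="sigmoid"),             # positive vs. negative
])
sentiment_model.compile(optimizer="adam",
                        loss="binary_crossentropy",
                        metrics=["accuracy"])
```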

RNNs do not perform well when it comes to capturing long-term dependencies. This is due to the vanishing gradients problem, which is overcome by using the LSTM model.

15:11

Lois: Another acronym. What is LSTM, Hemant?

Hemant: Long Short-Term Memory, abbreviated as LSTM, works by using a specialized memory cell and a gating mechanism to capture long-term dependencies in sequential data. The key idea behind LSTM is to selectively remember or forget information over time, enabling the model to maintain relevant information over long sequences, which helps overcome the vanishing gradients problem.

15:45

Nikita: Can you take us, step-by-step, through the working of LSTM?

Hemant: At each timestep, the LSTM takes an input vector representing the current data point in the sequence. The LSTM also receives the previous hidden state and cell state. These represent what the LSTM has remembered and forgotten up to the current point in the sequence.

The core of the LSTM lies in its gating mechanisms, which include three gates: the input gate, the forget gate, and the output gate. These gates are like the filters that control the flow of information within the LSTM cell. The input gate decides what new information from the current input should be added to the memory cell.

The forget gate determines what information in the current memory cell should be discarded or forgotten. The output gate regulates how much of the current memory cell should be exposed as the output of the current time step.
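
A compact NumPy sketch of a single LSTM step with its three gates; the random weights and toy sequence are purely illustrative, and real implementations batch these operations.

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_params():
    # One weight matrix and bias per gate, plus one for the candidate memory.
    return (rng.normal(size=(hidden_size, input_size + hidden_size)) * 0.1,
            np.zeros(hidden_size))

(W_i, b_i), (W_f, b_f), (W_o, b_o), (W_c, b_c) = (make_params() for _ in range(4))

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])   # current input plus previous hidden state
    i = sigmoid(W_i @ z + b_i)          # input gate: what new information to add
    f = sigmoid(W_f @ z + b_f)          # forget gate: what to discard from memory
    o = sigmoid(W_o @ z + b_o)          # output gate: how much memory to expose
    c_tilde = np.tanh(W_c @ z + b_c)    # candidate new memory content
    c = f * c_prev + i * c_tilde        # updated cell state (the memory)
    h = o * np.tanh(c)                  # new hidden state (the step's output)
    return h, c

h = c = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):  # a toy 5-step sequence
    h, c = lstm_step(x_t, h, c)
print(h.shape, c.shape)
```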

16:52

Lois: Thank you, Hemant, for joining us in this episode of the Oracle University Podcast. I learned so much today. If you want to learn more about deep learning, visit mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course.

And remember, the AI Foundations course and certification are free. So why not get started now?

Nikita: Right, Lois. In our next episode, we will discuss generative AI and large language models. Until then, this is Nikita Abraham…

Lois: And Lois Houston signing off!

17:26

That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
