
MySQL Backup - Part 2

24:35
 
Lois Houston and Nikita Abraham continue their conversation with MySQL expert Perside Foster, with a closer look at MySQL Enterprise Backup. They cover essential features like incremental backups for quick recovery, encryption for data security, and monitoring with MySQL Enterprise Monitor—all to help you manage backups smoothly and securely. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

00:25

Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

Lois: Hi there! Last week was the first of a two-part episode covering the different types of backups and why they're important. Today, we’ll look at how we can use MySQL Enterprise Backup for efficient and consistent backups.

00:52

Nikita: And of course, we’ve got Perside Foster with us again to walk us through all the details. Perside, could you give us an overview of MySQL Enterprise Backup?

Perside: MySQL Enterprise Backup is a form of physical backup at its core, so it's much faster for large data sets than logical backups, such as the most commonly used mysqldump. Because it backs up the data files, it's non-locking and enables either complete system backup or partial backup, focusing only on specific databases.

01:29

Lois: And what are the benefits of using MySQL Enterprise Backup?

Perside: You can back up to local storage or directly to common cloud storage types. You can perform incremental backups, which can speed up your backup process greatly. Incremental backups enable point-in-time recovery, which is useful when you need to restore to a point in time before some application or human error occurred. Backups can be compressed to reduce archival storage requirements and encrypted for regulatory compliance and offline data security.
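
As a rough illustration of how a full and then an incremental backup might be taken with the mysqlbackup utility (the paths and credentials here are placeholders, and exact option names can vary between MySQL Enterprise Backup releases):

# Full backup into its own directory
mysqlbackup --user=backupadmin --password \
  --backup-dir=/backups/full backup-and-apply-log

# Later, back up only the changes made since the last backup recorded in the history table
mysqlbackup --user=backupadmin --password \
  --incremental --incremental-base=history:last_backup \
  --incremental-backup-dir=/backups/inc1 backup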

02:09

Nikita: So we know MySQL Enterprise Backup is an impressive tool, but could you talk more about some of the main features it supports for creating and managing backups? Specifically, which tools are integrated within MySQL Enterprise to support different backup scenarios?

Perside: MySQL Enterprise Backup supports SBT, implemented by many common Tape storage systems. MySQL Enterprise Backup supports optimistic backup. This process deals with busy tables separately from the rest of the database. It can record changes that happen in the database during the backup for consistency. In a large data set, this can make a huge difference in performance.

MySQL Enterprise Backup runs on all supported platforms. It's available when you have a MySQL Enterprise Edition license; it comes with Enterprise Edition, but it's also available as a separate package. You can get the most recent version from eDelivery, where you can also get a trial version. If you need a previous release, you can get that from My Oracle Support.

It's also available for all versions of MySQL, whether you run a Long-Term Support (LTS) version or an Innovation release. For LTS releases, MySQL Enterprise Backup supports MySQL instances of the same LTS release. For Innovation releases, it supports the previous LTS release and any subsequent Innovation version within the same LTS family.

04:03

Nikita: How does MySQL Enterprise Monitor manage and track backup processes?

Perside: MySQL Enterprise Monitor has a dashboard for monitoring MySQL Enterprise Backup. The dashboard monitors the health of backup processes and usage throughout the entire enterprise fleet, not just a single server. It supports drilling down into specific sub-operations within a backup job. You can see information about full backups, partial backups, and incremental backups. You can configure alerts that will notify you in the event of delays, failures, or backups that have not been performed within some configurable time period.

04:53

Lois: Ok…let’s get into the mechanics. I understand that MySQL Enterprise Backup uses binary logs as part of its backup process. Can you explain how these logs fit into the bigger picture of maintaining database integrity?

Perside: MySQL Enterprise Backup is a utility designed specifically for backing up MySQL systems in the most efficient and flexible way. At its simplest, it performs a physical backup of the data files, so it is fast.

However, it also records the changes that were made during the time it took to do the backup. So, the result is that you get a consistent backup of the data at the time the backup completed. This backup is not tied to the host system and can be moved to other hosts.

It can be used for archiving and is fully supported as part of the MySQL Enterprise Edition. It is, however, tied to the specific version of MySQL from which the backup was taken. So, you cannot use it for upgrades where the destination server is an upgrade from the source.

For example, if you take a backup from MySQL 5.7, you can't directly restore it to MySQL 8.0. As a part of MySQL Enterprise Edition, it's not part of the freely available Community Edition.

06:29

Lois: Perside, how do MySQL's binary logs track changes over time? And why is this so critical for backups?

Perside: The binary logs record changes to the database. These changes are recorded in a sequential set of files numbered incrementally. MySQL logs changes either in statement-based form, where each log entry records the statement that gives rise to the change, or in row-based form, where the actual changed row data is recorded.

If you select mixed format, then MySQL records statements for most operations and records row data for changes where the statement might result in a different row value each time it's run, for example, where there's a generated value like an auto-increment column. The current log file grows as changes are recorded.

When it reaches its maximum configured size, that log file is closed, and the next sequential file is created for new logs. You can also make this happen on demand by using the FLUSH BINARY LOGS command. This does not delete any existing log files.
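
For example, rotating and then listing the binary logs from the command line might look like this (a sketch that assumes a client account with the required privileges):

# Check the current logging format and the maximum log file size
mysql -e "SELECT @@binlog_format, @@max_binlog_size;"

# Close the current binary log and start the next sequential file
mysql -e "FLUSH BINARY LOGS;"

# List the binary log files that now exist (nothing has been deleted)
mysql -e "SHOW BINARY LOGS;"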

07:59

Nikita: But what happens if you want to delete the log files?

Perside: If you want to delete older log files, you can do so manually with the PURGE BINARY LOGS command, specifying either a log file name or a date and time; logs from before that point are removed.
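
A sketch of what that might look like (the file name and date here are placeholders):

# Remove binary logs older than a given file
mysql -e "PURGE BINARY LOGS TO 'binlog.000042';"

# Or remove binary logs recorded before a given date and time
mysql -e "PURGE BINARY LOGS BEFORE '2025-01-01 00:00:00';"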

08:14

Lois: When it comes to tracking transactions, MySQL provides a couple of methods, right? Can you explain the differences between Global Transaction Identifiers and the traditional log file sequence?

Perside: Log file positioning uses one of two formats: either legacy, where you specify transactions with a log file and a sequence number, or global transaction identifiers, or GTIDs, where each transaction is identified with a universally unique identifier, or UUID.

When you apply a transaction to the source server, that is when the GTID is attached to the transaction. This makes it particularly useful in replication topologies so that each transaction is uniquely identified by both its server ID and the transaction sequence number. When such a transaction is replicated to other hosts, the transaction retains its original GTID so that you can track when that transaction has propagated to the replicas and has been applied. The global transaction identifier is unique across the entire network.
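
To see whether GTIDs are in use and which transactions a server has applied, a quick check might look like this (a sketch, assuming GTID mode has already been enabled on the server):

# Confirm GTID settings and the server's own UUID
mysql -e "SELECT @@gtid_mode, @@enforce_gtid_consistency, @@server_uuid;"

# Show the set of GTIDs this server has applied
mysql -e "SELECT @@global.gtid_executed;"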

09:49

Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like LLMs, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more.

10:19

Nikita: Welcome back! Let’s move on to replication. How does MySQL’s legacy log format handle transactions, and what does that mean for replication timing across different servers?

Perside: Legacy format binary logs are non-transactional. This means that a transaction made up of multiple modifications is logged as a sequence of changes. It's possible that different hosts in a replication network apply those changes at different times. Each server that uses legacy binary logging maintains the current applied log position as coordinates based on a combination of the binary log file and the position within that log file.

11:11

Nikita: Troubleshooting with legacy logs can be quite complex, right? So, how does the lack of unique transaction IDs make it more difficult to address replication issues?

Perside: Because each server has its own log with its own transactions, these modifications could have entirely different coordinates, making it challenging to find the specific modification point if you need to do any deep-dive troubleshooting, for example, if one replica failed partway through applying a transaction and you need to partially roll it back manually. On the other hand, when you enable GTIDs, the transaction applied on the source host has that globally unique identifier attached to the whole transaction as a sequence of unique IDs. When the second or subsequent servers apply those transactions, they have exactly the same identifier, making it both transaction-safe for MySQL and also easier to troubleshoot if you need to.

12:26

Lois: How can you use binary logs to perform a point-in-time recovery in MySQL?

Perside: First, you restore the last full backup. Once you've restarted the restored server, find the current log position of that backup, either its GTID or its log sequence number. The SHOW BINARY LOG STATUS command shows this information.

Then you can use the mysqlbinlog utility to replay events from the binary log files, specifying the start and stop positions containing the range of log operations that you wish to apply. You can pipe the output of mysqlbinlog to the MySQL client if you want to execute the changes immediately, or you can redirect the output to a script file if you want to examine and perhaps edit the changes.
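
Putting that together, a point-in-time replay might look roughly like this (the positions, file names, and credentials are placeholders):

# After restoring the last full backup, find the current binary log coordinates
mysql -e "SHOW BINARY LOG STATUS;"

# Replay a range of events directly into the server...
mysqlbinlog --start-position=4 --stop-position=123456 \
  /var/lib/mysql/binlog.000042 | mysql --user=admin --password

# ...or write them to a script file to review or edit first
mysqlbinlog --start-position=4 --stop-position=123456 \
  /var/lib/mysql/binlog.000042 > replay.sql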

13:29

Nikita: And how do you save binary logs?

Perside: You can save binary logs to use in disaster recovery, for point-in-time restores, or for incremental backups. One way to do this is to flush the logs so that the log file closes and is ready for copying, and then copy it to a different server to protect against hardware or media failures.

You can also use the mysqlbinlog utility to create a copy of a set of binary log files in the same format, but to a different file or set of files. This can be useful if you want to run mysqlbinlog continuously, copying from the source server's binary log to a new location, perhaps in network storage. If you do this, remember that mysqlbinlog does not run as a service or daemon, so you'll need to monitor it to make sure it's running continually.
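
One way to keep a live copy of the binary logs on another machine is to run mysqlbinlog against the source server in raw mode (a sketch; the host name, account, starting log file, and output directory are placeholders, and the process still needs to be monitored because it does not run as a daemon):

# Continuously stream binary logs from the source server into local files
mysqlbinlog --read-from-remote-server --host=source-db --user=repl --password \
  --raw --stop-never --result-file=/backups/binlogs/ binlog.000001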

14:39

Lois: Can you take us through how the MySQL Enterprise Backup process works? What does it do when performing a backup?

Perside: First, it performs a physical file copy of necessary data and log files. This can be done while the server is fully operational, and it has minimal impact on performance.

Once this initial copy is taken, it applies a low-impact backup lock on the instance. If you have any tables that are not using InnoDB, the backup cannot guarantee transaction-safe consistency for those tables, so it applies a read lock to those tables so that it can guarantee consistency.

Then it briefly locks all logging activity to take a consistent view of the current coordinates of the various logs. It releases the read lock on the non-transactional tables. Using the log coordinates that were taken earlier in the process, it gathers all logs for transactions that have occurred since then.

Bear in mind that the backup process takes place while the system is active. So, for a consistent backup, it must record not only the data files, but all changes that occurred during the backup. Then it releases the backup lock. The last piece of information recorded is any metadata for the backup itself, including its timing and contents in the final redo log. That completes the backup operation.
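
In practice, that whole sequence is driven by a single mysqlbackup command. A minimal sketch (the connection details and backup directory are placeholders) might be:

# Take a consistent full backup and apply the captured log changes to it
mysqlbackup --user=backupadmin --password --host=127.0.0.1 \
  --backup-dir=/backups/full backup-and-apply-log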

16:30

Nikita: And where are the files stored?

Perside: The files contained in the backup are saved to the backup location, which can be on the local system or in network storage. The files contained in the backup location include files from the MySQL data directory. The raw files include the InnoDB system tablespace, plus any InnoDB file-per-table tablespace files, and the InnoDB log files. Other files might include data files belonging to other storage engines, perhaps MyISAM files. The various log files and instance configuration files are also retained.

17:20

Lois: What steps do you follow to restore a MySQL Enterprise Backup, and how do you guarantee consistency, especially when dealing with incremental backups?

Perside: To restore from a backup using MySQL Enterprise Backup, you must first remove any previous files from the data directory. The restore process will fail if you attempt to restore over an existing system or backup. Then you restore the database with appropriate options.

If you only restore a single backup, you can use copy-back-and-apply-log to ensure that the restored system is in a consistent state. If you performed a full backup and subsequent incremental backups, you might need to restore multiple times using copy-back, and then use copy-back-and-apply-log only for the final consistent restore operation.
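
As a rough sketch of the single-backup case just described (the directories are placeholders, and exact options differ between directory and single-file image backups and between releases):

# The target data directory must be empty before the restore starts
mysqlbackup --backup-dir=/backups/full --datadir=/var/lib/mysql \
  copy-back-and-apply-log

# With incrementals, repeat the restore per backup as described above,
# reserving copy-back-and-apply-log for the final step.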

The restored server might be on the same host or might be a different host with a different configuration. This means that you might have to change some configuration on the restored server, including the operating system ownership of the restored data directory and various MySQL configuration files.

If you want to retain the MySQL configuration files from the source server to reproduce on a new server, you should copy those files separately. MySQL Enterprise Backup focuses on the data rather than the server configuration. It does, however, produce configuration files appropriate for the backup.

These are similar to the MySQL configuration files, but they only contain options relevant to the backup process itself, variables that have been changed to non-default values, and all global variable values. These files must be renamed and possibly edited before they are suitable to become configuration files in the newly restored server.

For example, the mysqld-auto.cnf file contains a JSON-formatted set of persisted variables. The backup process stores this under a new name, backup-mysqld-auto.cnf. If you want to use it in the restored server, you must rename it and place it in the appropriate location so that the restored server can read it.

This also applies in part to the auto.cnf file, which contains identifying information for the server. If you are replacing the original server or restoring on the same host, then you can keep the original values. However, this information must be unique within a network. So, if you are restoring this backup to create a replica in a replication topology, you must not include that file and instead start MySQL without it so that it creates its own unique identifying information.

21:14

Nikita: Let’s discuss securing and optimizing backups. How does MySQL Enterprise Backup handle encryption and compression, and what are the critical considerations for each?

Perside: You can encrypt backups so that they are secure while moving them around or archiving them. The encrypt option performs the encryption, and you can specify the encryption key either on the command line as a string or in a key file that has been generated with some cryptographic algorithm. Encryption only applies to image files, not to backup directories.

You can also compress backups with different levels of compression, with higher levels requiring more CPU but resulting in greater savings in storage. Compression only works with InnoDB data files. If your organization has media management software for performing backups, perhaps to a tape array, then you can use the SBT interface supported in MySQL Enterprise Backup.
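
A sketch of what those options might look like on the command line (the key file, image path, and compression level are placeholders, and as noted, encryption applies only to single-file image backups):

# Compressed, encrypted single-file (image) backup
mysqlbackup --user=backupadmin --password \
  --backup-dir=/backups/tmp --backup-image=/backups/full.mbi \
  --compress --compress-level=5 \
  --encrypt --key-file=/secure/backup.key \
  backup-to-image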

22:34

Lois: Before we wrap up, could you share how MySQL Enterprise Backup facilitates the management of backups across a multi-server environment?

Perside: As an enterprise solution, it's easy to run MySQL Enterprise Backup in a multi-server environment. We've already mentioned backing up to cloud storage, but you can, of course, back up to a directory or image on network storage that can be mounted locally, perhaps with NFS or some other file system.

The --with-timestamp option enables multiple backups within the same backup directory, where each backup goes in its own subdirectory named with the timestamp. This is especially useful when you want to run the same backup script repeatedly.
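
For example, a nightly job might simply reuse the same command and let each run land in its own timestamped subdirectory (a sketch with placeholder paths and credentials):

# Each run creates a new subdirectory such as /backups/nightly/2025-06-01_02-00-05
mysqlbackup --user=backupadmin --password \
  --backup-dir=/backups/nightly --with-timestamp backup-and-apply-log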

23:32

Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs. In our next session, we’ll explore the different MySQL monitoring strategies and look at the features as well as benefits of HeatWave.

Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham…

Lois: And Lois Houston signing off!

24:06

That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database. 06:22 Lois And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres to Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product more aptly to distributed applications in analytics, which is really what it does, and we've added a whole bunch of new features here too. The ability to move data into Databricks, doing Google Pub/Sub delivery. We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so this allows through the Web UI or through the REST API, you can now come into GoldenGate and through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you’ve given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, your of course, Standard and Enterprise Edition, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases in different Cloud stores. On an non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which there's a vast user base for that. We also support a lot of different cloud stores and these again, are non-Oracle, nonrelational systems, or they can be relational databases. We also support a lot of big data platforms and this is part of the distributed applications and analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments. 
We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases along with MariaDB. And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has Oracle NoSQL Spatial and Graph and times 10 products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that’s a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on. And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target, not necessarily. 10:51 Lois: So, there’s really a massive amount of flexibility built into the system. 11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we’ve gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We are announcing and we have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@ Google Cloud partnerships. And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies be able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI. We have Oracle GoldenGate Free. This is a completely free edition of GoldenGate to use. It is limited on the number of platforms that it supports as far as sources and targets and the size of the database. 12:45 Lois: But it's a great way for developers to really experience GoldenGate without worrying about a license, right? What’s next, Nick? Nick: We have GoldenGate for Distributed Applications and Analytics, which was formerly called GoldenGate for big data, and that allows us to do all the streaming. That's also where the GoldenGate AsyncAPI integration is done. So in order to publish the GoldenGate trail files or allow people to subscribe to them, it would be covered under the Oracle GoldenGate Distributed Applications and Analytics license. 
We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment. We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That’s really valuable…. Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product. 15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it? Nick: This is the ability to essentially stream data from a GoldenGate trail file, and they do a real time analytics on it. And also things like geofencing or real-time series analysis of it. 15:40 Lois: Could you give us an example of this? Nick: If I'm working in tracking stock market information and stocks, it's not really that important on how much or how far down a stock goes. What's really important is how quickly did that stock rise or how quickly did that stock fall. And that's something that GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone and I can track where the user is based on that application and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself. 
And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview. Lois: In our next episode, we’ll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
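Editor's note: Nick's shopping-cart example for lock-free reservation columns can be tried against an Oracle Database 23ai instance. The sketch below uses the python-oracledb driver with placeholder connection details; the RESERVABLE keyword in the DDL is my best recollection of the 23ai syntax and should be verified against the Oracle Database 23ai documentation before relying on it.

import oracledb

# Placeholder connection details -- replace with your own 23ai database.
conn = oracledb.connect(user="demo", password="demo_pwd", dsn="db23ai_high")
cur = conn.cursor()

# Assumed 23ai DDL: a reservable "inventory" column, as described in the episode.
cur.execute("""
    CREATE TABLE product_inventory (
        product_id NUMBER PRIMARY KEY,
        on_hand    NUMBER RESERVABLE CHECK (on_hand >= 0)
    )""")
cur.execute("INSERT INTO product_inventory VALUES (100, 500)")
conn.commit()

# Each shopping-cart transaction subtracts from the reservable column. With
# lock-free reservations, several open transactions can do this against the
# same row at once; the check constraint is what ultimately stops overselling.
cur.execute(
    "UPDATE product_inventory SET on_hand = on_hand - :qty WHERE product_id = :id",
    {"qty": 3, "id": 100},
)
conn.commit()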
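Editor's note: Veridata's trick of comparing while data is in flight, then re-checking only the suspect rows, is easy to see in miniature. The following is a conceptual Python sketch of that two-pass idea, not Veridata itself; the in-memory "source" and "target" dictionaries stand in for real row hashes pulled from each database.

import time

# Stand-in data so the sketch runs end to end; in practice fetch_rows would
# query the source and target databases and hash each row.
_SOURCE = {1: "a", 2: "b", 3: "c"}
_TARGET = {1: "a", 2: "b-old", 3: "c"}

def fetch_rows(system, keys=None):
    data = _SOURCE if system == "source" else _TARGET
    if keys is None:
        return dict(data)
    return {k: data[k] for k in keys if k in data}

def compare_once(keys=None):
    source = fetch_rows("source", keys)
    target = fetch_rows("target", keys)
    all_keys = keys if keys is not None else set(source) | set(target)
    return [k for k in all_keys if source.get(k) != target.get(k)]

def veridata_style_compare(recheck_delay_seconds=1):
    # Pass 1: full comparison. Rows can look out of sync simply because the
    # change is still in flight in the replication stream.
    candidates = compare_once()
    if not candidates:
        return []
    # Pass 2: wait out the replication lag, then re-check only the candidates.
    # Whatever still differs is genuinely out of sync and can be repaired.
    time.sleep(recheck_delay_seconds)
    return compare_once(keys=set(candidates))

if __name__ == "__main__":
    print("Rows still out of sync:", veridata_style_compare())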
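Editor's note: the geofencing scenario Nick walks through, detecting when an app user comes within range of a store, reduces to a distance test applied to each location change as it streams by. Below is a generic illustration in Python using the haversine formula; the store coordinates and radius are invented, and this is plain math rather than the GoldenGate Stream Analytics product.

from math import radians, sin, cos, atan2, sqrt

EARTH_RADIUS_KM = 6371.0

# Made-up store location and geofence radius for illustration.
STORE = (30.2672, -97.7431)        # latitude, longitude
GEOFENCE_RADIUS_KM = 0.5

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return EARTH_RADIUS_KM * 2 * atan2(sqrt(a), sqrt(1 - a))

def on_location_update(user_id, lat, lon):
    """Called for each position change streaming through; decide whether to notify."""
    if haversine_km(lat, lon, *STORE) <= GEOFENCE_RADIUS_KM:
        print(f"push notification to {user_id}: your favourite drink is one tap away")

if __name__ == "__main__":
    on_location_update("user-42", 30.2690, -97.7420)   # inside the fence
    on_location_update("user-43", 30.4000, -97.9000)   # outside the fence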
 
In a new season of the Oracle University Podcast, Lois Houston and Nikita Abraham dive into the world of Oracle GoldenGate 23ai, a cutting-edge software solution for data management. They are joined by Nick Wagner, a seasoned expert in database replication, who provides a comprehensive overview of this powerful tool. Nick highlights GoldenGate's ability to ensure continuous operations by efficiently moving data between databases and platforms with minimal overhead. He emphasizes its role in enabling real-time analytics, enhancing data security, and reducing costs by offloading data to low-cost hardware. The discussion also covers GoldenGate's role in facilitating data sharing, improving operational efficiency, and reducing downtime during outages. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston: Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. This time, we’re focusing on the fundamentals of Oracle GoldenGate. Oracle GoldenGate helps organizations manage and synchronize their data across diverse systems and databases in real time. And with the new Oracle GoldenGate 23ai release, we’ll uncover the latest innovations and features that empower businesses to make the most of their data. Nikita: Taking us through this is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. He’s been doing database replication for about 25 years and has been focused on GoldenGate on and off for about 20 of those years. 01:18 Lois: In today’s episode, we’ll ask Nick to give us a general overview of the product, along with some use cases and benefits. Hi Nick! To start with, why do customers need GoldenGate? Nick: Well, it delivers continuous operations, being able to continuously move data from one database to another database or data platform in efficiently and a high-speed manner, and it does this with very low overhead. Almost all the GoldenGate environments use transaction logs to pull the data out of the system, so we're not creating any additional triggers or very little overhead on that source system. GoldenGate can also enable real-time analytics, being able to pull data from all these different databases and move them into your analytics system in real time can improve the value that those analytics systems provide. Being able to do real-time statistics and analysis of that data within those high-performance custom environments is really important. 02:13 Nikita: Does it offer any benefits in terms of cost? Nick: GoldenGate can also lower IT costs. A lot of times people run these massive OLTP databases, and they are running reporting in those same systems. 
With GoldenGate, you can offload some of the data or all the data to a low-cost commodity hardware where you can then run the reports on that other system. So, this way, you can get back that performance on the OLTP system, while at the same time optimizing your reporting environment for those long running reports. You can improve efficiencies and reduce risks. Being able to reduce the amount of downtime during planned and unplanned outages can really make a big benefit to the overall operational efficiencies of your company. 02:54 Nikita: What about when it comes to data sharing and data security? Nick: You can also reduce barriers to data sharing. Being able to pull subsets of data, or just specific pieces of data out of a production database and move it to the team or to the group that needs that information in real time is very important. And it also protects the security of your data by only moving in the information that they need and not the entire database. It also provides extensibility and flexibility, being able to support multiple different replication topologies and architectures. 03:24 Lois: Can you tell us about some of the use cases of GoldenGate? Where does GoldenGate truly shine? Nick: Some of the more traditional use cases of GoldenGate include use within the multicloud fabric. Within a multicloud fabric, this essentially means that GoldenGate can replicate data between on-premise environments, within cloud environments, or hybrid, cloud to on-premise, on-premise to cloud, or even within multiple clouds. So, you can move data from AWS to Azure to OCI. You can also move between the systems themselves, so you don't have to use the same database in all the different clouds. For example, if you wanted to move data from AWS Postgres into Oracle running in OCI, you can do that using Oracle GoldenGate. We also support maximum availability architectures. And so, there's a lot of different use cases here, but primarily geared around reducing your recovery point objective and recovery time objective. 04:20 Lois: Ah, reducing RPO and RTO. That must have a significant advantage for the customer, right? Nick: So, reducing your RPO and RTO allows you to take advantage of some of the benefits of GoldenGate, being able to do active-active replication, being able to set up GoldenGate for high availability, real-time failover, and it can augment your active Data Guard and Data Guard configuration. So, a lot of times GoldenGate is used within Oracle's maximum availability architecture platinum tier level of replication, which means that at that point you've got lots of different capabilities within the Oracle Database itself. But to help eke out that last little bit of high availability, you want to set up an active-active environment with GoldenGate to really get true zero RPO and RTO. GoldenGate can also be used for data offloading and data hubs. Being able to pull data from one or more source systems and move it into a data hub, or into a data warehouse for your operational reporting. This could also be your analytics environment too. 05:22 Nikita: Does GoldenGate support online migrations? Nick: In fact, a lot of companies actually get started in GoldenGate by doing a migration from one platform to another. Now, these don't even have to be something as complex as going from one database like a DB2 on-premise into an Oracle on OCI, it could even be simple migrations. 
A lot of times doing something like a major application or a major database version upgrade is going to take downtime on that production system. You can use GoldenGate to eliminate that downtime. So this could be going from Oracle 19c to Oracle 23ai, or going from application version 1.0 to application version 2.0, because GoldenGate can do the transformation between the different application schemas. You can use GoldenGate to migrate your database from on premise into the cloud with no downtime as well. We also support real-time analytic feeds, being able to go from multiple databases, not only those on premise, but being able to pull information from different SaaS applications inside of OCI and move it to your different analytic systems. And then, of course, we also have the ability to stream events and analytics within GoldenGate itself. 06:34 Lois: Let's move on to the various topologies supported by GoldenGate. I know GoldenGate supports many different platforms and can be used with just about any database. Nick: This first layer of topologies is what we usually consider relational database topologies. And so this would be moving data from Oracle to Oracle, Postgres to Oracle, Sybase to SQL Server, a lot of different types of databases. So the first architecture would be unidirectional. This is replicating from one source to one target. You can do this for reporting. If I wanted to offload some reports into another server, I can go ahead and do that using GoldenGate. I can replicate the entire database or just a subset of tables. I can also set up GoldenGate for bidirectional, and this is what I want to set up GoldenGate for something like high availability. So in the event that one of the servers crashes, I can almost immediately reconnect my users to the other system. And that almost immediately depends on the amount of latency that GoldenGate has at that time. So a typical latency is anywhere from 3 to 6 seconds. So after that primary system fails, I can reconnect my users to the other system in 3 to 6 seconds. And I can do that because as GoldenGate’s applying data into that target database, that target system is already open for read and write activity. GoldenGate is just another user connecting in issuing DML operations, and so it makes that failover time very low. 07:59 Nikita: Ok…If you can get it down to 3 to 6 seconds, can you bring it down to zero? Like zero failover time? Nick: That's the next topology, which is active-active. And in this scenario, all servers are read/write all at the same time and all available for user activity. And you can do multiple topologies with this as well. You can do a mesh architecture, which is where every server talks to every other server. This works really well for 2, 3, 4, maybe even 5 environments, but when you get beyond that, having every server communicate with every other server can get a little complex. And so at that point we start looking at doing what we call a hub and spoke architecture, where we have lots of different spokes. At the end of each spoke is a read/write database, and then those communicate with a hub. So any change that happens on one spoke gets sent into the hub, and then from the hub it gets sent out to all the other spokes. And through that architecture, it allows you to really scale up your environments. We have customers that are doing up to 150 spokes within that hub architecture. 
Within active-active replication as well, we can do conflict detection and resolution, which means that if two users modify the same row on two different systems, GoldenGate can actually determine that there was an issue with that and determine what user wins or which row change wins, which is extremely important when doing active-active replication. And this means that if one of those systems fails, there is no downtime when you switch your users to another active system because it's already available for activity and ready to go. 09:35 Lois: Wow, that’s fantastic. Ok, tell us more about the topologies. Nick: GoldenGate can do other things like broadcast, sending data from one system to multiple systems, or many to one as far as consolidation. We can also do cascading replication, so when data moves from one environment that GoldenGate is replicating into another environment that GoldenGate is replicating. By default, we ignore all of our own transactions. But there's actually a toggle switch that you can flip that says, hey, GoldenGate, even though you wrote that data into that database, still push it on to the next system. And then of course, we can also do distribution of data, and this is more like moving data from a relational database into something like a Kafka topic or a JMS queue or into some messaging service. 10:24 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting visit oracle.com/education. 10:58 Nikita: Welcome back! Nick, does GoldenGate also have nonrelational capabilities? Nick: We have a number of nonrelational replication events in topologies as well. This includes things like data lake ingestion and streaming ingestion, being able to move data and data objects from these different relational database platforms into data lakes and into these streaming systems where you can run analytics on them and run reports. We can also do cloud ingestion, being able to move data from these databases into different cloud environments. And this is not only just moving it into relational databases with those clouds, but also their data lakes and data fabrics. 11:38 Lois: You mentioned a messaging service earlier. Can you tell us more about that? Nick: Messaging replication is also possible. So we can actually capture from things like messaging systems like Kafka Connect and JMS, replicate that into a relational data, or simply stream it into another environment. We also support NoSQL replication, being able to capture from MongoDB and replicate it onto another MongoDB for high availability or disaster recovery, or simply into any other system. 12:06 Nikita: I see. And is there any integration with a customer’s SaaS applications? Nick: GoldenGate also supports a number of different OCI SaaS applications. And so a lot of these different applications like Oracle Financials Fusion, Oracle Transportation Management, they all have GoldenGate built under the covers and can be enabled with a flag that you can actually have that data sent out to your other GoldenGate environment. So you can actually subscribe to changes that are happening in these other systems with very little overhead. 
And then of course, we have event processing and analytics, and this is the final topology or flexibility within GoldenGate itself. And this is being able to push data through data pipelines, doing data transformations. GoldenGate is not an ETL tool, but it can do row-level transformation and row-level filtering. 12:55 Lois: Are there integrations offered by Oracle GoldenGate in automation and artificial intelligence? Nick: We can do time series analysis and geofencing using the GoldenGate Stream Analytics product. It allows you to actually do real time analysis and time series analysis on data as it flows through the GoldenGate trails. And then that same product, the GoldenGate Stream Analytics, can then take the data and move it to predictive analytics, where you can run MML on it, or ONNX or other Spark-type technologies and do real-time analysis and AI on that information as it's flowing through. 13:29 Nikita: So, GoldenGate is extremely flexible. And given Oracle's focus on integrating AI into its product portfolio, what about GoldenGate? Does it offer any AI-related features, especially since the product name has “23ai” in it? Nick: With the advent of Oracle GoldenGate 23ai, it's one of the two products at this point that has the AI moniker at Oracle. Oracle Database 23ai also has it, and that means that we actually do stuff with AI. So the Oracle GoldenGate product can actually capture vectors from databases like MySQL HeatWave, Postgres using pgvector, which includes things like AlloyDB, Amazon RDS Postgres, Aurora Postgres. We can also replicate data into Elasticsearch and OpenSearch, or if the data is using vectors within OCI or the Oracle Database itself. So GoldenGate can be used for a number of things here. The first one is being able to migrate vectors into the Oracle Database. So if you're using something like Postgres, MySQL, and you want to migrate the vector information into the Oracle Database, you can. Now one thing to keep in mind here is a vector is oftentimes like a GPS coordinate. So if I need to know the GPS coordinates of Austin, Texas, I can put in a latitude and longitude and it will give me the GPS coordinates of a building within that city. But if I also need to know the altitude of that same building, well, that's going to be a different algorithm. And GoldenGate and replicating vectors is the same way. When you create a vector, it's essentially just creating a bunch of numbers under the screen, kind of like those same GPS coordinates. The dimension and the algorithm that you use to generate that vector can be different across different databases, but the actual meaning of that data will change. And so GoldenGate can replicate the vector data as long as the algorithm and the dimensions are the same. If the algorithm and the dimensions are not the same between the source and the target, then you'll actually want GoldenGate to replicate the base data that created that vector. And then once GoldenGate replicates the base data, it'll actually call the vector embedding technology to re-embed that data and produce that numerical formatting for you. 15:42 Lois: So, there are some nuances there… Nick: GoldenGate can also replicate and consolidate vector changes or even do the embedding API calls itself. This is really nice because it means that we can take changes from multiple systems and consolidate them into a single one. We can also do the reverse of that too. A lot of customers are still trying to find out which algorithms work best for them. 
How many dimensions? What's the optimal use? Well, you can now run those in different servers without impacting your actual AI system. Once you've identified which algorithm and dimension is going to be best for your data, you can then have GoldenGate replicate that into your production system and we'll start using that instead. So it's a nice way to switch algorithms without taking extensive downtime. 16:29 Nikita: What about in multicloud environments? Nick: GoldenGate can also do multicloud and N-way active-active Oracle replication between vectors. So if there's vectors in Oracle databases, in multiple clouds, or multiple on-premise databases, GoldenGate can synchronize them all up. And of course we can also stream changes from vector information, including text as well into different search engines. And that's where the integration with Elasticsearch and OpenSearch comes in. And then we can use things like NVIDIA and Cohere to actually do the AI on that data. 17:01 Lois: Using GoldenGate with AI in the database unlocks so many possibilities. Thanks for that detailed introduction to Oracle GoldenGate 23ai and its capabilities, Nick. Nikita: We’ve run out of time for today, but Nick will be back next week to talk about how GoldenGate has evolved over time and its latest features. And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course to learn more. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
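Editor's note: conflict detection and resolution in an active-active topology ultimately needs a deterministic rule for deciding which change wins when two systems update the same row. The Python sketch below shows one common rule, latest timestamp wins, purely as an illustration of the idea; it is not how GoldenGate's CDR is configured.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RowChange:
    site: str
    key: int
    values: dict
    modified_at: datetime     # a trusted change timestamp carried with the row

def resolve_conflict(local: RowChange, incoming: RowChange) -> RowChange:
    """Latest-timestamp-wins resolution. Ties fall back to a site name
    comparison so every site reaches the same answer independently."""
    if incoming.modified_at != local.modified_at:
        return max(local, incoming, key=lambda c: c.modified_at)
    return min(local, incoming, key=lambda c: c.site)

if __name__ == "__main__":
    a = RowChange("site_a", 7, {"email": "a@example.com"},
                  datetime(2025, 5, 1, 12, 0, 0, tzinfo=timezone.utc))
    b = RowChange("site_b", 7, {"email": "b@example.com"},
                  datetime(2025, 5, 1, 12, 0, 3, tzinfo=timezone.utc))
    print("winner:", resolve_conflict(a, b).site)   # site_b -- the later change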
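Editor's note: Nick's GPS analogy for vectors boils down to a simple decision: copy the vector only when the embedding algorithm and dimension count match on both sides, otherwise replicate the base data and re-embed it on the target. A minimal Python sketch of that decision follows; the embed() function and the model names are hypothetical stand-ins for whatever embedding service you actually call.

from dataclasses import dataclass

@dataclass
class VectorColumn:
    model: str        # embedding algorithm used to produce the vectors
    dimensions: int   # vector dimension count

def embed(text: str, model: str) -> list[float]:
    """Hypothetical embedding call -- in practice this would invoke your
    embedding provider for the target's model."""
    return [float(len(text))] * 3   # placeholder values so the sketch runs

def replicate_vector(source: VectorColumn, target: VectorColumn,
                     base_text: str, source_vector: list[float]) -> list[float]:
    # Same algorithm and same dimensions: the numbers mean the same thing on
    # both sides, so the vector itself can be copied across.
    if source.model == target.model and source.dimensions == target.dimensions:
        return source_vector
    # Otherwise the numbers are not comparable (the altitude-versus-latitude
    # problem in the transcript): ship the base data and re-embed it.
    return embed(base_text, target.model)

if __name__ == "__main__":
    src = VectorColumn(model="example-embed-a", dimensions=1024)   # hypothetical models
    tgt = VectorColumn(model="example-embed-b", dimensions=1536)
    print(replicate_vector(src, tgt, "GoldenGate 23ai", [0.1] * 1024))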
 
Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX. Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today’s episode, we’re going to focus on integrating APEX with OCI AI Services. Lois: That’s right, Niki. We’re going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we’ve got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. In today’s episode, we’ll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let’s start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models-- Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in cancer X-ray images to detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. 
But the possibilities are endless and you can use OCI Vision for any of your image analysis. 03:29 Nikita: Let’s shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it’s all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants. It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications become easier with unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, conversational designer, dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profile. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific type of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select list. When a user request is submitted through a channel, the Digital Assistant routes the user's request to the most appropriate skill to satisfy the user's request. Skills can combine multilingual NLP deep learning engine, a powerful dialogflow engine, and integration components to connect to back-end systems. Skills provide a modular way to build your chatbot functionality. Now users connect with a chatbot through channels such as Facebook, Microsoft Teams, or in our case, Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That’s fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 
07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we’re waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let’s talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are Optical Character Recognition (OCR) enables the service to extract text from documents, allowing you to digitize, scan the documents effortlessly. You can precisely extract text content from documents. Key-value extraction, useful in streamlining tasks like invoice processing. Table extraction can intelligently extract tabular data from documents. Document classification automatically categorizes documents based on their content. OCR PDF enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models. That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models. Models available for training are key value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location. And in your request, you can point the Document Understanding service to pick the file from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. 
After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information at local table or display it on the screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, streamline tax document management. Under manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, identify and extract diagnostic codes. This is not an exhaustive list, but provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let’s switch to the big topic everyone’s excited about—the OCI Generative AI Service. What exactly is it? Toufiq: OCI Generative AI is a fully managed service that provides a set of state of the art, customizable large language models that cover a wide range of use cases. It provides enterprise grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2 developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, converting text to vector embeddings representation. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain and open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request. The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering, etc. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI generative AI use cases for your Oracle APEX apps include text summarization. 
Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to diverse audience. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts, internal knowledge sources, Generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all freely managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language. At the very basic, what Select AI does is it generates SQL queries using natural language, like an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI does is look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using LLM. The prompt prepared by Select AI is sent to the available LLM services via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts, etc. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you’re ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You’ll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
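Editor's note: Toufiq describes Document Understanding writing its results as JSON to an Object Storage output bucket, which the APEX application then reads and parses. The parsing step looks roughly like the Python sketch below; the field names in the sample payload are assumptions modeled on a typical key-value extraction result, so check them against a real output file from your tenancy.

import json

# A cut-down, assumed shape for a key-value extraction result; real output from
# the service will have more fields and possibly different names.
SAMPLE_RESULT = """
{
  "pages": [
    {
      "documentFields": [
        {"fieldType": "KEY_VALUE",
         "fieldLabel": {"name": "InvoiceId"},
         "fieldValue": {"value": "INV-0042"}},
        {"fieldType": "KEY_VALUE",
         "fieldLabel": {"name": "AmountDue"},
         "fieldValue": {"value": "1,250.00"}}
      ]
    }
  ]
}
"""

def extract_key_values(result_json: str) -> dict:
    """Flatten key-value pairs out of the (assumed) result document."""
    doc = json.loads(result_json)
    pairs = {}
    for page in doc.get("pages", []):
        for field in page.get("documentFields", []):
            if field.get("fieldType") == "KEY_VALUE":
                pairs[field["fieldLabel"]["name"]] = field["fieldValue"]["value"]
    return pairs

if __name__ == "__main__":
    print(extract_key_values(SAMPLE_RESULT))   # {'InvoiceId': 'INV-0042', 'AmountDue': '1,250.00'}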
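Editor's note: Chaitanya explains that a digital assistant routes each user request to the most appropriate skill. Purely as a conceptual illustration of that routing step (real skills rely on trained NLU models, not keyword counting), a toy router in Python might look like this; the skill names and keyword sets are invented.

# Toy illustration of routing an utterance to a skill; Oracle Digital Assistant
# uses its multilingual NLP engine for this, not keyword matching.
SKILLS = {
    "expenses":  {"expense", "receipt", "reimburse", "report"},
    "inventory": {"stock", "reorder", "inventory", "supplies"},
    "hr_help":   {"vacation", "leave", "benefits", "payroll"},
}

def route(utterance: str) -> str:
    words = set(utterance.lower().split())
    scores = {skill: len(words & keywords) for skill, keywords in SKILLS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

if __name__ == "__main__":
    print(route("I need to submit an expense report for my receipt"))  # expenses
    print(route("what is the weather today"))                          # fallback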
 
Get ready to explore how generative AI is transforming development in Oracle APEX. In this episode, hosts Lois Houston and Nikita Abraham are joined by Oracle APEX experts Apoorva Srinivas and Toufiq Mohammed to break down the innovative features of APEX 24.1. Learn how developers can use APEX Assistant to build apps, generate SQL, and create data models using natural language prompts. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about Oracle APEX and AI. We covered the data and AI -centric challenges businesses are up against and explored how AI fits in with Oracle APEX. Niki, what’s in store for today? Nikita: Well, Lois, today we’re diving into how generative AI powers Oracle APEX. With APEX 24.1, developers can use the Create Application Wizard to tell APEX what kind of application they want to build based on available tables. Plus, APEX Assistant helps create, refine, and debug SQL code in natural language. 01:16 Lois: Right. Today’s episode will focus on how generative AI enhances development in APEX. We’ll explore its architecture, the different AI providers, and key use cases. Joining us are two senior product managers from Oracle—Apoorva Srinivas and Toufiq Mohammed. Thank you both for joining us today. We’ll start with you, Apoorva. Can you tell us a bit about the generative AI service in Oracle APEX? Apoorva: It is nothing but an abstraction to the popular commercial Generative AI products, like OCI Generative AI, OpenAI, and Cohere. APEX makes use of the existing REST infrastructure to authenticate using the web credentials with Generative AI Services. Once you configure the Generative AI Service, it can be used by the App Builder, AI Assistant, and AI Dynamic Actions, like Show AI Assistant and Generate Text with AI, and also the APEX_AI PL/SQL API. You can enable or disable the Generative AI Service on the APEX instance level and on the workspace level. 02:31 Nikita: Ok. Got it. So, Apoorva, which AI providers can be configured in the APEX Gen AI service? Apoorva: First is the popular OpenAI. If you have registered and subscribed for an OpenAI API key, you can just enter the API key in your APEX workspace to configure the Generative AI service. APEX makes use of the chat completions endpoint in OpenAI. Second is the OCI Generative AI Service. Once you have configured an OCI API key on Oracle Cloud, you can make use of the chat models. The chat models are available from Cohere family and Meta Llama family. The third is the Cohere. The configuration of Cohere is similar to OpenAI. 
You need to have your Cohere API key. And it provides a similar chat functionality using the chat endpoint. 03:29 Lois: What is the purpose of the APEX_AI PL/SQL public API that we now have? How is it used within the APEX ecosystem? Apoorva: It models the chat operation of the popular Generative AI REST Services. This is the same package used internally by the chat widget of the APEX Assistant. There are more procedures around consent management, which you can configure using this package. 03:58 Lois: Apoorva, at a high level, how does generative AI fit into the APEX environment? Apoorva: APEX makes use of the existing REST infrastructure—that is the web credentials and remote server—to configure the Generative AI Service. The inferencing is done by the backend Generative AI Service. For the Generative AI use case in APEX, such as NL2SQL and creation of an app, APEX performs the prompt enrichment. 04:29 Nikita: And what exactly is prompt enrichment? Apoorva: Let's say you provide a prompt saying "show me the average salary of employees in each department." APEX will take this prompt and enrich it by adding in more details. It elaborates on the prompt by mentioning the requirements, such as Oracle SQL syntax, and providing some metadata from the data dictionary of APEX. Once the prompt enrichment is complete, it is then passed on to the LLM inferencing service. Therefore, the SQL query provided by the AI Assistant is more accurate and in context. 05:15 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:41 Nikita: Welcome back! Let's talk use cases. Apoorva, can you share some ways developers can use generative AI with APEX? Apoorva: SQL is an integral part of building APEX apps. You use SQL everywhere. You can make use of the NL2SQL feature in the code editor by using the APEX Assistant to generate SQL queries while building the apps. The second is the prompt-based app creation. With APEX Assistant, you can now generate fully functional APEX apps by providing prompts in natural language. Third is the AI Assistant, which is a chat widget provided by APEX in all the code editors and for creation of apps. You can chat with the AI Assistant by providing your prompts and get responses from the Generative AI Services. 06:37 Lois: Without getting too technical, can you tell us how to create a data model using AI? Apoorva: A SQL Workshop utility called Create Data Model Using AI uses AI to help you create your own data model. The APEX Assistant generates a script to create tables, triggers, and constraints in either Oracle SQL or Quick SQL format. You can also insert sample data into these tables. But before you use this feature, you must create a generative AI service and enable the Used by App Builder setting. If you are using the Oracle SQL format, when you click on Create SQL Script, APEX generates the script and brings you to the script editor page. Whereas if you are using the Quick SQL format, when you click on Review Quick SQL, APEX generates the Quick SQL code and brings you to the Quick SQL page. 07:39 Lois: And to see a detailed demo of creating a custom data model with the APEX Assistant, visit mylearn.oracle.com and search for the "Oracle APEX: Empowering Low Code Apps with AI" course. 
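As a rough picture of what the Create Data Model Using AI utility hands back in Oracle SQL format, a generated script for a simple projects-and-tasks prompt might look like the following. The table names, columns, and constraints here are hypothetical, not literal output from the demo, and the actual script depends on your prompt and the configured LLM.

-- Hypothetical shape of a generated data model script: tables with
-- identity primary keys, a foreign key, a check constraint, and sample data.
CREATE TABLE projects (
    id          NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
                CONSTRAINT projects_id_pk PRIMARY KEY,
    name        VARCHAR2(255 CHAR) NOT NULL,
    start_date  DATE,
    status      VARCHAR2(30 CHAR)
);

CREATE TABLE tasks (
    id          NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
                CONSTRAINT tasks_id_pk PRIMARY KEY,
    project_id  NUMBER
                CONSTRAINT tasks_project_fk REFERENCES projects ON DELETE CASCADE,
    title       VARCHAR2(255 CHAR) NOT NULL,
    due_date    DATE,
    completed   VARCHAR2(1 CHAR) DEFAULT 'N'
                CONSTRAINT tasks_completed_ck CHECK (completed IN ('Y','N'))
);

-- The utility can also generate sample rows along these lines.
INSERT INTO projects (name, start_date, status)
VALUES ('Website refresh', SYSDATE, 'OPEN');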
Apoorva, what about creating an APEX app from a prompt? What's that process like? Apoorva: APEX 24.1 introduces a new feature where you can generate an application blueprint based on a prompt using natural language. The APEX Assistant leverages the APEX Dictionary Cache to identify relevant tables while suggesting the pages to be created for your application. You can iterate over the application design by providing further prompts using natural language and then generating an application based on your needs. Once you are satisfied, you can click on Create Application, which takes you to the Create Application Wizard in APEX, where you can further customize your application, such as the application icon and other features, and finally, go ahead and create your application. 08:53 Nikita: Again, you can watch a demo of this on MyLearn. So, check that out if you want to dive deeper. Lois: That's right, Niki. Thank you for these great insights, Apoorva! Now, let's turn to Toufiq. Toufiq, can you tell us more about the APEX Assistant feature in Oracle APEX? What is it and how does it work? Toufiq: APEX Assistant is available in Code Editors in the APEX App Builder. It leverages generative AI services as the backend to answer your questions asked in natural language. APEX Assistant makes use of the APEX dictionary cache to identify relevant tables while generating SQL queries. Using the Query Builder mode of the Assistant, you can generate SQL queries from natural language for Form, Report, and other region types that support SQL queries. Using the general assistance mode, you can generate PL/SQL, JavaScript, HTML, or CSS code, and seek further assistance from generative AI. For example, you can ask the APEX Assistant to optimize the code, format the code for better readability, add comments, etc. APEX Assistant also comes with two quick actions, Improve and Explain, which can help users improve and understand the selected code. 10:17 Nikita: What about the Show AI Assistant dynamic action? I know that it provides an AI chat interface, but can you tell us a little more about it? Toufiq: It is a native dynamic action in Oracle APEX which renders an AI chat user interface. It leverages the generative AI services that are configured under Workspace utilities. This AI chat user interface can be rendered inline or as a dialog. This dynamic action also has configurable system prompt and welcome message attributes. 10:52 Lois: Are there attributes you can configure to leverage even more customization? Toufiq: The first attribute is the initial prompt. The initial prompt represents a message as if it were coming from the user. This can either be a specific item value or a value derived from a JavaScript expression. The next attribute is use response. This attribute determines how the AI Assistant should return responses. The term response refers to the message content of an individual chat message. You have the option to capture this response directly into a page item, or to process it based on more complex logic using JavaScript code. The final attribute is quick actions. A quick action is a predefined phrase that, once clicked, will be sent as a user message. Quick actions defined here show up as chips in the AI chat interface, which a user can click to send the message to the Generative AI service without having to manually type in the message. 12:05 Lois: Thank you, Toufiq and Apoorva, for joining us today. 
Like we were saying, there’s a lot more you can find in the “Oracle APEX: Empowering Low Code Apps with AI” course on MyLearn. So, make sure you go check that out. Nikita: Join us next week for a discussion on how to integrate APEX with OCI AI Services. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 12:28 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
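To picture the Query Builder flow Toufiq walks through above, asking the APEX Assistant something like "show me the average salary of employees in each department" could come back as a query along these lines. The EMPLOYEES and DEPARTMENTS tables are assumed here, mirroring Oracle's sample HR schema, and the real result depends on the tables in your workspace and the configured LLM.

-- Illustrative SQL of the kind the Assistant might return for a report region.
SELECT d.department_name,
       ROUND(AVG(e.salary), 2) AS avg_salary
  FROM employees e
  JOIN departments d
    ON d.department_id = e.department_id
 GROUP BY d.department_name
 ORDER BY avg_salary DESC;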
 
Lois Houston and Nikita Abraham kick off a new season of the podcast, exploring how Oracle APEX integrates with AI to build smarter low-code applications. They are joined by Chaitanya Koratamaddi, Director of Product Management at Oracle, who explains the basics of Oracle APEX, its global adoption, and the challenges it addresses for businesses managing and integrating data. They also explore real-world use cases of AI within the Oracle APEX ecosystem Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on Oracle APEX and how it integrates with AI to help you create powerful applications. This season is for everyone—from beginners and SQL developers to DBA data scientists and low-code enthusiasts. So, if you’re interested in using Oracle APEX to build low-code applications that have custom generative AI features, you’ll want to stay tuned in. 01:07 Lois: That’s right, Niki. Today, we're going to discuss Oracle APEX at a high level, starting with what it is. Then, we’ll cover a few business challenges related to data and AI innovation that organizations face, and learn how the powerful combination of APEX and AI can help overcome these challenges. 01:27 Nikita: To take us through it all, we’ve got Chaitanya Koratamaddi with us. Chaitanya is Director of Product Management for Oracle APEX. Hi Chaitanya! For anyone new to Oracle APEX, can you explain what it is and why it's so widely used? Chaitanya: Oracle APEX is the world's most popular enterprise low code application platform. APEX enables you to build secure and scalable enterprise-scale applications with world class features that can be deployed anywhere, cloud or on-premises. And with APEX, you can build applications 20 times faster with 100 times less code. APEX delivers the most productive way to develop and deploy mobile and web applications everywhere. 02:18 Lois: That’s impressive. So, what’s the adoption rate like for Oracle APEX? Chaitanya: As of today, there are 19 million plus APEX applications created globally. 5,000 plus APEX applications are created on a daily basis and there are 800,000 plus APEX developers worldwide. 60,000 plus customers in 150 countries across various industry verticals. And 75% of Fortune 500 companies use Oracle APEX. 02:56 Nikita: Wow, the numbers really speak for themselves, right? But Chaitanya, why are organizations adopting Oracle APEX at this scale? Or to put it differently, what’s the core business challenge that Oracle APEX is addressing? 
Chaitanya: From databases to all data, you know that the world is more connected and automated than ever. To drive new business value, organizations need to explore and exploit new sources of data that are generated from this connected world. That can be sounds, feeds, sensors, videos, images, and more. Businesses need to be able to work with all types of data and also make sure that it is available to be used together. Typically, businesses need to work on all data at a massive scale. For example, supply chains are no longer dependent just on inventory, demand, and order management signals. A manufacturer should be able to understand data describing global weather patterns and how it impacts their supply chains. Businesses need to pull in data from as many social sources as possible to understand how customer sentiment impacts product sales and corporate brands. Our customers need a data platform that ensures all this data works together seamlessly and easily. 04:38 Lois: So, you’re saying Oracle APEX is the platform that helps businesses manage and integrate data seamlessly. But data is just one part of the equation, right? Then there’s AI. How are the two related? Chaitanya: Before we start talking about Oracle AI, let's first talk about what customers are looking for and where they are struggling within their AI innovation. It all starts with data. For decades, working with data has largely involved dealing with structured data, whether it is your customer records in your CRM application and orders from your ERP database. Data was organized into database and tables, and when you needed to find some insights in your data, all you need to do is just use stored procedures and SQL queries to deliver the answers. But today, the expectations are higher. You want to use AI to construct sophisticated predictions, find anomalies, make decisions, and even take actions autonomously. And the data is far more complicated. It is in an endless variety of formats scattered all over your business. You need tools to find this data, consume it, and easily make sense of it all. And now capabilities like natural language processing, computer vision, and anomaly detection are becoming very essential just like how SQL queries used to be. You need to use AI to analyze phone call transcripts, support tickets, or email complaints so you can understand what customers need and how they feel about your products, customer service, and brand. You may want to use a data source as noisy and unstructured as social media data to detect trends and identify issues in real time. Today, AI capabilities are very essential to accelerate innovation, assess what's happening in your business, and most importantly, exceed the expectations of your customers. So, connecting your application, data, and infrastructure allows everyone in your business to benefit from data. 07:32 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:06 Nikita: Welcome back! So, let’s focus on AI across the Oracle Cloud ecosystem. How does Oracle bring AI into the mix to connect applications, data, and infrastructure for businesses? 
Chaitanya: By embedding AI throughout the entire technology stack from the infrastructure that businesses run on through the applications for every line of business, from finance to supply chain and HR, Oracle is helping organizations pragmatically use AI to improve performance while saving time, energy, and resources. Our core cloud infrastructure includes a unique AI infrastructure layer based on our supercluster technology, leveraging the latest and greatest hardware and uniquely able to get the maximum out of the AI infrastructure technology for scenarios such as large language processing. Then there is generative AI and ML for data platforms. On top of the AI infrastructure, our database layer embeds AI in our products such as autonomous database. With autonomous database, you can leverage large language models to use natural language queries rather than writing a SQL when interacting with the autonomous database. This enables you to achieve faster adoption in your application development. Businesses and their customers can use the Select AI natural language interface combined with Oracle Database AI Vector Search to obtain quicker, more intuitive insights into their own data. Then we have AI services. AI services are a collection of offerings, including generative AI with pre-built machine learning models that make it easier for developers to apply AI to applications and business operations. The models can be custom-trained for more accurate business results. 10:17 Nikita: And what specific AI services do we have at Oracle, Chaitanya? Chaitanya: We have Oracle Digital Assistant Speech, Language, Vision, and Document Understanding. Then we have Oracle AI for Applications. Oracle delivers AI built for business, helping you make better decisions faster and empowering your workforce to work more effectively. By embedding classic and generative AI into its applications, Fusion Apps customers can instantly access AI outcomes wherever they are needed without leaving the software environment they use every day to power their business. 11:02 Lois: Let’s talk specifically about APEX. How does APEX use the Gen AI and machine learning models in the stack to empower developers. How does it help them boost productivity? Chaitanya: Starting APEX 24.1, you can choose your preferred large language models and leverage native generative AI capabilities of APEX for AI assistants, prompt-based application creation, and more. Using native OCI capabilities, you can leverage native platform capabilities from OCI, like AI infrastructure and object storage, etc. Oracle APEX running on autonomous infrastructure in Oracle Cloud leverages its unique native generative AI capabilities tuned specifically on your data. These language models are schema aware, data aware, and take into account the shape of information, enabling your applications to take advantage of large language models pre-trained on your unique data. You can give your users greater insights by leveraging native capabilities, including vector-based similarity search, content summary, and predictions. You can also incorporate powerful AI features to deliver personalized experiences and recommendations, process natural language prompts, and more by integrating directly with a suite of OCI AI services. 12:38 Nikita: Can you give us some examples of this? Chaitanya: You can leverage OCI Vision to interpret visual and text inputs, including image recognition and classification. 
Or you can use OCI Speech to transcribe and understand spoken language, making both image and audio content accessible and actionable. You can work with disparate data sources like JSON, spatial, graphs, vectors, and build AI capabilities around your own business data. So, low-code application development with APEX along with AI is a very powerful combination. 13:22 Nikita: What are some use cases of AI-powered Oracle APEX applications? Chaitanya: You can build APEX applications to include conversational chatbots. Your APEX applications can include image and object detection capability. Your APEX applications can include speech transcription capability. And in your applications, you can include code generation that is natural language to SQL conversion capability. Your applications can be powered by semantic search capability. Your APEX applications can include text generation capability. 14:00 Lois: So, there’s really a lot we can do! Thank you, Chaitanya, for joining us today. With that, we’re wrapping up this episode. We covered Oracle APEX, the key challenges businesses face when it comes to AI innovation, and how APEX and AI work together to give businesses an AI edge. Nikita: Yeah, and if you want to know more about Oracle APEX, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. Join us next week for a discussion on AI-assisted development in Oracle APEX. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 14:39 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
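Since semantic search and Oracle Database AI Vector Search come up here, this is a small sketch of the Database 23ai building blocks behind that capability. The table name, vector dimension, and bind variable are assumptions, and in practice the query vector would be produced by an embedding model rather than supplied by hand.

-- Store text chunks next to their embeddings using the 23ai VECTOR type.
CREATE TABLE doc_chunks (
    id        NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    chunk     VARCHAR2(4000 CHAR),
    embedding VECTOR(384, FLOAT32)
);

-- Semantic search: return the five chunks closest in meaning to the question,
-- where :query_vec holds the embedding of the user's question.
SELECT id, chunk
  FROM doc_chunks
 ORDER BY VECTOR_DISTANCE(embedding, :query_vec, COSINE)
 FETCH FIRST 5 ROWS ONLY;

An APEX page can run a query of this shape behind a search region, which is one way the semantic search capability Chaitanya lists can surface inside a low-code app.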
 
In this special episode of the Oracle University Podcast, Bill Lawson and Nikita Abraham chat with Peter Fernandez, Senior Director of Cloud Certification at Oracle University, about the exciting new Raise Your Game challenge. They discuss how the initiative is designed to enhance participants' skills in Oracle Fusion Cloud Applications and Oracle Cloud Success Navigator. They also cover key details about the challenge, such as how to get started, who can participate, the way it is structured, and the prizes up for grabs. Raise Your Game: https://education.oracle.com/raise-your-game-saas Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Bill: Welcome to the Oracle University Podcast. I’m Bill Lawson, Senior Director of Cloud Applications Product Management with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week, we concluded our three-part series on multicloud, and today, we’re shifting gears and exploring an exciting new challenge that’s been thrown down by Oracle University. To tell us all about it, we have Peter Fernandez joining us. Peter is Senior Director of Cloud Certification at Oracle University. Hi Peter! We’re thrilled to have you with us today! Peter: Hi Niki, hi Bill! I’m delighted to be here. 01:02 Bill: So, Peter, let’s get straight into it. What’s this new challenge all about? Peter: The challenge, which we’re calling Raise Your Game, is an incredible opportunity for anyone looking to gain knowledge and gain professional skills about Oracle’s Fusion Cloud Applications. We launched a skills challenge on Feb 14, and it will continue until May 15, 2025. This challenge encourages you to build expertise in two key areas: Oracle Fusion Cloud Applications and Oracle Cloud Success Navigator. This training is geared towards anyone who could be a student in higher ed or someone pursuing a business degree, and Oracle customers and partners who are new to Oracle’s Applications or experienced consultants implementing business applications. 01:55 Nikita: And how exactly does the challenge help in building this expertise? Peter: The challenge has two levels. In Level 1, you’ll need to complete an Oracle Fusion Cloud Apps Foundations course and pass the corresponding exam. These courses are designed to deepen your understanding of the technology enablers in Oracle’s Fusion Cloud Applications and learn about Oracle’s Modern Best Practice, or OMBP. These are extremely helpful throughout all phases in the journey when implementing and using Oracle Fusion Cloud Applications. The Foundation training itself covers a wide range of topics, including core OMBP processes, key performance metrics, implementation considerations, and technology enablers like AI, ML, mobile, and analytics. 02:49 Bill: Before we move on, Peter, can you tell us more about Oracle Modern Best Practice? 
We discussed it a few weeks back, but for anyone who missed that episode, it’ll be nice to get a quick refresher. Peter: Sure, Bill. Implementing Oracle Fusion Cloud Applications successfully is more than just technology—it’s about following best practices that drive efficiency and success and tie back to business requirements. Oracle Modern Best Practice represent years of accumulated experience, industry insights, and proven methodologies. It serves as a guiding framework for implementing efficient business processes within Oracle Fusion Cloud Applications. These best practices map the features and innovations within Oracle applications to the processes that customers perform every day, and that is key. These curated, industry-leading practices detail how the features that we have built using the most modern technologies can be leveraged to optimize operations. Having a solid grasp of an OMBP and its associated technology enablers will empower you to ensure smoother business operations and higher customer satisfaction. It will show you how to automate activities, streamline tasks, improve results, and set your team up for continued success. The goal of these courses is to make it easy for implementers, global process owners, IT teams to identify every opportunity to improve an organization’s business processes with Oracle Fusion Cloud Applications. 04:33 Bill: So, getting back to Level 1, what do I earn when I complete it? Peter: When you complete this level, you’ll earn a Level 1 Oracle University Learning Community badge. This recognizes that you have foundational knowledge in your chosen Fusion application. 04:48 Bill: That sounds exciting. And then there’s a Level 2? Peter: There is also a Level 2 and where things get even more exciting. You’re going to take your knowledge to the next level by completing the Oracle Cloud Success Navigator Essentials course and passing the associated assessment. This level in the challenge focuses on particularly applying the knowledge you gained in Level 1 where you’ll explore the Oracle Cloud Success Navigator's features and functionality, and get the skills you need to lead organizations through their Oracle Fusion Cloud Applications implementation journey. 05:25 Nikita: And when I complete Level 2, I earn another badge? Peter: That’s right, Niki. When you successfully complete Level 2, you’ll earn a special Level 2 Oracle University Learning Community badge. The goal of Raise Your Game is to reach the Summit by completing both Level 1 and Level 2 challenges with the fastest time and the highest pass scores. Both these combined determine your position on the leaderboard and your position in the Top 500, which will be awarded separate prizes at the end of the challenge. 05:59 Nikita: So, when you’re done, you’ll have both theoretical and practical knowledge. And I understand that there are some fantastic prizes up for grabs? Peter: Absolutely, Niki. This not only helps with both theoretical but also practical knowledge. Learners also have a chance to be featured on the leaderboard in the Oracle University Learning Community. The leaderboard showcases the people who have achieved Level 1 and Level 2 with the fastest times and the highest scores. 
Along with the badges I told you about, at the end of the promotion, the top 500 people who complete both Level 1 and Level 2 with the fastest time and highest pass scores will receive an Oracle-branded cap, an Oracle Success Navigator pin, and a special Oracle University Community Success Navigator digital badge. 06:52 Bill: So, Peter, who can participate in this challenge, and are there any prerequisites? Peter: The challenge is open to anyone interested in expanding their knowledge of Oracle Cloud Applications. And while there are no strict prerequisites, a basic understanding of business concepts and some familiarity with Oracle Cloud Applications is recommended. This will ensure that you’re able to make the most of the learning materials and engage with the content effectively. You can always check the program overview on the website if you have more questions about this challenge. We’ve got an FAQ posted there that should answer most anything you are curious about. 07:31 Bill: That’s good to know, Peter. And the fact that I get started no matter my level of experience is great news, too. Peter: Absolutely, Bill. Even if you are a beginner fresh out of college, or a seasoned pro like I mentioned earlier who has been implementing Oracle Fusion Cloud Applications (or other applications) for years, I would recommend the challenge and training to you. Basically, this training is for everyone. This program provides foundational knowledge to improve the implementation approach using Oracle Modern Best Practices. Even those individuals that are certified in Cloud Applications will benefit from learning how these modern best practices fit into their work. 08:12 Nikita: Ok, Peter, I’m ready to do it. How do I get started with the challenge? Peter: That’s great. The first step, of course, is to register. And you can do this by visiting oracle.com/education. That's Oracle's main site. oracle.com/education. And select the first tile that you'll see on the webpage, which is the Raise Your Game challenge. If you don't already have an Oracle MyLearn account, you'll need to create one and you'll be prompted to create one. This account gives you access to the Oracle MyLearning platform. Once you're registered, you'll have access to a curated list of learning paths and corresponding certifications. It's important that you review the official rules and promotion details before proceeding with the challenge. 09:12 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 09:40 Nikita: Welcome back! Ok, Peter, I’ve registered. What’s next? Peter: After you're done registering, you need to select the Cloud Application course that aligns with your interests and goals. We have courses on four different areas: Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. Once you complete the training and certification, you're done with Level 1. For Level 2, we have the Oracle Cloud Success Navigator Essentials course and assessment that you will need to complete. You can check your status on the leaderboard in the Oracle University Learning Community and share your progress on social media. Like I was saying, the time taken to complete each of these levels and the higher scores earned determines the Top 500 winners. 
10:36 Bill: And the best part of this challenge is that it's completely free, right? Peter: Absolutely. There is no cost associated with participating in the skills challenge. It is completely free for anyone, anywhere in the world to participate as long as they comply with the official rules of the promotion. You can take any or all of the Foundation Associate certification exams at no cost. With multiple free attempts, there is no time limit for completing the exams, but to be eligible for the prizes, you must complete the exams and assessments by May 15, 2025. That's midnight GMT. 11:16 Nikita: What if someone doesn't pass the certification exam on their first attempt? Peter: If someone does not pass the certification exam on their first attempt, we understand that not everyone does. We've made provisions for that. If you don't pass the foundations associate certification exam, you have the option to retake the exam many times over. 11:37 Nikita: Now, Peter, let's say someone has already registered for the Fusion Cloud Applications Foundations Associate certification exam before joining the skills challenge. Will their exam be considered for the prizes? Peter: Well, that's a great question, Niki. If someone has already registered for the exam before joining the challenge, their exam will be considered for the prizes as long as they first join the skills challenge. This ensures that everyone who engages with the challenge has a fair chance to win. 12:06 Bill: Does course content consumed before the start of the challenge count towards the awards and badges? Peter: Unfortunately no, Bill. Any content consumed or purchased before Feb 14, 2025, that's again 12 AM GMT, does not apply retroactively to awards or prizes in the Raise Your Game challenge. We want everyone to start on an equal footing here. 12:29 Nikita: What about certifications earned before the challenge began? Peter: Again, certifications earned before Feb 14, 2025, again 12 AM GMT, do not qualify for the promotion. That ensures again that the challenge is fair for all participants. 12:48 Nikita: Now, Peter, how many free exam attempts do participants get as part of the challenge? Peter: Since all the Oracle Fusion Cloud Applications Foundations Associate Certification exams are free, there is no limit to the number of attempts. Participants can take these exams as many times as they need to. 13:05 Bill: And, Peter, say I want to take more than one of the Foundations courses and exams. Can I do that? Peter: Absolutely. This is a great way for someone to learn about the different areas of business that they may be familiar with. As I mentioned earlier, the Oracle Fusion Cloud Applications Foundations training is a program to provide you with knowledge of OMBPs, Oracle Modern Best Practices, that is, and Fusion Cloud Applications. So, it's a great opportunity to cross-skill. You can earn all four certifications if you choose. 13:38 Bill: Peter, thank you so much for joining us today and telling us all about this challenge. It is a really fantastic opportunity for everyone, whether you’re new to Fusion Cloud Applications or an experienced implementation professional, to boost your Oracle Cloud Apps expertise. We’re really excited to try it out ourselves! Peter: A sincere thank you to you, Bill and Niki. It's been an absolute pleasure. I'd really encourage everyone to jump on this challenge. It's a great way to enhance your learning journey and have some fun along the way. Nikita: I couldn’t agree more! Thanks Peter. 
That’s a wrap on this episode. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Bill: And Bill Lawson, signing off! 14:19 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
 
The final episode of the multicloud series focuses on Oracle Database@Azure, a powerful cloud database solution. Hosts Lois Houston and Nikita Abraham, along with Senior Manager of CSS OU Cloud Delivery Samvit Mishra, discuss how this service allows customers to run Oracle databases within the Microsoft Azure data center, simplifying deployment and management. The discussion also highlights the benefits of native integration with Azure services, eliminating the need for complex networking setups. Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we’ve been talking about different aspects of multicloud. In the final episode of this three-part series, Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, joins us once again to tell us about the Oracle Database@Azure service. Hi Samvit! Thanks for being here today. Samvit: Hi Niki! Hi Lois! Happy to be back. 01:01 Lois: In our last episode, we spoke about the strategic partnership between Oracle and Microsoft, and specifically discussed the Oracle Interconnect for Azure. Nikita: Yeah, and Oracle Database@Azure is yet another addition to this partnership. What can you tell us about this service, Samvit? Samvit: The Oracle Database@Azure service, which was made generally available in 2023, runs right inside the Microsoft Azure data center and uses Azure networking. The entire Oracle Cloud Database Service infrastructure resides in the Azure data center, while it is managed by an expert Oracle Cloud Infrastructure operations team. It provides customers simple and secure access to Oracle Cloud database services within their chosen Azure deployment region, without getting into the complexity of managing networking between the cloud vendors. It is natively integrated with various Microsoft Azure services. This provides a seamless user experience when configuring and using the different Azure services with OCI Oracle database, since much of the complexity associated with the configuration is greatly simplified. There is no need to set up a private interconnect between Microsoft Azure and OCI because the service itself resides within the Azure data center and uses the Azure network. This is very beneficial in terms of strategic deployment because customers can experience microseconds network latency between the endpoints, while receiving a high-performance database environment. 02:42 Nikita: How do I get started with the Oracle Database@Azure service? Samvit: You begin by purchasing the subscription from Oracle and setting up your billing account. 
Then you provision the database, resources, and service. With that you are ready to configure your application to connect to the database and work on the remaining deployment. As you continue using the service, you can monitor the different resource metrics using the Azure monitoring services and analyze those logs using Azure Log Analytics. 03:15 Lois: So, the adoption is pretty easy, then. What about the responsibilities? Who is responsible for what? Samvit: The Oracle Cloud operations team is entirely responsible for managing the Exadata Database Infrastructure and the VM cluster resources that are provisioned in the Microsoft Azure data center. Oracle is responsible for maintaining the service software and infrastructure by applying updates as they are released. Any issues arising from the OCI Database Service and the resources will be addressed by Oracle Support. You have to raise a support ticket for them to investigate and provide a resolution. And as Azure customers, you have to do rightsizing, based on your workload needs, and provision the Exadata Database Infrastructure and VM cluster in the OCI pod within the Azure data center. You have to provision the database in Exadata Database Service, apply the database and system updates, and take advantage of the cloud automation to maintain and manage the database. You have to load data, establish the connectivity, and support development on your database. As a customer, you monitor the database and infrastructure metrics and events, and also analyze those logs using the Microsoft Azure-provided native services. 04:42 Nikita: Samvit, what sort of challenges were being faced by customers that necessitated the creation of the Oracle Database@Azure service? Samvit: A common deployment scenario in customer environments was that a lot of critical applications, which could be packaged applications, in-house applications, or customized third-party applications, used Oracle Database as their primary database solution. These Oracle databases were deployed in Exadata Infrastructure on-premises or even in Enterprise Server hardware. Some customers evaluated and migrated many of their packaged and other applications to Microsoft Azure compute. Since Oracle Exadata was not supported in Azure, they had to configure a hybrid deployment in order to use Oracle databases that reside in the Exadata infrastructure on-premises. They needed to configure a dedicated and secure network between the Azure data center and their on-premises data center. This added complexity, incurred high costs, had a latency effect, and was even unreliable. There were also cases where customers migrated Oracle databases on Enterprise Server on-premises to Oracle databases hosted on Azure compute. This did not boost efficiency to a large scale. And those were the only options available when provisioning Oracle Database in Azure because Exadata was not available earlier in Azure. 06:18 Lois: And how has that been resolved now? Samvit: With the Oracle Database@Azure service, customer requirements have been aptly met by allowing them to host their Oracle databases on Exadata infrastructure, right next to their application in the Azure data center. Customers, while migrating their applications to Azure compute, can also migrate their Oracle databases on-premises on Exadata infrastructure directly to Exadata Database Service in Azure. 
And Oracle databases that are on Enterprise Server on-premises can be consolidated directly into Exadata Database Service in Azure, providing them the benefits of scalability, security, performance, and availability, all that are inherent property of OCI Oracle Exadata Database Service. Customers can see growth in the operational efficiency, saving on the overall cost. 07:17 Nikita: Can you take us through the process of deployment? Samvit: It’s quite simple, actually. First, you deploy the Exadata Database Service that is plugged into Azure VNET. Next, you provision the required number of databases, which might be migrated as is or with a consolidated exercise. You can use any of the Oracle database tools or utilities to do the migration or even use the Oracle Zero Downtime Migration method to automate the entire Oracle database migration. Finally, migrate your enterprise application into the Azure environment. Establish the required network configuration to allow communication between the migrated applications and Oracle databases. And then you are all set to publish your application that is running entirely in Azure. You can leverage other Azure services, like monitoring, log analytics, Power BI, or DevOps tools, to enhance existing or even build and deploy newer enterprise applications that are powered by OCI Oracle Database Service in the back end. 08:25 Lois: What about multi-cloud deployment scenarios where applications reside in Azure, but the Oracle databases are deployed on third-party cloud providers, either as a native solution or in computes? Samvit: These Oracle databases can be migrated to Exadata Database Service in the Oracle Database@Azure service. There is no need for the complex cross-cloud connectivity setup between the vendors. And at the same time, you experience the lowest latency between the application and the database deployment. 09:05 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we’re waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 09:45 Nikita: Welcome back! Samvit, what’s the onboarding process like? Samvit: You have to complete the onboarding process to use the service in Microsoft Azure. But before you do that, you first have to complete the subscription process. You must have an active Microsoft Azure account subscription that will be used for subscribing and onboarding the Oracle Database@Azure service. To subscribe to Oracle Database@Azure, you need to purchase an Oracle Database@Azure private offer from Azure Marketplace. As a customer, you will first reach out to Oracle Sales and negotiate a price for the service. Oracle will provide you with the billing account ID and contact details of the person within the organization who will be handling the service. After this, Oracle will create a private offer in Azure Marketplace. 10:40 Lois: Sorry to interrupt you, but what’s a private offer? Samvit: That’s alright, Lois. Private offers are basically solutions or services created for customers by a Microsoft partner, which, in this case, is Oracle. 
Purchase of those private offers happens from the private offer management page of Azure Marketplace. But there is a prerequisite. The Azure account must be enabled to make private offer purchases on the subscription from Azure Marketplace. You can refer to the Azure documentation to enable the account, if it is not enabled. You review the offer terms and accept the purchase offer, which will take you to the Create Oracle Subscription page. You validate the subscription and other particulars and proceed with the process. After the service is deployed, the purchase status of the private offer changes to subscribed. There are a few points to note here. Billing and payment are done via Azure, and you can use Microsoft Azure Consumption Commitment. You can also use your on-premises licenses with the Bring Your Own License option and the Unlimited License Agreements to pay towards your service consumption. And you also receive Oracle Support rewards for every dollar spent on the service. 12:02 Nikita: OK, now that I’m subscribed, what’s next? Samvit: After you complete the subscription step, Oracle Database@Azure will appear as an Azure resource, just like any other Azure service, and you can move on to onboarding. Onboarding begins with the linking of your OCI account, which will be used for provisioning and managing database resources. The account is also used for provisioning infrastructure and software maintenance updates for the database service. You can either provide an existing OCI account or create a new one. Then you set up Identity Federation between the Azure account and the OCI tenancy. This can authenticate login to the OCI portal using Azure credentials, which you require while performing certain operations in OCI. For example, provisioning databases, getting infrastructure and software maintenance updates, and so on. This is an optional step, but it is recommended that you complete the Federation. The last step is to authorize users by assigning groups and roles in order to have the needed privileges to perform different operations. For example, some groups of users can manage Exadata Database Service resources in Azure, while some can manage the databases in OCI. You can refer to OCI documentation to get detailed descriptions of roles and group names. 13:31 Lois: Right. That will ensure you assign the correct permissions to the appropriate users. Samvit: Exactly. Assigning the correct roles and permissions to individuals inside the organization is a necessary step for transacting in the marketplace and guaranteeing a smooth purchasing experience. Azure Marketplace uses Azure Role-Based Access Control to enable you to acquire solutions certified to run on Azure. Those are then going to determine the purchasing privileges within the organization. 14:03 Nikita: There’s so much more we can discuss about Oracle Database@Azure, but we have to stop somewhere! Thank you so much, Samvit, for joining us over these last three episodes. Lois: Yeah, it’s been great to have you, Samvit. Samvit: Thank you for having me. Nikita: Remember, we also have the Oracle Database@Google Cloud service. So, if you want to learn about that, or even if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. Lois: There are a lot of demonstrations that you’ll surely find useful. Well, that’s all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 
14:43 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
 
Join Lois Houston and Nikita Abraham as they interview Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, on Oracle Interconnect for Azure. Learn how this interconnect revolutionizes the customer experience by providing a direct, private link between Oracle Cloud Infrastructure and Microsoft Azure. From use cases to bandwidth considerations, get an in-depth look into how Oracle and Azure come together to create a unified cloud experience. Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about multicloud, discussing what it is, and the new partnerships we have with Microsoft Azure, Google Cloud, and Amazon Web Services. If you haven’t gotten to the episode yet, we suggest you go back and listen to it before you dive into this one. 00:56 Nikita: Joining us again is Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, and we’re going to ask him about Oracle Interconnect for Azure. We'll look at the scenarios around Oracle Interconnect for Azure and talk about some considerations too. Hi Samvit! Thanks for being with us today. Samvit: Hi Niki! Hi Lois! Lois: Samvit, you introduced Oracle Interconnect for Azure last week, but tell us, how does it improve the customer experience? What benefits does it offer? 01:25 Samvit: Oracle Interconnect for Azure can be established with a one-time setup, eliminating the need for an intermediary network provider. This cross-cloud direct connection also helps you migrate to the cloud or build cloud-native applications by using the best of OCI and Microsoft Azure. Now, because it is a private connection between Oracle Cloud Infrastructure and Microsoft Azure, you get consistent network performance… around 2 millisecond latency. The interconnect also enables joint customers to take advantage of a unified Identity and Access Management platform. So, you can set up single sign-on between Microsoft Azure and OCI for your Oracle applications, like PeopleSoft and e-Business Suite. 02:16 Nikita: That makes the integration pretty seamless, right? Samvit: Exactly, Niki. Having a federated single sign-on means you authenticate only once to access multiple applications, without signing in separately to access each application. And you also get a secure inter-cloud connection that bypasses the public internet. 02:38 Nikita: How extensive is the global reach of Oracle Cloud Infrastructure and Azure in terms of the number of cloud regions available? Samvit: OCI has the fastest growing network of global data centers, with 50 cloud regions available. And there are 12 Azure interconnect regions. 
For example, Ashburn in the US is an OCI-Azure interconnect region. 03:01 Lois: Samvit, what is the architecture of Oracle Interconnect for Azure like? How is data transferred securely between a Virtual Cloud Network in Oracle Cloud Infrastructure and a Virtual Network in Microsoft Azure? Samvit: A Virtual Network in a Microsoft Azure region is connected to a Virtual Cloud Network in an OCI region using a private interconnection composed of Azure ExpressRoute and OCI FastConnect. Now, on the OCI side, the FastConnect virtual circuit terminates at a dynamic routing gateway, which is attached to the Virtual Cloud Network. On the Microsoft Azure side, the ExpressRoute connection ends at a virtual network gateway, which is attached to a virtual network. So, traffic from Azure to OCI is routed through the virtual network gateway in Microsoft Azure to the dynamic routing gateway in OCI. What’s important to note is that in both directions, the traffic never leaves the private network. 04:05 Nikita: Wow, ok. Samvit, what are some common use cases of Oracle Interconnect for Azure? Can you give us an example of a supported deployment option? Samvit: We can have a .NET application running in Azure that can access an Oracle database in OCI. Similarly, you can also have custom cloud-native applications running on Azure using Oracle Autonomous Database on the OCI side. 04:29 Lois: And are there any prerequisites when you configure Oracle Interconnect for Azure? Samvit: Yes, there are. Remember, on the Azure side, you must have a virtual network with subnets and a virtual network gateway and on the OCI side, you must have a VCN with subnets and an attached dynamic routing gateway. 04:50 Lois: Let’s talk about the networking components that are involved in each site of the connection. Can you run us through the comparison? Samvit: Now, if we talk about the virtual network component, on the OCI side, there is a Virtual Cloud Network and on the Azure side, there is a Virtual Network. From a virtual circuit standpoint, in OCI, there is the FastConnect virtual circuit… on the Azure side, there is the ExpressRoute circuit. When it comes to the gateway, on the OCI side, there is the dynamic routing gateway and on the Azure side, there is the virtual network gateway. Similarly, for routing, there are route tables in OCI and Microsoft Azure. From a security standpoint, in OCI, you can configure security lists as well as network security groups and on the Azure side, you have network security groups. 05:44 Nikita: What are the benefits of this partnership? Samvit: This partnership allows you to innovate using the best combination of Oracle's and Microsoft’s cloud services based on their features, performance, and pricing. So, in a way, you can combine the capabilities of both cloud vendors. 06:01 Nikita: So, a one-stop shop. Samvit: Exactly, Niki. This partnership also gives you a highly optimized, secure, and unified cross-cloud experience so you can use the best of services from Oracle Cloud Infrastructure and Microsoft Azure. And the best part is you continue to leverage any existing investment in Oracle and Microsoft technologies. 06:24 Lois: I wanted to ask you about the typical scenarios where Oracle Interconnect for Azure is supported. Samvit: There are many scenarios where this Interconnect is supported. Let me run you through a couple of them. You could connect an OCI Virtual Cloud Network to an Azure Virtual Network. That’s a scenario that is supported. 
You could connect peered OCI VCNs in the same region to Azure. You could connect peered OCI VCNs in different regions to Azure. You could also connect services in Oracle Services Network to Azure. 06:59 Lois: And are there any scenarios where this interconnect is not supported? Samvit: When the scenario involves connecting an on-premises environment to Azure via OCI VCN and vice versa, that is not supported. 07:16 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 07:42 Nikita: Welcome back! I want to explore these scenarios in a little more detail, Samvit. Samvit: OK. Imagine you have OCI on one side and Azure on the other. In this scenario, we have a dynamic routing gateway in OCI and a virtual network gateway in Azure. This is a basic configuration. With Oracle FastConnect and Azure ExpressRoute, customers can create a private interconnection between their OCI and Azure environments. Now in another scenario, we have VCNs in OCI, and they're peered together using a dynamic routing gateway. With this local peering, the peered VCN can talk to Azure through Oracle Interconnect for Azure. Here’s another scenario. We have VCNs in different OCI regions: one VCN in OCI Region 1 and another in OCI Region 2, with Azure sitting alongside. They have established a remote peering connection, and each VCN has its own dynamic routing gateway. Here's the kicker—the peered VCN in this architecture can also converse with Azure using the interconnect. Now think about this scenario. We have the dynamic routing gateway, but we have also added a service gateway to the VCN in OCI. This service gateway allows your VCN to privately access specific Oracle services without exposing data to the public internet. No internet gateway or NAT gateway is required to reach those specific services. Now, traffic from the VCN to the Oracle Services Network travels over the Oracle network fabric and never traverses the internet. Using Oracle Interconnect for Azure, resources in Azure can also privately access resources in Oracle Services Network. 09:38 Nikita: What are the bandwidth and cost considerations? Samvit: Pricing is based solely on the port capabilities of OCI FastConnect and your ExpressRoute. One thing you need to understand is that the cost of FastConnect is the same across all OCI regions. And there are no separate ingress or egress data charges. The cost of Azure ExpressRoute varies across regions and Oracle recommends that you use the local setting, which has no separate ingress or egress charges. Azure ExpressRoute supports up to 10 GB as bandwidth. FastConnect is available in 1, 2, 5, or 10 Gbps. So, the recommendation here is to choose one of these matching bandwidth options under ExpressRoute. 10:27 Lois: Thank you, Samvit, for taking the time to talk to us about Oracle Interconnect for Azure. Samvit: Thank you for having me. Nikita: Remember, Oracle also offers an interconnect solution with Google Cloud, which is very similar to the one with Azure. It too provides a direct, high-performance, and secure network connection with Oracle Cloud Infrastructure. So, if you want to learn more about it, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. 
Lois: In our next episode, we’ll take a close look at Oracle Database@Azure service. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 11:07 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
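One detail from the Oracle Services Network scenario above is worth sketching as well: adding a service gateway to the VCN. The commands below are a rough OCI CLI illustration with placeholder OCIDs; the shape of the --services argument is recalled from memory, so verify it with the CLI help for service-gateway create before relying on it.

# Look up the Oracle Services Network entries available in the region
oci network service list

# Create a service gateway so the VCN reaches those services privately, without an internet or NAT gateway
oci network service-gateway create \
  --compartment-id ocid1.compartment.oc1..example \
  --vcn-id ocid1.vcn.oc1..example \
  --services '[{"serviceId": "ocid1.service.oc1.iad.example"}]' \
  --display-name "osn-service-gateway"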
 
This week, hosts Lois Houston and Nikita Abraham are shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. Joined by Senior Manager of CSS OU Cloud Delivery Samvit Mishra, they discuss why multicloud is becoming essential for businesses, offering freedom from vendor lock-in and the ability to cherry-pick the best services. They also talk about Oracle's pioneering role in multicloud and its partnerships with Microsoft Azure, Google Cloud, and Amazon Web Services. Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we’re moving on to multicloud. In our next three episodes, we’ll be discussing what multicloud is and why there’s so much of a buzz around it. With us is Samvit Mishra, Senior Manager of CSS OU Cloud Delivery. Hi Samvit! Thanks for joining us today. 00:55 Samvit: Hi Niki! Hi Lois! Happy to be here. Lois: So Samvit, we know that Oracle has been an early adopter of multicloud and a pioneer in multicloud services. But for anyone who isn’t familiar with what multicloud is, can you explain what it means? Samvit: Absolutely, Lois. Multicloud is a very simple, basic concept. It is the coordinated use of cloud services from more than one cloud service provider. 01:21 Nikita: But why would someone want to use more than one cloud service provider? Samvit: There are many reasons why a customer might want to leverage two or more cloud service providers. First, it addresses the very real concern of mitigating or avoiding vendor lock-in. By using multiple providers, companies can avoid being tied down to one vendor and maintain their flexibility. 01:45 Lois: That’s like not putting all your eggs in one basket, so to speak. Samvit: Exactly. Another reason is that customers want the best of breed. What that means is basically leveraging or utilizing the best product from one cloud service provider and pairing it against the best product from another cloud service provider. Getting a solution out of the combined products…out of the coordinated use of those services. 02:14 Nikita: So, it sounds like multicloud is becoming the new normal. And as we were saying before, Oracle was a pioneer in this space. But why did we embrace multicloud so wholeheartedly? Samvit: We recognized that our customers were already moving in this direction. Independent studies from Flexera found that 89% of the subjects of the study used multicloud. And we conducted our own study and came to similar numbers. Over 90% of our customers use two or more cloud service providers. 
HashiCorp, the big infrastructure as code company, came to similar numbers as well, 94%. They basically asked companies if multicloud helped them advance their business goals. And 94% said yes. And all this is very recent data. 03:04 Lois: Can you give us the backstory of Oracle’s entry into the multicloud space? Samvit: Sure. So back in 2019, Oracle and Microsoft Azure joined forces and announced the interconnect service between Oracle Cloud Infrastructure and Microsoft Azure. The interconnect was between Oracle’s FastConnect and Microsoft Azure’s ExpressRoute. This was a big step, as it allowed for a direct connection between the two providers without needing a third-party. And now we have several of our data centers interconnected already. So, out of the 48 regions, 12 of them are already interconnected. And more are coming. And you can very easily configure the interconnect. This interconnectivity guarantees low latency, high throughput, and predictable performance. And also, on the OCI side, there are no egress or ingress charges for your data. There's also a product called Oracle Database@Azure, where Oracle and Microsoft deliver Oracle Database services in Microsoft Azure data centers. 04:12 Lois: That’s exciting! And what are the benefits of this product? Samvit: The main advantage is the co-location. Being co-located with the Microsoft Azure data center offers you native integration between Azure and OCI resources. No manual configuration of a private interconnect between the two providers is needed. You're going to get microsecond latency between your applications and the Oracle Database. The OCI-native Exadata Database Service is available on Oracle Database@Azure. This enables you to get the highest level of Oracle Database performance, scalability, security, and availability. And your tech support can be provided either from Microsoft or from Oracle. 05:03 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:30 Nikita: Welcome back. Samvit, there have been some new multicloud milestones from OCI, right? Can you tell us about them? Samvit: That’s right, Niki. I am thrilled to share the latest news on Oracle’s multicloud partnerships. We now have agreements with Microsoft Azure, Google Cloud, and Amazon Web Services. So, as we were discussing earlier, with Azure, we have the Oracle Interconnect for Azure and Oracle Database@Azure. Now, with Google Cloud, we have the Oracle Interconnect for Google Cloud. And it is very similar to the Oracle Interconnect for Azure. With Google Cloud, we have physically interconnected data centers and they provide a sub-2 millisecond latency private interconnection. So, you can come in and provision virtual circuits going from Oracle FastConnect to Google Cloud Interconnect. And the best thing is that there are no egress or ingress charges for your data. The way it is structured is you have your Oracle Cloud Infrastructure on one side, with your virtual cloud network, your subnets, and your resources. And on the other side, you have your Google Cloud router with your virtual private cloud subnet and your resources interconnecting. 
You initiate the connectivity on the Google Cloud side, retrieve the service key and provide that service key to Oracle Cloud Infrastructure, and complete the interconnection on the OCI side. So, for example, our US East Ashburn interconnect will match with us-east4 on the Google Cloud side. 07:08 Lois: Now, wasn’t the other major announcement Oracle Database@Google Cloud? Tell us more about that, please. Samvit: With Oracle Database@Google Cloud, you can run your applications on Google Cloud and the database as well inside the Google Cloud platform. That's the Oracle Cloud Infrastructure database co-located in Google Cloud platform data centers. It allows you to run native integration between GCP and OCI resources with no manual configuration of private interconnect between these two cloud service providers. That means no FastConnect, no Interconnect because, again, the database is located in the Google Cloud data center. And you're going to get microsecond latency and the OCI native Exadata Database Service. So, you're going to gain the highest level of Oracle Database performance, scalability, security, and availability. 08:04 Lois: And how is the tech support managed? Samvit: The technical support is a collaboration between Google Cloud and Oracle Cloud Infrastructure. That means you can either have the technical support provided to completion by Google Cloud or by Oracle. One of us will provide you with an end-to-end solution. 08:22 Nikita: During CloudWorld last year, we also announced Oracle Database@AWS, right? Samvit: Yes, Niki. That’s where Oracle and Amazon Web Services deliver the Oracle Database service on Oracle Cloud Infrastructure in your AWS data center. This will provide you with native integration between AWS and OCI resources, with no manual configuration of private interconnect between AWS and OCI. And you're getting microsecond latency with the OCI-native Exadata Database Service. And again, as with Oracle Database@Google Cloud and Oracle Database@Azure, you're gaining the highest level of Oracle Database performance, scalability, security, and availability. And the technical support is provided by either AWS or Oracle all the way to completion. Now, Oracle Database@AWS is currently available in limited preview, with broader availability in the coming months as it expands to new regions to meet the needs of our customers. 09:28 Lois: That’s great. Now, how does Oracle fare when it comes to pricing, especially compared to our major cloud competitors? Samvit: Our pricing is pretty consistent. You’ll see that in all cases across the world, we have the less expensive solution for you and the highest performance as well. 09:45 Nikita: Let’s move on to some use cases, Samvit. How might a company use the multicloud setup? Samvit: Let’s start with the split-stack architecture between Oracle Cloud Infrastructure and Microsoft Azure. Like I was saying earlier, this partnership dates back to 2019. And basically, we eliminated the FastConnect partner from the middle. And this will provide you with high throughput, low latency, and very predictable performance, all of this on highly available links. These links are redundant, ensuring business continuity between OCI and Azure. And you can have your database on the OCI side and your application on Microsoft Azure side or the other way around. You can have SQL Server on Azure and the application running on Oracle Cloud Infrastructure. And this is very easy to configure. 
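The pairing-key handoff on the Google Cloud side that Samvit describes can be sketched with the gcloud CLI. The attachment, router, and region names below are placeholders, and the exact flags should be checked against your gcloud version; the pairing key it prints is what you then supply when completing the FastConnect virtual circuit on the OCI side.

# Create a Partner Interconnect attachment on the Cloud Router that faces OCI
gcloud compute interconnects attachments partner create oci-attachment \
  --region=us-east4 \
  --router=oci-router \
  --edge-availability-domain=availability-domain-1

# Retrieve the pairing key to hand over to Oracle Cloud Infrastructure
gcloud compute interconnects attachments describe oci-attachment \
  --region=us-east4 --format="value(pairingKey)"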
10:34 Lois: It really sounds like Oracle is at the forefront of the multicloud revolution. Thanks so much, Samvit, for shedding light on this exciting topic. Samvit: It was my pleasure. Nikita: That's a wrap for today. To learn more about what we discussed, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. In our next episode, we’ll take a close look at Oracle Interconnect for Azure. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:05 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
 
In this special episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham dive into Oracle Fusion Cloud Applications and the new courses and certifications on offer. They are joined by Oracle Fusion Apps experts Patrick McBride and Bill Lawson who introduce the concept of Oracle Modern Best Practice (OMBP), explaining how it helps organizations maximize results by mapping Fusion Application features to daily business processes. They also discuss how the new courses educate learners on OMBP and its role in improving Fusion Cloud Apps implementations. OMBP: https://www.oracle.com/applications/modern-best-practice/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! For the last two months, we’ve been focusing on all things MySQL. But today, we wanted to share some really exciting news about new courses and certifications on Oracle Fusion Cloud Applications that feature Oracle Modern Best Practice, or OMBP, and Oracle Cloud Success Navigator. 00:57 Nikita: And to tell us more about this, we have two very special guests joining us today. Patrick McBride is a Senior Director from the Fusion Application Development organization. He leads the Oracle Modern Best Practice Program office for Oracle. And Bill Lawson is a Senior Director for Cloud Applications Product Management here at Oracle University. We’ll first ask Patrick about Oracle Modern Best Practice and then move on to Bill for details about the new training and certification we’re offering. Patrick, Bill, thanks for being here today. Patrick: Hey, Niki and Lois, thanks for the invitation. Happy to be here. Bill: Hi Niki, Lois. 01:32 Lois: Patrick, let’s start with some basic information about what OMBP are. Can you tell us a little about why they were created? Patrick: Sure, love to. So, modern best practices are more than just a business process. They’re really about translating features and technology into actionable capabilities in our product. So, we've created these by curating industry leading best practices we've collected from our customers over the years. And ensure that the most modern technologies that we've built into the Fusion Application stack are represented inside of those business processes. Our goal is really to help you as customers improve your business operations by easily finding and applying those technologies to what you do every day. 02:18 Nikita: So, by understanding these modern best practice and the technology that enables it, you’re really unlocking the full potential of Fusion Apps. Patrick: Absolutely. So, the goal is that modern best practice make it really easy for customers, implementers, partners, to see the opportunity and take action. 02:38 Lois: That’s great. OK, so, let’s talk about implementations, Patrick. 
How do Oracle Modern Best Practice support customers throughout the lifecycle of an Oracle Fusion Cloud implementation? Patrick: What we found during many implementers’ journey with taking our solution and trying to apply it with customers is that customers come in with a long list of capabilities that they're asking us to replicate. What they've always done in the past. And what modern best practice is trying to do is help customers to reimage the art of the possible…what's possible with Fusion by taking advantage of innovative features like AI, like IoT, like, you know, all of the other solutions that we built in to help you automate your processes to help you get the most out of the solution using the latest and greatest technology. So, if you're an implementer, there's a number of ways a modern best practice can help during an implementation. First is that reimagine exercise where you can help the customer see what's possible. And how we can do it in a better way. I think more importantly though, as you go through your implementation, many customers aren't able to get everything done by the time they have to go live. They have a list of things they’ve deferred and modern best practices really establishes itself as a road map for success, so you can go back to it at the completion and see what's left for the opportunity to take advantage of and you can use it to track kind of the continuous innovation that Oracle delivers with every release and see what's changed with that business process and how can I get the most out of it. 04:08 Nikita: Thanks, Patrick. That’s a great primer on OMBP that I’m sure everyone will find very helpful. Patrick: Thanks, Niki. We want our customers to understand the value of modern best practices so they can really maximize their investment in Oracle technology today and in the future as we continue to innovate. 04:24 Lois: Right. And the way we’re doing that is through new training and certifications that are closely aligned with OMBP. Bill, what can you tell us about this? Bill: Yes, sure. So, the new Oracle Fusion Applications Foundations training program is designed to help partners and customers understand Oracle Modern Best Practice and how they improve the entire implementation journey with Fusion Cloud Applications. As a learner, you will understand how to adhere to these practices and how they promise a greater level of success and customer satisfaction. So, whether you’re designing, or implementing, or going live, you’ll be able to get it right on day one. So, like Patrick was saying, these OMBPs are reimagined, industry-standard business processes built into Fusion Applications. So, you'll also discover how technologies like AI, Mobile, and Analytics help you automate tasks and make smarter decisions. You’ll see how data flows between processes and get tips for successful go-lives. So, the training we’re offering includes product demonstrations, key metrics, and design considerations to give you a solid understanding of modern best practice. It also introduces you to Oracle Cloud Success Navigator and how it can be leveraged and relied upon as a trusted source to guide you through every step of your cloud journey, so from planning, designing, and implementation, to user acceptance testing and post-go-live innovations with each quarterly new release of Fusion Applications and those new features. And then, the training also prepares you for Oracle Cloud Applications Foundations certifications. 
05:55 Nikita: Which applications does the training focus on, Bill? Bill: Sure, so the training focuses on four key pillars of Fusion Apps and the associated OMBP with them. For Human Capital Management, we cover Human Resources and Talent Management. For Enterprise Resource Planning, it’s all about Financials, Project Management, and Risk Management. In Supply Chain Management, you’ll look at Supply Chain, Manufacturing, Inventory, Procurement, and more. And for Customer Experience, we’ll focus on Marketing, Sales, and Service. 06:24 Lois: That’s great, Bill. Now, who is the training and certification for? Bill: That’s a great question. So, it’s really for anyone who wants to get the most out of Oracle Fusion Cloud Applications. It doesn’t matter if you’re an experienced professional or someone new to Fusion Apps, this is a great place to start. It’s even recommended for professionals with experience in implementing other applications, like on-premise products. The goal is to give you a solid foundation in Oracle Modern Best Practice and show you how to use them to improve your implementation approach. We want to make it easy for anyone, whether you’re an implementer, a global process owner, or an IT team employee, to identify every way Fusion Applications can improve your organization. So, if you’re new to Fusion Apps, you’ll get a comprehensive overview of Oracle Fusion Applications and how to use OMBP to improve business operations. If you're already certified in Oracle Cloud Applications and have years of experience, you'll still benefit from learning how OMBP fits into your work. If you’re an experienced Fusion consultant who is new to Oracle Modern Best Practice processes, this is a good place to begin and learn how to apply them and the latest technology enablers during implementations. And, lastly, if you’re an on-premise or you have non-Fusion consultant skills looking to upskill to Fusion, this is a great way to begin acquiring the knowledge and skills needed to transition to Fusion and migrate your existing expertise. 07:53 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:27 Nikita: Welcome back! Bill, how long is it going to take me to complete this training program? Bill: So, we wanted to make this program detailed enough so our learners find it valuable, obviously. But at the same time, we didn’t want to make it too long. So, each course is approximately 5 hours or more, and provides folks with all the requisite knowledge they need to get started with Oracle Modern Best Practice and Fusion Applications. 08:51 Lois: Bill, is there anything that I need to know before I take this course? Are there any prerequisites? Bill: No, Lois, there are no prerequisites. Like I was saying, whether you’re fresh out of college or a seasoned professional, this is a great place to start your journey into Fusion Apps and Oracle Modern Best Practice. 09:06 Nikita: That’s great, you know, that there are no barriers to starting. Now, Bill, what can you tell us about the certification that goes along with this new program? Bill: The best part, Niki, is that it’s free. 
In fact, the training is also free. We have four courses and corresponding Foundation Associate–level certifications for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. So, completing the training prepares you for an hour-long exam with 25 questions. It’s a pretty straightforward way to validate your expertise in Oracle Modern Best Practice and Fusion Apps implementation considerations. 09:40 Nikita: Ok. Say I take this course and certification. What can I do next? Where should my learning journey take me? Bill: So, you’re building knowledge and expertise with Fusion Applications, correct? So, once you take this training and certification, I recommend that you identify a product area you want to specialize in. So, if you take the Foundations training for HCM, you can dive deeper into specialized paths focused on implementing Human Resources, Workforce Management, Talent Management, or Payroll applications, for example. The same goes for other product areas. If you finish the certification for Foundations in ERP, you may choose to specialize in Finance or Project Management and get your professional certifications there as your next step. So, once you have this foundational knowledge, moving on to advanced learning in these areas becomes much easier. We offer various learning paths with associated professional-level certifications to deepen your knowledge and expertise in Oracle Fusion Cloud Applications. So, you can learn more about these courses by visiting oracle.com/education/training/ to find out more of what Oracle University has to offer. 10:43 Lois: Right. I love that we have a clear path from foundational-level training to more advanced levels. So, as your skills grow, we’ve got the resources to help you move forward. Nikita: That’s right, Lois. Thanks for walking us through all this, Patrick and Bill. We really appreciate you taking the time to join us on the podcast. Bill: Yeah, it’s always a pleasure to join you on the podcast. Thank you very much. Patrick: Oh, thanks for having me, Lois. Happy to be here. Lois: Well, that’s all the time we have for today. If you have questions or suggestions about anything we discussed today, you can write to us at ou-podcast_ww@oracle.com. That’s ou-podcast_ww@oracle.com. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 11:29 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
 
In this episode, Lois Houston and Nikita Abraham chat with MySQL expert Perside Foster on the importance of keeping MySQL performing at its best. They discuss the essential tools for monitoring MySQL, tackling slow queries, and boosting overall performance. They also explore HeatWave, the powerful real-time analytics engine that brings machine learning and cross-cloud flexibility into MySQL. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last two episodes, we spoke about MySQL backups, exploring their critical role in data recovery, error correction, data migration, and more. Lois: Today, we’re switching gears to talk about monitoring MySQL instances. We’ll also explore the features and benefits of HeatWave with Perside Foster, a MySQL Principal Solution Engineer at Oracle. 01:02 Nikita: Hi, Perside! We’re thrilled to have you here for one last time this season. So, let’s start by discussing the importance of monitoring systems in general, especially when it comes to MySQL. Perside: Database administrators face a lot of challenges, and these sometimes appear in the form of questions that a DBA must answer. One of the most basic question is, why is the database slow? To address this, the next step is to determine which queries are taking the longest. Queries that take a long time might be because they are not correctly indexed. Then we get to some environmental queries or questions. How can we find out if our replicas are out of date? If lag is too much of a problem? Can I restore my last backup? Is the database storage likely to fill up any time soon? Can and should we consider adding more servers and scaling out the system? And when it comes to users and making sure they're behaving correctly, has the database structure changed? And if so, who did it and what did they do? And more generally, what security issues have arisen? How can I see what has happened and how can I fix it? Performance is always at the top of the list of things a DBA worries about. The underlying hardware will always be a factor but is one of the things a DBA has the least flexibility with changing over the short time. The database structure, choice of data types and the overall size of retained data in the active data set can be a problem. 03:01 Nikita: What are some common performance issues that database administrators encounter? Perside: The sort of SQL queries that the application runs can be an issue. 90% of performance problems come from the SQL index and schema group. 03:18 Lois: Perside, can you give us a checklist of the things we should monitor? Perside: Make sure your system is working. Monitor performance continually. 
Make sure replication is working. Check your backup. Keep an eye on disk space and how it grows over time. Check when long running queries block your application and identify those queries. Protect your database structure from unauthorized changes. Make sure the operating system itself is working fine and check that nothing unusual happened at that level. Stay aware of security vulnerabilities in your software and operating system and ensure that they are kept updated. Verify that your database memory usage is under control. 04:14 Lois: That's a great list, Perside. Thanks for that. Now, what tools can we use to effectively monitor MySQL? Perside: The slow query log is a simple way to monitor long running queries. Two variables control which queries get logged. The first is long_query_time: if a query takes longer than this many seconds, it gets logged. And then there's min_examined_row_limit: if a query looks at more than this many rows, it gets logged. The slow query log doesn't ordinarily record administrative statements or queries that don't use indexes. Two variables control this, log_slow_admin_statements and log_queries_not_using_indexes. Once you have found a query that takes a long time to run, you can focus on optimizing the application, either by limiting this type of query or by optimizing it in some way. 05:23 Nikita: Perside, what tools can help us optimize slow queries and manage data more efficiently? Perside: To help you with processing the slow query log file, you can use the mysqldumpslow utility to summarize slow queries. Another important monitoring feature of MySQL is the Performance Schema. It's a system database that provides statistics on how MySQL executes at a low level. Unlike user databases, the Performance Schema does not persist data to disk. It uses its own storage engine that is flushed every time we start MySQL. And it has almost no interaction with the storage media, making it very fast. This performance information belongs only to the specific instance, so it's not replicated to other systems. Also, the Performance Schema does not grow infinitely large. Instead, each row is recorded in a fixed-size ring buffer. This means that when it's full, it starts again at the beginning. The sys schema is another system database that's strongly related to the Performance Schema. 06:49 Nikita: And how can the sys schema enhance our monitoring efforts in MySQL? Perside: It contains helper objects like views and stored procedures. They help simplify common monitoring tasks and can help monitor server health and diagnose performance issues. Some of the views provide insights into I/O hotspots, blocking and locking issues, statements that use a lot of resources, and various statistics on your busiest tables and indexes. 07:26 Lois: Ok… can you tell us about some of the features within the broader Oracle ecosystem that enhance our ability to monitor MySQL? Perside: As an Oracle customer, you also have access to Oracle Enterprise Manager. This tool supports a huge range of Oracle products. And for MySQL, it's used to monitor performance, system availability, your replication topology, InnoDB performance characteristics and locking, bad queries caught by the MySQL Enterprise Firewall, and events that are raised by MySQL Enterprise Audit. 08:08 Nikita: What would you say are some of the standout features of Oracle Enterprise Manager? Perside: When you use MySQL in OCI, you have access to some really powerful features. HeatWave MySQL enables continuous monitoring of query statistics and performance.
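To make the slow query log and sys schema discussion above concrete, here is a minimal sketch. The credentials, the slow log path, and the two-second and 10,000-row thresholds are placeholder assumptions; adjust them for your own instance.

# Log statements that run longer than two seconds or examine more than 10,000 rows
mysql -u root -p -e "SET GLOBAL slow_query_log = ON;
  SET GLOBAL long_query_time = 2;
  SET GLOBAL min_examined_row_limit = 10000;"

# Summarize the slow query log by query time, showing the top five entries
mysqldumpslow -s t -t 5 /var/lib/mysql/myhost-slow.log

# List normalized statements with the highest average latency
# (the x$ view holds raw picosecond values, so the ORDER BY sorts numerically)
mysql -u root -p -e "SELECT query, exec_count, FORMAT_PICO_TIME(avg_latency) AS avg_exec_time
  FROM sys.x\$statement_analysis ORDER BY avg_latency DESC LIMIT 5;"

# Spot file I/O hotspots and any sessions currently blocked on InnoDB row locks
mysql -u root -p -e "SELECT file, total, total_latency FROM sys.io_global_by_file_by_latency LIMIT 10;"
mysql -u root -p -e "SELECT * FROM sys.innodb_lock_waits;"

Settings changed with SET GLOBAL last until the server restarts; add them to the option file to make them permanent.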
The health monitor is part of the MySQL server and gathers raw data about the performance of queries. You can see summaries of this information in the Performance Hub in the OCI Console. For example, you can see average statement latency or top 100 statements executed. MySQL metrics lets you drill in with your own custom monitoring queries. This works well with existing OCI features that you might already know. The observability and management framework lets you filter by resource type and across several dimensions. And you can configure OCI alarms to be notified when some condition is reached. 09:20 Lois: Perside, could you tell us more about MySQL metrics? Perside: MySQL metrics uses the raw performance data gathered by the health monitor to measure the important characteristic of your servers. This includes CPU and storage usage and information relevant to your database connection and queries executed. With MySQL metrics, you can create your own custom monitoring queries that you can use to feed graphics. This gives you an up to the minute representation of all the performance characteristics that you're interested in. You can also create alarms that trigger on some performance condition. And you can be notified through the OCI alarms framework so that you can be aware instantly when you need to deal with some issue. 10:22 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 10:47 Nikita: Welcome back! Now, let’s dive into the key features of HeatWave, the cloud service that integrates with MySQL. Can you tell us what HeatWave is all about? Perside: HeatWave is the cloud service for MySQL. MySQL is the world's leading database for web applications. And with HeatWave, you can run your online transaction processing or OLTP apps in the cloud. This gives you all the benefits of cloud deployments while keeping your MySQL-based web application running just like they would on your own premises. As well as OLTP applications, you need to run reports with Business Intelligence and Analytics Dashboards or Online Analytical Processing, or OLAP reports. The HeatWave cluster provides accelerated analytics queries without requiring extraction or transformation to a separate reporting system. This is achieved with an in-memory analytics accelerator, which is part of the HeatWave service. In addition, HeatWave enables you to create Machine Learning models to embed artificial intelligence right there in the database. The ML accelerator performs classification, regression, time-series forecasting, anomaly detection, and other functions provided by the various models that you can embed in your architecture. HeatWave can also work directly with storage outside the database. With HeatWave Lakehouse, you can run queries directly on data stored in object storage in a variety of formats without needing to import that data into your MySQL database. 12:50 Lois: With all of these exciting features in HeatWave, Perside, what core MySQL benefits can users continue to enjoy? Perside: The reason why you chose MySQL in the first place, it's still a relational database and with full transactional support, low latency, and high throughput for your online transaction processing app. 
It has encryption, compression, and high availability clustering. It also has the same large database support with up to 256 terabytes support. It has advanced security features, including authentication, data masking, and database firewall. But because it's part of the cloud service, it comes with automated patching, upgrades, and backup. And it is fully supported by the MySQL team. 13:50 Nikita: Ok… let’s get back to what the HeatWave service entails. Perside: The HeatWave service is a fully managed MySQL. Through the web-based console, you can deploy your instances and manage backups, enable high availability, resize your instances, create read replicas, and perform many common administration tasks without writing a single line of SQL. It brings with it the power of OCI and MySQL Enterprise Edition. As a managed service, many routine DBA tests are automated. This includes keeping the instances up to date with the latest version and patches. You can run analytics queries right there in the database without needing to extract and transform your databases, or load them in another dedicated analytics system. 14:52 Nikita: Can you share some common use cases for HeatWave? Perside: You have your typical OLTP workloads, just like you'd run on prem, but with the benefit of being managed in the cloud. Analytic queries are accelerated by HeatWave. So your reporting applications and dashboards are way faster. You can run both OLTP and analytics workloads from the same database, keeping your reports up to date without needing a separate reporting infrastructure. 15:25 Lois: I’ve heard a lot about HeatWave AutoML. Can you explain what that is? Perside: HeatWave AutoML enables in-database artificial intelligence and Machine Learning. Externally sourced data stores, such as sensor data exported to CSV, can be read directly from object store. And HeatWave generative AI enables chatbots and LLM content creation. 15:57 Lois: Perside, tell us about some of the key features and benefits of HeatWave. Perside: Autopilot is a suite of AI-powered tools to improve the performance and applicability of your HeatWave queries. Autopilot includes two features that help cut costs when you provision your service. There's auto provisioning and auto shape prediction. They analyze your existing use case and tell you exactly which shape you must provision for your nodes and how many nodes you need. Auto parallel loading is used when you import data into HeatWave. It splits the import automatically into an optimum number of parallel streams to speed up your import. And then there's auto data placement. It distributes your data across the HeatWave cluster node to improve your query retrieval performance. Auto encoding chooses the correct data storage type for your string data, cutting down storage and retrieval time. Auto error recovery automatically recovers a fail node and reloads data if that node becomes unresponsive. Auto scheduling prioritizes incoming queries intelligently. An auto change propagation brings data optimally from your DB system to the acceleration cluster. And then there's auto query time estimation and auto query plan improvement. They learn from your workload. They use those statistics to perform on node adaptive optimization. This optimization allows each query portion to be executed on every local node based on that node's actual data distribution at runtime. Finally, there's auto thread pooling. It adjusts the enterprise thread pool configuration to maximize concurrent throughput. 
It is workload-aware, and minimizes resource contention, which can be caused by too many waiting transactions. 18:24 Lois: How does HeatWave simplify analytics within MySQL and with external data sources? Perside: HeatWave in Oracle Cloud Infrastructure provides all the features you need for analytics, all in one system. Your classic OLTP application run on the MySQL database that you know and love, provision in a DB system. On-line analytical processing is done right there in the database without needing to extract and load it to another analytic system. With HeatWave Lakehouse, you can even run your analytics queries against external data stores without loading them to your DB system. And you can run your machine learning models and LLMs in the same HeatWave service using HeatWave AutoML and generative AI. HeatWave is not just available in Oracle Cloud Infrastructure. If you're tied to another cloud vendor, such as AWS or Azure, you can use HeatWave from your applications in those cloud too, and at a great price. 19:43 Nikita: That's awesome! Thank you, Perside, for joining us throughout this season on MySQL. These conversations have been so insightful. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Lois: This wraps up our season on the essentials of MySQL. But before we go, we just want to remind you to write to us if you have any feedback, questions, or ideas for future episodes. Drop us an email at ou-podcast_ww@oracle.com. That’s ou-podcast_ww@oracle.com. Nikita: Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 20:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
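As a rough illustration of the HeatWave workflow described in this episode, the sketch below loads a table into the HeatWave cluster and trains an AutoML classification model. The endpoint, schema, table, and column names are all hypothetical, and it assumes a HeatWave cluster is already attached to the DB system.

# Mark a table for the HeatWave secondary engine and load it into the cluster;
# analytic queries that touch it are then candidates for automatic offload
mysql -h mydb.example.oraclevcn.com -u admin -p -e "
  ALTER TABLE sales.orders SECONDARY_ENGINE = RAPID;
  ALTER TABLE sales.orders SECONDARY_LOAD;"

# Train an in-database classification model with HeatWave AutoML
# ('churned' is a hypothetical label column; the model handle lands in @model)
mysql -h mydb.example.oraclevcn.com -u admin -p -e "
  CALL sys.ML_TRAIN('sales.customer_history', 'churned',
       JSON_OBJECT('task', 'classification'), @model);"

After the load, EXPLAIN on an analytic query against sales.orders typically reports that the secondary engine RAPID is being used, which confirms the offload is happening.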
 
Lois Houston and Nikita Abraham continue their conversation with MySQL expert Perside Foster, with a closer look at MySQL Enterprise Backup. They cover essential features like incremental backups for quick recovery, encryption for data security, and monitoring with MySQL Enterprise Monitor—all to help you manage backups smoothly and securely. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week was the first of a two-part episode covering the different types of backups and why they're important. Today, we’ll look at how we can use MySQL Enterprise Backup for efficient and consistent backups. 00:52 Nikita: And of course, we’ve got Perside Foster with us again to walk us through all the details. Perside, could you give us an overview of MySQL Enterprise Backup? Perside: MySQL Enterprise Backup is a form of physical backup at its core, so it's much faster for large data sets than logical backups, such as the most commonly used MySQL Dump. Because it backs up the data files, it's non-locking and enables either complete system backup or partial backup, focusing only on specific databases. 01:29 Lois: And what are the benefits of using MySQL Enterprise Backup? Perside: You can back up to local storage or direct-to-common-cloud storage types. You can perform incremental backups, which can speed up your backup process greatly. Incremental backups enable point-in-time recovery. It's useful when you need to restore to a point in time before some application or human error occurred. Backups can be compressed to save archival storage requirements and encrypted for regulatory compliance and offline data security. 02:09 Nikita: So we know MySQL Enterprise Backup is an impressive tool, but could you talk more about some of the main features it supports for creating and managing backups? Specifically, which tools are integrated within MySQL Enterprise to support different backup scenarios? Perside: MySQL Enterprise Backup supports SBT, implemented by many common Tape storage systems. MySQL Enterprise Backup supports optimistic backup. This process deals with busy tables separately from the rest of the database. It can record changes that happen in the database during the backup for consistency. In a large data set, this can make a huge difference in performance. MySQL Enterprise Backup runs on all supported platforms. It's available when you have a MySQL Enterprise Edition license. And it comes with Enterprise Edition, but it also is available as a separate package. You can get the most recent version from eDelivery, where you can also get a trial version. If you need a previous release, you can get that from My Oracle Support. 
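A minimal full backup with MySQL Enterprise Backup looks roughly like the sketch below. The backup user, the staging directory, and the image path are placeholders, and it assumes the mysqlbackup binary from an Enterprise Edition installation is on the PATH.

# Full, consistent physical backup of a running server into a single image file
mysqlbackup --user=backup_admin --password \
  --backup-dir=/backups/staging \
  --backup-image=/backups/full_2025-06-01.mbi \
  backup-to-image

The --backup-dir here is only a scratch area for metadata; the backup itself ends up in the single .mbi image, which is also the form that compression, encryption, and cloud or tape (SBT) destinations work with.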
It's also available in all versions of MySQL, whether you run a Long-Term support version or an Innovation Release. For LTS releases, MySQL Enterprise Backup supports MySQL instances of the same LTS release. For Innovation releases, it supports the previous LTS release and any subsequent Innovation version within the same LTS family. 04:03 Nikita: How does MySQL Enterprise Monitor manage and track backup processes? Perside: MySQL Enterprise Monitor has a dashboard for monitoring MySQL Enterprise Backup. The dashboard monitors the health of backup process and usage throughout the entire Enterprise fleet, not just a single server. It supports drilling down into specific sub-operations within a backup job. You can see information about full backups, partial backups, and incremental backups. You can configure alerts that will notify you in the event of delays, failures, or backups that have not been performed in some configuration time period. 04:53 Lois: Ok…let’s get into the mechanics. I understand that MySQL Enterprise Backup uses binary logs as part of its backup process. Can you explain how these logs fit into the bigger picture of maintaining database integrity? Perside: MySQL Enterprise Backup is a utility designed specifically for backing up MySQL systems in the most efficient and flexible way. At its simplest, it performs a physical backup of the data files, so it is fast. However, it also records the changes that were made during the time it took to do the backup. So, the result is that you get a consistent backup of the data at the time the backup completed. This backup is not tied to the host system and can be moved to other hosts. It can be used for archiving and is fully supported as part of the MySQL Enterprise Edition. It is, however, tied to the specific version of MySQL from which the backup was taken. So, you cannot use it for upgrades where the destination server is an upgrade from the source. For example, if you take a backup from MySQL 5.7, you can't directly restore it to MySQL 8.0. As a part of MySQL Enterprise Edition, it's not part of the freely available Community Edition. 06:29 Lois: Perside, how do MySQL's binary logs track changes over time? And why is this so critical for backups? Perside: The binary logs record changes to the database. These changes are recorded in a sequential set of files numbered incrementally. MySQL logs changes either in statement-based form, where each log entry records the statement that gives rise to the change, or in row-based form where the actual change row data is recorded. If you select mixed format, then MySQL records statements for most operations and records row for changes where the statement might result in a different row value each time it's run, for example, where there's a generated value like autoincrement. The current log file grows as changes are recorded. When it reaches its maximum configured size, that log file is closed, and the next sequential file is created for new logs. You can make this happen automatically by using the FLUSH BINARY LOGS command. This does not delete any existing log files. 07:59 Nikita: But what happens if you want to delete the log files? Perside: If you want to delete all log files, you can do so manually with the PURGE BINARY LOGS command, either specifying a file or a date time. 08:14 Lois: When it comes to tracking transactions, MySQL provides a couple of methods, right? Can you explain the differences between Global Transaction Identifiers and the traditional log file sequence? 
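The binary log rotation and purging that Perside describes above map to a handful of statements. The file name and date below are placeholders; note that FLUSH BINARY LOGS rotates the log on demand, while rotation at the configured maximum size happens automatically.

# Close the current binary log and open the next file in the sequence
mysql -u root -p -e "FLUSH BINARY LOGS;"

# List the binary log files the server currently has
mysql -u root -p -e "SHOW BINARY LOGS;"

# Remove old binary logs, either up to a named file or up to a point in time
mysql -u root -p -e "PURGE BINARY LOGS TO 'binlog.000123';"
mysql -u root -p -e "PURGE BINARY LOGS BEFORE '2025-01-01 00:00:00';"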
Perside: Log file positioning uses one of two formats: either legacy, where you identify a transaction by a log file name and a sequence number, or global transaction identifiers, or GTIDs, where each transaction is identified with a universally unique identifier, or UUID. When you apply a transaction to the source server, that is when the GTID is attached to the transaction. This makes it particularly useful in replication topologies so that each transaction is uniquely identified by both its server ID and the transaction sequence number. When such a transaction is replicated to other hosts, the transaction retains its original GTID so that you can track when that transaction has propagated to the replicas and has been applied. The global transaction identifier is unique across the entire network. 09:49 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like LLMs, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:19 Nikita: Welcome back! Let's move on to replication. How does MySQL's legacy log format handle transactions, and what does that mean for replication timing across different servers? Perside: Legacy format binary logs are non-transactional. This means that a transaction made up of multiple modifications is logged as a sequence of changes. It's possible that different hosts in a replication network apply those changes at different times. Each server that uses legacy binary logging maintains its current applied log position as coordinates based on a combination of the binary log file and the position within that log file. 11:11 Nikita: Troubleshooting with legacy logs can be quite complex, right? So, how does the lack of unique transaction IDs make it more difficult to address replication issues? Perside: Because each server has its own log with its own transactions, these modifications could have entirely different coordinates, making it challenging to find the specific modification point if you need to do any deep-dive troubleshooting, for example, if one replica fell partway through applying a transaction and you need to partially roll it back manually. On the other hand, when you enable GTIDs, the transaction applied on the source host has that globally unique identifier attached to the whole transaction as a sequence of unique IDs. When the second or subsequent servers apply those transactions, they have exactly the same identifier, making it both transaction-safe for MySQL and also easier to troubleshoot if you need to. 12:26 Lois: How can you use binary logs to perform a point-in-time recovery in MySQL? Perside: First, you restore the last full backup. Once you've restarted the restored server, find the log position that the backup was taken at, either its GTID or its log file and position. The SHOW BINARY LOG STATUS command shows the server's current coordinates. Then you can use the mysqlbinlog utility to replay events from the binary log files, specifying the start and stop positions containing the range of log operations that you wish to apply. You can pipe the output of mysqlbinlog to the MySQL client if you want to execute the changes immediately, or you can redirect the output to a script file if you want to examine and perhaps edit the changes. 13:29 Nikita: And how do you save binary logs?
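Both uses of mysqlbinlog that Perside describes, replaying a range of events for point-in-time recovery and continuously copying logs off the server, look roughly like this. The positions, timestamps, paths, host, and user names are placeholders.

# Replay changes from the binary logs, from the backup's recorded position up to just before the bad change
mysqlbinlog --start-position=157 \
  --stop-datetime="2025-06-01 09:59:00" \
  /var/lib/mysql/binlog.000042 | mysql -u root -p

# Or write the events to a file first, so they can be reviewed or edited before being applied
mysqlbinlog --start-position=157 --stop-datetime="2025-06-01 09:59:00" \
  /var/lib/mysql/binlog.000042 > replay.sql

# Continuously copy binary logs from a running source, in their original binary format, to another location
mysqlbinlog --read-from-remote-server --host=source-db.example.com \
  --user=backup_copy --password --raw --stop-never binlog.000001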
Perside: You can save binary logs to use in disaster recovery, for point-in-time restores, or for incremental backups. One way to do this is to flush the logs so that the log file closes and is ready for copying, and then copy it to a different server to protect against hardware or media failures. You can also use the mysqlbinlog utility to create a copy of a set of binary log files in the same format, but to a different file or set of files. This can be useful if you want to run mysqlbinlog continuously, copying from the source server's binary log to a new location, perhaps in network storage. If you do this, remember that mysqlbinlog does not run as a service or daemon, so you'll need to monitor it to make sure it's running continually. 14:39 Lois: Can you take us through how the MySQL Enterprise Backup process works? What does it do when performing a backup? Perside: First, it performs a physical file copy of the necessary data and log files. This can be done while the server is fully operational, and it has minimal impact on performance. Once this initial copy is taken, it applies a low-impact backup lock on the instance. If you have any tables that are not using InnoDB, the backup cannot guarantee transaction-safe consistency for those tables, so it applies a read lock to those tables so that it can guarantee consistency. Then it briefly locks all logging activity to take a consistent view of the current coordinates of the various logs. It releases the read lock on the non-transactional tables. Using the log coordinates that were taken earlier in the process, it gathers all logs for transactions that have occurred since then. Bear in mind that the backup process takes place while the system is active. So, for a consistent backup, it must record not only the data files, but all changes that occurred during the backup. Then it releases the backup lock. The last piece of information recorded is any metadata for the backup itself, including its timing and contents, in the final redo log. That completes the backup operation. 16:30 Nikita: And where are the files stored? Perside: The files contained in the backup are saved to the backup location, which can be on the local system or in network storage. The files contained in the backup location include files from the MySQL data directory. Some raw files include the InnoDB system tablespace, plus any InnoDB file-per-table tablespace files, and the InnoDB log files. Other files might include data files belonging to other storage engines, perhaps MyISAM files. The various log files and instance configuration files are also retained. 17:20 Lois: What steps do you follow to restore a MySQL Enterprise Backup, and how do you guarantee consistency, especially when dealing with incremental backups? Perside: To restore from a backup using MySQL Enterprise Backup, you must first remove any previous files from the data directory. The restore process will fail if you attempt to restore over an existing system or backup. Then you restore the database with the appropriate options. If you only restore a single backup, you can use copy-back-and-apply-log to ensure that the restored system is in a consistent state. If you performed a full backup and subsequent incremental backups, you might need to restore multiple times using copy-back, and then use copy-back-and-apply-log only for the final consistent restore operation. The restored server might be on the same host or might be a different host with a different configuration.
This means that you might have to change some configuration on the restored server, including the operating system ownership of the restored data directory and various MySQL configuration files. If you want to retain the MySQL configuration files from the source server to reproduce on a new server, you should copy those files separately. MySQL Enterprise Backup focuses on the data rather than the server configuration. It does, however, produce configuration files appropriate for the backup. These are similar to the MySQL configuration files, but only contain options relevant for the backup process itself, variables that have been changed to non-default values, and all global variable values. These files must be renamed and possibly edited before they are suitable to become configuration files in the newly restored server. For example, the mysqld-auto.cnf file contains a JSON-formatted set of persisted variables. The backup process stores this as the newly named backup mysqld-auto.cnf. If you want to use it in the restored server, you must rename it and place it in the appropriate location so that the restored server can read it. This also applies in part to the auto.cnf file, which contains identifying information for the server. If you are replacing the original server or restoring on the same host, then you can keep the original values. However, this information must be unique within a network. So, if you are restoring this backup to create a replica in a replication topology, you must not include that file and instead start MySQL without it so that it creates its own unique identifying information. 21:14 Nikita: Let's discuss securing and optimizing backups. How does MySQL Enterprise Backup handle encryption and compression, and what are the critical considerations for each? Perside: You can encrypt backups so that they are secure while moving them around or archiving them. The --encrypt option performs the encryption, and you can specify the encryption key either on the command line as a string or in a key file that has been generated with some cryptographic algorithm. Encryption only applies to image files, not to backup directories. You can also compress backups with different levels of compression, with higher levels requiring more CPU but resulting in greater savings in storage. Compression only works with InnoDB data files. If your organization has media management software for performing backups, perhaps to a tape array, then you can use the SBT interface supported in MySQL Enterprise Backup. 22:34 Lois: Before we wrap up, could you share how MySQL Enterprise Backup facilitates the management of backups across a multi-server environment? Perside: As an enterprise solution, it's easy to run MySQL Enterprise Backup in a multi-server environment. We've already mentioned backing up to cloud storage, but you can, of course, back up to a directory or image on network storage that can be mounted locally, perhaps with NFS or some other file system. The --with-timestamp option enables multiple backups within the same backup directory, each in its own subdirectory named with the timestamp. This is especially useful when you want to run the same backup script repeatedly. 23:32 Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs.
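As a rough sketch of the restore, encryption, and scheduling points above, with placeholder paths, user names, and a hypothetical key file, and assuming mysqlbackup from the same MySQL version as the backup:

# Restore a single full backup into an empty data directory and leave it in a consistent state
mysqlbackup --defaults-file=/etc/my.cnf \
  --backup-dir=/backups/full \
  copy-back-and-apply-log

# With incrementals, bring the full backup directory up to date first, then copy it back
# (one common sequence; verify the exact options for your MySQL Enterprise Backup release)
mysqlbackup --backup-dir=/backups/full apply-log
mysqlbackup --backup-dir=/backups/full --incremental-backup-dir=/backups/inc1 apply-incremental-backup
mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/backups/full copy-back

# A compressed, encrypted image backup; the same key file is needed later with --decrypt to restore it
mysqlbackup --user=backup_admin --password \
  --backup-dir=/backups/staging \
  --backup-image=/backups/full_encrypted.mbi \
  --compress --encrypt --key-file=/secure/meb.key \
  backup-to-image

# Directory backups can share one parent directory; --with-timestamp puts each run in its own subdirectory
mysqlbackup --user=backup_admin --password \
  --backup-dir=/backups/nightly --with-timestamp backup-and-apply-log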
23:32

Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs. In our next session, we'll explore the different MySQL monitoring strategies and look at the features and benefits of HeatWave.

Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham…

Lois: And Lois Houston signing off!

24:06

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
 
 