Fixed MySQL source - MDS replication

For the reader: this is a fast implementation (a few clicks here and there); the code for these next steps will be provided soon.

As I mentioned in the "Channels" chapter, in its last section ("...more troubleshooting"), what I was looking for was found in the official documentation, Importing and Exporting Databases (the "Compatibility Between On-Premise MySQL and MySQL Database Service" section), and I quote:

strip_restricted_grants: certain privileges are restricted in the MySQL Database Service. Privileges such as RELOAD, FILE, SUPER, BINLOG_ADMIN, and SET_USER_ID. It is not possible to create users granting these privileges. This option strips these privileges from dumped GRANT statements.

MySQL Shell provides the following utilities:

  • dumpInstance(): MySQL instance export utility which exports all compatible schemas to an Object Storage bucket or to local files. By default, this utility exports users, events, routines, and triggers.

[ ... snip ... ]

and ...

This task describes how to export data from a supported MySQL Server source to an Object Storage bucket using the MySQL Shell dumpInstance utility. This task requires the following:

  • MySQL Shell 8.0.21, or higher.

  • MySQL Server 5.7.9, or higher.

  • Access to Object Storage and an existing bucket.

[ ... snip ... ]

This task describes how to import data from Object Storage to a MySQL DB System. This task requires the following:

  • A Compute instance with access to the target MySQL DB System. This example assumes you are using a Compute running Oracle Linux.

  • MySQL Shell 8.0.21, or higher, installed on the Compute instance.

  • Access to Object Storage and an existing bucket which contains the exported files.

-- source: Exporting Data to Object Storage with MySQL Shell (Importing and Exporting Databases)

So, let's start from there - deploy an Object Storage, and perform dumping and loading data between MySQL source and MDS target.

But first...

Adapting the Terraform code

This troubleshooting and fix was implemented without the creation of the channel.

Therefore, the "create_channel.tf" will no longer be necessary (nor the channel variables from variables.tf):

Once we adapt the code to the new scenario, we can build the environment as usual: terraform init, terraform plan and terraform apply.
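In short, the adaptation looks something like this (a sketch, assuming the file layout from the previous chapters):

```bash
# drop the channel definition; the channel will be created later, after the dump/load
rm create_channel.tf
# (also remove the channel-related variables from variables.tf)

terraform init
terraform plan
terraform apply
```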

This will create (as you might have guessed):

  • the MySQL source on a compute instance (on which we installed and configured the MySQL service with the help of Ansible)

  • the MySQL DB System

Deploy an Object Storage (OCI UI)

Code will be implemented for this later ... for now we will be clicking through the UI for a quick check

As you are already aware, those two instances are deployed in the compartment "MySqlOciCompartment".

Let's create the Object Storage under the same compartment:

... sticking with the default...

... click create:
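By the way, if you prefer the command line over clicking, the same bucket can also be created with the OCI CLI; a minimal sketch, with a placeholder compartment OCID and an assumed bucket name "MyMdsBucket":

```bash
oci os bucket create \
  --compartment-id ocid1.compartment.oc1..aaaa...redacted \
  --name MyMdsBucket
```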

Manage your bucket

If you want to load files into or export files from your bucket, you will need the OCI configuration file at /root/.oci/config.

You can find the tutorial, step-by-step, at: Install and Configure OCI CLI

Keep in mind that this time you make the changes for the compartment where the MDS, the MySQL source and the Object Storage bucket are located. The configuration file /root/.oci/config must exist on the instance that can access the MDS (in our case, the instance "MySqlShellInsance").
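For reference, /root/.oci/config follows the standard OCI CLI format; a sketch with placeholder values (the OCIDs, key file and region are yours to fill in):

```ini
[DEFAULT]
user=ocid1.user.oc1..aaaa...redacted
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=/root/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaa...redacted
region=eu-frankfurt-1
```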

Dumping data from MySQL Source into the Bucket

Change/provide root password on the MySQL Source:
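Something along these lines (the password is, of course, a placeholder):

```sql
-- on the MySQL source
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyStr0ngP@ssw0rd!';
```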

Just for phun, create a database and a table:
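A throwaway schema is enough, for example (names are just illustrative):

```sql
CREATE DATABASE test;
USE test;
CREATE TABLE t1 (
  id INT AUTO_INCREMENT PRIMARY KEY,
  msg VARCHAR(100)
);
INSERT INTO t1 (msg) VALUES ('hello from the source');
```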

... set up the JavaScript mode for MySQL Shell (apparently the snap installation provides only the Python and SQL modes)
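With a full-featured mysqlsh build (for example the packages from the official MySQL repositories instead of the snap), switching modes is just a matter of typing the mode command at the prompt:

```
\js     switch to JavaScript mode (needed for util.dumpInstance())
\sql    switch to SQL mode
\py     switch to Python mode
```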

Now, we should be ready to start dumping the data into our bucket, without too many errors:
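A minimal sketch of the dump call, from MySQL Shell's JavaScript mode while connected to the source; the prefix "test", the bucket name and the namespace are assumptions, and the OCI config from /root/.oci/config is picked up by default:

```js
// e.g. started with: mysqlsh root@localhost, then \js
util.dumpInstance("test", {                  // "test" = object prefix inside the bucket
    osBucketName: "MyMdsBucket",             // the bucket created earlier (assumed name)
    osNamespace: "mytenancynamespace",       // your tenancy's Object Storage namespace
    threads: 4,
    ocimds: true,                            // check compatibility with MySQL Database Service
    compatibility: ["strip_restricted_grants", "strip_definers"]
});
```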

...and let's check in the bucket for object "test":

Discussion

In order to perform the next step, the loading, you will need to save the file "@.json" on the instance (and in the folder!) from which you start mysqlsh towards the target DB, the MDS.
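One way to get it there is to pull it straight from the bucket with the OCI CLI, from the directory where you will later start mysqlsh (bucket and prefix names as assumed above):

```bash
oci os object get \
  --bucket-name MyMdsBucket \
  --name "test/@.json" \
  --file ./@.json
```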

Loading data from bucket into MDS

Technically speaking, this looks like a migration... (and no wonder I did not read it in depth from the beginning... )
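A minimal sketch of the load, again in JavaScript mode, this time connected from the "MySqlShellInsance" to the MDS endpoint (the user and names are assumptions):

```js
// e.g. started with: mysqlsh admin@<MDS-private-IP>, then \js
util.loadDump("test", {                      // same prefix as used for the dump
    osBucketName: "MyMdsBucket",
    osNamespace: "mytenancynamespace",
    threads: 4
});
```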

Back to replication

...the Replication documentation states the following:

The dump loading utility does not automatically apply the gtid_executed GTID set from the source MySQL instance on the target DB System. The GTID set is included in the dump metadata from MySQL Shell's instance dump utility, schema dump utility, or table dump utility, as the gtidExecuted field in the @.json dump file. Retrieve the value of gtidExecuted from the dump in Object Storage.

Connect to the DB System using MySQL Shell and set the value of gtidPurged in the DB System, using the following command:

where gtidSet is the value in the gtidExecuted field in the @.json dump file.

-- documentation source: Replication

Let's find the gtid from our @.json file:
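Which is a one-liner, for example:

```bash
# in the folder where @.json was saved
grep gtidExecuted @.json
# or, a bit prettier:
python3 -c 'import json; print(json.load(open("@.json"))["gtidExecuted"])'
```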

... and on the target, perform the GTID purging:
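On the DB System this is done through the sys schema helper the documentation refers to (a plain SET GLOBAL gtid_purged requires privileges that are restricted on MDS); the GTID set below is a placeholder, replace it with the gtidExecuted value from your @.json:

```sql
-- connected to the MDS with mysqlsh, in SQL mode (\sql)
CALL sys.set_gtid_purged("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-42");
```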

Replication channel

And let's create the replication channel (from UI this time):

And if all is good, you should see the Channel as Active:

And let's check the status of the slave on the target MDS. If all is good, we should see no errors, along with the executed GTIDs:
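In SQL mode on the MDS, that is simply:

```sql
SHOW REPLICA STATUS\G
-- (on versions before 8.0.22: SHOW SLAVE STATUS\G)
```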

Small test

From the MySQL source, just create a database, and see if it is replicated in MDS:
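For instance (the database name is arbitrary):

```sql
-- on the MySQL source
CREATE DATABASE replication_smoke_test;
```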

... and on the MDS side:
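... it should show up a moment later:

```sql
-- on the MDS
SHOW DATABASES LIKE 'replication_smoke_test';
```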

Edit: The code can be found in the next chapter, 7.5 Channels (code).

Fini!
