This page provides you with instructions on how to extract data from Amazon Aurora and load it into Delta Lake on Databricks. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Amazon Aurora?
Amazon Aurora is a MySQL-compatible relational database employed by organizations that are looking for better performance than they can get from MySQL at cost-effective price points. Aurora is best used as a transactional or operational database and not for analytics.
What is Delta Lake?
Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as AWS S3, Azure Data Lake Storage, or HDFS. It uses versioned Apache Parquet files to store data, and a transaction log to keep track of commits, to provide capabilities like ACID transactions, data versioning, and audit history.
Getting data out of Amazon Aurora
Aurora provides several methods for extracting data; the one you use may depend upon your needs and skill set.
The most common way to get data out of any database is simply to write queries. SELECT queries allow you to pull the data you want. You can specify filters and ordering, and limit the results.
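For example, here's a minimal sketch in Python using the mysql-connector-python package. The endpoint, credentials, and orders table are placeholders for your own schema.

```python
# Minimal sketch: pull filtered, ordered, limited rows from Aurora's MySQL endpoint.
# The connection details and the orders table are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="my-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    user="report_user",
    password="********",
    database="shop",
)

cursor = conn.cursor(dictionary=True)
cursor.execute(
    """
    SELECT id, customer_id, total, updated_at
    FROM orders
    WHERE updated_at >= %s
    ORDER BY updated_at
    LIMIT 10000
    """,
    ("2023-01-01",),
)

for row in cursor:
    print(row)

cursor.close()
conn.close()
```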
If you’re looking to export data in bulk, there may be an easier way. A handy command-line tool called mysqldump allows you to export entire tables and databases in a format you specify (e.g., delimited text, CSV, or SQL statements that would restore the database if run).
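If you'd rather drive the export from a script, here's a rough sketch that shells out to mysqldump from Python. The endpoint, credentials, and database and table names are placeholders.

```python
# Sketch: dump an entire table to a SQL file with mysqldump, invoked from Python.
# The endpoint, credentials, and database/table names are placeholders.
import subprocess

subprocess.run(
    [
        "mysqldump",
        "--host=my-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
        "--user=report_user",
        "--password=********",
        "--single-transaction",      # consistent snapshot without locking InnoDB tables
        "--result-file=orders.sql",  # where the dump is written
        "shop",                      # database
        "orders",                    # table
    ],
    check=True,
)
```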
Preparing Amazon Aurora data
For every table in your Amazon Aurora database, you'll need a corresponding table in your destination database. Make sure you've pinpointed all of the fields that will be inserted into your destination, and determined the datatype for each one (e.g., INTEGER, DATETIME) so that the data is mapped properly when it gets inserted into the new table.
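One lightweight way to keep that mapping explicit is a small lookup table in whatever script drives your load. The types and columns below are illustrative, not exhaustive.

```python
# Sketch: a hand-maintained mapping from common MySQL column types to Spark SQL types,
# used to build the DDL for the destination table. Columns and types are placeholders.
MYSQL_TO_SPARK = {
    "int": "INT",
    "bigint": "BIGINT",
    "varchar": "STRING",
    "text": "STRING",
    "datetime": "TIMESTAMP",
    "decimal": "DECIMAL(18,2)",
    "tinyint(1)": "BOOLEAN",
}

source_columns = [("id", "bigint"), ("total", "decimal"), ("updated_at", "datetime")]
ddl = ", ".join(f"{name} {MYSQL_TO_SPARK[col_type]}" for name, col_type in source_columns)
print(f"CREATE TABLE orders ({ddl}) USING DELTA")
```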
Loading data into Delta Lake on Databricks
To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
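As a rough illustration, a PySpark version of that flow might look like the sketch below. It assumes a cluster where Delta Lake is available (such as Databricks); the paths and table names are placeholders.

```python
# Minimal sketch, assuming a Spark session with Delta Lake available (e.g. on Databricks).
# Paths, schema, and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The same Spark SQL code you'd use for Parquet, with the format switched to "delta".
batch = spark.read.format("parquet").load("/tmp/aurora_export/orders")
batch.write.format("delta").mode("overwrite").save("/delta/orders")
spark.sql("CREATE TABLE IF NOT EXISTS orders USING DELTA LOCATION '/delta/orders'")

# Structured Streaming can then append new records as they land in the staging path;
# the transaction log keeps concurrent writers consistent.
stream = (
    spark.readStream.format("parquet")
    .schema(batch.schema)
    .load("/tmp/aurora_export/orders_incremental")
)
(
    stream.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/delta/orders/_checkpoints")
    .start("/delta/orders")
)
```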
Keeping Amazon Aurora data up to date
At this point you’ve coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.
Instead, identify key fields that your script can use to bookmark its progression through the data and pick up where it left off as it looks for updated records. Fields that only ever increase, such as updated_at and created_at timestamps or an auto-incrementing id, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Aurora.
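A stripped-down sketch of that bookmarking logic might look like the following, with placeholder connection details and a local JSON file standing in for wherever you persist state.

```python
# Sketch of an incremental pull keyed on an updated_at bookmark.
# Connection details, table, and the state file are placeholders.
import json
import mysql.connector

STATE_FILE = "orders_bookmark.json"

def load_bookmark():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)["updated_at"]
    except FileNotFoundError:
        return "1970-01-01 00:00:00"  # first run: pull everything

def save_bookmark(value):
    with open(STATE_FILE, "w") as f:
        json.dump({"updated_at": value}, f)

conn = mysql.connector.connect(host="...", user="...", password="...", database="shop")
cursor = conn.cursor(dictionary=True)
cursor.execute(
    "SELECT id, total, updated_at FROM orders WHERE updated_at > %s ORDER BY updated_at",
    (load_bookmark(),),
)
rows = cursor.fetchall()
if rows:
    # ... hand `rows` off to the Delta Lake load step here ...
    save_bookmark(str(rows[-1]["updated_at"]))

cursor.close()
conn.close()
```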
And remember, as with any code, once you write it, you have to maintain it. If Aurora sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Amazon Aurora to Delta Lake on Databricks automatically. With just a few clicks, Stitch starts extracting your Amazon Aurora data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake on Databricks data warehouse.