This page provides you with instructions on how to extract data from Db2 and load it into Redshift. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Db2?
Db2 is IBM's relational DBMS. IBM provides versions of Db2 that run on-premises, hosted by IBM, or in the cloud. The on-premises version runs on System z mainframes, System i minicomputers, and Linux, Unix, and Windows servers and workstations.
Getting data out of Db2
The most common way to get data out of any relational database is to write SELECT queries. You can specify filters and ordering, and limit the results. You can also use Db2's EXPORT command to export the data from a whole table.
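For example, here's a minimal sketch of both approaches, assuming a hypothetical customers table; the EXPORT statement uses Db2's delimited (DEL) output format and would be run from the Db2 command line processor:

    -- Pull only the rows and columns you need
    SELECT customer_id, name, created_at, updated_at
    FROM customers
    WHERE created_at > '2024-01-01'
    ORDER BY customer_id;

    -- Or dump an entire table to a delimited file
    EXPORT TO /tmp/customers.csv OF DEL MODIFIED BY NOCHARDEL
      SELECT * FROM customers;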
Loading data into Redshift
Once you've identified all the columns you want to insert, you can use Redshift's CREATE TABLE statement to create a table to receive all of the data.
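As a rough sketch, a Redshift table to receive the hypothetical customers data from above might look like this; the column names, types, and keys are illustrative, and you'd map each Db2 column to the closest Redshift type:

    CREATE TABLE customers (
        customer_id INTEGER NOT NULL,
        name        VARCHAR(256),
        created_at  TIMESTAMP,
        updated_at  TIMESTAMP
    )
    DISTKEY (customer_id)
    SORTKEY (updated_at);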
Once you have a table built, you might think that the easiest way to migrate your data (especially if there isn't much of it) would be to build INSERT statements to add data to your Redshift table row by row. Don't do it! Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, we suggest loading the data into Amazon S3 and then using the COPY command to load it into Redshift.
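For instance, once a CSV export has been uploaded to S3, the COPY might look like the sketch below; the bucket path and IAM role ARN are placeholders you'd replace with your own:

    COPY customers
    FROM 's3://your-bucket/db2-export/customers.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole'
    FORMAT AS CSV
    TIMEFORMAT 'auto';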
Keeping Db2 data up to date
So you've written a script to export data from Db2 and load it into your data warehouse. That should satisfy all your data needs for Db2 – right? Not yet. How do you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow; if latency is important to you, it's not a viable option.
Instead, you can identify some key fields that your script can use to bookmark its progression through the data, and pick up where it left off as it looks for updated data. Fields such as updated_at or created_at timestamps, or an auto-incrementing primary key, work best for this. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Db2.
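Here's what an incremental extraction query might look like, again against the hypothetical customers table; the script saves the highest updated_at value it has seen and binds it as the parameter on the next run:

    -- Fetch only rows changed since the last run's bookmark
    SELECT customer_id, name, created_at, updated_at
    FROM customers
    WHERE updated_at > ?   -- bookmark saved from the previous run
    ORDER BY updated_at;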
Other data warehouse options
Redshift is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure SQL Data Warehouse, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To BigQuery, To Postgres, To Snowflake, To Panoply, and To Azure SQL Data Warehouse.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Db2 to Redshift automatically. With just a few clicks, Stitch starts extracting your Db2 data, structuring it in a way that's optimized for analysis, and loading it into your Redshift data warehouse.