To work with data created in Azure Table Storage inside Cosmos DB, we need to migrate it from one service to the other. In the previous blog posts, we saw that to work with the data in Cosmos DB, we can simply change the connection string from the Azure Table Storage one to that of the Cosmos DB Table API.
In this blog post, we look at how we can actually migrate from one service to the other, and what changes, if any, we need to make in our code.
We create a new Cosmos DB account using the Table API, since we are sure that we will not need to use any other API or switch between APIs at a later stage.
In the above screenshot, we click New to create a new Cosmos DB account, choose Databases, and then select Azure Cosmos DB.
When creating the new database, we enter the ID, choose Azure Table from the API dropdown, create a new resource group, and set the location.
The checkbox to enable geo-redundancy works the same way as for the SQL API and allows global distribution of the data.
We click Create, and once the deployment has succeeded, we can move on to bringing the data into our table.
There are two ways of getting this data out of Azure Table Storage:
- Using AzCopy, a command-line utility that copies data in and out of Azure Table Storage.
- Using the Cosmos DB Data Migration Tool which we have learned about in some of the previous blog posts.
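For the first option, the older AzCopy releases (v7.x) supported exporting a table to local JSON files. As a rough sketch, the export command can be composed as an argument list; the account name, key, destination path, and manifest name below are all placeholders, so adjust them for your own account:

```python
# Sketch: composing an AzCopy (v7.x) table-export command.
# The URL, destination folder, key, and manifest name are placeholders.
args = [
    "AzCopy",
    "/Source:https://azuretable.table.core.windows.net/Users",
    "/Dest:C:\\TableExport",
    "/SourceKey:<storage-account-key>",
    "/Manifest:users.manifest",
]
command = " ".join(args)
print(command)
```

Running the composed command on a machine with AzCopy v7.x installed writes the table entities and a manifest file to the destination folder, which can later be imported into the target account.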
Let us look at getting the data using the Data Migration Tool. The tool has two interfaces: a GUI and a command line. The GUI is not yet compatible with the Table API, so we use the command line when working with the Table API.
Open the command line and navigate to the path where you downloaded the Data Migration Tool. Enter the source details with the Table Storage connection string, and the target details with the primary connection string of the new Cosmos DB account that we created for the Table API.
This one long command will help you migrate your table data from Table Storage to the Cosmos DB account:

```
dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=azuretable;AccountKey=OndsgdsjhuiIUVSFDRhdvsgdnuihjnmns+shjdsnndnjdjdnmbbndnmdkjdkjdnmdn==;EndpointSuffix=core.windows.net /s.Table:Users /t:TableAPIBulk /t
```

followed by the same details for the target.
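The connection strings passed to dt are simply semicolon-separated key=value pairs. A small sketch of parsing one makes the individual fields the tool needs easier to see; the account name and key below are placeholders:

```python
# Sketch: an Azure storage connection string is a semicolon-separated
# list of key=value pairs. The account name and key are placeholders.
conn = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=azuretable;"
    "AccountKey=<base64-key>;"
    "EndpointSuffix=core.windows.net"
)

# Split on ';' into pairs, then on the first '=' into key and value.
parts = dict(p.split("=", 1) for p in conn.split(";"))
print(parts["AccountName"])
```

The same structure applies to the target connection string, except that a Cosmos DB Table API connection string also carries a TableEndpoint field pointing at the Cosmos DB account.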
When the command completes, you can go to the Overview page and check whether the imported table now appears there. You can also click on the table, open the Data Explorer page, and view, add, edit, and delete the data.
We can see the data inside the explorer, but we still need to verify that these tables work with our application.
We change the connection string inside the application configuration file to point to the new Cosmos DB account.
With this, our application now works, with just the connection string changed in the app.config file. Our properties are now automatically indexed, we get predictable throughput, and we can easily distribute the data globally using the portal.
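As a sketch of that single change, the snippet below swaps the connection string inside an App.config-style file. The setting key, account names, and the Cosmos key are placeholders, but the TableEndpoint field is the part that distinguishes a Cosmos DB Table API connection string from a Table Storage one:

```python
# Sketch: replace the storage connection string in an App.config-style
# file with the Cosmos DB Table API one. Key names and account names
# here are placeholder assumptions.
import xml.etree.ElementTree as ET

config_xml = """<configuration>
  <appSettings>
    <add key="StorageConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=azuretable;AccountKey=...;EndpointSuffix=core.windows.net" />
  </appSettings>
</configuration>"""

cosmos_conn = (
    "DefaultEndpointsProtocol=https;AccountName=mycosmostable;"
    "AccountKey=<cosmos-key>;"
    "TableEndpoint=https://mycosmostable.table.cosmos.azure.com:443/;"
)

root = ET.fromstring(config_xml)
for add in root.iter("add"):
    if add.get("key") == "StorageConnectionString":
        add.set("value", cosmos_conn)

print(ET.tostring(root, encoding="unicode"))
```

Because the Table APIs are compatible, no other code change is needed; the application keeps using the same table client against the new endpoint.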