Civo’s managed storage service, Civo Object Store, lets you store unstructured data in an S3-compatible system. It is compatible with AWS S3 (Simple Storage Service), the low-latency, highly available public cloud storage service offered by Amazon Web Services (AWS). In this tutorial, you will learn how to migrate data from AWS S3 to Civo Object Store using rclone, an open-source command-line program for managing files on cloud storage.
Prerequisites
To complete this tutorial, you will need the following:
- AWS account with files inside an S3 bucket
- Civo account
- Civo CLI installed
You will also need a Civo object store ready for this tutorial. If you don’t have one, create a new one; you can consult our documentation on how to create a new object store.
Getting started with rclone
Throughout this tutorial, we will be using rclone, a Go-based program packaged as a single binary, to migrate data from AWS S3 to Civo Object Store. Rclone is an open-source program that helps you manage your files across cloud storage services. If you have used an S3-compatible storage system before, you might be familiar with s3cmd. The functionality offered is similar, but rclone supports not just S3-compatible storage but also services that are not S3-compatible, such as Google Drive and Dropbox. You can check the complete list in its documentation.
Installing rclone
Before we begin to migrate your data, you will need to install rclone, which can be done by referring to the documentation here.
If you are using MacOS, you can use brew to install rclone by running the following command:
brew install rclone
If you are using Windows, you can also use a package manager, such as Winget, Chocolatey, or Scoop, to install rclone.
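For example, one of the following commands should work, depending on which package manager you have (the package identifiers shown here are the commonly published ones, so double-check them against your package manager’s repository):

winget install Rclone.Rclone
choco install rclone
scoop install rclone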
On Linux, the easiest option is to download a precompiled binary and copy it into a directory on your PATH.
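As a rough sketch, a manual install on a 64-bit Linux machine could look like the following; the download URL uses rclone’s published naming scheme for the current release, so adjust the architecture if your machine differs:

curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64
sudo cp rclone /usr/local/bin/
sudo chmod 755 /usr/local/bin/rclone

Afterwards, running rclone version should confirm that the binary is on your PATH.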
Configuring rclone
Now that you have rclone on your system, the next step is to configure it with your AWS security credentials and your Civo object store credentials.
Getting AWS access key ID and secret access key
To get your AWS security credentials, do the following steps:
Step 1: Access AWS security credentials page.
Step 2: Scroll down to the Access keys section.
Step 3: Click Create access key. If you are currently using a root account to access your AWS console, a new page will show up, warning you that using the root access key is not a good practice.
If you have an otherwise empty AWS account, for the sake of this tutorial, it’s okay to use it just for this occasion. Tick the checkbox and click Create access key.
Step 4: A new page will show up with details of your new access key. Click Download .csv file.
Step 5: Open the downloaded file. Inside the file, you will find your AWS access key ID and secret access key separated by a comma.
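If you already use the AWS CLI and prefer to stay in the terminal, you can create an access key there instead; this is an optional alternative, and the IAM user name below is only a placeholder:

aws iam create-access-key --user-name <your-iam-user>

The JSON response contains the AccessKeyId and SecretAccessKey values, which are the same two values you would otherwise find in the downloaded .csv file.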
Getting Civo credentials
To get your Civo credentials, follow these steps:
Step 1: Run the following command.
civo objectstore show <object-store-name>
Step 2: The output from the previous step shows a command to get your secret access key. Run that command.
civo objectstore credential secret --access-key=<access-key>
For the above two commands, take note of the access key, secret key, and object store endpoint. You will need these three things to configure rclone for the migration.
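For example, with a hypothetical object store named my-site-def1234 (the same name reused later in this tutorial), the two calls would look like this:

civo objectstore show my-site-def1234
civo objectstore credential secret --access-key=<access-key-from-previous-output>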
Configuring an S3 remote in rclone
Now that you have all the credentials you need, the next step is to configure rclone for the S3 remote. For rclone, ‘remote’ refers to a service or storage location used for file storage. Therefore, in this tutorial's context, AWS S3 and Civo object store are both considered remotes.
Follow these steps to allow rclone to get the files from your S3 bucket:
Step 1: Run rclone config.
Step 2: Type in n, then press Enter.
Step 3: Enter a name for your remote, for example, ‘aws-s3’.
Step 4: For storage type, type in ‘5’ for Amazon S3 Compliant Storage Providers.
Step 5: For the provider, type in ‘1’ for AWS S3.
Step 6: For env_auth, leave it empty and press Enter.
Step 7: Next, input your AWS access key and press Enter.
Step 8: Input your AWS secret access key and press Enter.
Step 9: For the region, input the option number that matches the location of your AWS S3 bucket. For example, if your bucket is in the ap-southeast-1 region, input 12. You can find your bucket’s region in your AWS account.
Step 10: For the option endpoint, leave it empty and press Enter.
Step 11: For the location_constraint, input the same option number as your bucket region from step 9.
Step 12: For the acl, leave it empty and press Enter.
Step 13: Leave server_side_encryption empty and press Enter.
Step 14: Leave sse_kms_key_id empty and press Enter.
Step 15: Leave storage_class empty and press Enter.
Step 16: Enter ‘n’ for Edit advanced config.
Step 17: Enter ‘y’ to confirm the configuration.
After the last step, you will see the list of remotes in rclone and a prompt asking you what to do next. Now, you will configure a new remote for your Civo object store.
Configuring a Civo object store remote in rclone
Continuing from the last prompt, do the following:
Step 1: Type in ‘n’ for a new remote and press Enter.
Step 2: Type in the name of your new remote and press Enter. For example, you can type in ‘civo’.
Step 3: For the storage type, type in ‘5’ for Amazon S3 Compliant Storage Providers.
Step 4: For the provider, type in ‘25’ for any other S3-compatible provider.
Step 5: For the env_auth, leave it empty and press Enter.
Step 6: For the access_key_id, type in your Civo credential’s access key ID. This is in the output of the civo objectstore show command you ran previously.
Step 7: For the secret_access_key, type in your Civo credential’s secret access key. This is the output of the civo objectstore credential secret command you ran previously.
Step 8: For the region, leave it empty and press Enter.
Step 9: For the endpoint, type in your object store endpoint and press Enter.
You can get this information from the output of the civo objectstore show command you ran previously. Usually, it follows the pattern objectstore.region.civo.com, so if your object store region is FRA1, the endpoint should be objectstore.fra1.civo.com.
Step 10: For the location_constraint, leave it empty and press Enter.
Step 11: For the acl, also leave it empty and press Enter.
Step 12: You don’t need to edit advanced config, so type in ‘n’ and press Enter.
Step 13: Type in ‘y’ and press Enter to confirm.
Step 14: You should now have two remotes configured, as shown in the current remotes list. Enter ‘q’ to finish configuring rclone.
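Before migrating anything, it is worth a quick sanity check that both remotes respond; the commands below assume the remote names aws-s3 and civo used in this tutorial:

rclone listremotes
rclone lsd aws-s3:
rclone lsd civo:

The first command should print both remote names, and the other two should list the buckets available on each remote.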
Migrating data from AWS S3
With the rclone configuration complete, you can sync the contents of your S3 bucket with your Civo object store using the rclone sync command. You will use the --progress flag so that rclone shows the progress of the synchronization process.
The syntax for the sync command is:
rclone sync --progress source:path dest:path [flags]
For example, if your S3 bucket name is my-site-abc7890 and your Civo object store name is my-site-def1234, then you run:
rclone sync --progress aws-s3:my-site-abc7890 civo:my-site-def1234
Once rclone finishes executing, the --progress output shows a summary of the transfer, including the amount of data transferred, the number of files, and the elapsed time.
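If you want to preview what the sync will do before running it for real, or verify the result afterwards, rclone’s --dry-run flag and check command can help; the bucket and object store names below are the same example names used above:

rclone sync --dry-run --progress aws-s3:my-site-abc7890 civo:my-site-def1234
rclone check aws-s3:my-site-abc7890 civo:my-site-def1234

The first command only reports what would be copied or deleted, while the second compares both sides and reports any files that differ.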
Cleaning up
Using a Civo object store will incur costs in your account. If you no longer need the object store and don’t want to be billed for it, delete it by running this command:
civo objectstore delete <object-store-name>
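If you also want to tidy up rclone itself, you can remove the remotes you created; the names below match the examples used in this tutorial. You may also want to deactivate or delete the AWS access key in the AWS console if it was created only for this exercise.

rclone config delete aws-s3
rclone config delete civo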
Summary
In this tutorial, we've discussed rclone and how to move data from AWS S3 to Civo Object Store. Once you have configured a cloud storage provider as one of your remotes, you can follow the same steps to migrate or copy your data.
You can check out the main rclone documentation if you want to read more about it. Or, if you want to learn more about Civo Object Store, you can check out our Docs or Tutorials.