If the data on your instance is important, it's a good idea to back it up regularly. This includes the data in your database as well as any other files that your app stores on the instance. That way, if the instance develops a serious problem, you can restore your data to a new one and be back up and running in minutes. A simple way to handle backups is to attach a volume to your instance and set up an automated process to back your data up to that volume.
In this learn guide we’ll discuss:
- attaching a volume to your instance
- backing up your MySQL database to your volume (and automating this with a cron job)
- backing up your files to your volume (and automating this with a cron job)
Attaching a volume to your instance
Volumes are a convenient way to add storage to your instance. Crucially, they are totally independent of your instance which means they can still be accessed if your instance develops a serious problem. Attaching a volume to your instance is explained in our learn guide: Configuring Block Storage on Civo
For the purposes of this learn guide, we'll assume you have your volume mounted at `/mnt/backup_volume`. You should now navigate to your volume and create the directories `database` and `files`. These directories are where you will store the backups of your database and your files respectively.
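For example, assuming the mount point above, you could create the two directories like this (sudo may be needed if the mount point is owned by root):

```bash
sudo mkdir -p /mnt/backup_volume/database /mnt/backup_volume/files
```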
Backing up your Database
We will use the `mysqldump` command to take a copy of our database and save it into a new .sql file. Usage of this command is set out below.
mysqldump -u [user_name] -p [database_name] > [new_filename].sql
As you can see, we must execute this command as a MySQL user, supplying a username and password. As you will see later, this command will be stored in a script on the instance, so as an extra security measure we will execute it as a read-only user (if somebody gains access to our instance and finds this script, it's better that they have read-only rights and cannot modify our database). If you don't already have an appropriate user, you can create one as shown below.
grant select on production_db.* to 'new_user'@'%' identified by 'Password1';
Our new user has a username of `new_user` and a password of `Password1`. We have granted this user read-only rights to the database we plan to back up, `production_db`.
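Note that on MySQL 8 and later, `grant` can no longer create a user implicitly, so you would create the user first and then grant the select privilege. A sketch, assuming you have a root MySQL user to run it with:

```bash
# MySQL 8+ requires the user to exist before privileges are granted
mysql -u root -p -e "CREATE USER 'new_user'@'%' IDENTIFIED BY 'Password1';"
mysql -u root -p -e "GRANT SELECT ON production_db.* TO 'new_user'@'%';"
```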
With our user created, we can run the `mysqldump` command discussed earlier to save a copy of our database, providing our new user's credentials. We will be including this command in a script shortly in order to automate it, but it's a good idea to test it manually first.
mysqldump --single-transaction -u new_user -p production_db > production_backup.sql
Enter password: Password1
(`--single-transaction` is required here because we have not granted table locking permissions to our new user).
If this executes correctly, you should see a new file, `production_backup.sql`, in the current directory. Great, so our user has the ability to back up our database for us. Now we simply need to automate this command. The easiest way to do this is to include it in a bash script which we can run every day via the crontab (you could probably put this command directly into the crontab as one line, however you'll see further below that there are other things we want to do daily, so we'll create our bash script now).
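Before we do, it's worth knowing how you would restore from a dump like this if you ever needed to. A minimal sketch, assuming the target database already exists on the instance you are restoring to:

```bash
mysql -u root -p production_db < production_backup.sql
```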
Create a bash script (this learn guide assumes you already know how to create one) and include the `mysqldump` command in it. As we plan to run this daily, we'll tweak it slightly to save the day number into the filename, allowing us to keep 7 days' worth of backups. After 7 days, new backups will begin to overwrite the old ones.
DAY=$(date +%w)
/usr/bin/mysqldump --single-transaction -u new_user -pPassword1 production_db \
> /mnt/backup_volume/database/production_backup_$DAY.sql
Note in the above:
- the password can be included on the same line as the command by placing it directly after `-p` without a space
- when including commands in a bash script, it's a good idea to include the full path to the command to make it a bit more robust across different systems
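Putting this together, the script file itself might look something like the sketch below at this stage (the path /bin/backup_script is an assumption, chosen to match the crontab entry further below):

```bash
#!/bin/bash
# /bin/backup_script - nightly backup of the production database

# Day of the week (0-6), used to rotate 7 days of backups
DAY=$(date +%w)

# Dump the database to the backup volume as the read-only user
/usr/bin/mysqldump --single-transaction -u new_user -pPassword1 production_db \
  > /mnt/backup_volume/database/production_backup_$DAY.sql
```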
After saving your script, it's again a good idea to check that it works by running it manually. Assuming it runs fine (creating the desired backup), you can now automate the running of this script using the crontab. Edit the crontab with `crontab -e` and, to run the script every night at say 11pm, include the following:
0 23 * * * sudo /bin/backup_script
(the `sudo` prefix is required in our case, as we are modifying the `/mnt/backup_volume` directory)
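One thing to check before relying on the crontab entry: the script must be executable, since cron calls it directly rather than via an interpreter. Something like the following would do it (using the path assumed above):

```bash
sudo chmod +x /bin/backup_script
```

Also bear in mind that cron can't answer a password prompt, so the `sudo` in the crontab entry assumes you are using the root user's crontab or have passwordless sudo configured for this script.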
Backing up your Files
Your app may allow users to save files to their accounts. If these are saved to a cloud storage provider, that's great, but if you are simply saving them to a location on your instance, it's a good idea to back these up too. A good way to do this is to combine `tar` and `gzip` to create compressed archive files. `tar` allows you to choose between two different types of backup: full or incremental.
Full backups are when your file store is archived and compressed in its entirety every time a backup is taken. Incremental backups are when you take a full copy of the file store only occasionally, with incremental backups taken in between which save only the changes made to the file store. We'll explain how to do each below.
Full Backups
First confirm where your app is storing the files in question. In our case, files are stored at /var/lib/dokku/data/production_data/public/system, in a directory called `files`. `files` is the directory we will archive.
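It can be worth checking how large this directory is before deciding on an approach, since the size of the file store determines whether nightly full backups are practical or whether the incremental option described later is a better fit:

```bash
sudo du -sh /var/lib/dokku/data/production_data/public/system/files
```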
The `tar` command will create an archive file of the file or directory specified. It is used in the format below.
tar -czf [/path/to/destination/archive_file_name.tar.gz] \
-C [/path/to/source] [file_or_directory_to_archive]
I've explained the different parts of this command below:
- The `-c` indicates you want to [c]reate an archive (`tar` can also be used to e[x]tract an archive back to its source files)
- The `-z` indicates you want to [z]ip (compress) the archive
- The `-f` indicates you want to output the result to a [f]ile (and must then specify the name and location of that file)
- The `-C` indicates you want to [C]hange to a certain directory before you perform the operation (note: do not include the file or directory to archive)
- The file or directory to be archived is the last argument (note it is separated from `-C /path/to/source` by a space)
So in our case this command would look something like this:
sudo tar -czf /mnt/backup_volume/files/archive.tar.gz \
-C /var/lib/dokku/data/production_data/public/system files
(the `sudo` prefix is required in our case, as we are modifying the `/mnt/backup_volume` directory)
Let's run this command to make sure it works. It should create an `archive.tar.gz` file in the destination directory. If that works, we can automate the command by including it in the script that we have already set to run every night. Again, we will tweak the command to store the day number in the backup's filename. This allows us to retain a week's worth of backups.
/bin/tar -czf /mnt/backup_volume/files/archive_$DAY.tar.gz \
-C /var/lib/dokku/data/production_data/public/system files
(the `sudo` prefix is no longer required in our case, as our crontab calls the script using `sudo`)
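If you ever need to restore from one of these archives, extracting it back into place is a single command. A sketch, assuming you want Wednesday's backup (day 3):

```bash
# Extract the archived 'files' directory back into its parent directory
# (existing files with the same names will be overwritten)
sudo tar -xzf /mnt/backup_volume/files/archive_3.tar.gz \
  -C /var/lib/dokku/data/production_data/public/system
```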
That's it! This approach works well for most small to medium file stores. If you are archiving large file stores, you may want to consider `tar`'s incremental backup option, discussed further below.
Incremental Backups
Incremental backups will allow us to take a full backup just once at the beginning of the week, and then incremental backups on subsequent days. The incremental backups store only the changes made, minimising storage space required. Note: in order to restore any of these incremental backups, you must start by restoring the full backup, then restore each day’s incremental, until you arrive at the day you require.
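As a rough sketch, restoring up to Wednesday (day 3) from a Monday full backup would look something like this, using the day-numbered filenames from the full backup section above; passing /dev/null as the snapshot file tells tar to treat each archive as part of an incremental series without needing the original .snar file:

```bash
# Restore Monday's full backup, then replay each day's incremental in order.
# Each step brings the directory back to the state it was in on that day.
sudo tar -xzf /mnt/backup_volume/files/archive_1.tar.gz -g /dev/null \
  -C /var/lib/dokku/data/production_data/public/system
sudo tar -xzf /mnt/backup_volume/files/archive_2.tar.gz -g /dev/null \
  -C /var/lib/dokku/data/production_data/public/system
sudo tar -xzf /mnt/backup_volume/files/archive_3.tar.gz -g /dev/null \
  -C /var/lib/dokku/data/production_data/public/system
```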
To implement incremental backups, we will tweak the tar command slightly in our script.
/bin/tar -czg /mnt/backup_volume/files/files.snar \
-f /mnt/backup_volume/files/archive_$DAY.tar.gz \
-C /var/lib/dokku/data/production_data/public/system files
The `-g` indicates we want to apply incremental backup mode, after which we must supply the location and name of the file which will contain the metadata, usually a `.snar` file. When run for the first time, this command will create a full backup (the `.tar.gz` file we are already familiar with) as well as the metadata `.snar` file. The next time it runs, it sees that the `.snar` file already exists and knows to create only an incremental backup in the form of a new `.tar.gz` file.
We would like to keep one week's worth of backups, so we must make sure that at the beginning of each week a full backup is taken. We can do this by simply deleting the `.snar` file before the `tar` command is run (we will actually delete all the `.tar.gz` files as well, so that we don't confuse anyone by leaving incremental files that no longer have an associated full backup, but this is technically not required as they will be overwritten during the week). Let's include an if statement in the script to do this (before the `tar` command is run).
if [ $DAY = 1 ]
then
  /bin/rm -f /mnt/backup_volume/files/files.snar
  /bin/rm -f /mnt/backup_volume/files/*.tar.gz
fi
So, our script will take a full backup each Monday and incremental backups throughout the rest of the week. However, there is a problem with this. Once Monday's full backup is overwritten, the incremental backups that remain are useless, so we aren't actually keeping backups for a full week. On Sunday we'd have six days of backups available, but on Tuesday we'd only have one day available. There are many backup strategies out there, but one which solves this problem is to rotate our backups between two directories. For example, when Monday comes around, instead of overwriting your full backup, you switch to an alternate directory. The full backup and incremental backups are saved in this alternate directory throughout the week, and on the following Monday you switch back to the first directory again. This way, you'll always have at least a full week's backups at any point in time.
You could implement this by creating the directories `even_week` and `odd_week` within `/mnt/backup_volume/files`, and including something like the following in your bash script.
DAY=$(date +%w)
WEEK=$(date +%V)
# %V pads with a leading zero, so force base 10 to avoid octal errors in weeks 08 and 09
if [ $((10#$WEEK % 2)) -eq 0 ]
then
DIRECTORY_NAME="even_week"
else
DIRECTORY_NAME="odd_week"
fi
if [ $DAY = 1 ]
then
  /bin/rm -f /mnt/backup_volume/files/$DIRECTORY_NAME/files.snar
  /bin/rm -f /mnt/backup_volume/files/$DIRECTORY_NAME/*.tar.gz
fi
/bin/tar -czg /mnt/backup_volume/files/$DIRECTORY_NAME/files.snar \
-f /mnt/backup_volume/files/$DIRECTORY_NAME/archive_$DAY.tar.gz \
-C /var/lib/dokku/data/production_data/public/system files
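For reference, the complete nightly script at this point might look something like the sketch below, using the paths and credentials assumed throughout this guide:

```bash
#!/bin/bash
# /bin/backup_script - nightly database and file backups to the volume

# Day of the week (0-6) and ISO week number
DAY=$(date +%w)
WEEK=$(date +%V)

# Database backup (7 rotating daily dumps)
/usr/bin/mysqldump --single-transaction -u new_user -pPassword1 production_db \
  > /mnt/backup_volume/database/production_backup_$DAY.sql

# Alternate between two directories so a full week of file backups always exists
if [ $((10#$WEEK % 2)) -eq 0 ]
then
  DIRECTORY_NAME="even_week"
else
  DIRECTORY_NAME="odd_week"
fi

# On Mondays, clear out the directory so a fresh full backup is taken
if [ $DAY = 1 ]
then
  /bin/rm -f /mnt/backup_volume/files/$DIRECTORY_NAME/files.snar
  /bin/rm -f /mnt/backup_volume/files/$DIRECTORY_NAME/*.tar.gz
fi

# Full backup on Mondays, incremental backups for the rest of the week
/bin/tar -czg /mnt/backup_volume/files/$DIRECTORY_NAME/files.snar \
  -f /mnt/backup_volume/files/$DIRECTORY_NAME/archive_$DAY.tar.gz \
  -C /var/lib/dokku/data/production_data/public/system files
```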
An alternative strategy might be to write the date into the filename (rather than just the weekday number). You would still create a full backup every Monday, and then simply delete files older than 14 days (`find /path/to/files/ -type f -mtime +14 -exec rm {} \;`), eliminating the need for two directories.
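A rough sketch of that alternative, keeping the same paths as above:

```bash
# Use the full date in the filename instead of the weekday number
DATE=$(date +%F)
DAY=$(date +%w)

# On Mondays, start a new full backup by removing the snapshot file
if [ $DAY = 1 ]
then
  /bin/rm -f /mnt/backup_volume/files/files.snar
fi

/bin/tar -czg /mnt/backup_volume/files/files.snar \
  -f /mnt/backup_volume/files/archive_$DATE.tar.gz \
  -C /var/lib/dokku/data/production_data/public/system files

# Remove anything older than 14 days so roughly two weeks of backups are kept
/usr/bin/find /mnt/backup_volume/files/ -type f -mtime +14 -exec rm {} \;
```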
Whichever strategy you choose, incremental backups are a great way to save space when you're dealing with large file stores.