Background and Objective
After using WordPress extensively for about a month to document projects and minor tech discoveries, I thought it would be good to start automatically backing the site up.
However, before I started, I thought it would be good to define what I meant by a ‘backup’:
Using the backup contents, I should be able to fully restore the site on a new server running nginx, mysql and php
So why did I word it that way? The key point for me is that I’d like to be able to lift the backup and move it to a new server – it’s not enough just to back up the WordPress files and database. The scenario is that the server running the site catches fire – no access at all.
To test my backup I have an Intel NUC sitting on the desk beside me. By the end of the exercise I want to be able to get the site up and running in its entirety only using the backup file.
Key points:
- everything that is considered a normal WordPress backup should be included
  - WordPress database
  - WordPress directory in /var/www
- I should include the SSL certificates
- I should include the nginx configuration files
Before I started on the backup, I set up my test environment using the same Ionos guide I originally used to set up my server. This took care of the nginx, mysql and php elements, so I had an ‘empty’ server all ready to take a copy of my site.
Database Backup
Mysql comes with a handy application called mysqldump for taking a copy of a database. It writes the contents of a database to a script file that can be used to recreate it, including both the schema and the data, so it’s ideal for porting a database to a new server. You point it at a database and it generates a (potentially very big) file as output. It should be run as the same user WordPress uses when connecting to the database – no need to be root.
We could write a script and embed the username/password in it, or keep a config.cnf with the values, but the details are already stored in the wp-config.php file that WordPress uses. The key parameters I need are DB_NAME, DB_USER and DB_PASSWORD.
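To sanity-check the extraction before wiring it into a script, the same grep/cut pipeline can be run against a mock wp-config.php (the values below are placeholders, not my real settings):

```shell
# Build a mock wp-config.php with the standard define('KEY', 'value'); lines
# that WordPress generates, then extract the values the same way the script does.
cat > /tmp/mock-wp-config.php <<'EOF'
define('DB_NAME', 'WordPress');
define('DB_USER', 'wdpress');
define('DB_PASSWORD', 'secret');
EOF
# Splitting each define line on single quotes puts the value in field 4.
DBUSER=$(grep 'DB_USER' /tmp/mock-wp-config.php | cut -d "'" -f 4)
DATABASE=$(grep 'DB_NAME' /tmp/mock-wp-config.php | cut -d "'" -f 4)
echo "$DATABASE / $DBUSER"   # prints: WordPress / wdpress
```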
This leads to a test script to back up the database (Version 1). You can ignore the log() function for now; it just wraps a call to echo.
#!/bin/bash
TODAY=`date +%Y%m%d`
WEBSITE="tangentuniverse.net"
WWW_ROOT="/var/www/html/"
DB_ARCHIVE=${TODAY}_DB_${WEBSITE}.sql
log(){ echo $1; }
CONFIG=$WWW_ROOT/$WEBSITE/wordpress/wp-config.php
DBUSER=$(grep 'DB_USER' ${CONFIG} | cut -d "'" -f 4)
DATABASE=$(grep 'DB_NAME' $CONFIG | cut -d "'" -f 4)
log "Backing up database $DATABASE with user $DBUSER"
PASSWORD=$(grep 'DB_PASSWORD' $CONFIG | cut -d "'" -f 4)
mysqldump -u ${DBUSER} -p${PASSWORD} --no-tablespaces ${DATABASE} > ${DB_ARCHIVE}
log "DB Backup complete"
When I run this today, it creates a file called 20250313_DB_tangentuniverse.net.sql. However, it annoyingly generates a warning:
Backing up database WordPress with user wdpress
mysqldump: [Warning] Using a password on the command line interface can be insecure.
DB Backup complete
Why is this annoying? Well, if I run this as a cron job I’ll get an email every time it runs, because it generates output to the console.
I could generate a config.cnf and store the connection settings there, but it seems a bit clunky. The alternative is to set the MYSQL_PWD environment variable in the script with a copy of the value. This is insecure, but given the values are available to the user anyway, it’s no more insecure than the system as a whole (e.g. if a hacker gets access as me then they can see the files anyway).
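For reference, the option-file approach I decided against would look something like this (the file path and credentials are placeholders):

```shell
# Write a MySQL option file holding the connection credentials;
# mode 600 keeps it readable only by the owner.
cat > /tmp/backup.cnf <<'EOF'
[client]
user=wdpress
password=secret
EOF
chmod 600 /tmp/backup.cnf
# mysqldump would then read the credentials from the file instead of the
# command line (note --defaults-extra-file must be the first option):
# mysqldump --defaults-extra-file=/tmp/backup.cnf --no-tablespaces WordPress > backup.sql
```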
This gives me Version 2:
#!/bin/bash
TODAY=`date +%Y%m%d`
WEBSITE="tangentuniverse.net"
WWW_ROOT="/var/www/html/"
DB_ARCHIVE=${TODAY}_DB_${WEBSITE}.sql
log(){ echo $1;}
CONFIG=$WWW_ROOT/$WEBSITE/wordpress/wp-config.php
DBUSER=$(grep 'DB_USER' ${CONFIG} | cut -d "'" -f 4)
DATABASE=$(grep 'DB_NAME' $CONFIG | cut -d "'" -f 4)
log "Backing up database $DATABASE with user $DBUSER"
export MYSQL_PWD=$(grep 'DB_PASSWORD' $CONFIG | cut -d "'" -f 4)
mysqldump -u ${DBUSER} --no-tablespaces ${DATABASE} > ${DB_ARCHIVE}
log "DB Backup complete"
WordPress Files and Media Backup
Helpfully, WordPress stores all of the files it needs in the WordPress directory. This includes all plugins and media for the site – perfect. My root directory is /var/www/html/tangentuniverse.net/
To back this up, I can simply compress the directory:
#!/bin/bash
TODAY=`date +%Y%m%d`
WEBSITE="tangentuniverse.net"
WWW_ROOT="/var/www/html/"
FILE_ARCHIVE=${TODAY}_F_${WEBSITE}.tar.gz
log(){ echo $1; }
log "backing up files"
cd $WWW_ROOT
tar -czf ~/$FILE_ARCHIVE $WEBSITE
log "website file backup complete"
When I run it today, it generates the file 20250313_F_tangentuniverse.net.tar.gz
john@tangentuniverse:~$ sh backup_files.sh
backing up files
website file backup complete
Nginx Backup
I use the nginx webserver to host the tangentuniverse.net website, which means I will need to back up any site-related files located in the /etc/nginx directory. The main configuration file, tangentuniverse.net.conf, is stored in the sites-enabled directory.
Should I backup the certificates?
Looking at the nginx configuration file, I can see that it refers to a ssl_certificate and ssl_certificate_key that are stored in /etc/letsencrypt/live directory.
I could attempt to copy those, but unfortunately a regular (non-root) user cannot access those directories.
john@tangentuniverse:~$ cp /etc/letsencrypt/live/tangentuniverse.net/fullchain.pem .
cp: cannot stat '/etc/letsencrypt/live/tangentuniverse.net/fullchain.pem': Permission denied
So is this a problem? Well, it depends:
- A: If the original server is gone forever, then I should set up a new Certbot-managed certificate on the new server, so it would only be a temporary inconvenience
- B: If the original server is only going to be offline for a small amount of time, I can use the certificate and key on the new server for a little while
When you think about it though, even in case A you may want the original certificate and key on a backup server while you set up Certbot on the new one. So I decided to back up the certificates.
How to back up certificates?
Given the certificates are located where the regular user can’t get at them, I need a way to back up the files. The most straightforward approach is a root-level cron script that copies the two files to the user directory. Alternatively, I could change the permissions on the /etc/letsencrypt directory, but I’m reluctant to do that in case I forget about it and edit files by accident down the line.
I made a backup script called copy_certs.sh. It’s a very simple script that copies the two files to a user-controlled directory. I don’t attempt to back up the entire LetsEncrypt directory, as that is only needed if I want to renew the certificates. This script will allow me to get the site up and running with the existing SSL infrastructure; after that I’ll have to install LetsEncrypt on the new server and get new SSL certificates.
#!/bin/bash
ARCHIVE_DIR=/home/john/ssl_certs
TARGET_USER=john
WEBSITE=tangentuniverse.net
mkdir -p $ARCHIVE_DIR
cp /etc/letsencrypt/live/$WEBSITE/fullchain.pem $ARCHIVE_DIR/
cp /etc/letsencrypt/live/$WEBSITE/privkey.pem $ARCHIVE_DIR/
# the script runs from root's crontab, so no sudo is needed
chown -R $TARGET_USER $ARCHIVE_DIR
Then I added the job to the root user’s cron tasks – it HAS to run as root, or it won’t have access to the certificate files.
john@tangentuniverse:~$ sudo crontab -e
30 11 * * * /home/john/scripts/copy_certs.sh
So now, as the standard user, I can see a copy of the SSL certificate/key:
john@tangentuniverse:~$ ls -l ssl_certs/
total 8
-rw-r--r-- 1 john root 3619 Mar 14 11:30 fullchain.pem
-rw------- 1 john root 1704 Mar 14 11:30 privkey.pem
Nginx backup script
Now that I have a copy of the SSL certificate/key in the user directory, I can put together a script to archive all the data:
#!/bin/bash
tmp_dir=system_settings
cd $HOME
mkdir -p $tmp_dir
WEBSITE_SETTINGS="/etc/nginx/sites-enabled/tangentuniverse.net.conf"
cp ${WEBSITE_SETTINGS} ${tmp_dir}/
cp -r $HOME/ssl_certs ${tmp_dir}/
tar -czf ~/$(date +%Y%m%d)_sys_config.tar.gz $tmp_dir
rm -fr $tmp_dir
If I run that script, it generates a file called 20250314_sys_config.tar.gz. Looking inside the archive I can see:
john@tangentuniverse:~$ tar -tzf 20250314_sys_config.tar.gz
system_settings/
system_settings/tangentuniverse.net.conf
system_settings/ssl_certs/
system_settings/ssl_certs/fullchain.pem
system_settings/ssl_certs/privkey.pem
Putting it all together
I have the 3 main components of the website being backed up individually, so now I can either put them together in one bigger script or call them separately. My preference is to put them together, because it means only one cron job to run and one place to look if backups fail. They also share a lot of common configuration, so it is far easier to maintain as one file.
To make it easier to support, I’ve updated the log() function. As mentioned before, if a cron script generates any output it will cause an email to be sent – which is the behaviour I want. The logging function can now be turned on and off with a boolean, so if I need to test I can turn it on, but normally it will be off.
The final script
#!/bin/bash
WWW_ROOT="/var/www/html/"
WEBSITE="tangentuniverse.net"
WEBSITE_SETTINGS="/etc/nginx/sites-enabled/tangentuniverse.net.conf"
SSL_CERTS_DIR=$HOME/ssl_certs
logging_on=true
log ()
{
if [ "$logging_on" = true ] ; then
echo "$1"
fi
}
# derived variables
TODAY=`date +%Y%m%d`
FILE_ARCHIVE=${TODAY}_F_${WEBSITE}.tar.gz
DB_ARCHIVE=${TODAY}_DB_${WEBSITE}.sql
FINAL_ARCHIVE=${TODAY}_${WEBSITE}.tar.gz
#
# copy the website files
#
log "backing up files"
cd $WWW_ROOT
tar -czf $HOME/${FILE_ARCHIVE} $WEBSITE
log "website file backup complete"
#
# backup the database
#
cd $HOME
CONFIG=$WWW_ROOT/$WEBSITE/wordpress/wp-config.php
DBUSER=$(grep 'DB_USER' ${CONFIG} | cut -d "'" -f 4)
DATABASE=$(grep 'DB_NAME' $CONFIG | cut -d "'" -f 4)
log "Backing up database $DATABASE with user $DBUSER"
export MYSQL_PWD=$(grep 'DB_PASSWORD' $CONFIG | cut -d "'" -f 4)
mysqldump -u ${DBUSER} --no-tablespaces ${DATABASE} > ${DB_ARCHIVE}
log "DB Backup complete"
#
# backup the system setting and copy the db and WordPress archives
#
tmp_dir=${TODAY}
cd $HOME
log "Building final archive in $HOME"
mkdir $tmp_dir
log "Copying website settings"
cp ${WEBSITE_SETTINGS} ${tmp_dir}
log "Copying SSL certs"
cp -r $SSL_CERTS_DIR ${tmp_dir}
log "Copying website files"
mv ${FILE_ARCHIVE} ${tmp_dir}
log "Copying website database"
mv ${DB_ARCHIVE} ${tmp_dir}
log "Generating archive ${FINAL_ARCHIVE}"
tar -czf ${FINAL_ARCHIVE} $tmp_dir
log "Cleaning up"
rm -fr $tmp_dir
log "Done."
Running this with logging on gives:
john@tangentuniverse:~$ sh full_backup.sh
backing up files
website file backup complete
Backing up database WordPress with user wdpress
DB Backup complete
Building final archive in /home/john
Copying website settings
Copying SSL certs
Copying website files
Copying website database
Generating archive 20250314_tangentuniverse.net.tar.gz
Cleaning up
Done.
For the first few runs I will let the system email me the results, then I will edit the script to turn logging off. I’ve chosen to run the backup at 1:15am as … well, why not …
15 01 * * * /home/john/scripts/full_backup.sh
Archiving the Backups
The final stage is to copy the archive from the server to a remote location. The best way to do this is actually to pull the file from the web server using scp, because I already have SSH keys for accessing my website server from my home server.
So I wrote a simple cron job on my home server:
#!/bin/bash
TODAY=`date +%Y%m%d`
FINAL_ARCHIVE=${TODAY}_tangentuniverse.net.tar.gz
scp john@tangentuniverse.net:$FINAL_ARCHIVE .
ssh john@tangentuniverse.net rm $FINAL_ARCHIVE
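One refinement worth considering – and not something my script does yet – is verifying the downloaded archive before deleting the remote copy. A sketch, reusing the FINAL_ARCHIVE variable from the script above:

```shell
# Check the downloaded file is a readable tar.gz before removing the remote
# copy; tar -tzf exits non-zero if the archive is truncated or corrupt.
if tar -tzf "$FINAL_ARCHIVE" > /dev/null 2>&1; then
    ssh john@tangentuniverse.net rm "$FINAL_ARCHIVE"
else
    echo "archive failed verification, keeping remote copy" >&2
fi
```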
Testing
I took the archive file that was generated and copied it to my test server, then went through the steps of restoring each element of the backup.
Database Restore
The database restore was straightforward. The only slight snag was that I had to create the database and user first, something I completely forgot about until I went to load the db!
- create a database
create database WordPress;
- create the user
CREATE USER 'wdpress'@'localhost' IDENTIFIED BY '****';
- grant rights on the database to the user
GRANT ALL PRIVILEGES ON WordPress.* TO 'wdpress'@'localhost' WITH GRANT OPTION;
Then I logged back into the database as the WordPress user and ran the script:
mysql -u wdpress -p WordPress
source 20250314_DB_tangentuniverse.net.sql;
File Restore
All of the files and plugins were compressed together, so all I had to do was extract them into the /var/www/html directory. I had to make sure to change the ownership to the www-data user (or the corresponding user for your nginx install). Not only does this restore the WordPress files, it also restores any plugins and their settings.
john@bee:/var/www/html$ tar -xzf 20250314_F_tangentuniverse.net.tar.gz
john@bee:/var/www/html$ sudo chown -R www-data tangentuniverse.net/
Nginx Config
Finally I had to set up the nginx server to be aware of the site and SSL configuration. I copied the file tangentuniverse.net.conf to the /etc/nginx/sites-available directory and then linked it into sites-enabled.
john@bee:$ sudo cp 20250314/tangentuniverse.net.conf /etc/nginx/sites-available/
john@bee:$ sudo ln -s /etc/nginx/sites-available/tangentuniverse.net.conf /etc/nginx/sites-enabled/default
I also had to manually edit the configuration file /etc/nginx/sites-enabled/tangentuniverse.net.conf so that the SSL certificate and key point to the right place:
# ssl_certificate /etc/letsencrypt/live/tangentuniverse.net/fullchain.pem; # managed by Certbot
# ssl_certificate_key /etc/letsencrypt/live/tangentuniverse.net/privkey.pem; # managed by Certbot
ssl_certificate /home/john/20250314/ssl_certs/fullchain.pem;
ssl_certificate_key /home/john/20250314/ssl_certs/privkey.pem;
After that I started the nginx server on the test machine. There were no errors in the log, so I was ready to test.
Hitting the Test Site
I had to update name resolution on my laptop so that the hostname tangentuniverse.net would resolve to the IP of my test machine. In /etc/hosts I created the entry:
192.168.1.46 tangentuniverse.net
Launching a fresh browser, I could hit the website on my test server. It all seemed a bit too flawless, so I actually shut down nginx on my live server just to make sure!

I checked that I could log in as the administrator, and I created a test blog post.

Also, checking out the website, I could see all of the files and pages, including those that were in draft status.
Conclusion
It took a bit of effort, but the act of restoring the website on a clean server helped me fully understand the process and ensure I hadn’t missed anything. There was one extra step relating to some php libraries I had forgotten to install, but other than that the process was pretty smooth.
Up next, I’ll have to periodically clear out the archive directory. I’ll do this in a week or so, when I actually have a few files to test on.
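When I get to it, the cleanup will probably be a one-liner along these lines. The backup directory name and the 30-day retention window are placeholder assumptions, not decisions I’ve made yet:

```shell
# Prune site archives older than 30 days from the backup directory.
# BACKUP_DIR and the retention period are assumptions for illustration.
BACKUP_DIR=${BACKUP_DIR:-$HOME/backups}
mkdir -p "$BACKUP_DIR"
find "$BACKUP_DIR" -maxdepth 1 -name '*_tangentuniverse.net.tar.gz' -mtime +30 -delete
```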