WordPress Migration Saga – The Beginnings


After a week of docker up, docker down, and random redirect loops, I’ve finally managed to move my children (this blog included) into a new home, on Linode, a virtual private server provider.

Here is the first post of the WordPress Migration Saga, a series that presents the migration of an existing WordPress website to another host, in Docker. It was not supposed to become quite a saga, but it ended up being a very looong post and I had to split it. 🙃

In this post we will talk a bit about the migration context and the prerequisites for this kind of migration.

127.0.0.1? Home; that's localhost, in networking terms. Yes, this is a more nerdy post, but if you like running all your tasks in Bash, you will stick around. 🙂

Let’s Pack Our Bags

One year ago, I started some test blogs on GoDaddy WordPress hosting, plus a static website and another blog managed through cPanel. The year passed, the offer was gone, and it was time to pack my bags and move to a cheaper home. I was already hosting websites on Linode for testing purposes, so I decided to move my stuff onto a Linode server instance, too.

WordPress (father and mother of this blog) is a free and open-source Content Management System (CMS) built on a MySQL database with PHP processing.

Linode is an American web hosting company that provides cloud hosting via VPS hosting plans that run on Linux-only servers. The company was founded in 2003 by Christopher Aker and has grown over the years to incorporate cloud hosting options. Linode data centers support 11 global markets, including Frankfurt. (src)

Before doing any migration, I had to prepare the virtual server and secure it a bit, so I wouldn't end up serving adult content from my websites when presenting them to my mum and to potential clients 🙂 (actually, a botnet would be more probable). There are a lot of useful resources about securing a host.
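"A bit" meant roughly the usual checklist. A minimal sketch, assuming a fresh Ubuntu/Debian instance (the username is a placeholder, and your distro's details may differ):

adduser laurel                      # create a non-root user (name is a placeholder)
usermod -aG sudo laurel             # give it sudo rights

# Harden SSH: no root login, no password authentication (keys only)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd

# Firewall: allow only SSH and web traffic
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable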

In order to migrate a WordPress website, you have to move your entire blog folder, plus the database (usually MySQL, where options, users, posts and more are stored), to the new server, then restore the database. I thought it wouldn't be such a big deal, so I postponed the migration to the last week before my GoDaddy plans expired.
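When you do have shell access on both ends, the database part is just a dump and a restore. A minimal sketch (the database name, user and file name are placeholders):

# On the old host: dump the WordPress database into a file
mysqldump -u wp_user -p wordpress_db > database.sql

# On the new host: create an empty database, then load the dump
mysql -u root -p -e "CREATE DATABASE wordpress_db;"
mysql -u root -p wordpress_db < database.sql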

And then, the horror! 😱 When I tried to download my blogs from the WordPress hosting, I could not find download links, only Restore. It seems the option to download backups is available only in the US and Canada. I could not connect to the server through SSH either, so I resorted to FileZilla to download my full blog folder, and to UpdraftPlus, a WordPress plugin, to download my database, too.

For the cPanel-managed website the migration was easy: I had the option to use either phpMyAdmin or MySQL to get the database, and I could download any backup to the local computer or transfer it directly to the new server.

After backing up all my data on my computer, I used scp to copy the websites to their new spots on the Linode instance (rsync would work, too). From here on, how you choose to serve your data to the outside world can vary a lot, depending on the websites' purpose, management, resources, etc. I chose Docker because I wanted to separate my websites' environments and to make them portable and easy to build up or tear down.
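For the record, the copy itself was something along these lines (the user, paths and the documentation address 203.0.113.10 are placeholders):

# Copy a website's files and its database dump to the new server
scp -r ~/backups/example1.com laurel@203.0.113.10:/var/www/example1.com
scp ~/backups/database.sql laurel@203.0.113.10:/var/www/example1.com/

# rsync does the same job and can resume interrupted transfers
rsync -avz ~/backups/example1.com/ laurel@203.0.113.10:/var/www/example1.com/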

A Word on Docker

When you build multiple applications or websites, you want to make sure that they don't conflict with each other. Maybe one application uses Python 2 and another uses Python 3; installing both in the same environment would lead to failures in one application or the other. That's why it is good practice to separate the environments for each application, and Vagrant and Docker are among the tools that can help you do this.

The main difference between the two is that Vagrant runs an entire operating system in a virtual machine for the project and is used mainly for staging, while Docker uses a containerization approach with a copy-on-write (CoW) strategy that reduces start-up time and saves space (layers share read-only files, and files are copied into a layer only if/when they are modified).

Vagrant, from HashiCorp, is a solution that enables quick configuration and provisioning of virtual machines (VMs) that help isolate the application in its own development environment. These VMs work on top of real hardware servers but emulate the virtual infrastructure the developer needs, and ensure the app works the same no matter the underlying hardware and software, as long as it runs in the VM. (src)

Docker is an open-source platform that allows isolating the apps within code containers similar to Linux Containers (LXC), though Docker moved from LXC to containerd to enable industry-wide standardization. A Docker container is a code package with everything needed to run the app code inside. While a single container is created to run a single app, one Vagrant VM can run multiple interacting apps at once. For Docker containers, this is possible using Kubernetes and Docker Compose. (src)

I had wanted to try Docker for a while, so I read some tutorials and decided to give it a try. For this migration I mainly used the docker-compose utility, which allows users to run commands on multiple containers at once: building images, scaling containers, restarting stopped containers, and more. You can also define persistent volumes on disk, where you store the data you want to preserve (for example, your website files).

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration. (src)
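To make this concrete, here is a minimal sketch of the kind of per-website compose file I'm describing. The image tags, the 8001 host port and the ${...} variable names are assumptions for illustration, not the exact file I used:

cat > docker-compose.yml <<'EOF'
version: "3"

services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DB_NAME}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./db_data:/var/lib/mysql      # persist the database on disk

  wordpress:
    image: wordpress:latest
    restart: always
    depends_on:
      - db
    ports:
      - "8001:80"                     # host port 8001 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: ${DB_NAME}
      WORDPRESS_DB_USER: ${DB_USER}
      WORDPRESS_DB_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./src:/var/www/html           # the blog folder copied from the old host
EOF

The ${...} values are substituted by docker-compose from a .env file sitting next to docker-compose.yml; more on that below.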

In order to communicate with the world, every host makes its services available on numbered ports: a web server usually uses port 80, a File Transfer Protocol (FTP) server uses port 21, and so on. Web servers on the Internet typically speak Hypertext Transfer Protocol (HTTP) on port 80 and HTTPS (HTTP Secure) on port 443.

HTTPS is a protocol created by combining the HTTP and SSL/TLS protocols. SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are methods used to secure and encrypt sensitive information like credit cards, usernames, passwords, and other private data sent over the Internet.
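A quick way to see this port arrangement on the host (the output shape varies by distro; the 8001 mapping is the hypothetical one from the compose sketch above):

ss -tlnp                          # list listening TCP ports and their owners
# nginx should hold 80 and 443, docker-proxy the per-site ports like 8001
curl -I http://127.0.0.1:8001     # check that the container answers locally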

If I'd had only one website to move, it would have been enough to create the docker-compose file in the website folder, run docker-compose, and the website would have been online. But we can't have all our blogs using the same ports 80 and 443, so when a client reaches the blog example1.com, behind the scenes we need to proxy it to, let's say, port 8001, and when it reaches example2.com, to port 8002, and so on. This can be accomplished using an nginx or Apache web server and some reverse proxy directives. I chose nginx because of its lightweight resource utilization.

NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers. (src)
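The reverse proxy part boils down to one small server block per site. A sketch, assuming the example1.com/8001 pairing from above and a Debian-style nginx layout (sites-available/sites-enabled):

cat > /etc/nginx/sites-available/example1.com <<'EOF'
server {
    listen 80;
    server_name example1.com www.example1.com;

    location / {
        proxy_pass http://127.0.0.1:8001;    # the container's published port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
ln -s /etc/nginx/sites-available/example1.com /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx           # validate, then reload

Passing X-Forwarded-Proto through matters: when WordPress sits behind an SSL-terminating proxy and doesn't see it, you get exactly the kind of redirect loops mentioned below.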

The key elements for the migration of a blog consisted of a docker-compose.yml file, a hidden .env file where we store the database credentials, a folder containing the full blog, the database.sql file, and the nginx config file for the website. I would run docker-compose up -d, then reload nginx, and the website was ready to go!
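The .env file is just key=value pairs next to docker-compose.yml, picked up automatically for the ${...} substitutions. The variable names below match the earlier hypothetical compose sketch, and the values are obviously placeholders:

cat > .env <<'EOF'
DB_ROOT_PASSWORD=change-me-root
DB_NAME=wordpress_db
DB_USER=wp_user
DB_PASSWORD=change-me
EOF
chmod 600 .env                      # credentials readable by the owner only

docker-compose up -d                # start the containers in the background
nginx -t && systemctl reload nginx  # pick up the site's nginx config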

In real life, things got a little more complicated, because I had multiple websites, and one of them responded with redirect loops because of some SSL settings. If it weren't for this issue, I would have used two Docker images to handle the reverse proxy and the SSL automatically, jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion. Because I needed more granular control, I chose to renew the SSL certificates using certbot and cron, and to do the reverse proxying with nginx installed on the host.
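The certbot-plus-cron route looked roughly like this (the domain, webroot path and renewal time are placeholders):

# Obtain the certificate once, answering the ACME challenge from the webroot
certbot certonly --webroot -w /var/www/example1.com/src \
    -d example1.com -d www.example1.com

# Renew automatically; add this line via crontab -e
0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"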

I made a folder for each website, created the folder structure, copied the existing files (database.sql and src), ran docker-compose from each folder, and reloaded nginx. Then I tested the connection to the databases, watched the logs, and examined the websites in the browser.
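Condensed, the per-site routine was roughly this (the folder and site names are hypothetical, and the credentials are the .env placeholders from above):

for site in example1.com example2.com; do
    (cd "/var/www/$site" && docker-compose up -d)
done
nginx -t && systemctl reload nginx

# Per site: import the dump, watch the logs, poke the database
cd /var/www/example1.com
docker-compose exec -T db mysql -u wp_user -pchange-me wordpress_db < database.sql
docker-compose logs -f wordpress
docker-compose exec db mysql -u wp_user -p -e "SHOW DATABASES;"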

Prerequisites

This guide will show the steps for migrating a single WordPress website to a Linode server instance. Take the instructions with a pinch of salt; I'm not a Docker expert, I've just managed to put my websites up. 😉

Before starting the migration, you will need:

- a server instance (Linode, in my case), already prepared and secured;
- Docker, docker-compose, nginx and certbot installed on the host;
- a backup of the website files (the src folder) and of the database (database.sql);
- the domain's DNS records pointed at the new server.

In the next post, Getting Used to Docker, we will go over a series of Docker commands, to get a little acquainted with this kind of working environment.
