I want to put WordPress — all of it — on EFS. Should I?

I have a WordPress site that I currently run entirely on a single Amazon Lightsail instance. It gets properly backed up; availability is okay; security is good; it's a simple setup to maintain. It's fine.

I want to improve availability and load times by moving the site to AWS Elastic Beanstalk and making some other major changes to the site's backend (separating the database from the PHP server, moving to nginx, etc.). This would be good for a multitude of reasons, chief among them: Elastic Beanstalk is just a more robust server setup than Lightsail.

The Amazon-provided tutorial for creating a WordPress site on Elastic Beanstalk advises mounting only /wp-content/uploads/ on an Amazon Elastic File System, and the default configuration script does exactly that. The downside is that core WordPress files are not shared between EC2 instances; they are replicated on each one, so the WordPress update process runs only on whichever instance happens to serve the update request, leaving the other instances on the old version and yielding undesirable behaviors.
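To make that drift concrete, here is a minimal sketch of how I plan to check each instance for the core version it reports. The IPs and Host header are placeholders, and it assumes the default WordPress generator meta tag hasn't been removed:

```python
# Minimal drift check: ask each EC2 instance directly which WordPress
# version its homepage reports. IPs and Host header are placeholders.
import re
import urllib.request

INSTANCE_IPS = ["10.0.1.11", "10.0.1.12"]  # per-instance private IPs

for ip in INSTANCE_IPS:
    req = urllib.request.Request(f"http://{ip}/", headers={"Host": "example.com"})
    html = urllib.request.urlopen(req, timeout=5).read().decode("utf-8", "replace")
    match = re.search(r'content="WordPress ([\d.]+)"', html)
    print(ip, match.group(1) if match else "no generator tag found")
```

If the instances disagree, an update landed on only some of them.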

The tutorial advises updating WordPress (and plugins) by going through an awful process: export all site content with WordPress' (beta) export tool, then re-import it with the (beta) import tool on a new Beanstalk environment running the updated WordPress.

I am not strictly opposed to automating this process to make updating WordPress easier, but I value staying current on updates, so I want to optimize for that. I am skeptical that the best solution for me (or any customer) involves running beta import and export tools every time WordPress or a plugin needs an update.

For example, some of these updates are security updates, for which WordPress offers an automated patching system that is highly desirable for maximizing site security. But that automated patching runs into the same problem: if core WordPress files are not shared between EC2 instances, only the instance that applies the patch actually receives it.

With all of this said:

What architectural or systems issues might I encounter if I put all of my WordPress files — not just the /wp-content/uploads/ directory — on shared EFS storage?

I will, of course, do my own testing to see how things go, but I want to know what to look out for, if anything, and where to set my expectations as I attempt to do this.


Solution 1:

(Beware: I'm no expert, but I was looking for an unanswered question so I could give back to serverfault.com.)

I would first pursue scaling Lightsail. The docs (https://aws.amazon.com/lightsail/features/) discuss running multiple instances behind a load balancer. Have you tried that? Check the CloudWatch monitoring to see how busy your instances are; horizontal scaling may buy you faster response times. You can also expand Lightsail to use one or more standalone database instances. A sketch of the load balancer setup follows below.
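If you haven't tried it, here is a rough boto3 sketch of that setup; the load balancer and instance names are placeholders for your own resources:

```python
# Rough sketch: put existing Lightsail instances behind a load balancer.
# Assumes boto3 credentials are configured; all names are placeholders.
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

lightsail.create_load_balancer(
    loadBalancerName="wp-lb",
    instancePort=80,  # port your WordPress instances listen on
)
lightsail.attach_instances_to_load_balancer(
    loadBalancerName="wp-lb",
    instanceNames=["wp-node-1", "wp-node-2"],
)
```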

In my mind, WordPress is a beast when it comes to computational efficiency. It can load a hundred scripts to execute a single request. Trimming it down can help: are there any unneeded plugins? Can code be combined into fewer modules? There are optimization and profiling tools for WordPress that could help you speed up your site.

Also, consider how much of your site is static file serving vs. computation. Offloading static files to Amazon S3 is a great solution, and putting a CDN in front (Amazon CloudFront, for example) can give you faster response times on non-computational page loads while moving content closer to your users; a sketch of the S3 upload step follows below. Lastly, and often overlooked: count how many HTTP requests your users need to make to load a given page. Do you have many JS libraries that need to get loaded? Consider consolidating them into one file that can be fetched in a single request. The browser dev tools in Chrome or Firefox can really help you pinpoint where the delay is coming from.
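As a rough sketch of the offloading step (the bucket name and local path are placeholders, and it assumes the bucket already exists), something like this pushes assets to S3 with long-lived cache headers a CDN can honor:

```python
# Rough sketch: upload static assets to S3 with cache headers a CDN
# can honor. Bucket name and local path are placeholders.
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-wp-static-assets"  # assumes this bucket already exists

for path in Path("wp-content/uploads").rglob("*"):
    if path.is_file():
        content_type = mimetypes.guess_type(str(path))[0] or "application/octet-stream"
        s3.upload_file(
            str(path),
            BUCKET,
            str(path),  # mirror the local key layout in the bucket
            ExtraArgs={
                "ContentType": content_type,
                "CacheControl": "public, max-age=31536000",
            },
        )
```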

If you want to go with the manage-your-own-servers approach, then updating WP can be done in simpler ways. I would suggest you start using Docker containers: you keep one image up to date and run it on N servers. Use Amazon ECR for your image registry. When you want to update WP or plugins, you build a new container image version, then use Amazon EKS or Fargate to roll out the new version in a rolling fashion (see the sketch below). This is all part of the pets vs. cattle topic: you don't do maintenance on individual servers (containers); when there is a change, you throw away the old ones and deploy new ones.
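As a sketch of the build-and-push half of that workflow (the registry URL and version tag are placeholders, and it assumes the Docker CLI is installed and already authenticated to ECR):

```python
# Sketch: build a new image version and push it to ECR. Assumes the
# Docker CLI is installed and `aws ecr get-login-password` has already
# been used to authenticate it. Registry URL and tag are placeholders.
import subprocess

REPO = "123456789012.dkr.ecr.us-east-1.amazonaws.com/wordpress"
VERSION = "2024-05-01"  # bump whenever WordPress or a plugin updates

subprocess.run(["docker", "build", "-t", f"{REPO}:{VERSION}", "."], check=True)
subprocess.run(["docker", "push", f"{REPO}:{VERSION}"], check=True)
# A rolling deploy then points the ECS/EKS service at the new tag,
# replacing old containers rather than patching them in place.
```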

But, after all this, if you still want to go the route of common storage, I would not suggest EFS; I'm not convinced of its security story. Instead, I would mount an S3-backed filesystem on each server, so that files written to the bucket become visible to every instance. One example is s3fs-fuse: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon . A mount sketch follows below.
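A sketch of the mount (bucket and mount point are placeholders; it assumes s3fs is installed and the instance's IAM role can reach the bucket):

```python
# Sketch: mount an S3 bucket over the web root with s3fs-fuse, using
# the instance's IAM role for credentials. Names are placeholders.
import subprocess

subprocess.run(
    ["s3fs", "my-wp-bucket", "/var/www/html",
     "-o", "iam_role=auto", "-o", "allow_other"],
    check=True,
)
```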

I would also test whether WordPress writes to its own files during normal operation. If it does, and what it writes is instance-dependent, you will have to keep those directories local to each server. The sketch below is one way to find such writes.
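One simple way to find those writes (the web root path is an assumption; adjust it to your install):

```python
# Sketch: record a timestamp, exercise the site, then list every file
# under the WordPress root that was modified afterward. The docroot
# path is an assumption; adjust it to your install.
import time
from pathlib import Path

WP_ROOT = Path("/var/www/html")

baseline = time.time()
input("Exercise the site (browse, upload, update a plugin), then press Enter...")

for path in WP_ROOT.rglob("*"):
    if path.is_file() and path.stat().st_mtime > baseline:
        print(path)  # files WordPress wrote at runtime
```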