Sometimes, our site optimisation issues go beyond our website code and are, instead, more closely related to our server configuration. Though optimising our code should be a priority, we should also be mindful of the effects our server configuration can have on overall performance, and take the time to learn how to configure our server to get the most out of our site. In this article, we'll cover some of the key steps you can take to ensure your site's server runs as smoothly and efficiently as possible.
Use GZIP compression as your base
One of the most basic techniques for optimising website performance is GZIP compression, which can potentially shrink the data our site needs to transmit to a fraction of its original size. This is principally because text data (such as HTML, CSS, JS, or SVG graphics) compresses very well.
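As a quick illustration of why text assets compress so well, here is a short Python sketch (the sample HTML is made up) that gzips a repetitive page fragment and reports the size reduction:

```python
import gzip

# A small, repetitive HTML sample (hypothetical content) standing in
# for a typical templated page full of similar markup.
html = ("<div class='item'><span>Example product</span>"
        "<a href='/product'>Details</a></div>\n" * 200).encode("utf-8")

compressed = gzip.compress(html)
ratio = len(html) / len(compressed)

print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes, "
      f"ratio: {ratio:.0f}x")
```

Real pages are less repetitive than this sample, so expect a smaller (but still substantial) ratio in practice.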
GZIP compression can be enabled in a few different ways depending on your host. Sometimes it's as simple as turning it on in the server's control panel; on some machines you will need a dedicated entry in the .htaccess file to enable it; and sometimes it's enabled by default. Generally, we would advise contacting your hosting support or checking your server documentation to be sure that GZIP is enabled.
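If your server is Apache and allows .htaccess overrides, a rule along these lines (a sketch, assuming mod_deflate is available) is a common way to enable compression for text-based content types:

```apache
<IfModule mod_deflate.c>
  # Compress text-based responses; binary formats like JPEG/PNG
  # are already compressed and gain nothing from GZIP.
  AddOutputFilterByType DEFLATE text/html text/css text/plain
  AddOutputFilterByType DEFLATE application/javascript application/json
  AddOutputFilterByType DEFLATE image/svg+xml
</IfModule>
```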
Once it's up and running, we can check if GZIP compression is actively working on our website using this tool: https://www.giftofspeed.com/gzip-test/
Set up the browser cache
When performing a controlled website performance test, the browser will usually have little to no effect on the final result. However, when live users are browsing the site, visiting multiple pages in a single session or returning frequently, a properly implemented browser cache can be of fundamental importance for page loading efficiency.
It's therefore worth configuring your server so that additional resource files, such as CSS, JS scripts, fonts, and graphics, which frequently add noticeable load time to your site, are saved in the browser cache.
Thanks to this, after the initial page load, a user browsing your site will not need to redownload these assets on subsequent subpages. If you're using a lot of such elements on your site, this will significantly speed up page rendering.
As with GZIP compression, there is no single solution here. Sometimes the cache will be automatically set on the server, and sometimes it will have to be configured manually, such as via the .htaccess file.
Here's an example rule for a .htaccess file that sets the validity of files with selected extensions to seven days. Once a user visits your site, any downloaded files with the specified extensions will be stored by the browser for seven days, allowing faster loads when the user returns within that period or browses to other subpages of the website:
<IfModule mod_headers.c>
  <filesMatch "\.(jpg|jpeg|png|svg|gif|webp|ico|css|js|woff|woff2)$">
    Header set Cache-Control "max-age=604800, public"
  </filesMatch>
</IfModule>
Pitfalls of using the cache
When using the browser cache, remember that it is a double-edged sword: on the one hand we gain performance, but on the other, updating the page content may lead to some unexpected behaviour.
For example: if HTML files are cached on your site, then the user's browser may, depending on the browser and its settings, continue to load the older cached version of the file on subsequent visits until the cache expires, preventing the user from seeing the updated content. For this reason, it's better to avoid using the browser cache for such files, or any other content that is frequently updated.
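If you want to be explicit about it, you can tell browsers to always revalidate HTML before reusing a cached copy. A sketch for .htaccess, again assuming mod_headers:

```apache
<IfModule mod_headers.c>
  # "no-cache" does not forbid storing the file; it forces the browser
  # to check with the server before reusing the cached copy.
  <filesMatch "\.(html|htm)$">
    Header set Cache-Control "no-cache"
  </filesMatch>
</IfModule>
```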
If, on the other hand, we replace an image on a given page and have caching enabled for image files, we can work around this limitation by giving the new image a different name from the original file; this way, the browser treats it as a new resource and downloads it rather than reloading the original image from the cache, which increases the chance that the user will see the updated content instantly.
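This renaming step can be automated. As a sketch (the file name and helper function here are hypothetical), a few lines of Python can derive a fingerprinted name from the file's contents, so the name changes automatically whenever the image does:

```python
import hashlib
from pathlib import Path

def fingerprinted_name(path: Path) -> str:
    """Return a name like 'hero.<8-char-hash>.jpg' based on the file's contents."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()[:8]
    return f"{path.stem}.{digest}{path.suffix}"

# Example: write a dummy image file and derive its cache-busting name.
img = Path("hero.jpg")
img.write_bytes(b"\xff\xd8\xff\xe0 fake JPEG bytes")
print(fingerprinted_name(img))
```

Build tools for CSS/JS bundles typically apply the same content-hashing idea, which is why long cache lifetimes are safe for fingerprinted assets.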
Avoid unnecessary redirections
The server configuration files (e.g. .htaccess) allow you to create advanced redirects for when users enter an incomplete or inexact URL, to direct them to the correct webpage. Sometimes, however, too many redirects are created, which can have a negative impact on page loading performance.
It's a good idea then to check that we have correctly configured our redirects to make them as efficient as possible. For example, a fairly common error is to implement a redirect chain like the following:
http://example.org -> https://example.org -> https://www.example.org/
You can configure the redirects to skip the middle step and get:
http://example.org -> https://www.example.org/
This makes for a smoother transition. We can check the redirects and their respective chains on our website using this tool: https://redirect.li/http/
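With Apache's mod_rewrite, both the HTTPS upgrade and the www canonicalisation can be handled in a single 301, so no request has to hop through an intermediate URL. A sketch for .htaccess (substitute your own domain for example.org):

```apache
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Any request that is non-HTTPS or non-www goes straight
  # to the canonical URL in one redirect.
  RewriteCond %{HTTPS} off [OR]
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteRule ^(.*)$ https://www.example.org/$1 [R=301,L]
</IfModule>
```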
When setting up redirects, we should take a moment to ensure we select the correct type: permanent (301) or temporary (302). Many times I have encountered a situation where the domain owner, while working on a new frontpage, would use a permanent redirect (301) to direct users to, for example, the company profile on Facebook. Since browsers cache permanent redirects, any user who visited the site while the redirect was active would still be sent to Facebook when visiting the domain, even after the new site was finished and published.
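For anything short-lived, such as pointing the domain at a holding page while the new site is built, a 302 is the safer choice, because browsers keep re-checking it rather than caching it long-term. A sketch for .htaccess (the target URL is hypothetical):

```apache
# Temporary redirect: easy to remove later, not cached as permanent.
Redirect 302 / https://www.facebook.com/yourcompany
```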
Match your hosting to your traffic
If we expect an increase in traffic to our website, regular hosting may not be enough to guarantee rapid site loading. It's easy to forget that hosting works a bit like public transport: though we talk about our site server as if it were a standalone, dedicated machine, most of the time it's just one of many sites sharing one of the host's servers. This means that increased traffic to other sites on the same hosting server can have a negative impact on our website's response time.
If this is happening to you (or if you are the one with increased traffic!), the solution can be to move to a VPS or a dedicated server, which reserves a set amount of computing power and bandwidth for our website (though this of course comes with a correspondingly higher fee).
Of course, there are also cloud solutions, which can be quick and easy to set up. However, though they can sometimes be cheaper than regular hosting, a huge increase in traffic can make them much more expensive than simply renting a dedicated server.
In the case of servers, it is also worth paying attention to the type of drives they use; older SSDs, though still relatively fast, are heavily outperformed in terms of site response time by more modern NVMe SSDs. It's therefore advisable to look for servers running on NVMe drives to get the best load times possible.
Server location matters
Try as we might, we can't fool physics. If we are based in Europe but our server is located overseas, then our site visitors from Europe will experience slower load times than, say, users in the US. Therefore, we should always choose a server location that targets the area where the majority of our visitors are based, to keep things running smoothly.
Alternatively, if your users are spread out around the world, you can use a service such as Cloudflare, which delivers your website resources from a network of servers located around the world, based on each user's location. The problem of the user's distance from the server then largely disappears.
DNS location and TTL selection
Sometimes our domain name (and thus our DNS) and our actual site server are held with separate providers. In these cases, we need to be careful that our DNS server is not located in a completely different geographical location from our users, as this will slow the speed at which our domain name is translated into the target IP address for our site server, negatively impacting response time. (This is a similar issue to the server-location one above, but focused on the DNS server only.)
In addition, it is worth paying attention to the TTL parameter in our DNS records: values that are too low cause our DNS server to be queried too often, and its slower responses will, in turn, slow down the response time of our website. Therefore, we should analyse the TTL values chosen for our DNS records to optimise the time needed to translate the domain name into the server's IP address.
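TTLs are set per record in the DNS zone. In a BIND-style zone file (the names and addresses below are example values), a stable record such as the main A record can safely carry a long TTL, while a record you expect to change soon gets a shorter one:

```
; TTLs are in seconds: 86400 = 24 hours, 300 = 5 minutes
example.org.          86400  IN  A  203.0.113.10  ; stable, cache for a day
staging.example.org.    300  IN  A  203.0.113.20  ; about to change, keep TTL short
```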
You can read more about selecting TTL values here: https://www.varonis.com/blog/dns-ttl/
Summary
Optimising the code and website is important and should be our first port of call when looking to improve load times, but that doesn't mean we can disregard server configuration as insignificant. A poorly chosen or configured server can ruin the results of our work. So, after completing the optimisation of our website code, we should spend some time analysing whether our server and DNS are set up to provide the fastest possible response times to our users.