There’s a clear tradeoff with compressing HTTP responses on the fly: compress “harder” and you’ll (hopefully) get a
smaller file that takes less time to send over the network – but the net benefit might be negative if the extra work
takes too much time, or (when under heavy load) too much CPU. A lot of work has been done analysing this tradeoff, but for static
content there’s a neat and simple way to avoid the tradeoff completely: compress offline before serving. Nginx supports this using the `gzip_static` module.
The primary benefit is that you can eliminate serve-time CPU cost while getting even better compression. Of course, it doesn’t work for dynamically generated content, but modern sites tend to have a lot of heavy static assets. A side benefit is that browser caches normally store the compressed response from the server, so better compression means that more of your site will fit in the browser cache. Lower network usage and better cacheability are a double win for mobile browsers.
More good news is that (at least with the DEFLATE algorithm used in gzip) higher compression levels don’t make any noticeable difference to decompression time. The compressor just works harder to find an optimal encoding.
To get a concrete feel for the size difference, here are some numbers for `jquery-2.2.4.min.js`:

| Compression | Size (bytes) |
|---|---|
| None | 85578 |
| gzip 1 | 34789 |
| gzip 2 | 33533 |
| gzip 3 | 32568 |
| gzip 4 | 30850 |
| gzip 5 | 29930 |
| gzip 6 | 29717 |
| gzip 7 | 29679 |
| gzip 8 | 29672 |
| gzip 9 | 29672 |
| zopfli | 28799 |
Nginx uses gzip at level 1 (fastest) by default. As you can see, that alone cuts the size of this text file by more than half, which explains why it’s so popular. The last entry, zopfli, is a relatively new DEFLATE-compatible compressor designed for applications like offline compression – it prioritises compression quality over encoding speed. Here it shaves a further ~17% off gzip level 1’s output (and ~3% off level 9’s). That’s not bad when you consider that compression is now effectively zero cost at serve time.
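If you want to reproduce numbers like these yourself, here’s a rough sketch using standard tools (it assumes `gzip` and `zopfli` are installed, and that `jquery-2.2.4.min.js` is in the current directory):

```sh
f=jquery-2.2.4.min.js
wc -c < "$f"                        # uncompressed size
for level in $(seq 1 9); do
    gzip -c -"$level" "$f" | wc -c  # compressed size at each gzip level
done
zopfli -c "$f" | wc -c              # zopfli at its default 15 iterations
```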
The zopfli algorithm actually has tunable compression levels as well, but I’ve found that the default of 15 iterations is good for typical applications. For example, cranking it up to thousands of iterations only shaved a dozen or so more bytes off this JavaScript file, while compression took several seconds. Obviously, that’s only worth it for highly performance-critical assets.
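If you do want to experiment, the iteration count is tunable from the command line – a quick sketch, assuming the stock `zopfli` binary’s `--i#` flag:

```sh
# More iterations buy a handful of bytes at a steep CPU cost.
zopfli --i1000 jquery-2.2.4.min.js   # writes jquery-2.2.4.min.js.gz
```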
## Configuring Nginx
To check if you have the `gzip_static` module, run Nginx with the `-V` flag. You should see `--with-http_gzip_static_module` in the output. If you don’t, you’ll need a copy of Nginx compiled with the `--with-http_gzip_static_module` flag added to the configure script arguments – either compile it yourself or ~~beg~~ politely ask your package supplier to add the module. If you try to enable static compression without the module being installed, you’ll get an error like `unknown directive "gzip_static"` on startup.
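For example, something like this should print a match if the module is present (note that `nginx -V` writes to stderr, hence the redirect):

```sh
nginx -V 2>&1 | grep -o http_gzip_static_module
```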
If you do have the module, things are very straightforward. There are two steps:
- Add a `gzip_static on;` to a relevant `http`, `server`, or `location` block in your config.
- Put pre-compressed files next to the uncompressed files on disk.
On a *nix system, your Nginx configs are probably in `/etc/nginx/nginx.conf` and other files in the same directory. You should put the `gzip_static on;` inside the blocks that configure static files, but if you’re only running one site, it’s safe to just put it in the `http` block. You can also just look for other existing gzip configuration in your configs.
If you have `gzip_static on;`, then `gzip off;` disables dynamic compression, but not static compression. This is a sensible configuration for serving static files, but it’s potentially confusing, so I suggest adding a comment for your own sanity.
Also, if you’re serving plain HTTP (as opposed to HTTPS), make sure you have `sendfile on;` in your config. On Linux, this tells Nginx to use the `sendfile` system call when possible, which (among other things) can dump a file directly onto a network link without any userspace involvement. This is a good performance trick, but it doesn’t work if you’re doing processing like encryption or dynamic compression. If you’re using static compression over HTTP, take advantage of it. On non-Linux systems, the sendfile config enables similar optimisations where available.
Here’s an example config. It doesn’t have most of the stuff you’d want in a production server, so adapt your existing config rather than copying this one.
```nginx
events {
    worker_connections 1024;
    use epoll;
}

http {
    # Enable static gzip
    gzip_static on;

    # Disable dynamic compression (optional, and not recommended if you're proxying)
    gzip off;

    sendfile on;

    server {
        listen localhost:80;

        # Stuff being served here
    }
}
```
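After changing the config, it’s worth validating it and reloading gracefully (you may need root for both):

```sh
nginx -t            # parse and sanity-check the config
nginx -s reload     # gracefully reload the running master process
```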
The `gzip_static` module finds the pre-compressed files by simply looking for them in the same directory. For example, if someone requests a file that’s located at `/var/www/static/screen.css`, Nginx will check if `/var/www/static/screen.css.gz` exists. If it does, it’ll send it as is; otherwise it’ll serve `/var/www/static/screen.css` as it normally would.
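You can check which copy is being served with a quick header dump (the URL here is hypothetical – it depends on your `root`/`location` setup). With static compression working, you should see `Content-Encoding: gzip` and a `Content-Length` matching the `.gz` file:

```sh
curl -sI -H 'Accept-Encoding: gzip' http://localhost/static/screen.css
```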
This means your static files directory will be full of files like `foo.js` and `foo.js.gz`. This is a bit odd, but isn’t actually a problem in most applications (I’d be wary about using it in a user upload directory, though). Web sites these days use all kinds of build scripts for generating static files, but as an example, here’s a *nix command that uses zopfli to pre-compress css, csv, html, js, svg, txt and xml files in `/var/www/static`:
```sh
find /var/www/static -type f -regextype posix-extended -iregex '.*\.(css|csv|html?|js|svg|txt|xml)' -exec zopfli '{}' \;
```
You probably need to install zopfli – try your distro’s package manager.
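One caveat: as far as I can tell, Nginx only checks that the `.gz` file exists – it doesn’t compare timestamps, so a stale `.gz` next to a newer source file will still be served. Here’s a sketch of a rebuild-friendly loop (same hypothetical directory and file types as above) that only re-compresses what’s out of date:

```sh
find /var/www/static -type f -regextype posix-extended \
     -iregex '.*\.(css|csv|html?|js|svg|txt|xml)' |
while read -r f; do
    # Re-compress only when the .gz is missing or older than its source.
    [ "$f.gz" -nt "$f" ] || zopfli "$f"
done
```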
## File Types
Note that the above command specifically targets file types that tend to compress well (text files). Your site may well have other file types that are worth compressing, but experiment – don’t take it for granted that compression will make files smaller. Naïvely running zopfli on an image or movie file will tend to make it bigger, for example.
Having said that, the PNG image format uses the DEFLATE algorithm internally, and zopfli is pretty good at making PNG files smaller, too. Use the `zopflipng` command for that.
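The basic invocation takes an input file and an output file (the names here are placeholders):

```sh
zopflipng logo.png logo.optimised.png
```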
Also, using zopfli doesn’t negate the benefits of specialised compressors (or “minifiers”) for CSS, JavaScript, SVGs, etc. These can do semantic compression tricks like shortening variable names and replacing “`false`” with “`!!0`”. The unminified, uncompressed copy of the jQuery library used above was 257551 bytes (252 KiB). Minifying cut that down to roughly a third (84k), and zopfli brought it down to roughly a third again (29k). Not bad.