On Not Optimising for Last Century's Hardware

Once upon a time I wrote a super-optimised algorithm for rotating data in an array. At least, it was meant to be super-optimised, but its real-world performance turned out to be terrible. That’s because my intuition about performance was stuck in the 20th century:

  1. Break a program down into basic operations like multiplication and assignment
  2. Give each operation a cost (or just slap on an O(1) if you’re lazy)
  3. Add up all the numbers
  4. Try to make the total in step #3 small

A lot of textbooks still teach this “classic” thinking, but except for some highly constrained embedded systems, it just doesn’t work that well on modern hardware.
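
The original rotation algorithm isn’t shown here, but a minimal sketch makes the point: the two functions below perform exactly the same number of additions and array reads, so the classic model prices them identically, yet their real running times differ sharply. (The matrix size `N`, the function names, and the timing harness are illustrative assumptions, not code from the post.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

/* Walk the matrix row by row: sequential memory access, cache friendly. */
static long sum_row_major(const int *m)
{
    long total = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            total += m[i * N + j];
    return total;
}

/* Walk the same matrix column by column: the classic model counts exactly
   the same operations, but each access is now 16 KB (N ints) away from
   the previous one, which defeats the cache. */
static long sum_col_major(const int *m)
{
    long total = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            total += m[i * N + j];
    return total;
}

int main(void)
{
    int *m = malloc((size_t)N * N * sizeof *m);
    if (!m)
        return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        m[i] = (int)(i & 0xff);

    clock_t t0 = clock();
    long a = sum_row_major(m);
    clock_t t1 = clock();
    long b = sum_col_major(m);
    clock_t t2 = clock();

    printf("row-major: sum=%ld in %.3fs\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("col-major: sum=%ld in %.3fs\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(m);
    return 0;
}
```

On a typical machine the column-major walk comes out several times slower than the row-major one: modern CPUs hide the cost of individual operations behind caches and pipelines, so the memory access pattern — which the classic model never counts — is what dominates.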

A Tale of Three Server Caching Architectures

Exactly where you put caching in a distributed system has a significant impact on its effectiveness, in ways that aren’t always obvious at design time.

Offline Compression with Nginx

There’s a clear tradeoff in compressing HTTP responses on the fly: compress “harder” and you’ll (hopefully) get a smaller file that takes less time to send over the network – but the net benefit can be negative if the extra work takes too much time or, under heavy load, too much CPU. A lot of work has gone into analysing this tradeoff, but for static content there’s a neat and simple way to sidestep it entirely: compress offline, before serving. Nginx supports this with the gzip_static module.
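
A minimal sketch of the setup, assuming nginx was built with the --with-http_gzip_static_module configure flag (most distribution packages include it); the server name and paths below are illustrative:

```nginx
server {
    listen 80;
    server_name example.com;   # illustrative
    root /var/www/static;      # illustrative

    location / {
        # If the client sends "Accept-Encoding: gzip" and a
        # precompressed sibling like /var/www/static/app.js.gz
        # exists, serve it directly instead of compressing
        # app.js on the fly.
        gzip_static on;
    }
}
```

The .gz files themselves are produced ahead of time, e.g. with `gzip -9 -k app.js`, which writes app.js.gz next to the original; because this happens offline, the maximum compression level costs nothing at request time.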