Failed to create backup key : Key generation failed : Key generation failed due to lack of entropy on the system. You may need to generate a GPG key on a desktop machine and import it into Virtualmin instead.

So you’re running Virtualmin and would like to encrypt your backups before they leave your server?
Unfortunately, it seems that some older or more idle systems (virtual machines in particular) suffer from a lack of entropy.
The general advice is to move the mouse and use the keyboard, neither of which will do much on a server.
Some people will recommend using urandom as a source of random material (that's a very bad idea!)
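Before giving up on the server, you can check how starved it actually is. This is a quick check I'd suggest (Linux-specific; the path is the kernel's standard entropy counter):

```shell
# Show the kernel's current entropy estimate (in bits).
# On older kernels, values below a few hundred mean gpg will stall.
cat /proc/sys/kernel/random/entropy_avail
```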

I've found Virtualmin to be extremely impatient when it comes to generating the key. If you're willing to be patient, you're better off doing this from the command line and simply importing the key into Virtualmin afterwards.
You can also follow these instructions on a local machine that does have sufficient entropy.
Here are my step-by-step instructions:

gpg --gen-key

gpg (GnuPG) 2.0.14; Copyright (C) 2009 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: <Server> Backups 2013
Email address: <server>@<company>.com
Comment: <Server> backups 2013
You selected this USER-ID:
    "<Server> Backups 2013 (<Server> backups 2013) <<server>@<company>.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway.  You can change your passphrase at any time,
using this program with the option "--edit-key".

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

public and secret key created and signed.



Enter your own input at the prompts, and be sure to replace <Server> with your server name (whatever you want it to be) and do the same for <company>.

Once you have generated the key you need to export it, list it first:

gpg --list-keys



pub   2048R/1074… 2013-12-15

uid                  <Server> Backups 2013 (<Server> backups 2013) <<server>@<company>.com>

sub   2048R/443F… 2013-12-15

It will likely list more keys, as your keyring also includes the GPG key for the Virtualmin repository.

You export the key using the ID shown for the "sub" part of your key. Of course you should use the full value as shown by the output.

gpg --output <server>.gpg --armor --export-secret-key 443F…

This gives you the ASCII-armored version of your private key that you can provide to Virtualmin: either copy-paste it into your browser or point Virtualmin at the file locally.
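If you generated the key on a desktop machine, you can also import the exported file into the keyring on the server from the command line instead of pasting it into the browser. A minimal sketch, assuming you copied the exported file over (the filename server.gpg is an example) and are running as root, whose keyring Virtualmin uses:

```shell
# Import the ASCII-armored secret key into root's keyring
gpg --import server.gpg

# Confirm the secret key is now present
gpg --list-secret-keys
```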
I hope this helps you encrypt your backups.

VirtualBox and Vagrant on CentOS

We heard you like servers, so we put a Virtualbox in your server.

I’m fairly sure you’ve heard of both of these technologies, so I’ll skip the introduction. You can read more about VirtualBox and Vagrant if you want.
Today I've set up a small test lab using Vagrant so that I can run integration tests against live servers (which I can optionally restore if the tests break).
For some of my projects I've built SSH and Virtualmin integration code and was testing it against a server that was technically also a backup machine. Not the best idea.

It goes without saying that you should not do this on a machine that is critical to you. Also, make backups.

Installing VirtualBox and Vagrant

We are assuming you are on an x86_64 machine; replace paths and packages as needed for 32-bit systems.

Start by updating the system and ensuring the dependencies are installed:

yum update
yum install binutils qt gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-devel dkms

Add the correct VirtualBox repo:

cd /etc/yum.repos.d/

And install it

yum install VirtualBox-4.3

Resolving Dependencies
--> Running transaction check
---> Package VirtualBox-4.3.x86_64 0:4.3.2_90405_el6-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package          Arch     Version            Repository   Size
 VirtualBox-4.3   x86_64   4.3.2_90405_el6-1  virtualbox   72 M

Transaction Summary
Install       1 Package(s)

Total download size: 72 M
Installed size: 146 M
Is this ok [y/N]: y
Downloading Packages:
VirtualBox-4.3-4.3.2_90405_el6-1.x86_64.rpm            |  72 MB     02:24
warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 98ab5139: NOKEY
Retrieving key from
Importing GPG key 0x98AB5139:
 Userid: "Oracle Corporation (VirtualBox archive signing key) <info@virtualbox.org>"
 From  :
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : VirtualBox-4.3-4.3.2_90405_el6-1.x86_64        1/1

Creating group 'vboxusers'. VM users must be member of that group!

No precompiled module for this kernel found -- trying to build one. Messages
emitted during module compilation will be logged to /var/log/vbox-install.log.

Stopping VirtualBox kernel modules                         [  OK  ]
Uninstalling old VirtualBox DKMS kernel modules            [  OK  ]
Trying to register the VirtualBox kernel modules using DKMS [  OK  ]
Starting VirtualBox kernel modules                         [  OK  ]
  Verifying  : VirtualBox-4.3-4.3.2_90405_el6-1.x86_64        1/1

Installed:
  VirtualBox-4.3.x86_64 0:4.3.2_90405_el6-1


This will take some time, so grab a cup of tea while it installs.

After this we will install Vagrant. Go to the Vagrant download page to get the latest version and grab the link to the proper (x86_64) RPM.
You then install it using (for example):

rpm -iv

You can check that Vagrant is installed:

vagrant -v
Vagrant 1.3.5

I’m placing my vagrant boxes in /opt/boxes:

mkdir /opt/boxes
chgrp -R vboxusers /opt/boxes
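Note that Vagrant itself keeps downloaded boxes and metadata under ~/.vagrant.d by default. If you want those under /opt/boxes as well, one way (my own addition, not part of the original setup) is to point VAGRANT_HOME there:

```shell
# Tell Vagrant to store its boxes and metadata in /opt/boxes
export VAGRANT_HOME=/opt/boxes
# Add the line above to your shell profile to make it permanent
```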

Congratulations. The hard part is over!

Your first Vagrant box

Let's grab a CentOS box from:
I'm using the "CentOS 6.4 x86_64" box:

mkdir centos6_virtualmin ; cd centos6_virtualmin
vagrant box add CentOS6_virtualmin

Downloading or copying the box…
Extracting box…
Successfully added box 'CentOS6_virtualmin' with provider 'virtualbox'!

vagrant init CentOS6_virtualmin
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Time to get going:

vagrant up
Bringing machine 'default' up with 'virtualbox' provider…
[default] Importing base box 'CentOS6_virtualmin'…
[default] Matching MAC address for NAT networking…
[default] Setting the name of the VM…
[default] Clearing any previously set forwarded ports…
[default] Creating shared folders metadata…
[default] Clearing any previously set network interfaces…
[default] Preparing network interfaces based on configuration…
[default] Forwarding ports…
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM…
[default] Waiting for machine to boot. This may take a few minutes…
[default] Machine booted and ready!
[default] Mounting shared folders…
[default] -- /vagrant

Start customizing your box

Logging in is quite simple:

vagrant ssh

Usually a box lets you sudo to root without a password:

[vagrant@vagrant-centos64 ~]$ sudo -i
[root@vagrant-centos64 ~]#

Wait, am I really in the box?

[root@vagrant-centos64 ~]# hostname

Yup 🙂

In my case I’ll be installing Virtualmin for internal tests of the LAMP stack.

When you need more power

So a default box has the following:

[root@vagrant-centos64 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           589        212        376          0         12        146
-/+ buffers/cache:         53        535
Swap:            0          0          0


[root@vagrant-centos64 ~]# cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz

So let's assign 2 cores and 3 GB of memory (the host has 64 GB, so plenty to go around).

Exit twice (Ctrl+D or type exit) to get back to the server you installed on.
To change the box you need to shut it down first; if you are in the folder with the Vagrantfile, this command will suffice:

vagrant halt

[default] Attempting graceful shutdown of VM…

Check the status:

vagrant status
Current machine states:

default poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

You will find a commented line that contains vb.customize ["modifyvm", :id, "--memory", "1024"].
Uncomment this line and change the memory as you need.

Thanks to this article and this StackOverflow question I also found out how to set the number of cores, ending up with this configuration:

config.vm.provider :virtualbox do |vb|
  # # Don't boot with headless mode
  # vb.gui = true
  # # Use VBoxManage to customize the VM. For example to change memory:

  vb.customize ["modifyvm", :id, "--ioapic", "on"]
  vb.customize ["modifyvm", :id, "--memory", "3072"]
  vb.customize ["modifyvm", :id, "--cpus", 2]
end

Note how I uncommented the config.vm.provider :virtualbox block and its end statement!

After starting and logging in, I am greeted with the following:

[root@vagrant-centos64 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          2887        105       2781          0          5         33
-/+ buffers/cache:         66       2821
Swap:            0          0          0


[root@vagrant-centos64 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1

Please note that leaving out vb.customize ["modifyvm", :id, "--ioapic", "on"] will cause the second CPU not to show up. It also seemed to slow down my box.

Good luck and please comment back if you need help.
I’ll be typing up a small article on how to include this in Jenkins.


Instructions unclear? Stuck on the configuration bit?
Download this Gist


gem install remote_syslog failing on CentOS 6 (ERROR: Failed to build gem native extension.)

I ran into this problem while installing the remote_syslog gem on CentOS 6 for pushing logs to a remote syslog service (for example Papertrail).
Basically, running gem install remote_syslog yields the following errors:

[root@machine log]# gem install remote_syslog
Building native extensions. This could take a while…
ERROR: Error installing remote_syslog:
ERROR: Failed to build gem native extension.

/usr/bin/ruby extconf.rb
checking for rb_trap_immediate in ruby.h,rubysig.h… yes
checking for rb_thread_blocking_region()… no
checking for inotify_init() in sys/inotify.h… yes
checking for writev() in sys/uio.h… yes
checking for rb_thread_check_ints()… no
checking for rb_time_new()… yes
checking for sys/event.h… no
checking for epoll_create() in sys/epoll.h… yes
creating Makefile

g++ -I. -I. -I/usr/lib64/ruby/1.8/x86_64-linux -I. -DWITH_SSL -DBUILD_FOR_RUBY -DHAVE_RB_TRAP_IMMEDIATE -DHAVE_RBTRAP -DHAVE_INOTIFY_INIT -DHAVE_INOTIFY -DHAVE_WRITEV -DHAVE_WRITEV -DHAVE_RB_TIME_NEW -DOS_UNIX -DHAVE_EPOLL_CREATE -DHAVE_EPOLL -fPIC -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fno-strict-aliasing -fPIC -c pipe.cpp
make: g++: Command not found
make: *** [pipe.o] Error 127
Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/eventmachine-1.0.0 for inspection.
Results logged to /usr/lib/ruby/gems/1.8/gems/eventmachine-1.0.0/ext/gem_make.out


Well, there is the duh moment: you will need to install the gcc-c++ package.

yum install gcc-c++


Now running the install again will yield the following:


[root@machine log]# gem install remote_syslog
Building native extensions. This could take a while…
Successfully installed eventmachine-1.0.0
Successfully installed eventmachine-tail-0.6.4
Successfully installed syslog_protocol-0.9.2
Successfully installed em-resolv-replace-1.1.3
Successfully installed remote_syslog-1.6.13
5 gems installed
Installing ri documentation for eventmachine-1.0.0…
Installing ri documentation for eventmachine-tail-0.6.4…
Installing ri documentation for syslog_protocol-0.9.2…
Installing ri documentation for em-resolv-replace-1.1.3…
Installing ri documentation for remote_syslog-1.6.13…
Installing RDoc documentation for eventmachine-1.0.0…
Installing RDoc documentation for eventmachine-tail-0.6.4…
Installing RDoc documentation for syslog_protocol-0.9.2…
Installing RDoc documentation for em-resolv-replace-1.1.3…
Installing RDoc documentation for remote_syslog-1.6.13…


If it still does not work, ensure you have installed ruby-devel as well.

Monitoring bandwidth usage with vnstat on CentOS

For more advanced setups and breakdowns you should really use MRTG, but for those who just want to know their daily bandwidth consumption, this two-minute tip to set up vnstat will suffice.

You can add the EPEL yum repository to download vnstat and many other tools, but if you only need this one tool or have requirements not to include EPEL you can just download the single RPM from

For example, in my case the latest was:


which can be installed using:

rpm -iv vnstat-1.11-1.el6.x86_64.rpm

Now you just need to populate the database for the interfaces you want to monitor. You can list the current interfaces with ifconfig.

In my case I'm using em1 (more information can be found here: Consistent Network Device Naming on Fedora / RHEL).

So you can execute: vnstat -u -i em1
This yields the following output:

Error: Unable to read database "/var/lib/vnstat/em1".
Info: -> A new database has been created.

All that remains at this point is to set up cron jobs that keep the database updated; add the following entries to the crontab:

@daily /usr/bin/vnstat -h
0-55/5 * * * * /usr/bin/vnstat -u

The first entry sends a report to root every day (optional!) and the second updates the vnstat database every 5 minutes.

Checking MySQL: mysqltuner (and setting up your .my.cnf)

There are many ways to monitor your MySQL server; I'm going to start posting a few small articles on monitoring MySQL.
Today we'll start with the least intrusive and simplest method, and also set up .my.cnf to ensure your scripts can log in to MySQL.
Setting this up and following the first advice from this script will take five minutes of your time and provide you with a bit of insight.

DISCLAIMER: I’m not the author of this tool and you should exercise sound judgement in implementing any advice from mysqltuner.

For the script to run correctly you must have MySQL root access, or at least an account with sufficient privileges. The script itself can be run as a non-privileged user.
Let's create one for this article:

useradd mysqlmonitor
sudo -i -u mysqlmonitor

Create the .my.cnf file in your favorite editor with the following contents:


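The file contents didn't survive the conversion of this post; a minimal .my.cnf for this purpose would look something like the following (the password value is a placeholder for your own MySQL root password):

```
[client]
user=root
password=YOUR_MYSQL_ROOT_PASSWORD
```

With this in place, mysqltuner (and the mysql client) can log in without prompting.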
Make sure this file can only be read by this user (and root):

chmod 600 .my.cnf

You can retrieve the script very easily:


Be sure to set the permissions on the resulting file:

chmod 700

After inspecting the file you should be able to run it and get a report much like this:

>> MySQLTuner 1.2.0 - Major Hayden
>> Bug reports, feature requests, and downloads at
>> Run with '--help' for additional options and output filtering

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.28-29.1-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 51M (Tables: 715)
[--] Data in InnoDB tables: 39M (Tables: 336)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 2)
[!!] Total fragmented tables: 44

-------- Security Recommendations --------------------------------------------
[!!] User '@localhost' has no password set.

-------- Performance Metrics -------------------------------------------------
[--] Up for: 33d 11h 10m 37s (12M q [4.324 qps], 627K conn, TX: 51B, RX: 1B)
[--] Reads / Writes: 94% / 6%
[--] Total buffers: 2.1G global + 2.8M per thread (500 max threads)
[OK] Maximum possible memory usage: 3.5G (5% of installed RAM)
[OK] Slow queries: 0% (0/12M)
[OK] Highest usage of available connections: 9% (46/500)
[OK] Key buffer size / total MyISAM indexes: 1.0G/6.9M
[OK] Key buffer hit rate: 100.0% (35M cached / 2K reads)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 677K sorts)
[!!] Joins performed without indexes: 29977
[!!] Temporary tables created on disk: 43% (411K on disk / 950K total)
[OK] Thread cache hit rate: 99% (46 created / 627K connections)
[!!] Table cache hit rate: 13% (1K open / 9K opened)
[OK] Open file limit used: 2% (1K/65K)
[OK] Table locks acquired immediately: 99% (10M immediate / 10M locks)
[OK] InnoDB data size / buffer pool: 40.0M/1.0G

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    Adjust your join queries to always utilize indexes
    When making adjustments, make tmp_table_size/max_heap_table_size equal
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
    query_cache_size (>= 8M)
    join_buffer_size (> 128.0K, or always use indexes with joins)
    tmp_table_size (> 128M)
    max_heap_table_size (> 128M)
    table_cache (> 10240)

Having this run every week is just a matter of setting up cron properly:

crontab -e

MAILTO=you@example.com
@weekly $HOME/

Soon I will post how to setup a MySQL my.cnf that will work well for use in webhosting along with a few tips to safely try these configurations and revert back if needed.

Logwatch: How to fix Dovecot Unmatched Entries (CentOS / RHEL 6)

UPDATE: Thanks to Julian Stokes (among others) I was alerted to the fact that the current HEAD version of the script no longer works. You can use my old version instead.

You may have noticed the extra useless notices from Dovecot in your Logwatch (you do check your Logwatch every day, right?).
These kinds of messages can be found under the Dovecot section:

**Unmatched Entries**
dovecot: imap(useraccount): Connection closed bytes=16/338: 1 Time(s)
dovecot: imap(useraccount): Connection closed bytes=17/340: 7 Time(s)
dovecot: imap(useraccount): Connection closed bytes=18/342: 3 Time(s)

And depending on your activity you could be getting this kind of message a few hundred times per day on busier setups.
A quick Google check reveals the following Red Hat bug reports: Bug 666376 and Bug 669161.

This was fixed in Rawhide, but apparently hasn't made it into Red Hat proper yet.

Let's fix this in the cleanest possible way.
Logwatch stores the original scripts in /usr/share/logwatch/scripts/services/ and these are bundled in the RPM installed by yum.
Changing these scripts wouldn't help much, as they are overwritten by updates.

Logwatch also maintains a folder in /etc, under /etc/logwatch/scripts/services/, where you can place your own scripts (according to the manpage, they take precedence over the scripts in /usr/share).
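In other words, the whole fix is dropping a replacement script into the /etc override directory. A sketch, where the source path of the downloaded script is a stand-in for wherever you saved it:

```shell
# The override directory may not exist yet
mkdir -p /etc/logwatch/scripts/services
# Place the fixed script there under the service name "dovecot"
cp ~/dovecot /etc/logwatch/scripts/services/dovecot
```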

Just download the latest script from the repository and name it dovecot.

(As per the update above, the current HEAD no longer works; use my old version for now and save it as dovecot.)

With the dovecot script placed at /etc/logwatch/scripts/services/dovecot your report should now look a lot cleaner.

Run logwatch and you should see something like:

Dovecot IMAP and POP3 Successful Logins: 316
Dovecot disconnects: 286

I hope this has been somewhat helpful to you.

Varnish for the impatient (RHEL / CentOS 6)


You may have heard of Varnish before and may want to see what the fuss is about and, more importantly, get started yourself.
The format of this post is pretty simple: the commands should be copy-paste-able and, thanks to the magic of yum and RPM, should work for you.
The instructions do not include any control panel integration (I use Virtualmin; you probably use cPanel, Plesk or DirectAdmin, with which I have no experience).

However, before we get started:

Disclaimer: These instructions should *NOT* be executed on a live environment. You will have to test them first. I'm not responsible if you break your cPanel / Plesk environment. If you're unsure, a very safe solution is to set up a new virtual server and make the Varnish "backend" point to your old server.

With that out of the way, let’s get started..


Varnish has very few hard requirements, but as you will most likely store the content in memory, sufficient memory is a plus. If you're running CentOS/RHEL 5/6 (64-bit preferred) you should be set.


You can get the current version and install instructions from the Varnish Site

For your convenience, you can use the following:

rpm --nosignature -i

yum -y install varnish

You can verify a successful install by querying rpm:

rpm -q varnish

Which should yield something along the lines of: varnish-3.0.2-1.el5

This installs Varnish Cache and its dependencies (you'll note that this includes a compiler such as GCC; it is required for compiling the configuration files from VCL).

You will then need to customize a few entries before you can start using Varnish. We’re going to assume you want to experiment with Varnish and thus run it from port 8080 instead of replacing Apache on port 80. I’ll detail the switch from port 8080 to 80 in a later post.
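With Apache still on port 80 and Varnish on 8080, you can compare the two side by side once Varnish is running. A quick smoke test (assuming both run on the same host):

```shell
# Headers straight from Apache
curl -sI http://localhost/ | head -n 5
# Headers through Varnish: look for the Via and X-Varnish headers,
# and a growing Age header when you repeat the request on a cached page
curl -sI http://localhost:8080/ | head -n 5
```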


Varnish uses VCL files to route requests and determine what to cache and for how long. VCL is a subset of the C programming language and is actually compiled to native code when loaded. In essence your VCLs drive Varnish as if you had programmed it yourself, making the routing very efficient. For your convenience I've included a simple VCL for WordPress which does the following for you:

  • Set up a single backend (your backend is basically Apache on the same machine, but advanced setups can include load balancing)
  • Allow purge requests from localhost (for your WordPress plugins)
  • Strip cookies from all requests except login, wp-admin, comments
  • Disable caching of the admin interface

I've also included links to more advanced VCL setups which include more functionality. Keep in mind that this VCL is tailored to WordPress and static sites and does not include provisions for other setups. I've taken it from the Varnish documentation and tweaked it slightly.

You can download it here:

default.vcl (please don’t forget to modify the default backend to point to apache)


This file sets up your Varnish startup preferences, allowing you to customize your settings through a few different "Alternatives". I tend to stick with the one enabled by default: Alternative 3.

The included configuration sets the following for you:

  • Varnish uses port 8080
  • Admin IP is and port 2000
  • Varnish min threads 64 (spin up some threads to do work)
  • Varnish storage size to 1G
  • Varnish storage mode malloc (meaning in memory instead of to file)
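For reference, on EL systems these settings live in /etc/sysconfig/varnish; with Alternative 3 enabled they map onto variables roughly like this (variable names are from the stock sysconfig file; the admin IP, which was lost in conversion above, is assumed to be 127.0.0.1):

```
VARNISH_LISTEN_PORT=8080
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=2000
VARNISH_MIN_THREADS=64
VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
```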

A simple non scientific performance test

By now you must be wondering why we're doing all this. Let's have some fun and benchmark a simple WordPress site (the site contains text). The hostname used is a local alias for a simple benchmark site. Both tests were run locally against the server using Apache Benchmark.

The server is a Xen-based virtual machine running on 3 E5440 @ 2.83GHz cores. The machine has 2 GB of memory available and runs CentOS 5.
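For reference, an Apache Benchmark invocation matching the numbers below (100,000 requests at concurrency 250) looks like this; the hostname is a stand-in for your own local alias:

```shell
# 100,000 requests, 250 concurrent, against the benchmark site
ab -n 100000 -c 250 http://your-benchmark-alias/
```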

Performance was measured using a tweaked Apache setup (Apache worker, fcgid, W3 Total Cache with page caching). When page caching is disabled, the server crashes its PHP processes and only serves blank pages.

Without Apache 2.4 and the event worker, this is as good as it will probably get…

Server Software:        Apache/2
Server Hostname:
Server Port:            80

Document Path:          /
Document Length:        10935 bytes

Concurrency Level:      250
Time taken for tests:   84.20278 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1135200000 bytes
HTML transferred:       1093500000 bytes
Requests per second:    1190.19 [#/sec] (mean)
Time per request:       210.051 [ms] (mean)
Time per request:       0.840 [ms] (mean, across all concurrent requests)
Transfer rate:          13194.35 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   64 921.3      0   45006
Processing:     1  129 376.0     68   18268
Waiting:        1  128 376.0     67   18267
Total:          4  193 1004.5     68   47267

Percentage of the requests served within a certain time (ms)
  50%     68
  66%     78
  75%     85
  80%     90
  90%    131
  95%    525
  98%   1093
  99%   3070
 100%  47267 (longest request)

The job is close to CPU-bound and the load grows to nearly 7:

00:37:26 up 38 days, 21:13,  2 users,  load average: 6.97, 3.19, 1.28

Let's be fair here: we served up 100,000 pages in 84 seconds, or 1190 pages per second.
Apache may not be extremely fast, but it still does a good job serving up HTML pages (remember, the front page is cached using W3 Total Cache).

Let's engage Varnish and see the difference:

Server Software:        Apache/2
Server Hostname:
Server Port:            80

Document Path:          /
Document Length:        10935 bytes

Concurrency Level:      250
Time taken for tests:   24.277115 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1136834104 bytes
HTML transferred:       1093532805 bytes
Requests per second:    4119.11 [#/sec] (mean)
Time per request:       60.693 [ms] (mean)
Time per request:       0.243 [ms] (mean, across all concurrent requests)
Transfer rate:          45729.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   10  60.8      8    3035
Processing:     3   49  14.7     49     271
Waiting:        0   21  14.2     19     243
Total:         14   60  61.7     56    3064

Percentage of the requests served within a certain time (ms)
  50%     56
  66%     59
  75%     65
  80%     71
  90%     76
  95%     77
  98%     80
  99%     81
 100%   3064 (longest request)


The server had an easier time, too:

00:52:42 up 38 days, 21:28,  2 users,  load average: 1.70, 0.81, 0.73

Beyond the impressive numbers, keep in mind that every page load includes plenty of other resources (CSS, images, JavaScript) that will also be sped up by the Varnish cache. Your website will also feel a lot faster to your users and rank higher in performance tests.


As you can see, Varnish really increases the performance of your websites. Unfortunately, while Varnish is perfect (really!), the configuration and the real world rarely are. By default Varnish will not cache pages that use cookies, and for good reason: cookies usually indicate dynamic content. To make WordPress cacheable we drop most of the cookies, except on dynamic pages.

Here is a small list of things that will change when using Varnish:

  • Apache logs become useless, since not every request hits Apache. This is important for bandwidth accounting in control panels such as Virtualmin, cPanel etc. Varnish allows logging in the same format as Apache, but does not separate the logs by domain as Apache does. I'll share a workaround in a later post.
  • Plugins may break in subtle ways if they don't receive cookies correctly or if their pages are cached (for example, completely dynamic user-driven pages may need to be excluded).
  • Widgets may not update properly (for example, a hit counter will not update). I'll show in a later post how to use Edge Side Includes (ESI) to counter this problem.

The next article will discuss how to set up W3 Total Cache with Varnish (don't worry, it will be easy).