What I learned about auctions (and missed opportunities)

The other day I heard about an auction (for a good cause, the "Serious Request" charity event) which had several items, including the following:

Do you get completely absorbed in the game when you play? Do you occasionally forget the world around you when things get exciting? Take that feeling and add the biggest screen in the Benelux, so that for a while there really is nothing else but the game.
Omniversum in Den Haag is offering the ultimate gaming experience, with a screen the size of half a football pitch and a sublime sound system. After closing time, the film theatre in Den Haag opens its doors to the game lover with the highest bid, so be quick!
Don't want to let this unique chance to experience this spectacular gaming event slip away? Then bid on an exclusive evening opening on a Monday, Tuesday or Wednesday (date to be agreed). You get to game for 2.5 hours on your own console (a VGA output is required).
Basically, they were auctioning off the use of the largest IMAX screen in the Netherlands for 2.5 hours to the highest bidder. Here are three things I ended up regretting:
  1. Not making up my mind right away that this was worthwhile to bid on
  2. Not setting some kind of reminder to bid on the final day
  3. Not bidding with a maximum bid (I could have placed a single bid and told the site to outbid all others up to a limit)


When the auction went online, the bidding was around €500, which is not a lot if you split it down the middle. I made excuses not to bid on the item, saying "Oh, somebody will come in and bid it up to a few thousand" and "Who is going to join me?" and "Wouldn't this be a waste of money?"

When I checked the auction today, the bidding had already closed and the final total was €775. Crap, I could have totally done that and enjoyed a once-in-a-lifetime experience together with a bunch of friends. In fact, it would have been a perfect birthday party.
I lost out because I only made up my mind that I wanted this after the item was no longer available, and I was caught by surprise by the low final price.

Lesson learned? Don't spend too much time second-guessing decisions. Decide one way or the other and execute.
Setting any reasonable maximum bid would have landed me this activity, and if somebody else had outbid me I'd have lost nothing.

I got my inspiration from this article by Tim Ferriss.

Monitoring bandwidth usage with vnstat on CentOS

For more advanced setups and breakdowns you should really use MRTG, but for those who just want to know their daily bandwidth consumption, this two-minute tip to set up vnstat will suffice.

You can add the EPEL yum repository to download vnstat and many other tools, but if you only need this one tool, or have a requirement not to include EPEL, you can just download the single RPM from http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/vnstat.html

For example, in my case the latest was:

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/vnstat-1.11-1.el6.x86_64.rpm

which can be installed using

rpm -iv vnstat-1.11-1.el6.x86_64.rpm

Now you just need to populate the database for the interfaces you want to monitor. You can look at the current interfaces with ifconfig.

In my case I'm using em1 (more information can be found here: Consistent Network Device naming on Fedora / RHEL), so you can execute:

vnstat -u -i em1

This yields the following output:

Error: Unable to read database "/var/lib/vnstat/em1".
Info: -> A new database has been created.

All that remains at this point is to set up a cron job that keeps the database updated. Add the following entries to the crontab:

@daily /usr/bin/vnstat -h
0-55/5 * * * * /usr/bin/vnstat -u

The first is a report mailed to root every day (optional!) and the second is a job that updates the vnstat database every 5 minutes.
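
Once the cron job has been running for a while, you can query the collected statistics from the command line. For example (the interface name em1 is specific to my setup, so substitute your own):

# quick summary for all monitored interfaces
vnstat

# daily and monthly breakdowns for a specific interface
vnstat -d -i em1
vnstat -m -i em1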

Checking MySQL: mysqltuner (and setting up your .my.cnf)

There are many ways to monitor your MySQL server; I'm going to start posting a few small articles about monitoring your MySQL server.
Today we'll start with the least intrusive and simplest method, and also set up a .my.cnf to ensure your scripts can log in to MySQL.
Setting this up and following the first pieces of advice from this script will take 5 minutes of your time and provide you with a bit of insight.

DISCLAIMER: I’m not the author of this tool and you should exercise sound judgement in implementing any advice from mysqltuner.

For the script to run correctly you must have MySQL root access, or at least an account with sufficient privileges. The script itself can be run as a non-privileged user.
Let’s create one for this article:

useradd mysqlmonitor
sudo -i -u mysqlmonitor

Create the .my.cnf file in your favorite editor with the following contents:

[client]
user=root
password=MYSQLPASSWORDGOESHERE

Make sure this file can only be read by this user (and root):
chmod 600 .my.cnf
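
You can quickly check that the .my.cnf is picked up: the mysql client reads it from the home directory automatically, so a simple query should work without prompting for a password:

mysql -e "SELECT VERSION();"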

You can retrieve the script very easily:

wget mysqltuner.pl

Be sure to set the permissions on the resulting file:

chmod 700 mysqltuner.pl

After inspecting the file, you should be able to run it by executing ./mysqltuner.pl and get a report much like this:

>> MySQLTuner 1.2.0 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.28-29.1-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 51M (Tables: 715)
[--] Data in InnoDB tables: 39M (Tables: 336)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 2)
[!!] Total fragmented tables: 44

-------- Security Recommendations -------------------------------------------
[!!] User '@localhost' has no password set.

-------- Performance Metrics -------------------------------------------------
[--] Up for: 33d 11h 10m 37s (12M q [4.324 qps], 627K conn, TX: 51B, RX: 1B)
[--] Reads / Writes: 94% / 6%
[--] Total buffers: 2.1G global + 2.8M per thread (500 max threads)
[OK] Maximum possible memory usage: 3.5G (5% of installed RAM)
[OK] Slow queries: 0% (0/12M)
[OK] Highest usage of available connections: 9% (46/500)
[OK] Key buffer size / total MyISAM indexes: 1.0G/6.9M
[OK] Key buffer hit rate: 100.0% (35M cached / 2K reads)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 677K sorts)
[!!] Joins performed without indexes: 29977
[!!] Temporary tables created on disk: 43% (411K on disk / 950K total)
[OK] Thread cache hit rate: 99% (46 created / 627K connections)
[!!] Table cache hit rate: 13% (1K open / 9K opened)
[OK] Open file limit used: 2% (1K/65K)
[OK] Table locks acquired immediately: 99% (10M immediate / 10M locks)
[OK] InnoDB data size / buffer pool: 40.0M/1.0G

-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Adjust your join queries to always utilize indexes
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
query_cache_size (>= 8M)
join_buffer_size (> 128.0K, or always use indexes with joins)
tmp_table_size (> 128M)
max_heap_table_size (> 128M)
table_cache (> 10240)

To have this run every week is just a matter of setting up a cron job:

crontab -e

MAILTO=you@example.com
@weekly $HOME/mysqltuner.pl

Soon I will post how to set up a MySQL my.cnf that works well for web hosting, along with a few tips to safely try these configurations and revert if needed.

Logwatch: How to fix Dovecot Unmatched Entries (CentOS / RHEL 6)

UPDATE: Thanks to jagro.nl and Julian Stokes, I was alerted to the fact that the current HEAD version of the script no longer works. You can use my old version instead.

You may have noticed the extra useless notices from Dovecot in your Logwatch reports (you do check your Logwatch every day, right?).
These kinds of messages can be found under the Dovecot section:

**Unmatched Entries**
dovecot: imap(useraccount): Connection closed bytes=16/338: 1 Time(s)
dovecot: imap(useraccount): Connection closed bytes=17/340: 7 Time(s)
dovecot: imap(useraccount): Connection closed bytes=18/342: 3 Time(s)

Depending on your activity, you could be getting this kind of message a few hundred times per day on busier setups.
A quick Google check reveals the following bug reports for Red Hat: Bug 666376 and Bug 669161

This was fixed in Rawhide, but apparently hasn't made it into Red Hat proper yet.

Let’s fix this in the cleanest possible way.
Logwatch stores the original scripts in /usr/share/logwatch/scripts/services/* and these are bundled in the RPM that is installed by yum.
Changing these scripts wouldn't help much, as they are overwritten by updates.

Logwatch also maintains a folder under /etc at /etc/logwatch/scripts/services/ where you can place your own scripts (according to the manpage, these override the scripts in /usr/share).

Normally you would just download the latest version from the repository and name it dovecot:
http://logwatch.svn.sourceforge.net/viewvc/logwatch/scripts/services/

However, as noted in the update above, you should use this version for now and save it as dovecot: http://barry.coffeesprout.com/dovecot
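
For example, something along these lines should place it where Logwatch expects it (paths are the ones mentioned in this post; create the directory first if it doesn't exist yet):

mkdir -p /etc/logwatch/scripts/services
wget -O /etc/logwatch/scripts/services/dovecot http://barry.coffeesprout.com/dovecot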

With the dovecot script downloaded to /etc/logwatch/scripts/services/dovecot, your report should now look a lot cleaner.

Run logwatch and you should see something like:

Dovecot IMAP and POP3 Successful Logins: 316
Dovecot disconnects: 286

I hope this has been somewhat helpful to you.

Varnish for the impatient (RHEL / CentOS 6)

Introduction

You may have heard of Varnish before and want to see what the fuss is about and, more importantly, get started yourself.
The format of this post is pretty simple: the commands should be copy-pasteable and, thanks to the magic of yum and RPM, should just work for you.
The instructions do not include any control panel integration (I use Virtualmin; you probably use cPanel, Plesk or DirectAdmin, and I have no experience with those).

However, before we get started:

Disclaimer: These instructions should *NOT* be executed on a live environment. You will have to test them first. I'm not responsible if you break your cPanel / Plesk environment. If you're unsure, a very safe option is to set up a new virtual server and make the Varnish "backend" point to your old server.

With that out of the way, let's get started.

Requirements

Varnish has very few hard requirements, but since you will most likely store the cached content in memory, sufficient memory is a plus. If you're running CentOS/RHEL 5 or 6 (64-bit preferred) you should be set.

Installation

You can get the current version and install instructions from the Varnish Site

For your convenience, you can use the following:

rpm --nosignature -i http://repo.varnish-cache.org/redhat/varnish-3.0/el5/noarch/varnish-release-3.0-1.noarch.rpm

yum -y install varnish

You can verify a successful install by querying rpm:

rpm -q varnish

Which should yield something along the lines of: varnish-3.0.2-1.el5

This installs Varnish Cache and its dependencies (you'll note that these include a compiler such as GCC; this is required because the VCL configuration files are compiled when loaded).

You will then need to customize a few entries before you can start using Varnish. We're going to assume you want to experiment with Varnish and thus run it on port 8080 instead of replacing Apache on port 80. I'll detail the switch from port 8080 to 80 in a later post.

/etc/varnish/default.vcl

Varnish uses VCL files to decide how to route requests, what to cache and for how long. VCL is a small domain-specific language with C-like syntax that is translated to C and compiled to native code when loaded. In essence your VCLs drive Varnish as if you had programmed it yourself, which makes the routing very efficient. For your convenience I've included a simple VCL for WordPress which does the following for you:

  • Set up a single backend (your backend is basically Apache on the same machine, but advanced setups can include load balancing)
  • Allow purge requests from localhost (for your WordPress plugins)
  • Strip cookies from all requests except login, wp-admin, comments
  • Disable caching of the admin interface

I've also included links to more advanced VCL setups with more functionality. Keep in mind that this VCL is tailored to WordPress and static sites and does not include provisions for other setups. I've taken it from the Varnish documentation and tweaked it slightly.

You can download it here:

default.vcl (please don't forget to modify the default backend to point to Apache)
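
In case the download isn't available, here is a minimal sketch of what such a VCL looks like in Varnish 3.0 syntax. It only covers the backend definition, localhost-only purging and the cookie stripping from the list above; the backend host and port are assumptions for an Apache running locally on port 80, so adjust them to your setup:

backend default {
    # Assumption: Apache stays on port 80 on the same machine
    .host = "127.0.0.1";
    .port = "80";
}

acl purge {
    # Only allow purge requests from the local machine
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    # Leave the WordPress admin, login and comment posting alone
    if (req.url ~ "wp-(admin|login)" || req.url ~ "wp-comments-post.php") {
        return (pass);
    }

    # Strip cookies from everything else so pages become cacheable
    unset req.http.Cookie;
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

The default.vcl linked above is more complete; treat this sketch purely as an illustration of the points in the list.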

/etc/sysconfig/varnish

This file sets your Varnish start-up preferences, allowing you to customize the settings through a few different "Alternatives". I tend to stick with the one enabled by default: Alternative 3.

The included configuration sets the following for you (see the sketch after this list):

  • Varnish uses port 8080
  • Admin IP is 127.0.0.1 and port 2000
  • Varnish min threads 64 (spin up some threads to do work)
  • Varnish storage size to 1G
  • Varnish storage mode malloc (meaning in memory instead of to file)
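
For reference, those settings map roughly onto the variables below in /etc/varnish/default.vcl's companion file /etc/sysconfig/varnish. The variable names follow the stock file shipped with the Varnish 3.0 RPM as far as I recall, so double-check them against your own copy rather than pasting blindly:

# /etc/sysconfig/varnish (Alternative 3 style settings)
VARNISH_LISTEN_PORT=8080                   # Varnish answers on 8080 for now
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1     # admin interface on localhost...
VARNISH_ADMIN_LISTEN_PORT=2000             # ...port 2000
VARNISH_MIN_THREADS=64                     # spin up some worker threads
VARNISH_STORAGE_SIZE=1G                    # cache size
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"   # keep the cache in memory

After editing the file, start Varnish, make it survive reboots and check that it answers on port 8080 (the Via / X-Varnish response headers are a quick sanity check):

service varnish start
chkconfig varnish on
curl -I http://127.0.0.1:8080/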

A simple, non-scientific performance test

By now you must be wondering what we're doing all this for. Let's have some fun and benchmark a simple WordPress site (the site contains mostly text). example.com is a local alias for a simple benchmark site, and both tests were run locally against the server using ApacheBench (ab).

The server is a Xen-based virtual machine running on 3 E5440 @ 2.83GHz cores. The machine has 2 GB of memory available and runs CentOS 5.
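
The exact ab invocation isn't shown here, but judging from the request count and concurrency in the reports below it would have been something along the lines of:

ab -n 100000 -c 250 http://www.example.com/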

Performance using a tweaked Apache setup (Apache worker MPM, fcgid, W3 Total Cache with page caching). When page caching is disabled, the PHP processes crash and the server only serves blank pages.

Without using Apache 2.4 and the event MPM, this is probably as good as it will get…

Server Software:        Apache/2
Server Hostname:        www.example.com
Server Port:            80

Document Path:          /
Document Length:        10935 bytes

Concurrency Level:      250
Time taken for tests:   84.20278 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1135200000 bytes
HTML transferred:       1093500000 bytes
Requests per second:    1190.19 [#/sec] (mean)
Time per request:       210.051 [ms] (mean)
Time per request:       0.840 [ms] (mean, across all concurrent requests)
Transfer rate:          13194.35 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   64 921.3      0   45006
Processing:     1  129 376.0     68   18268
Waiting:        1  128 376.0     67   18267
Total:          4  193 1004.5     68   47267

Percentage of the requests served within a certain time (ms)
  50%     68
  66%     78
  75%     85
  80%     90
  90%    131
  95%    525
  98%   1093
  99%   3070
 100%  47267 (longest request)

The job is close to CPU-bound and the load climbs to nearly 7:

00:37:26 up 38 days, 21:13,  2 users,  load average: 6.97, 3.19, 1.28

Let's be fair here: we served up 100,000 pages in 84 seconds, or roughly 1,190 pages per second.
Apache may not be extremely fast, but it still does a good job serving up HTML pages (remember, the front page is cached using W3 Total Cache).

Let's engage Varnish and see the difference:

Server Software:        Apache/2
Server Hostname:        www.example.com
Server Port:            80

Document Path:          /
Document Length:        10935 bytes

Concurrency Level:      250
Time taken for tests:   24.277115 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1136834104 bytes
HTML transferred:       1093532805 bytes
Requests per second:    4119.11 [#/sec] (mean)
Time per request:       60.693 [ms] (mean)
Time per request:       0.243 [ms] (mean, across all concurrent requests)
Transfer rate:          45729.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   10  60.8      8    3035
Processing:     3   49  14.7     49     271
Waiting:        0   21  14.2     19     243
Total:         14   60  61.7     56    3064

Percentage of the requests served within a certain time (ms)
  50%     56
  66%     59
  75%     65
  80%     71
  90%     76
  95%     77
  98%     80
  99%     81
 100%   3064 (longest request)


The server had an easier time too:

00:52:42 up 38 days, 21:28,  2 users,  load average: 1.70, 0.81, 0.73

Impressive numbers aside, keep in mind that every page load also pulls in lots of other resources (CSS, images, JavaScript) that will be sped up by the Varnish cache as well. Your website will also feel a lot faster to your users and rank higher in performance tests.

Caveats

As you can see, Varnish really increases the performance of your websites. Unfortunately, while Varnish is perfect (really!), the configuration and the real world rarely are. Varnish by default will not cache pages that set cookies, and for good reason: cookies usually indicate dynamic content. To make WordPress cacheable we drop most of the cookies, except on dynamic pages.

Here is a small list of things that will change when using Varnish:

  • Apache logs become useless, since not every request hits Apache. This matters for bandwidth accounting in control panels such as Virtualmin, cPanel etc. Varnish can log in the same format as Apache, but does not separate the logs by domain the way Apache does. I'll share a workaround in a later post.
  • Plugins may break in subtle ways if they don't receive cookies correctly or if their pages are cached (for example, completely dynamic user-driven pages may need to be excluded; see the snippet after this list).
  • Widgets may not update properly (for example, a hit counter will not update). I'll show in a later post how to use "Edge Side Includes" (ESI) to counter this problem.
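
As an illustration of the plugin caveat above, excluding a fully dynamic page from the cache is just a matter of passing it straight to the backend. Adding a check like the following to vcl_recv would do it; the /members/ path here is purely a made-up example:

sub vcl_recv {
    # Hypothetical fully dynamic, user-driven section: never cache it
    if (req.url ~ "^/members/") {
        return (pass);
    }
}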

The next article will discuss how to set up W3 Total Cache with Varnish (don't worry, it will be easy).