Thursday, August 23, 2012

MySQL Cluster performance up again, or CLIENT_COMPRESS considered harmful

I'm back again (the previous post in this series is here), with some interesting findings related to more MySQL Cluster testing (yes, I have promised to test more things than this, but I got so entangled with NDB that I just had to give it one more shot). Looking at my benchmarking code, I realized I used the CLIENT_COMPRESS flag when I connected to MySQL. This flag was there in the code where I connected to MySQL (using mysql_real_connect(); this is a C program, after all) and it was probably just pasted in from some other code somewhere. Not that this flag is unknown to me or anything, but I had not tested the compressed versus non-compressed MySQL client protocols much. I guess I had at one time assumed that CLIENT_COMPRESS at best helps when sending large packets between the client and the MySQL server, and above all, that for many small packets, it wasn't terribly helpful, but also not terribly harmful. Turns out I was wrong (yep, that DOES happen).
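For reference, the only difference is the client_flag argument at the end of mysql_real_connect(). A minimal sketch of the connect code is below; the host, user, credentials, and database names are placeholders, not values from my actual benchmark, and it assumes a reachable server:

```c
#include <stdio.h>
#include <mysql.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL) {
        fprintf(stderr, "mysql_init() failed\n");
        return 1;
    }

    /* Passing CLIENT_COMPRESS as the last argument would enable the
       compressed client/server protocol. With many small packets,
       that compression work just burns CPU, so pass 0 instead. */
    if (mysql_real_connect(conn, "localhost", "user", "password",
                           "test", 0, NULL, 0 /* not CLIENT_COMPRESS */) == NULL) {
        fprintf(stderr, "mysql_real_connect() failed: %s\n",
                mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    /* ... run benchmark queries here ... */

    mysql_close(conn);
    return 0;
}
```

The flag is a plain bitmask, so if your code ORs several flags together, removing CLIENT_COMPRESS from that expression is all it takes.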

Googling for CLIENT_COMPRESS, I didn't find much more than this either, to be honest: if you have many small packets, it's not going to be very helpful, but not very harmful either.

In this case, though, it was the MySQL daemon maxing out the CPU that was the issue, so maybe I should try running without CLIENT_COMPRESS. As stated above, Googling for this did not, at least not initially, provide much help, but as the CPU was maxed out, and compression consumes a lot of CPU, avoiding compression seemed worth a try.

The result? 25-30 % more performance, just like that! MySQL Cluster with NDB is now managing some 46 k requests per second, compared to the previous 35 k! Not a bad improvement. All in all, when using MySQL Cluster through the MySQL API (as opposed to NDBAPI), you probably want to avoid CLIENT_COMPRESS. You are likely issuing many small SQL statements with limited result-set sizes, and with all data in memory (well, not all if you use STORAGE DISK, but that has issues of its own), chances are that the performance bottleneck on the database side of things will be the CPU.

But don't get too excited, as I am now going to revisit this with InnoDB as well! (Yes, that is mean.)

/Karlsson

3 comments:

Unknown said...

If some compression overhead means such a difference, what role does the C/S protocol play in your benchmark?

Karlsson said...

Well, I am pushing queries as fast as I can, and the queries are simple and the result sets small; this is, after all, supposed to be a test of something that looks like a Key-Value Store. Not a document store, though, as large documents will not easily be stored in MySQL Cluster (don't tell me to use STORAGE DISK here; that one sucks in the current implementation, as fixed size on disk means A LOT of disk data to write).

But yes, the C/S protocol is a big deal here. To test that, I am planning to try using MySQL libmysqld, and see what that brings. But note that MongoDB, also running in C/S mode, was some 3 times faster than MySQL using any storage engine (besides using NDBAPI, which was pretty close to MongoDB).

Cheers
/Karlsson

Unknown said...

You know, I'm a Connectors guy. I am not after the storage details of your performance evaluations. Also, NDBAPI vs. the MySQL frontend is a bit out of scope for me.

The link to the storage system is my world. How about connect overhead?