Wednesday, August 29, 2012

Revisiting libmysqld, the client / server overhead and all that. And an apology

I wrote about the performance gains with libmysqld a few days ago, but I had just too many things in my head to do a proper comparison with the MySQL Client / Server protocol. Yes, libmysqld is faster, but not as much faster as I thought, and blogged about. What happened was that I had forgotten about one more thing to try: testing the Client / Server protocol without the dreaded CLIENT_COMPRESS flag (see more on this here).

Without CLIENT_COMPRESS, I could see NDB performance improve by some 25-30 %. But with InnoDB, which I just tested, I achieved some 98 k row reads per second! Yikes, I should have tested that one before comparing with libmysqld (where I got 115 k rows read per second, which is still faster).
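
To make it clear what actually differs between the two runs, here is a minimal sketch, not my actual benchmark code, of how the flag is set with the MySQL C API. Host, user, password and schema names are just placeholders:

    /* A minimal sketch, NOT the real benchmark: the only difference between
       the compressed and uncompressed runs is the last argument to
       mysql_real_connect(). Host, user, password and schema are placeholders. */
    #include <mysql.h>
    #include <stdio.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL)
            return 1;

        /* Pass CLIENT_COMPRESS here to turn protocol compression on,
           or 0 to leave it off (which is what gave the better numbers). */
        if (!mysql_real_connect(conn, "127.0.0.1", "bench", "benchpwd",
                                "benchdb", 3306, NULL, 0 /* or CLIENT_COMPRESS */))
        {
            fprintf(stderr, "Connect failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        /* ... run the many small single-row reads here ... */

        mysql_close(conn);
        return 0;
    }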

The good thing with all this is that we learned a multitude of things:
  • We know for sure that you should NOT use the CLIENT_COMPRESS flag. Just don't. At least not when you have many small operations going and the database is largely in RAM. I'll test this in some more detail later, to see if I can find some good cases where CLIENT_COMPRESS is a good fit, but in this case, it's not.
  • When data is in memory, and you aren't using sharding, MongoDB really isn't that much faster, maybe some 10% compared to MySQL using InnoDB. But then you get transactions, joins and all sorts of goodies with MySQL.
  • The MySQL Client / Server protocol is FAR from as sluggish as I suspected!
  • The MySQL Parser and Optimizer are not as much of an overhead as I was led to believe.
  • Using MySQL with InnoDB and a simple table might not be such a bad Key Value Store as you might think, but as always, your mileage may vary (a minimal sketch of what I mean follows this list).
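
To show what I mean by that last point, here is another minimal sketch, again not my real benchmark code, of treating a plain InnoDB table as a Key Value Store: one narrow table, read by primary key. The table and column names are made up, and the connection handle would come from mysql_real_connect() as in the sketch above.

    /* A minimal sketch, not the real benchmark: a narrow InnoDB table read by
       primary key, used as a simple Key Value Store. The table is made up,
       something like:
         CREATE TABLE kvstore (id BIGINT PRIMARY KEY, value VARCHAR(255)) ENGINE=InnoDB
       The MYSQL handle comes from mysql_real_connect() as shown earlier. */
    #include <mysql.h>
    #include <stdio.h>

    int lookup(MYSQL *conn, long id)
    {
        char query[128];
        MYSQL_RES *res;
        MYSQL_ROW row;

        /* One single-row primary key lookup; this is the kind of operation
           the rows-read-per-second figures above are counting. */
        snprintf(query, sizeof(query),
                 "SELECT value FROM kvstore WHERE id = %ld", id);
        if (mysql_query(conn, query) != 0)
            return -1;

        res = mysql_store_result(conn);
        if (res == NULL)
            return -1;
        if ((row = mysql_fetch_row(res)) != NULL)
            printf("key %ld -> %s\n", id, row[0]);
        mysql_free_result(res);
        return 0;
    }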
Frankly, the result is pretty much a surprise to me: MongoDB isn't that much faster than MySQL AT ALL, at least not in the case when data is in RAM. And if you ask how the MEMORY engine performs, well, about the same as InnoDB: slightly faster, but not by enough to say anything conclusive.

What remains to test then? Well, I have Tarantool and HandlerSocket to test. And possibly a few more things. Also, I want to test what happens when there are some bigger documents in the store that will not fit in memory in either MongoDB or MySQL. BLOBs, anyone?

/Karlsson
Apologizing for comparing apples to oranges. Heck, I already KNEW that I had used the CLIENT_COMPRESS flag, so why did I reference those tests before it was removed? I just forgot, I guess.

4 comments:

Andy said...

Hi Anders,
one reason I've seen CLIENT_COMPRESS being used is when you have low bandwidth to the DB server (not common anymore when the application and the DB reside in the same datacenter). However, this would require that you have the CPU cycles to do the compression and de-compression on both the DB server and the client, so the benefit might not be there anyway.


Karlsson said...

Yes, that is true, and I assume that is why it was implemented to begin with. These days, though, I do not think this is an issue in most cases.

Cheers
/Karlsson

Mats Kindahl said...

Have you tried out libdrizzle? It is a re-implementation of the MySQL connector library developed by Eric Day. He is using asynchronous I/O for the implementation, which showed some 16% performance increase in measurements done by Jay Pipes in 2009 (the page seems to be gone, but you can read it here). I haven't kept up with the latest developments there, but it might be interesting to look at.

Karlsson said...

Mats!

Good point, I'll give that a shot, as well as Drizzle itself eventually.

Cheers
/Karlsson