dbqp being renamed

One of the best things that can happen to a piece of software is for people to actually use it.

I’ve been fortunate enough to have received feedback on the tool from several members of both the Percona and Drizzle teams.  The most common and strongly emphasized comment was that dbqp is a terrible, terrible name to say, see, and type ; )

As that isn’t something that can be disputed (it’s really annoying to use in conversations *and* to type several dozen times a day), the project has been renamed to kewpie.  For those that follow such things, I did present on another tool with that name at the last MySQL Conference, but *that* tool is a nice-to-have, while the test-runner sees daily use.  Better to save the good names for software that actually stands a chance of being used, I say : )

While there are probably 1*10^6 other things I need to do (Stewart is a merciless slave driver as a boss, btw…heheh), the fact that we are merging the tool into the various Percona branches meant it should be done sooner rather than later.  The tool is currently in our 5.1 branch and I have merge requests up for both Drizzle and Xtrabackup (dbqp was living there too).

I have several other interesting things going on with the tests and tool, which I’ll be blogging about over at MySQL Performance Blog.  Later this week, I’ll be talking about what we’ve been doing to work on this bug ; )


Also, the Percona Live MySQL Conference in DC is just around the corner.  There are going to be some great speakers and attendees

dbqp and Xtrabackup testing

So I’m back from the Percona dev team’s recent meeting.  While there, we spent a fair bit of time discussing Xtrabackup development.  One of our challenges is that as we add richer features to the tool, we need equivalent testing capabilities.  However, it seems a constant in the MySQL world that available QA tools often leave something to be desired.  The randgen is a literal wonder-tool for database testing, but it is also occasionally frustrating / doesn’t scratch every testing itch.  It is based on technology SQL Server was using in 1998 (MySQL began using it in ~2007, IIRC).  So this is no knock, it is merely meant to be an example of a poor QA engineer’s frustrations ; )  While the current Xtrabackup test suite is commendable, it also has its limitations. Enter the flexible, adaptable, and expressive answer: dbqp.

One of my demos at the dev meeting was showing how we can set up tests for Xtrabackup using the unittest paradigm.  While this sounds fancy, basically, we take advantage of Python’s unittest module and write classes that use its code.  The biggest part of dbqp’s job is to search the specified server code (to make sure we have everything we should), allocate and manage servers as requested by the test cases, and do some reporting and management of the test cases.  As the tool matures, I will be striving to let more of the work be done by unittest code rather than things I have written : )

To return to my main point, we now have two basic tests of xtrabackup:

Basic test of backup + restore:

  1. Populate server
  2. Take a validation snapshot (mysqldump)
  3. Take the backup (via innobackupex)
  4. Clean datadir
  5. Restore from backup
  6. Take restored state snapshot and compare to original state
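As a rough illustration of how those six steps map onto the unittest paradigm, here is a minimal, self-contained sketch — a dict stands in for the server’s datadir, and take_snapshot stands in for a mysqldump; dbqp’s real tests drive a live server via innobackupex instead:

```python
import unittest

def take_snapshot(datadir):
    # stands in for a mysqldump-style validation snapshot
    return dict(datadir)

class BasicBackupTest(unittest.TestCase):
    def test_backup_restore(self):
        datadir = {"t1": [1, 2, 3]}         # 1. populate server
        before = take_snapshot(datadir)     # 2. validation snapshot
        backup = dict(datadir)              # 3. take the backup
        datadir.clear()                     # 4. clean datadir
        datadir.update(backup)              # 5. restore from backup
        after = take_snapshot(datadir)      # 6. compare restored state
        self.assertEqual(before, after)
```

The nice part is that a failure at any numbered step surfaces as a normal unittest assertion failure, with no bookkeeping code of our own.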

Slave setup

  1. Similar to our basic test except we create a slave from the backup, replicating from the backed up server.
  2. After the initial setup, we ensure replication is set up ok, then we do additional work on the master and compare master and slave states

One of the great things about this is that we have the magic of assertions.  We can insert them at any point of the test we feel like validating and the test will fail with useful output at that stage.  The backup didn’t take correctly?  No point going through any other steps — FAIL! : )  The assertion methods just make it easy to express what behavior we are looking for.  We want the innobackupex prepare call to run without error?
Boom goes the dynamite!:

# prepare our backup
cmd = ("%s --apply-log --no-timestamp --use-memory=500M "
       "--ibbackup=%s %s" % (innobackupex, xtrabackup, backup_path))
retcode, output = execute_cmd(cmd, output_path, exec_path, True)
self.assertEqual(retcode, 0, msg=output)

From these basic tests, it will be easy to craft more complex test cases.  Creating the slave test was simply a matter of adapting the initial basic test case slightly.  Our plans include: *heavy* crash testing of both xtrabackup and the server, enhancing / expanding replication tests by creating heavy randgen loads against the master during backup and slave setup, and other assorted crimes against database software.  We will also be porting the existing test suite to use dbqp entirely…who knows, we may even start working on Windows one day ; )

These tests are by no means the be-all-end-all, but I think they do represent an interesting step forward.  We can now write actual, honest-to-goodness Python code to test the server.  On top of that, we can make use of the included unittest module to give us all sorts of assertive goodness to express what we are looking for.  We will need to and plan to refine things as time moves forward, but at the moment, we are able to do some cool testing tricks that weren’t easily do-able before.

If you’d like to try these tests out, you will need the following:
* dbqp (bzr branch lp:dbqp)
* DBD::mysql installed (the tests use the randgen and this is required…hey, it is a WONDER-tool!) : )
* Innobackupex, a MySQL / Percona server and the appropriate xtrabackup binary.

The tests live in dbqp/percona_tests/xtrabackup_basic and are named basic_test.py and slave_test.py, respectively.

To run them:
$ ./dbqp.py --suite=xtrabackup_basic --basedir=/path/to/mysql --xtrabackup-path=/mah/path --innobackupex-path=/mah/other/path --default-server-type=mysql --no-shm

Some next steps for dbqp include:
1)  Improved docs
2)  Merging into the Percona Server trees
3)  Setting up test jobs in Jenkins (crashme / sqlbench / randgen)
4)  Other assorted awesomeness

Naturally, this testing goodness will also find its way into Drizzle (which currently has a 7.1 beta out).  We definitely need to see some Xtrabackup test cases for Drizzle’s version of the tool (mwa ha ha!) >: )

Drizzle / dbqp updates

Just wanted to blog about some of the latest updates to dbqp.  We just merged some interesting changes into Drizzle (just in time for the impending Fremont beta).  In addition to general code cleanup / reorganization, we have the following goodies:

Randgen in the Drizzle tree

One of the biggest things is that the random query generator (aka randgen) is now part of the Drizzle tree.  While I did some of the work here, the major drivers of this happening were Brian and Stewart:

  1. Brian makes a fair argument that the easier / more convenient it is to run a test, the greater the likelihood of it being run.  Additional tools to install, etc = not so much.  Having something right there and ready to go = win!
  2. Stewart is also a fan of convenience, lotsa testing, and working smarter, not harder.  As a result, he did the initial legwork on merging the randgen.  I do suspect there is still much for me to learn about properly bzr joining trees and whatnot, but we’ll get it right soon enough ; )

This doesn’t mean we won’t be contributing any changes we make back to the main randgen project / branch; it is strictly to facilitate more testing for Drizzle.  As we already have our randgen tests packaged into dbqp-runnable suites, running these tests is even easier : )


Another request fulfilled in this update is the ability to use Stewart’s libeatmydata to speed up testing.  By default, dbqp uses shared memory as a workdir, similar to mysql-test-run’s --mem option (this can be bypassed in dbqp with --no-shm, fyi).  However, this isn’t always perfect or desirable to do.

An alternative is to use libeatmydata, which disables fsync() calls.  As the name implies, you don’t want to use it if you care about your data, but for general testing purposes, it can greatly speed up test execution.

If you have the library installed / on your machine, you can use it like so:  ./dbqp --libeatmydata [--libeatmydata-path <path>] …

By default, libeatmydata-path is /usr/local/lib/libeatmydata.so (as if you used make install)
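Under the hood, libeatmydata is an LD_PRELOAD shim, so using it amounts to setting one environment variable before launching the server process.  A minimal sketch of how a runner might build that environment (eatmydata_env is a hypothetical helper for illustration, not dbqp’s actual code):

```python
import os
import subprocess

DEFAULT_LIB = "/usr/local/lib/libeatmydata.so"

def eatmydata_env(base_env, lib_path=DEFAULT_LIB):
    # copy the environment and preload the fsync-disabling shim
    env = dict(base_env)
    env["LD_PRELOAD"] = lib_path
    return env

# e.g. subprocess.Popen(server_cmd, env=eatmydata_env(os.environ))
```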

Multiple server types

IMHO, this is one of the coolest new tricks.  dbqp can now handle more than just Drizzle servers / source!  The ultimate idea is to let tests utilize more than one type / version of a server, which allows for more interesting tests : )  This will be useful for scenarios like testing Drizzledump migration as we can feed in one (or more) MySQL servers and a Drizzle tree and make sure we can migrate data from all of them.

We also intend to utilize dbqp for testing a variety of Percona products, and it is kind of handy to be able to run the code you are testing ; )  I already have the tool running Percona / MySQL servers and have some randgen tests working:

$ ./dbqp.py --default_server_type=mysql --basedir=/percona-server/Percona-Server --mode=randgen
Setting --no-secure-file-priv=True for randgen usage...
20111013-163443 INFO Linking workdir /dbqp/workdir to /dev/shm/dbqp_workdir_pcrews_9dbc7e8a-2872-45a9-8a07-f347f6184246
20111013-163443 INFO Using mysql source tree:
20111013-163443 INFO basedir: /percona-server/Percona-Server
20111013-163443 INFO clientbindir: /percona-server/Percona-Server/client
20111013-163443 INFO testdir: /dbqp
20111013-163443 INFO server_version: 5.5.16-rel21.0
20111013-163443 INFO server_compile_os: Linux
20111013-163443 INFO server_platform: x86_64
20111013-163443 INFO server_comment: (Percona Server with XtraDB (GPL), Release rel21.0, Revision 188)
20111013-163443 INFO Using default-storage-engine: innodb
20111013-163443 INFO Using testing mode: randgen
20111013-163443 INFO Processing test suites...
20111013-163443 INFO Found 5 test(s) for execution
20111013-163443 INFO Creating 1 bot(s)
20111013-163449 INFO Taking clean db snapshot...
20111013-163452 INFO bot0 server:
20111013-163452 INFO NAME: s0
20111013-163452 INFO MASTER_PORT: 9307
20111013-163452 INFO SOCKET_FILE: /dbqp/workdir/bot0/s0/var/s0.sock
20111013-163452 INFO VARDIR: /dbqp/workdir/bot0/s0/var
20111013-163452 INFO STATUS: 1
20111013-163506 ===============================================================
20111013-163506 TEST NAME [ RESULT ] TIME (ms)
20111013-163506 ===============================================================
20111013-163506 main.blob [ pass ] 8624
20111013-163516 main.create_drop [ pass ] 2862
20111013-163524 main.many_indexes [ pass ] 1429
20111013-163547 main.optimizer_subquery [ pass ] 17153
20111013-163558 main.outer_join [ pass ] 4243
20111013-163558 ===============================================================
20111013-163558 INFO Test execution complete in 69 seconds
20111013-163558 INFO Summary report:
20111013-163558 INFO Executed 5/5 test cases, 100.00 percent
20111013-163558 INFO STATUS: PASS, 5/5 test cases, 100.00 percent executed
20111013-163558 INFO Spent 34 / 69 seconds on: TEST(s)
20111013-163558 INFO Test execution complete
20111013-163558 INFO Stopping all running servers...

Expect to see this up and running tests against Percona Server in the next week or so.  I’ll be writing more about this soon.

Native / unittest mode

This hasn’t made it into the Drizzle tree yet.  To ease merging the code with Percona Server / Xtrabackup, I’ve created a separate launchpad project.  One of the things we needed was the ability to write complex tests directly.  It is currently easy to plug new tools into dbqp, but we essentially needed a new tool for certain testing needs.

Our solution for this was to allow dbqp to run python unittest modules.  We still have a bit of work to do before we have some demo tests ready, but we will be creating some expanded Xtrabackup tests using this system very soon.  So far, it is turning out to be pretty neat:

./dbqp.py --default_server_type=mysql --basedir=/percona-server/Percona-Server --mode=native
20111013-190744 INFO Killing pid 1747 from /dbqp/workdir/bot0/s0/var/run/s0.pid
20111013-190744 INFO Linking workdir /dbqp/workdir to /dev/shm/dbqp_workdir_pcrews_9dbc7e8a-2872-45a9-8a07-f347f6184246
20111013-190744 INFO Using mysql source tree:
20111013-190744 INFO basedir: /percona-server/Percona-Server
20111013-190744 INFO clientbindir: /percona-server/Percona-Server/client
20111013-190744 INFO testdir: /dbqp
20111013-190744 INFO server_version: 5.5.16-rel21.0
20111013-190744 INFO server_compile_os: Linux
20111013-190744 INFO server_platform: x86_64
20111013-190744 INFO server_comment: (Percona Server with XtraDB (GPL), Release rel21.0, Revision 188)
20111013-190744 INFO Using default-storage-engine: innodb
20111013-190744 INFO Using testing mode: native
20111013-190744 INFO Processing test suites...
20111013-190744 INFO Found 1 test(s) for execution
20111013-190744 INFO Creating 1 bot(s)
20111013-190749 INFO Taking clean db snapshot...
20111013-190750 INFO bot0 server:
20111013-190750 INFO NAME: s0
20111013-190750 INFO MASTER_PORT: 9306
20111013-190750 INFO SOCKET_FILE: /dbqp/workdir/bot0/s0/var/s0.sock
20111013-190750 INFO VARDIR: /dbqp/workdir/bot0/s0/var
20111013-190750 INFO STATUS: 1
20111013-190756 ===============================================================
20111013-190756 TEST NAME [ RESULT ] TIME (ms)
20111013-190756 ===============================================================
20111013-190756 main.example_test [ pass ] 1
20111013-190756 test_choice (example_test.TestSequenceFunctions) ... ok
20111013-190756 test_sample (example_test.TestSequenceFunctions) ... ok
20111013-190756 test_shuffle (example_test.TestSequenceFunctions) ... ok
20111013-190756 ----------------------------------------------------------------------
20111013-190756 Ran 3 tests in 0.000s
20111013-190756 OK
20111013-190756 ===============================================================
20111013-190756 INFO Test execution complete in 6 seconds
20111013-190756 INFO Summary report:
20111013-190756 INFO Executed 1/1 test cases, 100.00 percent
20111013-190756 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20111013-190756 INFO Spent 0 / 6 seconds on: TEST(s)
20111013-190756 INFO Test execution complete
20111013-190756 INFO Stopping all running servers...

This really only scratches the surface of what can happen, but I’ll be writing more in-depth articles on what kind of tricks we can pull off as the code gets more polished.

Three non-testing bits:

1)  Percona Live London is just around the corner and members of the Drizzle team will be there.

2)  We are *this* close to Fremont beta being ready.  The contributions and feedback have been most welcome.  Any additional testing / etc are most appreciated.

3)  Drizzle is now part of the SPI!


Drizzle multi-master testing!

So, it has been a while since I’ve blogged.  As some of you may have read, I have a new job and Stewart and I have been busy planning all kinds of testing goodness for Percona >: ) (I’ve also been recovering from trying to keep up with Stewart!)

Rest assured, gentle readers, that I have not forgotten everyone’s favorite modular, community-driven database ; )  Not by a long-shot.  I have some major improvements to dbqp getting ready for a merge (think randgen in-tree / additional testing modes / multiple basedirs of multiple types).  Additionally, I’ve been cooking up some code to test the mighty Mr. Shrews’ multi-master code (mwa ha ha!)

What I’ve done is allow for a new option to be used with a test’s .cnf file (this is a dbqp thing, won’t work with standard drizzle-test-run).  If the runner sees this request, it will generate a multi-master config file from the specified servers’ individual slave.cnf files. 

Here is a sample config:

servers = [[--innodb.replication-log],[--innodb.replication-log],[--plugin-add=slave --slave.config-file=$MASTER_SERVER_SLAVE_CONFIG]]

# we tell the system that we want
# to generate a multi-master cnf file
# for the 3rd server to use, that
# has the first two servers as masters
# the final file is written to the first
# server's general slave.cnf file
gen_multi_master_cnf= 0,1
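To make the idea concrete, here is a hedged sketch of what that generation step could look like: each listed master’s connection details get collected into one config for the slave.  The function name and the option names in the output are purely illustrative — the real format is defined by the slave plugin, not by this sketch:

```python
def gen_multi_master_cnf(master_ports, host="127.0.0.1"):
    # build one [masterN] section per backing server (illustrative format)
    lines = []
    for i, port in enumerate(master_ports, start=1):
        lines.append("[master%d]" % i)
        lines.append("master-host = %s" % host)
        lines.append("master-port = %d" % port)
        lines.append("")
    return "\n".join(lines)
```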

A good rundown of the file’s contents can be found on Shrews’ blog here.

I tried cooking up a basic test case where we spin up 3 servers – 2 masters and one slave.  On master 1, we create table t1:

CREATE TABLE t1 (a int not null auto_increment, primary key(a));

On master 2, table t2:

CREATE TABLE t2 (a int not null auto_increment, primary key(a));

We insert some records into both tables, then check that our slave has everything! Sounds simple, right?
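One caveat on the check itself: replication is asynchronous, so the verification on the slave has to poll rather than assert immediately.  A generic version of that polling pattern (illustrative only, not dbqp code) looks like:

```python
import time

def wait_for_count(query_fn, expected, timeout=30, interval=0.5):
    # poll until query_fn() returns the expected value or we time out
    deadline = time.time() + timeout
    while time.time() < deadline:
        if query_fn() == expected:
            return True
        time.sleep(interval)
    return False

# e.g. wait_for_count(lambda: slave_row_count("t1"), expected=3)
# where slave_row_count is a hypothetical SELECT COUNT(*) helper
```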

Sigh. If only. It seems that we are running into some issues when we try to record the test – you can read the bug here

We see some interesting output in the slave’s logs before it crashes:

$ cat workdir/bot0/s2/var/log/s2.err
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: 127 rollback segment(s) active.
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
(SQLSTATE 00000) Duplicate entry '772-1' for key 'PRIMARY'
Failure while executing:
INSERT INTO `sys_replication`.`queue` (`master_id`, `trx_id`, `seg_id`, `commit_order`, `originating_server_uuid`, `originating_commit_id`, `msg`) VALUES (2, 772, 1, 1, 'ac9c8ac0-8f10-474b-9bbd-b61d2cdb2b93' , 1, 'transaction_context {
server_id: 1
transaction_id: 772
start_timestamp: 1317760732106016
end_timestamp: 1317760732106017
event {
segment_id: 1
end_segment: true

Replication slave: Unable to insert into queue.
Replication slave: drizzle_state_read:lost connection to server (EOF)
Lost connection to master. Reconnecting.
Replication slave: drizzle_state_connect:could not connect
111004 16:39:05 InnoDB: Starting shutdown...

Additionally, you can just try the setup with --start-and-exit:

$ ./dbqp --suite=slave --start-and-exit multi_master_basic
20111004-170033 INFO Using Drizzle source tree:

20111004-170033 INFO Taking clean db snapshot...
20111004-170033 INFO Taking clean db snapshot...
20111004-170033 INFO Taking clean db snapshot...
20111004-170035 INFO bot0 server:
20111004-170035 INFO NAME: s0
20111004-170035 INFO MASTER_PORT: 9306
20111004-170035 INFO DRIZZLE_TCP_PORT: 9307
20111004-170035 INFO MC_PORT: 9308
20111004-170035 INFO PBMS_PORT: 9309
20111004-170035 INFO RABBITMQ_NODE_PORT: 9310
20111004-170035 INFO VARDIR: /drizzle_mm_test/tests/workdir/bot0/s0/var
20111004-170035 INFO STATUS: 1
20111004-170035 INFO bot0 server:
20111004-170035 INFO NAME: s1
20111004-170035 INFO MASTER_PORT: 9312
20111004-170035 INFO DRIZZLE_TCP_PORT: 9313
20111004-170035 INFO MC_PORT: 9314
20111004-170035 INFO PBMS_PORT: 9315
20111004-170035 INFO RABBITMQ_NODE_PORT: 9316
20111004-170035 INFO VARDIR: /drizzle_mm_test/tests/workdir/bot0/s1/var
20111004-170035 INFO STATUS: 1
20111004-170035 INFO bot0 server:
20111004-170035 INFO NAME: s2
20111004-170035 INFO MASTER_PORT: 9318
20111004-170035 INFO DRIZZLE_TCP_PORT: 9319
20111004-170035 INFO MC_PORT: 9320
20111004-170035 INFO PBMS_PORT: 9321
20111004-170035 INFO RABBITMQ_NODE_PORT: 9322
20111004-170035 INFO VARDIR: /drizzle_mm_test/tests/workdir/bot0/s2/var
20111004-170035 INFO STATUS: 1
20111004-170035 INFO User specified --start-and-exit. dbqp.py exiting and leaving servers running...
pcrews@mister:/drizzle_mm_test/tests$ ps -al
0 S 1000 18652 1 2 80 0 - 112094 poll_s pts/2 00:00:00 lt-drizzled
0 S 1000 18688 1 3 80 0 - 112096 poll_s pts/2 00:00:00 lt-drizzled
0 S 1000 18721 1 3 80 0 - 156326 poll_s pts/2 00:00:00 lt-drizzled
0 R 1000 18780 15985 0 80 0 - 3375 - pts/2 00:00:00 ps
0 S 1000 32463 30047 0 80 0 - 11272 poll_s pts/1 00:00:01 ssh

From here, we can connect to the slave and check out sys_replication.applier_state:

$ drizzle -uroot -p9318 test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the Drizzle client.. Commands end with ; or \g.
Your Drizzle connection id is 216
Connection protocol: mysql
Server version: 2011.09.26.2427 Source distribution (drizzle_mm_test)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle> use sys_replication;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Schema changed
drizzle> show tables;
| Tables_in_sys_replication |
| applier_state |
| io_state |
| queue |
3 rows in set (0.001641 sec)

drizzle> select * from applier_state;
| master_id | last_applied_commit_id | originating_server_uuid | originating_commit_id | status | error_msg |
| 1 | 0 | f716781f-8c00-4b81-82c6-62039136d616 | 0 | RUNNING | |
| 2 | 3 | df7f2f6e-dba4-43ea-a674-fa4a3709865b | 3 | RUNNING | |
2 rows in set (0.000928 sec)

drizzle> select * from io_state;
| master_id | status | error_msg |
| 1 | STOPPED | |
| 2 | RUNNING | |
2 rows in set (0.000839 sec)


So, it looks like the slave knows about both masters, but for some reason, the applier from master 1 is stopped : (
At any rate, there is a bug open on this and it could be something in my config(?) It’s been a while since I’ve played with replication and I know there has been some tinkering under the hood since then : )

The branch with the test code can be found here.

At the very least, we can now create tests that use this feature, which will help ensure that it stays on the path of solid code in the future! How about anyone out there? Has anyone been using multi-master? If so, can you share any setups / tests? Extra information would be most appreciated : )

Drizzle’s Jenkins system using dbqp for randgen and crashme testing

Well, that’s pretty much it, thanks for stopping by ; )

In all seriousness, it’s kind of neat that we’re using dbqp to run some of our staging tests and we gain a few neat things:


Here are the trend charts for randgen and crashme.  While it doesn’t look like randgen is showing much of an improvement, it is worth mentioning that this job now runs both the standard and the transaction log tests in a single run >: )  Previously, we had a separate drizzle-automation job for the transaction log.  Just the trx_log tests took ~30 minutes to run (plus build time).  Long story short, we’re saving about 30-40 minutes on randgen testing per staging run and only needing to build once!


The jobs we run are in the tree and anyone can easily repeat them.  While Drizzle-automation kicks major butt (and I have taken many ideas from it), it is a separate piece of software that requires setup and maintenance.  Basing things around an in-tree setup means that you only need the code and any required bits and pieces.  Now if we need to set up a new randgen machine, we only need the randgen and dbd::drizzle installed (and we plan on including randgen in-tree soon, so you won’t even need that!).  If we need to set up a new crash-me machine, we only need dbd::drizzle – and everyone should have dbd::drizzle installed! ; )

Ease of use

Pretty much all tests provide the same standard output:

dtr mode

Our default mode is dtr (aka using drizzletest.cc to execute standard .test files).  To run all available tests, use the make target: make test-dbqp

20110621-081404  trigger_dictionary.loaded                  [ pass ]       43
20110621-081408  logging_stats.cumulative                   [ pass ]     1045
20110621-081412  errmsg_stderr.stderr                       [ pass ]       36
20110621-081412  ===============================================================
20110621-081412 INFO Test execution complete in 496 seconds
20110621-081412 INFO Summary report:
20110621-081412 INFO Executed 566/566 test cases, 100.00 percent
20110621-081412 INFO STATUS: PASS, 566/566 test cases, 100.00 percent executed
20110621-081412 INFO Spent 254 / 496 seconds on: TEST(s)
20110621-081412 INFO Test execution complete
20110621-081412 INFO Stopping all running servers...

randgen mode

From the command:

./dbqp --mode=randgen --randgen-path=/path/to/your/randgen

20110621-170141  main.subquery                              [ pass ]     3780
20110621-170148  main.subquery_semijoin                     [ pass ]     3016
20110621-170156  main.subquery_semijoin_nested              [ pass ]     3750
20110621-170202  main.varchar                               [ pass ]     2658
20110621-170202  ===============================================================
20110621-170202 INFO Test execution complete in 147 seconds
20110621-170202 INFO Summary report:
20110621-170202 INFO Executed 19/19 test cases, 100.00 percent
20110621-170202 INFO STATUS: PASS, 19/19 test cases, 100.00 percent executed
20110621-170202 INFO Spent 77 / 147 seconds on: TEST(s)
20110621-170202 INFO Test execution complete
20110621-170202 INFO Stopping all running servers...

crashme mode

From the command:

./dbqp --mode=crashme

20110621-181515  main.crashme                               [ fail ]   149840
20110621-181515  func_extra_to_days=error        # Function TO_DAYS
20110621-181515  ###
20110621-181515  ###<select to_days('1996-01-01') from crash_me_d
20110621-181515  ###>2450084
20110621-181515  ###We expected '729024' but got '2450084'
20110621-181515  func_odbc_timestampadd=error        # Function TIMESTAMPADD
20110621-181515  ###
20110621-181515  ###<select timestampadd(SQL_TSI_SECOND,1,'1997-01-01 00:00:00')
20110621-181515  ###>1997-01-01 00:00:01.000000
20110621-181515  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110621-181515  ###
20110621-181515  ###<select {fn timestampadd(SQL_TSI_SECOND,1,{ts '1997-01-01 00:00:00'}) }
20110621-181515  ###>1997-01-01 00:00:01.000000
20110621-181515  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110621-181515 ERROR Failed test.  Use --force to execute beyond the first test failure
20110621-181515  ===============================================================
20110621-181515 INFO Test execution complete in 153 seconds
20110621-181515 INFO Summary report:
20110621-181515 INFO Executed 1/1 test cases, 100.00 percent
20110621-181515 INFO STATUS: FAIL, 1/1 test cases, 100.00 percent executed
20110621-181515 INFO FAIL tests: main.crashme
20110621-181515 INFO Spent 149 / 153 seconds on: TEST(s)
20110621-181515 INFO Test execution complete
20110621-181515 INFO Stopping all running servers...

While this isn’t a huge feature, it is nice to have a standardized report for knowing if something failed, what failed and how (we always dump test tool output on test failures).  Why is this nice?  Well, the world is a busy place and only needing to know one way of reading test output simplifies things just a teensy little bit.  This small improvement becomes a huge benefit over time if you happen to spend good chunks of your day looking at test output like me : )
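A fringe benefit of a fixed report format is that it is trivially machine-parseable.  For example, a check that a run passed could just inspect the STATUS summary line — this is a sketch of my own for illustration, not a supported dbqp interface:

```python
import re

def run_passed(report_text):
    # look for the summary line, e.g. "INFO STATUS: PASS, 5/5 test cases, ..."
    match = re.search(r"STATUS:\s+(PASS|FAIL)", report_text)
    return bool(match) and match.group(1) == "PASS"
```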

Other than that, I’m still working on teaching dbqp interesting new tricks that will help me in testing SkySQL’s Reference Architecture – expect to hear more about that next month!

Drizzle testing – now with more server stressing goodness!

One of the long term testing goals for Drizzle is to move all of our test logic directly in-tree.  Currently, we use a system called drizzle-automation to execute a variety of tests for our staging branch.  This is the final set of tests patches must pass before being allowed to merge into Drizzle trunk and includes things like sysbench, dbt2, the randgen, etc.  With the development of dbqp, we can now move this testing logic directly into the tree (and even move some of the testing tools there as well).  Of course, I’ve rambled on about this before, but I personally think it is cool and useful ; )  However enough of the sales pitch, on to the new modes!

sysbench mode

With but a simple incantation of ./dbqp --mode=sysbench [--suite=readonly|readwrite], you too can invoke the mighty sysbench configurations that we use to ensure each and every Drizzle patch is worth its salt!

Basically, each test case is a sysbench command line for a certain concurrency:

sysbench --max-time=240 --max-requests=0 --test=oltp --db-ps-mode=disable --drizzle-table-engine=innodb --oltp-read-only=off --oltp-table-size=1000000 --drizzle-mysql=on --drizzle-user=root --drizzle-db=test --drizzle-port=$MASTER_MYPORT --drizzle-host=localhost --db-driver=drizzle --num-threads=32

readonly and readwrite suites differ only in the --oltp-read-only switch being on|off.
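Note that the command line above contains $MASTER_MYPORT, so the runner has to substitute the allocated server’s values before executing it.  A minimal sketch of that substitution step (expand_server_vars is a hypothetical helper, not dbqp’s actual code):

```python
def expand_server_vars(cmd_template, server_vars):
    # replace $NAME placeholders with the allocated server's values
    cmd = cmd_template
    for name, value in server_vars.items():
        cmd = cmd.replace("$%s" % name, str(value))
    return cmd

# e.g. expand_server_vars(sysbench_cmd, {"MASTER_MYPORT": 9306})
```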

The output looks like this (at present):

20110601-191706  ===============================================================
20110601-191706  TEST NAME                                  [ RESULT ] TIME (ms)
20110601-191706  ===============================================================
20110601-191706  readonly.concurrency_16                    [ pass ]   240019
20110601-191706  max_req_lat_ms: 21.44
20110601-191706  rwreqps: 4208.2
20110601-191706  min_req_lat_ms: 6.31
20110601-191706  deadlocksps: 0.0
20110601-191706  tps: 150.29
20110601-191706  avg_req_lat_ms: 6.65
20110601-191706  95p_req_lat_ms: 7.02
20110601-191706  ===============================================================
20110601-191706 INFO Test execution complete in 275 seconds
20110601-191706 INFO Summary report:
20110601-191706 INFO Executed 1/1 test cases, 100.00 percent
20110601-191706 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20110601-191706 INFO Spent 240 / 275 seconds on: TEST(s)
20110601-191706 INFO Test execution complete
20110601-191706 INFO Stopping all running servers...

This is probably the most ‘work-in-progress’ mode we have.  The reason for this is that our Jenkins system uses a database of previous results for comparison / emailing and we need to come up with some way to keep this bit working properly.  I’m still collaborating with the mighty computing wizard Monty Taylor on this.  One of the possibilities we’ve discussed is the use of the Phoronix Test Suite.  Personally, I think this looks pretty interesting / promising and if any php gurus want to assist here, we will compose ballads to honor your awesomeness.

sqlbench mode

Technically, sqlbench and crashme modes are both tied to the sql-bench test suite; however, they do different things and produce different output, so I will discuss them separately.

The biggest thing to note is that sql-bench is now in-tree.  You can read a bit more about this tool here and here

This mode basically calls the run-all-tests sql-bench script.  This executes all of the available tests for sql-bench and reports on the results (dbqp will fail if any sql-bench test does).  NOTE – this takes some time (~45 minutes on my laptop)

To use it:
./dbqp --mode=sqlbench


20110608-135645  ===============================================================
20110608-135645  TEST NAME                                  [ RESULT ] TIME (ms)
20110608-135645  ===============================================================
20110608-135645  main.all_sqlbench_tests                    [ pass ]  2732007
20110608-135645  Test finished. You can find the result in:
20110608-135645  drizzle/tests/workdir/RUN-drizzle-Linux_2.6.38_9_generic_x86_64
20110608-135645  Benchmark DBD suite: 2.15
20110608-135645  Date of test:        2011-06-08 13:11:10
20110608-135645  Running tests on:    Linux 2.6.38-9-generic x86_64
20110608-135645  Arguments:           --connect-options=port=9306 --create-options=ENGINE=innodb
20110608-135645  Comments:
20110608-135645  Limits from:
20110608-135645  Server version:      Drizzle 2011.06.19.2325
20110608-135645  Optimization:        None
20110608-135645  Hardware:
20110608-135645  alter-table: Total time: 42 wallclock secs ( 0.06 usr  0.04 sys +  0.00 cusr  0.00 csys =  0.10 CPU)
20110608-135645  ATIS: Total time: 22 wallclock secs ( 4.01 usr  0.26 sys +  0.00 cusr  0.00 csys =  4.27 CPU)
20110608-135645  big-tables: Total time: 24 wallclock secs ( 4.16 usr  0.22 sys +  0.00 cusr  0.00 csys =  4.38 CPU)
20110608-135645  connect: Total time: 31 wallclock secs ( 6.81 usr  4.50 sys +  0.00 cusr  0.00 csys = 11.31 CPU)
20110608-135645  create: Total time: 59 wallclock secs ( 2.93 usr  1.65 sys +  0.00 cusr  0.00 csys =  4.58 CPU)
20110608-135645  insert: Total time: 1962 wallclock secs (270.53 usr 66.35 sys +  0.00 cusr  0.00 csys = 336.88 CPU)
20110608-135645  select: Total time: 560 wallclock secs (23.12 usr  4.62 sys +  0.00 cusr  0.00 csys = 27.74 CPU)
20110608-135645  transactions: Total time: 21 wallclock secs ( 2.43 usr  1.98 sys +  0.00 cusr  0.00 csys =  4.41 CPU)
20110608-135645  wisconsin: Total time: 10 wallclock secs ( 2.11 usr  0.52 sys +  0.00 cusr  0.00 csys =  2.63 CPU)
20110608-135645  All 9 test executed successfully
20110608-135645  Totals per operation:
20110608-135645  Operation             seconds     usr     sys     cpu   tests
20110608-135645  alter_table_add                       18.00    0.02    0.00    0.02     100
20110608-135645  alter_table_drop                      17.00    0.02    0.01    0.03      91
20110608-135645  connect                                2.00    1.02    0.51    1.53    2000
20110608-135645  update_rollback                        3.00    0.26    0.23    0.49     100
20110608-135645  update_with_key                       73.00    6.70    5.23   11.93  300000
20110608-135645  update_with_key_prefix                34.00    4.45    2.30    6.75  100000
20110608-135645  wisc_benchmark                         2.00    1.49    0.00    1.49     114
20110608-135645  TOTALS                              2865.00  310.26   79.94  390.20 2974250
20110608-135645  ===============================================================
20110608-135645 INFO Test execution complete in 2735 seconds
20110608-135645 INFO Summary report:
20110608-135645 INFO Executed 1/1 test cases, 100.00 percent
20110608-135645 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20110608-135645 INFO Spent 2732 / 2735 seconds on: TEST(s)
20110608-135645 INFO Test execution complete
20110608-135645 INFO Stopping all running servers...

crashme mode

This mode is also provided thanks to the sql-bench suite, but the output and processing are different, thus a separate mode and section : )

Anyway, there is a script called crash-me that is provided with sql-bench.  We execute this script, look for any test failures in the output, and report pass/fail.
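That scan boils down to spotting `=error` entries in crash-me’s output.  A minimal sketch (hypothetical function name; the real mode also deals with the limits file crash-me writes):

```python
def crashme_failures(lines):
    """Collect crash-me checks that ended in '=error'; pass/fail is
    then simply 'no failures found'."""
    fails = []
    for line in lines:
        head = line.split("#", 1)[0]          # drop the trailing comment
        if "=error" in head:
            fails.append(head.split("=", 1)[0].strip())
    return fails

report = [
    "func_extra_to_days=error           # Function TO_DAYS",
    "func_odbc_timestampadd=error       # Function TIMESTAMPADD",
    "func_extra_abs=yes                 # Function ABS",
]
print(crashme_failures(report))  # ['func_extra_to_days', 'func_odbc_timestampadd']
```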

There is an interesting story around these tests (and the sample output)- our Jenkins crashme slave has been down / having problems for a while.  Due to life and whatnot, we’ve had some issues getting it sorted.  However, once I got this mode up and running, I discovered that we were failing some tests:

20110608-152759  ===============================================================
20110608-152759  TEST NAME                                  [ RESULT ] TIME (ms)
20110608-152759  ===============================================================
20110608-152759  main.crashme                               [ fail ]   155298
20110608-152759  func_extra_to_days=error           # Function TO_DAYS
20110608-152759  ###
20110608-152759  ###<select to_days('1996-01-01') from crash_me_d
20110608-152759  ###>2450084
20110608-152759  ###We expected '729024' but got '2450084'
20110608-152759  func_odbc_timestampadd=error               # Function TIMESTAMPADD
20110608-152759  ###
20110608-152759  ###<select timestampadd(SQL_TSI_SECOND,1,'1997-01-01 00:00:00')
20110608-152759  ###>1997-01-01 00:00:01.000000
20110608-152759  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110608-152759  ###
20110608-152759  ###<select {fn timestampadd(SQL_TSI_SECOND,1,{ts '1997-01-01 00:00:00'}) }
20110608-152759  ###>1997-01-01 00:00:01.000000
20110608-152759  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110608-152759 ERROR Failed test.  Use --force to execute beyond the first test failure
20110608-152759  ===============================================================
20110608-152759 INFO Test execution complete in 158 seconds
20110608-152759 INFO Summary report:
20110608-152759 INFO Executed 1/1 test cases, 100.00 percent
20110608-152759 INFO STATUS: FAIL, 1/1 test cases, 100.00 percent executed
20110608-152759 INFO FAIL tests: main.crashme
20110608-152759 INFO Spent 155 / 158 seconds on: TEST(s)
20110608-152759 INFO Test execution complete

So, while our tests were down, an ugly bug crept into the works.  Of course, it is terrible that we have a bug, but we can always use bzr to bisect our way to the culprit code (expect a mode for that soon!) and we see the value of constant testing!  At any rate, we can now get our Jenkins slave back in working order and any developer or user that wants to stress their server now has an easy way to do so : )

Upcoming work

I’ve also been doing some cleaning up / reorganizing of dbqp code to allow for other neat tricks.  These changes will enable it to run other servers such as MySQL and allow it to serve as the basis of test suites for tools like mydumper and xtrabackup – I’ve already been discussing things with Stewart and Andrew about this and will be blogging / demoing the code very soon.

Additionally, we’re also going to see about moving the randgen into the Drizzle tree.  We use it for a significant portion of our testing and through the magic of bzr join, it will be easy to provide this tool for everyone (provided they have DBD::drizzle installed, of course).  Stewart was kind enough to set up an initial tree, I’ve just been too busy with SkySQL work to get it done this week.

Finally, we’re still moving forward with making dbqp *the* Drizzle test runner.  This is naturally happening in baby steps, but expect to see some changes in the next month or so.

With that said, I hope that people will enjoy playing with the new toys and I look forward to providing more fun ways of making your favorite dbms sweat in the near future >: )

Hello, SkySQL!

So, as LinuxJedi so eloquently noted here, Rackspace and Drizzle are parting ways.  While they were kind enough to offer other opportunities with them, my preferences were similar to Andrew’s – to remain in the MySQL/Drizzle world.

I was fortunate enough that SkySQL had need of my services and am happy to announce that today marks my first day as a Senior QA Engineer with them.  I am very honored to join such a promising and skilled group and am very excited about the opportunities ahead.

My work will have me developing QA solutions (tests, code, whatever) for a wide variety of things.  Naturally, this includes SkySQL’s Reference Architecture where I will work on tests to ensure the delivered packages work perfectly for our customers (among many, many other things).  Additionally, I’ll be working with tools such as LinuxJedi’s modifications to Domas’ oh-so-tasty mydumper.  Basically, a little bit of everything ; )  One of the first things I’ll be doing is adapting dbqp to work with MySQL.  This will allow us to handle a wide variety of testing challenges from a single, adaptable platform in the MySQL world as well.  Drizzle will be replacing the legacy test-runner with this tool in the very near future.

Speaking of which, I’ve also been encouraged to continue contributing to Drizzle.  I have been speaking with Stewart recently about beefing up xtrabackup tests and putting the random query generator directly in the Drizzle tree.  It’s quite amazing to be in a position where I can collaborate on projects across company lines.  I also think it is particularly cool that Henrik notes SkySQL were the first to do a production install of Drizzle for someone!

With that said, I have plenty on my plate already and it’s time to get back to work : )

New dbqp feature – using pre-created datadirs for tests

Why would one want to do this, you may ask?  Well, for starters, it makes a great ‘canary-in-the-coal-mine‘ in regards to backwards compatibility!

For Drizzle, we’ve created some tables (via the randgen’s data generator if you are curious), saved a copy of the datadir, and then created a test case that uses said datadir for the test server.  The test executes some simple SQL queries to make sure we can read the tables properly.  This way, if we ever do something to either the server or .dfe format (data format exchange – had a most enlightening conversation with the team about this format’s history at the MySQL UC), we’ll have a broken test that cries about it.  From there, we’ll know we have to take some action.  The always-amazing Stewart Smith has also created some foreign key backwards compatibility tests, which I believe marks further progress towards the magical goodness that is catalogs!

We signal that we want to do this by using a .cnf file (the server section and datadir name below are illustrative):

[test_servers]
servers = [[]]

[s0]
load-datadir=backwards_compat_data


Each server is named s0, s1, … sN.  If a server name is contained in the .cnf file, the test-runner will do the appropriate magic to use the specified datadir for that server.  The argument to load-datadir is the name of the directory that is intended for use in the test.  All datadirs are expected to live in drizzle/tests/std_data.  Tests that do use a .cnf file, like main.backwards_compatibility and slave.basic, are skipped by test-run.pl automatically (you *can’t* run them via test-run.pl).
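The bookkeeping described above can be sketched in a few lines of Python – the .cnf contents here are hypothetical, following the description just given:

```python
import re
from configparser import ConfigParser
from io import StringIO

# Hypothetical .cnf along the lines described above; the directory
# name 'backwards_compat_data' is illustrative.
cnf_text = """\
[test_servers]
servers = [[]]

[s0]
load-datadir = backwards_compat_data
"""

parser = ConfigParser()
parser.read_file(StringIO(cnf_text))

# Collect each server section (s0, s1, ... sN) and the datadir it
# requests; the runner would resolve these under drizzle/tests/std_data.
datadirs = {s: parser.get(s, "load-datadir")
            for s in parser.sections()
            if re.fullmatch(r"s\d+", s) and parser.has_option(s, "load-datadir")}
print(datadirs)  # {'s0': 'backwards_compat_data'}
```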

This is something that I don’t believe could be accomplished with the old test runner, or at least not *easily* done (see Rube Goldberg) ; )  At some point, we will switch over to dbqp entirely and remove test-run.pl.  Seeing comments like this makes me happy and makes me think things are on track.

dbqp was created with the idea that it should be easy to express complex testing setups (multiple servers, using a preloaded datadir, etc, etc) and it looks like the incubation is starting to pay off.  In addition to allowing this voodoo to happen, the code I’ve added to the test runner will allow us to start doing proper tests of the super Mr. Shrewsbury’s multi-master replication.  Joe Daly has also been doing some very promising work on hierarchical replication based on Dave’s tree.  I’ll be creating some example tests for these badass features soon.  The moral of the story is that by rethinking our test-runner, one tiny bit of code helps us move the ball forward on testing replication, backwards compatibility, and catalogs.

It’s honestly one of the best parts of working on the Drizzle project – being encouraged to experiment and rethink problems has enabled all sorts of innovation (but one example of Monty Taylor’s computing wizardry!) and cool features.  Thanks to this freedom to experiment, we now have even more ways of making sure we are producing quality code.

My view of QA is that we do help test, but that we also help other people answer their own questions about quality (via tools, documentation, examples, etc).  Ultimately, a test is a question – “Do you return the right answer for this query?”, “Can you survive a beating from the randgen?”, etc – and asking questions should be easy and informative.  QA shouldn’t be the sole province of some obscure priest class, but everyone’s playground.  When I see developers like Stewart writing interesting test cases and even contributing to the test tool itself, I’m even happier than when I find a bug (and finding bugs is quite awesome!).

Anyway, the code is proposed for a merge to trunk and documentation is available (under testing/writing test cases).  I hope that this makes trying to break things even more fun for people >: )

Testing Xeround’s database as a service

So while I was at the MySQL UC, the Xeround database came to my attention.  It bills itself as a database-as-a-service for MySQL systems and a seamless replacement for standard MySQL.

Of course, since I am a QA Engineer, I could not resist the urge to try to break it >:)  As my friend and former MySQL colleague Kostja says, “QA Engineers are a unique breed…they like to push all the buttons” : )  I would say that the QA mindset goes a bit further than that, but it is something I will delve into in another post.  I will only say that there is a reason that Microsoft recognizes QA software engineering as a distinct and specialized discipline.

So, let’s get back to Xeround.  It was the first database as a service that caught my eye and I just had to test it!  They are currently offering a free beta.  It is remarkably easy and fast to get set up with a test database and the web-based dashboard they provide is pretty interesting and offers some good information (though some of it is confusing…more on that in a bit).

It was my intent to run a small handful of tests with the mighty, mighty randgen!

My tests were as follows:

  1. outer_join grammar – creates seriously nasty JOIN queries that can use up to 20 tables
  2. transactional grammar – we have a grammar that creates a variety of transactions.  Some good, some bad, with lots of ROLLBACKs and SAVEPOINTs sprinkled in for spice.
  3. subqueries – the nastiest grammar I have created and as I have mentioned elsewhere, it is also part of why we are just now seeing optimizer features like index condition pushdown (ICP) being reintroduced to MySQL >: )

My thoughts were that these could be quickly executed and point out any serious problems in basic functionality.  MySQL and Drizzle both use these grammars as part of their testing.  Drizzle must survive these tests on every push to trunk, so these seem like reasonable stressors for a new engine >: )

It should be noted that I had to modify the test grammars to accommodate some Xeround limitations; the modified randgen branch I used is here.  It can be branched via bzr branch lp:~patrick-crews/randgen/randgen_drizzle_exp

Each grammar would be run with the randgen’s --debug option.  This is because the user is presented with a nice report at the end of the run which indicates row_count => query_count (i.e. how many queries returned a given number of rows):

# 2011-04-27T20:40:18 Rows returned:
$VAR1 = {
'    0' => 59,
'    1' => 2,
'    4' => 1,
'    9' => 1,
'   -1' => 35,
'>100' => 1
};

I would use this as a comparison point against MySQL 5.1.  Granted, I could use the --Validator=ResultsetComparatorSimplify option, but then I would have an actual bug report that I would feel compelled to file, and this would feel less like fun and more like work ; )  However, I have been in contact with engineers from Xeround and have shared my findings with them.
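The report itself is just a histogram of per-query row counts.  A minimal sketch of how one might tally it, with bucket boundaries inferred from the reports shown in this post (-1 marking an errored query):

```python
def row_count_report(row_counts):
    """Bucket per-query row counts the way the randgen --debug report
    does: exact counts up to 10, then '>10' and '>100'."""
    buckets = {}
    for n in row_counts:
        if n > 100:
            key = ">100"
        elif n > 10:
            key = ">10"
        else:
            key = str(n)
        buckets[key] = buckets.get(key, 0) + 1
    return buckets

# e.g. 3 empty results, one error, one mid-size and one big result set:
print(row_count_report([0, 0, 0, -1, 4, 250]))
# {'0': 3, '-1': 1, '4': 1, '>100': 1}
```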

For the transactional grammar, I would run the grammar on each system and then do a diff of mysqldump files from each database.  As Xeround is a MySQL engine, this could cause some differences, but the data in the tables should be consistent.
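The dump comparison is a plain textual diff of the two mysqldump outputs.  A sketch with difflib, using hypothetical dump snippets standing in for the full files (the ENGINE/COLLATE difference shown is the kind of expected noise):

```python
import difflib

# Stand-ins for mysqldump output from each server; the real check
# diffs the complete dump files.
mysql_dump = ["CREATE TABLE `a` (", ") ENGINE='InnoDB' COLLATE='utf8_general_ci';"]
xeround_dump = ["CREATE TABLE `a` (", ") ENGINE='Xeround' COLLATE='utf8_bin';"]

diff = list(difflib.unified_diff(mysql_dump, xeround_dump, lineterm=""))
for line in diff:
    print(line)
```

Any lines beyond the expected engine/collation noise would point at genuinely inconsistent data.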

Before I get into the testing results, I’ll provide some overall impressions:
As I said, the web interface is pretty nice and provides you with a lot of useful information.  It allows you to easily create a new named database instance and provides you with data such as status, scale, uptime, CPU utilization, memory utilization, number of connections, ops/sec, and message count.  Scale refers to the autoscale capabilities that Xeround advertises.  For the beta, you are allowed to scale from 3 to 4 servers.  3 servers is considered 100%; adding the extra server (when certain user-specified CPU or memory limits are hit) equates to 133%.  Interestingly enough, I observed that there were always 6 active connections when the database was idle (probably some of the Xeround ‘secret sauce‘ working…).

The control panel also allows the user to set the CPU, memory, and connections limits that will trigger scale up (and possibly scale down).  In my experiments, I never seemed to tax memory or connections, but CPU limits were hit and auto-scale did trigger, though I will admit that I didn’t observe any noticeable change in the test execution.

There are also tabs for backup (not available in the free beta, though mysqldump does work against a Xeround instance), general monitoring which provides real-time information about CPU, memory and connections, and an events (messages) tab.  The one thing I noted about the events tab was that I received a number of warning messages about the health of my database during times I wasn’t using it.  However, it is a beta service for general evaluation and certain oddities are to be expected.

Here is what I found with my tests:
1)  Xeround is a MySQL engine.  They do advertise this, but the main reason I noticed that all of my created test tables were now ‘Engine=Xeround’ was that I was unable to create a varchar_1024 indexed column.  Xeround is limited to 255 characters max:

# 2011-04-27T19:50:27 key (`col_char_1024_key` ))  failed: 1074 Column length too big for column 'col_char_1024' (max = 255); use BLOB or TEXT instead

This limitation required modification of the randgen grammars and gendata files to limit char columns to 255.  As noted above, you can find the modified version of the randgen here.

2)  Tables with an ENGINE=$engine_name argument are processed without an issue (i.e. you should be able to use a dumpfile without problems) and are converted to Xeround tables.  One thing to note is that dumpfiles *from* Xeround have ENGINE=Xeround for the CREATE TABLE statements:

drizzle> create table t1 (a int not null auto_increment, primary key(a)) engine=innodb;
Query OK, 0 rows affected, 2 warnings (0.702761 sec)
drizzle> show create table t1;
| Table | Create Table                                                                                                                          |
| t1    | CREATE TABLE `t1` (
  `a` INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`a`)
) ENGINE=Xeround DEFAULT CHARSET=utf8 COLLATE=utf8_bin |

3)  outer_join grammar:
I used the following command line:

./gentest.pl --gendata=conf/drizzle/outer_join_drizzle.zz --grammar=conf/drizzle/outer_join_drizzle.yy --queries=100 --threads=1 --dsn=dbi:mysql:host= --sqltrace --debug

The test is designed to generate queries with large numbers of tables (up to ~20).  The test ran without much incident.  The Xeround server monitor indicated that the CPU was hovering near 80% for most of the time, but again…beta test setup, so I’ll give them some leeway.

The big trouble is what follows.  Remember those randgen summary reports I mentioned earlier?  Below is a comparison of Xeround vs. MySQL for the same command line.  The values are row_count’ => number_of_queries_returning_said_row_count.  What this means is that for the same set of queries, Xeround and MySQL do not always return the same result sets.  I did not note any differences in query failures, so this simply indicates that results processing is differing somewhere : (  To elaborate, Xeround had 56 queries that returned 0 rows, for the same workload, MySQL only had 39.  A row count of -1 indicates that there was an error with the query, such as referencing a table or column that doesn’t exist.  Somehow, Xeround hit fewer errors than MySQL, though that is also worrisome – why do they register errors differently?

# 2011-04-27T20:11:05 Rows returned:
$VAR1 = {
'    0' => 56,
'    1' => 16,
'    2' => 6,
'    3' => 2,
'    5' => 1,
'    6' => 1,
'    7' => 1,
'    8' => 1,
'   -1' => 13,
'   10' => 2,
'>10' => 1
};

MySQL 5.1

$VAR1 = {
'    0' => 39,
'    1' => 15,
'    2' => 2,
'    3' => 2,
'    4' => 1,
'    7' => 2,
'    8' => 1,
'   -1' => 32,
'   10' => 1,
'>10' => 5
};

4)  transactional grammar:
I used the following command line:

./gentest.pl --gendata=conf/drizzle/translog_drizzle.zz --grammar=conf/drizzle/translog_concurrent1.yy --queries=100 --threads=1 --dsn=dbi:mysql:host= --sqltrace --debug

This grammar generates a variety of transactions and standalone queries.  The queries generated consist of both good and invalid SQL with lots of ROLLBACK’s and SAVEPOINT’s here and there.  Unfortunately, I noticed a large number of differences.  We’ll start with the easiest one:

< ) ENGINE='InnoDB' AUTO_INCREMENT=105 COLLATE='utf8_general_ci';
> ) ENGINE='Xeround' COLLATE='utf8_bin';

It isn’t huge, but Xeround apparently auto-converts table names to lower-case.  The randgen attempts to create table `A`, but it is stored as table `a`.  This could be an issue for some people, but Xeround does say that the beta is for people to evaluate the system’s suitability for their purposes.

The big issue is that Xeround appears to not have registered a lot of the transactions issued by the randgen.  The Xeround dumpfile only contained the original 10 rows from table `a`, while the MySQL 5.1 version I ran locally had 94 rows by the end of the randgen run : (

Further research of the randgen logs indicate the following issue:

# 2011-04-27T20:06:56 Query:  INSERT INTO `d` ( `col_char_10` , `col_char_10_key` , `col_char_10_not_null` , `col_char_10_not_null_key` , `col_char_255` , `col_char_255_key` , `col_char_255_not_null` , `col_char_255_not_null_key` , `col_int` , `col_int_key` , `col_int_not_null` , `col_int_not_null_key` , `col_bigint` , `col_bigint_key` , `col_bigint_not_null` , `col_bigint_not_null_key` , `col_enum` , `col_enum_key` , `col_enum_not_null` , `col_enum_not_null_key` , `col_text` , `col_text_key` , `col_text_not_null` , `col_text_not_null_key` ) SELECT `col_char_10` , `col_char_10_key` , `col_char_10_not_null` , `col_char_10_not_null_key` , `col_char_255` , `col_char_255_key` , `col_char_255_not_null` , `col_char_255_not_null_key` , `col_int` , `col_int_key` , `col_int_not_null` , `col_int_not_null_key` , `col_bigint` , `col_bigint_key` , `col_bigint_not_null` , `col_bigint_not_null_key` , `col_enum` , `col_enum_key` , `col_enum_not_null` , `col_enum_not_null_key` , `col_text` , `col_text_key` , `col_text_not_null` , `col_text_not_null_key` FROM `bb`  ORDER BY `col_bigint`,`col_bigint_key`,`col_bigint_not_null`,`col_bigint_not_null_key`,`col_char_10`,`col_char_10_key`,`col_char_10_not_null`,`col_char_10_not_null_key`,`col_char_255`,`col_char_255_key`,`col_char_255_not_null`,`col_char_255_not_null_key`,`col_enum`,`col_enum_key`,`col_enum_not_null`,`col_enum_not_null_key`,`col_int`,`col_int_key`,`col_int_not_null`,`col_int_not_null_key`,`col_text`,`col_text_key`,`col_text_not_null`,`col_text_not_null_key`,`pk` LIMIT 50 /*Generated by THREAD_ID 1*/  failed: 1038 Out of sort memory; increase server sort buffer size

So, it would appear that transactions are failing for some reason or another.  However, I repeat the disclaimer about this being a beta and not a production deployment.  It could have something to do with the resources allocated for each beta user.

5)  Subquery grammar
This was the initial test I ran, but I have saved it for last.  First of all, the command line:

./gentest.pl --gendata=conf/drizzle/drizzle.zz --grammar=conf/drizzle/optimizer_subquery_drizzle.yy --queries=100 --threads=1 --dsn=dbi:mysql:host= --sqltrace --debug

This test generates some very nasty subquery-laden queries (see below).  The first thing I noticed on the single-threaded run was that Xeround seemed to not like this query very much at all:

SELECT    table2 . `col_int` AS field1 FROM ( CC AS table1 STRAIGHT_JOIN ( ( CC AS table2 STRAIGHT_JOIN CC AS table3 ON (table3 . `col_bigint_key` = table2 . `col_int_not_null_key`  ) ) ) ON (table3 . `col_text_not_null_key` = table2 . `col_char_10_key`  ) ) WHERE (  table1 . `col_int` NOT IN ( SELECT   SUBQUERY1_t1 . `col_int_not_null_key` AS SUBQUERY1_field1 FROM ( BB AS SUBQUERY1_t1 INNER JOIN ( CC AS SUBQUERY1_t2 INNER JOIN BB AS SUBQUERY1_t3 ON (SUBQUERY1_t3 . `col_char_10_key` = SUBQUERY1_t2 . `col_char_10_key`  ) ) ON (SUBQUERY1_t3 . `col_char_10_not_null_key` = SUBQUERY1_t2 . `col_char_10`  ) ) WHERE SUBQUERY1_t2 . `col_bigint` != table1 . `pk` OR SUBQUERY1_t2 . `pk` >= table2 . `pk` ) ) OR ( table1 . `col_int_key`  BETWEEN 48 AND ( 48 + 183 ) OR table1 . `pk`  BETWEEN 48 AND ( 48 + 104 ) )  GROUP BY field1  ;

Now it is quite nasty, but standard MySQL executes it with a minimum of fuss (though it does take a moment to handle this monster as well).

The other thing is that Xeround took an exceedingly long time to execute this workload.  While the other grammars executed in moderate amounts of time (my testing was from a hotel room in Santa Clara while the instance is in Chicago), the subquery test was noticeably slow.  I was able to walk down to the lobby, buy something, and return to my room while it was dealing with the nasty query above : (  For some context, running the same command line on my laptop took 8 seconds, Xeround took 14 minutes, but again…beta test setup and hardware, so YMMV.

Finally, we have the dreaded row count report:

# 2011-04-27T20:45:19 Rows returned:
$VAR1 = {
'    0' => 59,
'    1' => 2,
'    4' => 1,
'   -1' => 35,
'>10' => 1,
'>100' => 1
};

MySQL 5.1:

# 2011-04-27T20:40:18 Rows returned:
$VAR1 = {
'    0' => 59,
'    1' => 2,
'    4' => 1,
'    9' => 1,
'   -1' => 35,
'>100' => 1
};

As we can see, there is 1 query out of the 100 issued where result sets differed (returning 9 rows in MySQL vs. >10 rows in Xeround).
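Spotting that one divergent query by eyeballing two reports gets old fast; one way to pinpoint which buckets differ between the two (using the numbers from the reports above):

```python
def histogram_diff(a, b):
    """Return the buckets whose query counts differ between two
    randgen rows-returned reports."""
    return {k: (a.get(k, 0), b.get(k, 0))
            for k in sorted(set(a) | set(b))
            if a.get(k, 0) != b.get(k, 0)}

xeround = {'0': 59, '1': 2, '4': 1, '-1': 35, '>10': 1, '>100': 1}
mysql51 = {'0': 59, '1': 2, '4': 1, '9': 1, '-1': 35, '>100': 1}
print(histogram_diff(xeround, mysql51))  # {'9': (0, 1), '>10': (1, 0)}
```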

I also tried using --threads=10 to really stress the Xeround system (I didn’t bother with MySQL; it handles 10 threads of nasty subqueries like a champ…incidentally, so does Drizzle) ; )  Xeround was able to handle the workload and did so in 27 minutes.  Since single-threaded took 14 minutes, perhaps Xeround doesn’t really begin to shine until we start hitting large numbers of concurrent connections?

So what can I say from the results of these informal tests?  Personally, I would hesitate to call Xeround a drop-in replacement.  The limitations on column sizes, the changes in table naming, and the differing result sets are a bit worrisome.  However, I will say that the Xeround engineers I met at the UC were very engaged and interested in my findings and have made significant strides in subquery processing since my initial tests.  I believe that with time these issues will be fixed and that not every customer will run into them (I know I’m beating this into the ground, but I was using a beta test system).  Behavior may be different on a production machine, not every MySQL user will generate such workloads, and every customer should perform their own careful testing and evaluation before making any changes to their systems.

My personal interest ends here.  The UC introduced me to a number of interesting new storage engines and I was mainly curious about ways of evaluating them.  This was a quick and dirty bit of testing just to see if I could produce any interesting pyrotechnics ; )  Go go randgen!

I really want this picture to be shown when anyone searches for 'randgen' ; )

In all seriousness, I highly recommend adoption of the random query generator.  It offers a modular and customizable system for creating evaluation tools (like result set comparison, execution time comparison, replication validation, etc, etc) and has been used in production-level testing for MySQL, MariaDB and Drizzle for some time.  It also plays with PostgreSQL and Java DB (kind of scary that 40% of that list is owned by Oracle…), so please give it a spin and see what kinds of pretty explosions you can make…who knows, testing might actually become fun for non-QA folks >; )

Additionally, these tests only took me about half an hour to set up and execute.  Granted, I have been using the tool for some time, but 30 minutes to identify a number of potential problem areas seems pretty awesome to me.  Then again, I am a QA Engineer and we live for such things.

Drizzle7 GA – testing summary

With the HUGE news of Drizzle’s first GA, I thought it appropriate that I spend some time discussing the testing that has gone into this release.

I tend to agree with Stewart’s assessment of our quality – it is solid and I think that you will find yourself pleasantly surprised and not at all angry when using it, but it is always in the eye of the user ; )  With that said, I did want to highlight some areas of interest.

To begin with, as we are a fork of MySQL, the bulk of our test suite comes directly from there as well.  Most of the standard mysql-test-run tests that are used to validate MySQL are also used for Drizzle.  All of the basics like creating tables, inserting data, etc – all tested just the same : )  Of course, we have also added significant numbers of new tests as development has progressed.

On top of this, we utilize a variety of other testing and benchmarking tools.  It seems appropriate to describe the life of a Drizzle patch really quickly:

Once a patch is approved for trunk, we merge it into our build branch.  This kicks off a set of Jenkins jobs on a variety of platforms that make sure the code will build and pass the main test suite.  If it fails, we usually ask the person who submitted the patch to sort things out.  We also provide for param-builds so that people can test their branches directly before proposing a merge – it is an invaluable tool.

Once the patch passes build, we move it into staging.  Staging consists of deeper tests and benchmarks.  We take lcov numbers, we run sysbench and dbt2, we utilize a variety of randgen tests to try to crash the server and to validate functionality like drizzledump and the transaction log.  In short, we really put our code through the wringer before we ever allow it into trunk.

For this release, my efforts were focused on a few particular areas:

Additional testing was provided by the community (many thanks to everyone who helped us out with feedback!) and by the individual developers.  We generally create test cases for our main suite.

We have a lot more interesting things planned for the future.  There are several intriguing new features on the horizon and we are also doing some cool things with our testing tools as well.