dbqp being renamed

One of the best things that can happen to a piece of software is for people to actually use it.

I’ve been fortunate enough to have received feedback on the tool from several members of both the Percona and Drizzle teams.  The most common and strongly emphasized comments were in regards to what a terrible, terrible name dbqp really is in terms of saying, seeing, and typing it ; )

As that isn’t something that can be disputed (it’s really annoying to use in conversations *and* to type several dozen times a day), the project has been renamed to kewpie.  For those that follow such things, I did present on another tool with that name at the last MySQL Conference, but *that* tool is a nice-to-have, while the test-runner sees daily use.  Better to save the good names for software that actually stands a chance of being used, I say : )

While there are probably 1*10^6 other things I need to do (Stewart is a merciless slave driver as a boss, btw…heheh), the fact that we are merging the tool into the various Percona branches meant it should be done sooner rather than later.  The tool is currently in our 5.1 branch and I have merge requests up for both Drizzle and Xtrabackup (dbqp was living there too).

I have several other interesting things going on with the tests and tool, which I’ll be blogging about over at MySQL Performance Blog.  Later this week, I’ll be talking about what we’ve been doing to work on this bug ; )


Also, the Percona Live MySQL Conference in DC is just around the corner.  There are going to be some great speakers and attendees.

dbqp and Xtrabackup testing

So I’m back from the Percona dev team’s recent meeting.  While there, we spent a fair bit of time discussing Xtrabackup development.  One of our challenges is that as we add richer features to the tool, we need equivalent testing capabilities.  However, it seems a constant in the MySQL world that available QA tools often leave something to be desired.  The randgen is a literal wonder-tool for database testing, but it is also occasionally frustrating / doesn’t scratch every testing itch.  It is based on technology SQL Server was using in 1998 (MySQL began using it in ~2007, IIRC).  So this is no knock, it is merely meant to be an example of a poor QA engineer’s frustrations ; )  While the current Xtrabackup test suite is commendable, it also has its limitations. Enter the flexible, adaptable, and expressive answer: dbqp.

One of my demos at the dev meeting was showing how we can set up tests for Xtrabackup using the unittest paradigm.  While this sounds fancy, basically, we take advantage of Python’s unittest and write test classes that use its code.  The biggest things dbqp does are searching the specified server code (to make sure we have everything we should), allocating and managing servers as requested by the test cases, and doing some reporting and management of the test cases.  As the tool matures, I will be striving to let more of the work be done by unittest code rather than things I have written : )

To return to my main point, we now have two basic tests of xtrabackup:

Basic test of backup + restore:

  1. Populate server
  2. Take a validation snapshot (mysqldump)
  3. Take the backup (via innobackupex)
  4. Clean datadir
  5. Restore from backup
  6. Take restored state snapshot and compare to original state

Slave setup

  1. Similar to our basic test except we create a slave from the backup, replicating from the backed up server.
  2. After the initial setup, we ensure replication is set up ok, then we do additional work on the master and compare master and slave states
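The basic backup + restore flow above can be sketched as a unittest case.  To be clear, this is a rough skeleton and not dbqp’s actual code: execute_cmd is stubbed out here (the real helper would shell out to mysqldump / innobackupex), but the shape of the steps and assertions is the point:

```python
import unittest

def execute_cmd(cmd):
    # Stub standing in for dbqp's command helper, which would really
    # run the command and return (retcode, output)
    return 0, "stub-output for: %s" % cmd

class BasicBackupTest(unittest.TestCase):
    """Skeleton of the basic backup + restore test described above"""

    def test_backup_and_restore(self):
        # 1-2: populate the server, take a validation snapshot (mysqldump)
        retcode, pre_snapshot = execute_cmd("mysqldump ...")
        self.assertEqual(retcode, 0, msg=pre_snapshot)
        # 3: take the backup via innobackupex
        retcode, output = execute_cmd("innobackupex ...")
        self.assertEqual(retcode, 0, msg=output)
        # 4-5: clean the datadir and restore from the backup
        retcode, output = execute_cmd("innobackupex --copy-back ...")
        self.assertEqual(retcode, 0, msg=output)
        # 6: snapshot the restored state and compare to the original
        retcode, post_snapshot = execute_cmd("mysqldump ...")
        self.assertEqual(pre_snapshot, post_snapshot)

suite = unittest.TestLoader().loadTestsFromTestCase(BasicBackupTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The slave test follows the same skeleton, just with extra steps to point a slave at the backed-up master before comparing states.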

One of the great things about this is that we have the magic of assertions.  We can insert them at any point of the test we feel like validating and the test will fail with useful output at that stage.  The backup didn’t take correctly?  No point going through any other steps — FAIL! : )  The assertion methods just make it easy to express what behavior we are looking for.  We want the innobackupex prepare call to run without error?
Boom goes the dynamite!:

# prepare our backup
cmd = ("%s --apply-log --no-timestamp --use-memory=500M "
       "--ibbackup=%s %s" % (innobackupex, xtrabackup, backup_path))
retcode, output = execute_cmd(cmd, output_path, exec_path, True)
self.assertEqual(retcode, 0, msg=output)
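For reference, a helper along the lines of execute_cmd can be written in a few lines of subprocess code.  This is my own sketch of the idea, not dbqp’s actual implementation (the real one does more around output logging and working directories):

```python
import subprocess

def execute_cmd(cmd, output_path=None, exec_path=None, get_output=True):
    # Run a shell command, optionally log its output to a file,
    # and return (retcode, output) -- a sketch of the idea only
    proc = subprocess.Popen(cmd, shell=True, cwd=exec_path,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode()
    if output_path:
        with open(output_path, "w") as out_file:
            out_file.write(output)
    return proc.returncode, output

retcode, output = execute_cmd("echo backup-ok")
```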

From these basic tests, it will be easy to craft more complex test cases.  Creating the slave test was simply a matter of adapting the initial basic test case slightly.  Our plans include: *heavy* crash testing of both xtrabackup and the server, enhancing / expanding replication tests by creating heavy randgen loads against the master during backup and slave setup, and other assorted crimes against database software.  We will also be porting the existing test suite to use dbqp entirely…who knows, we may even start working on Windows one day ; )

These tests are by no means the be-all-end-all, but I think they do represent an interesting step forward.  We can now write actual, honest-to-goodness Python code to test the server.  On top of that, we can make use of the included unittest module to give us all sorts of assertive goodness to express what we are looking for.  We will need to and plan to refine things as time moves forward, but at the moment, we are able to do some cool testing tricks that weren’t easily do-able before.

If you’d like to try these tests out, you will need the following:
* dbqp (bzr branch lp:dbqp)
* DBD::mysql installed (the tests use the randgen and this is required…hey, it is a WONDER-tool!) : )
* Innobackupex, a MySQL / Percona server and the appropriate xtrabackup binary.

The tests live in dbqp/percona_tests/xtrabackup_basic and are named basic_test.py and slave_test.py.

To run them:
$ ./dbqp.py --suite=xtrabackup_basic --basedir=/path/to/mysql --xtrabackup-path=/mah/path --innobackupex-path=/mah/other/path --default-server-type=mysql --no-shm

Some next steps for dbqp include:
1)  Improved docs
2)  Merging into the Percona Server trees
3)  Setting up test jobs in Jenkins (crashme / sqlbench / randgen)
4)  Other assorted awesomeness

Naturally, this testing goodness will also find its way into Drizzle (which currently has a 7.1 beta out).  We definitely need to see some Xtrabackup test cases for Drizzle’s version of the tool (mwa ha ha!) >: )

Drizzle / dbqp updates

Just wanted to blog about some of the latest updates to dbqp.  We just merged some interesting changes into Drizzle (just in time for the impending Fremont beta).  In addition to general code cleanup / reorganization, we have the following goodies:

Randgen in the Drizzle tree

One of the biggest things is that the random query generator (aka randgen) is now part of the Drizzle tree.  While I did some of the work here, the major drivers of this happening were Brian and Stewart:

  1. Brian makes a fair argument that the easier / more convenient it is to run a test, the greater the likelihood of it being run.  Additional tools to install, etc = not so much.  Having something right there and ready to go = win!
  2. Stewart is also a fan of convenience, lotsa testing, and working smarter, not harder.  As a result, he did the initial legwork on merging the randgen.  I do suspect there is still much for me to learn about properly bzr joining trees and whatnot, but we’ll get it right soon enough ; )

This doesn’t mean we won’t be contributing any changes we make back to the main randgen project / branch; the merge is strictly to facilitate more testing for Drizzle.  As we already have our randgen tests packaged into dbqp-runnable suites, running these tests is even easier : )

--libeatmydata

Another request fulfilled in this update is the ability to use Stewart’s libeatmydata to speed up testing.  By default, dbqp uses shared memory as a workdir, similar to mysql-test-run’s --mem option (this can be bypassed in dbqp with --no-shm, fyi).  However, this isn’t always perfect or desirable to do.

An alternative is to use libeatmydata, which disables fsync() calls.  As the name implies, you don’t want to use it if you care about your data, but for general testing purposes, it can greatly speed up test execution.

If you have the library installed / on your machine, you can use it like so:  ./dbqp --libeatmydata [--libeatmydata-path ] …

By default, libeatmydata-path is /usr/local/lib/libeatmydata.so (as if you used make install).
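The mechanics here are simple: libeatmydata is an LD_PRELOAD library, so its do-nothing fsync() shadows the real one in whatever process loads it.  Here is a sketch of how a test runner could start a server that way (the library path is the default mentioned above; the function and command are illustrative, not dbqp’s actual code):

```python
import os
import subprocess

def start_with_eatmydata(server_cmd,
                         lib_path="/usr/local/lib/libeatmydata.so"):
    # Launch server_cmd with libeatmydata preloaded so fsync() is a no-op
    env = os.environ.copy()
    env["LD_PRELOAD"] = lib_path
    return subprocess.Popen(server_cmd, env=env)

# demo with a harmless command (and no real preload) instead of mysqld
proc = start_with_eatmydata(["true"], lib_path="")
retcode = proc.wait()
```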

Multiple server types

IMHO, this is one of the coolest new tricks.  dbqp can now handle more than just Drizzle servers / source!  The ultimate idea is to enable more interesting tests that utilize more than one type / version of a server : )  This will be useful for scenarios like testing Drizzledump migration as we can feed in one (or more) MySQL servers and a Drizzle tree and make sure we can migrate data from all of them.

We also intend to utilize dbqp for testing a variety of Percona products, and it is kind of handy to be able to run the code you are testing ; )  I already have the tool running Percona / MySQL servers and have some randgen tests working:


$ ./dbqp.py --default_server_type=mysql --basedir=/percona-server/Percona-Server --mode=randgen
Setting --no-secure-file-priv=True for randgen usage...
20111013-163443 INFO Linking workdir /dbqp/workdir to /dev/shm/dbqp_workdir_pcrews_9dbc7e8a-2872-45a9-8a07-f347f6184246
20111013-163443 INFO Using mysql source tree:
20111013-163443 INFO basedir: /percona-server/Percona-Server
20111013-163443 INFO clientbindir: /percona-server/Percona-Server/client
20111013-163443 INFO testdir: /dbqp
20111013-163443 INFO server_version: 5.5.16-rel21.0
20111013-163443 INFO server_compile_os: Linux
20111013-163443 INFO server_platform: x86_64
20111013-163443 INFO server_comment: (Percona Server with XtraDB (GPL), Release rel21.0, Revision 188)
20111013-163443 INFO Using default-storage-engine: innodb
20111013-163443 INFO Using testing mode: randgen
20111013-163443 INFO Processing test suites...
20111013-163443 INFO Found 5 test(s) for execution
20111013-163443 INFO Creating 1 bot(s)
20111013-163449 INFO Taking clean db snapshot...
20111013-163452 INFO bot0 server:
20111013-163452 INFO NAME: s0
20111013-163452 INFO MASTER_PORT: 9307
20111013-163452 INFO SOCKET_FILE: /dbqp/workdir/bot0/s0/var/s0.sock
20111013-163452 INFO VARDIR: /dbqp/workdir/bot0/s0/var
20111013-163452 INFO STATUS: 1
20111013-163506 ===============================================================
20111013-163506 TEST NAME [ RESULT ] TIME (ms)
20111013-163506 ===============================================================
20111013-163506 main.blob [ pass ] 8624
20111013-163516 main.create_drop [ pass ] 2862
20111013-163524 main.many_indexes [ pass ] 1429
20111013-163547 main.optimizer_subquery [ pass ] 17153
20111013-163558 main.outer_join [ pass ] 4243
20111013-163558 ===============================================================
20111013-163558 INFO Test execution complete in 69 seconds
20111013-163558 INFO Summary report:
20111013-163558 INFO Executed 5/5 test cases, 100.00 percent
20111013-163558 INFO STATUS: PASS, 5/5 test cases, 100.00 percent executed
20111013-163558 INFO Spent 34 / 69 seconds on: TEST(s)
20111013-163558 INFO Test execution complete
20111013-163558 INFO Stopping all running servers...

Expect to see this up and running tests against Percona Server in the next week or so.  I’ll be writing more about this soon.

Native / unittest mode

This hasn’t made it into the Drizzle tree yet.  To ease merging the code with Percona Server / Xtrabackup, I’ve created a separate launchpad project.  One of the things we needed was the ability to write complex tests directly.  It is currently easy to plug new tools into dbqp, but we essentially needed a new tool for certain testing needs.

Our solution for this was to allow dbqp to run python unittest modules.  We still have a bit of work to do before we have some demo tests ready, but we will be creating some expanded Xtrabackup tests using this system very soon.  So far, it is turning out to be pretty neat:


./dbqp.py --default_server_type=mysql --basedir=/percona-server/Percona-Server --mode=native
20111013-190744 INFO Killing pid 1747 from /dbqp/workdir/bot0/s0/var/run/s0.pid
20111013-190744 INFO Linking workdir /dbqp/workdir to /dev/shm/dbqp_workdir_pcrews_9dbc7e8a-2872-45a9-8a07-f347f6184246
20111013-190744 INFO Using mysql source tree:
20111013-190744 INFO basedir: /percona-server/Percona-Server
20111013-190744 INFO clientbindir: /percona-server/Percona-Server/client
20111013-190744 INFO testdir: /dbqp
20111013-190744 INFO server_version: 5.5.16-rel21.0
20111013-190744 INFO server_compile_os: Linux
20111013-190744 INFO server_platform: x86_64
20111013-190744 INFO server_comment: (Percona Server with XtraDB (GPL), Release rel21.0, Revision 188)
20111013-190744 INFO Using default-storage-engine: innodb
20111013-190744 INFO Using testing mode: native
20111013-190744 INFO Processing test suites...
20111013-190744 INFO Found 1 test(s) for execution
20111013-190744 INFO Creating 1 bot(s)
20111013-190749 INFO Taking clean db snapshot...
20111013-190750 INFO bot0 server:
20111013-190750 INFO NAME: s0
20111013-190750 INFO MASTER_PORT: 9306
20111013-190750 INFO SOCKET_FILE: /dbqp/workdir/bot0/s0/var/s0.sock
20111013-190750 INFO VARDIR: /dbqp/workdir/bot0/s0/var
20111013-190750 INFO STATUS: 1
20111013-190756 ===============================================================
20111013-190756 TEST NAME [ RESULT ] TIME (ms)
20111013-190756 ===============================================================
20111013-190756 main.example_test [ pass ] 1
20111013-190756 test_choice (example_test.TestSequenceFunctions) ... ok
20111013-190756 test_sample (example_test.TestSequenceFunctions) ... ok
20111013-190756 test_shuffle (example_test.TestSequenceFunctions) ... ok
20111013-190756
20111013-190756 ----------------------------------------------------------------------
20111013-190756 Ran 3 tests in 0.000s
20111013-190756
20111013-190756 OK
20111013-190756
20111013-190756 ===============================================================
20111013-190756 INFO Test execution complete in 6 seconds
20111013-190756 INFO Summary report:
20111013-190756 INFO Executed 1/1 test cases, 100.00 percent
20111013-190756 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20111013-190756 INFO Spent 0 / 6 seconds on: TEST(s)
20111013-190756 INFO Test execution complete
20111013-190756 INFO Stopping all running servers...

This really only scratches the surface of what can happen, but I’ll be writing more in-depth articles on what kind of tricks we can pull off as the code gets more polished.

Three non-testing bits:

1)  Percona Live London is just around the corner and members of the Drizzle team will be there.

2)  We are *this* close to Fremont beta being ready.  The contributions and feedback have been most welcome.  Any additional testing / etc are most appreciated.

3)  Drizzle is now part of the SPI!


Drizzle’s Jenkins system using dbqp for randgen and crashme testing

Well, that’s pretty much it, thanks for stopping by ; )

In all seriousness, it’s kind of neat that we’re using dbqp to run some of our staging tests and we gain a few neat things:

Speed

Here are the trend charts for randgen and crashme.  While it doesn’t look like randgen is showing much of an improvement, it is worth mentioning that this job now runs both the standard and the transaction log tests in a single run >: )  Previously, we had a separate drizzle-automation job for the transaction log.  Just the trx_log tests took ~30 minutes to run (plus build time).  Long story short, we’re saving about 30-40 minutes on randgen testing per staging run and only needing to build once!

Maintainability

The jobs we run are in the tree and anyone can easily repeat them.  While Drizzle-automation kicks major butt (and I have taken many ideas from it), it is a separate piece of software that requires setup and maintenance.  Basing things around an in-tree setup means that you only need the code and any required bits and pieces.  Now if we need to set up a new randgen machine, we only need the randgen and dbd::drizzle installed (and we plan on including randgen in-tree soon, so you won’t even need that!).  If we need to set up a new crash-me machine, we only need dbd::drizzle – and everyone should have dbd::drizzle installed! ; )

Ease of use

Pretty much all tests provide the same standard output:

dtr mode

From the command:

./dbqp

Our default mode is dtr (aka using drizzletest.cc to execute standard .test files). To run all available tests, use the make target: make test-dbqp

20110621-081404  trigger_dictionary.loaded                  [ pass ]       43
20110621-081408  logging_stats.cumulative                   [ pass ]     1045
20110621-081412  errmsg_stderr.stderr                       [ pass ]       36
20110621-081412  ===============================================================
20110621-081412 INFO Test execution complete in 496 seconds
20110621-081412 INFO Summary report:
20110621-081412 INFO Executed 566/566 test cases, 100.00 percent
20110621-081412 INFO STATUS: PASS, 566/566 test cases, 100.00 percent executed
20110621-081412 INFO Spent 254 / 496 seconds on: TEST(s)
20110621-081412 INFO Test execution complete
20110621-081412 INFO Stopping all running servers...

randgen mode

From the command:

./dbqp --mode=randgen --randgen-path=/path/to/your/randgen


20110621-170141  main.subquery                              [ pass ]     3780
20110621-170148  main.subquery_semijoin                     [ pass ]     3016
20110621-170156  main.subquery_semijoin_nested              [ pass ]     3750
20110621-170202  main.varchar                               [ pass ]     2658
20110621-170202  ===============================================================
20110621-170202 INFO Test execution complete in 147 seconds
20110621-170202 INFO Summary report:
20110621-170202 INFO Executed 19/19 test cases, 100.00 percent
20110621-170202 INFO STATUS: PASS, 19/19 test cases, 100.00 percent executed
20110621-170202 INFO Spent 77 / 147 seconds on: TEST(s)
20110621-170202 INFO Test execution complete
20110621-170202 INFO Stopping all running servers...

crashme mode

From the command:

./dbqp --mode=crashme


20110621-181515  main.crashme                               [ fail ]   149840
20110621-181515  func_extra_to_days=error        # Function TO_DAYS
20110621-181515  ###
20110621-181515  ###<select to_days('1996-01-01') from crash_me_d
20110621-181515  ###>2450084
20110621-181515  ###We expected '729024' but got '2450084'
20110621-181515  func_odbc_timestampadd=error        # Function TIMESTAMPADD
20110621-181515  ###
20110621-181515  ###<select timestampadd(SQL_TSI_SECOND,1,'1997-01-01 00:00:00')
20110621-181515  ###>1997-01-01 00:00:01.000000
20110621-181515  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110621-181515  ###
20110621-181515  ###<select {fn timestampadd(SQL_TSI_SECOND,1,{ts '1997-01-01 00:00:00'}) }
20110621-181515  ###>1997-01-01 00:00:01.000000
20110621-181515  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110621-181515
20110621-181515 ERROR Failed test.  Use --force to execute beyond the first test failure
20110621-181515  ===============================================================
20110621-181515 INFO Test execution complete in 153 seconds
20110621-181515 INFO Summary report:
20110621-181515 INFO Executed 1/1 test cases, 100.00 percent
20110621-181515 INFO STATUS: FAIL, 1/1 test cases, 100.00 percent executed
20110621-181515 INFO FAIL tests: main.crashme
20110621-181515 INFO Spent 149 / 153 seconds on: TEST(s)
20110621-181515 INFO Test execution complete
20110621-181515 INFO Stopping all running servers...

While this isn’t a huge feature, it is nice to have a standardized report for knowing if something failed, what failed and how (we always dump test tool output on test failures).  Why is this nice?  Well, the world is a busy place and only needing to know one way of reading test output simplifies things just a teensy little bit.  This small improvement becomes a huge benefit over time if you happen to spend good chunks of your day looking at test output like me : )

Other than that, I’m still working on teaching dbqp interesting new tricks that will help me in testing SkySQL‘s Reference Architecture – expect to hear more about that next month!

Drizzle testing – now with more server stressing goodness!

One of the long term testing goals for Drizzle is to move all of our test logic directly in-tree.  Currently, we use a system called drizzle-automation to execute a variety of tests for our staging branch.  This is the final set of tests patches must pass before being allowed to merge into Drizzle trunk and includes things like sysbench, dbt2, the randgen, etc.  With the development of dbqp, we can now move this testing logic directly into the tree (and even move some of the testing tools there as well).  Of course, I’ve rambled on about this before, but I personally think it is cool and useful ; )  However enough of the sales pitch, on to the new modes!

sysbench mode

With but a simple incantation of ./dbqp --mode=sysbench [--suite=readonly|readwrite], you too can invoke the mighty sysbench configurations that we use to ensure each and every Drizzle patch is worth its salt!

Basically, each test case is a sysbench command line for a certain concurrency:


sysbench --max-time=240 --max-requests=0 --test=oltp --db-ps-mode=disable --drizzle-table-engine=innodb --oltp-read-only=off --oltp-table-size=1000000 --drizzle-mysql=on --drizzle-user=root --drizzle-db=test --drizzle-port=$MASTER_MYPORT --drizzle-host=localhost --db-driver=drizzle --num-threads=32

readonly and readwrite suites differ only with the --oltp-read-only switch being on|off.
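Since each test case is just a command line, a suite can be generated programmatically.  A rough sketch of how the readonly / readwrite variants might be produced (the option values are taken from the sysbench invocation above; the generator itself is my own illustration, not how the suites are actually built):

```python
def sysbench_cmd(concurrency, read_only, master_port=9306):
    # Build a sysbench oltp command line like the one shown above,
    # varying only concurrency and the read-only switch
    return ("sysbench --max-time=240 --max-requests=0 --test=oltp "
            "--db-ps-mode=disable --drizzle-table-engine=innodb "
            "--oltp-read-only=%s --oltp-table-size=1000000 "
            "--drizzle-mysql=on --drizzle-user=root --drizzle-db=test "
            "--drizzle-port=%d --drizzle-host=localhost "
            "--db-driver=drizzle --num-threads=%d"
            % ("on" if read_only else "off", master_port, concurrency))

readonly_suite = [sysbench_cmd(c, True) for c in (16, 32, 64)]
readwrite_suite = [sysbench_cmd(c, False) for c in (16, 32, 64)]
```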

The output looks like this (at present):

20110601-191706  ===============================================================
20110601-191706  TEST NAME                                  [ RESULT ] TIME (ms)
20110601-191706  ===============================================================
20110601-191706  readonly.concurrency_16                    [ pass ]   240019
20110601-191706  max_req_lat_ms: 21.44
20110601-191706  rwreqps: 4208.2
20110601-191706  min_req_lat_ms: 6.31
20110601-191706  deadlocksps: 0.0
20110601-191706  tps: 150.29
20110601-191706  avg_req_lat_ms: 6.65
20110601-191706  95p_req_lat_ms: 7.02
20110601-191706  ===============================================================
20110601-191706 INFO Test execution complete in 275 seconds
20110601-191706 INFO Summary report:
20110601-191706 INFO Executed 1/1 test cases, 100.00 percent
20110601-191706 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20110601-191706 INFO Spent 240 / 275 seconds on: TEST(s)
20110601-191706 INFO Test execution complete
20110601-191706 INFO Stopping all running servers...

This is probably the most ‘work-in-progress’ mode we have.  The reason for this is that our Jenkins system uses a database of previous results for comparison / emailing and we need to come up with some way to keep this bit working properly.  I’m still collaborating with the mighty computing wizard Monty Taylor on this.  One of the possibilities we’ve discussed is the use of the Phoronix Test Suite.  Personally, I think this looks pretty interesting / promising and if any php gurus want to assist here, we will compose ballads to honor your awesomeness.

sqlbench mode

Technically, sqlbench and crashme modes are both tied to the sql-bench test suite; however, they do different things and produce different output, so I will discuss them separately.

The biggest thing to note is that sql-bench is now in-tree.  You can read a bit more about this tool here and here.

This mode basically calls the run-all-tests sql-bench script.  This executes all of the available tests for sql-bench and reports on the results (dbqp will fail if any sql-bench test does).  NOTE: this takes some time (~45 minutes on my laptop)

To use it:
./dbqp --mode=sqlbench

Output:

20110608-135645  ===============================================================
20110608-135645  TEST NAME                                  [ RESULT ] TIME (ms)
20110608-135645  ===============================================================
20110608-135645  main.all_sqlbench_tests                    [ pass ]  2732007
20110608-135645  Test finished. You can find the result in:
20110608-135645  drizzle/tests/workdir/RUN-drizzle-Linux_2.6.38_9_generic_x86_64
20110608-135645  Benchmark DBD suite: 2.15
20110608-135645  Date of test:        2011-06-08 13:11:10
20110608-135645  Running tests on:    Linux 2.6.38-9-generic x86_64
20110608-135645  Arguments:           --connect-options=port=9306 --create-options=ENGINE=innodb
20110608-135645  Comments:
20110608-135645  Limits from:
20110608-135645  Server version:      Drizzle 2011.06.19.2325
20110608-135645  Optimization:        None
20110608-135645  Hardware:
20110608-135645
20110608-135645  alter-table: Total time: 42 wallclock secs ( 0.06 usr  0.04 sys +  0.00 cusr  0.00 csys =  0.10 CPU)
20110608-135645  ATIS: Total time: 22 wallclock secs ( 4.01 usr  0.26 sys +  0.00 cusr  0.00 csys =  4.27 CPU)
20110608-135645  big-tables: Total time: 24 wallclock secs ( 4.16 usr  0.22 sys +  0.00 cusr  0.00 csys =  4.38 CPU)
20110608-135645  connect: Total time: 31 wallclock secs ( 6.81 usr  4.50 sys +  0.00 cusr  0.00 csys = 11.31 CPU)
20110608-135645  create: Total time: 59 wallclock secs ( 2.93 usr  1.65 sys +  0.00 cusr  0.00 csys =  4.58 CPU)
20110608-135645  insert: Total time: 1962 wallclock secs (270.53 usr 66.35 sys +  0.00 cusr  0.00 csys = 336.88 CPU)
20110608-135645  select: Total time: 560 wallclock secs (23.12 usr  4.62 sys +  0.00 cusr  0.00 csys = 27.74 CPU)
20110608-135645  transactions: Total time: 21 wallclock secs ( 2.43 usr  1.98 sys +  0.00 cusr  0.00 csys =  4.41 CPU)
20110608-135645  wisconsin: Total time: 10 wallclock secs ( 2.11 usr  0.52 sys +  0.00 cusr  0.00 csys =  2.63 CPU)
20110608-135645
20110608-135645  All 9 test executed successfully
20110608-135645
20110608-135645  Totals per operation:
20110608-135645  Operation             seconds     usr     sys     cpu   tests
20110608-135645  alter_table_add                       18.00    0.02    0.00    0.02     100
20110608-135645  alter_table_drop                      17.00    0.02    0.01    0.03      91
20110608-135645  connect                                2.00    1.02    0.51    1.53    2000
<snip>
20110608-135645  update_rollback                        3.00    0.26    0.23    0.49     100
20110608-135645  update_with_key                       73.00    6.70    5.23   11.93  300000
20110608-135645  update_with_key_prefix                34.00    4.45    2.30    6.75  100000
20110608-135645  wisc_benchmark                         2.00    1.49    0.00    1.49     114
20110608-135645  TOTALS                              2865.00  310.26   79.94  390.20 2974250
20110608-135645
20110608-135645  ===============================================================
20110608-135645 INFO Test execution complete in 2735 seconds
20110608-135645 INFO Summary report:
20110608-135645 INFO Executed 1/1 test cases, 100.00 percent
20110608-135645 INFO STATUS: PASS, 1/1 test cases, 100.00 percent executed
20110608-135645 INFO Spent 2732 / 2735 seconds on: TEST(s)
20110608-135645 INFO Test execution complete
20110608-135645 INFO Stopping all running servers...

crashme mode

This mode is also provided thanks to the sql-bench suite, but the output and processing are different, thus a separate mode and section : )

Anyway, there is a script called crash-me that is provided with sql-bench.  We execute this script, look for any test failures in the output and report pass/fail.
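The pass/fail decision can be as simple as scanning the crash-me output for probes flagged as errors.  A minimal sketch of that idea (the `=error` marker matches the sample crash-me output in this post; the parsing itself is illustrative, not dbqp’s actual code):

```python
def crashme_passed(output):
    # crash-me reports a failing capability probe as 'name=error';
    # collect those lines and report overall pass/fail
    failures = [line for line in output.splitlines() if "=error" in line]
    return len(failures) == 0, failures

sample = """\
func_extra_to_days=error  # Function TO_DAYS
func_odbc_timestampadd=error  # Function TIMESTAMPADD
"""
passed, failures = crashme_passed(sample)
```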

There is an interesting story around these tests (and the sample output): our Jenkins crashme slave has been down / having problems for a while.  Due to life and whatnot, we’ve had some issues getting it sorted.  However, once I got this mode up and running, I discovered that we were failing some tests:

20110608-152759  ===============================================================
20110608-152759  TEST NAME                                  [ RESULT ] TIME (ms)
20110608-152759  ===============================================================
20110608-152759  main.crashme                               [ fail ]   155298
20110608-152759  func_extra_to_days=error           # Function TO_DAYS
20110608-152759  ###
20110608-152759  ###<select to_days('1996-01-01') from crash_me_d
20110608-152759  ###>2450084
20110608-152759  ###We expected '729024' but got '2450084'
20110608-152759  func_odbc_timestampadd=error               # Function TIMESTAMPADD
20110608-152759  ###
20110608-152759  ###<select timestampadd(SQL_TSI_SECOND,1,'1997-01-01 00:00:00')
20110608-152759  ###>1997-01-01 00:00:01.000000
20110608-152759  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110608-152759  ###
20110608-152759  ###<select {fn timestampadd(SQL_TSI_SECOND,1,{ts '1997-01-01 00:00:00'}) }
20110608-152759  ###>1997-01-01 00:00:01.000000
20110608-152759  ###We expected '1997-01-01 00:00:01' but got '1997-01-01 00:00:01.000000'
20110608-152759
20110608-152759 ERROR Failed test.  Use --force to execute beyond the first test failure
20110608-152759  ===============================================================
20110608-152759 INFO Test execution complete in 158 seconds
20110608-152759 INFO Summary report:
20110608-152759 INFO Executed 1/1 test cases, 100.00 percent
20110608-152759 INFO STATUS: FAIL, 1/1 test cases, 100.00 percent executed
20110608-152759 INFO FAIL tests: main.crashme
20110608-152759 INFO Spent 155 / 158 seconds on: TEST(s)
20110608-152759 INFO Test execution complete

So, while our tests were down, an ugly bug crept into the works.  Of course, it is terrible that we have a bug, but we can always bzr bisect our way to the culprit code (expect a mode for that soon!) and we see the value of constant testing!  At any rate, we can now get our Jenkins slave back in working order and any developer or user that wants to stress their server now has an easy way to do so : )

Upcoming work

I’ve also been doing some cleaning up / reorganizing of dbqp code to allow for other neat tricks.  These changes will enable it to run other servers such as MySQL and allow it to serve as the basis of test suites for tools like mydumper and xtrabackup – I’ve already been discussing things with Stewart and Andrew about this and will be blogging / demoing the code very soon.

Additionally, we’re going to also see about moving the randgen into the Drizzle tree.  We use it for a significant portion of our testing and through the magic of bzr join, it will be easy to provide this tool for everyone (provided they have dbd::drizzle installed, of course).  Stewart was kind enough to set up an initial tree, I’ve just been too busy with SkySQL work to get it done this week.

Finally, we’re still moving forward with making dbqp *the* Drizzle test runner.  This is naturally happening in baby steps, but expect to see some changes in the next month or so.

With that said, I hope that people will enjoy playing with the new toys and I look forward to providing more fun ways of making your favorite dbms sweat in the near future >: )

New dbqp feature – using pre-created datadirs for tests

Why would one want to do this, you may ask?  Well, for starters, it makes a great ‘canary-in-the-coal-mine‘ in regards to backwards compatibility!

For Drizzle, we’ve created some tables (via the randgen’s data generator if you are curious), saved a copy of the datadir, and then created a test case that uses said datadir for the test server.  The test executes some simple SQL queries to make sure we can read the tables properly.  This way, if we ever do something to either the server or .dfe format (data format exchange – had a most enlightening conversation with the team about this format’s history at the MySQL UC), we’ll have a broken test that cries about it.  From there, we’ll know we have to take some action.  The always-amazing Stewart Smith has also created some foreign key backwards compatibility tests, which I believe marks further progress towards the magical goodness that is catalogs!

We signal that we want to do this by using a .cnf file:


[test_servers]
servers = [[]]

[s0]
load-datadir=backwards_compat_data

Each server is named s0, s1, … sN.  If a server name is contained in the .cnf file, the test-runner will do the appropriate magic to use the specified datadir for that server.  The argument to load-datadir is the name of the directory that is intended for use in the test.  All datadirs are expected to live in drizzle/tests/std_data.  Tests that use a .cnf file, like main.backwards_compatibility and slave.basic, are skipped by test-run.pl automatically (you *can’t* run them via test-run.pl).
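For the curious, here is a minimal sketch of how a runner could pick up such per-server options with Python’s ConfigParser.  The section and option names mirror the example above; the function itself (`get_server_requests`) is illustrative, not dbqp’s actual code:

```python
# Sketch: reading per-server options from a test's .cnf file.
# Section/option names follow the example above; everything else is illustrative.
from configparser import ConfigParser

def get_server_requests(cnf_path):
    """Return {server_name: datadir} for servers that request a pre-created datadir."""
    parser = ConfigParser()
    parser.read(cnf_path)
    requests = {}
    for section in parser.sections():
        # server sections are named s0, s1, ... sN
        if section.startswith('s') and section[1:].isdigit():
            if parser.has_option(section, 'load-datadir'):
                # datadirs are expected to live under drizzle/tests/std_data
                requests[section] = parser.get(section, 'load-datadir')
    return requests
```

From there, the runner knows which servers need their vardir seeded from std_data before startup.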

This is something that I don’t believe could be accomplished with the old test runner, or at least not *easily* done (see Rube Goldberg) ; ).  At some point, we will switch over to dbqp entirely and remove test-run.pl.  Seeing comments like this makes me happy and makes me think things are on track.

dbqp was created with the idea that it should be easy to express complex testing setups (multiple servers, using a preloaded datadir, etc, etc) and it looks like the incubation is starting to pay dividends.  In addition to allowing this voodoo to happen, the code I’ve added to the test runner will allow us to start doing proper tests of the super Mr. Shrewsbury’s multi-master replication.  Joe Daly has also been doing some very promising work for hierarchical replication based on Dave’s tree.  I’ll be creating some example tests for these badass features soon.  The moral of the story is that by rethinking our test-runner, one tiny bit of code helps us move the ball forward on testing replication, backwards compatibility, and catalogs.

It’s honestly one of the best parts of working on the Drizzle project – being encouraged to experiment and rethink problems has enabled all sorts of innovation (Monty Taylor’s computing wizardry being but one example!) and cool features.  Thanks to this freedom to experiment, we now have even more ways of making sure we are producing quality code.

My view of QA is that we do help test, but that we also help other people answer their own questions about quality (via tools, documentation, examples, etc).  Ultimately, a test is a question – “Do you return the right answer for this query?”, “Can you survive a beating from the randgen?”, etc – and asking questions should be easy and informative.  QA shouldn’t be the sole province of some obscure priest class, but everyone’s playground.  When I see developers like Stewart writing interesting test cases and even contributing to the test tool itself, I’m even happier than when I find a bug (and finding bugs is quite awesome!).

Anyway, the code is proposed for a merge to trunk and documentation is available (under testing/writing test cases).  I hope that this makes trying to break things even more fun for people >: )

More on kewpie (the query probulator)

My presentation from the MySQL UC didn’t give a lot of detail on the actual tool I have hacked up, nor did it go into how to play with it / try it out.  I figured I should rectify that (at least one person seemed interested in trying it out <g>).

To begin with, you should have the random query generator installed (see the docs for handling that).  Besides being *the* cutting edge, production-ready testing tool in the open-source dbms world, it comes with a handy data generator.

One of the key features of kewpie is that it can easily generate test queries against any test bed.  A standard randgen practice is to develop grammars and gendata files (which generate a user-specified test bed) that are designed to work together.  By knowing which tables and columns will be available to the randgen at runtime, the grammar writer can tune the randgen to produce a higher percentage of valid queries.

It is possible to just use the built-in _field element, which randomly retrieves some available field; however, being able to match columns by data type (for both joins and comparisons) results in much more interesting queries >:)  At some point, the randgen will likely be improved to overcome this, but it is a sad fact of QA that we often spend more time producing tests than beefing up our testing infrastructure.
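To make the idea concrete, here is a rough sketch (not kewpie’s actual code) of picking a join comparison from columns grouped by data type, so generated ON clauses compare like with like.  The table/column metadata is purely illustrative:

```python
import random

# Sketch: choose a type-compatible column pair for a join condition.
# The table/column metadata below is illustrative, not from kewpie itself.
columns = {
    'table_1': {'pk': 'int', 'col_char_10': 'char', 'col_int_key': 'int'},
    'table_2': {'pk': 'int', 'col_char_1024': 'char'},
}

def join_condition(t1, t2):
    """Return an ON-clause snippet comparing two same-typed columns, or None."""
    by_type = {}
    for col, ctype in columns[t2].items():
        by_type.setdefault(ctype, []).append(col)
    candidates = [(c1, c2)
                  for c1, ctype in columns[t1].items()
                  for c2 in by_type.get(ctype, [])]
    if not candidates:
        return None  # fall back to something like the randgen's _field
    c1, c2 = random.choice(candidates)
    return "%s.%s = %s.%s" % (t1, c1, t2, c2)
```

Grouping by type first is what keeps the generated comparisons semantically sensible instead of, say, comparing a primary key to a text blob.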

At any rate, the kewpie demos are designed to work with the random data generator.  It is a very cool tool, and one can also use it with --start-and-exit to have a populated test server.

Requirements:

  • randgen
  • dbd::drizzle (see randgen docs)
  • MySQLDB (Drizzle speaks the MySQL protocol.  MySQLDB enables us to play well with SQLAlchemy too!)
  • the demo branch from launchpad: bzr branch lp:~patrick-crews/drizzle/dbqp_kewpie_demo

It is important to remember that kewpie is more of a science project than something you’ll use for testing any time soon.  It is meant to help illustrate the power behind large-scale automated query generation and analysis, but it pales in comparison to the mighty, mighty randgen.  However, if you are interested, please read on : )

Config files

kewpie runs are controlled via a .cnf file.  Python has a very nice ConfigParser library and it seems like a solid way of organizing this information and getting at it.  Also, the very well-designed drizzle-automation uses similar files.  I’ll just digress a bit here to give big props to Jay Pipes of the fu for his work here.  It has informed a lot of the infrastructure work I’ve been doing for Drizzle. : )

test_info section:


[test_info]
comment = basic test of kewpie seeding
test_schema = test
init_query_count = 2
init_table_count = 3
init_column_count = 4
init_where_count = 0
# limits for various values
max_table_count = 10
max_column_count = 25
max_where_count = 10
max_mutate_count = 3

This section seeds the initial query population.  In the example above, we produce 2 queries that each have 4 columns and use 3 tables (and no WHERE clause).  It is an eventual dream to have more fine-grained control over such things, but this was a proof-of-concept as much as anything.

Next we have limits.  We don’t want to go over 10 tables, 25 columns (in the SELECT clause), or 10 conditions in the WHERE clause.  We also set max_mutate_count so that only 3 mutant queries will ever be produced from a seed.  Setting it higher means more possible variants from each query.

mutators section:


[mutators]
add_table = 5
add_column = 1
add_where = 3

At the moment, kewpie only has 3 ways to manipulate a query – add_table, add_column, and add_where.  These should be fairly self-explanatory ; )  The vision is that these will eventually have a variety of parameters that can be set, so that we can one day ask that we only add conditions to the WHERE clause that use an integer column, for example.  The numeric value following each mutator name is how we stack the deck in favor of one mutator over another.  When we evaluate this section, we create a Python list that contains N occurrences of each mutator name; when it comes time to mutate a query, we randomly choose one mutator from the list and then call that method against the query.
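In other words, the weighting scheme amounts to something like this small sketch (the mutator names and weights come from the section above; the variable names are mine):

```python
import random

# Sketch of the weighted-choice scheme described above: each mutator name
# appears in the pool N times, where N is its weight from the [mutators] section.
weights = {'add_table': 5, 'add_column': 1, 'add_where': 3}

mutator_pool = []
for name, count in weights.items():
    mutator_pool.extend([name] * count)

def pick_mutator():
    """Randomly pick a mutator; here add_table is 5x as likely as add_column."""
    return random.choice(mutator_pool)
```

It’s a simple trick, but it makes biasing the mutation mix as easy as editing an integer in the .cnf file.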

test_servers section:


[test_servers]
servers = [[--innodb.replication-log]]

As we do in other dbqp tests, we provide a list of Python lists.  Each sublist represents the server options we want to use for the test server.  At present, there is no need to start more than 1 server, though there may be value in altering certain options.

evaluators section:


[evaluators]
row_count = True
explain_output = False

Currently, row_count is the only implemented evaluator.  It ensures that at least one row of data was returned for a given query.  It is surprising how valuable just this tiny filter can be.  In Microsoft’s research, they found that purely random systems only produced valid queries 50% of the time; the remainder tended to short out at the parser level.  The evaluator is what helps us produce useful queries; the mutators are what help the system hit its evaluation targets (whatever they may be).

Future evaluators can measure code-coverage, server variable effect, log file effect, pretty much anything.  We want testing to be flexible and have borrowed heavily from the modular Validator and Reporter design of the randgen.
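A minimal sketch of what a pluggable evaluator could look like, in the spirit of the randgen’s Validators.  The class names and the `(status, extra)` return shape below are illustrative assumptions, not kewpie’s real interface:

```python
# Sketch of a modular evaluator design; the interface here is hypothetical.
class Evaluator:
    def evaluate(self, query, result_rows):
        raise NotImplementedError

class RowCountEvaluator(Evaluator):
    """Pass a query only if it returned at least one row of data."""
    def evaluate(self, query, result_rows):
        status = len(result_rows) >= 1
        # (status, extra) mirrors the "EVALUATOR: row_count STATUS: ... EXTRA: ..."
        # log lines shown in the demo output below
        return status, len(result_rows)
```

A code-coverage or EXPLAIN-output evaluator would slot in the same way: subclass, implement evaluate(), and register it via the [evaluators] section.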

Now to see it in action!

We are going to take our join.cnf file and seed it so we create 2 initial queries, with 4 columns and 3 tables each.  We run this in conjunction with the conf/drizzle/drizzle.zz gendata file (sort of our go-to test bed for the randgen).


./dbqp --mode=kewpie --randgen-path=$RANDGEN_PATH --gendata=$RANDGEN_PATH/conf/drizzle/drizzle.zz join --verbose
Setting --no-secure-file-priv=True for randgen mode...
21 Apr 2011 11:38:59 VERBOSE: Initializing system manager...
21 Apr 2011 11:38:59 VERBOSE: Processing source tree under test...
21 Apr 2011 11:38:59 INFO: Using Drizzle source tree:
21 Apr 2011 11:38:59 INFO: basedir: /home/user/repos/kewpie_demo
<snip>
21 Apr 2011 11:39:00 INFO: Taking clean db snapshot...
21 Apr 2011 11:39:00 VERBOSE: Starting executor: bot0
21 Apr 2011 11:39:00 VERBOSE: Executor: bot0 beginning test execution...
21 Apr 2011 11:39:00 VERBOSE: Restoring from db snapshot
21 Apr 2011 11:39:00 VERBOSE: Starting server: bot0.s0
21 Apr 2011 11:39:00 INFO: bot0 server:
21 Apr 2011 11:39:00 INFO: NAME: s0
21 Apr 2011 11:39:00 INFO: MASTER_PORT: 9306
21 Apr 2011 11:39:00 INFO: DRIZZLE_TCP_PORT: 9307
21 Apr 2011 11:39:00 INFO: MC_PORT: 9308
21 Apr 2011 11:39:00 INFO: PBMS_PORT: 9309
21 Apr 2011 11:39:00 INFO: RABBITMQ_NODE_PORT: 9310
21 Apr 2011 11:39:00 INFO: VARDIR: /home/user/repos/kewpie_demo/tests/workdir/bot0/s0/var
21 Apr 2011 11:39:00 INFO: STATUS: 1
# 2011-04-21T11:39:00 Default schema: test
# 2011-04-21T11:39:00 Executor initialized, id GenTest::Executor::Drizzle 2011.03.14.2269 ()
# 2011-04-21T11:39:00 # Creating Drizzle table: test.A; engine: ; rows: 0 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.B; engine: ; rows: 0 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.C; engine: ; rows: 1 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.D; engine: ; rows: 1 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.AA; engine: ; rows: 10 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.BB; engine: ; rows: 10 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.CC; engine: ; rows: 100 .
# 2011-04-21T11:39:00 # Creating Drizzle table: test.DD; engine: ; rows: 100 .
21 Apr 2011 11:39:01 INFO: Executing query: SELECT table_1.col_char_1024_not_null_key AS column_1, table_3.col_char_1024 AS column_2, table_3.col_enum AS column_3, table_1.pk AS column_4 FROM AA AS table_1 RIGHT JOIN D AS table_2 ON table_1.col_char_1024_not_null_key = table_2.col_char_10_not_null_key LEFT OUTER JOIN D AS table_3 ON table_2.col_text_key = table_3.col_text
21 Apr 2011 11:39:01 INFO: EVALUATOR: row_count STATUS: True EXTRA: 1
21 Apr 2011 11:39:01 VERBOSE: ORIG QUERY:  SELECT table_1.col_char_1024_not_null_key AS column_1, table_3.col_char_1024 AS column_2, table_3.col_enum AS column_3, table_1.pk AS column_4 FROM AA AS table_1 RIGHT JOIN D AS table_2 ON table_1.col_char_1024_not_null_key = table_2.col_char_10_not_null_key LEFT OUTER JOIN D AS table_3 ON table_2.col_text_key = table_3.col_text
21 Apr 2011 11:39:01 VERBOSE: USING ADD_TABLE mutation
21 Apr 2011 11:39:01 VERBOSE: MUTANT QUERY: SELECT table_1.col_char_1024_not_null_key AS column_1, table_3.col_char_1024 AS column_2, table_3.col_enum AS column_3, table_1.pk AS column_4 FROM AA AS table_1 RIGHT JOIN D AS table_2 ON table_1.col_char_1024_not_null_key = table_2.col_char_10_not_null_key LEFT OUTER JOIN D AS table_3 ON table_2.col_text_key = table_3.col_text RIGHT JOIN B AS table_4 ON table_3.col_text_key = table_4.col_text_not_null
<snip>

From this output we can see how the query was executed, evaluated, and mutated.  As we wanted, we have 4 columns and 3 tables in the original query and we add extra tables to queries that evaluate well.

Now let’s see what happens when we use a different gendata file.  We’ll use one called varchar_drizzle.zz which, surprisingly enough, only uses varchars:


./dbqp --mode=kewpie --randgen-path=$RANDGEN_PATH --gendata=$RANDGEN_PATH/conf/drizzle/varchar_drizzle.zz join --verbose
Setting --no-secure-file-priv=True for randgen mode...
21 Apr 2011 11:44:20 VERBOSE: Initializing system manager...
21 Apr 2011 11:44:20 VERBOSE: Processing source tree under test...
21 Apr 2011 11:44:20 INFO: Using Drizzle source tree:
21 Apr 2011 11:44:20 INFO: basedir: /home/user/repos/kewpie_demo
<snip>
21 Apr 2011 11:44:20 INFO: Taking clean db snapshot...
21 Apr 2011 11:44:20 VERBOSE: Starting executor: bot0
21 Apr 2011 11:44:20 VERBOSE: Executor: bot0 beginning test execution...
21 Apr 2011 11:44:20 VERBOSE: Restoring from db snapshot
21 Apr 2011 11:44:20 VERBOSE: Starting server: bot0.s0
21 Apr 2011 11:44:20 INFO: bot0 server:
21 Apr 2011 11:44:20 INFO: NAME: s0
21 Apr 2011 11:44:20 INFO: MASTER_PORT: 9306
21 Apr 2011 11:44:20 INFO: DRIZZLE_TCP_PORT: 9307
21 Apr 2011 11:44:20 INFO: MC_PORT: 9308
21 Apr 2011 11:44:20 INFO: PBMS_PORT: 9309
21 Apr 2011 11:44:20 INFO: RABBITMQ_NODE_PORT: 9310
21 Apr 2011 11:44:20 INFO: VARDIR: /home/user/repos/kewpie_demo/tests/workdir/bot0/s0/var
21 Apr 2011 11:44:20 INFO: STATUS: 1
# 2011-04-21T11:44:20 Default schema: test
# 2011-04-21T11:44:20 Executor initialized, id GenTest::Executor::Drizzle 2011.03.14.2269 ()
# 2011-04-21T11:44:20 # Creating Drizzle table: test.table0_varchar_150_not_null; engine: ; rows: 0 .
# 2011-04-21T11:44:20 # Creating Drizzle table: test.table1_varchar_150_not_null; engine: ; rows: 1 .
# 2011-04-21T11:44:20 # Creating Drizzle table: test.table2_varchar_150_not_null; engine: ; rows: 2 .
# 2011-04-21T11:44:20 # Creating Drizzle table: test.table10_varchar_150_not_null; engine: ; rows: 10 .
# 2011-04-21T11:44:20 # Creating Drizzle table: test.table100_varchar_150_not_null; engine: ; rows: 100 .
21 Apr 2011 11:44:20 INFO: Executing query: SELECT table_1.col_varchar_1024 AS column_1, table_3.pk AS column_2, table_2.col_varchar_1024_key AS column_3, table_3.col_varchar_1024_not_null AS column_4 FROM table10_varchar_150_not_null AS table_1 RIGHT OUTER JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024 = table_2.col_varchar_1024_not_null LEFT JOIN table10_varchar_150_not_null AS table_3 ON table_1.col_varchar_1024_key = table_3.pk
21 Apr 2011 11:44:20 INFO: EVALUATOR: row_count STATUS: True EXTRA: 2
21 Apr 2011 11:44:20 VERBOSE: ORIG QUERY:  SELECT table_1.col_varchar_1024_not_null_key AS column_1, table_3.pk AS column_2, table_3.col_varchar_1024_not_null_key AS column_3, table_1.col_varchar_1024 AS column_4 FROM table0_varchar_150_not_null AS table_1 RIGHT JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024_not_null_key = table_2.col_varchar_1024 LEFT OUTER JOIN table1_varchar_150_not_null AS table_3 ON table_2.col_varchar_1024_not_null_key = table_3.col_varchar_1024_not_null
21 Apr 2011 11:44:20 VERBOSE: USING ADD_TABLE mutation
21 Apr 2011 11:44:21 VERBOSE: MUTANT QUERY: SELECT table_1.col_varchar_1024_not_null_key AS column_1, table_3.pk AS column_2, table_3.col_varchar_1024_not_null_key AS column_3, table_1.col_varchar_1024 AS column_4 FROM table0_varchar_150_not_null AS table_1 RIGHT JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024_not_null_key = table_2.col_varchar_1024 LEFT OUTER JOIN table1_varchar_150_not_null AS table_3 ON table_2.col_varchar_1024_not_null_key = table_3.col_varchar_1024_not_null RIGHT JOIN table100_varchar_150_not_null AS table_4 ON table_3.col_varchar_1024_key = table_4.col_varchar_1024_not_null

As you can see, the test bed (the created / populated tables) has changed.  As a result, the generated queries have changed as well.  This kind of flexibility will allow QA engineers to look not only for good queries, but also for interesting query / test bed combinations (sometimes the optimizations and code paths executed rely on both) in an easy and automated manner.

Next, we’ll take a look at how to add other things into the mix.  Suppose you also want to add WHERE conditions to your generated queries – it is as simple as tweaking the following lines in join.cnf from:

[mutators]
add_table = 5
add_column = 0
add_where = 0

to


[mutators]
add_table = 2
add_column = 0
add_where = 4

We are now twice as likely to add a WHERE condition as we are to add a table to a query.  Let’s see what happens from the exact same command line:
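The “twice as likely” claim follows directly from the weights under the list-of-N-occurrences scheme described earlier; a quick sketch of the arithmetic:

```python
# With add_table = 2, add_column = 0, add_where = 4, the selection pool
# holds 6 entries, so the odds work out as follows:
weights = {'add_table': 2, 'add_column': 0, 'add_where': 4}
pool_size = sum(weights.values())            # 6

p_where = weights['add_where'] / pool_size   # 4/6
p_table = weights['add_table'] / pool_size   # 2/6
ratio = p_where / p_table                    # 2.0: WHERE is twice as likely
```

Note that add_column = 0 means that mutator simply never appears in the pool.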


./dbqp --mode=kewpie --randgen-path=$RANDGEN_PATH --gendata=$RANDGEN_PATH/conf/drizzle/varchar_drizzle.zz join --verbose
Setting --no-secure-file-priv=True for randgen mode...
21 Apr 2011 11:50:16 VERBOSE: Initializing system manager...
21 Apr 2011 11:50:16 VERBOSE: Processing source tree under test...
21 Apr 2011 11:50:16 INFO: Using Drizzle source tree:
21 Apr 2011 11:50:16 INFO: basedir: /home/user/repos/kewpie_demo
<snip>
21 Apr 2011 11:50:16 INFO: Taking clean db snapshot...
21 Apr 2011 11:50:16 VERBOSE: Starting executor: bot0
21 Apr 2011 11:50:16 VERBOSE: Executor: bot0 beginning test execution...
21 Apr 2011 11:50:16 VERBOSE: Restoring from db snapshot
21 Apr 2011 11:50:16 VERBOSE: Starting server: bot0.s0
21 Apr 2011 11:50:16 INFO: bot0 server:
21 Apr 2011 11:50:16 INFO: NAME: s0
21 Apr 2011 11:50:16 INFO: MASTER_PORT: 9306
21 Apr 2011 11:50:16 INFO: DRIZZLE_TCP_PORT: 9307
21 Apr 2011 11:50:16 INFO: MC_PORT: 9308
21 Apr 2011 11:50:16 INFO: PBMS_PORT: 9309
21 Apr 2011 11:50:16 INFO: RABBITMQ_NODE_PORT: 9310
21 Apr 2011 11:50:16 INFO: VARDIR: /home/user/repos/kewpie_demo/tests/workdir/bot0/s0/var
21 Apr 2011 11:50:16 INFO: STATUS: 1
# 2011-04-21T11:50:16 Default schema: test
# 2011-04-21T11:50:16 Executor initialized, id GenTest::Executor::Drizzle 2011.03.14.2269 ()
# 2011-04-21T11:50:16 # Creating Drizzle table: test.table0_varchar_150_not_null; engine: ; rows: 0 .
# 2011-04-21T11:50:16 # Creating Drizzle table: test.table1_varchar_150_not_null; engine: ; rows: 1 .
# 2011-04-21T11:50:16 # Creating Drizzle table: test.table2_varchar_150_not_null; engine: ; rows: 2 .
# 2011-04-21T11:50:16 # Creating Drizzle table: test.table10_varchar_150_not_null; engine: ; rows: 10 .
# 2011-04-21T11:50:16 # Creating Drizzle table: test.table100_varchar_150_not_null; engine: ; rows: 100 .
<snip>
21 Apr 2011 11:50:17 INFO: Executing query: SELECT table_1.col_varchar_1024 AS column_1, table_3.pk AS column_2, table_2.col_varchar_1024_key AS column_3, table_3.col_varchar_1024_not_null AS column_4 FROM table10_varchar_150_not_null AS table_1 RIGHT OUTER JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024 = table_2.col_varchar_1024_not_null LEFT JOIN table10_varchar_150_not_null AS table_3 ON table_1.col_varchar_1024_key = table_3.pk
21 Apr 2011 11:50:17 INFO: EVALUATOR: row_count STATUS: True EXTRA: 2
21 Apr 2011 11:50:17 VERBOSE: ORIG QUERY:  SELECT table_1.col_varchar_1024_not_null_key AS column_1, table_3.pk AS column_2, table_3.col_varchar_1024_not_null_key AS column_3, table_1.col_varchar_1024 AS column_4 FROM table0_varchar_150_not_null AS table_1 RIGHT JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024_not_null_key = table_2.col_varchar_1024 LEFT OUTER JOIN table1_varchar_150_not_null AS table_3 ON table_2.col_varchar_1024_not_null_key = table_3.col_varchar_1024_not_null
21 Apr 2011 11:50:17 VERBOSE: USING ADD_WHERE mutation
21 Apr 2011 11:50:17 VERBOSE: MUTANT QUERY: SELECT table_1.col_varchar_1024_not_null_key AS column_1, table_3.pk AS column_2, table_3.col_varchar_1024_not_null_key AS column_3, table_1.col_varchar_1024 AS column_4 FROM table0_varchar_150_not_null AS table_1 RIGHT JOIN table2_varchar_150_not_null AS table_2 ON table_1.col_varchar_1024_not_null_key = table_2.col_varchar_1024 LEFT OUTER JOIN table1_varchar_150_not_null AS table_3 ON table_2.col_varchar_1024_not_null_key = table_3.col_varchar_1024_not_null WHERE table_1.pk >= 'W'

As I said, it is still beta software ; )  However, in all seriousness, we want to be able to generate ‘bad’ queries, but to have the option of not using them and filtering them out of a test if they serve no purpose.

Hopefully, this will give anyone that is interested a better idea of how to play with the code.  Development will likely continue, but this is still more of a prototype of how things could be.  If you *really* want to test a database, I still highly recommend the amazing random query generator – it is good for blowing things up!

I really want this picture to be shown when anyone searches for 'randgen' ; )

dbqp / randgen docs are up

As I mentioned earlier, dbqp now has a randgen mode.  This allows us to run test cases for the randgen in much the same way as we execute drizzletest cases.  At present, all of our randgen tests are defined as dbqp cases and live in drizzle/tests/randgen_tests.  They are organized into suites (much like drizzletest cases).  You can also use it as a quick means of creating an ad hoc replication setup!


Anyway, we now have proper documentation up.  Instructions and prereqs for the randgen are included in the docs.  If anyone runs into any issues, please write a bug and I’ll get on it.  Thanks and have fun testing! : )


We have lots more cool stuff planned for dbqp and drizzle.  Ultimately, the goal is to make it fast, easy and efficient for anyone to make sure the server is behaving as they expect it to – we want happy users!  With that said – GA release tomorrow!  It is a huge deal for us and lots of blood, sweat and tears have gone into the software.  We are really excited and hope that everyone else will wonder how they lived without Drizzle for so long (I know I have) : )