Friday, March 28, 2014

The range access method and why you should use EXPLAIN JSON

I got an interesting question about EXPLAIN and the range access method recently. The person had a query that could be written either with a BETWEEN predicate or an IN predicate, something similar to this:
    -> FROM orders WHERE customer_id BETWEEN 7 AND 10 AND value > 500;
| id | select_type | table  | type  | key      | key_len  | rows | Extra
|  1 | SIMPLE      | orders | range | cust_val | 10       |   91 |  ...

    -> FROM orders WHERE customer_id IN (7,8,9,10) AND value > 500;
| id | select_type | table  | type  | key      | key_len  | rows | Extra
|  1 | SIMPLE      | orders | range | cust_val | 10       |   44 |  ...

The table was:

   CREATE TABLE orders (
     customer_id INT,
     value INT,
     order_date DATE,
     KEY cust_val (customer_id, value)
   );

Given that customer_id is an integer value, these queries should be equivalent. And that's what EXPLAIN seems to tell us too: same access method, same key, same key_len, etc.

You may have guessed the question by now: If the queries are equivalent, why isn't the rows estimate identical? Is one of the numbers wrong or do these queries have different execution plans after all?
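One way to dig further is the structured EXPLAIN output added in MySQL 5.6. EXPLAIN FORMAT=JSON exposes details, such as used_key_parts and attached_condition, that the tabular format hides. A sketch (the exact set of JSON fields varies by server version, and the output shown is abbreviated):

```sql
EXPLAIN FORMAT=JSON
SELECT * FROM orders
WHERE customer_id BETWEEN 7 AND 10 AND value > 500;

-- Among other things, the JSON output contains fields like
--   "used_key_parts": [...],
--   "attached_condition": "...",
-- which show how each key part is actually used by the range scan --
-- information the tabular key_len column can only hint at.
```

Comparing these fields for the BETWEEN and IN variants shows whether the two queries really end up with identical range scans.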

Monday, October 21, 2013

FAQ: InnoDB extended secondary keys

MySQL 5.6 introduced a new feature called extended secondary keys. We get a lot of questions about it and find that most of them stem from a few incorrect assumptions. In this post I'll try to clear up the confusion once and for all. Famous last words... here goes:

Q1: Do I need to do anything to enable extended secondary keys?

No, nothing at all. It's on by default, and I can't see any sensible reason why you would want to disable it. It is, however, possible to do so through the optimizer_switch: SET optimizer_switch='use_index_extensions={on|off}'.

Q2: Do extended secondary keys only work with InnoDB?

No, it should work with any storage engine that uses the primary key columns as a reference to the row, which means most storage engines with clustered primary keys. I say "should" because it requires a minimum of work from the storage engine provider: the engine must announce to MySQL that the feature is supported.
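To make the feature concrete, here is a sketch along the lines of the example in the MySQL manual. With index extensions on, InnoDB's secondary index k_d is effectively treated as (d, i1, i2), because the primary key columns are appended to it:

```sql
CREATE TABLE t1 (
  i1 INT NOT NULL DEFAULT 0,
  i2 INT NOT NULL DEFAULT 0,
  d DATE DEFAULT NULL,
  PRIMARY KEY (i1, i2),
  INDEX k_d (d)
) ENGINE = InnoDB;

-- With use_index_extensions=on, the optimizer can use both d and the
-- appended primary key column i1 for ref access on k_d, which shows up
-- as a longer key_len (covering d + i1 rather than d alone) and a
-- tighter row estimate:
EXPLAIN SELECT COUNT(*) FROM t1 WHERE i1 = 3 AND d = '2000-01-01';
```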

Friday, October 18, 2013

The second MySQL seminar: a summary

Once again, Geir Høydalsvik and I had the pleasure of hosting a MySQL mini-seminar in Trondheim. 25+ attendees from at least 7 different companies, plus a few professors from the computer science department at NTNU, showed up at yesterday's event. I recognized many of them from the first seminar, but there were some new faces as well.

This time, Mark Leith visited from the UK. He gave an introduction to Performance Schema and ps_helper. ps_helper is a really nice tool for making sense of the overwhelming amount of data collected by PS. He also gave a very convincing demo of MySQL Enterprise Monitor (MEM). More than a few attendees now plan to give MEM a try in their environment. You can too - there's a 30-day trial, which should be more than enough to decide if you need it :-)

Like last time, we had great discussions, pizza and a very good time. I'm looking forward to the next seminar already.

If you're interested, Mark's PS presentation can be found on slideshare. ps_helper is both on github and on his blog. ps_helper comes highly recommended!

Friday, September 13, 2013

Next MySQL mini-seminar in Trondheim October 17

We have the pleasure of announcing that the next MySQL mini-seminar in Trondheim will be held on October 17, 15:00-18:00. This time, MySQL Monitoring and Performance Schema evangelist Mark Leith will visit us. The agenda will be similar to the last mini-seminar:

  • Presentation: MySQL Monitoring and Performance Schema, by Mark Leith
  • Q&A, discussions
  • Suggestions for presentations on future seminars
  • Food and mingling
For more info, see the registration page.

Don't miss this opportunity to meet experts, developers and other MySQL users.

Thursday, June 20, 2013

The first MySQL mini-seminar in Trondheim was a huge success!

Yesterday, Geir Høydalsvik and I had the pleasure of hosting a MySQL mini-seminar in Trondheim, Norway.

The topic of the day was a presentation by yours truly on how the MySQL optimizer works. Geir briefly explained how the MySQL teams are organized in Oracle, and that our focus is on delivering high quality on time.

We had lots of interesting questions and discussions (and pizza) afterwards. Of particular interest was:
  • How the MySQL code base can be modularized to make it maintainable and testable. The takeaway was that MySQL has invested a lot in refactoring over the last couple of years to improve in this area and will continue to do so in both the near and far future.
  • How testing is done in MySQL. The answer was that the QA teams have been significantly ramped up since Sun acquired MySQL. In addition to many more resources for the QA teams, developers are now expected to write unit tests using the Google unit test framework for new functionality whenever it makes sense. Refactoring and modularization of the code make this a lot easier to do now.

Close to 20 MySQL users attended, in addition to a handful of MySQL hackers located in the Trondheim office. The users had very different backgrounds: from start-ups to solidly established companies, with deployments on MySQL 5.1, 5.5 and 5.6. All agreed that the mini-seminar was a great initiative and should not be a one-time event. Yesterday's event will therefore not be the only mini-seminar, but rather the first in a series.

We'll get back to you with agenda and date for the next mini-seminar once the details have been decided.

Wednesday, February 13, 2013

DBT-3 Q3: 6 x performance in MySQL 5.6.10

When MySQL gets a query, it is the job of the optimizer to find the cheapest way to execute it. Decisions include access method (range access, table scan, index lookup etc.), join order, sorting strategy etc. If we simplify a bit, the optimizer first identifies the different ways to access each table and calculates their cost. After that, the join order is decided.

However, some access methods can only be considered after the join order has been decided and therefore get special treatment in the MySQL optimizer. For join conditions, e.g. "WHERE table1.col1 = table2.col2", index lookup can only be used on table2 if table1 comes earlier in the join sequence. Another class of access methods is only meaningful for tables that come first in the join order. An example is queries with ORDER BY ... LIMIT. Prior to MySQL 5.6.10 there was a bug in MySQL that made the optimizer choose inefficient execution plans for this query type. That's the topic I discuss below.
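A hypothetical example of the second class (table and column names are made up for illustration): an index on the ORDER BY column lets the server read rows in sorted order and stop after LIMIT rows, but only if that table is the first (driving) table in the join, so this access method cannot be costed until the join order is known.

```sql
-- An index scan on ix_created satisfies ORDER BY ... LIMIT without
-- sorting, but only when `news` is placed first in the join order:
CREATE INDEX ix_created ON news (created);

SELECT n.title, a.name
FROM news n JOIN authors a ON a.id = n.author_id
ORDER BY n.created DESC
LIMIT 10;
```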

Thursday, December 20, 2012

Gluh on InnoDB extended secondary keys

Sergey Glukhov (Gluh) recently wrote an interesting blog post about InnoDB secondary key improvements in MySQL 5.6. His blog isn't aggregated, but it certainly deserves some attention.

Here it is: InnoDB, extended secondary keys.

Monday, December 17, 2012

Favorite MySQL 5.6 features: an optimizer perspective

There are so many exciting new features in MySQL 5.6 that I almost don't know where to start. To mention a few: MySQL's multi-core scalability has been significantly improved to meet modern hardware; InnoDB has better index statistics, much better performance, and online ALTER; replication has multi-threaded slaves and global transaction identifiers; Performance Schema has added capabilities that provide a much more detailed view of what's bogging a server down; and more... much more.

However, my prime interest is the optimizer, which is why I've compiled a list of my favorite new optimizer features in 5.6. Here goes:

New ways to understand query plans:
The most common request from DBAs is for more information to understand how and why MySQL behaves the way it does. You can come a very long way in squeezing performance out of MySQL if you can answer questions like "What Query Execution Plan (QEP) did the server decide to use?" and "Why was this QEP picked?". In this department, MySQL 5.6 delivers a bunch of new features.
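One of those features, the optimizer trace, addresses the "why" question directly. A minimal session sketch (the table t and its columns are placeholders):

```sql
SET optimizer_trace = 'enabled=on';

-- Run (or EXPLAIN) the statement you want to understand:
EXPLAIN SELECT * FROM t WHERE a > 10 ORDER BY b;

-- The trace describes the alternatives the optimizer considered
-- and why it rejected them:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;

SET optimizer_trace = 'enabled=off';
```

EXPLAIN FORMAT=JSON, also new in 5.6, answers much of the "what" question with a structured view of the chosen plan.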

Tuesday, October 2, 2012

Index merge annoyances fixed in MySQL 5.6

While the index merge access types certainly are useful for a number of queries, customers and the community alike have expressed some frustration about how index merge...
  1. is not used when it should have been
  2. is used when ref access is obviously better
  3. merges suboptimal indexes
  4. is too restricted in which conditions can be used
I could come up with numerous examples of related bugs and feature requests dating back more than six years. To list a few: 17673, 30151, 23322, 65274, 65359.

The good news is that we have fixed all these issues in the latest released MySQL 5.6 server.

Consider the problem in BUG#65274 (simplified a bit): 
CREATE TABLE phpbb_posts (
  topic_id INT,
  forum_id INT,
  KEY `topic_id` (`topic_id`),
  KEY `forum_id` (`forum_id`),
  KEY `tid_fid` (`topic_id`,`forum_id`)
);

SELECT count(*) FROM phpbb_posts WHERE topic_id = 110126 AND forum_id = 19;
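The covering index tid_fid matches the whole condition, so ref access on it should beat an index_merge intersection of the two single-column indexes. Roughly what one would expect to see (the plans below are illustrative, not verbatim server output):

```sql
-- Pre-fix (5.5-era) plan, roughly:
--   type: index_merge   key: topic_id,forum_id   Extra: Using intersect(...)
-- MySQL 5.6 plan, roughly:
--   type: ref           key: tid_fid
EXPLAIN SELECT count(*) FROM phpbb_posts
WHERE topic_id = 110126 AND forum_id = 19;
```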

Friday, May 4, 2012

Performance improvements for big INFORMATION_SCHEMA tables

A short while after I fixed the legacy bug that prevented temporary MyISAM tables from using the dynamic record format, I got an email from Davi Arnaut @ Twitter. It turned out that Twitter needed to fix the very same problem, but for the case when INFORMATION_SCHEMA temporary tables use MyISAM.

In short, INFORMATION_SCHEMA tables provide access to database metadata. Despite their name, they are more like views than tables: when you query them, relevant data is gathered from the dictionary and other server internals, not from tables. The gathered data is stored in a temporary table (memory or MyISAM depending on size) and then returned to the user.

The reason Davi emailed me was to let me know that he had further improved the fix for temporary MyISAM tables to also enable the use of dynamic record format for INFORMATION_SCHEMA tables. I usually don't have huge databases on my development box so the problem of querying metadata had gone unnoticed. But it turns out that Davi and his colleagues at Twitter do deal with massive amounts of data :-)

This was one of the queries that caused problems:


Wednesday, April 18, 2012

Copying unused bytes is bad (duh!)

Last summer my colleague Marko Mäkelä committed this seemingly innocent performance fix for InnoDB in MySQL 5.6:

3581 Marko Makela    2011-08-10
Bug#12835650 VARCHAR maximum length performance impact

row_sel_field_store_in_mysql_format(): Do not pad the unused part of
the buffer reserved for a True VARCHAR column (introduced in 5.0.3).
Add Valgrind instrumentation ensuring that the unused part will be
flagged uninitialized.

Before this fix, the buffers used to send VARCHARs from InnoDB to the MySQL server were padded with 0s if the string was shorter than the column's maximum length. If, e.g., the string "foo" was stored in a VARCHAR(8), InnoDB used to write "3foo00000" to the buffer (the first byte - 3 - records the actual length of the string). Even though these trailing bytes are never used anywhere, writing the 0s certainly has a cost. Hence the fix, which stops InnoDB from writing them.

Oh my were we up for a ride! All of a sudden Valgrind started barking like this (and similar) for a number of tests:
Thread 19:
Syscall param pwrite64(buf) points to uninitialised byte(s)
   at 0x381C60EEE3: ??? (in /lib64/
   by 0x84B703: my_pwrite (my_pread.c:162)
   by 0x86FE6F: mi_nommap_pwrite (mysql_file.h:1203)
   by 0x88621B: _mi_write_static_record (mi_statrec.c:65)
   by 0x889592: mi_write (mi_write.c:142)
   by 0x532085: handler::ha_write_row(unsigned char*) (
   by 0x69B7F7: end_write(JOIN*, st_join_table*, bool) (
   by 0x69A43C: evaluate_join_record(JOIN*, st_join_table*, int)
   by 0x69A8E0: sub_select(JOIN*, st_join_table*, bool) (
The modifications we made to the server to remedy the problems can be divided into the two categories described below. In the end, the result was significantly improved performance.

Friday, April 13, 2012

New MySQL optimizer team page launched

FYI: A MySQL Optimizer Team blog aggregate page has been launched. It will be continuously updated with the latest blogs published by the MySQL optimizer developers. Now you can easily stay updated ;-)

Tuesday, April 10, 2012

Improvements for many-table joins in MySQL 5.6

A lot has happened in MySQL 5.6 for queries joining many tables. For the most common use cases we have drastically reduced the cost of finding the execution plan. We have also improved the heuristics and fixed bugs so that the final plan is often better than it used to be. Read on if you are one of those people who do 15-way joins!

Finding a query execution plan
First some background. You can skip this part if you know how MySQL picks the table join order in 5.5.

When presented with a query, MySQL will try to find the best order to join tables by employing a greedy search algorithm. The outcome is what we call a query execution plan, QEP. When you join just a few tables, there's no problem calculating the cost of all join order combinations and then picking the best plan. However, since there are (#tables)! possible combinations, the cost of calculating them all soon becomes too high: for five tables, e.g., there are 120 combinations, which is no problem to compute. For 10 tables there are 3.6 million combinations, and for 15 tables there are 1307 billion. You get the picture. For this reason, MySQL makes a trade-off: use heuristics to explore only promising plans. This significantly reduces the number of plans MySQL needs to calculate, but at the same time you risk not finding the best one.
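This trade-off can be tuned with two long-standing system variables; the 5.6 work improves the heuristics underneath them:

```sql
-- How many tables ahead the greedy search looks before committing to a
-- choice; 0 lets the server pick a depth, and a value at least as large
-- as the number of tables gives an exhaustive search:
SET optimizer_search_depth = 0;

-- Whether to prune less promising partial plans early (1, the default)
-- or cost every plan the search reaches (0):
SET optimizer_prune_level = 1;
```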

On queries with many values in the IN clause

A few customers with rather extreme needs have contacted us about a performance issue with the range optimizer. Our solution to the problem is to introduce a new variable in MySQL 5.6, eq_range_index_dive_limit, which can be used to control whether or not the range optimizer will a) do index dives, or b) use index statistics when estimating the number of rows in the ranges of the query. The former method gives a far more accurate estimate while the latter costs a lot less to compute.

This is what the help text has to tell about the variable:

The optimizer will use existing index statistics instead of doing index dives for equality ranges if the number of equality ranges for the index is larger than or equal to [the value of variable]. If set to 0, index dives are always used.

"Equality range" means predicates using the operators IN() or =, and it's important to notice that the number of such ranges is counted on a per-index basis. Furthermore, index dives are not used on unique/primary indexes, so only non-unique indexes are affected.
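A sketch of how the per-index counting plays out (the table and indexes below are hypothetical):

```sql
SET eq_range_index_dive_limit = 10;

-- Assume KEY k1 (a) and KEY k2 (b), both non-unique:
SELECT * FROM t
WHERE a IN (1,2,3,4,5,6,7,8,9,10)  -- 10 equality ranges on k1: that is
                                   -- >= the limit, so index statistics
                                   -- are used for this index
  AND b = 7;                       -- 1 equality range on k2: below the
                                   -- limit, so index dives are still done
```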

The two ways row estimates can be computed

For as long as there has been a range access method in MySQL, the number of rows in a range has been estimated by diving down the index to find the start and end of the range and using these to count the number of rows between them. This technique is accurate, and is therefore a good basis for making the best possible execution plan (QEP) for the query. Unfortunately, it involves two index dives for each range. That's not a big deal if you run normal queries with a few equality ranges like

SELECT title FROM albums WHERE artist IN (10, 12)
But some MySQL users have queries with hundreds of values in the IN clause. For such queries it may take considerably longer to optimize the query than to execute it. This is when it makes sense to use the less accurate index statistics. Consider this example:

Monday, March 19, 2012

Index Condition Pushdown to the rescue!

A while ago, I explained how range access on a multiple-part index works and why MySQL can't utilize key parts beyond the first occurrence of certain frequently used comparison operators. Luckily, a great improvement is underway in MySQL 5.6 that remedies much of this limitation. Meet Index Condition Pushdown.

How does ICP work?

Index Condition Pushdown is a new way for MySQL to evaluate conditions. Instead of evaluating conditions on rows read from a table, ICP makes it possible to evaluate conditions in the index and thereby avoid looking at the table if the condition is false.

Let's assume that we have a multiple-part index covering columns (keypart_1, ..., keypart_n). Further assume that we have a condition with a comparison operator on keypart_1 that does not allow keypart_2,...,keypart_n to be used as a range condition (more on that here).

When MySQL does range access without ICP, the index is traversed to locate rows in the table that are within the range(s) relevant for the conditions on keypart_1. The rows in these ranges are read from the table and returned from the storage engine to the MySQL server. The server then evaluates the remaining conditions, including those that apply to keypart_2,...,keypart_n.
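A sketch of the kind of query that benefits, along the lines of the classic example in the MySQL manual (schema invented for illustration): LIKE with a leading wildcard cannot be used as a range condition on the second key part, but ICP can still check it inside the storage engine before the table row is read.

```sql
CREATE TABLE people (
  zipcode VARCHAR(8),
  lastname VARCHAR(64),
  address VARCHAR(128),
  KEY k_zip_last (zipcode, lastname)
);

-- Only zipcode can form the range; with ICP the lastname condition is
-- evaluated in the index, so non-matching rows are never fetched from
-- the table. EXPLAIN's Extra column then shows "Using index condition".
EXPLAIN SELECT * FROM people
WHERE zipcode = '95054' AND lastname LIKE '%etrunia%';
```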