How fast can you sort data with MySQL?


I took the same table I used for the MySQL Group by Performance Tests to see how fast MySQL can sort 1,000,000 rows, or rather return the top 10 rows from a sorted result set, which is the most typical way sorting is used in practice.
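
The query pattern being tested looks roughly like this; the table and column names below are placeholders rather than the actual test table, and a MyISAM table with an unindexed sort column is assumed so that MySQL has to do a filesort:

-- Hypothetical stand-in for the 1,000,000-row test table;
-- "val" is deliberately left unindexed, so ORDER BY requires a filesort.
CREATE TABLE sort_test (
  id  INT UNSIGNED NOT NULL PRIMARY KEY,
  val INT NOT NULL
) ENGINE=MyISAM;

-- The query under test: return the top 10 rows of the sorted result set.
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;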

I measured that a full table scan of the table completes in 0.22 seconds, giving us about 4.5 million rows/sec. Obviously we can't get a sorted result set faster than that.
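
One way to time such a baseline scan on the placeholder table above is a count with a condition on the unindexed column, which forces MySQL to read every row without shipping them all to the client:

-- Full table scan: "val" has no index, so every row must be examined;
-- COUNT(*) keeps the result set tiny so network transfer does not skew the timing.
SELECT COUNT(*) FROM sort_test WHERE val >= 0;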

I placed the temporary sort files on tmpfs (/dev/shm) to take disk IO out of the picture, as my data set fits in memory anyway, and decided to experiment with the sort_buffer_size variable.
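
Where those sort files land is controlled by the server's tmpdir setting rather than by the query itself; a minimal my.cnf sketch (the server has to be restarted for it to take effect):

[mysqld]
# Filesort temporary files are created in tmpdir; pointing it at tmpfs
# (/dev/shm) keeps on-disk merge passes from touching a real disk.
tmpdir = /dev/shm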

The minimum allowed value for sort_buffer_size is 32K, so that is where I started.
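
A sketch of that first run, using the placeholder table from above:

SET SESSION sort_buffer_size = 32768;  -- 32K, the smallest value the server accepts
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;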

Not bad! Even though MySQL does not optimize “get top N sorted rows” very well, it takes just 2.5 times longer than a full table scan to get the data. And this is with the minimum sort_buffer_size allowed, when a lot of sort merge passes are required to complete the sort.
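
The sort counters for a run can be checked with something along these lines; FLUSH STATUS beforehand makes the session-level numbers easy to read:

FLUSH STATUS;                                        -- reset this session's counters
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;  -- the sorted top-10 query
SHOW STATUS LIKE 'Sort%';                            -- Sort_merge_passes, Sort_range, Sort_rows, Sort_scan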

An interesting detail about the SHOW STATUS counters: MySQL only counts completely sorted rows in the Sort_rows variable. In this case 1,000,000 rows were partially sorted, but only the 10 rows that were fetched from the data file and sent are counted. In practice this means Sort_rows may well understate the sort activity happening on the system.

Let's now increase sort_buffer_size and see how performance is affected.
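
For example, with the same placeholder query as before:

SET SESSION sort_buffer_size = 102400;  -- 100K
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;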

OK, raising sort_buffer_size to 100K gives the expected performance benefit: now we're just 2 times slower than a table scan. Considering the table size was about 60MB, that works out to a sort speed of 120MB/sec, though 2,000,000 rows/sec is of course the more relevant figure in this case.

There are still a lot of sort merge passes, so let's go with even higher buffer sizes.
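
Same pattern again; the 1M value below is only an illustrative larger setting:

FLUSH STATUS;
SET SESSION sort_buffer_size = 1048576;        -- 1M, illustrative larger buffer
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;
SHOW STATUS LIKE 'Sort_merge_passes';          -- fewer merge passes than with the small buffer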

Wait, that is not right. We're increasing sort_buffer_size, and the number of sort_merge_passes decreases accordingly, but it does not help sort speed; instead the query slows down 3 times, from 0.44 sec to 1.34 sec!

Let's try an even higher value to finally get rid of sort merge passes. Could it be the sort merge itself that is inefficient with a large sort_buffer_size?
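
The value below is again only illustrative; the idea is simply to make the buffer large enough that the whole sort fits in memory and no merge passes are needed:

FLUSH STATUS;
SET SESSION sort_buffer_size = 134217728;      -- 128M, illustrative oversized buffer
SELECT * FROM sort_test ORDER BY val DESC LIMIT 10;
SHOW STATUS LIKE 'Sort_merge_passes';          -- 0 once the buffer is big enough for an in-memory sort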

Nope. We finally got rid of sort merge passes, but our sort performance got even worse!

I decided to experiment a bit further to see what sort_buffer_size is optimal for the given platform and query (I did not test whether it is the same for all platforms or data sets). The optimal sort_buffer_size in this case turned out to be 70K-250K, which is quite a bit smaller than even the default value.
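
To put that range in perspective against what the server is configured with:

SHOW VARIABLES LIKE 'sort_buffer_size';  -- the configured default sits well above the 70K-250K sweet spot found here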

The CPU in question was a Pentium 4 with 1024K of cache.

A while ago I already wrote that large buffers are not always better, but I never expected the optimal buffer to be this small, at least under some conditions.

What do we learn from these results: