Peter ZaitsevHow to close POODLE SSLv3 security flaw (CVE-2014-3566) (15.10.2014, 17:55 UTC)

Padding Oracle On Downgraded Legacy Encryption

POODLE security flaw disables SSLv3 secure browsing (CVE-2014-3566)

First off, the naming “convention” of late for security issues has been terrible. The newest vulnerability (CVE-2014-3566) is nicknamed POODLE, which at least is an acronym and, as per the header above, has some meaning.

The summary of this issue is that it is much the same as the earlier B.E.A.S.T. (Browser Exploit Against SSL/TLS); however, there is no known mitigation method in this case other than entirely disabling SSLv3 support. In short, an attacker has a vector by which they can retrieve the plaintext from your encrypted streams.

So let’s talk mitigation. The Mozilla Security Wiki’s Server Side TLS page has for some time made strict recommendations on ciphers and protocols, and it is certainly worth your attention.

Apache

Disable SSLv2 and SSLv3 in your Apache SSL configuration by setting:
SSLProtocol all -SSLv2 -SSLv3

Nginx

Allow support only for TLS in Nginx with the following:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
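After reloading either server, you can verify that SSLv3 handshakes are actually refused; a quick check with the openssl client (the hostname is a placeholder):

# This should now fail with a handshake failure if SSLv3 is disabled:
openssl s_client -connect example.com:443 -ssl3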

MySQL

This is where things get far more interesting. Unlike Apache and Nginx, there is no way to allow or disallow entire protocols of the SSL/TLS spec within MySQL; there is, however, the ability to specify the cipher spec to be used in SSL communication.

As such, to remove SSLv3 support from MySQL you need only ensure that none of the SSLv3 ciphers are in use within your configuration.

As per information in this bug, you can find a list of SSLv3 ciphers by simply running:
openssl ciphers -v 'DEFAULT' | awk '/SSLv3 Kx=(RSA|DH|DH(512))/ { print $1 }'
DHE-RSA-AES256-SHA
DHE-DSS-AES256-SHA
DHE-RSA-CAMELLIA256-SHA
DHE-DSS-CAMELLIA256-SHA
AES256-SHA
CAMELLIA256-SHA
EDH-RSA-DES-CBC3-SHA
EDH-DSS-DES-CBC3-SHA
DES-CBC3-SHA
DHE-RSA-AES128-SHA
DHE-DSS-AES128-SHA
DHE-RSA-SEED-SHA
DHE-DSS-SEED-SHA
DHE-RSA-CAMELLIA128-SHA
DHE-DSS-CAMELLIA128-SHA
AES128-SHA
SEED-SHA
CAMELLIA128-SHA
RC4-SHA
RC4-MD5
EDH-RSA-DES-CBC-SHA
EDH-DSS-DES-CBC-SHA
DES-CBC-SHA
EXP-EDH-RSA-DES-CBC-SHA
EXP-EDH-DSS-DES-CBC-SHA
EXP-DES-CBC-SHA
EXP-RC2-CBC-MD5
EXP-RC4-MD5

Removing the above from your ssl-cipher configuration should disable SSLv3 support. Of course, ensuring your MySQL service is NOT generally accessible is by far one of the most important steps you can take in securing your MySQL deployment against CVE-2014-3566.
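For illustration only, an ssl-cipher line restricting MySQL to a single explicitly non-SSLv3 (TLSv1.2) suite might look like the following; whether any such cipher actually works depends on the SSL library your MySQL build links against, which is exactly what the script below probes:

[mysqld]
# Hypothetical: a TLSv1.2-only cipher; may not be supported by your build
ssl-cipher=AES128-SHA256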

You can read more about POODLE here.

The following script will help to identify support for any non-SSLv3 ciphers; unfortunately, in my limited testing I have yet to find a supported non-SSLv3 cipher.

Formatting is an issue for the script, so please see the GitHub gist.
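The gist is the authoritative version; as a rough sketch of the approach (host, credentials, and client option support are assumptions), it amounts to something like:

#!/bin/bash
# Try a MySQL connection with every cipher that openssl does not
# classify as SSLv3, and report the ones the server accepts.
for cipher in $(openssl ciphers -v | awk '$2 != "SSLv3" { print $1 }'); do
  if mysql --ssl-cipher="$cipher" -h 127.0.0.1 -u root -p"$PASS" \
       -e 'SELECT 1' >/dev/null 2>&1; then
    echo "supported: $cipher"
  fi
done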


UPDATE 2014-10-16: openssl updates are now becoming available with patches against this issue

AMI Linux: openssl-1.0.1j-1.80.amzn1 “add patch for CVE-2014-3566 (Padding Oracle On Downgraded Legacy Encryption attack)”

Red Hat: no update is yet available


The post How to close POODLE SSLv3 security flaw (CVE-2014-3566) appeared first on MySQL Performance Blog.

Link
Peter ZaitsevRackspace doubling-down on open-source databases, Percona Server (15.10.2014, 07:00 UTC)

Rackspace doubling-down on OpenStack Trove

Founded in 1998, Rackspace has evolved over the years to address the way customers are using data – and more specifically, databases. The San Antonio-based company is fueling the adoption of cloud computing among organizations large and small.

Today Rackspace is doubling down on open source database technologies. Why? Because that’s where the industry is heading, according to Sean Anderson, Manager of Data Services at Rackspace. The company, he said, created a separate business unit of 100+ employees focused solely on database workloads.

The key technologies under the hood include both relational databases (e.g., MySQL, Percona Server, and MariaDB) and NoSQL databases (e.g., MongoDB, Redis, and Apache Hadoop).

Last July Rackspace added support for Percona Server and MariaDB to their Cloud Databases DBaaS (Database-as-a-Service) product, primarily at the request of application developers who had been requesting more open source database support options.

Matt Griffin, Percona director of product management, and I recently sat down with Sean and his colleague Neha Verma, product manager of Cloud Databases. Our discussion focused on the shift to DBaaS as well as what to expect in the future from Rackspace in terms of Cloud Databases, OpenStack Trove and more.

* * *

Matt: Why did you expand the Cloud Databases product this past summer?
Sean: We launched Cloud Databases about a year and a half ago. Since then we’ve rolled out feature after feature (backups, monitoring, configuration management, etc.) focused on simplifying our customers’ lives; this, backed by Fanatical Support, has made the product easier to use and more production-ready than ever. We understand that features aren’t enough, so in addition to all the features we have also made significant improvements to the hardware and network infrastructure. All this means that we’ve been very busy not just expanding the offering but also making it simpler to use, more complete and more scalable.

Our vision is to offer a robust platform with the most popular Big Data, SQL, and NoSQL databases on dedicated, bare metal, and public cloud infrastructure.

Matt: What type of customer is your Cloud Databases offering aimed at?
Sean: Currently we have a variety of customers running multiple Cloud Database instances, ranging from customers running a two-month marketing campaign to customers running web applications and ecommerce applications with highly transactional database workloads. Our customers prefer the simplicity and reliability of the service, which allows them to focus on their business and not worry about the heavy lifting associated with scaling and managing databases.

Matt: How is your Cloud Databases offering backed-up?
Neha: We use Percona XtraBackup to perform a hot copy of all databases on an instance and then stream the backups to Cloud Files for storage. A customer can restore a backup to a new instance at any time. Percona XtraBackup is the only option we offer customers right now.

Tom: In terms of security, how do you address customer concerns? Are cloud-based open source databases more secure?
Sean: Data security concerns are at an all-time high, and we have a number of up-and-coming features that continue to address those concerns. Today we offer a number of unique features; specifically, Cloud Databases can only be accessed on the private network, so the database can only be reached by systems on your private network. Additionally, we support SSL for communication between the user application and the database instance, so that any data transfer is encrypted in transit. These features, along with the built-in user controls and authentication mechanisms, help significantly address customers’ security concerns. Ultimately, cloud-based open source databases are no more or less secure than any other database; security is about more than features – it is about the process and people that build and manage your database, and we have those more than covered.

Matt: Is this for production applications or pre-production?

Truncated by Planet PHP, read more at the original (another 3744 bytes)

Link
Peter ZaitsevRecover orphaned InnoDB partition tablespaces in MySQL (14.10.2014, 15:25 UTC)

A few months back, Michael wrote about reconnecting orphaned *.ibd files using MySQL 5.6. I will show you the same procedure, this time for partitioned tables. An InnoDB partition is also a self-contained tablespace, so you can use the same method described in the previous post.

To begin with, I have an example table with a few orphaned partitions, and we will reconnect each partition one by one to the original table.

mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t1 \G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
[...]
KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
-rw-rw----.  1 revin revin 176M Oct  8 08:41 t1#P#p0.ibd
-rw-rw----.  1 revin revin 612M Oct  8 08:41 t1#P#p1.ibd
-rw-rw----.  1 revin revin 932M Oct  8 08:42 t1#P#px.ibd

The first step is to create a dummy table and remove partitioning so that we can reattach individual partitions to this table.

mysql [localhost] {msandbox} (recovery) > CREATE TABLE t1_t LIKE t1;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.05 sec)

Once the tablespace for our dummy table has been discarded, we copy one of the partitions to take the place of the dummy table’s tablespace. For example, we copy t1#P#p0.ibd as t1_t.ibd into the MySQL data directory, of course taking the file permissions into account afterward.
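A hypothetical shell sketch of that copy step (paths and ownership are assumptions for a typical installation, with the recovery database under /var/lib/mysql):

cp "/backups/orphaned/t1#P#p0.ibd" "/var/lib/mysql/recovery/t1_t.ibd"
chown mysql:mysql "/var/lib/mysql/recovery/t1_t.ibd"

The next step is to import the tablespace to the dummy table.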
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (7.34 sec)

And for the secret sauce, we will exchange our dummy table’s recently imported tablespace to replace the target partition in our original table.
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1 EXCHANGE PARTITION px WITH TABLE t1_t;
Query OK, 0 rows affected (6.42 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  8523686 |
+----------+
1 row in set (2.50 sec)

You can do the same with subpartitions, too! Here’s my slightly different table with subpartitions, where I reconnect one of the orphaned tablespaces, t2#P#px#SP#pxsp1.ibd.
mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t2 \G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
[...]
  KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
SUBPARTITION BY HASH (u_id)
SUBPARTITIONS 2
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.94 sec)
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp0.ibd
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp1.ibd
-rw-rw----.  1 revin revin 304M Oct  8 08:44 t2#P#p1#SP#p1sp0.ibd
-rw-rw----.  1 revin revin 316M Oct  8 08:44 t2#P#p1#SP#p1sp1.ibd
-rw-rw----.  1 revin revin 480M Oct  8 08:45 t2#P#px#SP#pxsp0.ibd
-rw-rw----.  1 revin revin 460M Oct  8 08:45 t2#P#px#SP#pxsp1.ibd
mysql [localhost] {msandbox} (recovery) > CREATE TABLE t2_t LIKE t2;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.04 sec)

Once again, after copying t2#P#px#SP#pxsp1.ibd to replace t2_t.ibd in

Truncated by Planet PHP, read more at the original (another 3499 bytes)

Link
Peter ZaitsevHow to avoid hash collisions when using MySQL’s CRC32 function (13.10.2014, 14:43 UTC)

Percona Toolkit’s pt-table-checksum performs an online replication consistency check by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master – and the tool pt-table-sync synchronizes data efficiently between MySQL tables.

The tools use CRC32 by default. Other good choices include MD5 and SHA1. If you have installed the FNV_64 user-defined function, pt-table-sync will detect it and prefer to use it, because it is much faster than the built-ins. You can also use MURMUR_HASH if you’ve installed that user-defined function. Both of these are distributed with Maatkit. For details, please see the tools’ documentation.
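For example, a sketch of selecting a stronger hash via the tools’ --function option (mirroring the checksum command used later in this post; host and credentials are placeholders):

pt-table-checksum --function=MD5 --replicate=percona.checksum --databases=db1 --tables=t1 localhost --user=root --password=*** --no-check-binlog-format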

Below are test cases similar to what you might have encountered. Using a table checksum, we can confirm that two tables are identical, which is useful for verifying that a slave server is in sync with its master. The following test cases with pt-table-checksum and pt-table-sync will help you use the tools more accurately.

For example, in a master-slave setup we have a table with a primary key on column “a” and a unique key on column “b”. Here the master and slave tables are not in sync: the tables share two identical rows and differ in two rows. The pt-table-checksum tool should be able to identify the difference between master and slave, and pt-table-sync should in this case sync the tables with two REPLACE queries.

Master           Slave
+-----+-----+    +-----+-----+
|  a  |  b  |    |  a  |  b  |
+-----+-----+    +-----+-----+
|  2  |  1  |    |  2  |  1  |
|  1  |  2  |    |  1  |  2  |
|  4  |  3  |    |  3  |  3  |
|  3  |  4  |    |  4  |  4  |
+-----+-----+    +-----+-----+

Case 1: Non-cryptographic hash function (CRC32) and hash collision.

The tables in the source and target differ in two rows, and in the general way of thinking the tools should identify the difference. But the scenarios below explain how the tools can be used incorrectly and how to avoid that – making things more consistent and reliable when using the tools in your production environment.

The tools use CRC32 checksums by default, and CRC32 is prone to hash collisions. In the case below, the non-cryptographic function (CRC32) is unable to identify the two distinct rows, as the function generates the same checksum even though the tables contain distinct values.

CREATE TABLE `t1` (
  `a` int(11) NOT NULL,
  `b` int(11) NOT NULL,
  PRIMARY KEY (`a`),
  UNIQUE KEY `b` (`b`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Master           Slave
+-----+-----+    +-----+-----+
|  a  |  b  |    |  a  |  b  |
+-----+-----+    +-----+-----+
|  2  |  1  |    |  2  |  1  |
|  1  |  2  |    |  1  |  2  |
|  4  |  3  |    |  3  |  3  |
|  3  |  4  |    |  4  |  4  |
+-----+-----+    +-----+-----+
Master:
[root@localhost mysql]#  pt-table-checksum --replicate=percona.checksum --create-replicate-table --databases=db1 --tables=t1
localhost --user=root --password=*** --no-check-binlog-format
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-17T00:59:45      0      0        4       1       0   1.081 db1.t1
Slave:
[root@localhost bin]# ./pt-table-sync --print --execute --replicate=percona.checksum --tables db1.t1 --user=root --password=***
 --verbose  --sync-to-master  192.**.**.**
# Syncing via replication h=192.**.**.**,p=...,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE

Narrowed down to BIT_XOR:

Master:
mysql> SELECT BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) FROM `db1`.`t1`;
+------------------------------------------------------------+
| BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) |
+------------------------------------------------------------+
|                                                    6581445 |
+------------------------------------------------------------+
1 row in set (0.00 sec)
Slave:
mysql> SELECT BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) FROM `db1`.`t1`;
+------------------------------------------------------------+
| BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) |
+------------------------------------------------------------+
|                                                    6581445 |
+------------------------------------------------------------+
1 row in set (0.16 sec)
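Because the per-table checksum is a BIT_XOR aggregate of per-row CRC32 values, two different sets of rows can XOR to the same result, which is what happens here. A quick way to see the difference is to inspect the per-row checksums directly (a diagnostic query of my own, not part of the tools); run it on both master and slave, and the two differing rows stand out even though the aggregate matches:

mysql> SELECT a, b, CRC32(CONCAT_WS('#', `a`, `b`)) AS row_crc FROM `db1`.`t1` ORDER BY a;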

Case 2: As the tools are not able to identify the di

Truncated by Planet PHP, read more at the original (another 7497 bytes)

Link
Shlomi Noachorchestrator 1.1.18: new features, support for orchestrator-agent (13.10.2014, 13:00 UTC)

Outbrain's orchestrator 1.1.18 is released:

  • Support for orchestrator-agent (see announcement): agent pages, support for agent actions, initiation of seeds (provisioning of new/corrupted servers), auditing of seeds.
  • Clusters dashboard
  • Support for long query auditing
  • SSL
  • Proxy authentication (e.g. apache2 serving as reverse-proxy with LDAP)
  • User control
  • Better slave moving rules.


orchestrator now allows for seeding/provisioning of servers via orchestrator-agent. It communicates with agents executing on the MySQL hosts and coordinates the transfer of data between them. orchestrator now supports invocation and auditing of seeding operations, and protects you from breaking your seeds. orchestrator-agent is a solution to Outbrain’s specific use case, and may not appeal to the greater crowd. Nonetheless it is extensible and is released as open source.

A nice overview of all topologies is provided in the clusters > dashboard page. The dashboard points out the clusters, the number of servers per cluster, and the number and type of problems per cluster. It also provides configurable aliasing for your clusters, so you may name your mysql01.myservice.mydomain.com:3306 as “customer db”.

orchestrator will list each instance’s long-running queries in its own backend database, and will present the user with this data. It also provides “kill query” functionality (do note that polling for long-running queries is done only once per minute).

SSL is supported in agent communication.

orchestrator now accepts proxy authentication (configurable). This lets you put, for example, apache2 in front as a reverse proxy, which communicates with LDAP and authenticates access. The authenticated user is then relayed to orchestrator. With this method of authentication you are able to pre-define “power users”. In this authentication mode orchestrator limits user actions to read-only, unless the user is a power user. Thus, viewing the topologies is open to everyone, but moving slaves around is limited to a smaller group of people. Read more.

You may now move up a slave if its master is in a stopped-slave state, which allows extracting good, healthy slaves from underneath busy, slow relay slaves.

orchestrator is released by Outbrain under the Apache 2.0 license, and is free to use.


Link
Shlomi NoachAnnouncing orchestrator-agent (13.10.2014, 12:59 UTC)

orchestrator-agent is a side-kick, complementary project of orchestrator, implementing a daemon service on one's MySQL hosts which communicates with and accepts commands from orchestrator, built with the original purpose of providing an automated solution for provisioning new or corrupted slaves.

It was built by Outbrain, with Outbrain’s specific use case in mind. While we release it as open source, only a small part of its functionality will appeal to the public (this is why it’s not strictly part of the orchestrator project, which is a general-purpose, wide-audience solution). Nevertheless, it is a simple implementation of a daemon that can easily be extended by the community. The project is open for pull requests!

A quick breakdown of orchestrator-agent is as follows:

  • Executes as a daemon on linux hosts
  • Interacts and invokes OS commands (via bash)
  • Does not directly interact with a MySQL server running on that host (does not connect via mysql credentials)
  • Expects a single MySQL service on host
  • Can control the MySQL service (e.g. stop, start)
  • Is familiar with LVM layer on host
  • Can take LVM snapshots, mount snapshots, remove snapshots
  • Is familiar with the MySQL data directory, disk usage, file system
  • Can send snapshot data from a mounted snapshot on a running MySQL host
  • Can prepare data directory and receive snapshot data from another host
  • Recognizes local/remote datacenters
  • Controlled by orchestrator, two orchestrator-agents implement an automated and audited solution for seeding a new/corrupted MySQL host based on a running server.

Offline, hard-copy backups aside, Outbrain implements quick backups via LVM snapshots. Some slaves (depending on strength, datacenter, etc.) take the role of snapshot servers. These slaves take an LVM snapshot once per day and keep it open, unmounted. At any given time we might have multiple open snapshots on a snapshot server. At a few minutes’ notice, we are able to restore MySQL to the state of yesterday, the day before, two days before, and so on. This only takes mounting the desired snapshot and starting MySQL over the mounted volume.
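A sketch of that snapshot-and-restore flow (volume group, logical volume, and mount point names are all hypothetical):

# Take a daily snapshot of the MySQL data volume:
lvcreate --snapshot --size 10G --name mysql-snap-$(date +%F) /dev/vg0/mysql-data
# To restore to that point in time, mount the snapshot and start MySQL over it:
mount /dev/vg0/mysql-snap-2014-10-12 /var/lib/mysql
service mysql start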

We've in fact put this method to practice in at least one major occasion last year, that proved it to be of significant worth: we were able to fix a compromise to some dataset within minutes of action.

But we also use these snapshots on a daily basis, putting them constantly into practice (hence validating this backup method routinely): we provision new MySQL servers based on those snapshots. A new or corrupted server can be seeded with snapshot data (preferably from a host in the same datacenter). We don’t need to perform a backup now; we can take the one from tonight.

There is some fuss around this: one needs to verify MySQL is not running on the target host, clean up the target host, choose the best source host, choose the best snapshot on the source host, mount it, deliver the data, accept the data, do post-copy cleanup, start the MySQL service on the target host, ... Or we might choose to take a fresh, immediate snapshot from a server not configured as a snapshot server... We used to do all of this with shell scripts, but realized that would not sustain us.

Orchestrator-agent was invented with this particular use case in mind, and orchestrator was extended to support these operations. Orchestrator can act as the controller of a seed/provisioning operation, getting info from/to orchestrator-agent and coordinating two agents to send/receive the data.

The agent still relies on some external commands (via bash) to implement some functionality; to some extent, we have broken down complex shell scripts then wrapped again, now controlled and audited by the agent; but this will slowly change as we move more code into orchestrator-agent itself.

Should you use orchestrator-agent?

If, like us, you use LVM snapshots, at least as a partial form of backup - bingo!

Otherwise, orchestrator-agent can assist in stopping/starting the MySQL service, tailing the error log; simple stuff like that -- probably too much fuss for such basic operations.

But it can be easily extended to support other forms of backup (though do note that orchestrator-agent does not and should not perform scheduled backups; it merely manipulates them); so if you would like to contribute code to orchestrator-agent, please let us know! It would be nice to have a solution that works for multiple use cases.

Get it

orchestrator-agent is released under the Apache 2.0 license, and is free to use.

Pre-compiled, RPM and deb packages can be found on the releases page.


Link
Daniël van EedenDisabling old_passwords=1 (12.10.2014, 09:15 UTC)
It is possible to disallow users from using old_passwords=1. This can be done by adding 'maximum-old_passwords=0' to your my.cnf:
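For example (the [mysqld] group is the usual place for server options):

[mysqld]
maximum-old_passwords=0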

This prevents users from generating password hashes in the pre-4.1 format. In MySQL 5.7, old_passwords=1 is already disabled, so this is only useful on 5.6 and earlier.

Be aware that this also restricts old_passwords=2, which is needed for sha256 hashes.

mysql> select @@old_passwords;
+-----------------+
| @@old_passwords |
+-----------------+
|               0 |
+-----------------+
1 row in set (0.00 sec)

mysql> set old_passwords=1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+----------------------------------------------+
| Level   | Code | Message                                      |
+---------+------+----------------------------------------------+
| Warning | 1292 | Truncated incorrect old_passwords value: '1' |
+---------+------+----------------------------------------------+
1 row in set (0.00 sec)

mysql> select @@old_passwords;
+-----------------+
| @@old_passwords |
+-----------------+
|               0 |
+-----------------+
1 row in set (0.00 sec)

Link
Sergey PetruniaMariaDB 10.1: Better query optimization for ORDER BY … LIMIT (10.10.2014, 23:47 UTC)

For some reason, we’ve been getting a lot of issues with ORDER BY optimization recently. The fixes have passed Elena Stepanova’s scrutiny and I’ve pushed them to MariaDB 10.1. Now, MariaDB’s ORDER BY ... LIMIT optimizer:

  • Doesn’t make stupid choices when several multi-part keys and potential range accesses are present (MDEV-6402)
  • Always uses “range” (and not a full “index” scan) when it switches to an index to satisfy ORDER BY … LIMIT (MDEV-6657)
  • Tries hard to be smart and use cost/number of records estimates from other parts of the optimizer (MDEV-6384, MDEV-465, MySQL Bug#36817)
  • Takes full advantage of InnoDB’s Extended Keys feature when checking if filesort() can be skipped (MDEV-6796)

so, if you are struggling with poor optimization of ORDER BY ... LIMIT queries, help might be underway.
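For reference, the query shape these optimizations target looks like this (hypothetical schema; with an index on col1, the optimizer can read rows in index order and stop after ten matches instead of sorting the whole result set):

SELECT * FROM t WHERE col2 = 123 ORDER BY col1 LIMIT 10;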

I have to admit that the optimizations for ORDER BY ... LIMIT are still far from perfect (both in MySQL and in MariaDB). The code is old and ugly, and having spent some time debugging it, I can easily believe that it still does a poor job in some cases. The good part is that now we have some fresh knowledge of the ORDER BY … LIMIT optimization code, as well as RQG scripts for testing ORDER BY ... LIMIT queries. If you have any issues with ORDER BY ... LIMIT optimization, we would appreciate bugs being filed for them.

Link
Chris CalenderMariaDB 5.5.40 Overview and Highlights (10.10.2014, 22:46 UTC)

MariaDB 5.5.40 was recently released (it is the latest MariaDB 5.5), and is available for download here:

https://downloads.mariadb.org/mariadb/5.5.40/

This is a maintenance release, and so there are not too many big changes of note, just a number of normal bug fixes. However, there are a few items worth mentioning:

If interested, the official MariaDB 5.5.40 release notes are here:

https://mariadb.com/kb/en/mariadb/development/release-notes/mariadb-5540-release-notes/

And the full list of fixed bugs and changes in MariaDB 5.5.40 can be found here:

https://mariadb.com/kb/en/mariadb/development/changelogs/mariadb-5540-changelog/

Hope this helps.


Link
Sergey PetruniaA discovery - Index Condition Pushdown can cause a slowdown after all (10.10.2014, 21:56 UTC)

MariaDB 5.5 and then MySQL 5.6 got the Index Condition Pushdown (ICP) optimization (initially coded by yours truly). The idea of ICP is simple: after reading the index record, check the part of the WHERE condition that can be computed using index columns, and only then read the table record. That way, we avoid reading table rows that don’t satisfy the index condition.
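A hypothetical example of a query where ICP applies (table and column names are made up; when ICP is used, EXPLAIN shows “Using index condition” in the Extra column):

-- With a composite index on (key_part1, key_part2), the LIKE condition
-- can be checked against the index entry before fetching the full row:
SELECT * FROM tbl
WHERE key_part1 BETWEEN 10 AND 20 AND key_part2 LIKE '%abc%';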

It seems apparent that ICP can never make things slower. The WHERE clause has to be checked anyway, and not reading certain records can only make things faster.

That was what I thought, too, until recently Joffrey Michaie observed the contrary “in the wild”: we’ve got a real-world case where using Index Condition Pushdown was slower than not using it: MDEV-6713. The slowdown was about 20%, both on MariaDB and MySQL.

From what I could investigate so far, the slowdown is caused by these three factors working together:

  • a VARCHAR(255) column in the index. MySQL’s in-memory data representation for VARCHARs is not space efficient. If a column is defined as VARCHAR(255), any value will occupy the entire 255 bytes.
  • InnoDB’s row prefetch cache. When InnoDB detects that one is reading a lot of rows from an index, it pre-fetches index records and stores them in an internal cache. The cache uses the inefficient in-memory data representation.
  • Design of MySQL’s Item classes. One can’t evaluate an expression over a table record that is in InnoDB prefetch cache. Expression evaluation functions expect to find column values in the table’s “primary location”, internally known as table->record[0]. In order for ICP to check the condition, index columns have to be copied to table->record[0], first.

I hope we will be able to investigate this problem and post more about it soon. For now, the news is that ICP can cause a slowdown when the index has big VARCHAR columns.

Link
Link