Peter Zaitsev: MySQL community set to meet at Percona Live London 2014 (22.10.2014, 07:00 UTC)

The countdown is on for Europe’s largest annual MySQL event, Percona Live London 2014. The two days of technical tutorials and sessions, November 3-4, will focus on the latest MySQL industry trends, news, best practices – and a look at what’s on the near- and long-term horizon within the global MySQL ecosystem.

Percona Live London 2014 will bring attendees up to date on key areas including MySQL 5.7, database security, database as a service (DBaaS), Hadoop and high availability (HA), disaster recovery, replication and backup, performance and scalability, WebScaleSQL and much, much more.

Team Oracle will be in London, led by Tomas Ulin, vice president of Oracle’s MySQL engineering team, who will explain why MySQL just keeps getting better with the latest news for the MySQL Database, MySQL Cluster, MySQL Workbench… and more. Oracle’s Luis Soares, principal software engineer, and Andrew Morgan, MySQL HA product management, will provide insight into what’s in the latest 5.7 development milestone release and also what’s going on in the labs… particularly around MySQL replication. Seize the opportunity to learn how to leverage MySQL 5.7 replication to grow your business from their session, “MySQL Replication: What’s New in 5.7 and Beyond.”

If anything keeps DBAs up at night it’s database security – especially with recent revelations of vulnerabilities like the POODLE SSLv3 security flaw (CVE-2014-3566) and “Bash Bug,” also known as Shellshock (CVE-2014-6271). Attendees will have the opportunity to talk face-to-face with database security expert David Busby of Percona, who will also lead a session titled, “Security it’s more than just your database you should worry about.”

The official Percona Live London 2014 t-shirt!


Learn how to incorporate semi-synchronous replication to achieve failover – without data loss. Facebook’s Yoshinori Matsunobu and Santosh Banda will share how they did it at scale (across data centers) by extending MySQL internals along with some handy new self-made tools.

Meet the next-generation C connector for MySQL: libAttachSQL. It’s a new lightweight async C connector library for MySQL being developed from scratch by HP’s Advanced Technology Group. Andrew Hutchings, principal software engineer at Hewlett-Packard, will be on hand to share the latest on libAttachSQL.

Successful applications often become limited by MySQL performance. But tracking down and fixing those issues can be a huge drain on time and resources. Unless you think smart – spending time on what gives you the best return. Percona CEO Peter Zaitsev will explain how in his session, “Practical MySQL Performance Optimization.”

Percona Live London attendees will also learn from the real-life experiences of MySQL experts who share case studies. Shake hands with Art van Scheppingen, head of database engineering at Spil Games, who will explain how to serve out any page with an HA Sphinx environment.

Save yourself a quarter century by absorbing Tim Callaghan’s MySQL performance benchmarking tips, tricks and lessons learned. Tim, vice president of engineering at Tokutek, will share what he’s learned in the past 25 years maintaining the performance of database applications.

And of course there will be a MySQL community dinner! But be sure to

Truncated by Planet PHP, read more at the original (another 740 bytes)

Shlomi Noach: Pseudo GTID (22.10.2014, 05:22 UTC)

Pseudo GTID is a method to implement a GTID-like solution where slaves are easily connected to one another. This blog post and the following ones will describe work in progress (some 80% completed), where simulation of GTID makes for a good enough basis for refactoring replication topologies. I'm coding this in orchestrator, which already provides substantial infrastructure support for this.

The final goal: orchestrator will allow you to move a slave below another, using only the data available by those two slaves. The usage is obvious:

  • Easy master failover (master dead? Orchestrator will choose the most advanced slave to promote and make it master of its siblings)
  • Slave promotion in complex topologies (with deep nested topologies, be able to move a slave up the hierarchy even if its local master is corrupted).

This can all happen with your normal, non GTID, MySQL replication, using your normal binary log files & positions.

This work in progress is inspired by Sam Lambert at GitHub, who has worked on a similar solution with a different implementation. I also recall discussions with other DBAs who have built similar solutions.

Pseudo GTID

First things first: the basis for the proposed solution is a pseudo-GTID, a unique entry in the binary logs (not necessarily sequential; not necessarily in ascending order). While GTID implementations give a unique identifier to each entry in the binary log, with pseudo-GTID we accept an occasional (or frequent) unique entry in the binary log.

There are many ways to do so. Certainly a client can generate a unique Id and invoke some statement on MySQL involving that ID. That would serve as valid grounds for the proposed solution. But I like things to be contained within MySQL. Consider, for example, the following event, which would be my preferred choice in Statement Based Replication (for RBR solution, see next post):

drop table if exists test.pseudo_gtid;
create table if not exists test.pseudo_gtid (
  id int unsigned not null primary key,
  ts timestamp,
  gtid varchar(64) charset ascii
);

drop event if exists test.update_pseudo_gtid_event;

delimiter ;;
create event if not exists
  test.update_pseudo_gtid_event
  on schedule every 10 second starts current_timestamp
  on completion preserve
  enable
  do
    begin
      set @pseudo_gtid := uuid();
      insert into test.pseudo_gtid (id, ts, gtid) values (1, NOW(), @pseudo_gtid) on duplicate key update ts=NOW(), gtid=VALUES(gtid);
    end
;;

delimiter ;

The above is based on Making UUID() and RAND() replication safe. What do we get? Once in 10 seconds (or what have you), a unique entry is written to the binary log.
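To see why such a unique entry is useful, here is a much-simplified sketch (in Python, purely illustrative; orchestrator's actual implementation differs) of how a shared pseudo-GTID entry lets us compute which events a slave is missing relative to another server, using nothing but the two servers' logs:

```python
# Simplified sketch: find the latest unique pseudo-GTID entry both logs
# share; the events that follow it on one server are exactly the events
# the other server still lacks.

def align_from_pseudo_gtid(master_events, slave_events):
    """Return the master events the slave still needs to apply.

    Each event is a (pos, statement) tuple; pseudo-GTID statements are
    assumed to contain a globally unique token (e.g. a UUID).
    """
    slave_statements = {stmt for _, stmt in slave_events}
    # Scan the master's log backwards for the latest shared unique entry.
    for i in range(len(master_events) - 1, -1, -1):
        pos, stmt = master_events[i]
        if "pseudo_gtid" in stmt and stmt in slave_statements:
            # Everything after this shared marker is what the slave lacks.
            return master_events[i + 1:]
    raise ValueError("no shared pseudo-GTID entry found")

master_log = [
    (100, "insert into test.pseudo_gtid values ('uuid-aaa')"),
    (200, "insert into test.vals (id) values (17)"),
    (300, "insert into test.pseudo_gtid values ('uuid-bbb')"),
    (400, "insert into test.vals (id) values (18)"),
]
slave_log = master_log[:3]  # slave stopped right after the second marker

missing = align_from_pseudo_gtid(master_log, slave_log)
print(missing)  # [(400, "insert into test.vals (id) values (18)")]
```

The positions themselves differ between servers in real life; the point is that only the statement text of the unique marker needs to match.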

Consider that the event is already running by now, and the next conventional statements executed by the application:

master [localhost] {msandbox} (test) > create table test.vals(id int);
master [localhost] {msandbox} (test) > insert into test.vals (id) values (17);
master [localhost] {msandbox} (test) > insert into test.vals (id) values (18);
master [localhost] {msandbox} (test) > insert into test.vals (id) values (19);
master [localhost] {msandbox} (test) > insert into test.vals (id) values (23);

master [localhost] {msandbox} (test) > show master logs;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000036 |       531 |
| mysql-bin.000037 |      1269 |
| mysql-bin.000038 |      6627 |
| mysql-bin.000039 |      3313 |
+------------------+-----------+

Let's look at the binary logs content:

master [localhost] {msandbox} (test) > show binlog events in 'mysql-bin.000039';
| Log_name         | Pos  | Event_type  | Server_id | End_log_pos | Info                                                                                                                                        |

Truncated by Planet PHP, read more at the original (another 26593 bytes)

Peter Zaitsev: Percona XtraDB Cluster: How to run a 2-node cluster on a single server (21.10.2014, 13:53 UTC)
I reckon there’s little sense in running 2 or more Percona XtraDB Cluster (PXC) nodes in a single physical server other than for educational and testing purposes – but doing so is still useful in those cases. The most popular way of achieving this seems to be with server virtualization, such as making use of Vagrant boxes. But in the same way you can have multiple instances of MySQL running in parallel on the OS level in the form of concurrent mysqld processes, so too can you have multiple Percona XtraDB Cluster nodes. And the way to achieve this is precisely the same: using dedicated datadirs and different ports for each node.


Which ports?

Four TCP ports are used by Percona XtraDB Cluster:
  • the regular MySQL port (default 3306)
  • port for group (Galera) communication (default 4567)
  • port for State Transfer (default 4444)
  • port for Incremental State Transfer (default is: port for group communication (4567) + 1 = 4568)
Of course, when you have multiple instances on the same server the default values won’t work for all of them, so we need to define new ports for the additional instances and make sure the local firewall is open to them, if there is one active (iptables, SELinux, …).
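For example, one possible layout (the offsets below are my own illustrative choice, not PXC defaults for additional nodes) could be derived like this:

```python
# A sketch of one possible port layout for several PXC nodes on one host.
# The offsets are illustrative, not PXC requirements; the only rules are
# that no two nodes may share a port, and that IST defaults to the
# group-communication port + 1 unless explicitly overridden.

def node_ports(node_index, mysql_base=3306, group_base=4567, sst_base=4444):
    """Derive a non-overlapping port set for node N (0-based)."""
    group = group_base + 100 * node_index        # Galera group communication
    return {
        "mysql": mysql_base + node_index,        # regular client port
        "group": group,
        "sst": sst_base + node_index,            # State Snapshot Transfer
        "ist": group + 1,                        # Incremental State Transfer
    }

for n in range(2):
    print(n, node_ports(n))
# node 0 gets all the defaults: 3306 / 4567 / 4444 / 4568
```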


Installing Percona XtraDB Cluster, configuring and starting the first node

My test server was a fresh CentOS 6.5 configured with the Percona yum repository, from which I installed the latest Percona XtraDB Cluster (5.6.20-25.7.888.el6); note that you’ll need the EPEL repository as well to install socat, which is a dependency (see this bug). To avoid confusion, I’ve prevented the mysql service from starting automatically:
chkconfig --level 3 mysql off
chkconfig --del mysql

I could have installed PXC from the tarball but I decided to do it from the repositories to have all dependencies covered by yum. This is what my initial /etc/my.cnf looked like (note the use of default values):
[mysqld]
datadir = /var/lib/mysql
wsrep_cluster_name = singlebox
wsrep_node_name = node1
I’ve started by manually bootstrapping the cluster with this single node with the command:
$ mysqld_safe --defaults-file=/etc/my.cnf --wsrep-new-cluster

You should then be able to access this node through the local socket:
$ mysql -S /var/lib/mysql/mysql-node1.sock


Configuring and starting the second node

Then I created a similar configuration file for the second instance, which I named /etc/my2.cnf, with the following modifications:
datadir = /var/lib/mysql2
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_provider_options = "base_port=5020;"

Note the use of base_port: by having it defined, port 5020 is used for group communication and 5021 (the one above it) is reserved for IST (it’s the same as using gmcast.listen_addr=tcp://, just simpler).
You need to create the datadir and set up the right permissions on it for this second instance, otherwise MySQL won’t be able to create some files (like .pid and .err); you don’t, however, need to run the mysql_install_db script:
$ chown -R mysq

Truncated by Planet PHP, read more at the original (another 3368 bytes)

Jean-Jerome Schmidt: New Webinar: 9 DevOps Tips for Going in Production with Galera Cluster for MySQL - November 11th (21.10.2014, 09:49 UTC)
October 21, 2014
By Severalnines


Galera is a MySQL replication technology that can simplify the design of a high availability application stack. With a true multi-master MySQL setup, an application can now read and write from any database instance without worrying about master/slave roles, data integrity, slave lag or other drawbacks of asynchronous replication.


And that all sounds great until it’s time to go into production. Throw in a live migration from an existing database setup and devops life just got a bit more interesting ...


So if you are in devops, then this webinar is for you!


Operations is not so much about specific technologies, but about the techniques and tools you use to deploy and manage them. Monitoring, managing schema changes and pushing them into production, performance optimizations, configurations, version upgrades, backups; these are all aspects to consider – preferably before going live.


Let us guide you through 9 key tips to consider before taking Galera Cluster into production. 


New Webinar: 9 DevOps Tips for Going in Production with Galera Cluster for MySQL - November 11th





Tuesday, November 11th at 09:00 GMT (UK) / 10:00 CET (Germany, France, Sweden)

Register Now


North America/LatAm

Tuesday, November 11th at 9:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now



Johan Andersson, CTO, Severalnines


read more

Peter Zaitsev: Autumn: A season of MySQL-related conferences. Here’s my list (20.10.2014, 14:58 UTC)

Autumn is a season of MySQL-related conferences and I’m about to hit the road to speak at and attend quite a few of them.

This week I’ll participate in All Things Open, a local conference for me here in Raleigh, N.C. and therefore one I do not have to travel for. All Things Open explores open source, open tech and the open web in the enterprise. I’ll be speaking on SSDs for Databases at 3:15 p.m. on Thursday, Oct. 23 and I’ll also be participating in a book signing for the High Performance MySQL book at 11:45 p.m. at the “Meet the Speaker” table. We are also proud to be a sponsor of this show so please stop by and say “Hi” at our booth in the expo hall.

Following this show I go to Moscow, Russia for the Highload++ conference. This is a wonderful show for people interested in high-performance solutions for Internet applications and I attend almost every year. It has a great lineup of speakers from leading Russian companies as well as many top international speakers covering a lot of diverse technologies. I have three talks at this show, around application architecture, using indexes in MySQL, and SSD and flash storage for databases. I’m looking forward to reconnecting with my many Russian friends at this show.

From Highload I go directly to Percona Live London 2014 (Nov. 3-4) which is the show we’re putting together – which of course means it is filled with great in-depth information about MySQL and its variants. I think this year we have a good balance of talks from MySQL users such as Facebook, GitHub, eBay, Spil Games and the IE Domain Registry, as well as vendors with in-depth information about their products and experience with many customer environments – MySQL @ Oracle, HP, HGST, Percona, MariaDB, Pythian, Codership, Continuent, Tokutek, FromDual, OlinData. It looks like it is going to be a great show (though of course I’m biased) so do not forget to get registered if you have not already. (On Twitter use hashtag #PerconaLive)

The show I’m sorry to miss is the OpenStack Paris Summit. Even though it is so close to London, the additional visa logistics make it unfeasible for me to visit. There will be a fair number of Perconians at the show, though. Our guys will be speaking about a MySQL and OpenStack Deep Dive as well as Percona Server Features for OpenStack and Trove Ops. We’re also exhibiting at this show so please stop by our booth and say “hi.”

Finally there is AWS re:Invent in Las Vegas Nov. 11-14. I have not submitted any talks for this one but I’ll drop in for a day to check it out. We’re also exhibiting at this show so if you’re around please stop by and say “hi.”

This is going to be quite a busy month with a lot of events! There are actually more where we’re speaking or attending. If you’re interested in the events we’re participating in, there is a page on our website that tells you just that! I also invite you to submit papers to speak at the new

Truncated by Planet PHP, read more at the original (another 742 bytes)

Shlomi Noach: Making UUID() and RAND() replication safe (20.10.2014, 06:40 UTC)

MySQL's UUID() and RAND() functions both produce nondeterministic (pseudo-random) results. UUID()'s result is moreover bound to the host on which it executes. For these reasons, both are unsafe to replicate with STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));

master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The warning we got on the insert directly relates to the following inconsistency on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
+------+--------------------------------------+

The data on the slave is clearly inconsistent with the master's. The slave, replicating via STATEMENT binlog format, re-executes the INSERT command and gets a different UUID value.


One solution to the above is to generate the UUID value from your application. By the time MySQL gets the INSERT statement, the UUID value is a constant string, as far as MySQL is concerned.


However there's a way to do it from within MySQL, by decoupling the UUID() function from the INSERT statement. It takes a session variable. Consider:

master> set @safe_uuid := UUID();
Query OK, 0 rows affected (0.00 sec)

master> insert into test.uuid_test values (2, @safe_uuid);
Query OK, 1 row affected (0.02 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

And on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The reason why this succeeds is that MySQL stores session variable values used by DML queries in the binary log. It just so happened that @safe_uuid was assigned the UUID() value, but it could just as well have been assigned a constant or other computation. MySQL stores the resulting value in the binary log, where it is forced upon the slave. Check out this binary log snippet:

# at 14251
#141018 12:57:35 server id 1  end_log_pos 14319         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
SET @@session.sql_auto_is_null=0/*!*/;
# at 14319
#141018 12:57:35 server id 1  end_log_pos 14397         User_var
SET @`safe_uuid`:=_utf8 0x32396335316662392D353661642D313165342D623238342D336339373065613331656138 COLLATE `utf8_general_ci`/*!*/;
# at 14397
#141018 12:57:35 server id 1  end_log_pos 14509         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
insert into test.uuid_test values (2, @safe_uuid)
# at 14509
#141018 12:57:35 server id 1  end_log_pos 14536         Xid = 145

The same can be applied for RAND(). Funny thing about RAND() is that it is already taken care of by the binary log via SET @@RAND_SEED1, SET @@RAND_SEED2 statements (i.e. it works), though the documentation clearly states it is unsafe.

With Row Based Replication (RBR) the problem never arises in the first place since the binlog contains the values of the new/updated rows.

Peter Zaitsev: Innodb transaction history often hides dangerous ‘debt’ (17.10.2014, 14:02 UTC)

In many write-intensive workloads on the InnoDB/XtraDB storage engines you may see hidden and dangerous “debt” being accumulated – unpurged transaction “history” which, if not kept in check, will over time cause severe performance regression or will take all free space and cause an outage. Let’s talk about where it comes from and what you can do to avoid running into trouble.

Technical Background: InnoDB is an MVCC engine which means it keeps multiple versions of the rows in the database, and when rows are deleted or updated they are not immediately removed from the database but kept for some time – until they can be removed. For a majority of OLTP workloads they can be removed seconds after the change actually took place. In some cases though they might need to be kept for a long period of time – if there are some old transactions running in the system that might still need to look at an old database state. As of MySQL 5.6 Innodb has one or several “purge threads” which remove the old data that can be removed, though they might not be doing it fast enough for workloads with very intensive writes.

Does it really happen? I started looking into this problem based on some customer concerns and to my surprise I could very easily get the history to grow rapidly using a basic sysbench “update” workload. It is especially easy with the default innodb_purge_threads=1 setting, but even with innodb_purge_threads=8 it grows rather rapidly.
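To get an intuition for the dynamics, here is a toy model (assumed, round numbers; not measured data) of history growing while writes outpace purge, then draining once the workload stops:

```python
# Toy model of unpurged transaction history: whenever the write rate
# exceeds the purge rate a backlog accumulates, and it only drains after
# the write pressure stops -- the behavior described above with sysbench.

def history_length(write_rate, purge_rate, busy_secs, idle_secs):
    """Simulate history list length, second by second."""
    history = 0
    samples = []
    for t in range(busy_secs + idle_secs):
        writes = write_rate if t < busy_secs else 0
        purged = min(history + writes, purge_rate)
        history = history + writes - purged
        samples.append(history)
    return samples

# 10k row changes/sec against a starved purge of 4k/sec for 60s, then idle:
growth = history_length(10000, 4000, busy_secs=60, idle_secs=120)
print(max(growth))   # 360000 -- peak backlog while the workload runs
print(growth[-1])    # 0 -- backlog fully purged some time after it stops
```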

If we take a look at the purging speed (which comes from the INNODB_METRICS table) we can see that purge is very much starved by the concurrent sysbench process, and that it speeds up greatly once sysbench finishes:

Now, to be frank, this is not an easy situation to get into with the majority of workloads: with short transactions, when the undo space is kept in the buffer pool, purge is able to keep up. If the undo space happens to be evicted from the buffer pool, however, the purge speed can slow down drastically and the system might not be able to keep up anymore. How could that happen? There are two common variants….

Long Running Transaction: If you have some long-running transaction, for example mysqldump on a larger table, purging has to pause while that transaction is running and a lot of history will accumulate. If there is enough IO pressure, a portion of the undo space will be removed from the buffer pool.

MySQL Restart: Even with a modest history length, restarting MySQL will wash the undo space away from memory and will cause purge to be IO-bound. This is of course unless you’re using InnoDB Buffer Pool save and reload.

How do you check if your UNDO space is well cached? In Percona Server I can use these commands:

mysql> select sum(curr_size)*16/1024 undo_space_MB from XTRADB_RSEG;
+---------------+
| undo_space_MB |
+---------------+
|     1688.4531 |
+---------------+
1 row in set (0.00 sec)
mysql> select count(*) cnt, count(*)*16/1024 size_MB, page_type from INNODB_BUFFER_PAGE group by page_type;
+--------+-----------+-------------------+
| cnt    | size_MB   | page_type         |
+--------+-----------+-------------------+
|     55 |    0.8594 | EXTENT_DESCRIPTOR |
|      2 |    0.0313 | FILE_SPACE_HEADER |
|    108 |    1.6875 | IBUF_BITMAP       |
|  17186 |  268.5313 | IBUF_INDEX        |
| 352671 | 5510.4844 | INDEX             |
|     69 |    1.0781 | INODE             |
|    128 |    2.0000 | SYSTEM            |
|      1 |    0.0156 | TRX_SYSTEM        |
|   6029 |   94.2031 | UNDO_LOG          |
|  16959 |  264.9844 | UNKNOWN           |
+--------+-----------+-------------------+
10 rows in set (1.65 sec)

This shows that the total undo space size is now 1.7GB, with less than 100MB of it cached in the buffer pool….
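The MB figures above are simply page counts multiplied by the default 16KB InnoDB page size; for example, the UNDO_LOG line:

```python
# The MB column in the INNODB_BUFFER_PAGE query above is just
# page_count * 16 KiB (the default InnoDB page size) expressed in MB:
def pages_to_mb(pages, page_size_kb=16):
    return pages * page_size_kb / 1024

cached_undo_mb = pages_to_mb(6029)   # UNDO_LOG pages in the buffer pool
print(round(cached_undo_mb, 4))      # 94.2031, matching the output above
```

At roughly 94MB cached out of the 1688MB of total undo space reported by XTRADB_RSEG, only about 5% of the undo space is in memory.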

Here are a few graphs from running a heavy concurrent query during a lighter workload, where purging could keep up. In this case I used the “injection” benchmark in sysbench, setting --trx-rate to 50% of what the system showed as its peak.

mysql> select count(distinct k+ length(pad)) from sbtest1;
+--------------------------------+
| count(distinct k+ length(pad)) |
+--------------------------------+
|                       30916851 |
+--------------------------------+
1 row in set (28 min 32.38 sec)


Truncated by Planet PHP, read more at the original (another 6815 bytes)

Jean-Jerome Schmidt: Integrating ClusterControl with FreeIPA and Windows Active Directory for Authentication (17.10.2014, 10:21 UTC)
October 17, 2014
By Severalnines

Integrating ClusterControl with a corporate LDAP directory is a common task for many IT organizations. In an earlier blog, we showed you how to integrate ClusterControl with OpenLDAP. In this post, we will show you how to integrate with FreeIPA and Windows Active Directory. 


How ClusterControl Performs LDAP Authentication


ClusterControl supports the LDAPv3 protocol, based on RFC2307. More details on this can be found in the documentation.


When authenticating, ClusterControl will first bind to the directory tree server (LDAP Host) using the specified Login DN user and password, then it will check whether the username you entered exists as a uid or cn under the User DN. If it does, it will then use that username to bind against the LDAP server to check whether it belongs to the group configured as LDAP Group Name in ClusterControl. If it does, ClusterControl will then map the user to the appropriate ClusterControl role and grant access to the UI.
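In pseudo-code form, the flow looks roughly like this (a condensed illustration with a dict standing in for the directory; the names mirror the ClusterControl settings but this is not ClusterControl's actual code):

```python
# Condensed sketch of the two-step bind-then-group-check flow described
# above. A dict stands in for the LDAP directory; no real server involved.

DIRECTORY = {
    "users": {"alice": {"password": "s3cret", "groups": ["dba_team"]}},
    "bind_accounts": {"cn=admin,dc=example,dc=com": "adminpass"},
}
GROUP_TO_ROLE = {"dba_team": "Admin"}  # LDAP Group Name -> CC role mapping

def authenticate(login_dn, login_pw, username, password):
    # Step 1: bind with the configured Login DN credentials.
    if DIRECTORY["bind_accounts"].get(login_dn) != login_pw:
        return None
    # Step 2: check the username exists under the User DN and re-bind as it.
    user = DIRECTORY["users"].get(username)
    if user is None or user["password"] != password:
        return None
    # Step 3: map an LDAP group to a ClusterControl role, if configured.
    for group in user["groups"]:
        if group in GROUP_TO_ROLE:
            return GROUP_TO_ROLE[group]
    return None  # no matching group: access denied

print(authenticate("cn=admin,dc=example,dc=com", "adminpass",
                   "alice", "s3cret"))  # Admin
```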


The following flowchart summarizes the workflow:




FreeIPA is a Red Hat sponsored open source project which aims to provide an easily managed Identity, Policy and Audit (IPA) suite primarily targeted towards networks of Linux and Unix computers. It is easy to install/configure, and is an integrated security information management solution combining Linux (Fedora), 389 Directory Server, MIT Kerberos, NTP, DNS, Dogtag (Certificate System).


read more

Peter Zaitsev: Percona Toolkit for MySQL with MySQL-SSL Connections (16.10.2014, 13:06 UTC)

I recently had a client ask me how to use Percona Toolkit tools with an SSL connection to MySQL (MySQL-SSL). SSL connections aren’t widely used in MySQL due to most installations being within an internal network. Still, there are cases where you could be accessing MySQL over public internet or even over a public “private” network (ex: WAN between two colo datacenters). In order to keep packet sniffers at bay, the connection to MySQL should be encrypted.

If you are connecting to Amazon RDS from home or office (ie: not within the AWS network) you better be encrypted!

As there is already a MySQL Performance Blog post on how to setup MySQL SSL connections, we can skip that and dive right in.

As you probably know, the mysql client can read multiple configuration files, the primary one being /etc/my.cnf. You probably also know that the client reads a config file in your $HOME directory: .my.cnf (that’s dot-my-dot-cnf). It is inside this file that we can set parameters for our shell-user account when connecting to MySQL hosts.

Percona Toolkit uses Perl’s DBI with DBD::mysql to make connections to MySQL hosts. This library is linked against the libmysqlclient C library, which is responsible for reading and parsing the global config file as well as your $HOME config file. Let’s set some options here that are not directly available in the toolkit scripts. Using $MY_FAVORITE_EDITOR, edit your $HOME/.my.cnf as such:

[client]
user = myuser
password = foobar
ssl-ca = /Users/drmac/ca-cert.pem

You must use the absolute path to the CA file. Relative paths won’t cut it:

ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failed

Test your connection first using the mysql client:

asura:~ drmac$ mysql -h -e "SHOW STATUS LIKE 'Ssl_cipher'"
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

Excellent! Now we can use any Percona Toolkit script and connect via SSL:

asura:~ drmac$ pt-table-checksum -h -d foo -t zipcodes
10-13T14:10:02      0      0    45358       7       0   5.959 foo.myzipcodes


Unfortunately, Percona Toolkit scripts are hard-coded to read the [client] section of your .my.cnf. If you don’t want to overwrite any existing configuration that may be present, you can make a new configuration and specify that file to any toolkit script using -F. Again, relative paths won’t work here. Use the absolute path; even if you are in the same directory.

asura:~ drmac$ cp .my.cnf mytestconfig.cnf
asura:~ drmac$ rm .my.cnf
asura:~ drmac$ pt-table-checksum -h -d foo -t zipcodes -F /Users/drmac/mytestconfig.cnf
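As an aside, the [client]-section lookup that libmysqlclient performs can be mimicked with Python's configparser, just to illustrate which values the toolkit ends up seeing (illustrative only; the real parsing is done in C, and the file contents are the example values from this post, not real credentials):

```python
# Sketch of the [client] section lookup: the toolkit scripts read the
# [client] section of ~/.my.cnf (or the file passed via -F) through
# libmysqlclient. configparser mimics that lookup for illustration.
import configparser

my_cnf = """
[client]
user = myuser
password = foobar
ssl-ca = /Users/drmac/ca-cert.pem
"""

cfg = configparser.ConfigParser()
cfg.read_string(my_cnf)
print(cfg["client"]["ssl-ca"])  # the absolute path the SSL setup requires
```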

Now you can continue using our awesome tools in a secure manner.


The post Percona Toolkit for MySQL with MySQL-SSL Connections appeared first on MySQL Performance Blog.

Chris Calender: Ignoring the lost+found Directory in your Datadir (15.10.2014, 22:37 UTC)

I still get asked about the lost+found directory enough, and so I wanted to provide a current update.

The lost+found directory is a filesystem directory created at the root level of a mounted drive. Thus it is common to see it if you create your MySQL datadir at the root level of a mounted drive.

In the past, you could ignore it, if it wasn’t too problematic for you, or you could move your datadir down a level, and then it wouldn’t be created in the datadir anymore.

However, there is now the --ignore-db-dir option. It is actually not too new (it’s been in MariaDB since 5.3.9 and 5.5.28, and in MySQL as of 5.6.3), but I don’t think many are too familiar with it.

But when you do run into this problem, some/many would prefer to add a single line to the config file rather than move the datadir.

To do this, just add the following option to your my.cnf file, under the [mysqld] section (it cannot be set dynamically):

ignore_db_dirs=lost+found
And just to show the example:

Before updating my.cnf file:

mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 5.5.40-MariaDB-log |
+--------------------+

mysql> show global variables like 'ignore_db_dirs';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| ignore_db_dirs |       |
+----------------+-------+

Update my.cnf file and restart mysqld:

mysql> show global variables like 'ignore_db_dirs';
+----------------+------------+
| Variable_name  | Value      |
+----------------+------------+
| ignore_db_dirs | lost+found |
+----------------+------------+

mysql> use lost+found
ERROR 1102 (42000): Incorrect database name 'lost+found'

Now you can see that the lost+found directory is ignored.

Of course, you can ignore multiple directories. However, if you need to add more than one, then you *must* use multiple instances of the ignore_db_dirs= option, one for each directory you want to ignore. That is, you cannot separate them with a comma, even though that is how the value will be displayed once more than one directory is being ignored (the comma is treated as part of the name, so neither of the dirs you want to ignore would be ignored):

For instance, if I want to ignore both “lost+found” and “test”, then I must add the following to the config file:

ignore_db_dirs=lost+found
ignore_db_dirs=test
Then restart mysqld:

mysql> show global variables like 'ignore_db_dirs';
+----------------+-----------------+
| Variable_name  | Value           |
+----------------+-----------------+
| ignore_db_dirs | lost+found,test |
+----------------+-----------------+
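A toy parser (illustrative only, not the server's actual option handling) shows why the repeated option works while a comma-joined value would not:

```python
# Why repeating the option works while a comma-separated value would not:
# each ignore_db_dirs occurrence contributes one directory to the list,
# but a single "lost+found,test" value would be kept verbatim as one
# (invalid) directory name, so neither directory gets ignored.

def parse_ignore_db_dirs(config_lines):
    dirs = []
    for line in config_lines:
        key, _, value = line.partition("=")
        if key.strip() == "ignore_db_dirs":
            dirs.append(value.strip())   # each occurrence appends one dir
    return dirs

good = parse_ignore_db_dirs(["ignore_db_dirs=lost+found",
                             "ignore_db_dirs=test"])
bad = parse_ignore_db_dirs(["ignore_db_dirs=lost+found,test"])
print(good)  # ['lost+found', 'test']  -- two directories ignored
print(bad)   # ['lost+found,test']     -- one bogus name, neither ignored
```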

Hope this helps.

