Jean-Jerome Schmidt: MySQL Replication and GTID-based failover - A Deep Dive into Errant Transactions (27.2.2015, 11:13 UTC)

For years, MySQL replication was based on binary log events - all a slave knew was the exact event and the exact position it had just read from the master. Any single transaction from a master may have ended up in different binary logs on different hosts, and at different positions within those logs. It was a simple solution that came with limitations - more complex topology changes could require an admin to stop replication on the hosts involved, or could cause other issues; e.g., a slave couldn’t be moved down the replication chain without a time-consuming rebuild process (we couldn’t easily change replication from A -> B -> C to A -> C -> B without stopping replication on both B and C). We’ve all had to work around these limitations while dreaming about a global transaction identifier.

GTID was introduced in MySQL 5.6, and brought some major changes in the way MySQL operates. First of all, every transaction has a unique identifier which identifies it in the same way on every server. It no longer matters in which binary log a transaction was recorded, or at which position - all you need to know is the GTID: ‘966073f3-b6a4-11e4-af2c-080027880ca6:4’. A GTID is built from two parts - the unique identifier of the server where a transaction was first executed, and a sequence number. In the above example, we can see that the transaction was executed by the server with a server_uuid of ‘966073f3-b6a4-11e4-af2c-080027880ca6’ and it was the 4th transaction executed there. This information is enough to perform complex topology changes - MySQL knows which transactions have been executed and therefore knows which transactions need to be executed next. Forget about binary logs, it’s all in the GTID.

So, where can you find GTIDs? You’ll find them in two places. On a slave, in 'show slave status;' you’ll find two columns: Retrieved_Gtid_Set and Executed_Gtid_Set. The first covers GTIDs which were retrieved from the master via replication; the second lists all transactions which were executed on the given host - whether via replication or locally.
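For a quick look, you can inspect these sets directly. A minimal sketch, with illustrative GTID values:

mysql> SHOW SLAVE STATUS\G
...
    Retrieved_Gtid_Set: 966073f3-b6a4-11e4-af2c-080027880ca6:1-4
     Executed_Gtid_Set: 966073f3-b6a4-11e4-af2c-080027880ca6:1-4
...
mysql> SELECT @@global.gtid_executed;  -- the executed set, also available on masters, which have no slave status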

 

Setting up a Replication Cluster the easy way

We’ll use the Severalnines Configurator to automatically deploy our replication setup. First, you need to point your browser at:
http://www.severalnines.com/replication-configurator/

The first page doesn’t give you many options, so you may as well click the ‘Next’ button.

The next screen contains some options regarding the operating system, where the infrastructure will be created, and so forth. All of the options are explained in detail; one thing worth commenting on is the ‘Number of MySQL Slaves’. By default, the deploy scripts create a master-master pair connected by semi-sync replication. This is the smallest possible block, created when you set the number of slaves to 0. Every slave that you want to create will be connected to this master-master pair.

The third screen is related to the MySQL configuration of the database nodes - you can define how ‘large’ the nodes will be in terms of CPU and memory, and you can also set up InnoDB buffer pool sizes and the predicted workload pattern.

The last screen lets you fill in the IP addresses of the ClusterControl server and the nodes in the replication setup. 

Finally, you need to fill in the email address to which the file with deployment scripts will be sent.

 

Deployment

For this blog post, let’s a

Truncated by Planet PHP, read more at the original (another 11145 bytes)

Link
Peter Zaitsev: 3 handy tools to remove problematic MySQL processes (27.2.2015, 08:00 UTC)

DBAs often encounter situations where they need to kill queries to ensure there are no long-running queries on a MySQL server that would impact performance. Long-running queries can be the result of many factors. Fortunately, Percona Server contains some handy tools to remove problematic MySQL processes. I will highlight all of the tools via some examples in this post.

pt-kill:
There have been some good posts on this blog about the pt-kill tool, like this one by Arunjith Aravindan titled “How a set of queries can be killed in MySQL using Percona Toolkit’s pt-kill.” Let’s dive into pt-kill a bit further with a few more examples. What does pt-kill do? It kills MySQL connections. Say you wanted to run pt-kill from a cronjob and then get an email for every killed process/query. Here is a typical example of that.

$ pt-kill --interval 1 --run-time 1 --busy-time 5 --log /path/to/kill_long_running_thread.log --match-info "^(select|SELECT|Select)" --kill --print --user=xxxxxx --password=xxxxxxxxxx

Assume this is running from a cronjob. When pt-kill executes, it will kill queries running longer than 5 seconds. By default, pt-kill runs forever; the --run-time option tells pt-kill how long to run before exiting. If the --interval and --busy-time parameters are used together, then the explicit --interval value is used; otherwise the default interval is 30 seconds. Note: this will only kill read queries, as per the --match-info parameter.
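As a sketch of the cron side (the once-a-minute schedule and the paths here are illustrative - adjust them to your environment):

* * * * * /usr/bin/pt-kill --interval 1 --run-time 1 --busy-time 5 --log /path/to/kill_long_running_thread.log --match-info "^(select|SELECT|Select)" --kill --print --user=xxxxxx --password=xxxxxxxxxx

Since --run-time 1 makes the tool exit after a second, each cron invocation performs one short kill sweep and no long-lived instances overlap.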

The above command will log all killed queries in the file referenced with the --log option. If you need to be notified via email for every killed query, the command below will do it. Of course, you need to have the system configured to send email.

#!/bin/bash
tail -n 0 -F /path/to/kill_long_running_thread.log | while read LOG
do
echo "$LOG" | mail -s "pt-kill alert" sample@test.com
done

You can execute this shell script in the background within screen or with the nohup utility to monitor the log file continuously. It will send an email whenever a new killed query is appended to the log file referenced in the pt-kill command. Unfortunately, there is no notify-by-email option in pt-kill at the moment, so this is sort of a workaround.
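For example, assuming the script above was saved as /path/to/watch_kill_log.sh (a hypothetical name):

$ chmod +x /path/to/watch_kill_log.sh
$ nohup /path/to/watch_kill_log.sh >/dev/null 2>&1 &

screen users can instead start the script in a dedicated window and detach.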

In order to log all killed queries into a database table, you will need to use the --log-dsn option as per the example below.

$ pt-kill --interval 1 --busy-time 1 --create-log-table --log-dsn=h=localhost,D=percona,t=kill_log --daemonize --match-info "^(select|SELECT|Select)" --kill

All killed queries will be logged into the percona.kill_log table. The --daemonize option will run this command in the background forever and will kill all SELECT queries running longer than 1 second (--busy-time 1). The --interval option instructs pt-kill to scan processes every 1 second (--interval 1).

mysql> select * from kill_log;
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-

Truncated by Planet PHP, read more at the original (another 10446 bytes)

Link
Jean-Jerome Schmidt: Press Release: Severalnines primed for even stronger customer growth (26.2.2015, 15:29 UTC)

Severalnines primed for even stronger customer growth 
Europe’s database ‘anti-startup’ sets big objectives to improve open source Enterprise IT 

London, UK, 26 February 2015 - Severalnines, a database clustering pioneer which helps businesses automate and manage highly available open source SQL and NoSQL database clusters, reported over 100 percent sales growth last year. The growth of its revenue and customer base was fuelled by winning enterprise accounts such as BT, Orange and Cisco. Today, Severalnines serves over 100 enterprise customers and is in use in over 7,000 database deployments by community users.    

Severalnines’ flagship ClusterControl platform helps deploy, monitor, manage and scale SQL and NoSQL open source database clusters with cloud providers like AWS and Google, on OpenStack, and in on-premise deployments. Control of database infrastructure across such mixed environments, which is usually the case in large enterprises, makes Severalnines the ideal platform to support modern business IT.

Severalnines ClusterControl provides full operational visibility for database clusters. It reduces administrative burden by configuring and deploying these clusters within minutes - whether on-premise or in the cloud. Databases currently supported include MySQL with its main variations MariaDB, Percona XtraDB and Galera Cluster, clustered or for single instances, plus MongoDB. 

Bringing their experience working for the European company behind the most popular open source database, MySQL, Severalnines Co-Founders Vinay Joosery and Johan Andersson have created a new type of European software company. To date, Severalnines is self-funded, having taken no external capital. Its commercial success is based on its vision of providing one intuitive way to manage today’s complex, heterogeneous database environments.

Vinay Joosery praised the ongoing growth of the company and said: “In the current age of virtualised environments in public/private clouds, high availability is even more important and is typically achieved via database clustering. However, many companies struggle to implement database clustering, as it requires a great deal of costly expertise which is difficult to find. ClusterControl completely automates the deployment and ongoing management of database clusters; our ultimate aim is to provide telco-grade high availability without the telco price tag."

To join Severalnines’ growing customer base, please click here.

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. The company has enabled over 7,000 deployments to date via its popular online database configurator. Customers currently include BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today, visit http://www.severalnines.com/company.

Press contact
Positive Marketing
Ed Stevenson | Tom Foster
estevenson@positivemarketing.com
0203 637 0644

or 

Severalnines AB
Jean-Jérôme Schmidt
jj@severalnines.com


Truncated by Planet PHP, read more at the original (another 1467 bytes)

Link
Peter Zaitsev: Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message? (26.2.2015, 08:00 UTC)

If you use Percona Server 5.5 and you have configured it to use multiple buffer pool instances, then sooner or later you’ll see the following lines in the server’s error log, and chances are you’ll be worried about them:

InnoDB: detected cycle in LRU for buffer pool 5, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 3, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 7, skipping to next buffer pool.

Worry not, as this is mostly harmless. It’s becoming a February tradition for me (Fernando) to face a question about this subject (OK, maybe it’s a coincidence), and this time I’ve teamed up with my dear colleague and software engineer George Lorch to provide you the most complete blog post ever published on this topic (with a belated thank you! to Ernie Souhrada, with whom I also discussed this same matter one year ago).

InnoDB internals: what is the “LRU”?

There’s a short and to-the-point section of the MySQL manual that explains clearly what the InnoDB buffer pool is, how it operates and why it plays such an important role in MySQL performance. If you’re interested in understanding InnoDB internals, then that page is a good start. In this section we’ll limit ourselves to explaining what the “LRU” in our subject message is, so we’ll only dig slightly into InnoDB internals - enough to provide some context. Here’s a quick introduction to the buffer pool, quoting from the above manual page:

InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. (…) Ideally, you set the size of the buffer pool to as large a value as practical, leaving enough memory for other processes on the server to run without excessive paging. The larger the buffer pool, the more InnoDB acts like an in-memory database, reading data from disk once and then accessing the data from memory during subsequent reads.

In practice, however, we can rarely fit our whole dataset inside the InnoDB buffer pool so there must be a process to manage this limited pool of memory pages:

InnoDB manages the pool as a list, using a variation of the least recently used (LRU) algorithm. When room is needed to add a new block to the pool, InnoDB evicts the least recently used block and adds the new block to the middle of the list.

There you go: InnoDB employs a variation of the Least Recently Used algorithm, called the midpoint insertion strategy, to manage the pages within the buffer pool. We should mention that it does make exceptions, such as during a full table scan, when it knows the loaded pages might end up being read only a single time.
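This midpoint behavior is tunable. A small sketch, assuming MySQL/Percona Server 5.5 or later (the values shown are common defaults - verify them on your version):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_old_blocks%';
mysql> SET GLOBAL innodb_old_blocks_pct = 37;    -- share of the LRU list reserved for "old" pages
mysql> SET GLOBAL innodb_old_blocks_time = 1000; -- ms a page must survive in the old sublist before promotion

It is innodb_old_blocks_time that protects the pool from being flushed out by a one-off table scan.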

Dumping and reloading the buffer pool

Before we can get to the main point of this article, let’s first examine why you would want to dump the buffer pool to disk - that is at the core of the matter here, as it’s when those warning messages we’re discussing may appear.

When you start a MySQL server, the buffer pool is empty by default. Performance is at its worst at this point because no data can be found in memory, so in practice each request for data results in an I/O operation to retrieve it from disk and bring it into memory. Over time the buffer pool gets filled and performance improves - more and more data can now be found in memory. With yet more time we reach a peak performance state: the buffer pool is not only full, but filled with the most popular data. The time between the start of the server and reaching this optimum state is called server warm-up. How long it takes depends mostly on two things: the size of the buffer pool and the level of activity of the server - the less busy it is, the fewer requests it gets, and thus the more time is needed until the popular data is fully loaded.

Now, there could be a shortcut: what if we save the buffer pool to a disk file before we stop MySQL? We could later use it to reload the buffer pool to an optimum state when we restart the server, thus decreasing the warm-up period dramatically.
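As a sketch of how this looks on MySQL 5.6 and later (Percona Server 5.5 shipped its own earlier implementation under different variable names, so check the documentation for your version):

[mysqld]
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON

A dump or reload can also be triggered on a running server:

mysql> SET GLOBAL innodb_buffer_pool_dump_now = ON;
mysql> SET GLOBAL innodb_buffer_pool_load_now = ON;

Only page identifiers are saved, not page contents, so the dump file is small and a reload re-reads the actual data from the tablespaces.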

Percona was a pioneer in this field

Truncated by Planet PHP, read more at the original (another 4976 bytes)

Link
Oli Sennhauser: MySQL Enterprise Incremental Backup simplified (25.2.2015, 18:41 UTC)

MySQL Enterprise Backup (MEB) has the capability to make real incremental (differential and cumulative?) backups. The current releases are quite cool and you should really look at them...

Unfortunately the original MySQL documentation is much too complicated for my simple mind. So I did some testing and simplified it a bit for our customers...

If you want to dive into the original documentation, please look here: Making an Incremental Backup.

If you want to use MySQL Enterprise Backup, please let us know and we will send you a quote...

Prepare MySQL Backup infrastructure

mkdir /backup/full /backup/incremental1 /backup/incremental2

Full MySQL Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full apply-log

First MySQL Incremental Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental1 backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental1 apply-incremental-backup

Second MySQL Incremental Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental2 backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental2 apply-incremental-backup

and so on...

MySQL Restore

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full copy-back
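Note that copy-back generally requires the server to be stopped and an empty datadir, and file ownership needs fixing afterwards. A minimal sketch of the surrounding steps (assuming the datadir is /var/lib/mysql):

service mysql stop
rm -rf /var/lib/mysql/*
# ... run the copy-back command above ...
chown -R mysql:mysql /var/lib/mysql
service mysql start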

Have fun with MySQL Enterprise Backup. If you need any help with your MySQL Backup concept, please let us know.

Link
Peter Zaitsev: Using MySQL Event Scheduler and how to prevent contention (25.2.2015, 11:00 UTC)

MySQL introduced the Event Scheduler in version 5.1.6. The Event Scheduler is a MySQL-level “cron job” which runs events inside MySQL. Up until now, this was not a very popular feature; however, it has gotten more popular since the adoption of Amazon RDS, as well as similar MySQL database-as-a-service offerings where there is no OS-level access.

What is important to understand about the Event Scheduler is that it has no protection against multiple executions (neither does Linux cron). Let’s imagine you have created an event that executes every 10 seconds, but the logic inside the event (i.e., queries or a stored procedure call) can take longer than 10 seconds (perhaps under high load), so executions can pile up. In the worst case, when an event contains a set of “insert” + “update”/”delete” statements inside a transaction, it can cause a deadlock.

Adding “get_lock” conditions inside the event will help to prevent such a situation:

If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking. Read more at event_scheduler documentation.

The GET_LOCK() function can be used for communication between threads. The following example illustrates using get_lock:

DELIMITER //
CREATE EVENT testlock_event ON SCHEDULE EVERY 2 SECOND DO
BEGIN
 DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
 BEGIN
   DO RELEASE_LOCK('testlock_event');
 END;
 IF GET_LOCK('testlock_event', 0) THEN
   -- add some business logic here, for example:
   -- insert into test.testlock_event values(NULL, NOW());
  END IF;
  DO RELEASE_LOCK('testlock_event');
END;
//
DELIMITER ;

The DECLARE CONTINUE HANDLER FOR SQLEXCEPTION is needed here to release the lock even if the event fails or is killed.

The above GET_LOCK / RELEASE_LOCK combination will help to prevent contention inside the MySQL Event Scheduler.
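One more prerequisite: events only fire when the scheduler itself is running, and it is disabled by default in these versions. A quick sketch:

mysql> SET GLOBAL event_scheduler = ON;
mysql> SHOW PROCESSLIST;  -- an 'event_scheduler' daemon thread should now be visible

To make this survive restarts, also set event_scheduler=ON in my.cnf.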

The post Using MySQL Event Scheduler and how to prevent contention appeared first on MySQL Performance Blog.

Link
Peter Zaitsev: Is MySQL’s innodb_file_per_table slowing you down? (24.2.2015, 11:00 UTC)

MySQL’s innodb_file_per_table is a wonderful thing – most of the time. Having every table use its own .ibd file allows you to easily reclaim space when dropping or truncating tables. But in some use cases, it may cause significant performance issues.

Many of you in the audience are responsible for running automated tests on your codebase before deploying to production. If you are, then one of your goals is having tests run as quickly as possible so you can run them as frequently as possible. Oftentimes you can change specific settings in your test environment that don’t affect the outcome of the test, but do improve throughput. This post discusses how innodb_file_per_table is one of those settings.

I recently spoke with a customer whose use case involved creating hundreds of tables on up to 16 schemas concurrently as part of a Jenkins testing environment. This was not in production, so performance was far more important than durability. They’d run their tests, and then drop the schemas. This process took close to 20 minutes. They asked “How can we make this faster?”

Due to the number of tables involved innodb_file_per_table seemed a likely culprit.

It’s been noted here on the MySQL Performance Blog that innodb_file_per_table can cause table creation and drops to slow down. But what exactly is the performance hit? We wanted to find out.
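Since innodb_file_per_table is a dynamic variable, it is easy to experiment with in a test-only environment. A sketch:

mysql> SET GLOBAL innodb_file_per_table = OFF;  -- tables created by new sessions go into the shared tablespace

or, persistently, in my.cnf:

[mysqld]
innodb_file_per_table = 0

Existing tables keep their .ibd files; only newly created tables are affected - which is exactly what a create-heavy test suite exercises.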

The innodb_file_per_table Test:

On a test server running CentOS release 6.5, an xfs filesystem, and Percona Server 5.6.22-71.0-log, I ran the following homemade benchmark bash script:

[root@host ~]# time $(for db in {1..16};
do mysql -e "create database bench$db";
$(for tb in {1..500}; do $(mysql bench$db -e "create table tab${tb} (i int) engine=innodb"); done) & done)

If you open the mysql client in another screen or terminal, you should see something like this:

...
+-------+------+-----------+---------+---------+------+----------------+------------------------------------------+-----------+---------------+         
| Id    | User | Host      | db      | Command | Time | State          | Info                                     | Rows_sent | Rows_examined |         
+-------+------+-----------+---------+---------+------+----------------+------------------------------------------+-----------+---------------+         
| 80013 | root | localhost | NULL    | Query   |    0 | init           | show processlist                         |         0 |             0 |         
| 89462 | root | localhost | bench5  | Query   |    0 | creating table | create table tab95 (i int) engine=innodb |         0 |             0 |         
| 89466 | root | localhost | bench8  | Query   |    0 | creating table | create table tab81 (i int) engine=innodb |         0 |             0 |         
| 89467 | root | localhost | bench1  | Query   |    0 | creating table | create table tab91 (i int) engine=innodb |         0 |             0 |         
| 89468 | root | localhost | bench13 | Query   |    0 | creating table | create table tab90 (i int) engine=innodb |         0 |             0 |         
| 89469 | root | localhost | bench15 | Query   |    0 | creating table | create table tab90 (i int) engine=innodb |         0 |             0 |         
| 89472 | root | localhost | bench9  | Query   |    0 | creating table | create table tab86 (i int) engine=innodb |         0 |             0 |         
| 89473 | root | localhost | bench10 | Query   |    0 | creating table | create table tab94 (i int) engine=innodb |         0 |             0 |         
| 89474 | root | localhost | bench11 | Query   |    0 | creating table | create table tab80 (i int) engine=innodb |         0 |             0 |         
| 89475 | root | localhost | bench3  | Query   |    0 | creating table | create table tab80 (i int) engine=innodb |         0 |             0 |         
| 89476 | root | localhost | bench2  | Query   |    0 | creating table | create table tab82 (i int) engine=innodb |         0 |             0 |         
| 89478 | root | local

Truncated by Planet PHP, read more at the original (another 2999 bytes)

Link
Peter Zaitsev: MySQL community t-shirt contest for Percona Live 2015 (23.2.2015, 11:00 UTC)

Have designs on Percona Live this April in Silicon Valley? Send them to us! The winning entry will appear on a cool limited edition t-shirt that we’ll award to a few dozen lucky recipients at our booth’s new Percona T-Shirt Bar. The winner will also get a t-shirt, of course, along with a pair of “Beats by Dre” headphones!

That’s right: We’re calling on the creative types within the MySQL community to come up with designs for a unique t-shirt.

Let your imaginations run free! Just make sure to include “Percona Live 2015” in there somewhere. You might also want to include your signature, hanko, seal or mark… treat the cotton as the canvas of your masterpiece… let the world know who created it!

Send your t-shirt designs to me as a high-resolution PDF or in .EPS format. The deadline for entries is March 6. The winner will be chosen under the sole discretion of Percona’s marketing team, taking into consideration quality of design, values alignment, trademark clearance and general awesomeness. (Submitted designs assume unlimited royalty free rights usage by Percona. We also reserve the right to declare no winners if there are no suitable designs submitted. You do not need to register or attend the conference to submit a design.)

Click here to submit your design for the MySQL community t-shirt contest!

By the way, the image on this post is not a template. You have free rein so go get ‘em! And just to be clear: this won’t be the t-shirt that everyone receives at the registration booth. However, it just might be one of the most coveted t-shirts at the conference!

I’ll share the winning design the week of March 9. Good luck, and I hope to see you all this April at the Percona Live MySQL Conference and Expo! The conference runs April 13-16 at the Hyatt Regency Santa Clara & the Santa Clara Convention Center.

 

The post MySQL community t-shirt contest for Percona Live 2015 appeared first on MySQL Performance Blog.

Link
Jean-Jerome Schmidt: A DevOps Guide to Database Infrastructure Automation for eCommerce - Replay & Slides (20.2.2015, 13:56 UTC)

Thanks to everyone who attended and participated in last week’s webinar on ‘A DevOps Guide to Database Infrastructure Automation for eCommerce’. If you missed the sessions or would like to watch the webinar again & browse through the slides, they are now available online.

Our guest speaker this time was Riaan Nolan of Foodpanda/Rocket Internet. Topics included a roundup of infrastructure challenges faced by online retailers: multi-datacenter/cloud environments, configuration management, health and performance monitoring, capacity analysis and planning, elastic scaling, and automatic failure handling. Thanks again to Riaan for taking the time to speak to us!


The full agenda included: 

  • eCommerce infrastructure challenges in 2014, including a sample workflow chart outlining:
      • Puppet, GitHub, Capistrano, Nginx, PHP5-FPM / Ruby, Percona XtraDB, Couchbase, SOLR, GlusterFS
  • Provisioning of test/QA and highly available production environments across multi-datacenter and multi-cloud environments (AWS Cloud Formation) 
  • Building and maintaining configuration management systems such as Puppet and Chef
  • Enabling self-service infrastructure services to internal dev teams
  • Health and performance monitoring 
  • Capacity analysis and planning
  • Elastic scaling 
  • Automating failure handling
  • Disaster recovery

 

Watch the replay

 

Read the slides

 


RELATED BLOGS

 

Truncated by Planet PHP, read more at the original (another 2992 bytes)

Link
Peter Zaitsev: How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!) (20.2.2015, 08:00 UTC)

This past week was marked by a series of personal findings related to the use of Global Transaction IDs (GTIDs) on Galera-based clusters such as Percona XtraDB Cluster (PXC). The main one is the fact that transactions touching MyISAM tables (and FLUSH PRIVILEGES!) issued on a given node of the cluster are recorded in a GTID set bearing the node’s server_uuid as “source id” and added to the binary log (if the node has binlog enabled), thus being replicated to any async replicas connected to it. However, they won’t be replicated across the cluster (all of this is by design, as long as wsrep_replicate_myisam is disabled, which it is by default).

My colleague Stéphane covered this story in one of his recent blog posts titled, “Percona XtraDB Cluster 5.6: a tale of 2 GTIDs,” explaining how those local (in reference to the node) transactions are saved in a different GTID set to the cluster’s main one and the impact this may cause when re-pointing an async slave to replicate from a different node.

GTIDs are a feature introduced in MySQL 5.6 that made replication management much easier, and considering there’s a series of advantages in having an async replica attached to a PXC cluster, why hadn’t this popped up earlier for either of us? I guess there aren’t that many people using GTIDs with Galera-based clusters yet, so here’s a post to show you how to do it.

Initializing a PXC cluster configured with GTIDs

My testing environment for a 3-node cluster is composed of node1 (192.168.70.2), node2 (.3) and node3 (.4). All of them have the same PXC binaries installed:

$ rpm -qa |grep -i percona-xtradb-cluster
Percona-XtraDB-Cluster-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.8-1.3390.rhel6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.21-25.8.938.el6.x86_64

and are configured with an almost identical /etc/my.cnf (apart from wsrep_node_address and the prompt line):

[mysql]
prompt='mysql {node1} > '
[mysqld]
datadir = /var/lib/mysql
log_warnings=2
server_id=1
log_bin=percona-bin
log_slave_updates
binlog_format = ROW
enforce_gtid_consistency=1
gtid_mode=on
wsrep_cluster_name = my-three-node-cluster
wsrep_cluster_address = gcomm://192.168.70.2,192.168.70.3,192.168.70.4
wsrep_node_address = 192.168.70.2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

server_id, log_bin, log_slave_updates and binlog_format are not needed for the cluster to operate, but they are required to set up regular replication later on, so I’ve added them to leave the nodes ready to operate as masters.
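As a preview of that later step, pointing an async slave at a node becomes trivial with GTIDs. A sketch, assuming a replication user 'repl' with password 'secret' has been created on the cluster (both are placeholders):

mysql [slave] > CHANGE MASTER TO MASTER_HOST='192.168.70.2', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1;
mysql [slave] > START SLAVE;

With MASTER_AUTO_POSITION=1 there is no binary log file or position to hunt down - the slave works out what it is missing from its own gtid_executed set.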

We start with an empty, nonexistent datadir on all nodes, so I’ll use the mysql_install_db script to create a base datadir with all that is needed for MySQL to work on node1, which will be the reference node of the cluster:

[node1]$  mysql_install_db --user=mysql

We’re now ready to bootstrap the cluster from this reference node:

[node1]$ service mysql bootstrap-pxc

With that, we have an operational reference node:

mysql [node1] > select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 834bca7f-b45e-11e4-a7b5-0800272e951c |
+--------------------------------------+
1 row in set (0.00 sec)

Note from above my.cnf settings that I’ve chosen xtrabackup-v2 as the State Snapshot Transfer (SST) method, which requires authentication (wsrep_sst_auth). For this reason, if we now try to start MySQL on node2 it will fail with its error log showing:
2015-02-14 16:58:26 24149 [ERROR] WSREP: Process completed w

Truncated by Planet PHP, read more at the original (another 17215 bytes)

Link