Jean-Jerome Schmidt: 9 Tips for Going in Production with Galera Cluster for MySQL (23.8.2014, 00:04 UTC)
August 25, 2014
By Severalnines

Are you going into production with Galera Cluster for MySQL? Here are 9 tips to consider before going live. These are applicable to all 3 Galera versions (Codership, Percona XtraDB Cluster and MariaDB Galera Cluster). 

 

1. Galera strengths and weaknesses

 

There are multiple replication and cluster technologies for MySQL, so make sure you understand how Galera works and set the right expectations. Applications that run fine on single-instance MySQL might not work well on Galera; you might need to make some changes to the application, or the workload might simply not be appropriate. We’d suggest you have a look at these resources: 

 

2. Database schema

 

Synchronous replication has implications for how well some transactions will run, so consider the following:

  • Each table must have at least one explicit primary key defined.
  • Each table must use the InnoDB or XtraDB storage engine.
  • Break up big transactions into batches. For example, rather than having one transaction insert 100,000 rows, split it into smaller chunks of, say, 1,000 rows per transaction (see the sketch after this list).
  • Make sure your application can tolerate non-sequential auto-increment values.
  • Schema changes are handled differently. Watch this webinar for details.
  • Handle hotspots/Galera deadlocks by sending writes to a single node.
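
Below is a minimal sketch of the first three points, using hypothetical orders and staging_orders tables (not from the original post): an InnoDB table with an explicit primary key, populated in small transactions rather than one big one.

CREATE TABLE orders (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  customer_id INT UNSIGNED NOT NULL,
  amount DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Rather than inserting 100,000 rows in one transaction, copy them over in
-- chunks of 1,000 rows per transaction, advancing the id range on each pass:
START TRANSACTION;
INSERT INTO orders (customer_id, amount)
  SELECT customer_id, amount FROM staging_orders WHERE id BETWEEN 1 AND 1000;
COMMIT;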

 

3. Hardware and network

 

Galera will only perform as fast as its slowest node. This is due to the nature of synchronous replication, where all nodes need to acknowledge a writeset before committing. Choosing uniform hardware for all nodes is recommended. If you’re running in virtualized environments, monitor CPU steal time (the percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor). 
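
A quick way to check whether a slow node is throttling the rest of the cluster is to look at the flow-control counters Galera exposes as status variables (a sketch; what counts as a problematic value depends on your workload):

-- Fraction of time replication was paused by flow control since the last
-- FLUSH STATUS, and the average length of the local receive queue:
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';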

The disk subsystem is important (this also applies to single-instance MySQL). Remember that upgrading the disk layer is a bigger hassle than upgrading other components like RAM or the network interface card. Hardware upgrades can be performed in round-robin fashion, i.e., upgrading one node at a time.

read more

Link
Peter Zaitsev: When (and how) to move an InnoDB table outside the shared tablespace (22.8.2014, 14:29 UTC)

In my last post, “A closer look at the MySQL ibdata1 disk space issue and big tables,” I looked at the growing ibdata1 problem from the perspective of having big tables residing inside the so-called shared tablespace. In the particular case that motivated that post, we had a customer running out of disk space on his server who was looking for a way to make the ibdata1 file shrink. As you may know, that file (or, as explained there, the set of ibdata files composing the shared tablespace) stores all InnoDB tables created while innodb_file_per_table is disabled, as well as other InnoDB structures, such as the undo logs and the data dictionary.

For example, when you run a transaction involving InnoDB tables, MySQL first writes all the changes it triggers to an undo log, in case you later decide to “roll them back”. Long-standing, uncommitted transactions are one of the causes of a growing ibdata file. Of course, if you have innodb_file_per_table disabled then all your InnoDB tables live inside it. That was what happened in that case.
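
One way to spot such long-standing transactions (a sketch; INFORMATION_SCHEMA.INNODB_TRX is available in MySQL 5.5 and later) is to list the currently open InnoDB transactions ordered by age:

SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started
LIMIT 10;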

So, how do you move a table outside the shared tablespace and change the storage engine it relies on? Just as importantly, how does that affect disk space use? I’ve explored some of the options presented in the previous post and share my findings with you below.

The experiment

I created a very simple InnoDB table inside the shared tablespace of a freshly installed Percona Server 5.5.37-rel35.1 with support for TokuDB, configured with a 1GB buffer pool. I used a 50G partition to host the ‘datadir’ and another one for ‘tmpdir’:

Filesystem                 Size Used Avail Use% Mounted on
/dev/mapper/vg0-lvFernando1 50G 110M  47G   1%  /media/lvFernando1  # datadir
/dev/mapper/vg0-lvFernando2 50G  52M  47G   1%  /media/lvFernando2  # tmpdir

Here’s the table structure:

CREATE TABLE `joinit` (
  `i` int(11) NOT NULL AUTO_INCREMENT,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int(11) NOT NULL,
  PRIMARY KEY (`i`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1;

I populated it with 134 million rows using the following routine:

INSERT INTO joinit VALUES (NULL, uuid(), time(now()),  (FLOOR( 1 + RAND( ) *60 )));
INSERT INTO joinit SELECT NULL, uuid(), time(now()),  (FLOOR( 1 + RAND( ) *60 )) FROM joinit;  # repeat until you reach your target number of rows

which resulted in the table below:

mysql> show table status from test like 'joinit'\G
*************************** 1. row ***************************
Name: joinit
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 134217909
Avg_row_length: 72
Data_length: 9783214080
Max_data_length: 0
Index_length: 0
Data_free: 1013972992
Auto_increment: 134872552
Create_time: 2014-07-30 20:42:42
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)

The resulting ibdata1 file had grown to 11G, which at that point accounted in practice for virtually all of the space used on the datadir partition. What follows is a series of experiments I ran, converting that table to a different storage engine, moving it outside the shared tablespace, compressing it, and dumping and restoring the database to see the effect on disk space use. I haven’t timed how long each command took to run and focused mostly on the size of the generated files. As a bonus, I’ve also looked at how to extend the shared tablespace by adding an extra ibdata file.

#1) Converting to MyISAM

Technical characteristics and features apart, MyISAM tables are known to occupy less disk space than InnoDB ones. How much less depends on the actual table structure. Here I made the conversion in the simplest way possible:

mysql> ALTER TABLE test.joinit ENGINE=MYISAM;

which created the following files (the .frm file already existed):

$ ls -lh /media/lvFernando1/data/test/
total 8.3G
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 31 16:21 joinit.frm
-rw-rw---- 1 fernando.laudares admin 7.0G Jul 31 16:27 joinit.MYD
-rw-rw---- 1 fernando.laudares admin 1.3G Jul 31 16:27 joinit.MYI

The resulting MyISAM files accounted for an additional 8.3G of disk space use:

/dev/mapper/vg0-lvFernando1 50G 19G 29G 40% /media/lvFernando1

I was expecting smaller files but, of course, the result depends largely on the data types of the columns composing the table. The problem (or the consequence) is that we end up with close to double the initial disk space in use:

As it happens with th

Truncated by Planet PHP, read more at the original (another 13357 bytes)

Link
Peter Zaitsev: A closer look at the MySQL ibdata1 disk space issue and big tables (21.8.2014, 15:24 UTC)

A recurring and very common customer issue seen here at the Percona Support team involves how to make the ibdata1 file “shrink” within MySQL. I can only imagine there’s a degree of regret by some of the InnoDB architects over their design decisions regarding disk-space management in the shared tablespace*, because this has been a big frustration for many MySQL users over the years.

There’s a very old bug (“InnoDB ibdata1 never shrinks after data is removed,” Sept. 8, 2003) documenting user dissatisfaction. Shortly before that issue celebrated its 10th anniversary, James Day, MySQL senior principal support engineer at Oracle, posted a comment explaining why things haven’t changed, and he also offered possible alternative solutions. Maybe we’ll see it fixed in a future release of MySQL. We can only be sure that any new storage engine aiming to win the sympathy of MySQL users can’t make that same mistake again.

One general misunderstanding that exacerbates the problem is the belief that enabling innodb_file_per_table makes all InnoDB tables live in their own tablespace (a “private” .ibd file), even though the manual is clear about the role this variable plays. The truth is that enabling innodb_file_per_table only immediately affects how new InnoDB tables are created – it won’t magically export tables currently living in the shared tablespace into their own separate .ibd files. That can be accomplished manually at any time afterwards by running ALTER TABLE or OPTIMIZE TABLE on the target table. The “gotcha” here is that by doing so you’ll actually be using additional disk space – as much as the table size – for the newly created .ibd file. ibdata1 won’t automatically shrink: the space previously used by that table will be marked as “free” (for internal InnoDB use) but won’t be returned to the file system.
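
As a sketch of that manual export (the table name below is hypothetical), with innodb_file_per_table enabled a simple table rebuild moves the data into its own .ibd file – while ibdata1 keeps its size:

mysql> SET GLOBAL innodb_file_per_table = ON;
mysql> ALTER TABLE mydb.big_table ENGINE=InnoDB;  -- rebuilds the table into its own .ibd file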

Note: Throughout this post I refer to ibdata1 and the shared (also called system) tablespace interchangeably. In fact, the latter can be composed of a list of file definitions (ibdata1;ibdata2, …). Each file can have a fixed size or be specified with the autoextend parameter and a cap limiting how big it can grow (well, actually only the last file defined in that list can autoextend). The default setting for the variable ruling how the shared tablespace is defined has it living in a single file located in the datadir, named ibdata1 and configured with autoextend, hence the popular reference to a “growing ibdata1”.
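
You can check how the shared tablespace is defined on a running server, and what a multi-file definition looks like (the file sizes below are purely illustrative):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_data_file_path';
-- A typical default value is: ibdata1:10M:autoextend
-- A two-file definition where only the last file autoextends, capped at 10G,
-- would be set in my.cnf as:
--   innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend:max:10G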

A big table scenario

The shared tablespace contains more than just the data and indexes of the InnoDB tables created inside it; it also holds other structures, such as the rollback segments for running transactions (a.k.a. “undo logs”). My colleague Miguel Angel Nieto wrote a blog post last year that explains in detail what is stored inside ibdata files, why they can grow bigger than the sum of the data from the tables they host and won’t “shrink,” and the only real way to return unused disk space to the file system.

But there’s one scenario where the ibdata1 file grows in a “legitimate” way: When it’s storing a big table. If there’s a big table living inside the shared tablespace accounting for most of its size then there’s no “shrinking” it. We can argue this would run counter to best practices (or not) but let’s remember that innodb_file_per_table used to be disabled by default. The first releases of MySQL 5.5 had it enabled by default but then the later ones had it disabled. We find the exact opposite happening with MySQL

Truncated by Planet PHP, read more at the original (another 6943 bytes)

Link
Jean-Jerome Schmidt: Automation & Management of MariaDB Galera Clusters: New European Webinars with SkySQL - The MariaDB Company (21.8.2014, 09:48 UTC)
August 21, 2014
By Severalnines

MariaDB Galera Cluster involves more effort and resources to administer than standalone MariaDB systems. If you would like to learn how to better manage your MariaDB cluster, then this webinar series is for you. 

 

We will give you practical advice on how to introduce clusters into your MariaDB / MySQL  environment, automate deployment and make it easier for operational staff to manage and monitor the cluster using ClusterControl.

 

Language, Date & Time: 

 

English - Tuesday, September 30th @ 11am CEST: Management & Automation of MariaDB Galera Clusters

French - Tuesday, October 7th @ 10am CEST: Gestion et Automatisation de Clusters Galera pour MariaDB

German - Wednesday, October 8th @ 10am CEST: Verwaltung und Automatisierung von MariaDB Galera Cluster

 

High availability cluster configurations tend to be complex, but once they are designed, they tend to be duplicated many times with minimal variation. Automation can be applied to provisioning, upgrading, patching and scaling. DBAs and sysadmins can then focus on more critical tasks, such as performance tuning, query design, data modeling or providing architectural advice to application developers. A well-managed system can mitigate operational risk, which can result in significant savings and reduced downtime. 

 


 

MariaDB Roadshow - London

 

And if you’re in London this September, do join us at the MariaDB Roadshow event on Thursday, September 18th. We’ll be talking about Automation & Management of Database Clusters there as well and would love to talk to you in person! 

 

read more

Link
Peter Zaitsev: How to use MySQL Global Transaction IDs (GTIDs) in production (20.8.2014, 08:00 UTC)

Reconfiguring replication has always been a challenge with MySQL. Each time the replication topology has to be changed, the process is tedious and error-prone because finding the correct binlog position is not straightforward at all. Global Transaction IDs (GTIDs), introduced in MySQL 5.6, aim to solve this annoying issue.

The idea is quite simple: each transaction is associated with a unique identifier shared by all servers in a given replication topology. Reconfiguring replication then becomes easy, as the correct binlog position can be calculated automatically by the server.
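
For example, repointing a slave to a new master no longer requires hunting for binlog file names and positions (host and credentials below are placeholders):

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='new-master.example.com', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_AUTO_POSITION=1;
mysql> START SLAVE;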

Awesome? Yes it is! However, GTIDs also change a lot about how we perform operations on replication. For instance, skipping transactions is a bit more difficult. Or you can get bitten by errant transactions, a concept that did not exist before.
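
To give a flavour of the difference: SET GLOBAL sql_slave_skip_counter no longer works when GTIDs are enabled, so the common workaround for skipping a broken transaction is to commit an empty transaction with the offending GTID (the UUID and sequence number below are placeholders):

mysql> STOP SLAVE;
mysql> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
mysql> BEGIN; COMMIT;
mysql> SET GTID_NEXT='AUTOMATIC';
mysql> START SLAVE;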

This is why I will be presenting a webinar on Aug. 27 at 10 a.m. PDT: Using MySQL Global Transaction IDs in Production.

You will learn what you need to operate a replication cluster using GTIDs: how to monitor replication status, how to recover from replication errors, which tools can help you (and which you should avoid), and the main issues that can occur with GTIDs.

This webinar is free but you can register today to reserve your seat. And a recording will be available afterwards. See you next week!

The post How to use MySQL Global Transaction IDs (GTIDs) in production appeared first on MySQL Performance Blog.

Link
Peter Zaitsev: 5 great new features from Percona Cloud Tools for MySQL (19.8.2014, 13:00 UTC)

It’s been three months since we announced anything for Percona Cloud Tools, not because we’ve been idle but because we’ve been so busy the time flew by!  Here’s the TL;DR to pique your interest:

  • EXPLAIN queries in real-time through the web app
  • Query Analytics for Performance Schema
  • Dashboards: customizable, shared groups of charts
  • Install and upgrade the agent with 1 command line
  • Unified UI: same time range, same host wherever you go

Percona Cloud Tools for MySQL is a hosted service providing access to query performance insights for all MySQL users. After a brief setup, unlock new information about your database and how to improve your applications. There’s a lot more, but let’s just look at these five new features…

 

EXPLAIN queries in real-time through the web app

Like many people, to get a query’s EXPLAIN plan you probably copy the query, ssh to the server, log in to MySQL, then paste the query after typing “EXPLAIN”. With Percona Cloud Tools’ new real-time EXPLAIN feature you can simply click a button. It’s a real timesaver.

The EXPLAIN plan is a vital part of diagnosing the query.  Now with Percona Cloud Tools you have a lot of powerful information in one place: the query, its metrics, its EXPLAIN plan, and more–and more to come, too!

 

Query Analytics for Performance Schema

The MySQL slow log is a wealth of indispensable data about queries that you cannot get anywhere else.  That’s why it’s the default for Percona Cloud Tools Query Analytics.  Like most things, however, it has tradeoffs: for one, it can be time-consuming to parse, especially on very busy servers.  Or, in the case of Amazon RDS, the slow log may simply not be available.  That’s ok now because with MySQL 5.6 or newer (including Percona Server 5.6 or newer) you can parse queries from the Performance Schema.  It’s not as data-rich as the slow log, but it has the basics and it’s a great alternative (and sometimes the only alternative) to the slow log.
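
For reference, the kind of data a Performance Schema-based collector works from looks roughly like this (MySQL 5.6 or newer): per-digest statistics rather than individual query events.

SELECT schema_name, digest_text, count_star, sum_timer_wait/1e12 AS total_time_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;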

 

Dashboards: customizable, shared groups of charts

Metrics Monitor has a default dashboard (a collection of charts) for MySQL. The default dashboard is a great start because it’s created by us (Vadim, actually) so you know it’s relevant and meaningful for MySQL. However, it presents only a fraction of the data that percona-agent collects, so we need more dashboards to organize and present other data. Now you can create new dashboards which are accessible to everyone in the organization. For example, Peter was recently benchmarking TokuDB, so he created a TokuDB-specific dashboard.

 

Install and upgrade the agent with 1 command line

As of percona-agent 1.0.6, you can install, upgrade, and uninstall the agent with a single command line, run as root, like:

# curl -s https://cloud.percona.com/install | bash /dev/stdin -api-key <API KEY>

For many environments this is all you need for a first-time install of a new agent.  The install will auto-detect MySQL, configure, and run all services by default.  You can tweak things later in the web app.  This also means you can install percona-agent in an automated environment.

 

Unified UI: same time range, same host wherever you go


Like most projects, Percona Cloud Tools has evolved over time.  Consequently,

Truncated by Planet PHP, read more at the original (another 972 bytes)

Link
Peter Zaitsev: Measuring failover time for ScaleArc load balancer (19.8.2014, 12:00 UTC)

ScaleArc hired Percona to benchmark failover times for the ScaleArc database traffic management software in different scenarios. We tested failover times for various clustered setups, where ScaleArc itself was the load balancer for the cluster. These tests complement other performance tests on the ScaleArc software – sysbench testing for latency and testing for WordPress acceleration.

We tested failover times for Percona XtraDB Cluster (PXC) and MHA (any traditional MySQL replication-based solution works pretty much the same way).

In each case, we tested failover with a rate limited sysbench benchmark. In this mode, sysbench generates roughly 10 transactions each second, even if not all 10 were completed in the previous second. In that case, the newly generated transactions are queued.

The sysbench command we used for testing is the following.

# while true ; do sysbench --test=sysbench/tests/db/oltp.lua
                           --mysql-host=172.31.10.231
                           --mysql-user=root
                           --mysql-password=admin
                           --mysql-db=sbtest
                           --oltp-table-size=10000
                           --oltp-tables-count=4
                           --num-threads=4
                           --max-time=0
                           --max-requests=0
                           --report-interval=1
                           --tx-rate=10
                           run ; sleep 1; done

The command is run in a loop because, typically, at the time of failover the application receives some kind of error while the virtual IP is moved, or while the current node is declared dead. Well-behaved applications reconnect and retry in this case. Sysbench is not a well-behaved application from this perspective – after a failover it has to be restarted.

This is good for testing the duration of errors during a failover procedure – but not the number of errors. In a simple failover scenario (ScaleArc is just used as a regular load balancer), the number of errors won’t be any higher than usual with ScaleArc’s queueing mechanism.

ScaleArc+MHA

In this test, we used MHA as the replication manager. The test result would be similar regardless of how the asynchronous replication is managed – only the ScaleArc-level checks would be different. In the case of MHA, we tested graceful and non-graceful failover. In the graceful case, we stopped the manager and performed a manual master switchover, after which we informed the ScaleArc software of the failover via an API call.

We ran two tests:

  • A manual switchover with the manager stopped, switching over manually, and informing the load balancer about the change.
  • An automatic failover where the master was killed, and we let MHA and the ScaleArc software discover it.

ScaleArc+MHA manual switchover

The manual switchover was performed with the following command.

# time (
          masterha_master_switch --conf=/etc/cluster_5.cnf
          --master_state=alive
          --new_master_host=10.11.0.107
          --orig_master_is_new_slave
          --interactive=0
          &&
          curl -k -X PUT https://172.31.10.30/api/cluster/5/manual_failover
          -d '{"apikey": "0c8aa9384ddf2a5756299a9e7650742a87bbb550"}' )
{"success":true,"message":"4214   Failover status updated successfully.","timestamp":1404465709,"data":{"apikey":"0c8aa9384ddf2a5756299a9e7650742a87bbb550"}}
real    0m8.698s
user    0m0.302s
sys     0m0.220s

The curl command calls ScaleArc’s API to inform the ScaleArc software about the master switchover.

During this time, sysbench output was the following.

[  21s] threads: 4, tps: 13.00, reads/s: 139.00, writes/s: 36.00, response time: 304.07ms (95%)
[  21s] queue length: 0, concurrency: 1
[  22s] threads: 4, tps: 1.00, reads/s: 57.00, writes/s: 20.00, response time: 570.13ms (95%)
[  22s] queue length: 8, concurrency: 4
[  23s] threads: 4, tps: 19.00, reads/s: 237.99, writes/s: 68.00, response time: 976.61ms (95%)
[  23s] queue length: 0, concurrency: 2
[  24s] threads: 4, tps: 9.00, reads/s: 140.00, writes/s: 40.00, response time: 477.55ms (95%)
[  24s] queue length: 0, concurrency: 3
[  25s] threads: 4, tps: 10.00, reads/s: 105.01, writes/s: 28.00, response time: 586.23ms (95%)
[  25s] queue length: 0, concurrency: 1

Only a slight hiccup is visible at 22 seconds. In this second, only 1 transaction was done, and 8 others were

Truncated by Planet PHP, read more at the original (another 8964 bytes)

Link