Jean-Jerome Schmidt: Benchmark of Load Balancers for MySQL/MariaDB Galera Cluster (31.10.2014, 09:12 UTC)
October 31, 2014
By Severalnines

When running a MariaDB Galera Cluster or Percona XtraDB Cluster, it is common to use a load balancer to distribute client requests across multiple database nodes. Load balancing SQL requests aims to optimize the usage of the database nodes, maximize throughput, minimize response times and avoid overloading the Galera nodes.

In this blog post, we’ll take a look at four different open source load balancers, and do a quick benchmark to compare performance:

  • HAProxy by HAProxy Technologies
  • IPVS by Linux Virtual Server Project
  • Galera Load Balancer by Codership
  • MySQL Proxy by Oracle (alpha)

Note that there are other options out there, e.g. MaxScale from the MariaDB team, that we plan to cover in a future post.


When to Load Balance Galera Requests


Although Galera Cluster provides multi-master synchronous replication, you can safely read from and write to all database nodes, provided that you comply with the following:

  • The tables you are writing to are not hotspot tables
  • All tables must have an explicit primary key defined
  • All tables must use the InnoDB storage engine
  • Huge writesets must be run in batches; for example, it is recommended to run 100 batches of 1,000-row inserts rather than a single 100,000-row insert (see the sketch after this list)
  • Your application can tolerate non-sequential auto-increment values.
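
To illustrate the batching guideline, here is a minimal sketch of chunked inserts; it assumes the PyMySQL driver and a hypothetical table t1 (col1, col2), so adjust names and connection settings to your environment:

import pymysql

# Hypothetical connection settings; adjust to your environment.
conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", database="mydb")

rows = [(i, "payload-%d" % i) for i in range(100000)]  # 100,000 rows to load
BATCH = 1000

try:
    with conn.cursor() as cur:
        # 100 transactions of 1,000 rows each keep individual Galera writesets
        # small, instead of one huge writeset that must be certified at once.
        for start in range(0, len(rows), BATCH):
            cur.executemany(
                "INSERT INTO t1 (col1, col2) VALUES (%s, %s)",
                rows[start:start + BATCH],
            )
            conn.commit()  # one commit per batch = one writeset per batch
finally:
    conn.close()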

If the above requirements are met, you can have a pretty safe multi-node write cluster without needing to split writes across multiple masters (sharding), as you would in a MySQL Replication setup because of slave lag problems. Furthermore, having load balancers between the application and database layers is very convenient: the load balancer can assume that all nodes are equal, and no extra configuration such as read/write splitting or promoting a slave node to a master is required.

Note that if you run into deadlocks with Galera Cluster, you can send all writes to a single node and avoid concurrency issues across nodes. Read requests can still be load balanced across all nodes. 


Load Balancers


HAProxy


HAProxy stands for High Availability Proxy; it is an open source TCP/HTTP-based load balancer and proxying solution. It is commonly used to improve the performance and availability of a service by distributing the workload across multiple servers. Over the years it has become the de-facto open source load balancer, and it is now shipped with most mainstream Linux distributions.

read more

Link
Shlomi Noach: Percona Live 2015: Call for Papers is open (31.10.2014, 06:25 UTC)

And not for long!

The Call for Papers for the Percona Live MySQL Conference and Expo, to be held in Santa Clara in April 2015, is open. The deadline for submissions is Nov. 16th; that's just around the corner.

As in previous years, we will hold a 4-day conference: the first day for tutorials and three days for sessions, BoFs and lightning talks, as well as community events. The committee expects to review about 250-300 submissions, out of which it will pick about 100 talks to schedule or hold in reserve.

We will be using these tracks:

  • High Availability
  • DevOps
  • Programming
  • Performance Optimization
  • Replication and Backup
  • MySQL in the Cloud
  • MySQL and NoSQL
  • MySQL Case Studies
  • Security
  • What’s New in MySQL

This year we will roughly pre-define the desired number of sessions we wish to have per track. This is not set in stone and everything is fluid. Yet, this will give us better guidelines for choosing and pursuing content for this conference.

Submitting a proposal

We encourage all members of the community to submit their tutorial/session/BoF proposals as soon as possible. Please register/login at the conference home page.

The guidelines for submitting a proposal are generally unchanged; please review past recommendations: [1], [2], [3], [4]. To add to all these:

  • Do note that we are likely to review a proposal only once. Please submit only after you have finalized your draft.
  • Keep your proposal to a reasonable length. We believe 250-300 words are quite enough for a good proposal. Please don't write an essay, and remember that your proposal is what gets printed on the schedule, and what conference attendees read when choosing the next talk to go to.
  • Write a decent bio.

The committee

This year's committee includes:

  • Calvin Sun (Twitter)
  • Chris Schneider (Groupon)
  • Colin Charles (MariaDB)
  • Gwen Shapira (Cloudera)
  • Harrison Fisk (Facebook)
  • Jay Janssen (Percona)
  • Jeremy Cole (Google)
  • John Scott (Wellcentive)
  • Morgan Tocker (Oracle)
  • Patrick Galbraith (HP)
  • Peter Zaitsev (Percona)
  • Sean Chighizola (Big Fish)
  • Shivinder Singh (Verizon Wireless)
  • Tamar Bercovici (Box)
  • myself, Shlomi Noach (Conference Chairman)

Looking forward to reviewing your papers!


Link
Peter Zaitsev: Get a handle on your HA at Percona Live London 2014 (31.10.2014, 05:00 UTC)

From left: Liz van Dijk, Frédéric Descamps and Kenny Gryp

If you’re following this blog, it’s quite likely you’re already aware of the Percona Live London 2014 conference coming up in just a few days. Just in case, though (you know, if you’re still looking for an excuse to sign up), I wanted to put a spotlight on the tutorial to be delivered by my esteemed colleagues Frédéric Descamps (@lefred) and Kenny Gryp (@gryp), and myself.

Over the past two years at Percona, we’ve spent a substantial amount of time working with customers taking their first steps into creating highly available MySQL environments built on Galera. Percona XtraDB Cluster allows you to get a cluster up and running very fast, but as any weathered “HA” DBA will tell you, building the cluster is only the beginning. (Percona XtraDB Cluster is an open source (free) high-availability and high-scalability solution for MySQL clustering.)

Any cluster technology is likely to introduce a great amount of complexity to your environment, and in our tutorial we want to show you not only how to get started, but also how to avoid many of the operational pitfalls we’ve encountered. Our tutorial, Percona XtraDB Cluster in a nutshell, will be taking place on Monday 3 November and is a full-day (6 hours) session, with an intense hands-on approach.

We’ll be covering a wide range of practical topics, such as:

  • Things to keep in mind when migrating an existing environment over to PXC
  • How to manage and maintain the cluster, keeping it in good shape
  • Load balancing requests across the cluster
  • Considerations for deploying PXC in the cloud

Planning on attending? Be sure to come prepared! Given the hands-on approach of the tutorial, make sure you bring your laptop with enough disk space (~20GB) and processing power to run at least 4 small VirtualBox VMs.

We look forward to seeing you there!

The post Get a handle on your HA at Percona Live London 2014 appeared first on MySQL Performance Blog.

Link
Peter Zaitsev: Facebook MySQL database engineers ready for Percona Live London 2014 (30.10.2014, 05:00 UTC)

With 1.28 billion active users on the site, Facebook’s MySQL database engineers are active and extremely valuable contributors to the global MySQL community. So naturally they are also active participants of Percona Live MySQL conferences! And next week’s Percona Live London 2014 (Nov. 3-4) is no exception. (Register now and use the promotional code “Facebook” to save £30!)

I spoke with Facebook database engineers Yoshinori “Yoshi” Matsunobu and Shlomo Priymak about their upcoming sessions along with what’s new at Facebook since our last conversation back in April.


Tom: Yoshi, last year Facebook deployed MySQL 5.6 on all production environments – what have you and your team learned since doing that? And do you have a few best practices you could share? I realize you’ll be going into detail during your session in London (MySQL 5.6 and WebScaleSQL at Facebook), but maybe a few words on a couple of the bigger ones?

Yoshi: MySQL 5.6 has excellent replication enhancements for use in large-scale deployments. For example, the crash-safe slave makes it possible to recover without rebuilding a slave instance after a server crash. This can greatly minimize slave downtime, especially if your database size is large. There are many other new features such as GTID, multi-threaded slave and streaming mysqlbinlog, and we actively use them in production.

For InnoDB, Online DDL is a good example of easing operations. Many MySQL users do schema changes by switching masters. This can minimize downtime but requires operational effort. Online DDL makes things much easier.
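
(For illustration, a minimal sketch of what such an online schema change looks like with MySQL 5.6 syntax; the table, column and connection details are hypothetical, and the PyMySQL driver is assumed:

import pymysql

conn = pymysql.connect(host="127.0.0.1", user="dba", password="secret", database="mydb")
with conn.cursor() as cur:
    # MySQL 5.6 Online DDL: ALGORITHM=INPLACE avoids the old copy-and-rename
    # approach, and LOCK=NONE keeps the table readable and writable while
    # the ALTER runs, so no master switch is needed.
    cur.execute(
        "ALTER TABLE t1 ADD COLUMN notes VARCHAR(255), "
        "ALGORITHM=INPLACE, LOCK=NONE"
    )
conn.close()

)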

Tom: Facebook is an active and extremely valuable part of the overall MySQL community and ecosystem – what are some of the key features and improvements you’ve contributed in the past year since moving to MySQL 5.6?

Yoshi: For InnoDB, I think online defragmentation and faster full table scan are the most valuable contributions from Facebook in 5.6. I have received very positive feedback about the faster InnoDB full table scan (Logical ReadAhead). My colleague Rongrong will speak about something interesting regarding online defragmentation at Percona Live London. For replication, we have done many optimizations to make GTID and MTS work without pain. Semi-synchronous mysqlbinlog and the loss-less semisync backported from MySQL 5.7 are very useful when you use semi-synchronous replication.

Tom: Shlomo, your session, “MySQL Automation at Facebook Scale,” will be of great interest to DBAs at large and growing organizations, considering that Facebook has one of the world’s largest MySQL database clusters. What are the two or three most significant things that you’ve learned as a database engineer operating a cluster of this size? And has anything surprised you along the way (so far)?

Shlomo: This is a great question! We like to speak of “10x” at Facebook when thinking of scaling. For example, what would you do differently if the number of servers you had was 10x more than what it is? This type of mental exercise is surprisingly useful when working with systems at scale. If you, or any of the readers, try this extrapolation on systems you manage, you will imagine how such a system would look – and you won’t be too far from our reality in many aspects.

You’d imagine that we automate many of the single units of work, like master/slave failover, upgrades and schema changes. You’d suspect we have automated fault detection, self-managing systems, good alarming and self-remediation. You’d presume that if you’re used to running a command on 100 machines, you’ll now be running it on 1000. At least that’s what I thought to myself, so these are not the things that surprised me. There are a few fundamental shifts in one’s thinking when you get to these sizes, which I didn’t foresee.

The first one is that there is absolutely no such thing as “one-off.” If there is a server somewhere that hits a problem every three years, and you have 1000 servers, this will be

Truncated by Planet PHP, read more at the original (another 9059 bytes)

Link
Peter Zaitsev: MySQL and Openstack deep dive talk at OpenStack Paris Summit (and more!) (29.10.2014, 12:59 UTC)

I will present a benchmarking talk next week (Nov. 4) at the OpenStack Paris Summit with Jay Pipes from Mirantis. In order to be able to talk about benchmarking, we had to be able to set up and tear down OpenStack environments really quickly. For the benchmarks, we are using a deployment on AWS (ironically) where the instances aren’t actually started and the tenant network is not reachable, but all the backend operations still happen.

The first performance bottleneck we hit wasn’t at the MySQL level. We used Rally to benchmark the environment, starting 1,000 fake instances as a first pass.

That first bottleneck was neutron-server saturating a single CPU core: by default, neutron does everything in a single process. After configuring the API workers and the RPC workers, performance became significantly better.

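# neutron.conf: spread neutron-server's API and RPC handling across worker processes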
api_workers = 64
rpc_workers = 32

Before adding the options:

u'runner': {u'concurrency': 24, u'times': 1000, u'type': u'constant'}}
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action           | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 4.125     | 9.336     | 15.547    | 11.795        | 12.362        | 100.0%  | 1000  |
| total            | 4.126     | 9.336     | 15.547    | 11.795        | 12.362        | 100.0%  | 1000  |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Whole scenario time without context preparation:  391.359671831

After adding the options:

u'runner': {u'concurrency': 24, u'times': 1000, u'type': u'constant'}}
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action           | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 2.821     | 6.958     | 36.826    | 8.165         | 10.49         | 100.0%  | 1000  |
| total            | 2.821     | 6.958     | 36.826    | 8.165         | 10.49         | 100.0%  | 1000  |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Whole scenario time without context preparation:  292.163493156

Stop by our talk at the OpenStack Paris Summit for more details!

In addition to our talk, Percona has two additional speakers at the OpenStack Paris Summit. George Lorch, Percona software engineer, will speak with Vipul Sabhaya of the HP Cloud Platform Services team on “Percona Server Features for OpenStack and Trove Ops.” Tushar Katarki, Percona director of product management, will present a vBrownBag Tech Talk entitled “MySQL High Availability Options for OpenStack.” Percona is exhibiting at the OpenStack Paris Summit conference, as well – stop by booth E20 and say hello!

At Percona, we’re pleased to see the adoption of our open source software by the OpenStack community and we are working actively to develop more solutions for OpenStack users. We also provide Consulting assistance to organizations that are adopting OpenStack internally or are creating commercial services on top of OpenStack.

We are also pleased to introduce the first annual OpenStack Live, a conf

Truncated by Planet PHP, read more at the original (another 700 bytes)

Link
Monty Says: MariaDB foundation trademark agreement (28.10.2014, 21:38 UTC)

We have now published the trademark agreement between the MariaDB Corporation (formerly SkySQL) and the MariaDB Foundation. This agreement guarantees that the MariaDB Foundation has the rights needed to protect the MariaDB server project!

By this protection, I mean ensuring that the MariaDB Foundation in turn ensures that anyone can be part of MariaDB development on equal terms (as with any other open source project).

I have received some emails and read some blog posts from people who are confusing trademarks with the rights and possibilities for community developers to be part of an open source project.

The MariaDB Foundation was never created to protect the MariaDB trademark. It was created to ensure that what happened to MySQL would never happen to MariaDB: with MySQL, people from the community could not be part of driving and developing the project on equal terms with other companies.

I have personally never seen a conflict with having one company own the trademark of an open source product, as long as anyone can participate in the development of the product! Having a strong driver for an open source project usually ensures that there are more full-time developers working on a project than would otherwise be possible. This makes the product better and makes it useful for more people. In most cases, people are participating in an open source project because they are using it, not because they directly make money on the project.

This is certainly the case with MySQL and MariaDB, but also with other projects. If the MySQL or MariaDB trademark had been fully owned by a foundation from the start, I think that neither project would have been as successful as it is! More about this later.

Some examples of open source projects whose trademark is used or owned by a commercial parent company are WordPress (wordpress.com and wordpress.org) and Mozilla.

Even when it comes to projects like Linux that are developed by many companies, the trademark is not owned by the Linux Foundation.

There has been some concern that MariaDB Corporation has more developers and Maria captains (people with write access to the MariaDB repositories) on the MariaDB project than anyone else. This means that the MariaDB Corporation has more say about the MariaDB roadmap than anyone else.

This is right and actually how things should be; the biggest contributors to a project are usually the ones that drive the project forward.

This doesn't, however, mean that no one else can join the development of the MariaDB project and be part of driving the road map.

The MariaDB Foundation was created exactly to guarantee this.

It's the MariaDB Foundation that governs the rules of how the project is developed, under what criteria one can become a Maria captain, the rights of the Maria captains, and how conflicts in the project are resolved.

Those rules are not yet fully defined, as we have had very few conflicts when it comes to accepting patches. The work on these rules has been initiated and I hope that we’ll have nice and equal rules in place soon. In all cases the rules will be what you would expect from an open source project. Any company that wants to ensure that MariaDB will continue to be a free project and wants to be part of defining the rules of the project can join the MariaDB Foundation and be part of this process!

Some of the things that I think went wrong with MySQL and would not have happened if we had created a foundation similar to the MariaDB Foundation for MySQL early on:

  • Claims that companies like Google and eBay can't get their patches into MySQL if they don't pay (this was before MySQL was bought by Sun).
  • Closed source components in MySQL, developed by the company that owns the trademark to MySQL (almost happened to MySQL in Sun and has happened in MySQL Enterprise from Oracle).
  • Not giving community access to the roadmap.
  • Not giving community developers write access to the official repositories of MySQL.
  • Hiding code and critical test cases from the community.
  • No guarantee that a patch will ever be reviewed.

The MariaDB Foundation guarantees that the above things will never happen to MariaDB. In addition, the MariaDB Foundation employs people to perform reviews, provide documentation, and work actively to incorporate external contributions into the MariaDB project

Truncated by Planet PHP, read more at the original (another 5688 bytes)

Link
Peter Zaitsev: How to deal with MySQL deadlocks (28.10.2014, 07:00 UTC)

A deadlock in MySQL happens when two or more transactions mutually hold and request locks, creating a cycle of dependencies. In a transactional system, deadlocks are a fact of life and not completely avoidable. InnoDB automatically detects transaction deadlocks, rolls back a transaction immediately and returns an error. It uses a metric to pick the easiest transaction to roll back. Though an occasional deadlock is not something to worry about, frequent occurrences call for attention.

Before MySQL 5.6, only the latest deadlock could be reviewed, using the SHOW ENGINE INNODB STATUS command. But with Percona Toolkit’s pt-deadlock-logger you can have deadlock information retrieved from SHOW ENGINE INNODB STATUS at a given interval and saved to a file or table for later diagnosis. For more information on using pt-deadlock-logger, see this post. With MySQL 5.6, you can enable the new variable innodb_print_all_deadlocks to have all deadlocks in InnoDB recorded in the mysqld error log.

Before and above all diagnosis, it is always an important practice to have the application catch the deadlock error (MySQL error no. 1213) and handle it by retrying the transaction, as sketched below.
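
A minimal sketch of such retry handling, assuming the PyMySQL driver and a hypothetical pair of UPDATE statements:

import pymysql

MAX_RETRIES = 3
ER_LOCK_DEADLOCK = 1213  # error returned when InnoDB picks this transaction as the victim

def transfer(conn, src, dst, amount):
    # Retry the whole transaction: InnoDB has already rolled back the victim,
    # so all of its statements must be replayed, not just the last one.
    for attempt in range(MAX_RETRIES):
        try:
            with conn.cursor() as cur:
                cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                            (amount, src))
                cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                            (amount, dst))
            conn.commit()
            return
        except pymysql.MySQLError as e:
            conn.rollback()
            if e.args[0] == ER_LOCK_DEADLOCK and attempt < MAX_RETRIES - 1:
                continue  # deadlock victim: retry the transaction from the start
            raise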

How to diagnose a MySQL deadlock

A MySQL deadlock could involve more than two transactions, but the LATEST DETECTED DEADLOCK section only shows the last two. Also, it only shows the last statement executed in each of the two transactions, plus the locks from the two transactions that created the cycle. What is missed are the earlier statements that might have actually acquired the locks. I will show some tips on how to collect those missed statements.

Let’s look at two examples to see what information is given. Example 1:

1 141013 6:06:22
2 *** (1) TRANSACTION:
3 TRANSACTION 876726B90, ACTIVE 7 sec setting auto-inc lock
4 mysql tables in use 1, locked 1
5 LOCK WAIT 9 lock struct(s), heap size 1248, 4 row lock(s), undo log entries 4
6 MySQL thread id 155118366, OS thread handle 0x7f59e638a700, query id 87987781416 localhost msandbox update
7 INSERT INTO t1 (col1, col2, col3, col4) values (10, 20, 30, 'hello')
8 *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
9 TABLE LOCK table `mydb`.`t1` trx id 876726B90 lock mode AUTO-INC waiting
10 *** (2) TRANSACTION:
11 TRANSACTION 876725B2D, ACTIVE 9 sec inserting
12 mysql tables in use 1, locked 1
13 876 lock struct(s), heap size 80312, 1022 row lock(s), undo log entries 1002
14 MySQL thread id 155097580, OS thread handle 0x7f585be79700, query id 87987761732 localhost msandbox update
15 INSERT INTO t1 (col1, col2, col3, col4) values (7, 86, 62, "a lot of things"), (7, 76, 62, "many more")
16 *** (2) HOLDS THE LOCK(S):
17 TABLE LOCK table `mydb`.`t1` trx id 876725B2D lock mode AUTO-INC
18 *** (2) WAITING FOR THIS LOCK TO BE GRANTED:
19 RECORD LOCKS space id 44917 page no 529635 n bits 112 index `PRIMARY` of table `mydb`.`t2` trx id 876725B2D lock mode S locks rec but not gap waiting
20 *** WE ROLL BACK TRANSACTION (1)

Line 1 gives the time when the deadlock happened. If your application code catches and logs deadlock errors, which it should, then you can match this timestamp with the timestamps of deadlock errors in the application log. You would then have the transaction that got rolled back; from there, retrieve all statements from that transaction.

Line 3 & 11, take note of Transaction number and ACTIVE time. If you log SHOW ENGINE INNODB STATUS output periodically(which is a good practice), then you can search previous outputs with Transaction number to hopefully see more statements from the same transaction. The ACTIVE sec gives a hint on whether the transaction is a single statement or multi-statement one.

Line 4 & 12, the tables in use and locked are only with respect to the current statement. So having 1 table in use does not necessarily mean that the transaction involves 1 table only.

Line 5 & 13, this is worth of attention as it tells how many changes the transaction had made, which is the “undo log entries” and how many row locks it held which is “row lock(s)”. These info hints the complexity of the transaction.

Line 6 & 14, take note of thread id, connecting host and connecting user. If you use different MySQL users for different application functions which is another good practice, then you can tell which application area the transaction comes from based on the connecting host and user.

Line 9, for the f

Truncated by Planet PHP, read more at the original (another 7442 bytes)

Link
Jean-Jerome Schmidt: Data Warehouse in the Cloud - How to Upload MySQL data into Amazon Redshift for reporting and analytics (27.10.2014, 14:23 UTC)
October 27, 2014
By Severalnines

The term data warehousing often brings to mind things like large complex projects, big businesses, proprietary hardware and expensive software licenses. With Hadoop came open source data analysis software that ran on commodity hardware, which helped address at least some of the cost aspects. We have previously blogged about MongoDB and MySQL to Hadoop. But setting up and maintaining a Hadoop infrastructure might still be out of reach for small businesses or small projects with limited budgets. Well, perhaps then you might want to have a look at Redshift.

Now, if you are running e.g. a Galera Cluster for MySQL, why not dedicate one of the cluster nodes to reporting? This is very doable, but if your reports generate long-running queries, it might be advisable to decouple the reporting load from the live cluster. Having an asynchronous slave might help but, depending on the amount of data to be analyzed, a standard MySQL database might not be good enough. The great news is that Redshift is based on a columnar storage technology that’s designed to tackle big data problems.

In this blog post, we’re going to show you how to parallel load your MySQL data into Amazon Redshift.


Loading Data to Amazon Redshift


There are several ways to load your data into Amazon Redshift. The COPY command is the most efficient way to load a table, as it can load data in parallel from multiple files and take advantage of the load distribution between nodes in the Redshift cluster. It supports loading data in CSV (or TSV), JSON, character-delimited, and fixed-width formats.

After your initial data load, if you add, modify, or delete a significant amount of data, you should follow up by running a VACUUM command to reorganize your data and reclaim space after deletes. You should also run an ANALYZE command to update table statistics.

Using individual INSERT statements to populate a table might be prohibitively slow. Alternatively, if your data already exists in other Amazon Redshift database tables, use INSERT INTO ... SELECT or CREATE TABLE AS to improve performance.
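
As a sketch, an initial load plus follow-up maintenance might look like this; it assumes the psycopg2 driver (Redshift speaks the PostgreSQL wire protocol), and the cluster endpoint, table, S3 prefix and credentials are all placeholders:

import psycopg2

conn = psycopg2.connect(host="mycluster.example.redshift.amazonaws.com", port=5439,
                        dbname="reports", user="admin", password="secret")
cur = conn.cursor()

# COPY loads all files under the S3 prefix in parallel across the cluster slices.
cur.execute("""
    COPY customer
    FROM 's3://my-bucket/mysql-dump/customer.csv.'
    CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
    DELIMITER ',' GZIP
""")
conn.commit()

# VACUUM reclaims space and re-sorts rows after large changes; ANALYZE refreshes
# the optimizer statistics. VACUUM cannot run inside a transaction block.
conn.autocommit = True
cur.execute("VACUUM customer")
cur.execute("ANALYZE customer")
conn.close()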

read more

Link
Peter ZaitsevMySQL & Friends Devroom FOSDEM 2015 (27.10.2014, 13:50 UTC)

You can already feel the cold of February coming slowly… you can also smell waffles and fries, and see a large number of beards walking around with laptops… you are right, FOSDEM is coming! And as every year, the MySQL Community will also be present! For the 4th year in a row, I’ll be organizing the MySQL & Friends Devroom.

The FOSDEM 2015 edition will be held January 31 and February 1 here in Brussels. The MySQL & Friends Devroom is back on Sunday from 9 a.m. What is FOSDEM? It stands for the “Free and Open Source Software Developers’ European Meeting.” It’s a free event that offers open-source communities a place to meet, share ideas and collaborate.

As every year, the “Call for Papers” has been announced on the MySQL mailing list, and you can still read it here. The CfP is open until December 7th!

This year the committee responsible for selecting talks is composed of:

* Dimitri Kravtchuk, representing Oracle
* Daniël van Eeden for the Community
* Roland Bouman for the Community
* Cédric Peintre for the Community
* Liz van Dijk, representing Percona
* Serge Frezefond, representing MariaDB
* René Cannaò, representing Blackbird IT

Thanks to all who have accepted this role, and I wish them luck in working hard to make the best schedule possible.

Don’t forget to submit your sessions in time (submit here, and don’t forget to select the MySQL track), and see you soon in Brussels to discover amazing MySQL-related stuff and have some beers with Friends!

The post MySQL & Friends Devroom FOSDEM 2015 appeared first on MySQL Performance Blog.

Link
Shlomi Noach: Refactoring replication topologies with Pseudo GTID: a visual tour (27.10.2014, 09:55 UTC)

Orchestrator 1.2.1-beta supports Pseudo GTID (read the announcement): a means to refactor the replication topology and connect slaves even without a direct relationship, even across failed servers. This post illustrates two such scenarios and shows the visual way of matching/re-synching slaves.

Of course, orchestrator is not just a GUI tool; anything done with drag-and-drop can also be done via the web API (in fact, the drag-and-drop operations invoke the web API) as well as via the command line. I mention this because it is the grounds for failover automation planned for the future.

Scenario 1: the master unexpectedly dies

The master crashes and cannot be contacted. All slaves stop replicating as a result, each at a different position. Some managed to salvage relay logs just before the master died, some didn't. In our scenario, all three slaves are at least caught up with their relay logs (that is, whatever they managed to pull through the network, they have already executed). So they're otherwise sitting idle, waiting for something to happen. Well, something's about to happen.

[Image: orchestrator-pseudo-gtid-dead-master]

Note the green "Safe mode" button to the right. This means operation is through calculation of binary log files & positions in relation to one's master. But the master is now dead, so let's switch to adventurous mode; in this mode we can drag and drop slaves onto instances that would normally be forbidden. At this stage the web interface allows us to drop a slave onto its sibling or onto any of its ancestors (including its very own parent, which is a means of reconnecting a slave with its parent). Anyhow:

[Image: orchestrator-pseudo-gtid-dead-master-pseudo-gtid-mode]

We notice that orchestrator is already kind enough to say which slave is the best candidate to be the new master (127.0.0.1:22990): this is the slave (or one of the slaves) with the most up-to-date data. So we choose to take another server and make it a slave of 127.0.0.1:22990:

[Image: orchestrator-pseudo-gtid-dead-master-begin-drag]

And drop:

[Image: orchestrator-pseudo-gtid-dead-master-drop]

There we have it: although their shared master is inaccessible, and the two slaves' binary log file names & positions mean nothing to each other, we are able to correctly match the two and make one the child of the other:

[Image: orchestrator-pseudo-gtid-dead-master-refactored-1]

Likewise, we do the same with 127.0.0.1:22988:

[Image: orchestrator-pseudo-gtid-dead-master-begin-drag-2]

And end up with our (almost) final topology:

Truncated by Planet PHP, read more at the original (another 4497 bytes)

Link