Oli Sennhauser: FromDual Backup Manager for MySQL 1.2.2 has been released (23.6.2015, 09:33 UTC)

FromDual has the pleasure to announce the release of the new version 1.2.2 of the popular Backup Manager for MySQL and MariaDB (fromdual_bman).

You can download the FromDual Backup Manager from here.

In the inconceivable case that you find a bug in the Backup Manager please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.2.x to 1.2.2

# cd ${HOME}/product
# tar xf /download/fromdual_brman-1.2.2.tar.gz
# rm -f fromdual_brman
# ln -s fromdual_brman-1.2.2 fromdual_brman

Changes in FromDual Backup Manager 1.2.2

FromDual Backup Manager

This release mainly contains fixes related to the brman catalog and physical backups.

You can verify your current FromDual Backup Manager version with the following command:

fromdual_bman --version

  • Fixed a bug in archiving with physical backups.
  • Replaced connect by OO style and fixed the error exit.
  • Fixed catalog creation.
  • Using the archivedir option without the archive option does not make sense.
Link
Oli Sennhauser: We are looking for you: MySQL/MariaDB DBA for FromDual Support (22.6.2015, 11:51 UTC)

Who are we?

FromDual is the leading independent consulting and services company for MySQL, Galera Cluster, MariaDB and Percona Server in Europe, headquartered in Switzerland.

Our customers come mainly from Europe and range from small start-ups to European top-500 companies. From us they receive support for database problems, direct intervention as remote DBAs, training for their DBAs and developers, and consulting on architecture and design decisions. We also develop tools around MySQL, write blog articles and give talks at conferences.

As our high-quality services attract more and more customers, we need colleagues (m/f) who want to grow, both on their own and together with us.

Job description

We are looking for German-speaking employees (her or him) at junior or senior level, full-time, for services around MySQL (mainly support and remote-DBA work). Primarily, you should make sure that our customers' business-critical MySQL databases run like clockwork - and if they do not, get them running again quickly...


Our "ideal candidate"

  • has experience in operating critical and highly available production databases, mainly on Linux,
  • knows replication in all its variations from daily work,
  • knows how the most widespread MySQL HA setups work and how to repair them efficiently when a problem occurs,
  • is rock-solid in SQL,
  • brings experience with Galera Cluster,
  • can script in Bash and write simple programs in at least one widespread programming/scripting language (PHP, Bash, ...).

We are looking for reinforcements who are on the way towards this ideal, starting from a solid foundation.


What we expect from you:

  • knowledge of MySQL, Percona Server or MariaDB, or the willingness to acquire it
  • knowing how to operate critical database systems
  • an understanding of what can go wrong when operating databases
  • an independent way of working (remote), communicating via IRC, Skype, mail and phone
  • knowledge of the Linux system

DBA or DevOps experience, for example, would be a good professional foundation.


You appreciate direct contact with customers, have a good feel for their problems, know how to listen, and quickly get to the actual problem. You are used to acting proactively before something happens, and you lead customers back onto the right path.


To be able to do your job, you work in a European time zone. You can arrange your working hours flexibly, according to the operational situation. We expect you to contribute your share to the on-call duty. FromDual will presumably not have office space at your place of residence. However, relocation is not necessary: we enable you to work from home, or we support you in finding suitable work premises near you. Good written and spoken English skills are required.

What we offer you:

  • A salary appropriate to your performance.
  • The opportunity to develop into a top MySQL database specialist.
  • Independent work.
  • Responsibility for your own projects and customers.
  • Good camaraderie in the team, and a relaxed, pleasant working environment.
  • Job-related training opportunities.
  • Participation in open source events.
  • Working from your preferred place of residence.

Truncated by Planet PHP, read more at the original (another 1371 bytes)

Link
Chris Calender: MySQL 5.6.25 Overview and Highlights (22.6.2015, 00:53 UTC)

MySQL 5.6.25 was recently released (it is the latest MySQL 5.6 GA release), and is available for download here.

For this release, there are 2 “Functionality Added or Changed” items of note:

  • Functionality Added/Changed: MySQL distributions now include an innodb_stress suite of test cases. Thanks to Mark Callaghan for the contribution. (Bug #76347)
  • Functionality Added/Changed: my_print_defaults now masks passwords. To display passwords in cleartext, use the new --show option (see the example below this list).
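
A quick illustration of the new behavior (a sketch; the [client] group and the credentials shown are made up):

shell> my_print_defaults client
--user=backup
--password=*****

shell> my_print_defaults --show client
--user=backup
--password=s3cr3t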

In addition to those, there were 55 other bug fixes:

  • 10 InnoDB
  •   8 Replication
  •   3 Partitioning (one overlaps w/ an InnoDB bug fix)
  • 35 Miscellaneous (and 6 of those were specifically for “MySQL Enterprise Firewall”)

The highlights for me are 5 of the replication bugs, 1 partitioning bug, 1 performance-related bug, 1 wrong results bug, and 9 crashing bugs:

  • Replication: When using semisynchronous replication performance was degrading when the number of threads increased beyond a certain threshold. To improve performance, now only the thread which is committing is responsible for deleting the active transaction node. All other operations do not touch this active transaction list. (Bug #75570)
  • Replication: When binary logging was enabled, using stored functions and triggers resulting in a long running procedure that inserted many records caused the memory use to increase rapidly. This was due to memory being allocated per variable. The fix ensures that in such a situation, memory is allocated once and the same memory is reused. (Bug #75879)
  • Replication: If an error was encountered while adding a GTID to the received GTID set, the log lock was not being correctly released. This could cause a deadlock. (Bug #75781)
  • Replication: When master_info_repository=TABLE the receiver thread stores received event information in a table. The memory used in the process of updating the table was not being freed correctly and this could lead to an out of memory error. The fix ensures that after an event is flushed to the relay log file by a receiver thread, the memory used is freed. (Bug #72885, Bug #69848)
  • Replication: Using mysqlbinlog to process log events greater than 1.6GB failed with an out of memory error. This was caused by an internal error converting the length variable. The fix upgrades the length variable to avoid overflow in both encoding and decoding functions. (Bug #74734)
  • Partitioning: Executing an ALTER TABLE on a partitioned table on which a write lock was in effect could cause subsequent SQL statements on this table to fail. (Bug #74288).
  • Performance-related: Certain queries for the INFORMATION_SCHEMA TABLES and COLUMNS tables could lead to excessive memory use when there were large numbers of empty InnoDB tables. (Bug #72322)
  • Incorrect Results: Queries that included a HAVING clause based on nondeterministic functions could produce incorrect results. (Bug #69638)
  • Crashing Bug: For small values of the read_rnd_buffer_size system variable, internal caching of temporary results could fail and cause query execution failure.
  • Crashing Bug: A failed FLUSH PRIVILEGES statement followed by statements to create or drop accounts could cause a server exit.
  • Crashing Bug: SHOW VARIABLES mutexes were being locked twice, resulting in a server exit.
  • Crashing Bug: For join queries with a large number of tables, the server could exit converting the join to a semi-join.
  • Crashing Bug: Deleting rows from mysql.user following by granting privileges to a new account could result in a server exit.
  • Crashing Bug: Within a stored procedure, access to view columns after DDL or FLUSH TABLES statements in the procedure could cause a server exit.
  • Crashing Bug: Execution of certain BINLOG statements while temporary tables were open by HANDLER statements could cause a server exit.
  • Cr

Truncated by Planet PHP, read more at the original (another 1660 bytes)

Link
Jean-Jerome Schmidt: Become a MySQL DBA - webinar series: deciding on a relevant backup solution (17.6.2015, 15:29 UTC)

Backup and restore is one of the most important aspects of database administration. If a database crashed and there was no way to recover it, any resulting data loss might lead to devastating results for a business. As the DBA operating a MySQL or Galera cluster in production, you need to ensure your backups are scheduled, executed and regularly tested.

There are multiple ways to take backups, but which method fits your specific needs? How do you implement point-in-time recovery?

Join us for this live session on backup strategies for MySQL and Galera clusters led by Krzysztof Książek, Senior Support Engineer at Severalnines.

DATE & TIME

Europe/MEA/APAC
Tuesday, June 30th at 09:00 BST / 10:00 CEST (Germany, France, Sweden)
Register Now

North America/LatAm
Tuesday, June 30th at 09:00 Pacific Time (US) / 12:00 Eastern Time (US)
Register Now

AGENDA

  • Logical and Physical Backup methods
  • Tools
    • mysqldump
    • mydumper
    • xtrabackup
    • snapshots
  • How backups are done in ClusterControl
  • Best practices
  • Example Setups
    • On premises / private datacenter
    • Amazon Web Services

SPEAKER

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience in managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard. 

This webinar ‘Become a MySQL DBA - deciding on a relevant backup solution’ will show you the pros and cons of different backup options and help you pick one that goes well with your environment.

We look forward to “seeing” you there and to insightful discussions!

Link
Jean-Jerome Schmidt: Become a MySQL DBA blog series - Database High Availability (15.6.2015, 13:14 UTC)

There are many approaches to MySQL high availability - from traditional, loosely coupled database setups based on asynchronous replication to more modern, tightly coupled architectures based on synchronous replication. These offer varying degrees of protection, and DBAs almost always have to choose a tradeoff between high availability and cost.

This is the third installment in the ‘Become a MySQL DBA’ series, and discusses the pros and cons of different approaches to high availability in MySQL. Our previous posts in the DBA series include Backup and Restore and Monitoring & Trending.

High Availability - what does it mean?

Availability is somewhat self-explanatory. If your database can be queried by your application, it is available. High, on the other hand, is a separate story. For some organizations, ‘high’ means at most several minutes of downtime over the year. For others, it might mean a few hours per month. If you’ve read the previous blogs in this series, you may have noticed a pattern - “it depends on the business requirements”. This applies here too - you need to know how much downtime you can accept, as this may limit your HA options significantly. Keep in mind that the length of a database incident causing some disturbance in database access may be related to the HA method you choose. Whether that disturbance affects end users, on the other hand, is a different matter. For starters - does your application use a cache? How often does it need to be refreshed? Is it acceptable for your application to show stale data for some period of time? And for how long?

Caching Layer - for database reads and writes?

A cache that sits between the application and the database might be a way of decoupling those two from each other. 

For reads you can use one of many cache solutions - memcached, Redis, couchbase. Cache refresh can be performed by a background thread which, when needed, gets the data out of MySQL and stores it in the caching layer. It could be that the data is outdated because the database is not reachable and the background thread is not able to refresh the cache. While the database is down, the application serves the data out of cache - as long as it’s ok to serve stale data for some time, you are just fine and users may not even experience any issues. 

With writes, it is a similar story - you may want to cache writes in a queue. In the background, you would have threads that read the data out of the queue and store it in the database. Ideally those background threads keep the queue empty and any write request is handled immediately. If the database is down, the queue can serve as a write buffer - the application can still make modifications to the data, but the results are not immediately stored in the database - they will be later on, when the database comes back online and the background threads start working on the backlog.

There are many ways to keep users happy and unaware of the issues behind the scenes - all user-related modifications can be immediately presented to the user, to give an impression that everything is just fine. Other users will not see those changes until the write queue is flushed to the database. Of course, it depends on what kind of data we are talking about - in many cases (e.g., social media site, web forum, chat engine, comment engine), it might be just fine. One way or another, this “illusion” can be maintained only for some period of time though. Eventually, the database has to be brought up again. Let’s talk now about our options for database high availability.

Block-level replication (DRBD)

We’ll start with DRBD - Distributed Replicated Block Device. In short, imagine that you could create a RAID1 over the network. This is, more or less, what DRBD does. You have two nodes (or three in the latest versions), each of them

Truncated by Planet PHP, read more at the original (another 24283 bytes)

Link
Peter Zaitsev: Percona Live Europe 2015! Call for speakers; registration now open (10.6.2015, 07:00 UTC)

Percona Live is moving from London to Amsterdam this year and the event is also expanding to three full days. Percona Live Europe 2015, September 21-23, will be at the Mövenpick Hotel Amsterdam City Centre. The call for speakers and Super Saver registration are now open. Hurry though, because the deadline for submitting a speaking proposal is June 21st and Super Saver registration ends July 5th!

This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to MySQL, NoSQL and Data in the Cloud. You’ll get briefed on the hottest topics, learn about operating a high-performing deployment and hear top industry leaders describe the future of the ecosystem – encompassing MySQL, MariaDB, Percona Server, MongoDB (and more). Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

Have something to say? Why not lead a breakout session or a tutorial?

Breakout sessions are 50 minutes including a Q&A. Tutorial sessions focus on an immediate and practical application of in-depth MySQL and NoSQL knowledge. Tutorial speakers should assume that attendees will have laptops to work through detailed and potentially hands-on presentations. Tutorials are typically three hours long including a Q&A, however, if you have content for a full day, submissions for 6-hour-long tutorials are also being accepted. If your tutorial or breakout session is approved, you’ll receive a complimentary full-conference pass.

Huge thanks to our Conference Committee!

There would be no Percona Live without the hard work of our conference committees. Meet this year’s Percona Live Europe 2015 Conference Committee – a dedicated group of experts in MySQL, NoSQL and Data in the Cloud:

  • Erik Beebe – Founder and CTO, ObjectRocket
  • Luis Motta Campos – Database Administrator, ebay Classifieds Group
  • Colin Charles – Chief Evangelist, MariaDB
  • César Trigo Esteban – Development Director, Gigigo
  • Kenny Gorman – Chief Technologist; Data. Office of the CTO, Rackspace
  • Amrith Kumar – Founder and CTO, Tesora
  • Giuseppe Maxia – Quality Assurance Architect, VMWare
  • Shlomi Noach – Senior Systems Engineer, Booking.com
  • Konstantin Osipov – Engineering Manager, Mail.Ru
  • Morgan Tocker – MySQL Community Manager, Oracle
  • Art van Scheppingen – Head of Database Engineering, Spil Games
  • Charity Majors – Production Engineering Manager, Facebook
  • Peter Zaitsev – Co-founder and CEO, Percona

Sponsorships

Sponsorship opportunities for Percona Live Europe 2015 are now available. Sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks.

Planning to attend?

Super Saver registration discounts for Percona Live Europe 2015 are available through July 5th (at 11:30 p.m. CEST). Visit the Percona Live Europe 2015 website for more information about the conference. Interested community members can also register to receive email updates about Percona Live Europe 2015.

Percona has also negotiated a special hotel rate at the Mövenpick Hotel Amsterdam City Centre. If you book your hotel before July 6th your delicious breakfast is included.

I hope to see you in Amsterdam!

The post Percona Live Europe 2015! Call for speakers; registration now open appeared first on MySQL Performance Blog.

Link
Peter Zaitsev: Auditing MySQL with McAfee and MongoDB (9.6.2015, 13:00 UTC)

Greetings everyone! Let’s discuss a third-party auditing solution for MySQL and how we can leverage MongoDB® to make sense of all of that data.

The McAfee MySQL Audit plugin does a great job of capturing, at a low level, activities within a MySQL server. It does this through some non-standard APIs, which is why installing and configuring the plugin can be a bit difficult. The audit information is stored in JSON format, in a text file, by default.

There is 1 JSON object for each action that takes place within MySQL. If a user logs in, there’s an object. If that user queries a table, there’s an object. Imagine 1000 active connections from an application, each doing 2 queries per second. That’s 2000 JSON objects per second being written to the audit log. After 24 hours, that would be almost 173,000,000 audit entries!

How does one make sense of that many JSON objects? One option would be to write your own parser in $YOUR_FAVORITE_LANGUAGE and convert the JSON to INSERT statements and write the data back to MySQL (Note: If you do this, you can whitelist this table within the plugin so that these INSERTs are not re-audit logged). Or, we can use a system designed to import, store and query JSON objects, such as MongoDB.
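
As a sketch of the do-it-yourself parser route (assuming the jq utility is available; the audit_log target table and the record fields user, cmd and query are guesses - the plugin's actual JSON schema may differ), the conversion could look like:

# One INSERT per audit record; the target table and the field names
# (user, cmd, query) are hypothetical - adjust them to the actual schema.
jq -r '"INSERT INTO audit_log (user, cmd, query) VALUES ("
       + ([.user, .cmd, .query] | map(@json) | join(", "))
       + ");"' mysql-audit.json > audit_inserts.sql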

Install McAfee Audit Plugin

First we need to download the source code for the plugin and download the source code for the specific MySQL version you are running. This is not a complete step-by-step HOWTO on installing this plugin; just some high-level points.

My client for this exercise is still on Percona Server 5.1.73, so we need the source for that EXACT version from percona.com.

We can clone the mcafee/mysql-audit using git.

Unzip the MySQL source and compile it; just don’t do “make install”, only “./configure” and “make” are necessary.

Now compile the plugin. You may want to read the detailed instructions.

This next step is tricky and really only necessary if you are not using vanilla MySQL. It is a required step to allow the plugin to use those non-standard APIs I mentioned earlier. You need to extract the offsets for the plugin to work. Follow the instructions given.
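
Condensed into shell form, the whole flow looks roughly like this (a sketch only - tarball names and paths are assumptions, and the plugin build and offset extraction commands are intentionally left to the plugin's own documentation):

# Fetch the plugin source
git clone https://github.com/mcafee/mysql-audit.git

# Fetch and prepare the matching server source; no "make install" needed
tar xzf Percona-Server-5.1.73.tar.gz
cd Percona-Server-5.1.73 && ./configure && make && cd ..

# Build the plugin against that source tree and extract the offsets,
# following the plugin's detailed instructions (commands omitted here)

# Copy the resulting shared library into MySQL's plugin directory
# (adjust the source path to wherever the build placed libaudit_plugin.so)
cp mysql-audit/src/libaudit_plugin.so "$(mysql -NBe 'SELECT @@plugin_dir')"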

Once that is all done, you can:

INSTALL PLUGIN AUDIT SONAME 'libaudit_plugin.so';

If the plugin fails to load, check MySQL’s error log for the reason why and confer with the plugin documentation on how to resolve.

We now need to enable audit logging because nothing is enabled by default.

SET GLOBAL audit_record_cmds = "select,insert,update,delete";
SET GLOBAL audit_json_file = ON;
SET GLOBAL audit_record_objs = "*.*,{}";
SET GLOBAL audit_force_record_logins = ON;

Look inside @@datadir and you should see a file called mysql-audit.json. You can tail -f this file if you’d like to watch it to make sure data is being written.

If you’d like some more background reading on the audit plugin, check out Fernando’s post on Experiences with McAfee Audit Plugin.

Setting Up MongoDB

Let me begin by stating this is my first time really dealing with MongoDB in any real sense. I spun up an EC2 instance in AWS (m3.large, CentOS 6) and installed MongoDB using yum and the Mongo repositories.

As the ephemeral storage for my instance had been mounted at /opt, I changed just this one option in the supplied /etc/mongod.conf and restarted mongo (service mongod restart).

dbpath=/opt/mongo

I then copied the mysql-audit.json from the MySQL host using SSH:

[percona@mysql-host ~]$ scp -i .ssh/amazon.pem /data/mysql/mysql-audit.json root@54.177.22.22:/tmp/

Then I imported this JSON file directly into MongoDB:

[root@ip-10-255-8-15 ~]# mongoimport --db test --collection audit --drop --file /tmp/mysql-audit.json

The above mongoimport command specifies the database in which to import (test) and in which collection (

Truncated by Planet PHP, read more at the original (another 7781 bytes)

Link
Jean-Jerome Schmidt: Become a MySQL DBA blog series - Backup and Restore (8.6.2015, 15:10 UTC)

It is not uncommon that developers, network/system administrators, or DevOps folks with general backgrounds, find themselves in a DBA role at some point in their career. So, what does a DBA do? In the previous post, we covered monitoring and trending practices, as well as some popular tools that you might find handy in your day to day work. 

We’ll continue this blog series with another basic but crucial DBA responsibility - taking backups of your data. Backup and restore is one of the most important aspects of database administration. If a database crashed and there was no way to recover it, any resulting data loss might lead to devastating results for a business. One could argue that you can protect against crashes by replicating to multiple servers or data centers. But if it is an application error that propagates to all instances, or a human dropping part of a database by mistake, you will probably need to restore from backup.

Different backup methodologies

There are multiple ways to take a backup of a MySQL database, but we can divide these methods into two groups - logical and physical.

Logical backups contain data that is exported using SQL commands and stored in a file. This can be, for example, a set of SQL statements (INSERTs) that, when executed, will restore the content of the database. It does not have to be SQL code; it can be anything that is restorable - you can just as well use SELECT … INTO OUTFILE to generate a file with your database contents. With some modifications to the output file’s syntax, you can store your backup in CSV files.
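
For example (a sketch - database, table and output path are made up), a CSV export of a single table could look like this:

mysql -u root -p mydb -e "
  SELECT * INTO OUTFILE '/tmp/orders.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\n'
  FROM orders;"

Keep in mind that the file is written by the server process on the database host, so the target directory must be writable by mysqld (and permitted by secure_file_priv where that variable is set).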

Physical backups are copies of the physical database files. Here, we would make a binary copy of a whole database by, for example, copying all of the files or by making a snapshot of the volume where the data directory is located.
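
As a sketch of the physical approach with Percona XtraBackup (using the innobackupex wrapper current at the time of writing; credentials, paths and the timestamped directory name are assumptions):

# Take a binary copy of the running server's data files
innobackupex --user=backup --password=s3cr3t /backups

# Make the copy consistent before restoring it (directory name is generated)
innobackupex --apply-log /backups/2015-06-08_15-10-00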

A logical backup is usually slower than a physical one, because of the overhead of executing SQL commands to get the data out and then executing another set of SQL commands to get the data back into the database. This is a severe limitation that tends to prevent logical backups from being the sole backup method on large (high tens or hundreds of gigabytes) databases. On the other hand, a major advantage of the logical backup is the fact that, having all data in SQL format, you can restore single rows.

Physical backups are not that flexible - while some of the methods make it possible to restore separate tables, you cannot go down to row level. On the other hand, this is the fastest way to back up and restore your database - you are limited only by the performance of your hardware; disk speed and network throughput will be the main limiting factors.

One more important concept when it comes to MySQL backups is point-in-time recovery. A backup, whether logical or physical, is taken at a given point in time. This is not enough; you have to be able to restore your database to any point in time, including a point between backups. In MySQL, the main way to handle point-in-time recovery is to use the binary logs to replay the workload. With that in mind, a backup is not complete unless you make a copy of the binlogs along with it.
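
Putting the pieces together, a minimal point-in-time recovery flow might look like this (a sketch; file names, the binlog position and the stop time are all made up):

# Full logical backup that records the binlog coordinates it corresponds to
mysqldump -u root -p --single-transaction --master-data=2 \
  --routines --events --all-databases > full_backup.sql

# After a disaster: restore the dump...
mysql -u root -p < full_backup.sql

# ...then replay the binlogs from the recorded position up to just before the bad event
mysqlbinlog --start-position=107 --stop-datetime="2015-06-08 15:00:00" \
  mysql-bin.000042 | mysql -u root -p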

Logical backup methods

mysqldump

The best known method is definitely mysqldump, a CLI tool that enables the DBA to create an SQL dump of the database. Mysqldump is a single-threaded tool and this is its most significant drawback - performance is ok for small databases but it quickly becomes unacceptable if the data set grows to tens of gigabytes. If you plan to use mysqldump as a means of taking backups, you need to keep a few things in mind. First, by default mysqldump doesn’t include routines and events in its output - you have to explicitly set the --routines (-R) and --events (-E) flags. Second, if you want to take a c

Truncated by Planet PHP, read more at the original (another 19467 bytes)

Link
Peter Zaitsev: MaxScale: A new tool to solve your MySQL scalability problems (8.6.2015, 13:00 UTC)

Ever since MySQL replication has existed, people have dreamed of a good solution to automatically split read from write operations, sending the writes to the MySQL master and load balancing the reads over a set of MySQL slaves. While it seems easy to solve at first, the reality is far more complex.

First, the tool needs to make sure it parses and analyses correctly all the forms of SQL that MySQL supports in order to sort writes from reads, something that is not as easy as it seems. Second, it needs to take into account whether a session is in a transaction or not.

While in a transaction, the default transaction isolation level in InnoDB, REPEATABLE READ, and the MVCC framework ensure that you’ll get a consistent view for the duration of the transaction. That means all statements executed inside a transaction must run on the master but, when the transaction commits or rolls back, the following SELECT statements on the session can again be load balanced to the slaves - if the session is in autocommit mode, of course.

Then, what do you do with sessions that set variables? Do you restrict those sessions to the master or do you replay them to the slave? If you replay the set variable commands, you need to associate the client connection with a set of MySQL backend connections, made of at least a master and a slave. What about temporary objects like “create temporary table…”? How do you deal with a slave that lags behind, or worse, with broken replication? Those are just a few of the challenges you face when you want to build a tool to perform read/write splitting.

Over the last few years, a few products have tried to tackle the read/write split challenge. The MySQL_proxy was the first attempt I am aware of at solving this problem, but it ended up with many limitations. ScaleARC does a much better job and is very usable, but it still has some limitations. The latest contender is MaxScale from MariaDB, and this post is a road story of my first implementation of MaxScale for a customer.

Let me first introduce what MaxScale is exactly. MaxScale is an open source project, developed by MariaDB, that aims to be a modular proxy for MySQL. Most of the functionality in MaxScale is implemented as modules, which include, for example, modules for the MySQL protocol, client side and server side.

Other families of available modules are routers, monitors and filters. Routers are used to determine where to send a query; Read/Write splitting is accomplished by the readwritesplit router. The readwritesplit router uses an embedded MySQL server to parse the queries… quite clever and hard to beat in terms of query parsing.

There are other routers available, the readconnrouter is basically a round-robin load balancer with optional weights, the schemarouter is a way to shard your data by schema and the binlog router is useful to manage a large number of slaves (have a look at Booking.com’s Jean-François Gagné’s talk at PLMCE15 to see how it can be used).

Monitors are modules that maintain information about the backend MySQL servers. There are monitors for a replicating setup, for Galera and for NDB cluster. Finally, the filters are modules that can be inserted in the software stack to manipulate the queries and the resultsets. All those modules have well defined APIs and thus, writing a custom module is rather easy, even for a non-developer like me, basic C skills are needed though. All event handling in MaxScale uses epoll and it supports multiple threads.
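
To make this concrete, a minimal read/write splitting setup in /etc/maxscale.cnf might look roughly like this (a sketch based on the MaxScale 1.x configuration format; addresses, credentials and the listener port are assumptions):

[maxscale]
threads=4

[server1]
type=server
address=10.0.0.1
port=3306
protocol=MySQLBackend

[server2]
type=server
address=10.0.0.2
port=3306
protocol=MySQLBackend

[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1,server2
user=maxmon
passwd=maxmon_pwd

[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2
user=maxrouter
passwd=maxrouter_pwd

[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4006

Clients then connect to port 4006 instead of the database directly; the monitor determines which backend is the master, and the router sends writes there while spreading reads over the slaves.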

Over the last few months I worked with a customer having a challenging problem. On a PXC cluster, they have more than 30k queries/s and, because of their write pattern and to avoid certification issues, they want to be able to write to a single node and to load balance the reads. The application is not able to do the Read/Write splitting, so without a tool to do the splitting, only one node can be used for all the traffic. Of course, to make things easy, they use a lot of Java code that sets tons of session variables. Furthermore, for ISO 27001 compliance, they want to be able to log all the queries for security analysis (and also for performance analysis, why not?). So: high query rate, Read/Write splitting and full query logging - like I said, a challenging problem.

We experimented with a few solutions. One was a hardware load balancer that failed miserably – the implementation was just too simple, using only regular expressions. Another solution we tried was ScaleArc, but it needed many rules to whitelist the set session variables and to repeat them to multiple servers. ScaleArc could have done the job, but all the rules increase the CPU load and the cost is per CPU. The queries could have been sent to rsyslog and aggregated for analysis.

Finally, the HA implementatio

Truncated by Planet PHP, read more at the original (another 4041 bytes)

Link
Shlomi Noach: Orchestrator visual cheatsheet (5.6.2015, 12:19 UTC)

Orchestrator is growing. Supporting automatic detection of topologies, simple refactoring of topology trees, complex refactoring via Pseudo-GTID, failure detection and automated discovery, it is becoming larger and larger by the day.

One of the problems with growing projects is how to document them properly. Orchestrator enjoys a comprehensive manual, but as it gets more and more detailed, it becomes difficult to stay oriented and pointed in the right direction. I've done my best to point out the simple use cases throughout the manual.

One thing that is difficult to put into words is topologies. Explaining "failover of an intermediate master S1 that has S2,...,Sn slaves onto a sibling of S1 provided that..." is too verbose. So here's a quick visual cheatsheet for (current) topology refactoring commands. Refactoring commands are a mere subset of overall orchestrator commands, but they're great to play with and perfect for visualization.

The "move" and related commands use normal replication commands (STOP SLAVE; CHANGE MASTER TO; START SLAVE UNTIL; ...).

The "match" and related commands utilize Pseudo-GTID and use more elaborate MySQL commands (SHOW BINLOG EVENTS, SHOW RELAYLOG EVENTS).

So without further ado, here's what each command does (and do run "orchestrator" from the command line to get a man-like explanation of everything, or just go to the manual).
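
For instance (a sketch - host names are made up, and the exact set of refactoring commands depends on your orchestrator version):

# Move a slave one level up, to replicate directly from its grandmaster
orchestrator -c move-up -i slave1.mydomain.com:3306

# Relocate a slave below a sibling using normal replication commands
orchestrator -c move-below -i slave1.mydomain.com:3306 -d slave2.mydomain.com:3306

# Do the equivalent via Pseudo-GTID matching
orchestrator -c match-below -i slave1.mydomain.com:3306 -d slave2.mydomain.com:3306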

[Images in the original post: nine cheatsheet graphics (orchestrator-cheatsheet-visualized 1-2 and 7-13), one visualization per refactoring command.]

Truncated by Planet PHP, read more at the original (another 2848 bytes)

Link