Peter ZaitsevPercona XtraDB Cluster: How to run a 2-node cluster on a single server (21.10.2014, 13:53 UTC)
I reckon there's little sense in running 2 or more Percona XtraDB Cluster (PXC) nodes on a single physical server other than for educational and testing purposes, but doing so is still useful in those cases. The most popular way of achieving this seems to be with server virtualization, such as making use of Vagrant boxes. But in the same way you can have multiple instances of MySQL running in parallel at the OS level in the form of concurrent mysqld processes, so too can you have multiple Percona XtraDB Cluster nodes. And the way to achieve this is precisely the same: using dedicated datadirs and different ports for each node.

 

Which ports?

Four TCP ports are used by Percona XtraDB Cluster:
  • the regular MySQL port (default 3306)
  • port for group (Galera) communication (default 4567)
  • port for State Transfer (default 4444)
  • port for Incremental State Transfer (default: group communication port + 1, i.e. 4568)
Of course, when you have multiple instances on the same server the default values won't work for all of them, so we need to define new ports for the additional instances and make sure the local firewall, if one is active (iptables, SELinux, …), is open to them.
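For instance, if the second node ends up using ports 3307, 5020 and 5021 (the values this post settles on below), opening them in iptables could look something like this. This is only a sketch: the port numbers are this post's choices, and instances talking over the loopback interface may not need firewall changes at all.

# illustrative: open the extra ports used by the second instance
$ iptables -I INPUT -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 5020:5021 -j ACCEPT
$ service iptables save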


Installing Percona XtraDB Cluster, configuring and starting the first node

My test server was a fresh CentOS 6.5 configured with the Percona yum repository, from which I installed the latest Percona XtraDB Cluster (5.6.20-25.7.888.el6); note that you'll need the EPEL repository as well to install socat, which is a dependency (see this bug). To avoid confusion, I've prevented the mysql service from starting automatically:
chkconfig --level 3 mysql off
chkconfig --del mysql

I could have installed PXC from the tarball but I decided to do it from the repositories to have all dependencies covered by yum. This is what my initial /etc/my.cnf looked like (note the use of default values):
[mysqld]
datadir = /var/lib/mysql
port=3306
socket=/var/lib/mysql/mysql-node1.sock
pid-file=/var/lib/mysql/mysql-node1.pid
log-error=/var/lib/mysql/mysql-node1.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node1
wsrep_cluster_address=gcomm://
I started by manually bootstrapping the cluster with this single node using the command:
$ mysqld_safe --defaults-file=/etc/my.cnf --wsrep-new-cluster

You should then be able to access this node through the local socket:
$ mysql -S /var/lib/mysql/mysql-node1.sock
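With just this bootstrapped node running, a quick sanity check should report a cluster of one; this is the expected shape of the output, though your exact values may differ:

mysql> show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+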

 

Configuring and starting the second node

Then I created a similar configuration file for the second instance, which I named /etc/my2.cnf, with the following modifications:
[mysqld]
datadir = /var/lib/mysql2
port=3307
socket=/var/lib/mysql2/mysql-node2.sock
pid-file=/var/lib/mysql2/mysql-node2.pid
log-error=/var/lib/mysql2/mysql-node2.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_cluster_address=gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"

Note the use of base_port: by having it defined, port 5020 is used for group communication and 5021 (the one above it) is reserved for IST (it's the same as using gmcast.listen_addr=tcp://127.0.0.1:5020, just simpler).
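For reference, an equivalent but more explicit way to pin both ports with provider options would be something along these lines (ist.recv_addr is only needed if you want to spell out the IST port rather than rely on the base_port + 1 convention):

wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:5020; ist.recv_addr=127.0.0.1:5021;"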
You need to create the datadir for this second instance and set the right permissions on it, otherwise MySQL won't be able to create some files (like .pid and .err); you don't, however, need to run the mysql_install_db script:
$ chown -R mysq

Truncated by Planet PHP, read more at the original (another 3368 bytes)

Link
Jean-Jerome SchmidtNew Webinar: 9 DevOps Tips for Going in Production with Galera Cluster for MySQL - November 11th (21.10.2014, 09:49 UTC)
October 21, 2014
By Severalnines

 

Galera is a MySQL replication technology that can simplify the design of a high availability application stack. With a true multi-master MySQL setup, an application can now read and write from any database instance without worrying about master/slave roles, data integrity, slave lag or other drawbacks of asynchronous replication.

 

And that all sounds great until it’s time to go into production. Throw in a live migration from an existing database setup and devops life just got a bit more interesting ...

 

So if you are in devops, then this webinar is for you!

 

Operations is not so much about specific technologies, but about the techniques and tools you use to deploy and manage them. Monitoring, managing schema changes and pushing them to production, performance optimizations, configurations, version upgrades, backups; these are all aspects to consider – preferably before going live.

 

Let us guide you through 9 key tips to consider before taking Galera Cluster into production. 

 

New Webinar: 9 DevOps Tips for Going in Production with Galera Cluster for MySQL - November 11th

 

DATE & TIME

 

Europe/MEA/APAC

Tuesday, November 11th at 09:00 GMT (UK) / 10:00 CET (Germany, France, Sweden)

Register Now

 

North America/LatAm

Tuesday, November 11th at 9:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now

 

SPEAKER

Johan Andersson, CTO, Severalnines

 

read more

Link
Peter ZaitsevAutumn: A season of MySQL-related conferences. Here’s my list (20.10.2014, 14:58 UTC)

Autumn is a season of MySQL-related conferences and I'm about to hit the road to speak at and attend quite a few of them.

This week I'll participate in All Things Open, a local conference for me here in Raleigh, N.C. and therefore one I do not have to travel for. All Things Open explores open source, open tech and the open web in the enterprise. I'll be speaking on SSDs for Databases at 3:15 p.m. on Thursday, Oct. 23 and I'll also be participating in a book signing for the High Performance MySQL book at 11:45 p.m. at the "Meet the Speaker" table. We are also proud to be a sponsor of this show so please stop by and say "Hi" at our booth in the expo hall.

Following this show I go to Moscow, Russia for the Highload++ conference. This is a wonderful show for people interested in high-performance solutions for Internet applications and I attend almost every year. It has a great lineup of speakers from leading Russian companies as well as many top international speakers covering a lot of diverse technologies. I have 3 talks at this show, on Application Architecture, Using Indexes in MySQL, and SSD and Flash Storage for Databases. I'm looking forward to reconnecting with my many Russian friends at this show.

From Highload I go directly to Percona Live London 2014 (Nov. 3-4), which is the show we're putting together – which of course means it is filled with great in-depth information about MySQL and its variants. I think this year we have a good balance of talks from MySQL users such as Facebook, Github, Booking.com, Ebay, Spil Games and IE Domain Registry, as well as vendors with in-depth information about products and experience with many customer environments – MySQL @ Oracle, HP, HGST, Percona, MariaDB, Pythian, Codership, Continuent, Tokutek, FromDual, OlinData. It looks like it is going to be a great show (though of course I'm biased) so do not forget to get registered if you have not already. (On Twitter use hashtag #PerconaLive)

The show I'm sorry to miss is the OpenStack Paris Summit. Even though it is so close to London, the additional visa logistics make it unfeasible for me to visit. There are going to be a fair number of Perconians at the show, though. Our guys will be speaking about a MySQL and OpenStack Deep Dive as well as Percona Server Features for OpenStack and Trove Ops. We're also exhibiting at this show so please stop by our booth and say "hi."

Finally there is AWS re:Invent in Las Vegas Nov. 11-14. I have not submitted any talks for this one but I'll drop in for a day to check it out. We're also exhibiting at this show so if you're around please stop by and say "hi."

This is going to be quite a busy month with a lot of events! There are actually more where we're speaking or attending. If you're interested in the events we're participating in, there is a page on our web site to tell you just that! I also invite you to submit papers to speak at the new

Truncated by Planet PHP, read more at the original (another 742 bytes)

Link
Shlomi NoachMaking UUID() and RAND() replication safe (20.10.2014, 06:40 UTC)

MySQL's UUID() and RAND() functions both produce (pseudo-)nondeterministic results; UUID()'s result is moreover bound to the host on which it executes. For this reason, both are unsafe to replicate with STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));

master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The warning we got on the insert directly relates to the following inconsistency on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
+------+--------------------------------------+

The data on the slave is clearly inconsistent with the master's. The slave, replicating via STATEMENT binlog format, re-executes the INSERT command and gets a different UUID value.

External

One solution to the above is to generate the UUID value from your application. By the time MySQL gets the INSERT statement, the UUID value is a constant string, as far as MySQL is concerned.
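As a minimal sketch of that approach, you could generate the value in a shell wrapper before the statement ever reaches MySQL (uuidgen ships with util-linux; the table is the one created above, and the id value 3 is just for illustration):

$ UUID=$(uuidgen)
$ mysql -e "insert into test.uuid_test values (3, '$UUID')"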

Internal

However there's a way to do it from within MySQL, by decoupling the UUID() function from the INSERT statement. It takes a session variable. Consider:

master> set @safe_uuid := UUID();
Query OK, 0 rows affected (0.00 sec)

master> insert into test.uuid_test values (2, @safe_uuid);
Query OK, 1 row affected (0.02 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

And on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The reason why this succeeds is that MySQL stores session variable values that are used by DML queries in the binary log. It just so happened that @safe_uuid was assigned the UUID() value, but it could just as well have been assigned a constant or other computation. MySQL stored the resulting value in the binary log, where it is forced upon the slave. Check out this binary log snippet:

# at 14251
#141018 12:57:35 server id 1  end_log_pos 14319         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
SET @@session.sql_auto_is_null=0/*!*/;
BEGIN
/*!*/;
# at 14319
#141018 12:57:35 server id 1  end_log_pos 14397         User_var
SET @`safe_uuid`:=_utf8 0x32396335316662392D353661642D313165342D623238342D336339373065613331656138 COLLATE `utf8_general_ci`/*!*/;
# at 14397
#141018 12:57:35 server id 1  end_log_pos 14509         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
insert into test.uuid_test values (2, @safe_uuid)
/*!*/;
# at 14509
#141018 12:57:35 server id 1  end_log_pos 14536         Xid = 145
COMMIT/*!*/;

The same can be applied to RAND(). The funny thing about RAND() is that it is already taken care of by the binary log via SET @@RAND_SEED1 and SET @@RAND_SEED2 statements (i.e. it works), though the documentation clearly states it is unsafe.
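If you would rather not rely on that, the session-variable decoupling shown above for UUID() should work just as well for RAND(); a sketch, reusing the same test table (the float value is implicitly converted to the varchar column):

master> set @safe_rand := RAND();
Query OK, 0 rows affected (0.00 sec)

master> insert into test.uuid_test values (4, @safe_rand);
Query OK, 1 row affected (0.02 sec)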

With Row Based Replication (RBR) the problem never arises in the first place since the binlog contains the values of the new/updated rows.
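For completeness, switching the format is a one-liner (shown globally here; it can also be set per session or in my.cnf):

mysql> SET GLOBAL binlog_format = 'ROW';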

Link
Peter ZaitsevInnodb transaction history often hides dangerous ‘debt’ (17.10.2014, 14:02 UTC)

In many write-intensive workloads on the InnoDB/XtraDB storage engines you may see hidden and dangerous "debt" being accumulated: unpurged transaction "history" which, if not kept in check, will over time cause severe performance regressions or will take all free space and cause an outage. Let's talk about where it comes from and what you can do to avoid running into trouble.

Technical Background: InnoDB is an MVCC engine which means it keeps multiple versions of the rows in the database, and when rows are deleted or updated they are not immediately removed from the database but kept for some time – until they can be removed. For a majority of OLTP workloads they can be removed seconds after the change actually took place. In some cases though they might need to be kept for a long period of time – if there are some old transactions running in the system that might still need to look at an old database state. As of MySQL 5.6 Innodb has one or several “purge threads” which remove the old data that can be removed, though they might not be doing it fast enough for workloads with very intensive writes.

Does it really happen? I started looking into this problem based on some customer concerns and to my surprise I could very easily get the history to grow rapidly using a basic sysbench "update" workload. It is especially easy with the default innodb_purge_threads=1 setting, but even with innodb_purge_threads=8 it grows rather rapidly.
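One way to watch this "debt" accumulate is the history length counter in the INNODB_METRICS table (available as of MySQL 5.6; this counter should be enabled by default, but verify on your build):

mysql> select name, count from information_schema.innodb_metrics
    -> where name = 'trx_rseg_history_len';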

If we take a look at the purging speed (which comes from the INNODB_METRICS table) we can see that purge is very much starved by the active concurrent sysbench process and speeds up greatly when it is finished:

Now, to be frank, this is not an easy situation to get into: in the majority of workloads with short transactions the undo space is kept in memory and purge is able to keep up. If the undo space, however, happens to be evicted from the buffer pool, the purge speed can slow down drastically and the system might not be able to keep up anymore. How could that happen? There are 2 common variants…

Long Running Transaction: If you’re having some long running transaction, for example mysqldump, on the larger table the purging has to pause while that transaction is running and a lot of history will be accumulated. If there is enough IO pressure a portion of undo space will be removed from the buffer pool.
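A quick way to spot such an offender is the INNODB_TRX table; any transaction with an old trx_started while writes are flowing is suspect. A sketch, not a full diagnosis:

mysql> select trx_id, trx_started, trx_mysql_thread_id
    -> from information_schema.innodb_trx order by trx_started limit 5;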

MySQL Restart: Even with a modest history length, restarting MySQL will wash the undo space away from memory and will cause purge to be IO-bound. This is of course if you're not using InnoDB Buffer Pool save and reload.

How do you check whether your UNDO space is well cached? In Percona Server I can use these commands:

mysql> select sum(curr_size)*16/1024 undo_space_MB from XTRADB_RSEG;
+---------------+
| undo_space_MB |
+---------------+
|     1688.4531 |
+---------------+
1 row in set (0.00 sec)
mysql> select count(*) cnt, count(*)*16/1024 size_MB, page_type from INNODB_BUFFER_PAGE group by page_type;
+--------+-----------+-------------------+
| cnt    | size_MB   | page_type         |
+--------+-----------+-------------------+
|     55 |    0.8594 | EXTENT_DESCRIPTOR |
|      2 |    0.0313 | FILE_SPACE_HEADER |
|    108 |    1.6875 | IBUF_BITMAP       |
|  17186 |  268.5313 | IBUF_INDEX        |
| 352671 | 5510.4844 | INDEX             |
|     69 |    1.0781 | INODE             |
|    128 |    2.0000 | SYSTEM            |
|      1 |    0.0156 | TRX_SYSTEM        |
|   6029 |   94.2031 | UNDO_LOG          |
|  16959 |  264.9844 | UNKNOWN           |
+--------+-----------+-------------------+
10 rows in set (1.65 sec)

This shows that the total undo space size is now 1.7GB, with less than 100MB of it cached in the buffer pool…

Here are a few graphs from running a heavy concurrent query during a lighter workload, where purging could keep up. In this case I used the "injection" benchmark in sysbench, setting --trx-rate to 50% of what the system showed as peak.

mysql> select count(distinct k+ length(pad)) from sbtest1;
+--------------------------------+
| count(distinct k+ length(pad)) |
+--------------------------------+
|                       30916851 |
+--------------------------------+
1 row in set (28 min 32.38 sec)


Truncated by Planet PHP, read more at the original (another 6815 bytes)

Link
Jean-Jerome SchmidtIntegrating ClusterControl with FreeIPA and Windows Active Directory for Authentication (17.10.2014, 10:21 UTC)
October 17, 2014
By Severalnines

Integrating ClusterControl with a corporate LDAP directory is a common task for many IT organizations. In an earlier blog, we showed you how to integrate ClusterControl with OpenLDAP. In this post, we will show you how to integrate with FreeIPA and Windows Active Directory. 

 

How ClusterControl Performs LDAP Authentication

 

ClusterControl supports the LDAPv3 protocol, based on RFC2307. More details on this in the documentation.

 

When authenticating, ClusterControl will first bind to the directory tree server (LDAP Host) using the specified Login DN user and password, then check whether the username you entered exists as the uid or cn of the User DN. If it exists, ClusterControl will use that username to bind against the LDAP server and check whether it belongs to the group configured as LDAP Group Name in ClusterControl. If it does, ClusterControl maps the user to the appropriate ClusterControl role and grants access to the UI.
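You can sanity-check this bind-then-search sequence from the command line before touching ClusterControl. A sketch with placeholder host, DNs and user (substitute your own values):

$ ldapsearch -x -H ldap://ldap.example.com \
    -D "cn=admin,dc=example,dc=com" -w admin_password \
    -b "ou=People,dc=example,dc=com" "(uid=jsmith)" cn memberOf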

 

The following flowchart summarizes the workflow:

 

FreeIPA

 

FreeIPA is a Red Hat sponsored open source project which aims to provide an easily managed Identity, Policy and Audit (IPA) suite primarily targeted towards networks of Linux and Unix computers. It is easy to install/configure, and is an integrated security information management solution combining Linux (Fedora), 389 Directory Server, MIT Kerberos, NTP, DNS, Dogtag (Certificate System).

 

read more

Link
Peter ZaitsevPercona Toolkit for MySQL with MySQL-SSL Connections (16.10.2014, 13:06 UTC)

I recently had a client ask me how to use Percona Toolkit tools with an SSL connection to MySQL (MySQL-SSL). SSL connections aren't widely used in MySQL due to most installations being within an internal network. Still, there are cases where you could be accessing MySQL over the public internet or even over a public "private" network (ex: a WAN between two colo datacenters). In order to keep packet sniffers at bay, the connection to MySQL should be encrypted.

If you are connecting to Amazon RDS from home or office (ie: not within the AWS network) you better be encrypted!

As there is already a MySQL Performance Blog post on how to set up MySQL SSL connections, we can skip that and dive right in.

As you probably know, the mysql client can read multiple configuration files, the primary one being /etc/my.cnf. You probably also know that the client reads a config file in your $HOME directory: .my.cnf (that's dot-my-dot-cnf). It is inside this file that we can set parameters for our shell-user account when connecting to MySQL hosts.

Percona Toolkit uses Perl's DBI with the DBD::mysql driver to make connections to MySQL hosts. This library is linked against the libmysqlclient C library, which is responsible for reading and parsing the global config file as well as your $HOME config file. Let's set some options there that are not directly available in the toolkit scripts. Using $MY_FAVORITE_EDITOR, edit your $HOME/.my.cnf as such:

[client]
user = myuser
password = foobar
ssl-ca = /Users/drmac/ca-cert.pem

You must use the absolute path to the CA file. Relative paths won’t cut it:

ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failed

Test your connection first using the mysql client:

asura:~ drmac$ mysql -h 74.13.19.17 -e "SHOW STATUS LIKE 'Ssl_cipher'"
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

Excellent! Now we can use any Percona Toolkit script and connect via SSL:

asura:~ drmac$ pt-table-checksum -h 74.13.19.17 -d foo -t zipcodes
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
10-13T14:10:02      0      0    45358       7       0   5.959 foo.myzipcodes

Sweet!

Unfortunately, Percona Toolkit scripts are hard-coded to read the [client] section of your .my.cnf. If you don't want to overwrite any existing configuration that may be present, you can make a new configuration file and specify it to any toolkit script using -F. Again, relative paths won't work here; use the absolute path, even if you are in the same directory.

asura:~ drmac$ cp .my.cnf mytestconfig.cnf
asura:~ drmac$ rm .my.cnf
asura:~ drmac$ pt-table-checksum -h 74.13.19.17 -d foo -t zipcodes -F /Users/drmac/mytestconfig.cnf

Now you can continue using our awesome tools in a secure manner.

Cheers!
-Matthew

The post Percona Toolkit for MySQL with MySQL-SSL Connections appeared first on MySQL Performance Blog.

Link
Chris CalenderIgnoring the lost+found Directory in your Datadir (15.10.2014, 22:37 UTC)

I still get asked about the lost+found directory enough, and so I wanted to provide a current update.

The lost+found directory is a filesystem directory created at the root level of a mapped drive. Thus it is common to see it if you create your mysql datadir at the root level of a mapped drive.

In the past, you could ignore it, if it wasn’t too problematic for you, or you could move your datadir down a level, and then it wouldn’t be created in the datadir anymore.

However, there is now the --ignore-db-dir option. It is actually not too new (it's been in MariaDB since 5.3.9 and 5.5.28, and in MySQL as of 5.6.3), but I don't think many are too familiar with it.

But when you do run into this problem, some/many would prefer to add a single line to the config file rather than move the datadir.

To do this, just add the following option to your my.cnf file, under the [mysqld] section (it cannot be set dynamically):

ignore-db-dir=lost+found

And just to show the example:

Before updating my.cnf file:

mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 5.5.40-MariaDB-log |
+--------------------+

mysql> show global variables like 'ignore_db_dirs';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| ignore_db_dirs |       |
+----------------+-------+

Update my.cnf file and restart mysqld:

mysql> show global variables like 'ignore_db_dirs';
+----------------+------------+
| Variable_name  | Value      |
+----------------+------------+
| ignore_db_dirs | lost+found |
+----------------+------------+

mysql> use lost+found
ERROR 1102 (42000): Incorrect database name 'lost+found'

Now you see the lost+found directory is ignored.

Of course, you can ignore multiple directories. However, if you need to add more than one, then you *must* use multiple instances of the ignore-db-dir option, one for each directory you want to ignore. That is, you cannot separate them by comma, even though that is how they will be displayed when you have more than one being ignored (the comma is treated as part of the name, so then neither of the dirs you want to ignore would be ignored):

For instance, if you want to ignore both "lost+found" and "test", then you must add the following to the config file:

ignore-db-dir=lost+found
ignore-db-dir=test

Then restart mysqld:

mysql> show global variables like 'ignore_db_dirs';
+----------------+-----------------+
| Variable_name  | Value           |
+----------------+-----------------+
| ignore_db_dirs | lost+found,test |
+----------------+-----------------+

Hope this helps.

 

Link
Peter ZaitsevHow to close POODLE SSLv3 security flaw (CVE-2014-3566) (15.10.2014, 17:55 UTC)

Padding Oracle On Downgraded Legacy Encryption

First off, the naming "convention" as of late for security issues has been terrible. The newest vulnerability (CVE-2014-3566) is nicknamed POODLE, which at least is an acronym and, as per the header above, has some meaning.

The summary of this issue is that it is much the same as the earlier B.E.A.S.T. (Browser Exploit Against SSL/TLS); however, there's no known mitigation method in this case other than entirely disabling SSLv3 support. In short, an attacker has a vector by which they can retrieve the plaintext from your encrypted streams.

So let's talk mitigation: the Mozilla Security Wiki's Server Side TLS page has for some time made strict recommendations of ciphers and protocols, and is certainly worth your attention.

Apache

Disable SSLv2 and SSLv3 in your Apache SSL configuration by setting:
SSLProtocol all -SSLv2 -SSLv3

Nginx

Allow support only for TLS in Nginx with the following:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

MySQL

This is where things get far more interesting; unlike Apache and Nginx, there's no way to allow/disallow entire protocols of the SSL/TLS spec within MySQL. There is, however, the ability to specify the cipher spec to be used in SSL communication.

As such, to remove SSLv3 support from MySQL you need only ensure that none of the SSLv3 ciphers are in use within your configuration.
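For illustration, an explicit cipher restriction would look like this in my.cnf; the cipher named here is just an example, and whether your mysqld accepts it depends entirely on the SSL library it was built against (see the note on limited cipher support below):

[mysqld]
ssl-cipher=DHE-RSA-AES256-GCM-SHA384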

As per information in this bug, you can find a list of SSLv3 ciphers by simply running:
openssl ciphers -v 'DEFAULT' | awk '/SSLv3 Kx=(RSA|DH|DH(512))/ { print $1 }'
DHE-RSA-AES256-SHA
DHE-DSS-AES256-SHA
DHE-RSA-CAMELLIA256-SHA
DHE-DSS-CAMELLIA256-SHA
AES256-SHA
CAMELLIA256-SHA
EDH-RSA-DES-CBC3-SHA
EDH-DSS-DES-CBC3-SHA
DES-CBC3-SHA
DHE-RSA-AES128-SHA
DHE-DSS-AES128-SHA
DHE-RSA-SEED-SHA
DHE-DSS-SEED-SHA
DHE-RSA-CAMELLIA128-SHA
DHE-DSS-CAMELLIA128-SHA
AES128-SHA
SEED-SHA
CAMELLIA128-SHA
RC4-SHA
RC4-MD5
EDH-RSA-DES-CBC-SHA
EDH-DSS-DES-CBC-SHA
DES-CBC-SHA
EXP-EDH-RSA-DES-CBC-SHA
EXP-EDH-DSS-DES-CBC-SHA
EXP-DES-CBC-SHA
EXP-RC2-CBC-MD5
EXP-RC4-MD5

Removing the above from your ssl-cipher configuration should disable SSLv3 support; of course, ensuring your MySQL service is NOT generally accessible is by far one of the most important steps you can take in securing your MySQL deployment against CVE-2014-3566.

You can read more about POODLE here.

The following script will help to identify support for any non-SSLv3 ciphers; unfortunately, in my limited testing I have yet to find a supported non-SSLv3 cipher.

Formatting is an issue for the script as such please see the Github gist
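In essence such a check loops over the non-SSLv3 ciphers OpenSSL knows about and attempts a connection with each; a rough sketch (not the gist's code), assuming your connection defaults live in .my.cnf:

# report which non-SSLv3 ciphers the server accepts
for c in $(openssl ciphers -v 'DEFAULT' | awk '$2 != "SSLv3" { print $1 }'); do
    mysql --ssl-cipher="$c" -e 'status' >/dev/null 2>&1 && echo "supported: $c"
done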

 

UPDATE 2014-10-16: openssl updates are now becoming available with patches against this issue

AMI Linux: openssl-1.0.1j-1.80.amzn1 “add patch for CVE-2014-3566 (Padding Oracle On Downgraded Legacy Encryption attack)”

RedHat: no update is yet available

 

The post How to close POODLE SSLv3 security flaw (CVE-2014-3566) appeared first on MySQL Performance Blog.

Link
Peter ZaitsevRackspace doubling-down on open-source databases, Percona Server (15.10.2014, 07:00 UTC)

Founded in 1998, Rackspace has evolved over the years to address the way customers are using data – and more specifically, databases. The San Antonio-based company is fueling the adoption of cloud computing among organizations large and small.

Today Rackspace is doubling down on open source database technologies. Why? Because that’s where the industry is heading, according to Sean Anderson, Manager of Data Services at Rackspace. The company, he said, created a separate business unit of 100+ employees focused solely on database workloads.

The key technologies under the hood include both relational databases (e.g., MySQL, Percona Server, and MariaDB) and NoSQL databases (e.g., MongoDB, Redis, and Apache Hadoop).

Last July Rackspace added support for Percona Server and MariaDB to their Cloud Databases DBaaS (Database-as-a-Service) product, primarily at the request of application developers who wanted more open source database support options.

Matt Griffin, Percona director of product management, and I recently sat down with Sean and his colleague Neha Verma, product manager of Cloud Databases. Our discussion focused on the shift to DBaaS as well as what to expect in the future from Rackspace in terms of Cloud Databases, OpenStack Trove and more.

* * *

Matt: Why did you expand the Cloud Databases product this past summer?
Sean: We launched Cloud Databases about a year and a half ago. Since then we've rolled out feature after feature (backups, monitoring, configuration management, etc.) focused on simplifying our customers' lives; this, backed by Fanatical Support, has made the product easier to use and more production-ready than ever. We understand that features aren't enough, so in addition to all the features we have also made significant improvements to the hardware and network infrastructure. All this means that we've been very busy not just expanding the offering but also making it simpler to use, more complete and more scalable.

Our vision is to offer a robust platform with the most popular Big Data, SQL, and NoSQL databases on dedicated, bare metal, and public cloud infrastructure.

Matt: What type of customer is your Cloud Databases offering aimed at?
Sean: Currently we have a variety of customers running multiple Cloud Database instances, ranging from customers running a two-month marketing campaign to customers running web applications and ecommerce applications with highly transactional database workloads. Our customers prefer the simplicity and reliability of the service, which allows them to focus on their business and not worry about the heavy lifting associated with scaling and managing databases.

Matt: How is your Cloud Databases offering backed-up?
Neha: We use Percona XtraBackup to perform a hot copy of all databases on an instance and then stream the backups to Cloud Files for storage. A customer can restore a backup to a new instance at any time. Percona XtraBackup is the only option we offer customers right now.

Tom: In terms of security, how do you address customer concerns? Are cloud-based open source databases more secure?
Sean: Data security concerns are at an all-time high and we have a number of up-and-coming features that continue to address those concerns. Today we offer a number of unique features: specifically, Cloud Databases can only be accessed on the private network, so the database is only reachable by systems on your private network. Additionally, we support SSL for communication between the user application and the database instance, so that any data transfer is encrypted in transit. These features, along with the built-in user controls and authentication mechanisms, help significantly address customers' security concerns. Ultimately, cloud-based open source databases are no more or less secure than any other database; security is about more than features, it is about the process and people that build and manage your database, and we have those more than covered.

Matt: Is this for production applications or pre-production?

Truncated by Planet PHP, read more at the original (another 3744 bytes)

Link