ProxySQL and Mirroring, what about it?
Written by Marco Tusa   
Saturday, 13 May 2017 00:11

 

Overview

I love ProxySQL; I think it is a great component for expanding architecture flexibility and HA. But not all that shines is gold.

Let me make clear that I only want to set the expectations right, and avoid selling carbon as gold. Carbon has its own use; gold has another.

 

First of all, let me clarify what mirroring means to me (and, I hope, to most of you).

Then we need to cover the basics of how ProxySQL manages the traffic dispatch (I cannot really say mirroring).

ProxySQL receives a connection from the application, and through it we can have a simple SELECT or a more complex transaction. ProxySQL gets each query, passes it to the Query Processor, processes it, and identifies whether the query is to be mirrored. If it is, ProxySQL duplicates the whole internal MySQL session object and associates it with a mirror queue, which refers to a mirror thread pool.


If the pool is free (has an available slot in the set of concurrently active threads), the query is processed right away; if not, it stays in the queue. If the queue is full, the query is lost.

Whatever the query returns goes to /dev/null; no result set is passed back to the client.

The whole process is not free for the server. If you check the CPU utilization, you will see that "mirroring" in ProxySQL actually doubles it, meaning that the traffic on server A is also impacted because of resource contention.

Summarizing, ProxySQL will:

  • Send the queries for execution in a different order
  • Completely ignore any transaction isolation
  • Execute a different number of queries on B with respect to A
  • Add significant load to the server resources

Comparing this with the points and expectations I mention in the reasoning at the end of this article, it is quite clear to me that at the moment we cannot consider ProxySQL a valid mechanism to duplicate a consistent load from server A to server B.

 

Personally, I don't think the ProxySQL development team (Rene' :D) should waste time fixing this part; there are so many other things to cover and improve in ProxySQL.

After working extensively with ProxySQL and doing deep QA on mirroring, I think that either we keep it as a basic blind traffic dispatcher, or a full re-conceptualization is required.

Setup

But once we have clarified that, we can still see ProxySQL "traffic dispatch" (I really cannot call it mirroring) as a very interesting feature that may be useful in many ways, especially because it is so easy to set up.

The following are the results of tests I performed, which should help in setting correct expectations.

The tests were simple: load data into a PXC cluster, and use ProxySQL to replicate the load onto a MySQL master-slave environment.

The machines for MySQL/PXC were VMs with CentOS 7, 4 CPUs, 3 GB RAM, and attached storage.

The machine for ProxySQL was a VM with CentOS 7, 8 CPUs, and 8 GB RAM.

Why did I choose to give ProxySQL such a large volume of resources?

I knew in advance I might need to play a bit with a couple of settings requiring more memory and CPU cycles, and I wanted to be sure I would not get any problem from ProxySQL in relation to CPU and memory.

The application I used to add load is a Java application I developed to perform my tests. The app is https://github.com/Tusamarco/blogs/blob/master/stresstool_base_app.tar.gz, and the whole set I used to do the tests is here.

I used 4 different tables:

+------------------+
| Tables_in_mirror |
+------------------+
| mirtabAUTOINC    |
| mirtabMID        |
| mirtabMIDPart    |
| mirtabMIDUUID    |
+------------------+

 

For the full table definition, see here.

 

ProxySQL setup

OK, so let's start.

First, set up ProxySQL:

DELETE FROM mysql_servers WHERE hostgroup_id IN (500,501,700,701);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',500,3306,60000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',501,3306,100,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.21',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.231',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',700,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.25',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.43',701,3306,1,400);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
 
DELETE FROM mysql_users WHERE username='load_RW';
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;
 
DELETE FROM mysql_query_rules WHERE rule_id=202;
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,mirror_hostgroup,active,retries,apply) VALUES(202,'load_RW',500,700,1,3,1);
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
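
To verify that the mirror rule actually took effect at runtime, you can run a quick sanity check from the admin interface. A minimal sketch; runtime_mysql_query_rules is the runtime copy of the rules loaded above:

SELECT rule_id, username, destination_hostgroup, mirror_hostgroup, active
FROM runtime_mysql_query_rules
WHERE rule_id=202;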
 

 

Test results

Now the action. 

Test 1

The first test is mainly a simple functional test, during which I insert records using a single thread in PXC and in MySQL.

No surprise here: I have 3000 loops, and at the end I have 3000 records on both platforms.

To have a baseline, we can see that the ProxySQL CPU utilization is quite low:

cpu_proxy

 

At the same time, the number of "questions" against PXC and MySQL is very similar:
PXC

question_galera

MySQL

question_mysql

 

The other two metrics we want to keep an eye on are Mirror_concurrency and Mirror_queue_length; these refer respectively to mysql-mirror_max_concurrency and mysql-mirror_max_queue_length.

The two new variables and metrics were introduced in ProxySQL 1.4.0 with the intent of controlling and managing the load ProxySQL generates internally for the mirroring feature.
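Both can be inspected from the admin interface. A minimal sketch, assuming the metrics are exposed in stats_mysql_global as in the 1.4.0 build used here:

-- current settings
SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-mirror%';
-- live metrics
SELECT * FROM stats_mysql_global WHERE variable_name LIKE 'Mirror%';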

 

queue_proxy

In this case, as you can see, we have a max of 3 concurrent connections and 0 queue entries: all good.

Now that we have a baseline, and we know that at the functional level "it works", let's see what happens when increasing the load.

 

Test 2

The scope of this test was to identify how ProxySQL behaves with the standard configuration and an increasing load.

It turns out that as soon as ProxySQL has a little bit more load, it starts to lose some queries along the way.

Executing 3000 loops with 40 threads, insert only, results in 120,000 rows inserted in each of the 4 tables in PXC, but the tables on the Secondary (mirrored) platform only contain a variable number between 101,359 and 104,072, showing consistent data loss.
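A quick way to quantify the divergence is to compare the row counts on both platforms; for example, for one of the tables in the mirror schema shown earlier:

SELECT COUNT(*) FROM mirror.mirtabAUTOINC; -- run on PXC and on the Secondary, then compare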

Reviewing the insights and comparing the connections running in PXC and in the Secondary, we can see that, as expected, the number of PXC connections scales to serve the number of incoming requests, while the connections on the Secondary are limited by the default value of mysql-mirror_max_concurrency=16.

test2

 

It is also interesting to notice that, in order to process the queue of transactions existing in ProxySQL, the connections on the Secondary persist longer than the connections in PXC.

queue_proxy

 

As we can see above, the queue has an evident bell curve reaching 6K entries, which is well below the mysql-mirror_max_queue_length limit (32K). Yet queries were dropped by ProxySQL, which indicates that the queue alone is not really enough to accommodate the pending work.

 

cpu_proxy

CPU-wise, ProxySQL, as expected, takes a few more cycles, but nothing crazy; the overhead for the simple mirroring queue processing can be seen when the main load stops, around 12:47.

 

Another interesting graph to keep an eye on is the one describing the commands executed inside PXC and the Secondary:

PXC

commands_galera

Secondary

commands_mysql

 

As you can see, the traffic on the Secondary was significantly less (669 average) than on PXC (1.17K). It then spikes when the main load on the PXC node terminates.

In short, it is quite clear that ProxySQL is not able here to scale to follow the traffic existing in PXC, and it actually loses a significant amount of data on the Secondary.

Doubling the load in Test 3 shows the same behavior, with ProxySQL reaching its limit for traffic duplication.

 

But can this be optimized?

Of course yes; this is what mysql-mirror_max_concurrency is for. Let's see what happens if I increase the value from 16 to 100, just to make it crazy high.
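For reference, the change itself is a standard ProxySQL variable update from the admin interface; a minimal sketch:

UPDATE global_variables SET variable_value=100 WHERE variable_name='mysql-mirror_max_concurrency';
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;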

 

Test 4

2 app nodes writing.

I am skipping the description of Test 3 because it is mainly the same as Test 2 with more load.

 

The first thing that comes to our attention is that both PXC and the Secondary report the same number of rows in the tables (240,000). That is a first good win.

 

Second, the number of running connections:

test4

The lines are now much closer, and the queue drops to just a few entries.

 

Commands executed in PXC:

command_galera

And commands executed in the Secondary:

command_mysql

The average execution reports the same value, with a very similar trend.

 

Finally, what was the CPU cost and effect?

PXC:

cpu_galera

 

Secondary:

cpu_mysql

As expected some difference in the CPU usage distribution exists, but the trend is consistent between the two nodes and with the operations.

 

The ProxySQL CPU utilization is definitely higher than before:

cpu_proxy

But it is absolutely manageable, and still reflects the initial distribution.

 

Finally, what about CRUD?
So far I had only tested insert operations, but what happens if we run a full CRUD set of tests?

 

Test 7: CRUD

First of all, let us review the executed commands, in PXC

commands_galera

And Secondary

commands_mysql

While in appearance we have a very similar workload, selects aside, the behavior significantly diverges.
This is because on the Secondary the different operations are not encapsulated in a transaction, and are executed as they are received. We can see a significant difference in update and delete operations between the two.

 

Also, the threads in execution show a different picture between the two platforms.

PXC

threads_galera

Secondary

threads_mysql

It appears quite clear that PXC constantly has more running threads and more connections.

Nevertheless, both platforms process a similar total number of questions.

PXC

questions_galera

Secondary

questions_mysql

Both have an average of around 1.17K questions per second.

This is also another indication of how much the behavior is impacted when we have concurrent operations but no respect for isolation or execution order.

The different behavior is also clear when reviewing the CPU utilization.

PXC

cpu_galera

Secondary

cpu_mysql

 

Conclusions

To close this long article, I want to go back to the start.

We cannot consider the mirror function in ProxySQL real mirroring; it is rather a traffic redirection.

 

Using ProxySQL with this approach is still partially effective in testing the load and the effect it may have on a secondary platform.
As we know, data consistency is not guaranteed in this scenario, and selects, updates, and deletes are affected, given the different data sets and result sets they will manage.

Given that, the server behavior will differ between the original and the mirror, if not in quantity then in quality.

I am convinced that when we need a tool able to test our production load on a different or new platform, we will do better to look at something else, like Query Playback, recently reviewed and significantly patched by DropBox (https://github.com/Percona-Lab/query-playback).

In the end, ProxySQL is already a cool tool, and if it doesn't cover this, well, I can live with that; I am more interested in having it working as it should, and as it does, in its many other functionalities.

 

Acknowledgments

As usual, thanks to Rene', who worked on fixing and introducing new functionalities associated with the mirror, like the queue and concurrency control.

And to the Percona team who develop PMM: all the graphs here (except 3) come from PMM, and some of them are my customizations.

Last Updated on Friday, 26 May 2017 10:04
 
InnoDB Page Merging and Page Splitting
Written by Marco Tusa   
Monday, 10 April 2017 22:15

If you met one of the (few) MySQL consultants around the globe and asked him/her to review your queries and/or schemas, I am sure that he/she would tell you something regarding the importance of good primary key(s) design. Especially in the case of InnoDB, I’m sure they started to explain to you about index merges and page splits. These two notions are closely related to performance, and you should take this relationship into consideration when designing any index (not just PKs).

That may sound like mumbo jumbo to you, and you may be right. This is not easy stuff, especially when talking about internals. This is not something you deal with on a regular basis, and often you don’t want to deal with it at all.

But sometimes it’s a necessity. If so, this article is for you.

In this article, I want to shed some light on some of the most unclear, behind-the-scenes operations in InnoDB: page index creation, page merging, and page splitting.

In InnoDB, all data is an index. You've probably heard that as well, right? But what exactly does that mean?

 

File-Table Components

Let's say you have MySQL installed, the latest 5.7 version (Percona Server for MySQL, right?  ), and you have a table named wmills in the schema windmills. In the data directory (normally /var/lib/mysql/) you will see that it contains:

 

data/  
windmills/
     wmills.ibd
     wmills.frm

 

This is because the parameter innodb_file_per_table is set to 1 since MySQL 5.6. With that setting, each table in your schema is represented by one file (or many files if the table is partitioned).

What is important here is that the physical container is a file named wmills.ibd. This file is broken up into N segments, and each segment is associated with an index.

While a file’s dimensions do not shrink with row-deletions, a segment itself can grow or shrink in relation to a sub-element named extent. An extent can only exist inside a segment and has a fixed dimension of 1MB (in the case of default page size). A page is a sub-element of an extent and has a default size of 16KB.

Given that, an extent can contain a maximum of 64 pages. A page can contain two to N number of rows. The number of rows a page can contain is related to the size of the row, as defined by your table schema. There is a rule within InnoDB that says, at minimum, two rows must fit into a page. Therefore, we have a row-size limit of 8000 bytes.
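If you want to double-check these numbers on your own installation, here is a quick sketch; the arithmetic matches the defaults described above:

SELECT @@innodb_file_per_table, @@innodb_page_size;
-- pages per extent = extent size / page size = 1048576 / 16384 = 64 (with defaults)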

If you think this sounds like Matryoshka dolls, you are right! An image might help:

segment_extent

InnoDB uses B-trees to organize your data inside pages across extents, within segments.

Roots, Branches, and Leaves

Each page (leaf) contains 2 to N rows organized by the primary key. The tree has special pages to manage the different branch(es). These are known as internal nodes (INodes).

Bplustree

This image is just an example, and is not indicative of the real-world output below.

Let’s see the details:

 

ROOT NODE #3: 4 records, 68 bytes
 NODE POINTER RECORD ≥ (id=2)#197
 INTERNAL NODE #197: 464 records, 7888 bytes
 NODE POINTER RECORD ≥ (id=2)#5
 LEAF NODE #5: 57 records, 7524 bytes
 RECORD: (id=2)(uuid="884e471c-0e82-11e7-8bf6-08002734ed50", millid=139, kwatts_s=1956, date="2017-05-01", lo

 

 

Below is the table structure:

 

CREATE TABLE `wmills` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `uuid` char(36) COLLATE utf8_bin NOT NULL,
  `millid` smallint(6) NOT NULL,
  `kwatts_s` int(11) NOT NULL,
  `date` date NOT NULL,
  `location` varchar(50) COLLATE utf8_bin DEFAULT NULL,
  `active` tinyint(2) NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `strrecordtype` char(3) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_millid` (`millid`)
) ENGINE=InnoDB;

 

All styles of B-trees have a point of entry known as the root node. We’ve identified that here as page #3. The root page contains information such as index ID, number of INodes, etc. INode pages contain information about the pages themselves, their value ranges, etc. Finally, we have the leaf nodes, which is where we can find our data. In this example, we can see that leaf node #5 has 57 records for a total of 7524 bytes. Below that line is a record, and you can see the row data.

The concept here is that while you organize your data in tables and rows, InnoDB organizes it in branches, pages, and records. It is very important to keep in mind that InnoDB does not work on a single row basis. InnoDB always operates on pages. Once a page is loaded, it will then scan the page for the requested row/record.

Is that clear up to now? Good. Let’s continue.

Page Internals

A page can be empty or fully filled (100%). The row-records will be organized by PK. For example, if your table is using an AUTO_INCREMENT, you will have the sequence ID = 1, 2, 3, 4, etc.

Locality_1

A page also has another important attribute: MERGE_THRESHOLD. The default value of this parameter is 50% of the page, and it plays a very important role in InnoDB merge activity:

Locality_2

While you insert data, the page is filled up sequentially if the incoming record can be accommodated inside the page.

When a page is full, the next record will be inserted into the NEXT page:

Locality_4

Given the nature of B-trees, the structure is browsable not only top-down following the branches, but also horizontally across the leaf nodes. This is because each leaf node page has a pointer to the page that contains the NEXT record value in the sequence.

For example, Page #5 has a reference to the next page, Page #6. Page #6 has references backward to the previous page (Page #5) and a forward to the next page (Page #7).

This mechanism of a linked list allows for fast, in-order scans (i.e., Range Scans). As mentioned before, this is what happens when you are inserting and have a PK based on AUTO_INCREMENT. But what happens if I start to delete values?

 

Page Merging

When you delete a record, the record is not physically deleted. Instead, InnoDB flags the record as deleted, and the space it used becomes reclaimable.

Locality_3

When a page has received enough deletes to match the MERGE_THRESHOLD (50% of the page size by default), InnoDB starts to look to the closest pages (NEXT and PREVIOUS) to see if there is any chance to optimize the space utilization by merging the two pages.

Locality_4

In this example, Page #6 is utilizing less than half of its space. Page #5 received many deletes and is also now less than 50% used. From InnoDB’s perspective, they are mergeable:

Locality_5

The merge operation results in Page #5 containing its previous data plus the data from Page #6. Page #6 becomes an empty page, usable for new data.

Locality_6

The same process also happens when we update a record and the size of the new record brings the page below the threshold.

The rule is: Merges happen on delete and update operations involving close linked pages.

If a merge operation is successful, the index_page_merge_successful metric in INFORMATION_SCHEMA.INNODB_METRICS is incremented.
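A minimal sketch to keep an eye on those counters; they belong to the index module of INNODB_METRICS, and if STATUS reports them disabled they can be switched on with innodb_monitor_enable:

SELECT NAME, COUNT, STATUS
FROM INFORMATION_SCHEMA.INNODB_METRICS
WHERE NAME LIKE 'index_page_merge%';
-- if disabled: SET GLOBAL innodb_monitor_enable='index_page_merge%';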

 

Page Splits

As mentioned above, a page can be filled up to 100%. When this happens, the next page takes new records.

But what if we have the following situation?

Locality_7

Page #10 doesn’t have enough space to accommodate the new (or updated) record. Following the next page logic, the record should go on Page #11. However:

Locality_9

Page #11 is also full, and data cannot be inserted out of order. So what can be done?

Remember the linked list we spoke about? At this moment Page #10 has Prev=9 and Next=11.

What InnoDB will do is (simplifying):

  1. Create a new page
  2. Identify where the original page (Page #10) can be split (at the record level)
  3. Move records
  4. Redefine the page relationships

Locality_8

A new Page #12 is created:

Locality_10

Page #11 stays as it is. The thing that changes is the relationship between the pages:

  • Page #10 will have Prev=9 and Next=12
  • Page #12 Prev=10 and Next=11
  • Page #11 Prev=12 and Next=13

The path of the B-tree still sees consistency since it is following a logical organization. However, physically the page is located out of order, and in most cases in a different extent.

As a rule we can say: page splits happen on inserts or updates, and cause page dislocation (in many cases to different extents).

InnoDB tracks the number of page splits in INFORMATION_SCHEMA.INNODB_METRICS. Look for index_page_splits and index_page_reorg_attempts/successful metrics.

Once the split page is created, the only way to move back is to have the created page drop below the merge threshold. When that happens, InnoDB moves the data from the split page with a merge operation.

The other way is to reorganize the data with OPTIMIZE TABLE. This can be a very heavy and long process, but often it is the only way to recover from a situation where too many pages are located in sparse extents.
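For InnoDB, OPTIMIZE TABLE maps to a full table rebuild. A sketch, using the example table from above:

OPTIMIZE TABLE windmills.wmills;
-- equivalent rebuild, often preferred for being explicit:
ALTER TABLE windmills.wmills ENGINE=InnoDB;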

Another aspect to keep in mind is that during merge and split operations, InnoDB acquires an x-latch on the index tree. On a busy system, this can easily become a source of concern, causing index latch contention. Writes that touch only a single page (no merges or splits) are called "optimistic" updates in InnoDB, and take the latch only in S mode. Merges and splits are called "pessimistic" updates, and take the latch in X mode.

 

My Primary Key

A good Primary Key (PK) is not only important for retrieving data, but also for correctly distributing the data inside the extents while writing (which is also relevant for split and merge operations).

In the first case, I have a simple auto-increment. In the second, my PK is based on an ID (1-200 range) plus an auto-increment value. In the third, I have the same ID (1-200 range) but associated with a UUID.

When inserting, InnoDB must add pages. This is read as a SPLIT operation:

Page Merging and Page Splitting

The behavior is quite different depending on the kind of Primary Key I use.

The first two cases will have more “compact” data distribution. This means they will also have better space utilization, while the semi-random nature of the UUID will cause a significant “sparse” page distribution (causing a higher number of pages and related split operations).

In the case of merges, the number of merge attempts differs even more by PK type.

merges_1

On insert-update-delete operations, auto-increment has fewer page merge attempts and a success ratio 9.45% lower than the other two types. The PK with UUID (on the other side of the spectrum) has a higher number of merge attempts, but at the same time also a significantly higher success ratio, at 22.34%, given that the "sparse" distribution left many pages partially empty.

Similar numbers also come from the secondary index.

Conclusion

MySQL/InnoDB constantly performs these operations, and you have very limited visibility of them. But they can bite you, and bite hard, especially if you use spindle storage vs. SSD (which has different issues, by the way).

The sad story is there is also very little we can do to optimize this on the server side using parameters or some other magic. But the good news is there is A LOT that can be done at design time.

Use a proper Primary Key, and design secondary indexes keeping in mind that you shouldn't abuse them. Plan proper maintenance windows on the tables that you know will have very high levels of inserts/deletes/updates.

This is an important point to keep in mind. In InnoDB you cannot have fragmented records, but you can have a nightmare at the page-extent level. Ignoring table maintenance will cause more work at the IO level, in memory, and in the InnoDB buffer pool.

You must rebuild some tables at regular intervals. Use whatever tricks it requires, including partitioning and external tools (pt-osc). Do not let a table become gigantic and fully fragmented.

Wasting disk space? Needing to load three pages instead of one to retrieve the record set you need? Each search causing significantly more reads?
That's your fault; there is no excuse for being sloppy!

Happy MySQL to everyone!

 

Acknowledgments

Laurynas Biveinis: who had the time and patience to explain some internals to me.

Jeremy Cole: for his project innodb_ruby (which I use constantly).

Last Updated on Monday, 10 April 2017 22:57
 
Setup ProxySQL as Highly Available (and not a SPOF)
Written by Marco Tusa   
Sunday, 15 January 2017 00:00

During the last few months we have had a lot of opportunities to present and discuss a very powerful tool that will become more and more used in architectures supporting MySQL: ProxySQL.

ProxySQL is becoming more flexible, solid, performant, and widely used every day (http://www.proxysql.com/ and the recent http://www.proxysql.com/compare).

 

That is it: the tool is a winner when compared with similar ones, and we all need to have a clear(er) idea of how to integrate it in our architectures in order to achieve the best results.

 

The first thing to keep in mind is that ProxySQL does not natively support any high availability solution. In short, we can set up a cluster of MySQL(s) and achieve 4 or even 5 nines of HA, but if we include ProxySQL as it is, as a single block, our HA will include a single point of failure (SPOF) that will drag us down in case of a crash.

 

To solve this, the most common solution so far has been to set up ProxySQL as part of a tile architecture, where application and ProxySQL are deployed together.

tileProxy

 

This is a good solution for some cases, and it surely reduces network hops, but it may be less than practical when our architecture has a very high number of tiles.
Say 100 or 400 application servers, not so unusual nowadays.
In that case, managing the ProxySQL instances will be challenging; but the most problematic aspect is that ProxySQL must perform several checks on the destination servers (MySQL), and if we have 400 instances of ProxySQL we will end up keeping our databases busy just because of the checks.

In short... not a smart move.

 

Another possible approach used so far was to have two layers of ProxySQL, one close to the application, another in the middle to finally connect to the database.

I personally don't like this approach for many reasons, but the most relevant are that it creates additional complexity in the management of the platform, and it adds network hops.

ProxyCascade

 

So what can be done?

I like to have things simple. I love the KISS principle, and because I am lazy I love to reuse the wheel instead of re-inventing things that someone else has already invented.

Last, I like my customers not to depend on me or any other colleague; once I am done, and gone, they must be able to manage, understand, and fix their things by themselves.

 

Anyhow, as said, I like simple things. So my point here is the following:

  • excluding the cases where a tile (application/ProxySQL) makes sense;
  • or when in the cloud, where tools like ELB (Elastic Load Balancer) exist;
  • or on architectures already including a balancer;

what can I use for the remaining cases?

The answer comes from combining existing blocks: Keepalived + ProxySQL + MySQL.

keepalived_logo

For an explanation of Keepalived, visit http://www.keepalived.org/.

Short description
"Keepalived is a routing software written in C. The main goal of this project is to provide simple and robust facilities for loadbalancing and high-availability to Linux system and Linux based infrastructures. Loadbalancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 loadbalancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage loadbalanced server pool according their health. On the other hand high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. In addition, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. Keepalived frameworks can be used independently or all together to provide resilient infrastructures."

Bingo! This is exactly what we need for our ProxySQL setup.

Below I will show how to set up:

  • A simple solution based on a single VIP
  • A more complex solution using multiple VIPs
  • An even more complex solution using virtual VIPs and virtual servers

Just remember that what we want to achieve is to prevent ProxySQL from becoming a SPOF, that's it.

While achieving that, we need to reduce network hops as much as possible and keep the solution SIMPLE.

 

Another important concept to keep in mind is that a ProxySQL (re)start takes place in less than a second.

This means that if it crashes, and the angel process can restart it, having it do so and recover the service is much more efficient than having any kind of failover mechanism take place.

As such, whenever you plan your solution, keep in mind the ~1 second ProxySQL restart time as a baseline.

 

Ready?

Let's go.

Setup

Choose 3 machines that will host the combination of Keepalived and ProxySQL.

In the following example I will use 3 machines for ProxySQL and Keepalived, and 3 hosting PXC, but you can have Keepalived+ProxySQL wherever you like, even on the same PXC boxes.

For the following examples we will have:

PXC
node1 192.168.0.5 galera1h1n5
node2 192.168.0.21 galera2h2n21
node3 192.168.0.231 galera1h3n31
 
ProxySQL-Keepalived
test1 192.168.0.11
test2 192.168.0.12
test3 192.168.0.235
 
VIP 192.168.0.88 /89/90
 

 

 

To check, I will use this table; please create it in your MySQL server:

DROP TABLE IF EXISTS test.`testtable2`;
 CREATE TABLE test.`testtable2` (
  `autoInc` bigint(11) NOT NULL AUTO_INCREMENT,
  `a` varchar(100) COLLATE utf8_bin NOT NULL,
  `b` varchar(100) COLLATE utf8_bin NOT NULL,
  `host` varchar(100) COLLATE utf8_bin NOT NULL,
  `userhost` varchar(100) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`autoInc`)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

 

 

And this bash TEST command to use later:

while [ 1 ]; do export mydate=$(date +'%Y-%m-%d %H:%M:%S.%6N');
mysql --defaults-extra-file=./my.cnf -h 192.168.0.88 -P 3311 --skip-column-names \
-b -e "BEGIN;set @userHost='a';select concat(user,'_', host) into @userHost from information_schema.processlist where user = 'load_RW' limit 1;insert into test.testtable2 values(NULL,'$mydate',SYSDATE(6),@@hostname,@userHost);commit;select * from test.testtable2 order by 1 DESC limit 1" ;
sleep 1; done

 

  1. Install ProxySQL (refer to https://github.com/sysown/proxysql/wiki#installation)
  2. Install Keepalived (yum install keepalived; apt-get install keepalived)
  3. Setup ProxySQL users and servers

Once you have ProxySQL up (run the same on all ProxySQL nodes, it is much simpler), connect to the admin interface and run:

 

DELETE FROM mysql_replication_hostgroups WHERE writer_hostgroup=500 ;
DELETE FROM mysql_servers WHERE hostgroup_id IN (500,501);
 
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.5',500,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.5',501,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.21',500,3306,1000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.21',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.231',500,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.231',501,3306,1000000000);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
 
DELETE FROM mysql_users WHERE username='load_RW';
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;
 
DELETE FROM mysql_query_rules WHERE rule_id IN (200,201);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(200,'load_RW',501,1,3,'^SELECT.*FOR UPDATE',1);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(201,'load_RW',501,1,3,'^SELECT ',1); 
 
LOAD MYSQL QUERY RULES TO RUNTIME;SAVE MYSQL QUERY RULES TO DISK;
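
Once loaded, it is worth confirming what ProxySQL is actually running with; a quick sketch against the admin interface:

SELECT hostgroup_id, hostname, port, status, weight
FROM runtime_mysql_servers
ORDER BY hostgroup_id, hostname;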

 

 

Create a my.cnf file in your default dir with:

[mysql]
user=load_RW
password=test

 

Simple setup using a single VIP, 3 ProxySQL and 3 Galera nodes

proxy_keep_single

 

First, set up the Keepalived configuration file (/etc/keepalived/keepalived.conf):

global_defs {
  # Keepalived process identifier
  lvs_id proxy_HA
}
# Script used to check if Proxy is running
vrrp_script check_proxy {
  script "killall -0 proxysql"
  interval 2
  weight 2
}
# Virtual interface
  # The priority specifies the order in which the assigned interface takes over in a failover
vrrp_instance VI_01 {
  state MASTER 
  interface em1
  virtual_router_id 51
  priority <calculate on the WEIGHT for each node>
 
  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
    192.168.0.88 dev em1
  }
  track_script {
    check_proxy
  }
}
 

 

 

Given the above, and given that I want test1 to be the main node, the priorities will be set as:

 

test1 = 101
test2 = 100
test3 = 99 

 

 

Modify the config on each node following the above values and (re)start Keepalived.

If all is set correctly, you will see the following in the system log of the TEST1 machine:

 

Jan 10 17:56:56 mysqlt1 systemd: Started LVS and VRRP High Availability Monitor.
Jan 10 17:56:56 mysqlt1 Keepalived_healthcheckers[6183]: Configuration is using : 6436 Bytes
Jan 10 17:56:56 mysqlt1 Keepalived_healthcheckers[6183]: Using LinkWatch kernel netlink reflector...
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: Configuration is using : 63090 Bytes
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: Using LinkWatch kernel netlink reflector...
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: VRRP_Script(check_proxy) succeeded
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Transition to MASTER STATE
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Received lower prio advert, forcing new election
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Received higher prio advert
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Entering BACKUP STATE
Jan 10 17:56:58 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) forcing a new MASTER election
...
Jan 10 17:57:00 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Transition to MASTER STATE
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Entering MASTER STATE <-- MASTER
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) setting protocol VIPs.
Jan 10 17:57:01 mysqlt1 Keepalived_healthcheckers[6183]: Netlink reflector reports IP 192.168.0.88 added
Jan 10 17:57:01 mysqlt1 avahi-daemon[937]: Registering new address record for 192.168.0.88 on em1.IPv4.
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Sending gratuitous ARPs on em1 for 192.168.0.88
 

 

 

While in the other two:

 

Jan 10 17:56:59 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) Entering BACKUP STATE <--- 

 

Which means the node is there as... :D a backup.

 

Now it is time to test our connection to our ProxySQL pool,

from an application node or just from your laptop.

Open 3 terminals and in each one run:

 

 watch -n 1 'mysql -h <IP OF THE REAL PROXY (test1|test2|test3)> -P 3310 -uadmin -padmin -t -e "select * from stats_mysql_connection_pool where hostgroup in (500,501,9500,9501) order by hostgroup,srv_host ;" -e " select srv_host,command,avg(time_ms), count(ThreadID) from stats_mysql_processlist group by srv_host,command;" -e "select * from stats_mysql_commands_counters where  Total_Time_us > 0;"'

 

 

You will see that, unless you are already sending queries to the proxies, the proxies are just doing nothing.
Time to start the test bash command as indicated above.
If everything is working correctly, you will see the bash command reporting this:

 

+----+----------------------------+----------------------------+-------------+----------------------------+
| 49 | 2017-01-10 18:12:07.739152 | 2017-01-10 18:12:07.733282 | galera1h1n5 | load_RW_192.168.0.11:33273 |
+----+----------------------------+----------------------------+-------------+----------------------------+
  ID    execution time in the bash   exec time inside mysql     node hostname   user and where the connection is coming from

 

 

The 3 running watch commands will show that ONLY the ProxySQL on TEST1 is currently getting/serving requests, because it is the one with the VIP.

Like:

 

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.0.21  | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 629        |
| 500       | 192.168.0.231 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 510        |
| 500       | 192.168.0.5   | 3306     | ONLINE | 0        | 0        | 3      | 0       | 18      | 882             | 303             | 502        |
| 501       | 192.168.0.21  | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 629        |
| 501       | 192.168.0.231 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 510        |
| 501       | 192.168.0.5   | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 502        |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
+---------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+
| Command | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |
+---------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+
| BEGIN   | 9051          | 3         | 0         | 0         | 0       | 3       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| COMMIT  | 47853         | 3         | 0         | 0         | 0       | 0       | 0        | 3        | 0         | 0         | 0      | 0      | 0       | 0        |
| INSERT  | 3032          | 3         | 0         | 0         | 1       | 2       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| SELECT  | 8216          | 9         | 3         | 0         | 3       | 3       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| SET     | 2154          | 3         | 0         | 0         | 3       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
+---------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+

 

 

So, nothing special, right? All as expected.

Time to see if the failover/failback works along the chain.

Let us kill ProxySQL on TEST1 while the test bash command is running.

 

killall -9 proxysql 

 

 

Here is what you will get:

 

+----+----------------------------+----------------------------+-------------+----------------------------+
| 91 | 2017-01-10 18:19:06.188233 | 2017-01-10 18:19:06.183327 | galera1h1n5 | load_RW_192.168.0.11:33964 |
+----+----------------------------+----------------------------+-------------+----------------------------+
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.88' (111)
+----+----------------------------+----------------------------+-------------+----------------------------+
| 94 | 2017-01-10 18:19:08.250093 | 2017-01-10 18:19:11.250927 | galera1h1n5 | load_RW_192.168.0.12:39635 | <-- note 
+----+----------------------------+----------------------------+-------------+----------------------------+
 

 

 

The source has changed, but not the PXC node.

If you check the system log for TEST1:

 

Jan 10 18:19:06 mysqlt1 Keepalived_vrrp[6184]: VRRP_Script(check_proxy) failed
Jan 10 18:19:07 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Received higher prio advert
Jan 10 18:19:07 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Entering BACKUP STATE
Jan 10 18:19:07 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) removing protocol VIPs.
Jan 10 18:19:07 mysqlt1 Keepalived_healthcheckers[6183]: Netlink reflector reports IP 192.168.0.88 removed

 

 

 

While on TEST2

 

Jan 10 18:19:08 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) Transition to MASTER STATE
Jan 10 18:19:09 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) Entering MASTER STATE
Jan 10 18:19:09 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) setting protocol VIPs.
Jan 10 18:19:09 mysqlt2 Keepalived_healthcheckers[13106]: Netlink reflector reports IP 192.168.0.88 added
Jan 10 18:19:09 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) Sending gratuitous ARPs on em1 for 192.168.0.88

 

 

Simple... and elegant. No need to re-invent the wheel; it just works smoothly.


The total recovery time after the ProxySQL crash was 5.06 seconds,

considering the wider window (last application insert before the crash, first insert after recovery in PXC: 2017-01-10 18:19:06.188233 | 2017-01-10 18:19:11.250927).

That is the worst-case scenario, keeping in mind that we run the check for ProxySQL every 2 seconds (real recovery max window: 5-2=3 sec).

 

OK, what about fail-back?

Let us restart the proxysql service:

 

/etc/init.d/proxysql start (or systemctl)

 

 

Here is the output:

 

+-----+----------------------------+----------------------------+-------------+----------------------------+
| 403 | 2017-01-10 18:29:34.550304 | 2017-01-10 18:29:34.545970 | galera1h1n5 | load_RW_192.168.0.12:40330 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 406 | 2017-01-10 18:29:35.597984 | 2017-01-10 18:29:38.599496 | galera1h1n5 | load_RW_192.168.0.11:34640 |
+-----+----------------------------+----------------------------+-------------+----------------------------+

 

 

Worst recovery time = 4.04 seconds, of which 2 are delay due to the check interval.

 

Of course, the test runs every second and performs one single operation; as such the impact is minimal (no error in fail-back), and the measured recovery is longer.

But I think I have made the concept clear.

Let's see another thing... is the failover working as expected? Test1 -> 2 -> 3?

 

Let us kill 1 and then 2, and see:

 

Kill Test1 :
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 448 | 2017-01-10 18:35:43.092878 | 2017-01-10 18:35:43.086484 | galera1h1n5 | load_RW_192.168.0.11:35240 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 451 | 2017-01-10 18:35:47.188307 | 2017-01-10 18:35:50.191465 | galera1h1n5 | load_RW_192.168.0.12:40935 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
...
Kill Test2
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 463 | 2017-01-10 18:35:54.379280 | 2017-01-10 18:35:54.373331 | galera1h1n5 | load_RW_192.168.0.12:40948 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
+-----+----------------------------+----------------------------+-------------+-----------------------------+
| 466 | 2017-01-10 18:36:08.603754 | 2017-01-10 18:36:09.602075 | galera1h1n5 | load_RW_192.168.0.235:33268 |
+-----+----------------------------+----------------------------+-------------+-----------------------------+

 

This image is where you should be at the end:

proxy_keep_single_failover

 

In this case, given that I did one kill immediately after the other, Keepalived took a bit longer to fail over, but it still did it correctly, following the planned chain.

Fail-back is as smooth as usual:

+-----+----------------------------+----------------------------+-------------+-----------------------------+
| 502 | 2017-01-10 18:39:18.749687 | 2017-01-10 18:39:18.749688 | galera1h1n5 | load_RW_192.168.0.235:33738 |
+-----+----------------------------+----------------------------+-------------+-----------------------------+
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 505 | 2017-01-10 18:39:19.794888 | 2017-01-10 18:39:22.800800 | galera1h1n5 | load_RW_192.168.0.11:35476 |
+-----+----------------------------+----------------------------+-------------+----------------------------+

 

Let us now see another case.

The case above is nice and simple, but it has a caveat:

I can access only one ProxySQL at a time, which may or may not be good.

In any case, it may be nice to have the possibility to choose, and with Keepalived you can.

We can actually set up X number of VIPs and associate them with each test box.

The result will be that each server hosting ProxySQL will also host a VIP, and will eventually be able to fail over to any of the other two servers.

proxy_keep_multiple

 

Failing over/back will be fully managed by Keepalived, checking as we did before whether ProxySQL is running.
An example configuration for one node is below:

 

global_defs {
  # Keepalived process identifier
  lvs_id proxy_HA
}
# Script used to check if Proxy is running
vrrp_script check_proxy {
  script "killall -0 proxysql"
  interval 2
  weight 3
}
 
# Virtual interface 1
  # The priority specifies the order in which the assigned interface takes over in a failover
vrrp_instance VI_01 {
  state MASTER
  interface em1
  virtual_router_id 51
  priority 102
 
  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
    192.168.0.88 dev em1
  }
  track_script {
    check_proxy
  }
}
 
# Virtual interface 2
  # The priority specifies the order in which the assigned interface takes over in a failover
vrrp_instance VI_02 {
  state MASTER
  interface em1
  virtual_router_id 52
  priority 100
 
  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
    192.168.0.89 dev em1
  }
  track_script {
    check_proxy
  }
}
 
# Virtual interface 3
  # The priority specifies the order in which the assigned interface takes over in a failover
vrrp_instance VI_03 {
  state MASTER
  interface em1
  virtual_router_id 53
  priority 99
 
  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
    192.168.0.90 dev em1
  }
  track_script {
    check_proxy
  }
}

 

 

The tricky part in this case is to play with the PRIORITY of each VIP on each server, such that you do NOT assign the same IP twice.

The whole set of configs can be found here

 

Performing the check with the test bash command as above, we have:

 

Test 1 crash
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 422 | 2017-01-11 18:30:14.411668 | 2017-01-11 18:30:14.344009 | galera1h1n5 | load_RW_192.168.0.11:55962 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.88' (111)
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 426 | 2017-01-11 18:30:18.531279 | 2017-01-11 18:30:21.473536 | galera1h1n5 | load_RW_192.168.0.12:49728 | <-- new server
+-----+----------------------------+----------------------------+-------------+----------------------------+
....
Test 2 crash
+-----+----------------------------+----------------------------+-------------+----------------------------+
| 450 | 2017-01-11 18:30:27.885213 | 2017-01-11 18:30:27.819432 | galera1h1n5 | load_RW_192.168.0.12:49745 |
+-----+----------------------------+----------------------------+-------------+----------------------------+
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.88' (111)
+-----+----------------------------+----------------------------+-------------+-----------------------------+
| 454 | 2017-01-11 18:30:30.971708 | 2017-01-11 18:30:37.916263 | galera1h1n5 | load_RW_192.168.0.235:33336 | <-- new server
+-----+----------------------------+----------------------------+-------------+-----------------------------+
 

 

 

Final state of the IPs on Test3:

 

enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:c2:16:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.235/24 brd 192.168.0.255 scope global enp0s8   <-- Real IP
       valid_lft forever preferred_lft forever
    inet 192.168.0.90/32 scope global enp0s8    <--- VIP 3
       valid_lft forever preferred_lft forever
    inet 192.168.0.89/32 scope global enp0s8    <--- VIP 2
       valid_lft forever preferred_lft forever
    inet 192.168.0.88/32 scope global enp0s8    <--- VIP 1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fec2:163f/64 scope link 
       valid_lft forever preferred_lft forever
 

 

And this is the image:

proxy_keep_multiple_full_failover

 

 

Recovery times:

 

  test 1 crash = 7.06 sec (worst-case scenario)
  test 2 crash = 10.03 sec (worst-case scenario)

 

 

Conclusions

In this example I just used a test that checks the process, but the check can be anything returning 0|1; the limit is defined only by what you need.

The failover times can be significantly shorter if you reduce the check time and consider only the time taken to move the VIP. I preferred to show the worst-case scenario, considering an application with a one-second interval, but that is a pessimistic view of what normally happens with real traffic.

I was looking for a simple, simple, simple way to add HA to ProxySQL, something that can be easily integrated with automation and that is also well established and maintained.

In my opinion, using Keepalived is a good solution because it matches all the above expectations.

Implementing a set of ProxySQL nodes and having Keepalived manage the failover between them is pretty easy, but you can expand the usage (and the complexity) if you need, counting on tools that are already part of the Linux stack; no need to re-invent the wheel with crazy mechanisms.

If you want to have fun doing crazy things... at least start from something that helps you go beyond the basics.

For instance, I was also playing a bit with Keepalived and virtual servers, creating sets of redundant ProxySQL with load balancers and... but this is another story (blog).

 

Great MySQL & ProxySQL to all!

Last Updated on Saturday, 14 January 2017 21:47
 
ProxySQL – Percona Cluster (Galera) integration
Written by Marco Tusa   
Sunday, 11 September 2016 15:57

ProxySQL is designed not to perform any specialized operation in relation to the servers it communicates with.

Instead, it has scheduled events that can be used to extend its functionalities and cover any special need.

Given that, specialized products like PXC are not managed by ProxySQL, and require the design and implementation of good/efficient extensions.

In this article I will illustrate how PXC/Galera can be integrated with ProxySQL to get the best from both.

 

Brief digression

Before discussing the PXC integration, we need to review a couple of very important concepts in ProxySQL. ProxySQL has a very important logical component, the Hostgroup (HG).
A hostgroup has a relation of:

 

+-----------+       +------------------------+
|Host group +------>|Server (1:N)            |
+-----------+       +------------------------+

 

Not only that: in ProxySQL you can use Query Rules (QR) that can be directly mapped to an HG.

Such that you can define a specific user to go ONLY to that HG; for instance, you may want user app1_user to go only to servers A-B-C.

The only thing you need to do is set a QR that says this user (app1_user) has destination hostgroup 5,

where HG 5 has the servers A-B-C:

 

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',5,3306,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',5,3306,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',5,3306,10);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES('app1_user',5,1);

 

 

 

Easy, isn't it?

Another important concept in ProxySQL, also related to HGs, is the ReplicationHostgroup (RHG).

This is a special HG that ProxySQL uses to automatically manage nodes that are connected by replication and configured in write/read and read_only mode.

 

What does that mean?

Let us say you have 4 nodes A-B-C-D, connected by standard asynchronous replication,

where A is the master and B-C-D are the slaves.

What you want is to have your application pointing to server A for all writes, and to B-C for reads, keeping D off because it is a backup slave.

Also, you don't want any read to go to B-C if the replication delay is more than 2 seconds.

Using RHGs in conjunction with HGs, ProxySQL will manage all of this for you.

 

To achieve that, we only have to instruct the proxy to use an RHG and define the value of the maximum latency. Using the example above:

 

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.5',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.6',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.7',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.8',10,3306,10,2);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES('app1_user',5,1);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES('app1_user',6,1);
INSERT INTO mysql_replication_hostgroups VALUES (5,6);
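
With this in place, ProxySQL moves/copies nodes between the writer HG (5) and the reader HG (6) based on their read_only value, as seen by the Monitor. A quick way to observe it (a sketch):

SELECT hostgroup_id, hostname, status
FROM runtime_mysql_servers
WHERE hostgroup_id IN (5,6)
ORDER BY hostgroup_id;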

 

From now on, ProxySQL will split reads and writes using the RHG and the nodes defined in HG 5.

The flexibility introduced by the use of HGs is obviously not limited to what I have mentioned here, and it will play a good part in the integration between PXC and ProxySQL that I illustrate below.

 

PXC/Galera integration

In a PXC cluster, a node can have many different states and conditions that will affect whether, and how, your application should operate on the node.

The most common one is when a node becomes a DONOR.

Whoever has installed PXC or any Galera implementation has seen that when a node becomes a DONOR it changes its state to DESYNC, and if the node is heavily loaded, the DONOR process may affect the node itself.

But that is just one of the things.

A node can be in state 3 (JOINED) but not synced; it can have wsrep_rejectqueries or wsrep_donorrejectqueries set, or wsrep_ready off; it can be in a different segment; and, last but not least, the number of nodes per segment is relevant.

To show what can be done, and how, we will use the following setup:

5 nodes

2 segments

Applications requiring R/W split

And finally two options:

single writer node

multiple writer nodes

 

What we will analyze here is how ProxySQL behaves with the use of a script run by the ProxySQL scheduler.

The use of a script is necessary to have ProxySQL react correctly to PXC state changes.

ProxySQL comes with two scripts for Galera; both of them are too basic and do not consider many relevant conditions.

 

As an example, I have written a more complete script: https://github.com/Tusamarco/proxy_sql_tools (galera_check.pl).

 

The script is designed to manage an arbitrary number of nodes that belong to a given Hostgroup (HG).

The script works by HG, and as such it performs isolated actions/checks per HG.

It is not possible to have more than one check running on the same HG.

The check will create a lock file {proxysql_galera_check_${hg}.pid} that is used to prevent duplicate runs.

galera_check will connect to the ProxySQL node and retrieve all the information regarding the nodes and the ProxySQL configuration.

It will then check each node in parallel, retrieving its status and configuration. At the moment galera_check analyzes and manages the following:

Node states:

read_only

wsrep_status

wsrep_rejectqueries

wsrep_donorrejectqueries

wsrep_connected

wsrep_desinccount

wsrep_ready

wsrep_provider

wsrep_segment

Number of nodes per segment

Retry loop

 

As mentioned, the number of nodes inside a segment is relevant: if a node is the only one in a segment, the check will behave accordingly.

I.e. if a node is the only one in the MAIN segment, it will not be put in OFFLINE_SOFT when it becomes a donor, to prevent the cluster from becoming unavailable for the applications.

In the script it is possible to declare a segment as MAIN, quite useful when managing prod and DR site, because the script will manage the segment acting as main in a more conservative way. 

The check can be configured to perform retries after a given interval, where the interval is the time defined in the ProxySQL scheduler.

 

As such, if the check is set to have 2 retries for UP and 3 for DOWN, it will loop that many times before doing anything. PXC/Galera does some actions under the hood, some of them not totally correct.

The retry feature is useful in some not-well-known cases where Galera behaves weirdly.

I.e. whenever a node is set to READ_ONLY=1, Galera desyncs and resyncs the node.

 

A check not taking this into account would move a node to OFFLINE and back for no reason.

Another important trait of this check is that it uses special HGs for maintenance, all in the 9000 range.

So if a node belongs to HG 10 and the check needs to put it in maintenance mode, the node will be moved to HG 9010.
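
Conceptually the move is just an update of the mysql_servers table issued against the ProxySQL admin interface; a sketch of what the check does, using the HG 10 example above (placeholder IP):

UPDATE mysql_servers SET hostgroup_id=9010 WHERE hostgroup_id=10 AND hostname='<node_ip>' AND port='3306';
LOAD MYSQL SERVERS TO RUNTIME;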

Once all is normal again, the node will be put back in its original HG. This check does NOT modify any state of the nodes.

 

Meaning it will NOT modify any variables or settings on the original node.

It will ONLY change node states in ProxySQL.

 

Multiwriter mode

The most common and recommended way to use Galera is in multi-writer mode.

Then play with the weights to have one node act as MAIN node, preventing/reducing certification failures and Brute Force Aborts from PXC.

The configuration to use is:

 

DELETE FROM mysql_replication_hostgroups WHERE writer_hostgroup=500 ;
DELETE FROM mysql_servers WHERE hostgroup_id IN (500,501);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',500,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',501,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',500,3306,1000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',500,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.8',500,3306,1);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.8',501,3306,1);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.9',500,3306,1);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.9',501,3306,1);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

In this test we will NOT use Replication HostGroups. We will do that later when testing single writer; for now... focus on multi-writer.

Segment 1 covers HG 500 and 501, while segment 2 serves only HG 501 (reads).
The weights for the servers in HG 500 are progressive from 1 to 1 billion, to reduce possible random writes on the non-main nodes.

As such, the nodes are:

 

HG 500
S1 192.168.1.5 - 1000000000
S1 192.168.1.6 - 1000000
S1 192.168.1.7 - 100
S2 192.168.1.8 - 1
S2 192.168.1.9 - 1

HG 501
S1 192.168.1.5 - 100
S1 192.168.1.6 - 1000000000
S1 192.168.1.7 - 1000000000
S2 192.168.1.8 - 1
S2 192.168.1.9 - 1

The following command can be used to view what ProxySQL is doing:

 

watch -n 1 'mysql -h 127.0.0.1 -P 3310 -uadmin -padmin -t -e "select * from stats_mysql_connection_pool where hostgroup in (500,501,9500,9501) order by hostgroup,srv_host ;" -e " select hostgroup_id,hostname,status,weight,comment from mysql_servers where hostgroup_id in (500,501,9500,9501)  order by hostgroup_id,hostname ;"'

Download the check from git (https://github.com/Tusamarco/proxy_sql_tools) and activate it in ProxySQL.

 

Be sure to set the parameters to match your installation:

 

DELETE FROM scheduler WHERE id=10;
INSERT  INTO scheduler (id,active,interval_ms,filename,arg1) VALUES (10,0,2000,"/var/lib/proxysql/galera_check.pl","-u=admin -p=admin -h=192.168.1.50 -H=500:W,501:R -P=3310 --execution_time=1 --retry_down=2 --retry_up=1 --main_segment=1 --debug=0  --log=/var/lib/proxysql/galeraLog");
LOAD SCHEDULER TO RUNTIME;SAVE SCHEDULER TO DISK;

When you want to activate it:

 

UPDATE scheduler SET active=1 WHERE id=10;
LOAD SCHEDULER TO RUNTIME;

The following is the kind of scenario we have:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.9 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 413        |
| 500       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 420        |
| 500       | 192.168.1.7 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 227        |
| 500       | 192.168.1.6 | 3306     | ONLINE | 0        | 10       | 10     | 0       | 12654    | 1016975         | 0               | 230        |
| 500       | 192.168.1.5 | 3306     | ONLINE | 0        | 9        | 29     | 0       | 107358   | 8629123         | 0               | 206        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 4        | 6      | 0       | 12602425 | 613371057       | 34467286486     | 413        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 6        | 7      | 0       | 12582617 | 612422028       | 34409606321     | 420        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 0        | 6        | 6      | 0       | 18580675 | 905464967       | 50824195445     | 227        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 0        | 6        | 14     | 0       | 18571127 | 905075154       | 50814832276     | 230        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 10     | 0       | 169570   | 8255821         | 462706881       | 206        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

To generate load we will use the following commands (or whatever you like, but use two different ones, one for read-only and one for reads/writes):

 

Write
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=stress_RW --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=off --oltp-dist-type=uniform --oltp-reconnect-mode=transaction --oltp-skip-trx=off --num-threads=10 --report-interval=10 --mysql-ignore-errors=all run
Read only
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=stress_RW --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=on --num-threads=10 --oltp-reconnect-mode=query --oltp-skip-trx=on --report-interval=10 --mysql-ignore-errors=all run

Now, the most common thing that can happen to a cluster node is to become a donor. This is a planned activity for PXC, and it is supposed to be managed in the least harmful way.


To simulate that, we will choose a node and crash it, forcing it to elect as DONOR our main node (the one with the highest weight).

To do so, we need to have the parameter wsrep_sst_donor set on the node that will request the SST data transfer.
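
For example, assuming the main node's wsrep_node_name is node1, it can be set dynamically on the future joiner (a sketch):

SET GLOBAL wsrep_sst_donor='node1';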

 

 

SHOW global VARIABLES LIKE 'wsrep_sst_donor';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| wsrep_sst_donor | node1 | <---
+-----------------+-------+

Activate the check if not already done:

 

UPDATE scheduler SET active=1 WHERE id=10;

And now run traffic.

Then check load:

 

SELECT * FROM stats_mysql_connection_pool WHERE hostgroup IN (500,501,9500,9501) ORDER BY hostgroup,srv_host ;
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 10       | 0        | 30     | 0       | 112662   | 9055479         | 0               | 120        | <--- our Donor
| 500       | 192.168.1.6 | 3306     | ONLINE | 0        | 10       | 10     | 0       | 12654    | 1016975         | 0               | 111        |
| 500       | 192.168.1.7 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 115        |
| 500       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 316        |
| 500       | 192.168.1.9 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 329        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 10     | 0       | 257271   | 12533763        | 714473854       | 120        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 0        | 10       | 18     | 0       | 18881582 | 920200116       | 51688974309     | 111        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 3        | 6        | 9      | 0       | 18927077 | 922317772       | 51794504662     | 115        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 1        | 8      | 0       | 12595556 | 613054573       | 34447564440     | 316        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 1        | 3        | 6      | 0       | 12634435 | 614936148       | 34560620180     | 329        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

On one of the nodes:

kill mysql

remove the content of the data directory

restart the node

The node will go into SST and our galera_check script will manage it:

+--------------+-------------+--------------+------------+--------------------------------------------------+
| hostgroup_id | hostname    | STATUS       | weight     | comment                                          |
+--------------+-------------+--------------+------------+--------------------------------------------------+
| 500          | 192.168.1.5 | OFFLINE_SOFT | 1000000000 | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; | <---- the donor
| 500          | 192.168.1.6 | ONLINE       | 1000000    |                                                  |
| 500          | 192.168.1.7 | ONLINE       | 100        |                                                  |
| 500          | 192.168.1.8 | ONLINE       | 1          |                                                  |
| 500          | 192.168.1.9 | ONLINE       | 1          |                                                  |
| 501          | 192.168.1.5 | OFFLINE_SOFT | 100        | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; |
| 501          | 192.168.1.6 | ONLINE       | 1000000000 |                                                  |
| 501          | 192.168.1.7 | ONLINE       | 1000000000 |                                                  |
| 501          | 192.168.1.8 | ONLINE       | 1          |                                                  |
| 501          | 192.168.1.9 | ONLINE       | 1          |                                                  |
+--------------+-------------+--------------+------------+--------------------------------------------------+

 

We can also check the galera_check log and see what happened:

 

2016/09/02 16:13:27.298:[WARN] Move node:192.168.1.5;3306;500;3010 SQL: UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=500 AND hostname='192.168.1.5' AND port='3306'
2016/09/02 16:13:27.303:[WARN] Move node:192.168.1.5;3306;501;3010 SQL: UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=501 AND hostname='192.168.1.5' AND port='3306'

The node will remain in OFFLINE_SOFT, with the other node (192.168.1.6, having the second-highest weight) serving the writes, for as long as the node is in DONOR state.


All as expected: the node was set to the OFFLINE_SOFT state, which means the existing connections were able to finish while the node was not accepting any NEW connections.


As soon as the node ends sending data to the joiner, it is moved back and traffic restarts:

 

2016/09/02 16:14:58.239:[WARN] Move node:192.168.1.5;3306;500;1000 SQL: UPDATE mysql_servers SET STATUS='ONLINE' WHERE hostgroup_id=500 AND hostname='192.168.1.5' AND port='3306'
2016/09/02 16:14:58.243:[WARN] Move node:192.168.1.5;3306;501;1000 SQL: UPDATE mysql_servers SET STATUS='ONLINE' WHERE hostgroup_id=501 AND hostname='192.168.1.5' AND port='3306'
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 6        | 1        | 37     | 0       | 153882   | 12368557        | 0               | 72         | <---
| 500       | 192.168.1.6 | 3306     | ONLINE | 1        | 9        | 10     | 0       | 16008    | 1286492         | 0               | 42         |
| 500       | 192.168.1.7 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 1398     | 112371          | 0               | 96         |
| 500       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 24545  | 791     | 24545    | 122725          | 0               | 359        |
| 500       | 192.168.1.9 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 15108    | 1214366         | 0               | 271        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 1        | 0        | 11     | 0       | 2626808  | 128001112       | 7561278884      | 72         |
| 501       | 192.168.1.6 | 3306     | ONLINE | 5        | 7        | 20     | 0       | 28629516 | 1394974468      | 79289633420     | 42         |
| 501       | 192.168.1.7 | 3306     | ONLINE | 2        | 8        | 10     | 0       | 29585925 | 1441400648      | 81976494740     | 96         |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 16779  | 954     | 12672983 | 616826002       | 34622768228     | 359        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 4        | 6      | 0       | 13567512 | 660472589       | 37267991677     | 271        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

This was easy, and it is more or less managed also by the standard scripts.


But what would happen if my donor was set to NOT serve queries while in donor state?


Wait, what?? Yes, PXC (Galera in general) can be set to refuse any query when the node goes into DONOR state.

If not managed, this will cause issues, because the node will simply reject queries while ProxySQL still sees it as alive.

 

Let me show you:

 

SHOW global VARIABLES LIKE 'wsrep_sst_donor_rejects_queries';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| wsrep_sst_donor_rejects_queries | ON    |
+---------------------------------+-------+

For a moment let us deactivate the check.

Then do the same stop and delete of the data dir, and restart the node... SST takes place.

And sysbench will report:

 

ALERT: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'BEGIN'
FATAL: failed to execute function `event': 3
ALERT: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'BEGIN'
FATAL: failed to execute function `event': 3

But what about ProxySQL?

 

+-----------+-------------+----------+---------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS  | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+---------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE  | 0        | 0        | 101    | 0       | 186331   | 14972717        | 0               | 118        | <-- no writes in either HG
| 500       | 192.168.1.6 | 3306     | ONLINE  | 0        | 9        | 10     | 0       | 20514    | 1648665         | 0               | 171        |  |
| 500       | 192.168.1.7 | 3306     | ONLINE  | 0        | 1        | 3      | 0       | 5881     | 472629          | 0               | 134        |  |
| 500       | 192.168.1.8 | 3306     | ONLINE  | 0        | 0        | 205451 | 1264    | 205451   | 1027255         | 0               | 341        |  |
| 500       | 192.168.1.9 | 3306     | ONLINE  | 0        | 1        | 2      | 0       | 15642    | 1257277         | 0               | 459        |  -
| 501       | 192.168.1.5 | 3306     | ONLINE  | 1        | 0        | 13949  | 0       | 4903347  | 238627310       | 14089708430     | 118        |
| 501       | 192.168.1.6 | 3306     | ONLINE  | 2        | 10       | 20     | 0       | 37012174 | 1803380964      | 103269634626    | 171        |
| 501       | 192.168.1.7 | 3306     | ONLINE  | 2        | 11       | 13     | 0       | 38782923 | 1889507208      | 108288676435    | 134        |
| 501       | 192.168.1.8 | 3306     | SHUNNED | 0        | 0        | 208452 | 1506    | 12864656 | 626156995       | 34622768228     | 341        |
| 501       | 192.168.1.9 | 3306     | ONLINE  | 1        | 3        | 6      | 0       | 14451462 | 703534884       | 39837663734     | 459        |
+-----------+-------------+----------+---------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
mysql> SELECT * FROM mysql_server_connect_log WHERE hostname IN ('192.168.1.5','192.168.1.6','192.168.1.7','192.168.1.8','192.168.1.9')  ORDER BY time_start_us DESC LIMIT 10;
+-------------+------+------------------+-------------------------+--------------------------------------------------------------------------------------------------------+
| hostname    | port | time_start_us    | connect_success_time_us | connect_error                                                                                          |
+-------------+------+------------------+-------------------------+--------------------------------------------------------------------------------------------------------+
| 192.168.1.9 | 3306 | 1472827444621954 | 1359                    | NULL                                                                                                   |
| 192.168.1.8 | 3306 | 1472827444618883 | 0                       | Can't connect to MySQL server on '192.168.1.8' (107)                                                   |
| 192.168.1.7 | 3306 | 1472827444615819 | 433                     | NULL                                                                                                   |
| 192.168.1.6 | 3306 | 1472827444612722 | 538                     | NULL                                                                                                   |
| 192.168.1.5 | 3306 | 1472827444606560 | 473                     | NULL                                                                                                   | <-- donor is seen as up
| 192.168.1.9 | 3306 | 1472827384621463 | 1286                    | NULL                                                                                                   |
| 192.168.1.8 | 3306 | 1472827384618442 | 0                       | Lost connection to MySQL server at 'handshake: reading inital communication packet', system error: 107 |
| 192.168.1.7 | 3306 | 1472827384615317 | 419                     | NULL                                                                                                   |
| 192.168.1.6 | 3306 | 1472827384612241 | 415                     | NULL                                                                                                   |
| 192.168.1.5 | 3306 | 1472827384606117 | 454                     | NULL                                                                                                   | <-- donor is seen as up
+-------------+------+------------------+-------------------------+--------------------------------------------------------------------------------------------------------+
select * from mysql_server_ping_log where hostname in ('192.168.1.5','192.168.1.6','192.168.1.7','192.168.1.8','192.168.1.9')  order by time_start_us desc limit 10;
+-------------+------+------------------+----------------------+------------------------------------------------------+
| hostname    | port | time_start_us    | ping_success_time_us | ping_error                                           |
+-------------+------+------------------+----------------------+------------------------------------------------------+
| 192.168.1.9 | 3306 | 1472827475062217 | 311                  | NULL                                                 |
| 192.168.1.8 | 3306 | 1472827475060617 | 0                    | Can't connect to MySQL server on '192.168.1.8' (107) |
| 192.168.1.7 | 3306 | 1472827475059073 | 108                  | NULL                                                 |
| 192.168.1.6 | 3306 | 1472827475057281 | 102                  | NULL                                                 |
| 192.168.1.5 | 3306 | 1472827475054188 | 74                   | NULL                                                 | <-- donor is seen as up
| 192.168.1.9 | 3306 | 1472827445061877 | 491                  | NULL                                                 |
| 192.168.1.8 | 3306 | 1472827445060254 | 0                    | Can't connect to MySQL server on '192.168.1.8' (107) |
| 192.168.1.7 | 3306 | 1472827445058688 | 53                   | NULL                                                 |
| 192.168.1.6 | 3306 | 1472827445057124 | 131                  | NULL                                                 |
| 192.168.1.5 | 3306 | 1472827445054015 | 98                   | NULL                                                 | <-- donor is seen as up
+-------------+------+------------------+----------------------+------------------------------------------------------+

As you can see, all seems OK even if it is not :)

Let us turn on the galera_check and see what happens.

Run some load in read and write.

And now let me do the stop-delete-restart-SST process again:

 

kill -9 <mysqld_safe_pid> <mysqld_pid>; rm -fr data/*;rm -fr logs/*;sleep 2;./start

As soon as the node goes down, ProxySQL shuns the node.

 

+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS  | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE  | 7        | 3        | 34     | 0       | 21570   | 1733833         | 0               | 146        |
| 500       | 192.168.1.6 | 3306     | ONLINE  | 1        | 8        | 12     | 0       | 9294    | 747063          | 0               | 129        |
| 500       | 192.168.1.7 | 3306     | ONLINE  | 1        | 0        | 4      | 0       | 3396    | 272950          | 0               | 89         |
| 500       | 192.168.1.8 | 3306     | SHUNNED | 0        | 0        | 1      | 6       | 12      | 966             | 0               | 326        | <-- crashed
| 500       | 192.168.1.9 | 3306     | ONLINE  | 1        | 0        | 2      | 0       | 246     | 19767           | 0               | 286        |
| 501       | 192.168.1.5 | 3306     | ONLINE  | 0        | 1        | 2      | 0       | 772203  | 37617973        | 2315131214      | 146        |
| 501       | 192.168.1.6 | 3306     | ONLINE  | 9        | 3        | 12     | 0       | 3439458 | 167514166       | 10138636314     | 129        |
| 501       | 192.168.1.7 | 3306     | ONLINE  | 1        | 12       | 13     | 0       | 3183822 | 155064971       | 9394612877      | 89         |
| 501       | 192.168.1.8 | 3306     | SHUNNED | 0        | 0        | 1      | 6       | 11429   | 560352          | 35350726        | 326        | <-- crashed
| 501       | 192.168.1.9 | 3306     | ONLINE  | 0        | 1        | 1      | 0       | 312253  | 15227786        | 941110520       | 286        |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

Immediately after, galera_check identifies that the node requesting the SST has our writer as DONOR. Given it is NOT the only writer in the HG, and given it has the variable wsrep_sst_donor_rejects_queries active, it cannot be set OFFLINE_SOFT, and we do not want ProxySQL to consider it OFFLINE_HARD (because it is not).
As such the script will move it to a special HG:

 

2016/09/04 16:11:22.091:[WARN] Move node:192.168.1.5;3306;500;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9500 WHERE hostgroup_id=500 AND hostname='192.168.1.5' AND port='3306'
2016/09/04 16:11:22.097:[WARN] Move node:192.168.1.5;3306;501;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9501 WHERE hostgroup_id=501 AND hostname='192.168.1.5' AND port='3306'
+--------------+-------------+------+--------+------------+-------------+-----------------+---------------------+---------+----------------+--------------------------------------------------+
| hostgroup_id | hostname    | port | STATUS | weight     | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment                                          |
+--------------+-------------+------+--------+------------+-------------+-----------------+---------------------+---------+----------------+--------------------------------------------------+
| 500          | 192.168.1.6 | 3306 | ONLINE | 1000000    | 0           | 1000            | 0                   | 0       | 0              |                                                  |
| 500          | 192.168.1.7 | 3306 | ONLINE | 100        | 0           | 1000            | 0                   | 0       | 0              |                                                  |
| 500          | 192.168.1.8 | 3306 | ONLINE | 1          | 0           | 1000            | 0                   | 0       | 0              | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; |
| 500          | 192.168.1.9 | 3306 | ONLINE | 1          | 0           | 1000            | 0                   | 0       | 0              | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; |
| 501          | 192.168.1.6 | 3306 | ONLINE | 1000000000 | 0           | 1000            | 0                   | 0       | 0              |                                                  |
| 501          | 192.168.1.7 | 3306 | ONLINE | 1000000000 | 0           | 1000            | 0                   | 0       | 0              |                                                  |
| 501          | 192.168.1.9 | 3306 | ONLINE | 1          | 0           | 1000            | 0                   | 0       | 0              | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; |
| 9500         | 192.168.1.5 | 3306 | ONLINE | 1000000000 | 0           | 1000            | 0                   | 0       | 0              | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; | <-- Special HG
| 9501         | 192.168.1.5 | 3306 | ONLINE | 100        | 0           | 1000            | 0                   | 0       | 0              | 500_W_501_R_retry_up=0;500_W_501_R_retry_down=0; | <-- Special HG
+--------------+-------------+------+--------+------------+-------------+-----------------+---------------------+---------+----------------+--------------------------------------------------+

In this way the Donor will continue to serve the Joiner, but applications will not see it.


What is also very important is what the applications will see.

The Applications doing WRITEs will see:

 

[ 10s] threads: 10, tps: 9.50, reads: 94.50, writes: 42.00, response time: 1175.77ms (95%), errors: 0.00, reconnects: 0.00
...
[ 40s] threads: 10, tps: 2.80, reads: 26.10, writes: 11.60, response time: 3491.45ms (95%), errors: 0.00, reconnects: 0.10
[ 50s] threads: 10, tps: 4.80, reads: 50.40, writes: 22.40, response time: 10062.13ms (95%), errors: 0.80, reconnects: 351.60 <--- Main writer moved to another HG
[ 60s] threads: 10, tps: 5.90, reads: 53.10, writes: 23.60, response time: 2869.82ms (95%), errors: 0.00, reconnects: 0.00
...

At the moment of the shift from one node to another, the applications will have to manage the RETRY, but it is a very short moment that will cause limited impact on the production flow.

Application readers will see no errors:

 

[ 10s] threads: 10, tps: 0.00, reads: 13007.31, writes: 0.00, response time: 9.13ms (95%), errors: 0.00, reconnects: 0.00
[ 50s] threads: 10, tps: 0.00, reads: 9613.70, writes: 0.00, response time: 10.66ms (95%), errors: 0.00, reconnects: 0.20 <-- just a glitch in reconnect
[ 60s] threads: 10, tps: 0.00, reads: 10807.90, writes: 0.00, response time: 11.07ms (95%), errors: 0.00, reconnects: 0.20
[ 70s] threads: 10, tps: 0.00, reads: 9082.61, writes: 0.00, response time: 23.62ms (95%), errors: 0.00, reconnects: 0.00
...
[ 390s] threads: 10, tps: 0.00, reads: 13050.80, writes: 0.00, response time: 8.97ms (95%), errors: 0.00, reconnects: 0.00

When the donor ends providing the SST, it comes back and the script manages it: galera_check will put it back in the right HG:

 

2016/09/04 16:12:34.266:[WARN] Move node:192.168.1.5;3306;9500;1010 SQL: UPDATE mysql_servers SET hostgroup_id=500 WHERE hostgroup_id=9500 AND hostname='192.168.1.5' AND port='3306'
2016/09/04 16:12:34.270:[WARN] Move node:192.168.1.5;3306;9501;1010 SQL: UPDATE mysql_servers SET hostgroup_id=501 WHERE hostgroup_id=9501 AND hostname='192.168.1.5' AND port='3306'

The crashed node will be restarted by the SST process, and as such the node will come up.

But if the level of load in the cluster is mid/high, it will remain in JOINED state for some time, becoming visible to ProxySQL again, while ProxySQL will not correctly recognize its state.

 

 

2016-09-04 16:17:15 21035 [Note] WSREP: 3.2 (node4): State transfer from 1.1 (node1) complete.
2016-09-04 16:17:15 21035 [Note] WSREP: Shifting JOINER -> JOINED (TO: 254515)

To avoid issues, the script will move it to the special HG, allowing it to recover without interfering with the real load.

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 6        | 2        | 15     | 0       | 3000    | 241060          | 0               | 141        |
| 500       | 192.168.1.6 | 3306     | ONLINE | 1        | 9        | 13     | 0       | 13128   | 1055268         | 0               | 84         |
| 500       | 192.168.1.7 | 3306     | ONLINE | 1        | 0        | 4      | 0       | 3756    | 301874          | 0               | 106        |
| 500       | 192.168.1.9 | 3306     | ONLINE | 1        | 0        | 2      | 0       | 4080    | 327872          | 0               | 278        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 1        | 0        | 2      | 0       | 256753  | 12508935        | 772048259       | 141        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 4        | 8        | 12     | 0       | 5116844 | 249191524       | 15100617833     | 84         |
| 501       | 192.168.1.7 | 3306     | ONLINE | 2        | 11       | 13     | 0       | 4739756 | 230863200       | 13997231724     | 106        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 496524  | 24214563        | 1496482104      | 278        |
| 9500      | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 331        |<-- Joined not Sync
| 9501      | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 331        |<-- Joined not Sync
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
Once the node is fully recovered, galera_check puts it back in the original HG, ready to serve requests:
 
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 15     | 0       | 3444    | 276758          | 0               | 130        |
| 500       | 192.168.1.6 | 3306     | ONLINE | 0        | 9        | 13     | 0       | 13200   | 1061056         | 0               | 158        |
| 500       | 192.168.1.7 | 3306     | ONLINE | 0        | 0        | 4      | 0       | 3828    | 307662          | 0               | 139        |
| 500       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |<-- up again
| 500       | 192.168.1.9 | 3306     | ONLINE | 0        | 0        | 2      | 0       | 4086    | 328355          | 0               | 336        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 2      | 0       | 286349  | 13951366        | 861638962       | 130        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 0        | 12       | 12     | 0       | 5239212 | 255148806       | 15460951262     | 158        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 0        | 13       | 13     | 0       | 4849970 | 236234446       | 14323937975     | 139        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |<-- up again
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 507910  | 24768898        | 1530841172      | 336        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

Summarizing, the logical steps are:

 

                +---------+
                |  Crash  |
                +----+----+
                     |
                     v
            +--------+-------+
            |  ProxySQL      |
            |  shun crashed  |
            |      node      |
            +--------+-------+
                     |
                     |
                     v
   +-----------------+-----------------+
   |  Donor has one of the following?  |
   |  wsrep_sst_donor_rejects_queries  |
   |  OR                               |
   |  wsrep_reject_queries             |
   +-----------------------------------+
      |No                            |Yes
      v                              v
+-----+----------+       +-----------+----+
| Galera_check   |       | Galera_check   |
| put the donor  |       | put the donor  |
| in OFFLINE_SOFT|       | in special HG  |
+---+------------+       +-----------+----+
    |                                |
    |                                |
    v                                v
+---+--------------------------------+-----+
|            Donor SST ends                |
+---+---------------+----------------+-----+
    |               |                |
    |               |                |
+---+------------+  |    +-----------+----+
| Galera_check   |  |    | Galera_check   |
| put the donor  |  |    | put the donor  |
| ONLINE         |  |    | in Original HG |
+----------------+  |    +----------------+
                    |                
                    |                 
+------------------------------------------+
|           crashed SST ends               |
+-------------------+----------------------+
                    |
                    |
       +------------+-------------+
       |  Crashed node back but   +<------------+
       |  Not Sync?               |             |
       +--------------------------+             |
          |No                   |Yes            |
          |                     |               |
          |                     |               |
+---------+------+       +------+---------+     |
| Galera_check   |       | Galera_check   |     |
| put the node   |       | put the node   +-----+
| back orig. HG  |       | Special HG     |
+--------+-------+       +----------------+
         |
         |
         |
         |      +---------+
         +------>   END   |
                +---------+

As mentioned, in this integration with galera_check we can manage several node states.


Another case is when we need to have the node not accepting ANY query.

We may need that for several reasons, including preparing the node for maintenance.


In PXC and other Galera implementations we can set the value of wsrep_reject_queries to one of the following valid values:

NONE

ALL

ALL_KILL

Let us see how it works:

Run some load, then on the main writer node (192.168.1.5) execute:

 

SET global wsrep_reject_queries=ALL;

This will block any new queries from being executed, while the already running ones are allowed to complete.

Do a simple select on the node:

 

(root@localhost:pm) [test]>SELECT * FROM tbtest1;
ERROR 1047 (08S01): WSREP has not yet prepared node for application use

The point is, as you should have understood by now, that ProxySQL does not see these conditions:

 

+-------------+------+------------------+----------------------+------------+
| hostname    | port | time_start_us    | ping_success_time_us | ping_error |
+-------------+------+------------------+----------------------+------------+
| 192.168.1.5 | 3306 | 1473005467628001 | 35                   | NULL       | <--- ping ok
| 192.168.1.5 | 3306 | 1473005437628014 | 154                  | NULL       |
+-------------+------+------------------+----------------------+------------+
+-------------+------+------------------+-------------------------+---------------+
| hostname    | port | time_start_us    | connect_success_time_us | connect_error |
+-------------+------+------------------+-------------------------+---------------+
| 192.168.1.5 | 3306 | 1473005467369575 | 246                     | NULL          | <--- connect ok
| 192.168.1.5 | 3306 | 1473005407369441 | 353                     | NULL          |
+-------------+------+------------------+-------------------------+---------------+

The galera_check script will instead manage it:

 

+-----------+-------------+----------+--------------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS       | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | OFFLINE_SOFT | 0        | 0        | 8343   | 0       | 10821   | 240870          | 0               | 93         | <--- galera check put it OFFLINE
| 500       | 192.168.1.6 | 3306     | ONLINE       | 10       | 0        | 15     | 0       | 48012   | 3859402         | 0               | 38         | <--- writer
| 500       | 192.168.1.7 | 3306     | ONLINE       | 0        | 1        | 6      | 0       | 14712   | 1182364         | 0               | 54         |
| 500       | 192.168.1.8 | 3306     | ONLINE       | 0        | 1        | 2      | 0       | 1092    | 87758           | 0               | 602        |
| 500       | 192.168.1.9 | 3306     | ONLINE       | 0        | 1        | 4      | 0       | 5352    | 430152          | 0               | 238        |
| 501       | 192.168.1.5 | 3306     | OFFLINE_SOFT | 0        | 0        | 1410   | 0       | 197909  | 9638665         | 597013919       | 93         |
| 501       | 192.168.1.6 | 3306     | ONLINE       | 2        | 10       | 12     | 0       | 7822682 | 380980455       | 23208091727     | 38         |
| 501       | 192.168.1.7 | 3306     | ONLINE       | 0        | 13       | 13     | 0       | 7267507 | 353962618       | 21577881545     | 54         |
| 501       | 192.168.1.8 | 3306     | ONLINE       | 0        | 1        | 1      | 0       | 241641  | 11779770        | 738145270       | 602        |
| 501       | 192.168.1.9 | 3306     | ONLINE       | 1        | 0        | 1      | 0       | 756415  | 36880233        | 2290165636      | 238        |
+-----------+-------------+----------+--------------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

In this case the script will put the node in OFFLINE_SOFT, given that "SET GLOBAL wsrep_reject_queries=ALL" means do not accept NEW queries but complete the existing ones, which matches the OFFLINE_SOFT semantics.

The script also manages the case of "SET GLOBAL wsrep_reject_queries=ALL_KILL;",

which from the ProxySQL point of view does not exist either:

 

+-------------+------+------------------+----------------------+------------+
| hostname    | port | time_start_us    | ping_success_time_us | ping_error |
+-------------+------+------------------+----------------------+------------+
| 192.168.1.5 | 3306 | 1473005827629069 | 59                   | NULL       |<--- ping ok
| 192.168.1.5 | 3306 | 1473005797628988 | 57                   | NULL       |
+-------------+------+------------------+----------------------+------------+
+-------------+------+------------------+-------------------------+---------------+
| hostname    | port | time_start_us    | connect_success_time_us | connect_error |
+-------------+------+------------------+-------------------------+---------------+
| 192.168.1.5 | 3306 | 1473005827370084 | 370                     | NULL          | <--- connect ok
| 192.168.1.5 | 3306 | 1473005767369915 | 243                     | NULL          |
+-------------+------+------------------+-------------------------+---------------+
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 9500      | 192.168.1.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |<--- galera check put it in special HG
| 9501      | 192.168.1.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

The difference here is that the script moves the node to the special HG to isolate it, instead of leaving it in the original HG.


As you can see, the integration between ProxySQL and PXC (Galera) in the multi-writer case works perfectly, provided you have a script like galera_check that correctly manages the different PXC/Galera states.

 

ProxySQL and PXC using Replication HostGroup

Sometimes we may need to have 100% of the writes going to only one node at a time.

As explained above, ProxySQL uses weights to redirect a % of the load to a specific node.

In most cases it is enough to set the weight of the main writer to a very high value, like 10 billion, and the next node to 1 thousand, to achieve an almost-single writer.

But this is not 100%: it still allows ProxySQL to send a query to the other node(s) once every X; with those weights, roughly one connection out of ten million would still land on the low-weight node.

The solution to this, which also keeps it consistent with the ProxySQL logic, is to use Replication Hostgroups.

Replication HGs are special HGs that ProxySQL sees as connected for R/W operations.

ProxySQL analyzes the value of the read_only variable and assigns to the reader HG the nodes that have it enabled,

while a node having read_only=0 resides in both HGs.

 

As such, the first thing we need to do is tell ProxySQL that our two HGs, 500 and 501, are replication HGs:

 

INSERT INTO mysql_replication_hostgroups VALUES (500,501,'');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK; 
SELECT * FROM mysql_replication_hostgroups ;
+------------------+------------------+---------+
| writer_hostgroup | reader_hostgroup | comment |
+------------------+------------------+---------+
| 500              | 501              |         |
+------------------+------------------+---------+

Now whenever I set the value of read_only on a node, ProxySQL will move the node accordingly.

Let us see how.

Current:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 6        | 1        | 7      | 0       | 16386    | 1317177         | 0               | 97         |
| 500       | 192.168.1.6 | 3306     | ONLINE | 1        | 9        | 15     | 0       | 73764    | 5929366         | 0               | 181        |
| 500       | 192.168.1.7 | 3306     | ONLINE | 1        | 0        | 6      | 0       | 18012    | 1447598         | 0               | 64         |
| 500       | 192.168.1.8 | 3306     | ONLINE | 1        | 0        | 2      | 0       | 1440     | 115728          | 0               | 341        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 1210029  | 58927817        | 3706882671      | 97         |
| 501       | 192.168.1.6 | 3306     | ONLINE | 1        | 11       | 12     | 0       | 16390790 | 798382865       | 49037691590     | 181        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 1        | 12       | 13     | 0       | 15357779 | 748038558       | 45950863867     | 64         |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 1247662  | 60752227        | 3808131279      | 341        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 1766309  | 86046839        | 5374169120      | 422        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

Now run:

SET GLOBAL read_only=1;

on the following nodes: 192.168.1.6/7/8/9.

After:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 10       | 0        | 20     | 0       | 25980    | 2088346         | 0               | 93         |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 1787979  | 87010074        | 5473781192      | 93         |
| 501       | 192.168.1.6 | 3306     | ONLINE | 4        | 8        | 12     | 0       | 18815907 | 916547402       | 56379724890     | 79         |
| 501       | 192.168.1.7 | 3306     | ONLINE | 1        | 12       | 13     | 0       | 17580636 | 856336023       | 52670114510     | 131        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 15324    | 746109          | 46760779        | 822        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 16210    | 789999          | 49940867        | 679        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

In this scenario, if a reader node crashes, the application will not suffer at all given the redundancy.


But if the writer crashes, then the issue exists, because there will be NO node available to manage the failover.

The solution is to either do the node election manually, or to have the script elect the node with the lowest read weight in the same segment as the new writer.
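
Manually, the election boils down to enabling writes on the chosen reader; thanks to the replication hostgroup, ProxySQL will then move it to the writer HG by itself. A sketch:

-- on the elected reader, e.g. the surviving node with the lowest read weight
SET GLOBAL read_only=0;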

 

The diagram below shows what is going to happen when a node crashes (bird's-eye view):

 

                         +---------+
                         |  Crash  |
                         +----+----+
                              |
                              v
                     +--------+-------+
                     |  ProxySQL      |
                     |  shun crashed  |
                     |      node      |
                     +--------+-------+
                              |
                              |
                              v
            +-----------------+-----------------+
+----------->   HostGroup has another active    |
|           |   Node in HG writer?              |
|           +--+--------------+---------------+-+
|              |              |               |
|              |              |               |
|              |No            |               |Yes
|              |              |               |
|        +-----v----------+   |   +-----------v----+
|        |ProxySQL will   |   |   |ProxySQL will   |
|        |stop serving    |   |   |redirect load   >--------+
|        |writes          |   |   |there           |        |
|        +----------------+   |   +----------------+        |
|                             |                             |
|                             v                             |
|                     +-------+--------+                    |
|                     |ProxySQL checks |                    |
|                     |READ_ONLY on    |                    |
|                     |Reader HG       |                    |
|                     |                |                    |
|                     +-------+--------+                    |
|                             |                             |
|                             v                             |
|                     +-------+--------+                    |
|                     |Any Node with   |                    |
|                     |READ_ONLY = 0 ? |                    |
|                     +----------------+                    |
|                      |No            |Yes                  |
|                      |              |                     |
|           +----------v------+    +--v--------------+      |
|           |ProxySQL will    |    |ProxySQL will    |      |
|           |continue to      |    |Move node to     |      |
+<---------<+do not serve     |    |Writer HG        |      |
|           |Writes           |    |                 |      |
|           +-----------------+    +--------v--------+      |
|                                           |               |
+-------------------------------------------+               |
                         +---------+                        |
                         |   END   <------------------------+
                         +---------+

 

 

The script should act in the step immediately after ProxySQL has SHUNNED the node, just replacing READ_ONLY=1 with READ_ONLY=0 on the reader node with the lowest read weight.


ProxySQL will do the rest, copying the node into the WRITER HG and keeping the low weight, so that WHEN/IF the original node comes back, the new node will not compete for traffic.

I have included that special function in the check; the feature allows automatic fail-over.

This experimental feature is active only if explicitly set in the parameters that the scheduler passes to the script.

To activate it, just add --active_failover to the list of arguments passed to the script in the scheduler.

 

My recommendation is to have two entries in the scheduler and activate the one with --active_failover for testing, remembering to deactivate the other.
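
For instance, something like the following sketch, based on the scheduler entry used earlier (id 11 and the exact --active_failover syntax are assumptions, check the script's help):

INSERT INTO scheduler (id,active,interval_ms,filename,arg1) VALUES (11,0,2000,"/var/lib/proxysql/galera_check.pl","-u=admin -p=admin -h=192.168.1.50 -H=500:W,501:R -P=3310 --execution_time=1 --retry_down=2 --retry_up=1 --main_segment=1 --active_failover --log=/var/lib/proxysql/galeraLog");
LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;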

 

Let us see the manual procedure first. The process will be:

1. Generate some load
2. Kill the writer node
3. Manually elect a reader as writer
4. Recover the crashed node

 

Current load:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 10       | 0        | 10     | 0       | 30324   | 2437438         | 0               | 153        |
| 501       | 192.168.1.5 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 1519612 | 74006447        | 4734427711      | 153        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 4        | 8        | 12     | 0       | 7730857 | 376505014       | 24119645457     | 156        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 2        | 10       | 12     | 0       | 7038332 | 342888697       | 21985442619     | 178        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 612523  | 29835858        | 1903693835      | 337        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 611021  | 29769497        | 1903180139      | 366        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

Kill the main node 192.168.1.5

 

+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS  | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 501       | 192.168.1.5 | 3306     | SHUNNED | 0        | 0        | 1      | 11      | 1565987 | 76267703        | 4879938857      | 119        |
| 501       | 192.168.1.6 | 3306     | ONLINE  | 1        | 11       | 12     | 0       | 8023216 | 390742215       | 25033271548     | 112        |
| 501       | 192.168.1.7 | 3306     | ONLINE  | 1        | 11       | 12     | 0       | 7306838 | 355968373       | 22827016386     | 135        |
| 501       | 192.168.1.8 | 3306     | ONLINE  | 1        | 0        | 1      | 0       | 638326  | 31096065        | 1984732176      | 410        |
| 501       | 192.168.1.9 | 3306     | ONLINE  | 1        | 0        | 1      | 0       | 636857  | 31025014        | 1982213114      | 328        |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
+-------------+------+------------------+----------------------+------------------------------------------------------+
| hostname    | port | time_start_us    | ping_success_time_us | ping_error                                           |
+-------------+------+------------------+----------------------+------------------------------------------------------+
| 192.168.1.5 | 3306 | 1473070640798571 | 0                    | Can't connect to MySQL server on '192.168.1.5' (107) |
| 192.168.1.5 | 3306 | 1473070610798464 | 0                    | Can't connect to MySQL server on '192.168.1.5' (107) |
+-------------+------+------------------+----------------------+------------------------------------------------------+
+-------------+------+------------------+-------------------------+------------------------------------------------------+
| hostname    | port | time_start_us    | connect_success_time_us | connect_error                                        |
+-------------+------+------------------+-------------------------+------------------------------------------------------+
| 192.168.1.5 | 3306 | 1473070640779903 | 0                       | Can't connect to MySQL server on '192.168.1.5' (107) |
| 192.168.1.5 | 3306 | 1473070580779977 | 0                       | Can't connect to MySQL server on '192.168.1.5' (107) |
+-------------+------+------------------+-------------------------+------------------------------------------------------+

When the node is killed, ProxySQL shuns it and also reports issues with the checks (connect and ping).
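
The two error tables above come straight from the Monitor module logs; from the admin interface you can pull them with queries along these lines (the LIMIT is arbitrary):

SELECT hostname,port,time_start_us,ping_success_time_us,ping_error
FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 2;

SELECT hostname,port,time_start_us,connect_success_time_us,connect_error
FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 2;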

During this time frame the application will experience issues and, if it is not designed to manage retries (and eventually a queue), it will crash.


Sysbench reports the errors:

 

Writes
 
[  10s] threads: 10, tps: 6.70, reads: 68.50, writes: 30.00, response time: 1950.53ms (95%), errors: 0.00, reconnects:  0.00
...
[1090s] threads: 10, tps: 4.10, reads: 36.90, writes: 16.40, response time: 2226.45ms (95%), errors: 0.00, reconnects:  1.00  <-+ killing the node
[1100s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |
[1110s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |
[1120s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |
[1130s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |-- Gap waiting for a node to become
[1140s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |   READ_ONLY=0
[1150s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |
[1160s] threads: 10, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 1.00, reconnects:  0.00         |
[1170s] threads: 10, tps: 4.70, reads: 51.30, writes: 22.80, response time: 80430.18ms (95%), errors: 0.00, reconnects:  0.00 <-+
[1180s] threads: 10, tps: 8.90, reads: 80.10, writes: 35.60, response time: 2068.39ms (95%), errors: 0.00, reconnects:  0.00
...
 [1750s] threads: 10, tps: 5.50, reads: 49.80, writes: 22.80, response time: 2266.80ms (95%), errors: 0.00, reconnects:  0.00 -- No additional errors

I decided to promote node 192.168.1.6, given that in this setup the weight of the readers was equal and as such there was no difference.

 

(root@localhost:pm) [(none)]>set global read_only=0;
Query OK, 0 rows affected (0.00 sec)

Checking ProxySQL:

 

+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS  | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.6 | 3306     | ONLINE  | 10       | 0        | 10     | 0       | 1848    | 148532          | 0               | 40         |
| 501       | 192.168.1.5 | 3306     | SHUNNED | 0        | 0        | 1      | 72      | 1565987 | 76267703        | 4879938857      | 38         |
| 501       | 192.168.1.6 | 3306     | ONLINE  | 2        | 10       | 12     | 0       | 8843069 | 430654903       | 27597990684     | 40         |
| 501       | 192.168.1.7 | 3306     | ONLINE  | 1        | 11       | 12     | 0       | 8048826 | 392101994       | 25145582384     | 83         |
| 501       | 192.168.1.8 | 3306     | ONLINE  | 1        | 0        | 1      | 0       | 725820  | 35371512        | 2259974847      | 227        |
| 501       | 192.168.1.9 | 3306     | ONLINE  | 1        | 0        | 1      | 0       | 723582  | 35265066        | 2254824754      | 290        |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

As soon as the READ_ONLY value is modified, ProxySQL moves the node to the writer hostgroup, and writes can take place again.

At this point in time production activities are recovered.
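
This automatic move works because ProxySQL treats HG 500/501 as a writer/reader pair; assuming the pair was registered at setup time, the registration would have been something like:

INSERT INTO mysql_replication_hostgroups VALUES (500,501);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;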

 

Reads had just a minor glitch:

 

Reads
[  10s] threads: 10, tps: 0.00, reads: 20192.15, writes: 0.00, response time: 6.96ms (95%), errors: 0.00, reconnects:  0.00
...
[ 410s] threads: 10, tps: 0.00, reads: 16489.03, writes: 0.00, response time: 9.41ms (95%), errors: 0.00, reconnects:  2.50
...
[ 710s] threads: 10, tps: 0.00, reads: 18789.40, writes: 0.00, response time: 6.61ms (95%), errors: 0.00, reconnects:  0.00

The glitch happened when node 192.168.1.6 was copied over to HG 500, but there were no interruptions or errors.

At this point let us put back the crashed node which, on coming back, elects Node2 (192.168.1.6) as its donor.

This was a PXC/Galera choice, and we have to accept and manage it.


Note that other, more basic scripts would simply put the node in OFFLINE_SOFT, given that the node will become a DONOR.

Galera_check instead recognizes that Node2 (192.168.1.6) is the only active node in the segment for the writer HG, while it is not the only one present in the READER HG.

As such it will put the node in OFFLINE_SOFT only for the READER HG, trying to reduce the load on the node, but it will keep it active in the WRITER HG to prevent a service interruption.
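
You can follow this per-hostgroup state from the admin interface; a simple check (not part of the script) is:

SELECT hostgroup_id,hostname,port,status,weight
FROM runtime_mysql_servers
ORDER BY hostgroup_id,hostname;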

 

The node restarts and asks for a donor:

 

2016-09-05 12:21:43 8007 [Note] WSREP: Flow-control interval: [67, 67]
2016-09-05 12:21:45 8007 [Note] WSREP: Member 1.1 (node1) requested state transfer from '*any*'. Selected 0.1 (node2)(SYNCED) as donor.
2016-09-05 12:21:46 8007 [Note] WSREP: (ef248c1f, 'tcp://192.168.1.8:4567') turning message relay requesting off
2016-09-05 12:21:52 8007 [Note] WSREP: New cluster view: global state: 234bb6ed-527d-11e6-9971-e794f632b140:324329, view# 7: Primary, number of nodes: 5, my index: 3, protocol version 3

Galera_check will set 192.168.1.6 OFFLINE_SOFT only for the READER HG, and ProxySQL will use the other nodes to serve reads.

 

+-----------+-------------+----------+--------------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS       | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.6 | 3306     | ONLINE       | 10       | 0        | 10     | 0       | 7746     | 622557          | 0               | 86         |
| 501       | 192.168.1.5 | 3306     | ONLINE       | 0        | 0        | 1      | 147     | 1565987  | 76267703        | 4879938857      | 38         |
| 501       | 192.168.1.6 | 3306     | OFFLINE_SOFT | 0        | 0        | 12     | 0       | 9668944  | 470878452       | 30181474498     | 86         | <-- Node offline
| 501       | 192.168.1.7 | 3306     | ONLINE       | 9        | 3        | 12     | 0       | 10932794 | 532558667       | 34170366564     | 62         |
| 501       | 192.168.1.8 | 3306     | ONLINE       | 0        | 1        | 1      | 0       | 816599   | 39804966        | 2545765089      | 229        |
| 501       | 192.168.1.9 | 3306     | ONLINE       | 0        | 1        | 1      | 0       | 814893   | 39724481        | 2541760230      | 248        |
+-----------+-------------+----------+--------------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

 

 

When the SST donor task is over, Galera_check moves 192.168.1.6 back ONLINE as expected.

At the same time, it moves the recovering node to the special HG, to avoid having it included in any activity until it is ready.

 

2016-09-05 12:22:36 27352 [Note] WSREP: 1.1 (node1): State transfer from 0.1 (node2) complete.
2016-09-05 12:22:36 27352 [Note] WSREP: Shifting JOINER -> JOINED (TO: 325062)
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.6 | 3306     | ONLINE | 10       | 0        | 10     | 0       | 1554     | 124909          | 0               | 35         |
| 501       | 192.168.1.6 | 3306     | ONLINE | 2        | 8        | 22     | 0       | 10341612 | 503637989       | 32286072739     | 35         |
| 501       | 192.168.1.7 | 3306     | ONLINE | 3        | 9        | 12     | 0       | 12058701 | 587388598       | 37696717375     | 13         |
| 501       | 192.168.1.8 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 890102   | 43389051        | 2776691164      | 355        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 887994   | 43296865        | 2772702537      | 250        |
| 9500      | 192.168.1.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 57         | <-- Special HG for recover
| 9501      | 192.168.1.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 57         | <-- Special HG for recover
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+

Once the node is finally in SYNC with the group, it is put back online in the READER HG and in the WRITER HG:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries  | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0        | 0               | 0               | 0          | <-- Back on line
| 500       | 192.168.1.6 | 3306     | ONLINE | 10       | 0        | 10     | 0       | 402      | 32317           | 0               | 68         |
| 501       | 192.168.1.5 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 6285     | 305823          | 19592814        | 312        | <-- Back on line
| 501       | 192.168.1.6 | 3306     | ONLINE | 4        | 6        | 22     | 0       | 10818694 | 526870710       | 33779586475     | 68         |
| 501       | 192.168.1.7 | 3306     | ONLINE | 0        | 12       | 12     | 0       | 12492316 | 608504039       | 39056093665     | 26         |
| 501       | 192.168.1.8 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 942023   | 45924082        | 2940228050      | 617        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 939975   | 45834039        | 2935816783      | 309        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
+--------------+-------------+------+--------+------------+
| hostgroup_id | hostname    | port | STATUS | weight     |
+--------------+-------------+------+--------+------------+
| 500          | 192.168.1.5 | 3306 | ONLINE | 100        |
| 500          | 192.168.1.6 | 3306 | ONLINE | 1000000000 |
| 501          | 192.168.1.5 | 3306 | ONLINE | 100        |
| 501          | 192.168.1.6 | 3306 | ONLINE | 1000000000 |
| 501          | 192.168.1.7 | 3306 | ONLINE | 1000000000 |
| 501          | 192.168.1.8 | 3306 | ONLINE | 1          |
| 501          | 192.168.1.9 | 3306 | ONLINE | 1          |
+--------------+-------------+------+--------+------------+

 

 

But given that it comes back with its READER weight, it will NOT compete with the previously elected WRITER.

The recovered node will stay on "hold", waiting for a DBA to act and eventually put it back, or to set it as READ_ONLY and as such have it fully removed from the WRITER HG.
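
The DBA action here is a one-liner on the recovered node; for instance, to demote it and let ProxySQL pull it out of the WRITER HG:

SET GLOBAL read_only=1;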

 

 

Let's see the automatic procedure now

The process will be:

1. Generate some load
2. Kill the writer node
3. The script will do the auto-failover
4. Recover the crashed node

Check our scheduler config:

 

+----+--------+-------------+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+------+------+------+---------+
| id | active | interval_ms | filename | arg1 | arg2 | arg3 | arg4 | arg5 | comment |
+----+--------+-------------+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+------+------+------+---------+
| 10 | 1 | 2000 | /var/lib/proxysql/galera_check.pl | -u=admin -p=admin -h=192.168.1.50 -H=500:W,501:R -P=3310 --execution_time=1 --retry_down=2 --retry_up=1 --main_segment=1 --active_failover --debug=0 --log=/var/lib/proxysql/galeraLog | NULL | NULL | NULL | NULL | | <--- Active
| 20 | 0 | 1500 | /var/lib/proxysql/galera_check.pl | -u=admin -p=admin -h=192.168.1.50 -H=500:W,501:R -P=3310 --execution_time=1 --retry_down=2 --retry_up=1 --main_segment=1 --debug=0 --log=/var/lib/proxysql/galeraLog | NULL | NULL | NULL | NULL | |
+----+--------+-------------+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+------+------+------+---------+

 

 

The active entry is the one with auto-failover.
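
Swapping which entry is active is just a matter of flipping the flags and reloading, e.g.:

UPDATE scheduler SET active=1 WHERE id=10;
UPDATE scheduler SET active=0 WHERE id=20;
LOAD SCHEDULER TO RUNTIME;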

Start the load and check the current load:

 

+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.5 | 3306     | ONLINE | 10       | 0        | 10     | 0       | 952     | 76461           | 0               | 0          |
| 501       | 192.168.1.5 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 53137   | 2587784         | 165811100       | 167        |
| 501       | 192.168.1.6 | 3306     | ONLINE | 5        | 5        | 11     | 0       | 283496  | 13815077        | 891230826       | 109        |
| 501       | 192.168.1.7 | 3306     | ONLINE | 3        | 7        | 10     | 0       | 503516  | 24519457        | 1576198138      | 151        |
| 501       | 192.168.1.8 | 3306     | ONLINE | 1        | 0        | 1      | 0       | 21952   | 1068972         | 68554796        | 300        |
| 501       | 192.168.1.9 | 3306     | ONLINE | 0        | 1        | 1      | 0       | 21314   | 1038593         | 67043935        | 289        |
+-----------+-------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

Kill the main node 192.168.1.5

 

+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host    | srv_port | STATUS  | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 500       | 192.168.1.6 | 3306     | ONLINE  | 10       | 0        | 10     | 0       | 60      | 4826            | 0               | 0          |
| 501       | 192.168.1.5 | 3306     | SHUNNED | 0        | 0        | 1      | 11      | 177099  | 8626778         | 552221651       | 30         |
| 501       | 192.168.1.6 | 3306     | ONLINE  | 3        | 7        | 11     | 0       | 956724  | 46601110        | 3002941482      | 49         |
| 501       | 192.168.1.7 | 3306     | ONLINE  | 2        | 8        | 10     | 0       | 1115685 | 54342756        | 3497575125      | 42         |
| 501       | 192.168.1.8 | 3306     | ONLINE  | 0        | 1        | 1      | 0       | 76289   | 3721419         | 240157393       | 308        |
| 501       | 192.168.1.9 | 3306     | ONLINE  | 1        | 0        | 1      | 0       | 75803   | 3686067         | 236382784       | 231        |
+-----------+-------------+----------+---------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

When the node is killed it is SHUNNED, but this time the script has already set the new node 192.168.1.6 ONLINE.

See the script log:

 

2016/09/08 14:04:02.494:[INFO] END EXECUTION Total Time:102.347850799561
2016/09/08 14:04:04.478:[INFO] This Node Try to become a WRITER set READ_ONLY to 0 192.168.1.6:3306:HG501
2016/09/08 14:04:04.479:[INFO] This Node NOW HAS READ_ONLY = 0 192.168.1.6:3306:HG501
2016/09/08 14:04:04.479:[INFO] END EXECUTION Total Time:71.8140602111816

More important is the application experience:

 

Writes
 
[  10s] threads: 10, tps: 9.40, reads: 93.60, writes: 41.60, response time: 1317.41ms (95%), errors: 0.00, reconnects:  0.00
[  20s] threads: 10, tps: 8.30, reads: 74.70, writes: 33.20, response time: 1350.96ms (95%), errors: 0.00, reconnects:  0.00
[  30s] threads: 10, tps: 8.30, reads: 74.70, writes: 33.20, response time: 1317.81ms (95%), errors: 0.00, reconnects:  0.00
[  40s] threads: 10, tps: 7.80, reads: 70.20, writes: 31.20, response time: 1407.51ms (95%), errors: 0.00, reconnects:  0.00
[  50s] threads: 10, tps: 6.70, reads: 60.30, writes: 26.80, response time: 2259.35ms (95%), errors: 0.00, reconnects:  0.00
[  60s] threads: 10, tps: 6.60, reads: 59.40, writes: 26.40, response time: 3275.78ms (95%), errors: 0.00, reconnects:  0.00
[  70s] threads: 10, tps: 5.70, reads: 60.30, writes: 26.80, response time: 1492.56ms (95%), errors: 0.00, reconnects:  1.00 <-- just a reconnect experience
[  80s] threads: 10, tps: 6.70, reads: 60.30, writes: 26.80, response time: 7959.74ms (95%), errors: 0.00, reconnects:  0.00
[  90s] threads: 10, tps: 6.60, reads: 59.40, writes: 26.40, response time: 2109.03ms (95%), errors: 0.00, reconnects:  0.00
[ 100s] threads: 10, tps: 6.40, reads: 57.60, writes: 25.60, response time: 1883.96ms (95%), errors: 0.00, reconnects:  0.00
[ 110s] threads: 10, tps: 5.60, reads: 50.40, writes: 22.40, response time: 2167.27ms (95%), errors: 0.00, reconnects:  0.00

 

 

No errors and no huge delay; our application (which manages reconnections) had only a glitch and had to reconnect.

Reads had no errors or reconnects.

The connection errors were managed by ProxySQL and, given that it found 5 of them in one second, it SHUNNED the node.
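
That threshold is governed by ProxySQL's shun settings (mysql-shun_on_failures and related variables); you can inspect them from the admin interface with:

SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-shun%';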

The galera_check script was able to promote a reader and, given that this is a failover, there was no delay from a retry loop.

The whole thing is done in such a brief time that the application barely sees it.

 

Obviously an application with thousands of connections/sec will experience more impact, but the time window will be very narrow, and that was our scope.

Once the failed node is ready to come back, we can either start it with READ_ONLY=1, so that it will come back as a reader, or keep it as it is and have it come back as a writer.

 

No matter what, the script will manage the case as it did in the previous (manual) exercise.


Conclusions

As shown, ProxySQL and galera_check working together are quite efficient in managing the cluster and its different scenarios.

When using the single-writer mode, automating the manual part of the failover dramatically improves the efficiency of recovering the production state, going from a few minutes to seconds or less.

The multi-writer mode remains the preferred and most recommended way to use ProxySQL/PXC, given that it performs the failover without the need for additional scripts or extensions, even if a script is still required to manage the integration with ProxySQL.

In both cases, the use of a script able to identify the multiple states of PXC and the mutable node scenarios is a crucial part of the implementation, without which ProxySQL may not behave correctly.

Last Updated on Tuesday, 13 September 2016 11:49
 
ProxySQL and MHA integration
Written by Marco Tusa   
Sunday, 11 September 2016 15:32

ProxySQL and MHA integration
MHA (Master High Availability Manager and tools for MySQL) is almost fully integrated with the ProxySQL process.

What this means is that you can count on the standard MHA features to manage the failover, and on ProxySQL to manage the traffic and shift it from one server to another.

This is one of the main differences between using MHA with a VIP versus MHA with ProxySQL: there is no need to move IPs or re-define DNS.

The following is an example of a configuration file for MHA, in case you use it with ProxySQL:

 

[server default]
    user=mha
    password=mha
    ssh_user=root
    repl_password=replica
    manager_log=/tmp/mha.log
    manager_workdir=/tmp
    remote_workdir=/tmp
    master_binlog_dir=/opt/mysql_instances/mha1/logs
    client_bindir=/opt/mysql_templates/mysql-57/bin
    client_libdir=/opt/mysql_templates/mysql-57/lib
    master_ip_failover_script=/opt/tools/mha/mha4mysql-manager/samples/scripts/master_ip_failover
    master_ip_online_change_script=/opt/tools/mha/mha4mysql-manager/samples/scripts/master_ip_online_change
    log_level=debug
 
    [server1]
    hostname=mha1r
    ip=192.168.1.104
    candidate_master=1
 
    [server2]
    hostname=mha2r
    ip=192.168.1.107
    candidate_master=1
 
    [server3]
    hostname=mha3r
    ip=192.168.1.111
    candidate_master=1
 
    [server4]
    hostname=mha4r
    ip=192.168.1.109
    no_master=1

 

The only thing you need to be sure of is to comment out the "FIX ME" lines in the sample scripts.

In the MHA installation directory, look for:

mha4mysql-manager/samples/scripts/master_ip_failover
mha4mysql-manager/samples/scripts/master_ip_online_change

 

After that, just install MHA as you are used to.
In ProxySQL, be sure to have the MHA users and the servers set.

One very important thing when using ProxySQL with standard replication is that we need to grant additional privileges to the ProxySQL monitor user.
It must have REPLICATION CLIENT, or it will fail to check the slave lag.

Also, the servers MUST have a defined value for the attribute max_replication_lag, or the check will be ignored.
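
A minimal sketch of the grants for the monitor user, assuming ProxySQL's default 'monitor' account (user, host and password here are placeholders for your environment):

GRANT USAGE, REPLICATION CLIENT ON *.* TO 'monitor'@'192.168.1.%' IDENTIFIED BY 'monitor';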

 

As a reminder:

 

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.104',600,3306,1000,0);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.104',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.107',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.111',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.109',601,3306,1000,10);
INSERT INTO mysql_replication_hostgroups VALUES (600,601);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES('mha_W',600,1);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES('mha_R',601,1);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) VALUES('mha_RW',600,1,3,'^SELECT.*FOR UPDATE');
INSERT INTO mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) VALUES('mha_RW',601,1,3,'^SELECT');
LOAD MYSQL QUERY RULES TO RUNTIME;SAVE MYSQL QUERY RULES TO DISK;
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('mha_W','test',1,600,'test_mha',1);
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('mha_R','test',1,601,'test_mha',1);
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('mha_RW','test',1,600,'test_mha',1);
LOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;
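
Before starting the traffic, a quick sanity check of what was loaded does not hurt; for example:

SELECT hostgroup_id,hostname,port,weight,max_replication_lag FROM mysql_servers ORDER BY hostgroup_id,hostname;
SELECT rule_id,username,destination_hostgroup,active,retries,match_digest FROM mysql_query_rules ORDER BY rule_id;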

 

 

OK, all is ready, let us rock'n'roll.


Controlled fail-over

First of all, masterha_manager should not be running or you will get an error.

Then let us start some traffic.

 

Write
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=mha_RW --mysql-password=test --mysql-db=mha_test --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=5000 --max-requests=0 --max-time=900 --oltp-point-selects=5 --oltp-read-only=off --oltp-dist-type=uniform --oltp-reconnect-mode=transaction --oltp-skip-trx=off --num-threads=10 --report-interval=10 --mysql-ignore-errors=all  run
Read only
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=mha_RW --mysql-password=test --mysql-db=mha_test --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=5000 --max-requests=0 --max-time=900 --oltp-point-selects=5 --oltp-read-only=on --num-threads=10 --oltp-reconnect-mode=query --oltp-skip-trx=on --report-interval=10  --mysql-ignore-errors=all run

Let it run for a bit then check:

 

mysql> SELECT * FROM stats_mysql_connection_pool WHERE hostgroup BETWEEN 600 AND 601 ORDER BY hostgroup,srv_host DESC;
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 600       | 192.168.1.104 | 3306     | ONLINE | 10       | 0        | 20     | 0       | 551256  | 44307633        | 0               | 285        | <--- current Master
| 601       | 192.168.1.111 | 3306     | ONLINE | 5        | 3        | 11     | 0       | 1053685 | 52798199        | 4245883580      | 1133       |
| 601       | 192.168.1.109 | 3306     | ONLINE | 3        | 5        | 10     | 0       | 1006880 | 50473746        | 4052079567      | 369        |
| 601       | 192.168.1.107 | 3306     | ONLINE | 3        | 5        | 13     | 0       | 1040524 | 52102581        | 4178965796      | 604        |
| 601       | 192.168.1.104 | 3306     | ONLINE | 7        | 1        | 16     | 0       | 987548  | 49458526        | 3954722258      | 285        |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

And now let us try to perform the failover.
To do so, we will instruct MHA to do a switch, and to set the OLD master as a new slave:

 

masterha_master_switch --master_state=alive --conf=/etc/mha.cnf --orig_master_is_new_slave --interactive=0 --running_updates_limit=0

 

 

Now let us check what happened:

 

[ 160s] threads: 10, tps: 354.50, reads: 3191.10, writes: 1418.50, response time: 48.96ms (95%), errors: 0.00, reconnects:  0.00
[ 170s] threads: 10, tps: 322.50, reads: 2901.98, writes: 1289.89, response time: 55.45ms (95%), errors: 0.00, reconnects:  0.00
[ 180s] threads: 10, tps: 304.60, reads: 2743.12, writes: 1219.91, response time: 58.09ms (95%), errors: 0.10, reconnects:  0.00 <--- moment of the switch
[ 190s] threads: 10, tps: 330.40, reads: 2973.40, writes: 1321.00, response time: 50.52ms (95%), errors: 0.00, reconnects:  0.00
[ 200s] threads: 10, tps: 304.20, reads: 2745.60, writes: 1217.60, response time: 58.40ms (95%), errors: 0.00, reconnects:  1.00
[ 210s] threads: 10, tps: 353.80, reads: 3183.80, writes: 1414.40, response time: 48.15ms (95%), errors: 0.00, reconnects:  0.00

 

 

Check ProxySQL :

 

mysql> SELECT * FROM stats_mysql_connection_pool WHERE hostgroup BETWEEN 600 AND 601 ORDER BY hostgroup,srv_host DESC;
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 600       | 192.168.1.107 | 3306     | ONLINE | 10       | 0        | 10     | 0       | 123457  | 9922280         | 0               | 658        | <--- new master
| 601       | 192.168.1.111 | 3306     | ONLINE | 2        | 6        | 14     | 0       | 1848302 | 91513537        | 7590137770      | 1044       |
| 601       | 192.168.1.109 | 3306     | ONLINE | 5        | 3        | 12     | 0       | 1688789 | 83717258        | 6927354689      | 220        |
| 601       | 192.168.1.107 | 3306     | ONLINE | 3        | 5        | 13     | 0       | 1834415 | 90789405        | 7524861792      | 658        |
| 601       | 192.168.1.104 | 3306     | ONLINE | 6        | 2        | 24     | 0       | 1667252 | 82509124        | 6789724589      | 265        |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

In this case the servers were not behind the master, and the switch happened quite fast.

We can see that the WRITE operations, which normally are an issue given the need to move a VIP around or change name resolution, had only a very limited hiccup.

Read operations were not affected at all.

Nice eh?

Do you know how long it takes to do a switch under these conditions?

real 0m2.710s. Yes, 2.7 seconds.

 

This is more evidence that most of the time spent in a switch with MHA is caused by the need to redirect traffic from A to B over the network.

 

Crash fail-over

What happens if, instead of a nice switch, we have to cover a real failover?

First of all let us start masterha_manager:

 

nohup masterha_manager --conf=/etc/mha.cnf --wait_on_monitor_error=60 --wait_on_failover_error=60 >> /tmp/mha.log 2>&1 &

 

 

Then let us start some load again.

Finally, go to the MySQL node that is the master, xxx.xxx.xxx.107:

ps aux|grep mysql
mysql    18755  0.0  0.0 113248  1608 pts/0    S    Aug28   0:00 /bin/sh /opt/mysql_templates/mysql-57/bin/mysqld_safe --defaults-file=/opt/mysql_instances/mha1/my.cnf
mysql    21975  3.2 30.4 4398248 941748 pts/0  Sl   Aug28  93:21 /opt/mysql_templates/mysql-57/bin/mysqld --defaults-file=/opt/mysql_instances/mha1/my.cnf --basedir=/opt/mysql_templates/mysql-57/ --datadir=/opt/mysql_instances/mha1/data --plugin-dir=/opt/mysql_templates/mysql-57//lib/plugin --log-error=/opt/mysql_instances/mha1/mysql-3306.err --open-files-limit=65536 --pid-file=/opt/mysql_instances/mha1/mysql.pid --socket=/opt/mysql_instances/mha1/mysql.sock --port=3306

And kill the MySQL process.

 

kill -9 21975 18755

As before let us check what happened application side:

 

[  80s] threads: 4, tps: 213.20, reads: 1919.10, writes: 853.20, response time: 28.74ms (95%), errors: 0.00, reconnects:  0.00
[  90s] threads: 4, tps: 211.30, reads: 1901.80, writes: 844.70, response time: 28.63ms (95%), errors: 0.00, reconnects:  0.00
[ 100s] threads: 4, tps: 211.90, reads: 1906.40, writes: 847.90, response time: 28.60ms (95%), errors: 0.00, reconnects:  0.00
[ 110s] threads: 4, tps: 211.10, reads: 1903.10, writes: 845.30, response time: 29.27ms (95%), errors: 0.30, reconnects:  0.00 <-- issue starts
[ 120s] threads: 4, tps: 198.30, reads: 1785.10, writes: 792.40, response time: 28.43ms (95%), errors: 0.00, reconnects:  0.00
[ 130s] threads: 4, tps: 0.00, reads: 0.60, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.40         <-- total stop in write
[ 140s] threads: 4, tps: 173.80, reads: 1567.80, writes: 696.30, response time: 34.89ms (95%), errors: 0.40, reconnects:  0.00 <-- writes restart
[ 150s] threads: 4, tps: 195.20, reads: 1755.10, writes: 780.50, response time: 33.98ms (95%), errors: 0.00, reconnects:  0.00
[ 160s] threads: 4, tps: 196.90, reads: 1771.30, writes: 786.80, response time: 33.49ms (95%), errors: 0.00, reconnects:  0.00
[ 170s] threads: 4, tps: 193.70, reads: 1745.40, writes: 775.40, response time: 34.39ms (95%), errors: 0.00, reconnects:  0.00
[ 180s] threads: 4, tps: 191.60, reads: 1723.70, writes: 766.20, response time: 35.82ms (95%), errors: 0.00, reconnects:  0.00

 

 

So it takes ~10 seconds to perform failover.

To better understand, let's see what happened in MHA-land:

 

Tue Aug 30 09:33:33 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Aug 30 09:33:33 2016 - [info] Reading application default configuration from /etc/mha.cnf..
... Read conf and start
Tue Aug 30 09:33:47 2016 - [debug] Trying to get advisory lock..
Tue Aug 30 09:33:47 2016 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
... Wait for errors
Tue Aug 30 09:34:47 2016 - [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away) <--- Error time
Tue Aug 30 09:34:56 2016 - [warning] Connection failed 4 time(s)..                                     <--- Finally MHA decide to do something 
Tue Aug 30 09:34:56 2016 - [warning] Master is not reachable from health checker!
Tue Aug 30 09:34:56 2016 - [warning] Master mha2r(192.168.1.107:3306) is not reachable!
Tue Aug 30 09:34:56 2016 - [warning] SSH is reachable.
Tue Aug 30 09:34:58 2016 - [info] Master failover to mha1r(192.168.1.104:3306) completed successfully. <--- end of the failover

 

 

MHA sees the server failing at xx:47 but, because of the retry and check validations, it fully acknowledges the downtime only at xx:56.

That is ~8 seconds later.

Performing the whole failover then takes only ~2 seconds (again).


No movable IP or DNS was involved, and as such the operations were fast.

This is obviously true when the servers already have the binary logs in place; it may be a different story if MHA also has to manage and push data from the binary log to MySQL.


As you can see, ProxySQL may help a lot in reducing the timing in this scenario as well, totally skipping the network-related operations which, as we know, are the ones causing the most trouble in these cases.


Last Updated on Monday, 14 November 2016 19:21
 