Before reading this post, it might help to read the first post in this series, which covers Redundancy 1 and a 2+0 architecture.
Using the 2+0 architecture from the previous article, we add a “Redundancy” or “Slave” segment to each node: Node 11 gets Slave segment 2, and Node 12 gets Slave segment 1. Each slave segment must be the same size as its master segment, so in this case also 4GB. This takes the total Volume size to 8GB, requiring a total of 16GB of disk space.
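To make the sizing concrete, here is a minimal sketch of the capacity math for this 2-node, Redundancy 2 setup. The segment size and node count match the example above; the variable names are just illustrative.

```python
# Capacity math for a 2-node cluster with Redundancy 2 (illustrative sketch).
MASTER_SEGMENT_GB = 4   # size of each node's master segment
NODES = 2               # Node 11 and Node 12
REDUNDANCY = 2          # each segment is stored twice: master + slave copy

volume_gb = MASTER_SEGMENT_GB * NODES   # logical volume = sum of master segments
disk_gb = volume_gb * REDUNDANCY        # every segment also has a same-size slave

print(volume_gb)  # 8
print(disk_gb)    # 16
```

Doubling the redundancy doubles the disk footprint but leaves the logical volume size driven by the master segments alone.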
Back to our slave segments. The slave segment on Node 11 is an exact copy of Master segment 2 on Node 12; likewise, the slave segment on Node 12 is an exact copy of Master segment 1 on Node 11. This is managed by committing data to both the master and its corresponding slave segment simultaneously: a commit is only valid if both segments accept it.
Redundancy 2 ensures there is always a live copy of the data, so that in the event of a failure the system suffers as little downtime as possible (data availability) while data integrity is preserved.
With Redundancy enabled, the nodes generate more network traffic. To ensure the best possible performance, it is recommended to have one dedicated network interface for storage traffic (data to disk) and one dedicated network interface for database traffic (querying).
In a ‘+0’ approach, the database is down if one node fails, and no recovery is possible until that node is repaired. Not good in a production environment.
In the next post we’ll look at a 2+1 architecture, to get the database back up and running pronto!