2+1 Architecture in Exasol (with a minimum of Redundancy 2)

This blog post follows on from two others, which I’d recommend reading before this one:

Data Redundancy 1 in Exasol

Data Redundancy 2 in Exasol

In a 2+1 architecture we have the same setup as before (2 data nodes), with the exception of an additional node. So we have Node 11 and Node 12, which are our “Active” data nodes as before. However, we add another node, Node 13. Node 13 is the “Spare” or “Reserved” node. We will also depend on Redundancy 2 here.


With a ‘+1’ architecture, if a node fails, it is automatically replaced by the Spare node. The slave replica segment of the failed node’s data is pulled from the failed node’s neighbour node and restored on the Spare node.

For example, say Node 12 fails. Its master data is gone, obliterated. The database is down! No problem. Node 13 is pushed into Node 12’s place. Slave segment 2 is pulled from Node 11 (great that we’ve got an exact copy, huh!).

The database restarts, then the segment is restored onto Node 13 as the Master segment, with the end user knowing no different (apart from a hopefully brief outage).

The process of recovery in a ‘+1’ architecture is as follows:

1 – Database discovers the failure

2 – Database shuts down

3 – The failed node is automatically replaced with the reserved node

4 – Database starts

5 – Data is restored from the slave segment on the neighbour node onto the Spare node as the Master segment
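The five recovery steps above can be sketched as a small simulation. This is illustrative Python, not Exasol code; the node names and segment labels follow the example in this post, and the dictionary layout is an assumption made for the sketch.

```python
# Minimal sketch of the 2+1 failover sequence. Each active node holds a
# master segment plus a slave copy of its neighbour's master segment.
nodes = {
    "n11": {"role": "active", "master": "seg1", "slave": "seg2"},
    "n12": {"role": "active", "master": "seg2", "slave": "seg1"},
    "n13": {"role": "spare", "master": None, "slave": None},
}

def fail_over(failed, spare, neighbour):
    lost_master = nodes[failed]["master"]    # 1. failure discovered
    nodes[failed]["role"] = "failed"         # 2. database shuts down
    nodes[spare]["role"] = "active"          # 3. spare replaces the failed node
    # 4. database starts
    # 5. restore the neighbour's slave copy onto the spare as the new master
    assert nodes[neighbour]["slave"] == lost_master
    nodes[spare]["master"] = lost_master

fail_over("n12", "n13", "n11")
print(nodes["n13"]["master"])  # seg2
```

After the call, Node 13 is active and holds segment 2 as its master, exactly as in the Node 12 failure walked through above.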

The failover is recorded only in the log. Notifications of it occurring are flagged at the Warning and Critical severity levels.


Data Redundancy 2 in Exasol

Before reading this post, it might add a bit of context to read the first post in this series, about Redundancy 1 and a 2+0 architecture.

Redundancy 2

Using the 2+0 architecture from the previous article, we add a “Redundancy” or “Slave” segment to each node. Node 11 will have Slave segment 2, and Node 12 will have Slave segment 1. The slave segments need to be the same size as the Master segments; so in this case, also 4GB. This takes the total Volume size to 8GB, requiring a total disk space of 16GB.
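The sizing arithmetic is worth making explicit. A quick back-of-the-envelope calculation, using the figures from this example (4GB master segments, 2 active nodes):

```python
# Redundancy 2 sizing: each node stores its own master segment plus an
# equally sized slave copy of its neighbour's master segment.
master_gb = 4
active_nodes = 2

per_node_volume_gb = master_gb * 2               # master + slave on each node
total_disk_gb = per_node_volume_gb * active_nodes

print(per_node_volume_gb, total_disk_gb)  # 8 16
```

So each node’s volume doubles to 8GB, and the cluster as a whole needs 16GB of disk: Redundancy 2 doubles the disk footprint compared with a 2+0 setup.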

Back to our slave segments. The slave on Node 11 is an exact copy of the Master segment 2 on Node 12. Likewise, the slave segment on Node 12 is the exact copy of Master segment 1 on Node 11. This is managed by data being committed to both the master and corresponding slave segment simultaneously. A commit is only valid if both node segments accept it.
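The commit behaviour can be sketched as follows. This is a simplified illustration of the semantics described above (write to both copies, succeed only if both accept), not the actual Exasol implementation; segments are modelled as plain lists, which is purely an assumption of the sketch.

```python
# Sketch: a commit is applied to the master segment and its slave copy
# together, and is valid only if both segments accept it.
def commit(master_seg, slave_seg, row):
    try:
        master_seg.append(row)
        slave_seg.append(row)
    except Exception:
        # Roll back so the two copies stay identical.
        if master_seg and master_seg[-1] == row:
            master_seg.pop()
        return False
    return master_seg == slave_seg  # slave remains an exact copy

master_1_on_n11, slave_1_on_n12 = [], []
ok = commit(master_1_on_n11, slave_1_on_n12, {"id": 1})
print(ok, master_1_on_n11 == slave_1_on_n12)  # True True
```

The key property is the invariant at the end: after every successful commit, the slave segment is byte-for-byte the same as its master, which is what makes the failover in the next post possible.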

Redundancy 2 ensures that there is always a live copy of the data in the event of a failure, so that the system suffers as little downtime as possible (data availability), but also to ensure data integrity.

With Redundancy enabled, the nodes generate more network traffic. To ensure the best possible performance, it is recommended to have one dedicated network interface for storage (data to disk) and one dedicated network interface for database traffic (querying).

In a ‘+0’ approach, the database is down if one node fails. No recovery is possible until the node is fixed. Not good in a production environment.

In the next post we’ll look at a 2+1 architecture, to get the database back up and running pronto!