DRBD Configuration with PCS (Pacemaker/Corosync) for High Availability
----------------------------------------------------------
1. Introduction
DRBD (Distributed Replicated Block Device) is a block-level replication tool that allows real-time data replication between two servers for high availability. When combined with PCS (Pacemaker/Corosync), it provides automatic failover in case of primary node failure.
2. Understanding DRBD with PCS
DRBD replicates data between two servers at the block level.
Each server has its own dedicated disk, and the two nodes are linked by a dedicated crossover cable to ensure fast, direct replication.
Only one server is Primary at a time, while the other remains Secondary.
The Primary node handles read/write operations, while the Secondary node keeps its copy of the data in sync.
On failover, the Secondary node is promoted to Primary, ensuring high availability.
The IP address of the crossover-cable link is defined in /etc/drbd.d/drbd0.res, so ping or telnet can be used to verify connectivity.
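As a rough stand-in for the telnet check, a short script can probe whether the peer's DRBD port accepts TCP connections. This is only a sketch: it proves the replication port is reachable, not that DRBD itself is healthy, and the example IP/port are the values used in the drbd0.res snippet later in this guide.

```python
import socket

def link_up(host: str, port: int = 7789, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Equivalent in spirit to `telnet <peer-ip> 7789`: it only shows the
    replication port is reachable, not that DRBD is in sync.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (values from the drbd0.res snippet in this guide):
# link_up("192.168.1.2")  -> True when the peer's DRBD port is reachable
```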
3. Key Verification Commands

| Command | Purpose |
|---|---|
| `pcs status` | Check whether the Primary and Secondary hosts are online |
| `systemctl status drbd` | Check the status of the DRBD service |
| `drbdadm status` (or `drbd-overview`) | Verify whether data synchronization is up to date |
| `drbdadm dump drbd0` | Check DRBD resource configuration details |
| `ip link show <interface>` | Verify whether the crossover-cable link is up |
| `ping` / `telnet <peer-ip> 7789` | Check network connectivity between nodes |
4. DRBD Configuration Steps
Step 1: Install Required Packages
#yum install -y drbd kmod-drbd pacemaker pcs
#systemctl enable pcsd
#systemctl start pcsd
Step 2: Authenticate PCS Cluster Nodes
#echo "password" | passwd --stdin hacluster
#pcs cluster auth <node1> <node2> -u hacluster -p password
Step 3: Configure DRBD Resource
Create DRBD Configuration File
#vi /etc/drbd.d/drbd0.res

Example:

resource drbd0 {
    device /dev/drbd0;
    disk /dev/sdb;
    meta-disk internal;
    on node1 {
        address 192.168.1.1:7789;
    }
    on node2 {
        address 192.168.1.2:7789;
    }
}
Initialize DRBD
Run these commands on both nodes:

#drbdadm create-md drbd0
#systemctl enable drbd
#systemctl start drbd
#drbdadm up drbd0
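After `drbdadm up`, the initial full sync can be watched until both nodes report UpToDate (which command applies depends on the DRBD version installed):

```shell
#watch cat /proc/drbd       # DRBD 8.x: shows sync progress as a percentage
#drbdadm status drbd0       # DRBD 9.x: peer-disk should reach UpToDate
```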
Step 4: Set Primary Node and Format DRBD Disk
On the node chosen as the initial Primary only:

#drbdadm primary --force drbd0
#mkfs.ext4 /dev/drbd0
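As an optional sanity check before handing the mount over to the cluster, the new filesystem can be mounted by hand on the Primary. The /mnt/data mount point is an example matching the Filesystem resource created later; unmount again afterwards so PCS can manage it.

```shell
#mkdir -p /mnt/data
#mount /dev/drbd0 /mnt/data
#echo "replication test" > /mnt/data/test.txt
#umount /mnt/data
```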
Step 5: Configure DRBD in PCS Cluster
#pcs cluster setup --name drbd-cluster <node1> <node2>
#pcs cluster start --all
#pcs property set stonith-enabled=false
#pcs property set no-quorum-policy=ignore
Create DRBD Resource in PCS
#pcs resource create drbd0 ocf:linbit:drbd drbd_resource=drbd0 op monitor interval=30s
#pcs resource master drbd0-master drbd0 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Create Filesystem Resource
#pcs resource create drbd_fs Filesystem device=/dev/drbd0 directory=/mnt/data fstype=ext4
#pcs constraint colocation add drbd_fs with master drbd0-master INFINITY
#pcs constraint order promote drbd0-master then start drbd_fs
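With the resources and constraints in place, a quick sketch of the checks that confirm the cluster picked them up (exact output varies by pcs version):

```shell
#pcs status            # drbd0-master should show one Master and one Slave
#pcs constraint        # the colocation and order constraints above should be listed
#drbdadm status drbd0  # both nodes connected and UpToDate
```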
5. Troubleshooting & Best Practices
Common Issues & Fixes:
| Issue | Cause | Fix |
|---|---|---|
| DRBD stuck in Secondary mode | Primary node not set | Run `drbdadm primary --force drbd0` |
| Split-brain situation | Network failure | Recover manually with `drbdadm connect --discard-my-data` on the node whose changes are discarded |
| Service not starting | Config error | Check logs: `journalctl -u drbd` |
| DRBD sync issue | Network connectivity | Verify using `ping` / `telnet <peer-ip> 7789` |
| Failover not happening | PCS misconfiguration | Run `pcs status` and review the constraints |
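Split-brain recovery deserves a fuller sketch than a table cell. The sequence below assumes node2 holds the changes to throw away; verify first which node has the good data, because this permanently discards the other node's changes.

```shell
# On the node whose data will be DISCARDED (assumed node2 here):
#drbdadm disconnect drbd0
#drbdadm secondary drbd0
#drbdadm connect --discard-my-data drbd0

# On the surviving node (assumed node1), reconnect if it shows StandAlone:
#drbdadm connect drbd0
```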
Testing Failover Manually
#pcs resource move drbd_fs <other_node>

Verifying DRBD Synchronization
#drbd-overview
#drbdadm status
Related Config Files
/etc/drbd.conf – Main configuration file (includes the files under /etc/drbd.d/)
/etc/drbd.d/global_common.conf – Contains the global and common sections of the DRBD configuration
/etc/drbd.d/*.res – One configuration file per resource
Port Number : 7789 (as set in drbd0.res; DRBD resource ports start at 7788 by default)
Service Name : drbd
6. Conclusion
This document provides a complete guide to configuring DRBD with PCS for high availability. Regular monitoring using pcs status, drbd-overview, and network checks (ping, telnet) is crucial to ensure smooth operation. For production setups, enable STONITH (Shoot The Other Node In The Head) fencing to prevent split-brain scenarios.
======================================================================================
Thank you, and check out more of my blogs for similar information!
For any queries, feel free to reach out to me at shubhammore07007@gmail.com