[SOLVED] GlusterFS: replaced two servers, backups stopped working

svirus

Active Member
Jul 3, 2018
I just replaced two servers in my PVE cluster with two new servers...
Everything is working, but backups fail with the error "could not get storage information for 'gv0': storage 'gv0' is not online".

I think the problem is that in PVE I still have "Server" and "Second server" configured to point at the servers that no longer exist.

How can I fix that without removing gv0 and adding it a second time?

I replaced "pve1" and "pve2" with "pve9" and "pve10".
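
In case it helps anyone, this is roughly how I checked what PVE still had configured (the grep range is just a guess; anything that prints the gv0 section works):

root@pve10:~# pvesm status
root@pve10:~# grep -A 6 'glusterfs: gv0' /etc/pve/storage.cfg

pvesm status shows gv0 as not active here, and the grep shows the old hostnames still sitting under server / server2.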

root@pve10:/mnt/pve/gv0# gluster volume info gv0

Volume Name: gv0
Type: Replicate
Volume ID: dbbc976e-eef2-4fd1-92ee-442fbee9fc39
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: pve10-corosync:/data/brick1/gv0
Brick2: pve10-corosync:/data/brick2/gv0
Brick3: pve9-corosync:/data/brick1/gv0
Brick4: pve9-corosync:/data/brick2/gv0


root@pve10:/mnt/pve/gv0# gluster volume status gv0
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick pve10-corosync:/data/brick1/gv0 49154 0 Y 7276
Brick pve10-corosync:/data/brick2/gv0 49155 0 Y 7310
Brick pve9-corosync:/data/brick1/gv0 49155 0 Y 18869
Brick pve9-corosync:/data/brick2/gv0 49156 0 Y 18902
Self-heal Daemon on localhost N/A N/A Y 13108
Self-heal Daemon on pve9-corosync N/A N/A Y 23660
Self-heal Daemon on pve8-corosync N/A N/A Y 29498
Self-heal Daemon on pve6-corosync N/A N/A Y 9009
Self-heal Daemon on pve4-corosync N/A N/A Y 23557
 

OK... I fixed this by manually editing /etc/pve/storage.cfg and replacing the server names in the config.
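
For anyone who finds this later: the relevant part of /etc/pve/storage.cfg ends up looking roughly like this (the content types and exact hostnames are from my setup, use whatever your new nodes resolve as; server / server2 are the fields shown as "Server" and "Second server" in the GUI):

glusterfs: gv0
        volume gv0
        server pve9-corosync
        server2 pve10-corosync
        content images,backup

After saving the file the storage came back online and the backups ran again, without removing and re-adding gv0.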