Hi,
There are 18 nodes in our cluster. After upgrading from version 5.4 to 6.1-7 we ran into a timeout problem: any operation that lists the storages hangs for up to 30 seconds, although the list does load eventually (see screenshot). The nodes that are still on 5.4 have no problems until they are updated.
Bash:
[root@compute-5 ~]$ pvesm status
Name             Type Status         Total        Used   Available      %
HPE-3PAR-STOR-01 lvm  active   12884885504 10674503680  2210381824 82.85%
HPE-3PAR-STOR-02 lvm  active    8992571392  7933526016  1059045376 88.22%
HPE-3PAR-STOR-03 lvm  active    6442434560   830472192  5611962368 12.89%
CEPH-BUILD-SAS   rbd  active   14881893314  9299124162  5582769152 62.49%
CEPH-BUILD-SSD   rbd  active   28997599635 21123504531  7874095104 72.85%
CEPH-CUSTOM-SAS  rbd  active    6694895406  1112126254  5582769152 16.61%
CEPH-CUSTOM-SSD  rbd  active   10040456603  2166507419  7873949184 21.58%
backup           nfs  active   87816765440 72777157632 15039607808 82.87%
iso              nfs  active     314569728   278293536    36276192 88.47%
1st run:
Bash:
[root@compute-14 ~]$ pvesm status
got timeout
got timeout
Name             Type Status         Total        Used   Available      %
HPE-3PAR-STOR-01 lvm  active   12884885504 10674503680  2210381824 82.85%
HPE-3PAR-STOR-02 lvm  active    8992571392  7933526016  1059045376 88.22%
HPE-3PAR-STOR-03 lvm  active    6442434560   830472192  5611962368 12.89%
CEPH-BUILD-SAS   rbd  inactive           0           0           0  0.00%
CEPH-BUILD-SSD   rbd  active   28997599635 21123504531  7874095104 72.85%
CEPH-CUSTOM-SAS  rbd  active    6694895406  1112126254  5582769152 16.61%
CEPH-CUSTOM-SSD  rbd  inactive           0           0           0  0.00%
backup           nfs  active   87816765440 72777157632 15039607808 82.87%
iso
2nd run:
Bash:
[root@compute-14 ~]$ pvesm status
got timeout
Name             Type Status         Total        Used   Available      %
HPE-3PAR-STOR-01 lvm  active   12884885504 10674503680  2210381824 82.85%
HPE-3PAR-STOR-02 lvm  active    8992571392  7933526016  1059045376 88.22%
HPE-3PAR-STOR-03 lvm  active    6442434560   830472192  5611962368 12.89%
CEPH-BUILD-SAS   rbd  active   14881893314  9299124162  5582769152 62.49%
CEPH-BUILD-SSD   rbd  active   28997599635 21123504531  7874095104 72.85%
CEPH-CUSTOM-SAS  rbd  inactive           0           0           0  0.00%
CEPH-CUSTOM-SSD  rbd  active   10040456603  2166507419  7873949184 21.58%
backup           nfs  active   87816765440 72777157632 15039607808 82.87%
iso              nfs  active     314569728   278293536    36276192 88.47%
3rd run:
Bash:
[root@compute-14 ~]$ pvesm status
got timeout
got timeout
Name             Type Status         Total        Used   Available      %
HPE-3PAR-STOR-01 lvm  active   12884885504 10674503680  2210381824 82.85%
HPE-3PAR-STOR-02 lvm  active    8992571392  7933526016  1059045376 88.22%
HPE-3PAR-STOR-03 lvm  active    6442434560   830472192  5611962368 12.89%
CEPH-BUILD-SAS   rbd  inactive           0           0           0  0.00%
CEPH-BUILD-SSD   rbd  inactive           0           0           0  0.00%
CEPH-CUSTOM-SAS  rbd  active    6694895406  1112126254  5582769152 16.61%
CEPH-CUSTOM-SSD  rbd  active   10040456603  2166507419  7873949184 21.58%
backup           nfs  active   87816765440 72777157632 15039607808 82.87%
iso              nfs  active     314569728   278293536    36276192 88.47%
Interestingly, each run of the command shows different rbd storages as inactive. This does not happen with the lvm storages.
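To narrow down where the delay comes from, the storages can be checked one at a time and the external cluster can be queried directly with the same credentials PVE uses. This is only a sketch: the pool name, monitor address, user and keyring path below are placeholders, the real values are in /etc/pve/storage.cfg and /etc/pve/priv/ceph/.
Bash:
# query a single storage through the PVE storage layer
time pvesm status --storage CEPH-BUILD-SAS

# bypass pvesm and list the pool directly on the external cluster
# (pool, monitor address, user and keyring path are placeholders)
time rbd ls -p ceph-build-sas -m 192.0.2.10 --id admin --keyring /etc/pve/priv/ceph/CEPH-BUILD-SAS.keyring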
Upd.: I forgot to mention that the Ceph cluster is external.
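For context, an external rbd storage is defined in /etc/pve/storage.cfg roughly like the entry below, with the matching keyring stored as /etc/pve/priv/ceph/<storage>.keyring. The pool name, monitor addresses and user here are made-up examples, not our real values.
Code:
rbd: CEPH-BUILD-SAS
        content images
        krbd 0
        monhost 192.0.2.11 192.0.2.12 192.0.2.13
        pool ceph-build-sas
        username admin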