Are you using VLANs in your network? I noticed that the "VLAN aware" flag is enabled on vmbr0.
If the trunk on your physical switch is not configured properly, the VMs on HOST1 won't be able to communicate with the VMs on HOST2.
MM
To expand on point 2):
Delete that node from the Proxmox cluster
After that, you will still see that node's OSDs (in the down state) in the Ceph configuration.
Stop and destroy all of its OSDs, ONE by ONE:
ceph osd rm osd.X
ceph auth del osd.X
ceph osd crush remove osd.X
ceph osd crush rm <node_name>
ceph...
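The per-OSD steps above can be sketched as a small loop. The OSD ids (3-5) are hypothetical, and the `echo` prefix makes this a dry run that only prints the commands:

```shell
# Dry-run sketch of the per-OSD removal steps above.
# OSD ids 3-5 are hypothetical; replace them with the ids shown as "down".
# Remove the echo prefix once you have double-checked the list.
for id in 3 4 5; do
  echo ceph osd rm "osd.${id}"
  echo ceph auth del "osd.${id}"
  echo ceph osd crush remove "osd.${id}"
done
```

Review the printed commands first, then run them for real (or drop the echo).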
Hi,
If you want a very clean solution, you can:
1) Turn that node off
2) Delete that node from the Proxmox cluster
3) Change your hardware (mainboard, CPU, RAM, HBA controller and network cards; keep the same hard drives)
4) Re-install Proxmox (you could restore exactly the...
Hi,
You are trying to upgrade PVE without using ALL the right repositories.
Have you recently changed your subscription from community to enterprise?
You have to update ALL the repositories in these files (for the ENTERPRISE repository):
/etc/apt/sources.list
/etc/apt/sources.list.d/pve-enterprise.list...
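As an illustration (assuming PVE 8 on Debian bookworm; adjust the codename to your release), the enterprise repository line looks like:

```
# /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```

After editing, run apt update and check that no repository errors remain.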
Hi, before rebooting the node, you can change the vote weight of the node.
For example, if you have pve1 (weight=1) and pve2 (weight=1), you can change them to pve1 (weight=2) and pve2 (weight=1) in the corosync.conf file, so that you do not lose quorum during the reboot process.
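A sketch of the relevant part of /etc/pve/corosync.conf (node names and addresses are hypothetical; the vote weight is the quorum_votes setting, and remember to also increment config_version in the totem section whenever you edit the file):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2      # temporarily raised from 1
    ring0_addr: 192.168.0.11
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.0.12
  }
}
```

Remember to set the weight back to 1 after the reboot, or the cluster votes stay unbalanced.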
MM
Be careful when you add more disks.
From my past experience adding disks to Ceph storage, I can offer these considerations:
After adding a disk and including the OSD, we had to wait for the PG realignment. The graph was mostly yellow, but there was no impact on customer service.
Since it...
First of all, try accessing all the nodes via the web UI to understand where the cluster split occurred.
This usually happens when Corosync experiences communication issues.
If you see a red dot with an "X," and your server is powered on, it means that Corosync is having problems. Also, check the...
Here is an example of the zpool attach command.
Find your pool with:
zpool status
- your pool is probably rpool
Note: be sure the new disk is clean (or formatted)
I do not know about NVMe, but you have to find your disk, for example by WWN:
lsblk -o +MODEL,SERIAL,WWN
find the new disk and the...
It seems that right now you do not have a mirror.
You have to create a mirror from the existing drive:
prepare the new disk, and then something like
sudo zpool attach YourPool /path/to/your/existingdisk /path/to/your/newdisk
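Putting the steps together, a sketch of the whole flow. The pool name and device paths are placeholders (take the real values from `zpool status` and `lsblk -o +MODEL,SERIAL,WWN`), and the `echo` prefix makes it a dry run:

```shell
# Dry-run sketch of turning a single-disk pool into a mirror.
# POOL/OLD/NEW are placeholders; fill them in from your own system.
# Remove the echo prefix to run the commands for real.
POOL=rpool
OLD=/dev/disk/by-id/wwn-0xEXISTINGDISK
NEW=/dev/disk/by-id/wwn-0xNEWDISK
echo zpool attach "$POOL" "$OLD" "$NEW"   # the single vdev becomes a 2-way mirror
echo zpool status "$POOL"                 # watch the resilver progress here
```

Once the attach is done, `zpool status` will show the resilver running; the pool stays online while it completes.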