Yeah that's fair, but unfortunately you can't 100% trust error messages.
Are you running these commands from a Ceph host (where ceph is actually installed)?
Other than that, I'm out of ideas, so hopefully someone else can chime in.
I have a question that I haven't been able to find an answer to, either on the wiki or elsewhere on the web.
When you create a backup job and select multiple VMs, do all the VMs back up at the same time, or are they staggered?
That's how we set up our 3 node corosync network.
Create a bond between the two interfaces on each node, with bond mode: Broadcast.
Node A: 1st + 2nd link = bond0 (broadcast mode)
Node B: 1st + 2nd link = bond0 (broadcast mode)
Node C: 1st + 2nd link = bond0 (broadcast mode)
Then set a...
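In case it's useful, here is a minimal sketch of what the bond stanza can look like in /etc/network/interfaces on one of the nodes (the NIC names eno1/eno2 and the 10.10.2.x address are assumptions, adjust to your hardware and corosync subnet):

iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        bond-mode broadcast
        bond-miimon 100
        address 10.10.2.11
        netmask 255.255.255.0
#Corosync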
Oh that's very neat. It reminds me of FreeNAS's live boot disk additions and removals. That's probably what they use. Unfortunately, you still have to mess with BIOS settings to boot from new disks.
There's nothing built in to do that.
Easiest thing I can think of is to clone your current disk to the new disk with Clonezilla and make the appropriate changes in the BIOS to boot from the new disk. It's a bit trickier if the NVMe "disk" is smaller than the current boot drive.
Otherwise, reinstall and...
1. If you want to move a disk from one VM to another, see this wiki article. Just be careful: if you are moving the disk to LVM-Thin storage, I believe the disk has to be in RAW format, not qcow2 (see the conversion sketch after this list). Maybe you can keep things simple and not use LVM-Thin for now.
2. I don't think there is a...
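For point 1, if you do end up needing RAW, here's a rough sketch of the conversion with qemu-img (the paths are just placeholders for wherever your disk image actually lives):

# convert the qcow2 disk image to raw, showing progress
qemu-img convert -p -f qcow2 -O raw /path/to/vm-100-disk-0.qcow2 /path/to/vm-100-disk-0.raw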
I'm not an expert on Proxmox but I think you need to slow down.
Before you move production over to a new system, you have to learn it a bit and test before you do anything...
Let's start at the beginning and go simple and slow:
How many disks, and of what size, do you have installed on the...
You should upload some screenshots of the configuration of the Prox1 VM.
Did you run out of disk space in the migration?
In my experience Proxmox will NOT warn you if you try to move a disk and there is not enough space on the destination. I wish it would.
If the old disk worked and is still...
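One way to sanity-check free space on the destination storage before a move (just a sketch; the storage names and mount points on your system will differ):

# list all Proxmox storages with total/used/available space
pvesm status
# or check a directory-backed storage directly
df -h /var/lib/vz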
Sure, here's an example from one of our 3-node clusters.
They're all the same.
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface enp4s0 inet manual
auto enp4s0d1
iface enp4s0d1 inet static
        address 10.10.1.16
        netmask 255.255.255.0
#CEPH
iface...
I just ran into this error again when creating a new RBD pool.
I feel like Proxmox and Ceph are not playing well together with cephx disabled. I hope more attention is devoted to making sure all components work well with cephx disabled.
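For context, by "cephx disabled" I mean settings along these lines in ceph.conf (a minimal excerpt, not our full config):

[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none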
To your first question, yes, it is as easy as adding the VLAN tag to the network interface. We trunked the interfaces from the Cisco switches.
See attached image.
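As a rough illustration (the VM ID, bridge name, and tag below are made-up examples, not from our config), tagging a VM's interface from the CLI looks like this:

# put the VM's first network device on VLAN 30 via bridge vmbr0 (example VM ID 100)
qm set 100 --net0 virtio,bridge=vmbr0,tag=30

The same tag field is also available in the GUI on the VM's network device.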
Here is how the monitors are currently registered in ceph.conf (excerpt only):
[client]
[mon.VMHost4]
host = VMHost4
mon addr = 10.10.1.14:6789
[mon.VMHost3]
host = VMHost3
mon addr = 10.10.1.13:6789
[mon.VMHost2]
host = VMHost2...