Search results

  1. Accessing VM Discs on CEPH Filesystems

    Yeah that's fair, but unfortunately you can't 100% trust error messages. Are you running these commands from a Ceph host (where ceph is actually installed)? Other than that, I'm out of ideas, so hopefully someone else can chime in.
  2. Accessing VM Discs on CEPH Filesystems

    Try adding the disk extension .raw to that command. qemu-img convert -f raw rbd:ceph/vm-116-disk-1.raw -O vmdk vm-116-disk-1.vmdk
  3. Accessing VM Discs on CEPH Filesystems

    qemu-img convert -f raw rbd:poolname/vm-116-disk-1 etc
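    Putting those two replies together, a complete invocation might look like this (a sketch only; the pool name ceph and the image name vm-116-disk-1 are taken from the replies above and should be checked against the output of rbd ls):

      # list the images in the pool to confirm the exact name
      rbd ls ceph
      # export the RBD image to a VMDK file
      qemu-img convert -f raw rbd:ceph/vm-116-disk-1 -O vmdk vm-116-disk-1.vmdk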
  4. Backup Logic

    Awesome thanks!
  5. Backup Logic

    I have a question that I haven't been able to find an answer to, either on the wiki or elsewhere on the web. When you create a backup job and select multiple VMs, do all the VMs back up at the same time, or are they staggered?
  6. Hyper-Converged Ceph 3 node Cluster direct attach - Proxmox VE

    That's how we set up our 3 node corosync network. Create a bond between the two interfaces on each node, with bond mode: Broadcast.

      Node A: 1st + 2nd link = bond0 (broadcast mode)
      Node B: 1st + 2nd link = bond0 (broadcast mode)
      Node C: 1st + 2nd link = bond0 (broadcast mode)

    Then set a...
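    A minimal sketch of the matching /etc/network/interfaces stanza, assuming interface names eno1/eno2 and an illustrative 10.10.2.0/24 corosync subnet (neither is given in the result above):

      auto bond0
      iface bond0 inet static
          address 10.10.2.11
          netmask 255.255.255.0
          bond-slaves eno1 eno2
          bond-mode broadcast
      #COROSYNC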
  7. Ceph Bluestore & Erasure Coding

    Since Proxmox supports CephFS and that could be used for non-VM workloads, I could see that as a reasonable argument to support EC.
  8. Move Boot Disk to NVME

    Oh that's very neat. It reminds me of FreeNAS's live boot disk additions and removals. That's probably what they use. Unfortunately, you still have to mess with BIOS settings to boot from new disks.
  9. Move Boot Disk to NVME

    There's nothing built in to do that. The easiest thing I can think of is to clone your current disk with Clonezilla to the new disk and make the appropriate changes in the BIOS to boot from the new disk. It's a bit trickier if the NVME "disk" is smaller than the current boot drive. Else, reinstall and...
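    For a straight like-for-like copy, a command-line alternative to Clonezilla is dd; a sketch only, assuming the current boot disk is /dev/sda, the NVMe target is /dev/nvme0n1, and the target is at least as large as the source:

      # clone the whole disk, including partition table and bootloader
      dd if=/dev/sda of=/dev/nvme0n1 bs=1M status=progress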
  10. Moving VM from Local to Local-lvm. All system down for 5 hours...

    1. If you want to move a disk from one VM to another, see this wiki article. Just be careful: if you are moving a disk to LVM-Thin storage, I believe the disk has to be in RAW format, not qcow2. Maybe you can keep things simple and not use LVM-Thin for now. 2. I don't think there is a...
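    For moving a disk to a different storage (as opposed to a different VM), there is a built-in command; a minimal sketch, assuming VM 100 with disk scsi0 being moved to the local-lvm storage:

      # converts the disk to raw on the way, which LVM-Thin requires
      qm move_disk 100 scsi0 local-lvm --format raw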
  11. Moving VM from Local to Local-lvm. All system down for 5 hours...

    I'm not an expert on Proxmox, but I think you need to slow down. Before you move production over to a new system you have to learn it a bit, and test before you do anything... Let's start at the beginning and go simple and slow: how many disks, and of what size, do you have installed on the...
  12. Moving VM from Local to Local-lvm. All system down for 5 hours...

    You should upload some pictures of the configuration on Prox1 VM. Did you run out of disk space in the migration? In my experience Proxmox will NOT warn you if you try to move a disk and there is not enough space on the destination. I wish it would. If the old disk worked and is still...
  13. ifupdown2 breaks network on nodes.

    Sure, here's an example from one node of our 3 node cluster. They're all the same.

      auto lo
      iface lo inet loopback

      iface eno1 inet manual

      iface eno2 inet manual

      iface enp4s0 inet manual

      auto enp4s0d1
      iface enp4s0d1 inet static
          address 10.10.1.16
          netmask 255.255.255.0
      #CEPH

      iface...
  14. ifupdown2 breaks network on nodes.

    No, we don't use OpenvSwitch, just regular Linux bonds/bridges.
  15. ifupdown2 breaks network on nodes.

    I had the same issue. Had to uninstall ifupdown2 and reboot the nodes.
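    On a stock Proxmox install that would be something along these lines (a sketch only; depending on how ifupdown2 was installed, the classic ifupdown package may need to be reinstalled so the node still has network handling after the reboot):

      apt remove ifupdown2
      apt install ifupdown
      reboot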
  16. [SOLVED] ceph rbd error: rbd: list: (95) Operation not supported (500)

    I just ran into this error again when creating a new RBD pool. I feel like Proxmox and Ceph with cephx disabled are not playing well together. I hope more attention is devoted to making sure all components work well with cephx disabled.
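    For reference, disabling cephx is controlled by the standard auth options in the [global] section of ceph.conf (option names from the Ceph documentation, not from this thread):

      [global]
          auth_cluster_required = none
          auth_service_required = none
          auth_client_required = none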
  17. Best way to have VMs on a separate VLAN

    To your first question: yes, it is as easy as adding the VLAN tag to the network interface. We trunked the interfaces from Cisco switches. See the attached image.
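    A sketch of what the tag looks like on the VM's network device line in /etc/pve/qemu-server/<vmid>.conf (the bridge name vmbr0, VLAN 20, and the MAC address are all placeholders, not taken from the thread):

      net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=20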
  18. CEPH Nautilus mon_host in [global] vs mon_host in [client]

    At this time, here is the way monitors are registered in ceph.conf (an excerpt only):

      [client]

      [mon.VMHost4]
          host = VMHost4
          mon addr = 10.10.1.14:6789

      [mon.VMHost3]
          host = VMHost3
          mon addr = 10.10.1.13:6789

      [mon.VMHost2]
          host = VMHost2...
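    By contrast, the [global] style the thread title refers to lists all monitors in a single mon_host entry; a sketch using only the two monitor addresses visible in the excerpt above:

      [global]
          mon_host = 10.10.1.13 10.10.1.14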