Recent content by LBX_Blackjack

  1. Quorum member with different network and storage?

    Yeah the only reason I wanted to add it to the cluster was to manage it from the same point, rather than having to use two portals. The reason the VM needs its own network is because it will likely saturate our primary network and I don't want it bogging down other traffic. Security is a concern...
  2. Quorum member with different network and storage?

    I have five nodes in a cluster and am about to spin up a sixth for a special case. The first five currently all use the same network (working on separating out the management interfaces from the LACP bonds so I can trunk VLANs in on the rest, but that's for another day), but the VM that will run...
  3. [SOLVED] Parameter verification failed. (400) Memory

    It was user error. I don't remember exactly what I was doing wrong (it was nearly a year ago), but if you're not lacking common sense like myself, then it's not the same issue. Sorry I can't be of more help. I think I was assuming the wrong unit conversion in the config or something. Like bytes...
  4. zfs send/receive fails to unmount pool

    Update: While restarting the services and waiting a month did not work, stopping the services and waiting a couple of days did. I don't know why this is happening but it is causing a lot of downtime. If someone comes across this and has a suggestion on how to improve robustness to mitigate these...
  5. zfs send/receive fails to unmount pool

    Replying again because the issue persists. Restarting the services has no effect. Stopping them changes the error to "Dataset is busy." All new backup jobs are failing but I can't troubleshoot it until I can resolve this first issue. Also, there is a partial recv but I can't destroy it.
  6. zfs send/receive fails to unmount pool

    I restarted the service, but it's not unlocking. The datastore has been locked for weeks at this point, preventing me from taking snapshots and "send|recv"ing them. It's persisted through reboots and every attempt to fix it.
  7. zfs send/receive fails to unmount pool

    Coming back to this because I'm having trouble again. I ran lsof to try to find the process locking the mountpoint and got this: ~$ sudo lsof | grep /mnt/datastore/storage/ proxmox-b 2325 backup 19u REG 0,58 0 3...
  8. [SOLVED] 'qemu convert' Reports No Space Left on Device

    That explains it. Thank you! The image finally imported properly.
  9. [SOLVED] 'qemu convert' Reports No Space Left on Device

    Update: I tried to use qm importdisk but it says that the target storage contains illegal characters. The documentation for qm doesn't seem to specify which characters are unacceptable. The target storage is /mnt/VM_Storage (see the importdisk sketch after this list).
  10. [SOLVED] 'qemu convert' Reports No Space Left on Device

    Fantastic! Thank you! I'll give that a shot and report back. I looked at the link you provided but I don't see where it says qcow2 isn't supported on LVM. I'm guessing it doesn't explicitly say that, but it can be inferred from what is said?
  11. [SOLVED] 'qemu convert' Reports No Space Left on Device

    I'm trying to import a .vmdk as per the official instructions, but it keeps failing, reporting that there is no space left. This is very much not true, as there are terabytes of space in the volume group, and the .vmdk is less than 130 gigabytes. The .vmdk is on an HDD connected via USB and...
  12. [SOLVED] Reconfiguring LVM storage

    Hello, I'm trying to set up a node to host a file server. However, by default, Proxmox is assigning nearly all of the storage space to the LVM volume. Since I will only be hosting one VM on this node, and that VM needs extensive storage space, I need to reconfigure the storage volumes (see the LVM sketch after this list). Despite...
  13. zfs send/receive fails to unmount pool

    At this point I'm thinking of nuking the datastore and pool and starting fresh. I just wish I knew what happened so I could try to prevent it in the future.
  14. zfs send/receive fails to unmount pool

    Both zpool export -f s_pool and zpool export -F s_pool failed to unmount the pool. Even forced and lazy unmounts fail. I'm at a loss here (see the unmount/export sketch after this list).
  15. zfs send/receive fails to unmount pool

    All drives passed SMART, so that's good. unshare complained that the share is not nfs or smb, so that answers that as well. I'll certainly set up a cronjob for scrubs (see the scrub cron sketch after this list), but for now should I use that force/hardforce export?
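
A few hedged sketches for the commands discussed above follow. First, the unmount/export sketch for the zfs send/receive thread: a minimal sequence assuming the pool is named s_pool and the mountpoint is /mnt/datastore/storage as in those posts; the dataset name s_pool/storage and the service names are assumptions, not confirmed from the thread.

    # See which processes still hold the mountpoint open (fuser is an alternative to lsof)
    fuser -vm /mnt/datastore/storage

    # Stop the Proxmox Backup Server services that keep the datastore open
    systemctl stop proxmox-backup proxmox-backup-proxy

    # Force-unmount the dataset, then fall back to a lazy unmount of the mountpoint
    zfs unmount -f s_pool/storage
    umount -l /mnt/datastore/storage

    # Only attempt the forced export once nothing under the pool is mounted
    zpool export -f s_pool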
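
Next, the importdisk sketch for the "target storage contains illegal characters" error: a likely explanation is that the last argument expects a Proxmox storage ID rather than a filesystem path. The VM ID, source path, and storage ID below are placeholders for illustration.

    # List the configured storage IDs; the last argument to importdisk must be one of
    # these IDs, not a path such as /mnt/VM_Storage
    pvesm status

    # Hypothetical example: import the .vmdk into VM 100 on a storage with ID vm-storage
    qm importdisk 100 /mnt/usb/server.vmdk vm-storage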
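
The LVM sketch for the storage-reconfiguration question shows one possible direction, not necessarily what that thread settled on; it assumes the default pve volume group and destroys whatever is on the local-lvm thin pool, so it is only a starting point.

    # Show how the space is currently allocated in the default pve volume group
    lvs pve
    vgs pve

    # Hypothetical example: drop the default local-lvm thin pool (pve/data) and hand
    # its space to a new volume dedicated to the file server VM
    lvremove pve/data
    lvcreate -n vm-storage -l 100%FREE pve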
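
Finally, the scrub cron sketch for the cronjob mentioned in the last item; the pool name, schedule, and zpool path are placeholders to adjust for the actual system.

    # /etc/cron.d/zfs-scrub: scrub s_pool at 02:00 on the first day of every month
    0 2 1 * * root /usr/sbin/zpool scrub s_pool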
