Search results

  1. Unable to delete backups or prune when storage is on CIFS mount

    For Samba/CIFS there's no nolock option (that one seems to be NFS-only?), so I'm testing the nobrl option instead
  2. Unable to delete backups or prune when storage is on CIFS mount

    I might try the nolock option I found in other threads!
  3. Unable to delete backups or prune when storage is on CIFS mount

    I'm having this same problem on PBS 3.3.2 with a VPS using Hetzner + Storage Box (connected via Samba). EDIT: using the nobrl option for Samba/CIFS might fix the problem
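
    For anyone landing here, a minimal sketch of what such a mount entry could look like in /etc/fstab, assuming a Hetzner Storage Box; the share path, credentials file, and mount point are placeholders, and nobrl tells the CIFS client not to send byte-range lock requests:

        # hypothetical /etc/fstab entry for a CIFS-backed PBS datastore
        //u123456.your-storagebox.de/backup  /mnt/storagebox  cifs  credentials=/etc/cifs-creds,nobrl,_netdev  0  0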
  4. Failed to start Import ZFS pool

    You could always enable it back if you need to. I just found out that Proxmox was trying to import the ZFS pool from my TrueNAS disks at boot, so I disabled it!
  5. Failed to start Import ZFS pool

    Yup, I fixed it by disabling the import scan service, as I don't need it:

        systemctl disable --now zfs-import-scan.service
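
    Worth noting, assuming a stock PVE install: pools recorded in the ZFS cachefile are still imported by the separate zfs-import-cache.service, so local pools should keep coming up with the scan service disabled; a quick sanity check after a reboot:

        # confirm local pools still came up without the scan service
        systemctl status zfs-import-cache.service
        zpool list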
  6. 10G SFP+ Nodes running slow

    Gotcha! I guess then this is the max speed I would be able to get for the migration?
  7. 10G SFP+ Nodes running slow

    This actually is doing something! I went from 400 MB/s to 700-800 MB/s. What are the implications of having an insecure migration network? For benchmarking I'm using these:

        sync; fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=write --ramp_time=4
        sync; fio ...
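
    For context, a sketch of how that setting is usually made in /etc/pve/datacenter.cfg (the CIDR below is a placeholder); type=insecure skips the SSH tunnel for migration traffic, so the memory/disk stream crosses the wire unencrypted and should only run over a trusted, isolated network:

        # hypothetical datacenter.cfg snippet pinning migration to a dedicated 10G network
        migration: type=insecure,network=10.0.1.0/24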
  8. 10G SFP+ Nodes running slow

    Umm, I could try, but I'm not sure how to set that up?
  9. 10G SFP+ Nodes running slow

    MTU set to 9000 is a bit better, but still far away from 10G speeds!

        2024-06-18 21:34:22 10442833920 bytes (10 GB, 9.7 GiB) copied, 27 s, 387 MB/s
        2024-06-18 21:34:25 11662983168 bytes (12 GB, 11 GiB) copied, 30 s, 389 MB/s
        2024-06-18 21:34:28 12918063104 bytes (13 GB, 12 GiB) copied, 33 s, 391...
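
    A minimal sketch of where jumbo frames are typically set on a PVE node, assuming ifupdown2 and placeholder interface names and addresses; the MTU has to match on both nodes and on every switch port in the path, or large frames get dropped:

        # hypothetical /etc/network/interfaces fragment
        iface enp1s0f0 inet manual
            mtu 9000

        auto vmbr0
        iface vmbr0 inet static
            address 10.0.1.35/24
            bridge-ports enp1s0f0
            mtu 9000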
  10. 10G SFP+ Nodes running slow

    Hi guys! I've created a cluster with 2 nodes, both connected at 10Gb through SFP+ using a Unifi 10G Aggregation switch. Both machines are capable enough to sustain a 10G network at full speed, both CPU- and SSD-wise. However, when migrating an LXC, I'm getting around 380 MB/s, which IMHO is...
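
    One way to separate the network from the storage in a case like this is a raw throughput test between the nodes with iperf3 (the IP below is a placeholder); roughly 9.4 Gbit/s would suggest the 10G link itself is fine and the bottleneck is disk or the migration channel:

        # on the receiving node
        iperf3 -s
        # on the sending node, four parallel streams
        iperf3 -c 10.0.1.36 -P 4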
  11. How to expand node tree automatically

    Hi! This isn't a Proxmox issue but rather a quality-of-life enhancement. I'm unsure if it's achievable within Proxmox or requires another solution. I have 3 nodes in my cluster. Each time I check them, I must manually expand each one to view the running services. Since I access my cluster...
  12. [SOLVED] Live migration of a VM ends up in error

    This actually seems to have solved the problem. (Probably the problem was me, haha.)
  13. [SOLVED] Live migration of a VM ends up in error

    Attaching dmesg -T output from around the time of the migration. I hope this helps! I'm moving VM 104 (haos 12.2) from host pve to pve2.

        pve log
        [Tue Apr 16 14:11:24 2024] vmbr0: port 5(tap104i0) entered blocking state
        [Tue Apr 16 14:11:24 2024] vmbr0: port 5(tap104i0) entered forwarding state
        ...
  14. [SOLVED] Live migration of a VM ends up in error

    Thanks @fiona! I'm sorry if I'm not understanding everything correctly. I'm not totally clear on how to get the system log: is it with dmesg, or should I execute another command? The VM configuration is as follows (this is the default configuration from the @tteckster script):
        - CPU:
          - `pve`: 8...
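
    For reference, a sketch of pulling the system log for a specific window with journalctl, which is usually what's meant beyond dmesg alone; the timestamps are placeholders bracketing the migration attempt:

        journalctl --since "2024-04-16 14:10" --until "2024-04-16 14:15"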
  15. [SOLVED] Live migration of a VM ends up in error

    I was able to replicate this with a brand-new VM from https://tteck.github.io/Proxmox/ > Home Assistant OS VM. I installed it on one node and let it start; the migration itself went well, but the VM is now unresponsive. Not sure if it's only happening to me or if there's an actual bug!
  16. [SOLVED] Live migration of a VM ends up in error

    It's so hung up that I had to delete the lock file to be able to stop it.

        root@pve2:~# qm stop 101
        trying to acquire lock...
        can't lock file '/var/lock/qemu-server/lock-101.conf' - got timeout
        root@pve2:~# rm -rf /var/lock/qemu-server/lock-101.conf
        root@pve2:~# qm stop 101
        root@pve2:~#
  17. [SOLVED] Live migration of a VM ends up in error

    Here's the log of the successful migration:

        2024-04-15 20:53:44 starting migration of VM 101 to node 'pve2' (10.0.1.36)
        2024-04-15 20:53:44 found local disk 'local-lvm:vm-101-disk-0' (attached)
        2024-04-15 20:53:44 found local disk 'local-lvm:vm-101-disk-1' (attached)
        2024-04-15 20:53:44 starting...