Recent content by mailinglists

  1. Very Slow boot

    I have only 16 GB, so it can't be that. :-)
  2. Public cloud with ProxMox

    Thank you for taking the time to reply.
  3. How to move raw image on ZFS

    Hi, I would create a VM as required, with the same disk size as the transferred raw source. Then I would zfs destroy rpool/data/vm-lxc-orwhatherver-vm-ID-disk-ID to delete its disk. Then I would replace the disk with zfs send and receive, or zfs rename rpool/sync/vm-103-disk-0...
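For concreteness, a minimal sketch of the steps above. The dataset names rpool/sync/vm-103-disk-0 (replicated source) and rpool/data/vm-103-disk-0 (the new VM's placeholder disk), and the @migrate snapshot, are illustrative assumptions, not from the thread:

```shell
# Illustrative dataset names; adjust to your pool layout. zfs destroy is
# irreversible, so double-check the dataset name before running it.
zfs destroy rpool/data/vm-103-disk-0    # drop the placeholder disk

# Option A: stream the source into place (needs a snapshot; works across pools/hosts):
zfs send rpool/sync/vm-103-disk-0@migrate | zfs receive rpool/data/vm-103-disk-0

# Option B: a rename is enough if the source already lives on the same pool:
zfs rename rpool/sync/vm-103-disk-0 rpool/data/vm-103-disk-0
```

Either way the VM then boots from the transferred data instead of the empty placeholder disk.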
  4. Public cloud with ProxMox

    Hi guys, I wonder what solutions you do / would use to create a public cloud with Proxmox? PM's HTTP GUI is not suitable for public users, because it reveals too much info about the cluster even with the most basic permissions. There is also no automatic provisioning, payment processing...
  5. ZFS Disk replacement

    Just a note in case of confusion for future readers. :-) Looks like MH_MUC revived an old thread. In the original issue we had Proxmox <= 5, where there is no EFI boot with ZFS, and the instructions still hold true. The latter issue looks like it is from PM 6, where we can have EFI boot with ZFS, hence the new...
  6. VM start timeout after snapshot deleted

    If just one VM is locked, use qm unlock VMID or something like this...
  7. Update to Proxmox 6.1 broke metrics

    Idea: did you check the MTU size on the PM hosts' network interfaces, i.e. what do you see when running: ip l l ?
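MTU mismatches between nodes can silently break metric traffic. As a sketch, the same information `ip l l` (short for `ip link list`) shows can be read straight from sysfs on any Linux host:

```shell
# Print every interface with its MTU, one per line (same data `ip l l` reports).
for dev in /sys/class/net/*; do
  printf '%s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```

Run it on each cluster node and compare the values for the interfaces carrying the metrics traffic.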
  8. Simple nagios check script to monitor pve-zsync included

    I just took 5 minutes and wrote this, as there are no nagios plugins existing for monitoring pve-zsync jobs. Haven't really tested it yet, just sent it to my coworker, but I guess it should work as expected. Feel free to make it more advanced, share your mods back or just use as is...
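The attached script itself is not reproduced here; as a hedged sketch of the core idea, a Nagios-style plugin only has to turn "age of the newest pve-zsync snapshot" into an exit code. The function name, thresholds, and the zfs command in the comment are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a Nagios-style freshness check for pve-zsync snapshots.
# check_snap_age SNAPSHOT_EPOCH WARN_SECONDS CRIT_SECONDS
# Prints a status line and returns 0 (OK), 1 (WARNING) or 2 (CRITICAL).
check_snap_age() {
  snap_epoch=$1; warn=$2; crit=$3
  age=$(( $(date +%s) - snap_epoch ))
  if [ "$age" -ge "$crit" ]; then
    echo "CRITICAL: last pve-zsync snapshot is ${age}s old"; return 2
  elif [ "$age" -ge "$warn" ]; then
    echo "WARNING: last pve-zsync snapshot is ${age}s old"; return 1
  fi
  echo "OK: last pve-zsync snapshot is ${age}s old"; return 0
}
# In a real plugin the epoch would come from ZFS, e.g.:
#   zfs get -Hp -o value creation <dataset>@<newest-rep-snapshot>
```

A usage example: `check_snap_age "$(zfs get -Hp -o value creation tank/vm@snap)" 3600 7200`.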
  9. Online / Live Migration with ZFS-Replicated local VMs?

    @dcsapak there is also an option to "cheat" with this implementation and migrate suspended VMs. In this case, all you need to fix is the locking mechanism. See here: As a bonus, we also get to keep ZFS snapshots on migration!
  10. PVE replication and ZFS Snapshot

    It is working as expected. Replication is not backup. Try pve-zsync to have more snapshots on both sides. Try other ZFS backup scripts, or write your own, to keep more snaps on the destination than on the source.
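As an illustration of the pve-zsync suggestion, a job along these lines keeps a longer snapshot history than the built-in replication; the VM ID, target host/pool, job name and retention count are assumptions, not from the thread:

```shell
# Hypothetical example: replicate VM 100 to another host, keeping up to
# 14 snapshots on the destination (adjust ID, host and pool to your setup).
pve-zsync create --source 100 --dest 192.168.1.2:tank/backup \
  --name nightly --maxsnap 14 --verbose
```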
  11. [SOLVED] Sync /etc/pve/priv/known_hosts ?

    Seems that you are correct in the case I checked:

    root@p32:~# ls -la /etc/ssh/ | grep -i known
    -rw------- 1 root root 6601 Oct 29 17:54 ssh_known_hosts
    lrwxrwxrwx 1 root root   25 Oct 29 17:25 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

    I guess I can just rm ssh_known_hosts and...
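A hedged sketch of the cleanup hinted at above: replace the stray regular file with a symlink back to the cluster-wide file. The paths are the ones from the listing; keeping a backup guards against this assumption being wrong:

```shell
# Keep the old file around, then restore the symlink to the cluster-wide file.
mv /etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts.bak
ln -s /etc/pve/priv/known_hosts /etc/ssh/ssh_known_hosts
readlink /etc/ssh/ssh_known_hosts    # should print /etc/pve/priv/known_hosts
```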
  12. [SOLVED] Sync /etc/pve/priv/known_hosts ?

    No one knows this? Shouldn't /etc/pve be identical on all nodes?
  13. Migration of VM with replication job not possible, why?

    if it is so trivial, please do contribute and submit the code yourself.
  14. Migration of VM with replication job not possible, why?

    If you set up a one-minute replication job, the max time to sync would be the data written in the last 60 seconds. This is pretty much the same as a sync right before the send.
  15. Migration of VM with replication job not possible, why?

    1) QEMU contains no function for a dirty bitmap for delta sync? 2) If you set up replication beforehand, this is exactly what happens on offline migration. 3) See: .

