Search results

  1.

    move disk feature blocking behaviour with zfs

    Hi, yes, but why should this block the creation / destruction of any other VM on the same storage? --------- Yes, it's of course natural that if you are using weak hardware, you get what you paid for. And without doubt, moving this 2 TB disk with 10G is quite unpleasant for ZFS using...
  2.

    move disk feature blocking behaviour with zfs

    Hi Fiona, it's clear that a migration will and must block all operations for the migrating VM to ensure a consistent state of the VM ( incl. its disks ). To me it's not clear why a migration of VM A to node X will actually block / harm all other operations with the ZFS storage on node X. As I...
  3.

    move disk feature blocking behaviour with zfs

    Hello fiona, as it seems, doing migrations blocks several ZFS actions. When trying to create new VMs by cloning an existing template, you will receive: () trying to acquire lock... TASK ERROR: can't lock file '/var/lock/qemu-server/lock-265.conf' - got timeout This is proxmox-ve: 7.2-1...
  4.

    move disk feature blocking behaviour with zfs

    Hi, if you are moving a disk from Ceph to a ZFS storage with one VM and you try to remove another VM on the same host which is on ZFS, you will receive: TASK ERROR: zfs error: cannot destroy 'local-zfs/vm-124-disk-0': dataset is busy pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)...
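
    A minimal sketch of how the collision described above can be reproduced, assuming VM 100 is the one whose disk is being moved onto the ZFS storage and VM 124 is another VM on the same pool (the VMIDs, the disk name and the storage name local-zfs are placeholders taken from the quoted error, not a confirmed test case):

    ```
    # terminal 1: long-running disk move onto the ZFS-backed storage
    qm move_disk 100 scsi0 local-zfs

    # terminal 2: while the move is still running, destroy another VM on the same pool
    qm destroy 124
    # -> TASK ERROR: zfs error: cannot destroy 'local-zfs/vm-124-disk-0': dataset is busy
    ```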
  5.

    VM will lose connectivity if datacenter firewall is enabled / how to set up anti IP spoofing

    Hi, yes, the firewall is enabled on the net0 NIC. Maybe this whole approach of putting the datacenter policy on accept all does not work. Maybe it has to be set to deny and then opened per host/VM. But unfortunately I cannot find any documentation detailing how this whole concept actually...
  6.

    VM will lose connectivity if datacenter firewall is enabled / how to set up anti IP spoofing

    Hi, thank you for your advice! And yes, this does in general solve the problem of the VMs losing IP connectivity. But it does not help with activating anti IP spoofing. With these settings: Datacenter Firewall: Input / Output Policy = ACCEPT Firewall = Yes ebtables = Yes Hostnode Firewall...
  7.

    VM will lose connectivity if datacenter firewall is enabled / how to set up anti IP spoofing

    Hi, # cat firewall/112.fw [OPTIONS] enable: 1 dhcp: 0 ipfilter: 1 macfilter: 1 [IPSET ipfilter-net0] 192.168.1.200 192.168.1.201 in combination with: # cat firewall/cluster.fw [OPTIONS] enable: 1 policy_in: ACCEPT root@n1:/etc/pve# Will not work. So this VM will lose IP...
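
    For readability, the two config files quoted above as they would look on disk (the VMID 112 and the IP set entries are the poster's own values; this is only the same content reformatted, not a recommended configuration):

    ```
    # /etc/pve/firewall/112.fw
    [OPTIONS]
    enable: 1
    dhcp: 0
    ipfilter: 1
    macfilter: 1

    [IPSET ipfilter-net0]
    192.168.1.200
    192.168.1.201

    # /etc/pve/firewall/cluster.fw
    [OPTIONS]
    enable: 1
    policy_in: ACCEPT
    ```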
  8.

    Bug Report: Rolling back a snapshot will not remove existing cloud init drives

    Hi, yes, I already put the link to the forum thread into the bug report. And we didn't reproduce this via the API. We just saw it during some testing with the GUI. Thank you for your countercheck! And yes, I agree, automatically deleting regular disks during snapshot rollback is a bad idea, while...
  9.

    Bug Report: Rolling back a snapshot will not remove existing cloud init drives

    Hi, we usually don't use the CLI, as automation for Proxmox goes through the API. But as it seems, a reproducible walkthrough makes sense: 1. create a VM through the GUI ( all default, just click next, select "do not use any media" in the OS tab ) then things might look like this: # qm...
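
    A hedged sketch of that walkthrough using the CLI equivalents of the GUI steps (VMID 999, the snapshot name and the storage local-zfs are placeholders, not values from the original report):

    ```
    # 1. create a bare VM (GUI defaults, "do not use any media")
    qm create 999 --memory 2048 --net0 virtio,bridge=vmbr0 --scsi0 local-zfs:8

    # 2. take a snapshot before any cloud-init drive exists
    qm snapshot 999 before-ci

    # 3. add a cloud-init drive afterwards
    qm set 999 --ide2 local-zfs:cloudinit

    # 4. roll back and inspect config and storage
    qm rollback 999 before-ci
    qm config 999
    pvesm list local-zfs
    # per the bug report, the drive entry is gone from the config,
    # but the cloud-init volume can be left behind on the storage
    ```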
  10.

    Bug Report: Rolling back a snapshot will not remove existing cloud init drives

    Hi, thank you! Done. https://bugzilla.proxmox.com/show_bug.cgi?id=4151 Greetings Oliver
  11.

    Bug Report: Rolling back a snapshot will not remove existing cloud init drives

    Hi, because I don't know where to report this, I hope it's OK to do it here: with pve-manager/7.2-4/ca9d43cc on a KVM machine. If you: - create a snapshot - add a cloud-init drive on storage A - restore the snapshot, the cloud-init drive will be removed, but not the cloud-init file created on...
  12.

    backup register image failed: command error: no previous backup found, cannot do incremental backup

    Hi Dominik, thank you for your time. But I am sorry, I think you are wrong here. As far as I can see, you are using restic or something similar. That means there is ONE full copy at the beginning, and everything else is an incremental copy, a.k.a. a snapshot, that is deduplicated. As you can see...
  13.

    backup register image failed: command error: no previous backup found, cannot do incremental backup

    This issue was "solved" by deleting the faulty entry on the PBS server PLUS deleting the entry before the faulty entry. After this, a new backup was possible. It would be nice if there were a button/way to enforce a new full backup and thereby start a new reference point for all future...
  14.

    backup register image failed: command error: no previous backup found, cannot do incremental backup

    Hi, the PBS server had to be reinstalled because of a filesystem corruption of the OS. The datastore is backed by ZFS, is fine, and was used again in the new PBS. Now some VMs have issues generating new backups: ``` INFO: starting new backup job: vzdump 118 --remove 0 --mode snapshot --node...
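
    The truncated line above looks like the vzdump task header; a minimal sketch of re-running the failing job by hand to capture the complete error (the storage name pbs is a placeholder for the PBS storage entry on the PVE node):

    ```
    # re-run the backup of the affected VM manually against the PBS storage
    vzdump 118 --mode snapshot --remove 0 --storage pbs
    # the task log then shows the full "no previous backup found, cannot do
    # incremental backup" error returned by the register image step
    ```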
  15.

    Proxmox API will return OK while the task actually failed

    Hi, when using Ceph, the removal of an RBD volume might take a moment. That's natural, as Ceph will remove the data. The problem is: when you are doing multiple actions ( no matter whether it's creating or removing RBD volumes ) -- like you do when you create / destroy a VM -- then the different API...
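
    The usual way to deal with this is to treat the returned value as a task ID and poll the task until it finishes, then look at its exit status. A sketch with pvesh (the node name pve1, the VMID and the JSON matching are assumptions; exact output formatting can differ between versions):

    ```
    # destroying a VM returns a UPID (task ID) immediately, not the final result
    UPID=$(pvesh delete /nodes/pve1/qemu/124 --output-format json | tr -d '"')

    # poll the task status until it is no longer running
    while pvesh get /nodes/pve1/tasks/"$UPID"/status --output-format json \
            | grep -Eq '"status" *: *"running"'; do
        sleep 2
    done

    # only now is the result meaningful: exitstatus "OK" means success,
    # anything else carries the real error of the background task
    pvesh get /nodes/pve1/tasks/"$UPID"/status --output-format json
    ```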
  16.

    Removing NFS storage via API will not unmount the NFS from the system

    Hi, it's clear that I could manually remove the mount. My point is: should it not be the expected behavior of the API to do this? When the API receives the command to add storage, it does the whole process. It does not require the admin to manually mount the NFS share. But when the API...
  17.

    Removing NFS storage via API will not unmount the NFS from the system

    Hi, no. After a reboot the NFS share will not be mounted anymore ( as expected ). So it seems the API simply does not unmount the NFS storage after removing the storage from Proxmox. It's reproducible. Seems to be designed like this. Greetings Oliver
  18.

    Removing NFS storage via API will not unmount the NFS from the system

    Hi, when a new NFS-type storage is added via the API, the Proxmox UI will show a new entry for the storage and the Debian system will mount the NFS share. So far, so nice. But when removing the storage via the API, the entry is only removed from the Proxmox UI, while the Debian system will still...
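
    A sketch of the sequence being described, with pvesh as the API client (storage ID, server address and export path are placeholders); the final umount is exactly the manual step the poster argues the API should perform itself:

    ```
    # adding the NFS storage via the API writes storage.cfg and mounts the export
    pvesh create /storage --storage nfs-test --type nfs \
        --server 192.168.1.10 --export /srv/nfs/pve --content images

    # removing it again only deletes the storage.cfg entry ...
    pvesh delete /storage/nfs-test

    # ... while the kernel mount survives until it is unmounted by hand (or a reboot)
    mount | grep /mnt/pve/nfs-test
    umount /mnt/pve/nfs-test
    ```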
  19.

    Cluster Templates ( NFS )

    Hi, as far as I can see, that's not possible with Proxmox. But it would definitely be an interesting feature if templates could be shared between hosts / clusters.
  20.

    Backup server ZFS chunk corrupt

    Hi, ZFS reported a permanent error in the file: //backup/.chunks/4b43/4b4316ea6fb574f54b17009b1bbf7f99c04581fc927eea63ebd1c0b2fdb56f48 Now my question would be whether, and if so how, one can find out exactly to which backup / VM this chunk belongs. The version would be: Backup...
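
    One way to narrow this down is to verify the datastore: every snapshot whose index references the damaged chunk will fail verification and show up in the task log. A sketch, assuming the datastore is named backup like the path above:

    ```
    # verify the whole datastore on the PBS host; snapshots that reference the
    # corrupt chunk fail verification and are reported per VM / backup group
    proxmox-backup-manager verify backup

    # newer PBS releases also ship proxmox-backup-debug, whose inspect
    # subcommands can examine individual chunks and index files directly
    ```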