Search results

  1.

    MTU not respected after enabling Cluster firewall

    Hi, strange things happen: We have a small cluster with 4 PVEs. All of them have vmbrs with an MTU of 9000 because of Ceph speed, 2 of them with a bond, but that's not important. The client VMs use an MTU of 1500 for the internet interface. Everything worked fine, large UDP packets were...
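A quick way to confirm whether jumbo frames still pass after a firewall change is a don't-fragment ping sized to the MTU. A minimal sketch (the target address and MTU value below are assumptions, not from the thread):

```shell
# ICMP payload for a given MTU: MTU minus 20 bytes IP header
# and 8 bytes ICMP header.
icmp_payload() {
    echo $(( $1 - 28 ))
}

# Send a ping with the "don't fragment" bit set; if the path no longer
# honours MTU 9000, this fails with "message too long" or times out.
# (Commented out: needs a reachable peer on the jumbo-frame network.)
#   ping -M do -c 3 -s "$(icmp_payload 9000)" 10.0.0.2
icmp_payload 9000
```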
  2.

    Error with backup (when backing up) - qmp command 'query-backup' failed - got wrong command id

    As recommended by fabian in the thread https://forum.proxmox.com/threads/pve-shows-failed-pbs-shows-ok.88045/ pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve) pve-manager: 6.3-6 (running version: 6.3-6/2184247e) pve-kernel-5.4: 6.3-7 pve-kernel-helper: 6.3-7 pve-kernel-5.3: 6.1-6...
  3.

    pve shows failed, pbs shows ok

    Hi, last night I saw the following: pve 6.3-6 INFO: Starting Backup of VM 134 (qemu) INFO: Backup started at 2021-04-22 01:01:26 INFO: status = running INFO: VM Name: DC001-10000 INFO: include disk 'scsi0' 'hdd-1:vm-134-disk-0' 100G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO...
  4.

    backup failed: connection error: Connection timed out (os error 110)

    I have the same problem with large VMs. I think I found the problem, but not a stable solution: https://forum.proxmox.com/threads/problems-with-big-sync-job.87556/post-384474
  5.

    Problems with big sync job

    I think it is another problem: If it is a large VM to sync, it takes very, very long on the source server to get the changed chunks. During this long time the TCP connection times out. I played with net.ipv4.tcp_keepalive_time and the other 2 keepalive settings and I was able to...
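The three kernel settings alluded to are net.ipv4.tcp_keepalive_time, net.ipv4.tcp_keepalive_intvl and net.ipv4.tcp_keepalive_probes. A sketch of a sysctl fragment that sends keepalives early and often enough to hold an idle sync connection open (the values are illustrative, not the poster's):

```
# /etc/sysctl.d/90-pbs-keepalive.conf -- illustrative values
# Start probing after 60 s of idle instead of the default 7200 s.
net.ipv4.tcp_keepalive_time = 60
# Probe every 10 s; give up after 6 failed probes.
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6
```

Apply with `sysctl --system`; keepalives only help if the application leaves the socket idle rather than the peer actively dropping it.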
  6.

    Problems with big sync job

    Since no one gave an answer in the 'national' forum... I have a problem with a big sync job. I sync all the data from one PBS to another at another location. After many tries I now have something from all VMs on the sync PBS. But it fails daily on 2 images: ct/129 and vm/100 with - broken...
  7.

    Problems with big sync job

    New attempt after reboots of both PBS: The big VM refuses: 2021-04-01T15:56:03+02:00: sync snapshot "vm/100/2021-03-26T22:00:02Z" 2021-04-01T15:56:03+02:00: sync archive qemu-server.conf.blob 2021-04-01T15:56:03+02:00: sync archive drive-scsi4.img.fidx 2021-04-01T15:56:22+02:00...
  8.

    Problems with big sync job

    We have a PBS1 that the backups are written to. A PBS2 at another location is supposed to pull everything from it via 2 sync jobs. The problem apparently is that one VM, our file server, is 2.7 TB and there are already 6 (incremental) backups. The sync job has already aborted several times...
  9.

    Good way to backup from several PVE clusters to one PBS (I hope)

    But from the directory structure it is then still 'nested'. Or I have to remove the parent datastore from the PBS. I have to check if this is possible without losing the datasets below.
  10.

    Good way to backup from several PVE clusters to one PBS (I hope)

    That was the idea, like in TrueNAS. But it is not possible via the web GUI. Also, I did not find the corresponding zfs command. (Personally, I don't like ZFS.)
  11.

    Good way to backup from several PVE clusters to one PBS (I hope)

    I thought this would be interesting for others too: If you want to use a central PBS for more than one cluster, you run into the problem of duplicate IDs. OK, you can use different datastores, but then you have to know the expected sizes. We took another approach: We built one big datastore and...
  12.

    Windows VMs lose network connectivity

    We also have problems after updating PM (from 6.2.x to 6.3.x, if I remember correctly). The Windows servers then have an interface with DHCP (it was a fixed IP before). When I try to give this interface the normal IP address, a message pops up that there is already an interface with this address. If you tell...
  13.

    Changing vlan-id and ifupdown2 fails 6.3.3

    Hi, we are using Proxmox 6.3.3 with the ifupdown2 package installed. If I change the vlan-id of a Linux VLAN interface where the vlan-id is not in the interface name and apply it, it looks good (no error), but with cat /proc/net/vlan/config you can see that the old vlan-id is still in use. I need to run...
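After editing /etc/network/interfaces, the VLAN ID the kernel is actually using can be verified rather than trusted. A sketch (interface names below are made up, and the /proc line format is the usual "name | VLAN ID | device" layout):

```shell
# Apply the new config with ifupdown2, then check what is really in use:
#   ifreload -a
#   cat /proc/net/vlan/config

# Helper: pull the VLAN ID out of a /proc/net/vlan/config line such as
# "myvlan         | 100  | bond0".
vlan_id_of() {
    awk -F'|' -v ifname="$1" '$1 ~ ifname { gsub(/ /, "", $2); print $2 }'
}

# Demonstration on a canned line (no root or real interface needed):
echo 'myvlan         | 100  | bond0' | vlan_id_of myvlan
```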
  14.

    ERROR: Backup of VM failed ... exit code 32

    In our case there were daemonized tar jobs which blocked the device. You can use rbd showmapped to see the used devices, and fuser -amv /dev/rbdX to see who is using this device. But since in our case they were already daemonized tar jobs, the only solution was a reboot.
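The diagnosis above can be sketched as follows (the device and image names are made up; `rbd showmapped` column layout assumed to be the usual "id pool namespace image snap device"):

```shell
# List mapped RBD devices, then see what holds one open:
#   rbd showmapped
#   fuser -amv /dev/rbd0

# Helper: given `rbd showmapped` output on stdin, print the device
# mapped for a given image name (image is the 3rd whitespace field
# when the namespace column is empty).
device_for_image() {
    awk -v img="$1" '$3 == img { print $NF }'
}

# Demonstration on canned output (no Ceph needed):
printf 'id pool namespace image snap device\n0  rbd  vm-134-disk-0 -  /dev/rbd0\n' \
    | device_for_image vm-134-disk-0
```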
  15.

    PBS restore lxc fails

    Oops... sorry. I had to uncheck the 'Unprivileged Container' option, then it worked. But what is this option for, if it is not working? I have the same problem with 'normal' backups (just checked), but the message is different: extracting archive...
  16.

    PBS restore lxc fails

    I use a PVE 6.2-12 and a PBS 0.9-1. When I try to restore a container to a different id I get Error: error extracting archive - error at entry "random": failed to create device node: Operation not permitted (os error 1) TASK ERROR: unable to restore CT 303 - command 'lxc-usernsexec -m...
  17.

    Create VM from existing qcow2 image ?

    Only since I reached the same point: You have to look at the 'virtual size' of the disk with qemu-img info xxx.qcow2. A VM disk of this size needs to be created. Then it fits. :)
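The size check described above can be scripted. A sketch, assuming the usual `qemu-img info` output format (the file name is a placeholder):

```shell
# Extract the virtual size in bytes from `qemu-img info` output, e.g.
# "virtual size: 100 GiB (107374182400 bytes)".
virtual_bytes() {
    sed -n 's/.*virtual size:.*(\([0-9]*\) bytes).*/\1/p'
}

# Typical use (commented out; needs qemu-img and a real image):
#   qemu-img info xxx.qcow2 | virtual_bytes

# Demonstration on a canned line:
echo 'virtual size: 100 GiB (107374182400 bytes)' | virtual_bytes
```

On current PVE releases, `qm importdisk <vmid> <image> <storage>` creates a correctly sized VM disk from the image in one step.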
  18.

    Update to 6.1.8 brings the server down

    Hello Proxmox, any feedback on this?
  19.

    Update to 6.1.8 brings the server down

    We had that too without Ceph (but always with open-vswitch in use). It looks, however, as if the kernel in use plays a role. If kernel 5.3.18-2 is already running, everything seems to work. If 5.3.13-1 is still running, it does look as if the update...
  20.

    Update to 6.1.8 brings the server down

    If you set open-vswitch to hold with aptitude, the update runs through. If you reboot the server before the update, it apparently works as well.
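A sketch of the hold workaround (the exact package name is an assumption; on PVE the Open vSwitch package is typically openvswitch-switch, and apt-mark achieves the same pin as aptitude hold):

```
apt-mark hold openvswitch-switch      # pin the package before upgrading
apt full-upgrade                      # update runs through with the pin in place
apt-mark unhold openvswitch-switch    # release the pin afterwards
```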