Search results

  1.

    Linux Bridge reassembles fragmented packets

    Just a stupid question: as VLAN-aware does not work for me, I tried to change one host. Since I want the cluster & management in a VLAN, I wanted to make vmbr0v200 with the IP, but I am not able to create a bridge with that name in the GUI... (it's red, not allowed to press the button, because the name is not...
  2.

    Linux Bridge reassembles fragmented packets

    Hmmm, yes, very hard to find out, but it seems to be a bug. Strange that not many people report this, because it is 100% reproducible. Like that, VLAN-aware bridges will always cause problems when guest VMs send packets bigger than 1500 bytes. Of course, one can set the MTU higher at the bond device after the bridge...
  3.

    Linux Bridge reassembles fragmented packets

    Hi, we experienced similar problems. Packets > 1500 bytes were never fragmented again (after being reassembled) and were dropped. This ONLY happens for us if 1) we use a VLAN-aware bridge and 2) the VM is VLAN-tagged. So if the VM (tap device) is not VLAN-tagged, or we use normal bridges (not VLAN-aware...
  4.

    MTU problem and packet loss with VLAN-aware bridge

    To get a better understanding, I figured out the following: because of netfilter, all packets > 1500 bytes always get reassembled on the bridges (if it is turned on). The problem now in my case is: if the packet is VLAN-tagged on VLAN-aware bridges, the packets are dropped by bond0 afterwards. If I switch...
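The mechanism described in these threads can be sketched numerically under the standard IPv4 fragmentation rules (a hypothetical 2000-byte payload; this is illustrative arithmetic, not Proxmox code):

```python
# Sketch of IPv4 fragmentation arithmetic (illustrative only).
# With a 1500-byte MTU, each fragment carries at most 1480 bytes of payload
# (20-byte IP header), and non-final fragment payloads are multiples of 8.

IP_HEADER = 20

def fragment_payload(payload_len: int, mtu: int = 1500) -> list[int]:
    """Split an IP payload into per-fragment payload sizes for the given MTU."""
    max_frag = (mtu - IP_HEADER) // 8 * 8  # 1480 for MTU 1500
    sizes = []
    remaining = payload_len
    while remaining > max_frag:
        sizes.append(max_frag)
        remaining -= max_frag
    sizes.append(remaining)
    return sizes

# The fragments of a 2000-byte payload each fit the MTU on arrival ...
print(fragment_payload(2000))      # [1480, 520]
# ... but once br_netfilter reassembles them into one packet, the result
# no longer fits a 1500-byte link MTU unless something re-fragments it:
print(2000 + IP_HEADER > 1500)     # True
```

This is why the problem only shows up once reassembly happens on the bridge: the oversized reassembled packet then has to be dropped (or re-fragmented) at the egress device.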
  5.

    MTU problem and packet loss with VLAN-aware bridge

    Hi, I have changed our setup for the internet uplinks from access VLANs to tagged VLANs on the switch. Around 200 VMs perform normally; only one VM has the problem of packet loss. Traffic sniffing shows the following for the old, working setup: eth0 -> smaller than 1500 bytes, split into two packets...
  6.

    Get disk usage for VMs in the CLI

    I am not talking about inside the VM. We have monitoring (Zabbix), but from the Proxmox view: how much storage does a VM have configured (including all disks)?
  7.

    Get disk usage for VMs in the CLI

    Hi, I would like to make a script to collect customer usage stats. CPU / RAM is no problem, but I don't know how to get all disks from one VM and not only the root disk. For example, pvesh get /cluster/resources lists only one disk per VM. Thank you.
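One hedged way to approach this (a sketch, not an official API): `qm config <vmid>` prints one line per configured drive (scsi0, virtio1, ...) whose value usually carries a `size=` option, so a script can sum those. The parser below assumes the usual `key: volume,opt=val,...` line format and runs against example output rather than a live node; the VM ID and volume names are made up:

```python
import re

# Hypothetical `qm config` output; real output comes from running
# `qm config <vmid>` on a Proxmox node.
SAMPLE = """\
boot: order=scsi0
cores: 4
scsi0: local-lvm:vm-100-disk-0,size=32G
virtio1: local-lvm:vm-100-disk-1,size=8G
ide2: local:iso/debian.iso,media=cdrom
memory: 8192
"""

DISK_KEY = re.compile(r"^(scsi|virtio|sata|ide|efidisk|tpmstate)\d+:")
UNITS = {"K": 1, "M": 1024, "G": 1024**2, "T": 1024**3}  # multipliers to KiB
# Assumption: a bare number (no suffix) is treated as KiB here.

def disk_sizes_kib(config_text: str) -> dict[str, int]:
    """Map disk key -> size in KiB, skipping entries without size= (CD-ROMs)."""
    disks = {}
    for line in config_text.splitlines():
        if not DISK_KEY.match(line):
            continue
        key, value = line.split(":", 1)
        m = re.search(r"size=(\d+)([KMGT]?)", value)
        if m:
            disks[key] = int(m.group(1)) * UNITS.get(m.group(2), 1)
    return disks

sizes = disk_sizes_kib(SAMPLE)
print(sizes)                # {'scsi0': 33554432, 'virtio1': 8388608}
print(sum(sizes.values()))  # 41943040 KiB = 40 GiB across all disks
```

The same per-disk keys should also be retrievable over the API via `pvesh get /nodes/<node>/qemu/<vmid>/config`, which avoids shelling out to `qm` when collecting stats cluster-wide.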
  8.

    existing bitmap was invalid and has been cleared - only 1 backup server

    Hi, we got the message:
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: creating Proxmox Backup Server archive 'vm/200/2021-05-05T00:26:06Z'
    INFO: issuing guest-agent 'fs-freeze' command
    INFO: enabling encryption
    INFO: issuing guest-agent 'fs-thaw' command
    INFO: started backup task...
  9.

    iSCSI Portal Single Point of Failure

    >> The iscsiadm output seems like it's from a Lenovo system and not a NetApp? Lenovo is just reselling NetApps; it's NetApp hardware. Maybe, but in case of a switch failure it can take some time to bring up the interface. Of course there are workarounds, but I like things straightforward. If...
  10.

    iSCSI Portal Single Point of Failure

    Thank you very much for the answer. So it is like I suspected. We use Ceph, which is really nicely integrated into PVE, but we always want to have 2 HA storages, in case one, for whatever reason, badly fails or is not available for some days. iSCSI/FC-only NetApps, for example, cost half the price of...
  11.

    iSCSI Portal Single Point of Failure

    iscsiadm shows me:
    tcp: [2] 192.168.10.231:3260,1 iqn.2002-09.com.lenovo:thinksystem.600a098000e6a260000000005c59d825 (non-flash)
    tcp: [3] 192.168.10.232:3260,1 iqn.2002-09.com.lenovo:thinksystem.600a098000e6a260000000005c59d825 (non-flash)
    tcp: [4] 192.168.10.234:3260,2...
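For completeness, a sketch of how such `iscsiadm -m session` output can be parsed to count the distinct portals per target. Only the two complete lines from the post above are reproduced (the third is truncated there); the line format is assumed from that output:

```python
import re
from collections import defaultdict

# Session lines as printed by `iscsiadm -m session`
# (addresses and IQN copied from the post above).
OUTPUT = """\
tcp: [2] 192.168.10.231:3260,1 iqn.2002-09.com.lenovo:thinksystem.600a098000e6a260000000005c59d825 (non-flash)
tcp: [3] 192.168.10.232:3260,1 iqn.2002-09.com.lenovo:thinksystem.600a098000e6a260000000005c59d825 (non-flash)
"""

# tcp: [<sid>] <ip>:<port>,<tpgt> <iqn> ...
LINE = re.compile(r"^tcp: \[\d+\] (\S+?):(\d+),\d+ (\S+)")

def portals_per_target(text: str) -> dict[str, list[str]]:
    """Group portal IP addresses by target IQN."""
    targets = defaultdict(list)
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            ip, _port, iqn = m.groups()
            targets[iqn].append(ip)
    return dict(targets)

for iqn, ips in portals_per_target(OUTPUT).items():
    print(f"{iqn}: {len(ips)} path(s) via {ips}")
```

With multiple sessions per IQN like this, path failover is multipath's job; the separate issue raised in the thread is that the PVE storage status check targets only the one portal IP from the storage definition.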
  12.

    iSCSI Portal Single Point of Failure

    See below: iscsid looks bad, but multipath looks good:
    Feb 18 12:18:14 node3 kernel: [7940340.269119] connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 6279954892, last ping 6279956144, now 6279957440
    Feb 18 12:18:14 node3 kernel: [7940340.270145] connection2:0: detected...
  13.

    iSCSI Portal Single Point of Failure

    If I deactivate the switch that the iSCSI port of the SAN is connected to (the IP I used when configuring the storage via the Proxmox GUI), the storage shows as offline, so I cannot start any VM. Has nobody ever tried this? If you have multipath, normally you don't use LACP, and then, if the iSCSI interface of the SAN is...
  14.

    iSCSI Portal Single Point of Failure

    Hi, we have a NetApp E-Series which does not support LACP. HA is done via multipath. This works as expected (tested by deactivating one NIC, for example). The problem is that Proxmox always queries the portal IP to get the status of the iSCSI target(s). When I deactivate the...
  15.

    Add iSCSI to an existing Proxmox cluster

    Hi, we have an existing Proxmox cluster with Ceph and also NFS storage:
    eth0, eth1 = bond0 => internet traffic
    eth2, eth3 = bond1 => storage traffic (Ceph, NFS)
    eth4, eth5 = bond2 => OSD traffic
    All bonds are LACP. To add iSCSI multipath, we simply need to add the multipath driver like in the...
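That last step can be sketched roughly as follows. This is an illustrative minimum only, assuming Debian-based PVE; real setups need array-specific device sections, and the options shown are common defaults rather than something stated in the thread:

```text
# apt install multipath-tools
# then a minimal /etc/multipath.conf:
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
# reload with: systemctl reload multipathd
# verify paths with: multipath -ll
```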
  16.

    Migrating storage backend ends in new copy of all data

    INFO: resuming VM again
    INFO: virtio0: dirty-bitmap status: created new
    ......
    INFO: backup is sparse: 18.52 GiB (46%) total zero data
    INFO: backup was done incrementally, reused 37.55 GiB (93%)
    INFO: transferred 40.00 GiB in 235 seconds (174.3 MiB/s)
    So it looks like it was not transferred. As...
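The figures in that log are internally consistent with the reading that most data was reused rather than re-sent. A quick check of the arithmetic (values copied from the log above; the "transferred" rate is interpreted here as logical throughput over the whole image, not bytes on the wire):

```python
# Figures from the backup log above.
total_gib = 40.00    # total backup size
reused_gib = 37.55   # reused via dirty-bitmap / existing chunks
seconds = 235

# Only the non-reused part would need to be newly stored:
newly_stored_gib = total_gib - reused_gib
print(f"newly stored: {newly_stored_gib:.2f} GiB")  # 2.45 GiB

# The logged rate matches total size / time:
rate_mib_s = total_gib * 1024 / seconds
print(f"logical rate: {rate_mib_s:.1f} MiB/s")      # 174.3 MiB/s
```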
  17.

    Migrating storage backend ends in new copy of all data

    Whether the data usage is doubled, I don't know, because it is hard to find out. I will run a check to see whether there is really no network load. I just saw that it started to read everything all over again, but I did not check the network. Still, something to keep in mind, because it behaves like a new first upload, which is many...
  18.

    Migrating storage backend ends in new copy of all data

    A question to the Proxmox team: 1) Is that intentional? 2) What about the data of the "old" virtual disk? Is it also in the backup data set on the backup server, or will it be deleted?