Recent content by sseidel

  1. Critical Ceph 19.2.2 update not yet in repo?

    I see. So instead of including the patch that fixes a critical data corruption issue, the Proxmox team decided to just change a setting. Great.
  2. Critical Ceph 19.2.2 update not yet in repo?

    Hi, Ceph 19.2.2 was released in April (!) and it contains a critical bugfix. My cluster was affected by the bug (which can only be fixed by wiping and re-creating all affected OSDs). Why is it not in the repo yet? Could someone from the team take a look and update the repo? Thanks, Stefan
  3. Network initialization issues after latest update

    Okay, good to see I'm not the only one. Better not to restart any containers until this is fixed, then. Really bad issue. Luckily I have a second host that I haven't converted to Open vSwitch yet, so at least I could start the container there. Not an ideal situation, though.
  4. Proxmox running PFSENSE FW with single NIC

    Yeah, I think all that advice about vlan-aware bridges and tagging comes from old versions of Debian that didn't handle VLANs on bridges by default.
  5. Proxmox running PFSENSE FW with single NIC

    I think the multi-bridge setup is way too complicated. I just use one bridge:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        allow-hotplug enp2s0
        iface enp2s0 inet manual

        auto vmbr0
        iface vmbr0 inet dhcp
            bridge-ports enp2s0
            bridge-stp...
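    In case the lines above get cut off: the missing options are most likely just the standard bridge defaults. Something like this, assuming the usual bridge-stp off / bridge-fd 0 (the interface name is from my setup):

        auto vmbr0
        iface vmbr0 inet dhcp
            bridge-ports enp2s0
            bridge-stp off   # no spanning tree needed on a single-uplink bridge
            bridge-fd 0      # skip the forwarding delay when the port comes up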
  6. Proxmox running PFSENSE FW with single NIC

    Yes, it works. There are two ways to do it: one is to pass through only one interface to pfSense and set up the VLANs in pfSense. That gives you a little more flexibility. Remember to allow the VLANs on the interface by editing the conf file for the VM and adding ,trunks=1;2;3 to allow VLANs 1, 2...
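    For reference, the resulting net line in /etc/pve/qemu-server/<vmid>.conf would look something like this (MAC address and bridge name are just examples):

        net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,trunks=1;2;3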
  7. [SOLVED] Can I move VMs from Intel to AMD?

    We do live migrations between AMD and Intel without problems. The key for Windows is to set the CPU type to "Westmere"; then there's no problem. Cold migration (shut down, move, start) works regardless of the CPU setting. I have never had any version of Windows complain about a different...
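    For reference, the CPU type can be set in the VM options or from the CLI; roughly like this, with the VM ID just an example:

        # pin the guest to a migration-safe baseline CPU model
        qm set 100 --cpu Westmere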
  8. issue on moving Ubuntu Server VMs from one server to another

    It's a bug. The Proxmox team knows about it but won't acknowledge it or do anything about it. Live migration worked fine in 5.0 and then broke after that. See https://forum.proxmox.com/threads/live-migration-broken-for-vms-doing-real-work.49380/ and https://bugzilla.proxmox.com/show_bug.cgi?id=1660...
  9. Live migration broken for VMs doing "real" work

    Ok, we got new hardware, and it still didn't work, even between identical machines. We even purchased a PVE subscription to eliminate this as a factor. I then did some more searching and found this thread: https://pve.proxmox.com/pipermail/pve-user/2018-February/169238.html which describes...
  10. CephFS EC Pool recommendations

    Yes, it would be an interesting experiment (but nothing more!) to set up high redundancy, then take 3 of the servers and start them up at one location and the other 3 on a separate network, and see whether the data can be read in both clusters :confused:
  11. CephFS EC Pool recommendations

    Somebody can correct me if this is wrong, but to my understanding, if you want to survive 6 OSDs being down then you need m=6. Shutdown/startup usually works fine if you follow the proper procedure: first shut down all clients (VMs), then shut down all servers. Then start up again. Ceph will not start...
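    The usual procedure also involves setting a couple of cluster flags before powering down, roughly like this (the exact flag set varies between guides):

        # before shutdown: keep Ceph from marking stopped OSDs out and rebalancing
        ceph osd set noout
        ceph osd set norebalance
        # ... shut down VMs, then the nodes; once everything is back up:
        ceph osd unset noout
        ceph osd unset norebalance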
  12. is ext4 file system thin or thick ?

    Of course you can, even with RAW. You will need to enable and use TRIM in your VM, and then the backup file will only contain the used blocks. But PVE 3 is really old; I have no idea whether TRIM is supported there.
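    On current PVE versions that boils down to enabling discard on the virtual disk and trimming inside the guest, roughly like this (VM ID and disk are just examples):

        # on the host: let the guest's TRIM/discard requests reach the storage
        qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
        # in a Linux guest: release unused blocks on all mounted filesystems
        fstrim -av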
  13. CephFS EC Pool recommendations

    It's been a while since I set up our EC pool, but I think k=2, m=1 won't work because the shards aren't spread out enough to survive a failed host. We have k=4 and m=2, which I think is the minimum (we have 2 or 3 SSDs per host and 5 hosts). It works well, but there's not a lot more I can say about...
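    For reference, a k=4/m=2 profile with a per-host failure domain is created roughly like this (profile and pool names are just examples):

        # EC profile that spreads the 6 shards across distinct hosts
        ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
        # pool using that profile (the PG count is just an example)
        ceph osd pool create ecpool 128 erasure ec-4-2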
  14. Live migration broken for VMs doing "real" work

    So, is anybody able to confirm or deny that installing Debian 9 in a VM with the parameters outlined above works? I think that could be the first step to find out where the problem is.
  15. Live migration broken for VMs doing "real" work

    Why would you guess that? The BIOS versions and CPU microcode levels, as well as /proc/cpuinfo, are absolutely identical. These machines were provisioned on the same day, so I don't see why they would differ. Also, how would anything there explain that migration from Ryzen to Ryzen...