Recent content by VictorSTS

  1. VictorSTS

    Is PVE9 supported on Veeam 12.3?

    Hello, is Veeam restore working for you with PVE plugin v12.1.5.17, either with PVE9 or PVE8.x? Veeam recently published PVE plugin v12.1.5.17, which should support PVE9 [1]. I tried using it to restore some backups made from ESXi, but after selecting the destination storage and disk format I...
  2. VictorSTS

    iSCSI where to put ISOs or Import-Images?

    I would have tried to: create another LUN in the iSCSI storage, connect it manually on one node, create an ext4 filesystem on it, mount it on that single node at /mnt/IMPORT, and add it to PVE as a directory storage on just that host (rough sketch below). Essentially the same as the local-disk approach you used, but backed by the iSCSI storage.
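
    A minimal sketch of those steps, assuming the new LUN shows up as /dev/sdX on that node and using hypothetical names (storage ID IMPORT, mount point /mnt/IMPORT):

      # format the LUN and mount it on this node only
      mkfs.ext4 /dev/sdX
      mkdir -p /mnt/IMPORT
      mount /dev/sdX /mnt/IMPORT   # add an fstab entry or systemd mount unit to make it persistent

      # add it to PVE as a directory storage restricted to this node
      pvesm add dir IMPORT --path /mnt/IMPORT --content iso,import --nodes <NODE>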
  3. VictorSTS

    VM network interruptions and Conntrack weirdness

    The manual [1] says the default is 262144, although it should not apply unless you have the firewall enabled both for the host and at Datacenter level. I would also have thought that "default" could mean "use whatever is already set in the system", but it does in fact apply PVE's default for the value. Which, OTOH, is high...
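
    A quick way to check what is actually in effect on a node (standard sysctl keys, nothing PVE-specific):

      # current conntrack limit and current usage
      sysctl net.netfilter.nf_conntrack_max
      sysctl net.netfilter.nf_conntrack_count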
  4. VictorSTS

    VM network interruptions and Conntrack weirdness

    Maybe you have it set in the host-level firewall on PVE? Check in the webUI or in /etc/pve/nodes/<NODE>/host.fw
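
    If it is set there, it would show up under the [OPTIONS] section of that file, something like this (the value shown is just an example):

      [OPTIONS]
      enable: 1
      nf_conntrack_max: 262144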
  5. VictorSTS

    Nested Proxmox - CoW on CoW concerns?

    I would go with LVM unless you want to use, e.g., ZFS replication from "your" Proxmox at the VPS to some other external Proxmox. It makes little sense to use ZFS in "your" Proxmox: CoW stacking and read/write amplification stacking will hurt performance (which, OTOH, will already be...
  6. VictorSTS

    Slow garbage collection and failed backups

    This has been discussed dozens of times already. That setup with those backup sizes won't ever perform well. You are using the two things that kill PBS GC performance: network shares and an HDD-only datastore. A datastore of that size requires a proper deployment. Every GC has to touch every single chunk...
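
    As a rough, purely hypothetical illustration of the scale involved: with PBS's ~4 MiB chunks, a 40 TiB datastore holds on the order of 10 million chunks; GC has to touch every one of them, and at a few hundred random IOPS from spinning disks behind a network share, that metadata work alone adds up to many hours before any chunk is actually removed.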
  7. VictorSTS

    CEPH Erasure Coded Configuration: Review/Confirmation

    Each K and M must be on a different host because you want your failure domain to be host (the default), not disk: e.g. if the failure domain were disk you may end up with too many K or M chunks (or both!) of some PGs on the same host, and if that host goes down (e.g. for a simple reboot) your VMs will hang because...
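
    For reference, a profile like that would be created roughly like this (k/m values, profile and pool names are just examples):

      # EC profile that places each chunk on a different host
      ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
      ceph osd pool create my-ec-pool erasure ec-4-2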
  8. VictorSTS

    Nic dissappeared from a QEMU VM

    Thanks for the tip, it put me on the right track :). This is definitely what happened: someone ejected the NIC from Windows. Any user can eject devices, even non-admin ones (come on, Microsoft, it's a server OS!!). It's an RDP host and I bet their GPOs don't restrict user access to that...
  9. VictorSTS

    Nic dissappeared from a QEMU VM

    Umm, might be, but AFAIK no user should have permissions for it. Checked a lot of event logs too and didn't find anything relevant (there's a lot of noise in the event log related to network disk errors). By chance, do you know in which log exactly something like that would show up?
  10. VictorSTS

    Nic dissappeared from a QEMU VM

    Looking for some clues about this, or whether someone else has seen this happening too (it's a first for me and I do have thousands of VMs). Using PVE8.4.5. Have a VM with Windows 2019 with virtio drivers 0.1.271, running fine for a couple of weeks since the last reboot. This morning all of a...
  11. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Which is still very useful! Dreaming is free (and fun!).
  12. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Does this mean that PDM will be able to automagically set up SDN on both hosts/clusters so they automagically see each other, letting us migrate between them without relying on other means for connectivity (VPNs and so on)?
  13. VictorSTS

    [SOLVED] PVE 9 - can't create snapshot on LVM thick

    Been out of the loop for a while. Would you mind posting a link to that thread? Think I've missed that issue completely. Thanks!
  14. VictorSTS

    PBS Backup to TrueNAS: How to do best?

    If backups are what you value most, install PBS on bare metal following best practices (special device, care with RAIDZ depending on the performance needed, etc.). Leave some room for a TrueNAS VM (or OMV or any other appliance) if you really need file sharing services running on that same hardware.
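
    As an illustration of the "special device" part (disk names and pool layout are just an example, not a recommendation):

      # HDD-backed pool for the datastore, with a mirrored SSD special vdev for metadata
      zpool create backup raidz2 sda sdb sdc sdd special mirror nvme0n1 nvme1n1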
  15. VictorSTS

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    No, you can't wipe the disks if you want to use the data on them. I don't remember the exact steps ATM, can't check them right now, and it isn't super trivial to carry out. You are essentially in a disaster recovery scenario. Off the top of my head, you need to deploy one MON and one MGR. Export the...
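
    The general direction, following Ceph's documented "recover the MON store from OSDs" procedure (this is only a fragment of the full process; paths and OSD IDs are examples, check the Ceph disaster recovery docs before running anything):

      # rebuild the MON database from the data held in each OSD (repeat per OSD)
      ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
          --op update-mon-db --mon-store-path /tmp/mon-store

      # once a MON and MGR are running again, scan and start the existing OSDs
      ceph-volume lvm activate --all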