Search results

  1. VictorSTS

    Is PVE9 supported on Veeam 12.3?

    Hello, is Veeam restore working for you with PVE plugin v12.1.5.17, either with PVE9 or PVE8.x? Veeam recently published PVE plugin v12.1.5.17, which should support PVE9 [1]. Tried using it to restore some backups made from ESXi, but after selecting the destination storage and disk format I...
  2. VictorSTS

    iSCSI where to put ISOs or Import-Images?

    I would have tried to: - create another LUN in the iSCSI storage - connect it manually on one node - create an ext4 filesystem on it - mount it on that single node at /mnt/IMPORT - add it to PVE as a directory storage on just that host. Essentially the same as the local disk approach you used, but backed by the iSCSI storage
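    The steps above could be sketched roughly like this (the device path and storage ID are assumptions, not from the post):

    ```shell
    # Format the new LUN with ext4 (run on ONE node only; device path is hypothetical)
    mkfs.ext4 /dev/mapper/iscsi-import
    mkdir -p /mnt/IMPORT
    mount /dev/mapper/iscsi-import /mnt/IMPORT
    # Register it as a directory storage restricted to this single node
    pvesm add dir import-store --path /mnt/IMPORT --content iso,images --nodes <NODE>
    ```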
  3. VictorSTS

    VM network interruptions and Conntrack weirdness

    Manual [1] says the default is 262144, although it should not apply unless you have the firewall enabled both for the host and at Datacenter level. I would also think that "default" could mean "use whatever is in the system", but it does in fact apply PVE's default for the value. Which, OTOH, is high...
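    A quick way to check the value actually in effect is via the standard netfilter sysctls (a sketch, not quoted from the post):

    ```shell
    # Limit currently applied (PVE's firewall default is 262144)
    sysctl net.netfilter.nf_conntrack_max
    # Entries currently tracked; trouble starts when this approaches the limit
    sysctl net.netfilter.nf_conntrack_count
    ```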
  4. VictorSTS

    VM network interruptions and Conntrack weirdness

    Maybe you have it set in the host-level firewall on PVE? Check in the webUI or in /etc/pve/nodes/<NODE>/host.fw
  5. VictorSTS

    Nested Proxmox - CoW on CoW concerns?

    I would go with LVM unless you want to use e.g. zfs-sync replication from "your" Proxmox at the VPS to some other external Proxmox. It makes little sense to use ZFS in "your" Proxmox: CoW stacking and read/write amplification stacking will affect performance (which OTOH will already be...
  6. VictorSTS

    Slow garbage collection and failed backups

    This has been discussed dozens of times already. That setup with those backup sizes won't ever perform well. You are using the two things that kill PBS GC performance: network shares and HDD only datastore. Your datastore size requires proper deployment. Every GC has to touch every single chunk...
  7. VictorSTS

    CEPH Erasure Coded Configuration: Review/Confirmation

    Each K and M chunk must be on a different host because you want your failure domain to be host (the default), not disk: if the failure domain were disk, you might end up with too many K or M chunks (or both!) for some PGs on the same host, and if that host goes down (e.g. a simple reboot) your VMs will hang because...
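    The failure domain is set when creating the erasure-code profile; a minimal sketch (the profile name and k/m values are assumptions):

    ```shell
    # Create an EC profile with k=4 data and m=2 coding chunks, spread per host
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    # Verify what was stored
    ceph osd erasure-code-profile get ec-4-2
    ```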
  8. VictorSTS

    NIC disappeared from a QEMU VM

    Thanks for the tip, it put me on the right track :). This is definitely what happened: someone did eject the NIC from Windows. Any user can eject devices, even non-admin ones (come on, Microsoft, it's a server OS!!). It's an RDP host and I bet their GPOs don't restrict user access to that...
  9. VictorSTS

    NIC disappeared from a QEMU VM

    Umm, might be, but AFAIK no user should have permissions for it. Checked a lot of event logs too and didn't find anything relevant (there's a lot of noise in the event log related to networked disk errors). By chance, do you know in which log exactly something like that would show up?
  10. VictorSTS

    NIC disappeared from a QEMU VM

    Looking for some clues about this, or whether someone else has seen it happening too (it's a first for me and I do have thousands of VMs). Using PVE8.4.5. Have a VM with Windows 2019 with virtio drivers 0.1.271, running fine for a couple of weeks since the last reboot. This morning all of a...
  11. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Which is still very useful! Dreaming is free (and fun!).
  12. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Does this mean that PDM will be able to automagically set up SDN on both hosts/clusters so they automagically see each other and we can migrate between them without relying on other means of connectivity (VPNs and so on)?
  13. VictorSTS

    [SOLVED] PVE 9 - can't create snapshot on LVM thick

    Been out of the loop for a while. Would you mind posting a link to that thread? Think I've missed that issue completely. Thanks!
  14. VictorSTS

    PBS Backup to TrueNAS: How to do best?

    If backups are what you value most, install PBS on bare metal following best practices (special device, care with RAIDz depending on performance needed, etc). Leave some room for a TrueNAS VM (or OMV or any other appliance) if you really need file sharing services running on that same hardware.
  15. VictorSTS

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    No, you can't wipe the disks if you want to use the data on them. Don't remember the exact steps ATM, can't check them out, and it isn't super trivial to carry out. You are essentially in a disaster recovery scenario. Off the top of my head, you need to deploy one MON and MGR. Export the...
  16. VictorSTS

    Failed replication Cluster

    Create a mirror vdev and add it to the current RAID10 zpool, which will then have 3 mirror vdevs instead of the current 2. Capacity will increase by ~8TB. No data will be moved to the new disks, so most of your I/O will still hit your current 4 disks and at least initially there won't...
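    Adding the third mirror vdev could look roughly like this (the pool name and disk paths are assumptions):

    ```shell
    # Attach one more mirror pair to the existing striped-mirrors pool
    zpool add rpool mirror /dev/disk/by-id/diskX /dev/disk/by-id/diskY
    # The pool should now list three mirror vdevs
    zpool status rpool
    ```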
  17. VictorSTS

    Problem with LXC container on PVE8 due to mmp_update_interval being too big.

    Hello, <TLDR> Seems that PVE or LXC or even Ceph change ext4's mmp_update_interval dynamically. Why, when and how does it do that? </TLDR> Full details below: In a PVE8.1 cluster with Ceph 18.2.1 storage, had a situation yesterday where a privileged LXC (id 200) with a 4.2TB ext4 disk as mp0 somehow...
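    For reference, the MMP settings of an ext4 filesystem can be inspected and adjusted with tune2fs; a sketch (the RBD device path is hypothetical, not from the post):

    ```shell
    # Show the current MMP fields of the filesystem
    tune2fs -l /dev/rbd0 | grep -i mmp
    # Set the MMP update interval explicitly (the filesystem default is 5 seconds)
    tune2fs -E mmp_update_interval=5 /dev/rbd0
    ```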
  18. VictorSTS

    MSA 2060 SAN FC with single server (no shared access)

    Use RAID10 (striped mirrors). The capacity of the storage will be 50% of the total of all drives. You can select ZFS and the RAID type during installation, or even install on a mirror of two drives and use the rest later as a different storage. I suggest you try different configurations...
  19. VictorSTS

    Abysmally slow restore from backup

    I know, I was involved in that conversation. I did not for two reasons: - Had no time to implement a proper test methodology. - Modifying each host's systemd files is a no-go, as that becomes unmanageable and hard to trace over time, so I'll just stick to defaults unless absolutely necessary and...
  20. VictorSTS

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Does the bill fall in the same range too? Because few people need a Lambo, and of those, even fewer can afford one. Feels like Ceph and that Hammerspace thing target completely different use cases/budgets.