Search results

  1. VictorSTS

    NIC disappeared from a QEMU VM

    Umm, might be, but AFAIK no user should have permissions for it. Checked a lot of event logs too and didn't find anything relevant (there's a lot of noise in the event log related to networked disk errors). By chance, do you know in which log exactly something like that would show up?
  2. VictorSTS

    NIC disappeared from a QEMU VM

    Looking for some clues about this, or whether someone else has seen it happening too (it's a first for me, and I do have thousands of VMs). Using PVE 8.4.5. Have a VM with Windows 2019 with virtio drivers 0.1.271, running fine for a couple of weeks since the last reboot. This morning all of a...
  3. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Which is still very useful! Dreaming is free (and fun!).
  4. VictorSTS

    Proxmox Datacenter Manager 0.9 Beta released!

    Does this mean that PDM will be able to automagically set up SDN on both hosts/clusters so they automagically see each other and we can migrate between them without relying on other means of connectivity (VPNs and so on)?
  5. VictorSTS

    [SOLVED] PVE 9 - can't create snapshot on LVM thick

    Been out of the loop for a while. Would you mind posting a link to that thread? I think I've missed that issue completely. Thanks!
  6. VictorSTS

    PBS Backup to TrueNAS: How to do best?

    If backups are what you value most, install PBS on bare metal following best practices (special device, care with RAIDz depending on performance needed, etc). Leave some room for a TrueNAS VM (or OMV or any other appliance) if you really need file sharing services running on that same hardware.
  7. VictorSTS

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    No, you can't wipe the disks if you want to use the data on them. Don't remember the exact steps ATM, can't check them right now, and it isn't super trivial to carry out. You are essentially in a disaster recovery scenario. Off the top of my head, you need to deploy one MON and MGR. Export the...
  8. VictorSTS

    Failed replication Cluster

    Create a mirror vdev and add it to the current RAID10 zpool, which will then have 3 mirror vdevs instead of the current 2. Capacity will increase by ~8TB. No data will be moved to the new disks, so most of your I/O will still hit your current 4 disks and at least initially there won't...
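    A minimal sketch of that expansion (the pool name `tank` and the disk paths are assumptions; use `-n` first to preview the resulting layout):

    ```shell
    # Dry run: show what the pool would look like after adding the new mirror vdev
    zpool add -n tank mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2

    # Add the third mirror vdev for real; existing data stays on the old vdevs
    zpool add tank mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2

    # Verify the pool now stripes across three mirrors
    zpool status tank
    ```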
  9. VictorSTS

    Problem with LXC container on PVE8 due to mmp_update_interval being too big.

    Hello, <TLDR> It seems that PVE, LXC, or even Ceph changes ext4's mmp_update_interval dynamically. Why, when, and how does it do that? </TLDR> Full details below: In a PVE 8.1 cluster with Ceph 18.2.1 storage, had a situation yesterday where a privileged LXC (id 200) with a 4.2TB ext4 disk as mp0 somehow...
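    For reference, a hedged sketch of how one could inspect and pin ext4's MMP interval with e2fsprogs (the device path is an assumption, and the filesystem must be unmounted first):

    ```shell
    # Show the current MMP settings of the filesystem, including the update interval
    tune2fs -l /dev/rbd0 | grep -i mmp

    # Explicitly set the MMP update interval in seconds (5 is the ext4 default)
    tune2fs -E mmp_update_interval=5 /dev/rbd0
    ```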
  10. VictorSTS

    MSA 2060 SAN FC with single server (no shared access)

    Use RAID10 (striped mirrors). The capacity of the storage will be 50% of the total of all drives. You can select ZFS during installation and the RAID type too, or even install on a mirror of two drives and use the rest later as a different storage. I suggest you try different configurations...
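    If you take the "install on a two-disk mirror, use the rest later" route, the remaining drives can become a separate striped-mirror pool afterwards; a sketch, where the pool name `data` and the disk IDs are assumptions:

    ```shell
    # RAID10-style pool from four spare drives: two mirror vdevs striped together.
    # Usable capacity is half the raw total, as with any set of two-way mirrors.
    zpool create -o ashift=12 data \
      mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
    ```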
  11. VictorSTS

    Abysmally slow restore from backup

    I know, I was involved in that conversation. I did not, for two reasons: - Had no time to implement a proper test methodology. - Modifying each host's systemd files is a no-go, as that becomes unmanageable and hard to trace over time, so I'll just stick to defaults unless absolutely necessary and...
  12. VictorSTS

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Does the bill fall in the same range too? Because few people need a Lambo, and of those, even fewer can afford one. Feels like Ceph and that Hammerspace thing target completely different use cases/budgets.
  13. VictorSTS

    Fiber Chanel and Shared Storage - Snapshot supported (HA enabled)

    Further proof of @LnxBil's argument is that you cannot use an LXC container's disk on ZFS-over-iSCSI storage, because with LXC there is no QEMU involved. Maybe we are mixing terms and referring to different kinds of storage from the PVE perspective, even if they use the same technologies like ZFS and iSCSI?
  14. VictorSTS

    Abysmally slow restore from backup

    Yes, full default settings. Install package, do the restore from webUI. This is the full log, which shows it used 4 restore threads, 16 parallel chunks: new volume ID is 'Ceph_VMs:vm-5002-disk-0' restore proxmox backup image: [REDACTED] connecting to repository '[REDACTED]' using up to 4...
  15. VictorSTS

    Failed replication Cluster

    Literally the first Google search result for "pve zfs replication cannot create snapshot out of space": https://forum.proxmox.com/threads/replication-error-out-of-space.103117/post-444342 Please make the effort to use CODE tags and format your post properly. Not using them makes posts very...
  16. VictorSTS

    Abysmally slow restore from backup

    Did a test right now with production-level hardware (Epyc Gen4, many cores, lots of RAM, 5-node cluster + Ceph 3/2 pool on NVMe drives, 25G networks, and PBS with an 8-HDD RAID10 + special device 74% full, nearly 15000 snapshots): libproxmox-backup-qemu0 v1.5.1 progress 100% (read 80530636800 bytes...
  17. VictorSTS

    Is this an enterprise SSD?

    It's much more than that. A PLP drive will return ACK to the OS/app as soon as the data is written to its cache, because PLP guarantees that data will reach the chips no matter what happens (OS panic, app crash, power loss, hardware failure, etc.). Of course, assuming the firmware isn't buggy. That...
  18. VictorSTS

    Is there a mod repository ? How to make mods ?

    Personally, I never ever run any script from anywhere. Not even from known sources like ttek / community scripts[1]. Never really had the need for one, neither in my homelabs nor in any of the clusters I manage. No idea what you are referring to. You apply them in whichever order you want and...
  19. VictorSTS

    Is there a mod repository ? How to make mods ?

    Mate, it's super simple: use the API to interact with PVE and make your tool do whatever you need. A script, a GUI, a Docker container, a full-blown app... and let everyone be happy. There are lots of examples of this approach. IMHO: no one wants PVE to become "the wordpress of hypervisors"...
  20. VictorSTS

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Beware of this if both DCs see the remote MON but the MONs at each "local" DC can't reach each other. Have you tried that in a lab? It won't work unless you do a lot of manual disaster recovery. Neither side will have Ceph quorum due to a MON election loop. Can't remember the dirty details ATM, but AFAIR...