Search results

  1. Rate/roast my proposed homelab Ceph network

    Are you going to use SATA SSD or regular spinning rust? I think 10Gbit is more than enough for spinning rust.
  2. Proxmox VE 8.2 released!

    Just wondering, which kernel version is used in the latest PVE Enterprise repo, if that's not secret information? We upgraded some of our clusters to the latest PVE 8.2; however, the nodes are still running the 6.5.11-7-pve kernel without a reboot.
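    A quick way to compare the running kernel with what the repo has installed (a minimal sketch using standard tools; the grep filter is only illustrative):

      uname -r                        # kernel the node is currently booted into
      pveversion -v | grep -i kernel  # kernel packages pulled in from the repo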
  3. Slow restore performance

    I think the network latency is the bottleneck here. Some time ago we gave up on running PBS when PVE and PBS were located in different datacenter locations - east and west of the EU - even though we had stable 1Gbit/s connectivity between the servers. PBS uses lots of small chunks and even several...
  4. Convert PBS backup/vm snapshot to regular Proxmox backup

    Thanks. I guess right now the only way to solve this task is to 1) restore the VM from the PBS snapshot somewhere on a Proxmox node, 2) back up the restored VM using the default Proxmox backup method (single file archive), 3) fetch that VM backup archive file and purge the VM. Will try to post a feature request on...
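    A minimal sketch of that three-step workaround, assuming a PBS-backed storage named "pbs", VMID 123 and a local-lvm target (all placeholders; the actual snapshot name comes from the storage's backup list):

      # 1) restore the VM from its PBS snapshot onto a Proxmox node
      qmrestore pbs:backup/vm/123/2024-05-01T02:00:00Z 123 --storage local-lvm
      # 2) create a classic single-file vzdump archive of the restored VM
      vzdump 123 --mode stop --compress zstd --dumpdir /var/lib/vz/dump
      # 3) copy the archive off the node, then purge the temporary VM
      qm destroy 123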
  5. Convert PBS backup/vm snapshot to regular Proxmox backup

    That's OK when you can have such an archival PBS instance running somewhere else. Our case in particular: the company running the Proxmox cluster and its PBS server is literally ceasing its operations, shutting down all hardware at the remote location (datacenter). There's a small chance that some data...
  6. Convert PBS backup/vm snapshot to regular Proxmox backup

    Hello, I'm wondering if there's any known procedure to create a regular (i.e. vm-123.tar.zst) VM backup file from existing backups of that VM housed on Proxmox Backup Server? There's a simple use case for this - we have a cluster with many VMs running and all of them get backed up on a central...
  7. Editing storage.cfg Disaster

    As you mentioned, the system boots up to the login screen and Proxmox was installed using the ZFS option. That means the filesystem exists and is in some state that allows it to boot. Also, your screenshot from "lsblk" lists 3 partitions - that's how ZFS/Proxmox usually...
  8. Editing storage.cfg Disaster

    According to your screenshot it looks like ZFS might still be present. Try running this command from the LiveCD: zpool import - and after that check if any ZFS pool / dataset exists: zpool list && zfs list
  9. Editing storage.cfg Disaster

    For the Proxmox node where you lost access - you can try to boot from a Linux live CD with ZFS support - I think Ubuntu ISO images support ZFS out of the box. After that, check if your ZFS pool exists and the data is present (zpool list, zfs list, etc). If it is there, then the next step is to...
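    A minimal sketch of that check from a live session, assuming the default Proxmox pool name rpool (adjust to whatever zpool import reports):

      zpool import                               # scan for pools visible to the live system
      zpool import -o readonly=on -R /mnt rpool  # import read-only under /mnt so nothing gets modified
      zpool list && zfs list                     # confirm the pool and its datasets are present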
  10. HA in storage / HA in PROXMOX.

    That's something not directly related to Proxmox. I suppose this may work, i.e. you can "zpool create hapool mirror iscsi-LUN1 iscsi-LUN2" from each of the two storages. But in the real world I would highly recommend avoiding such a setup. ZFS is a filesystem designed to be used as local storage...
  11. Storage with 24 disk per server

    Previously we were using ZFS RAIDZ2 on 6x10TB SAS (7200 rpm) drives for PBS in our setup. All was fine until we tried to restore some ~1TB VMs from this PBS ZFS pool - the read speed of RAIDZ2 is horrible, even with a ZFS special device (made of mirrored SSDs) attached to it. The read performance...
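    Roughly the pool layout described there, as a sketch with placeholder device names (not the poster's actual configuration):

      zpool create pbs-pool raidz2 sda sdb sdc sdd sde sdf \
        special mirror ssd1 ssd2    # mirrored SSD special vdev for metadata/small blocks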
  12. NFS session trunking / Multipathing / MPIO

    We're using Netgear's M4300-24X (XSM4324CS) switches; they stack perfectly and do what you're looking for. Probably the only weak hardware point of these switches is the single PSU per unit. These switches cost around $5k.
  13. How do you squeeze max performance with software raid on NVME drives?

    Sorry for the late response. Nothing specific in our configuration: we just created a plain MDADM device out of the NVMe drives with defaults, then added this MDADM device as an LVM PV, then created a VG using that PV, and then use it as LVM-thin type storage within Proxmox.
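    A minimal sketch of that layout, assuming two NVMe drives in RAID-1 and placeholder names (md0, nvme-vg, data, nvme-thin):

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
      pvcreate /dev/md0                              # MDADM device becomes an LVM physical volume
      vgcreate nvme-vg /dev/md0                      # volume group on top of that PV
      lvcreate -l 95%FREE --thinpool data nvme-vg    # thin pool for guest volumes
      pvesm add lvmthin nvme-thin --vgname nvme-vg --thinpool data   # register as LVM-thin storage in PVE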
  14. How do you squeeze max performance with software raid on NVME drives?

    @davemcl, what's the NVME model you used in these tests? Your tests just confirm what I see on our systems.
  15. How do you squeeze max performance with software raid on NVME drives?

    Hi! Yes, we stick with MDADM+LVM on top of NVMe for now. MDADM+LVM still outperforms the ZFS RAID variants by a lot and doesn't load the system as much as ZFS does.
  16. How do you squeeze max performance with software raid on NVME drives?

    Yes, I tried all variants of ZFS RAID and RAID-Z that I could compose out of 4xNVME drives. At the present moment we decided not to use ZFS on top of NVMe in our setup and use MDADM+LVM instead. We do use ZFS actively on SAS drives though, and we're happy with its performance for those tasks!
  17. How do you squeeze max performance with software raid on NVME drives?

    I want to run virtual machines using PVE on such RAID storage, so that would be block storage (either ZFS ZVOL or LVM volumes). My original question is clear enough - which storage model to use with PVE to get maximum speed/IOPS out of modern NVME disks in RAID-type setups.
  18. How do you squeeze max performance with software raid on NVME drives?

    Hi @shanreich! Thanks for the pointer. I've read the mentioned documentation and many others, including Proxmox's document on ZFS-on-NVMe tests. Just for sanity I ran a simple identical test for both MDADM+LVM and ZFS - all on 2x Intel P7-5620 3.2TB NVMe U.2 RAID-1 storage. FIO command taken...
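    The exact FIO command is cut off in the snippet; a generic 4k random-write test of the kind typically used for such comparisons looks like this (target path, size and job counts are placeholders):

      fio --name=randwrite-test --filename=/mnt/bench/testfile --size=10G \
          --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
          --numjobs=4 --runtime=60 --time_based --group_reporting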
  19. How do you squeeze max performance with software raid on NVME drives?

    Hello everybody! First of all, I do not want to start a flame war in this thread, but I'm looking for your advice. Second, I'm a big-big fan of Proxmox (using it since PVE 2.x) and I'm also a big fan of ZFS (using it since it first became available on FreeBSD). I suppose my question is very common these...
  20. Hot-Plug NVME not working

    Hi all, we also hit this problem on Proxmox 6.4-6 - we tried to add 4 NVMe disks to a running system using hot-plug. One way to solve this issue without a reboot/patch/etc on critical running systems: 1) Install "nvme-cli" 2) Run "nvme reset /dev/nvmeX" on each new disk (can be found via...
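    A minimal sketch of that recovery path (controller numbers are placeholders; nvme list is just one way to spot the newly attached devices):

      apt install nvme-cli        # 1) install the NVMe management CLI
      nvme list                   # identify the newly hot-plugged controllers
      nvme reset /dev/nvme2       # 2) reset each new controller so its namespaces show up
      nvme reset /dev/nvme3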