Recent content by bomzh

  1.

    Rate/roast my proposed homelab Ceph network

    Are you going to use SATA SSDs or regular spinning rust? I think 10Gbit is more than enough for spinning rust - a single 7200 rpm drive tops out at roughly 200-250 MB/s sequential, so a 10Gbit link (~1.25 GB/s) covers several HDD OSDs per node.
  2.

    Proxmox VE 8.2 released!

    Just wondering, which kernel version is used in the latest PVE Enterprise repo, if that's not secret information? We upgraded some of our clusters to the latest PVE 8.2; however, the nodes are still running the 6.5.11-7-pve kernel since they haven't been rebooted yet.
  3.

    Slow restore performance

    I think the network latency is the bottleneck here. Some time ago we gave up on running PBS when PVE and PBS were located in different datacenter locations - east and west of the EU - even though we had stable 1Gbit/s connectivity between the servers. PBS uses lots of small chunks and even several...
  4.

    Convert PBS backup/vm snapshot to regular Proxmox backup

    Thanks. I guess right now the only way to solve this task is to: 1) restore the VM from the PBS snapshot somewhere on a Proxmox node, 2) back up the restored VM using the default Proxmox backup method (single file archive), 3) fetch that VM backup archive file and purge the VM. Will try to post a feature request on...
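    A rough shell sketch of those three steps, assuming a PBS storage called "pbs", source VMID 123, a spare VMID 9123 and an illustrative snapshot timestamp (adjust storage names, IDs and paths to your setup):
      # 1) restore the PBS snapshot into a temporary VM
      qmrestore pbs:backup/vm/123/2024-05-01T02:00:00Z 9123 --storage local-lvm
      # 2) create a classic single-file archive from the restored VM
      vzdump 9123 --mode stop --compress zstd --dumpdir /tmp/export
      # 3) copy the resulting vzdump-qemu-9123-*.vma.zst somewhere safe, then purge the temporary VM
      qm destroy 9123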
  5.

    Convert PBS backup/vm snapshot to regular Proxmox backup

    That's OK when you can have such an archival PBS instance running somewhere else. Our case in particular: the company running the Proxmox cluster and its PBS server is literally ceasing its operations, shutting down all hardware at the remote location (datacenter). There's a small chance that some data...
  6.

    Convert PBS backup/vm snapshot to regular Proxmox backup

    Hello, I'm wondering if there's any known procedure to create a regular (i.e. vm-123.tar.zst) VM backup file from existing backups of that VM housed on Proxmox Backup Server? There's a simple use case for this - we have a cluster with many VMs running and all of them get backed up to a central...
  7.

    Editing storage.cfg Disaster

    As you mentioned, the system boots up to the login screen and Proxmox was installed using the ZFS option. That means the filesystem exists and is in some kind of state that still allows it to boot. Also, your screenshot of "lsblk" lists 3 partitions - that's how ZFS/Proxmox usually...
  8.

    Editing storage.cfg Disaster

    According to your screenshot it looks like ZFS might still be present. Try running this command from the LiveCD: zpool import, and after that check whether any ZFS pool / dataset exists: zpool list && zfs list
  9.

    Editing storage.cfg Disaster

    For the Proxmox node where you lost access - you can try to boot from a Linux live CD with ZFS support - I think Ubuntu ISO images support ZFS out of the box. After that, check if your ZFS pool exists and the data is present (zpool list, zfs list, etc). If it is there, then the next step is to...
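    A minimal sketch of those checks from the live CD, assuming the default Proxmox pool name "rpool" (adjust if yours differs):
      zpool import                    # list pools that are available for import
      zpool import -f -R /mnt rpool   # import the pool under an alternate root
      zpool list && zfs list          # verify pool health and that the datasets exist
      zfs mount -a                    # mount any remaining datasets so files show up under /mnt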
  10.

    HA in storage / HA in PROXMOX.

    That's something not directly related to Proxmox. I suppose this may work, i.e. you can "zpool create hapool mirror iscsi-LUN1 iscsi-LUN2" from each of the two storages. But in the real world I would highly recommend avoiding such a setup. ZFS is a filesystem designed to be used as local storage...
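    For completeness, a sketch of what that would look like (the IQNs, portal addresses and device names are made up), even though I'd still avoid it:
      # log in to one LUN on each of the two storage boxes
      iscsiadm -m node -T iqn.2024-01.com.example:storage1-lun1 -p 192.0.2.11 --login
      iscsiadm -m node -T iqn.2024-01.com.example:storage2-lun1 -p 192.0.2.12 --login
      # mirror the two resulting block devices (actual names depend on how the LUNs show up)
      zpool create hapool mirror /dev/sdx /dev/sdy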
  11.

    Storage with 24 disk per server

    Previously we were using ZFS RAIDZ2 on 6x10TB SAS (7200 rpm) drives for PBS in our setup. All was fine until we tried to restore some ~1TB VMs from this PBS ZFS pool - the read speed of RAIDZ2 is horrible, even with a ZFS special device (made of mirrored SSDs) attached to it. The read performance...
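    If you want to approximate the PBS read pattern yourself: PBS stores data as chunk files of up to ~4 MB, so a random-read fio job over a test directory on the pool gives a rough idea (the path and sizes here are just placeholders):
      fio --name=chunkread --directory=/rpool/pbs-bench --rw=randread --bs=4M \
          --size=8G --numjobs=4 --ioengine=libaio --iodepth=8 --group_reporting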
  12.

    NFS session trunking / Multipathing / MPIO

    We're using Netgear's M4300-24X (XSM4324CS) switches; they stack perfectly and do what you're looking for. Probably the only weak point of the hardware is the single PSU per unit. These switches cost around $5k.
  13.

    How do you squeeze max performance with software raid on NVME drives?

    Sorry for the late response. Nothing specific in our configuration: we just created a plain MDADM device out of the NVMe drives with defaults, then added this MDADM device as an LVM PV, then created a VG using the PV, and then used it as LVM-thin storage within Proxmox.
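    A rough sketch of that layout (device names, RAID level and sizes are illustrative, not our exact values):
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
      pvcreate /dev/md0
      vgcreate nvme_vg /dev/md0
      lvcreate --type thin-pool -l 95%FREE -n nvme_thin nvme_vg
      pvesm add lvmthin nvme-thin --vgname nvme_vg --thinpool nvme_thin --content images,rootdir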
  14.

    How do you squeeze max performance with software raid on NVME drives?

    @davemcl, what's the NVMe model you used in these tests? Your tests just confirm what I see on our systems.
  15.

    How do you squeeze max performance with software raid on NVME drives?

    Hi! Yes, we're sticking with MDADM+LVM on top of NVMe for now. MDADM+LVM still outperforms the ZFS RAID variants by a lot and doesn't cause as much system load as ZFS does.