Search results

  1. B

    Editing storage.cfg Disaster

    As you mentioned, the system boots up to the login screen and Proxmox was installed using the ZFS option. That means the filesystem exists and is in some state that allows it to boot. Also, your screenshot from "lsblk" lists 3 partitions - that's how ZFS/Proxmox usually...
  2. B

    Editing storage.cfg Disaster

    According to your screenshot, it looks like ZFS might still be present. Try running this command from the live CD: zpool import, and after that check whether any ZFS pool / dataset exists: zpool list && zfs list
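
    A minimal sketch of that check from a live environment with ZFS support - "rpool" is the default pool name of a Proxmox ZFS install, so adjust it if yours differs:

        # scan for importable pools without importing anything yet
        zpool import
        # import the pool found above (-N: do not mount datasets, -R: use an alternate root;
        # -f may be needed if the pool was last used by another host)
        zpool import -N -R /mnt rpool
        # confirm the pool and its datasets are visible
        zpool list
        zfs list
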
  3. B

    Editing storage.cfg Disaster

    For the Proxmox node where you lost access - you can try to boot from a Linux live CD with ZFS support - I think Ubuntu ISO images support ZFS out of the box. After that, check whether your ZFS pool exists and the data is present (zpool list, zfs list, etc.). If it is there, then the next step is to...
  4. B

    HA in storage / HA in PROXMOX.

    That's something not directly related to Proxmox. I suppose this may work, i.e. you can "zpool create hapool mirror iscsi-LUN1 iscsi-LUN2" from each of the two storages. But in the real world I would highly recommend avoiding such a setup. ZFS is a filesystem designed to be used as local storage...
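
    For illustration only, the setup being described (and advised against) would be assembled roughly like this - the iSCSI target names, portal addresses, and resulting device names are all made up:

        # log in to one LUN exported by each of the two storage boxes
        iscsiadm -m node -T iqn.2004-01.example:san-a.lun0 -p 192.168.10.11 --login
        iscsiadm -m node -T iqn.2004-01.example:san-b.lun0 -p 192.168.10.12 --login
        # the LUNs appear as local block devices (assumed here to be sdx and sdy);
        # mirroring them builds the "hapool" from the quote above
        zpool create hapool mirror /dev/sdx /dev/sdy
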
  5. B

    Storage with 24 disk per server

    Previously we were using ZFS RAIDZ2 on 6x 10TB SAS (7200 rpm) drives for PBS in our setup. All was fine until we tried to restore some ~1TB VMs from this PBS ZFS pool - the read speed of RAIDZ2 is horrible, even with a ZFS special device (made of mirrored SSDs) attached to it. The read performance...
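
    For context, a pool of that shape would be created along these lines - the device and pool names are placeholders, and the special vdev is mirrored because losing it loses the whole pool:

        # six-disk RAIDZ2 data vdev on the SAS drives
        zpool create pbspool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
        # add a mirrored special vdev (metadata and small blocks) on two SSDs
        zpool add pbspool special mirror /dev/sdh /dev/sdi
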
  6. B

    NFS session trunking / Multipathing / MPIO

    We're using Netgear's M4300-24X (XSM4324CS) switches; they stack perfectly and do what you're looking for. Probably the only weak hardware point of these switches is the single PSU per unit. These switches cost around $5k.
  7. B

    How do you squeeze max performance with software raid on NVME drives?

    Sorry for the late response. Nothing specific in our configuration: we just created a plain MDADM device out of the NVMe drives with defaults, added this MDADM device as an LVM PV, created a VG using the PV, and then used it as LVM-thin storage within Proxmox.
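
    Spelled out, that chain would look roughly like the following; the RAID level, device names, VG name, and storage ID are assumptions for illustration:

        # plain MD array (here RAID-10 over four NVMe drives) with default settings
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
        # LVM on top of the MD device
        pvcreate /dev/md0
        vgcreate nvme_vg /dev/md0
        # thin pool (leave some free space for metadata/growth), then register it in Proxmox
        lvcreate -l 90%FREE --thinpool data nvme_vg
        pvesm add lvmthin nvme-thin --vgname nvme_vg --thinpool data
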
  8. B

    How do you squeeze max performance with software raid on NVME drives?

    @davemcl, what's the NVMe model you used in these tests? Your tests just confirm what I see on our systems.
  9. B

    How do you squeeze max performance with software raid on NVME drives?

    Hi! Yes, we're sticking with MDADM+LVM on top of NVMe for now. MDADM+LVM still outperforms the ZFS RAID variants by a lot and doesn't cause as much system load as ZFS does.
  10. B

    How do you squeeze max performance with software raid on NVME drives?

    Yes, I tried all variants of ZFS RAID and RAIDZ that I could compose out of 4x NVMe drives. At the moment we have decided not to use ZFS on top of NVMe in our setup and to use MDADM+LVM instead. We use ZFS actively on SAS drives though, and we're happy with its performance for those tasks!
  11. B

    How do you squeeze max performance with software raid on NVME drives?

    I want to run virtual machines using PVE on such RAID storage, so that would be block storage (either ZFS ZVOLs or LVM volumes). My original question is clear enough - which storage model to use with PVE to get maximum speed/IOPS out of modern NVMe disks in RAID-type setups.
  12. B

    How do you squeeze max performance with software raid on NVME drives?

    Hi @shanreich! Thanks for pointing that out. I've read the mentioned documentation and many others, including Proxmox's document on ZFS-on-NVMe tests. Just for sanity, I ran a simple identical test for both MDADM+LVM and ZFS - all on 2x Intel P7-5620 3.2TB NVMe U.2 RAID-1 storage. The FIO command was taken...
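
    The snippet cuts off before the actual FIO command; purely as an illustration, a random-write comparison of this kind is often run with something like the following (the target path and parameters are hypothetical, not the ones from the post):

        fio --name=randwrite --filename=/dev/nvme_vg/testlv \
            --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
            --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
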
  13. B

    How do you squeeze max performance with software raid on NVME drives?

    Hello everybody! First of all, I do not want to start a flame war in this thread, but I'm looking for your advice. Second - I'm a big, big fan of Proxmox (using it since PVE 2.x) and I'm also a big fan of ZFS (using it since it first became available on FreeBSD). I suppose my question is very common these...
  14. B

    Hot-Plug NVME not working

    Hi all, We also hit this problem on Proxmox 6.4-6 - we tried to add 4 NVMe disks to a running system using hot-plug. One of the ways to solve this issue without a reboot/patch/etc. on critical running systems: 1) Install "nvme-cli" 2) Run "nvme reset /dev/nvmeX" on each new disk (which can be found via...
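
    Spelled out, those two steps would look something like this - the controller names below are examples, and which /dev/nvmeX nodes are the new ones has to be checked on the system itself:

        # install the NVMe management CLI
        apt install nvme-cli
        # see which controllers/namespaces the kernel currently knows about
        nvme list
        # reset each newly hot-plugged controller so its namespaces show up
        nvme reset /dev/nvme2
        nvme reset /dev/nvme3
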
  15. B

    ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    Thanks for pointing that out. I wanted to say that a warning could be placed somewhere during apt-get dist-upgrade (from, let's say, Proxmox 6.3 to Proxmox 6.4) - e.g. apt-get stops and notifies the sysadmin about this potential issue. I actually followed this excellent howto to fix the boot loader. As...
  16. B

    ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    Just want to add that we also hit this problem after doing a recent apt-get update/upgrade on one of the servers and running "zpool upgrade -a" afterwards. I only want to say that this is very serious and somewhat undocumented behavior that can cause a lot of problems for a sysadmin, especially when...
  17. B

    [SOLVED] Passing 32+ VLANs to VM - what options?

    Hi, Thanks for your response and for confirming this setup! We're going to implement this on our new network to see how it goes. Regards,
  18. B

    [SOLVED] Passing 32+ VLANs to VM - what options?

    Greetings, I know this has been asked many times, but I still did not find an exact answer or solution. Using the latest no-subscription PVE. We need more than 32 VLANs passed to a single VM. We were fine until we hit the 32-virtual-NIC limit in the VM configuration (I know there's an option to bump that...
  19. B

    All VMs locking up after latest PVE update

    For the past 2 months we've hit this all-VMs-freezing issue a couple of times. The last freeze occurred last night on a 7-node cluster (storage: Ceph, backups to PBS) - right after a PBS backup finished on one node, all VMs started to freeze with the following message in the logs: May 10 22:03:04...
  20. B

    Deleted datastore holds filesystem and speed question

    Greetings Proxmox developers and users! First of all, I would like to thank the Proxmox team for PBS - we just started using it within our infrastructure and so far we see great results. Just to share: our shiny new backup server is built on a Supermicro 2U platform, 2x Xeon 4214R CPUs, 128GB RAM...
