As you mentioned, the system boots to the login screen and Proxmox was installed using the ZFS option. That means the filesystem exists and is in some state that allows it to boot.
Also, your screenshot of "lsblk" lists 3 partitions - that's how ZFS/Proxmox usually...
According to your screenshot it looks like ZFS might be still present.
Try running this command from a LiveCD: zpool import
and after that check whether any ZFS pool / dataset exists: zpool list && zfs list
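A minimal sketch of that check sequence from a live environment, assuming the default Proxmox pool name "rpool" (yours may differ):

```shell
# From a live CD with ZFS support:
zpool import                      # scan attached disks and list importable pools
zpool import -f -R /mnt rpool     # import the pool under /mnt ("rpool" is the Proxmox default name)
zpool list                        # confirm the pool imported and is healthy
zfs list                          # confirm the datasets and data are present
```

The -R /mnt altroot keeps the pool's mountpoints from colliding with the live system's own filesystem.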
For the Proxmox node where you lost access - you can try booting from a Linux live CD with ZFS support - I think Ubuntu ISO images support ZFS out of the box. After that, check whether your ZFS pool exists and the data is present (zpool list, zfs list, etc). If it is there, then the next step is to...
That's something not directly related to Proxmox.
I suppose this may work, i.e. you could run "zpool create hapool mirror iscsi-LUN1 iscsi-LUN2" with a LUN from each of the two storages.
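A minimal sketch of that idea - the device paths below are hypothetical; in practice you would use your own stable /dev/disk/by-path or by-id names for the attached LUNs:

```shell
# Mirror two iSCSI LUNs (one from each storage box) into a single ZFS pool.
# Paths are examples only - substitute the stable names of your own LUNs.
zpool create hapool mirror \
    /dev/disk/by-path/ip-10.0.0.1:3260-iscsi-lun-1 \
    /dev/disk/by-path/ip-10.0.0.2:3260-iscsi-lun-1

zpool status hapool    # verify both halves of the mirror are ONLINE
```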
But in the real world I would strongly recommend avoiding such a setup. ZFS is a filesystem designed to be used as local storage...
Previously we used ZFS RAIDZ2 on 6x10TB SAS (7200 rpm) drives for PBS in our setup. All was fine until we tried to restore some ~1TB VMs from this PBS ZFS pool - the read speed of RAIDZ2 is horrible, even with a ZFS special device (made of mirrored SSDs) attached to it. The read performance...
We're using Netgear's M4300-24X (XSM4324CS) switches; they stack perfectly and do what you're looking for. Probably their only weak point on the hardware side is the single PSU per unit. These switches cost around $5k.
Sorry for the late response.
Nothing specific in our configuration: we just created a plain MDADM device out of the NVMe drives with defaults, added that MDADM device as an LVM PV, created a VG on top of the PV, and then used it as LVM-thin storage within Proxmox.
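As a rough sketch of that sequence - device names, RAID level, and the storage ID are examples, not our exact values:

```shell
# Plain mdadm array over the NVMe drives (defaults, RAID-10 over 4 disks here)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# LVM on top of the md device
pvcreate /dev/md0
vgcreate nvme_vg /dev/md0

# Thin pool, leaving a little free space for thin-pool metadata
lvcreate -l 95%FREE --thinpool nvme_thin nvme_vg

# Register it in Proxmox as LVM-thin storage (storage ID is arbitrary)
pvesm add lvmthin nvme-thin --vgname nvme_vg --thinpool nvme_thin
```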
Hi!
Yes, we're sticking with MDADM+LVM on top of NVMe for now. MDADM+LVM still outperforms the ZFS RAID variants by a lot and doesn't load the system as much as ZFS does.
Yes, I tried all the ZFS RAID and RAID-Z variants I could compose out of 4x NVMe drives. For the moment we've decided not to use ZFS on top of NVMe in our setup and to use MDADM+LVM instead. We do use ZFS actively on SAS drives though, and we're happy with its performance for those tasks!
I want to run virtual machines using PVE on such RAID storage, so that would be block storage (either ZFS ZVOLs or LVM volumes).
My original question is clear enough - which storage model to use with PVE to get maximum throughput/IOPS out of modern NVMe disks in RAID-type setups.
Hi @shanreich!
Thanks for the pointer. I've read the mentioned documentation and many others, including Proxmox's own document on ZFS-on-NVMe tests.
Just for sanity I ran a simple, identical test on both MDADM+LVM and ZFS - all on 2x Intel P7-5620 3.2TB NVMe U.2 drives in RAID-1.
FIO command taken...
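The exact command is truncated above; purely for illustration, a representative 4k random-read fio test of this shape (not necessarily the one used) might look like:

```shell
# Hypothetical example only - not the exact command from the post.
# WARNING: point --filename at a scratch device or file, never one holding data.
fio --name=randread --filename=/dev/nvme_vg/testlv \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```

Running the identical command against both the LVM volume and a ZFS ZVOL is what makes the comparison apples-to-apples.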
Hello everybody!
First of all, I do not want to start a flame war in this thread, but I'm looking for your advice.
Second - I'm a big, big fan of Proxmox (using it since PVE 2.x) and I'm also a big fan of ZFS (using it since it first became available on FreeBSD).
I suppose my question is very common these...
Hi all,
We also hit this problem on Proxmox 6.4-6 - we tried to add 4 NVMe disks to a running system via hot-plug.
One of the ways to work around this issue without a reboot/patch/etc on critical running systems:
1) Install "nvme-cli"
2) Do "nvme reset /dev/nvmeX" on each new disk (can be found via...
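Put together, the workaround above looks roughly like this (controller numbers depend on how the new disks enumerate on your system):

```shell
# 1) Install the NVMe admin tool
apt-get install nvme-cli

# Find which /dev/nvmeX controllers correspond to the newly hot-plugged disks
nvme list

# 2) Reset each new controller (repeat per disk, e.g. nvme2, nvme3, ...)
nvme reset /dev/nvme2
```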
Thanks for the pointer. What I meant is that the warning could be shown during apt-get dist-upgrade (say, from Proxmox 6.3 to Proxmox 6.4) - e.g. apt-get stops and warns the sysadmin about this potential issue.
I actually followed this excellent howto to fix the boot loader.
As...
Just want to add that we also hit this problem after a recent apt-get update/upgrade on one of the servers followed by "zpool upgrade -a". I only want to say that this is very serious and somewhat undocumented behavior that can cause a lot of problems for a sysadmin, especially when...
Greetings,
I know this has been asked many times, but I still haven't found an exact answer or solution. Using the latest no-subscription PVE.
We need more than 32 VLANs passed to a single VM. We were fine until we hit the 32 virtual NIC limit in the VM configuration (I know there's an option to bump that...
Over the past 2 months we've hit this all-VMs-freezing issue a couple of times.
The last freeze occurred last night on a 7-node cluster (storage: CEPH, backups to PBS) - right after a PBS backup finished on one node, all VMs started to freeze with the following message in the logs:
May 10 22:03:04...
Greetings Proxmox developers and users!
First of all, I would like to thank the Proxmox team for PBS - we just started using it in our infrastructure and so far we're seeing great results.
Just to share: our new shiny backup server is built on a Supermicro 2U platform, 2x Xeon 4214R CPUs, 128GB RAM...