We have flashed our H310 and H710 with this
https://forum.proxmox.com/threads/zfs-vs-perc-h710p-mini.44037/post-349355
Please follow all instructions. Be careful.
I've done the same tests on virtual machines:
PVE 6.3: same error. Updated to the latest packages from the no-subscription repo and still got the same error.
PVE 6.2 clean: works OK.
PVE 6.2 upgraded to PVE 6.3 with the no-subscription repo: no error!!
We'll roll back to PVE 6.2 and report this.
Don't worry :)
Yes, I've deleted the pool, run sgdisk -Z on the pool disks, deleted /etc/zfs/zpool.cache, rebooted and created it again: same error. On another node installed today I got the same error, but that node only has 2 SATA 7k disks, one Ext4 for PVE 6.3 and one as a single-disk (RAID-0) ZFS pool.
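For reference, the wipe-and-recreate steps in command form, as a sketch only (device and pool names are just placeholders, adjust to your disks):

zpool destroy tank                       # if the old pool still imports
sgdisk -Z /dev/sdb                       # wipe partition tables on the pool disk
rm -f /etc/zfs/zpool.cache
reboot
zpool create -o ashift=12 tank /dev/sdb  # single-disk (RAID-0) pool, as on the second node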
We have a cluster with...
I've installed PVE 6.3 in a Dell R620 with this disk config.
- 2 TB SATA 7k OS disk as Ext4 with LVM, with 0 GB swap (we have 256 GB RAM) and 0 GB for VMs
- 2 TB SATA 7k disk, unused, wiped with the LSI utility and "sgdisk -Z"
- 4 x 2 TB enterprise SSDs (new disks, never used)
Once installed...
I'm running a R620 with a PERC H710 Mini running in IT mode following this guide https://fohdeesha.com/docs/perc/
Important: follow all the steps and read the guide carefully, or you can damage/brick your H710.
Me too. Same error.
I've done this.
We had a ZFS Raid 1.
We have reinstalled Proxmox on THE FIRST DRIVE using Ext4; the second drive was left UNTOUCHED.
Mounted the ZFS pool and added it to storage.
Backed up the VMs.
Reformatted without the ZFS root filesystem.
Restored the VMs on Ext4.
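Roughly the commands behind those steps, as a sketch only (assuming the old ZFS root pool keeps the default name rpool; the storage ID, VMID and backup file name below are made up):

zpool import -f -R /mnt rpool            # altroot keeps the old datasets from mounting over /
pvesm add zfspool old-zfs -pool rpool    # or add the mounted path as a directory storage instead
vzdump 100 --storage local --mode stop   # back up each guest (repeat per VMID)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2020_12_01-00_00_00.vma 100 --storage local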
On the other four nodes of the cluster, the root FS is on Ext4...
Nobody?
I've booted the Proxmox installer CD in rescue mode
zpool import -a                              # import the root pool
zfs set mountpoint=/mnt rpool/ROOT/pve-1     # mount the root dataset under /mnt
zfs mount rpool/ROOT/pve-1
mount -t proc /proc /mnt/proc                # expose proc/dev/sys inside the chroot
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash                        # work inside the installed system
rm /etc/modprobe.d/zfs.conf...
These are the last messages before the reboot
"/etc/modprobe.d/zfs.conf" [New] 1L, 35C written
root@vcloud05:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.3.10-1-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private...
Hi, we are running a Proxmox 6.1 cluster with five nodes. On a node with low memory, which had been running fine for more than 100 days, I've limited the ZFS ARC maximum by setting this value in /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296
update-initramfs -u
reboot
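(Side note: 4294967296 bytes is exactly 4 GiB, and the same limit can also be applied at runtime without rebooting, assuming the zfs module is already loaded:)

echo $((4 * 1024 * 1024 * 1024))                          # prints 4294967296
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max  # takes effect immediately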
Then I've rebooted the server and...
Any way to check this?
Here is my arc_summary https://pastebin.com/cU8Vrfv8
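If anyone wants to compare without reading the whole pastebin, the configured ARC ceiling can also be read straight from the kernel stats (path exists once the zfs module is loaded):

awk '/^c_max/ {print $3}' /proc/spl/kstat/zfs/arcstats    # ARC max in bytes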
I've seen this, but we have only one NVMe per node. We need redundancy and right now we don't have it. Maybe in the near future ;-)
Hi, we have a cluster with five nodes. All nodes have Proxmox installed on SSD and 4 x 2 TB SATA (3.5" 7200 RPM) ZFS Raid 10. All nodes have between 90 GB and 144 GB RAM.
On nodes 1 to 4, we have about 30-40 LXC containers running Moodle on each node. All databases are on an external server.
All...
Looking into /var/log/daemon.log at the time the email was received, I see many lines like this:
Apr 12 07:00:04 node01 pmxcfs[1759]: [status] notice: received log
Apr 12 07:00:05 node01 pmxcfs[1759]: [status] notice: received log
Apr 12 07:00:05 node01 pmxcfs[1759]: [status] notice: received log
Apr 12...
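A rough way to see how many of these there are (the pattern and log path are just an example):

grep -c 'pmxcfs.*received log' /var/log/daemon.log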