Search results

  1. Is my PERC H310 in IT mode?

    We have flashed our H310 and H710 following this guide: https://forum.proxmox.com/threads/zfs-vs-perc-h710p-mini.44037/post-349355 Please follow all the instructions. Be careful.
  2. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    I've done the same on virtual machines with PVE 6.3 and got the same error. Updated to the latest packages from the no-subscription repos and got the same error. A clean PVE 6.2 install works OK. PVE 6.2 upgraded to PVE 6.3 with the no-subscription repos shows no error!! We'll roll back to PVE 6.2 and report this.
  3. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    If I create a pool with a different name, I'll need to add another ZFS storage to the cluster; then replication will fail and I can't migrate.
  4. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    This config is working successfully on the other nodes. If we change the pool name, we can't do migrations.
  5. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    All nodes have a similar config: one rotational SATA disk or SSD for PVE 6.x (Ext4 on LVM) and four rotational disks or SSDs for the ZFS pool.
  6. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    Don't worry :) Yes, I've deleted the pool, run sgdisk -Z on the pool disks, deleted /etc/zfs/zpool.cache, rebooted, created the pool again, and got the same error (see the wipe-and-recreate sketch after these results). On another node installed today I got the same error, but that node only has 2 SATA 7k disks: one Ext4 for PVE 6.3 and one RAID-0 ZFS pool. We have a cluster with...
  7. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    This is my question: why is it trying to import twice? (A way to inspect the import units is sketched after these results.)
  8. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    The pool rpool exists on every node of the cluster with the same name. It's not shared between nodes; it's local storage.
  9. PVE 6.3 - Clean Install - Failed to start Import ZFS pool rpool

    I've installed PVE 6.3 on a Dell R620 with this disk config: a 2 TB SATA 7k OS disk as Ext4 with LVM, with 0 GB of swap (we have 256 GB RAM) and 0 GB for VMs; a 2 TB SATA 7k disk, unused and formatted with the LSI utility and "sgdisk -Z"; and 4 x 2 TB enterprise SSDs (new disks, never used). Once installed...
  10. ZFS vs PERC H710P Mini

    I'm running an R620 with a PERC H710 Mini in IT mode, following this guide: https://fohdeesha.com/docs/perc/ Important: follow all the steps and read them carefully, or you can damage/brick your H710.
  11. ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    Me too, same error. This is what I've done: we had a ZFS RAID 1; we reinstalled Proxmox on THE FIRST DRIVE (NOTHING on the second) using Ext4, mounted the ZFS pool and added it to storage, backed up the VMs, reformatted without a ZFS root filesystem, and restored the VMs on EXT4. On the other four nodes of the cluster, the root FS is on Ext4...
  12. ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    Nobody? I've booted from the Proxmox CD in rescue mode and run:
    zpool import -a
    zfs set mountpoint=/mnt rpool/ROOT/pve-1
    zfs mount rpool/ROOT/pve-1
    mount -t proc /proc /mnt/proc
    mount --rbind /dev /mnt/dev
    mount --rbind /sys /mnt/sys
    chroot /mnt /bin/bash
    rm /etc/modprobe.d/zfs.conf...
  13. ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    These are the last messages before the reboot:
    "/etc/modprobe.d/zfs.conf" [New] 1L, 35C written
    root@vcloud05:~# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-5.3.10-1-pve
    Running hook script 'zz-pve-efiboot'..
    Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private...
  14. ZFS - error: no such device, error: unknown filesystem, Entering rescue mode

    Hi, we are running a Proxmox 6.1 cluster with five nodes. On a node with low memory that had been running fine for more than 100 days, I limited the ZFS ARC by setting this max size in /etc/modprobe.d/zfs.conf (see the ARC-limit sketch after these results):
    options zfs zfs_arc_max=4294967296
    update-initramfs -u
    reboot
    Then I rebooted the server and...
  15. ZFS Replication failed: got timeout

    Can I increase the timeout values?
  16. Apply ZFS xattr=sa dnodesize=auto on existing pool with data.

    Any way to check this? (A zfs get sketch is included after these results.) Here is my arc_summary: https://pastebin.com/cU8Vrfv8 I've seen this, but we have only one NVMe per node. We need redundancy, and currently we don't have it. Maybe in the near future ;-)
  17. Apply ZFS xattr=sa dnodesize=auto on existing pool with data.

    Hi, we have a cluster with five nodes. All nodes have Proxmox installed on an SSD plus 4 x 2 TB SATA disks (3.5", 7200 RPM) in ZFS RAID 10. All nodes have between 90 GB and 144 GB of RAM. On nodes 1 to 4 we have about 30-40 LXC containers with Moodle on each node. All databases are on an external server. All...
  18. ZFS device fault for pool BUT no SMART errors on drive

    dmesg and journalctl -r only give me results from two hours ago; I've rebooted the server :-( No info from that time in /var/log/kern.log. (A persistent-journal sketch follows these results.)
  19. ZFS device fault for pool BUT no SMART errors on drive

    Looking into /var/log/daemon.log around the time the email was received, I see many lines like this:
    Apr 12 07:00:04 node01 pmxcfs[1759]: [status] notice: received log
    Apr 12 07:00:05 node01 pmxcfs[1759]: [status] notice: received log
    Apr 12 07:00:05 node01 pmxcfs[1759]: [status] notice: received log
    Apr 12...
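
For reference, a minimal sketch of the wipe-and-recreate cycle described in result 6. The device paths are hypothetical and the zpool create layout is only an example, not the cluster's actual layout:

    # destroy the pool and clear labels/partition tables (hypothetical devices)
    zpool destroy rpool
    sgdisk -Z /dev/sda
    sgdisk -Z /dev/sdb
    # drop the stale import cache so no old state survives the reboot
    rm -f /etc/zfs/zpool.cache
    reboot
    # after the reboot, recreate the pool (example layout only)
    zpool create -o ashift=12 rpool mirror /dev/sda /dev/sdb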
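To investigate the double import from result 7, a sketch assuming systemd-based ZFS import units; unit names can vary between releases, and the per-pool instance unit may not exist on every install:

    # list every unit that tries to import pools at boot
    systemctl list-units --all 'zfs-import*'
    # show what the cache-based import actually runs
    systemctl cat zfs-import-cache.service
    # check whether a per-pool instance unit was generated as well
    systemctl status 'zfs-import@rpool.service'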
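A sketch of the ARC-limit change from result 14; 4294967296 bytes is 4 * 1024^3, i.e. a 4 GiB cap:

    # persist the limit for the next boot
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    # rebuild the initramfs so the option is picked up early in boot
    update-initramfs -u
    # optionally apply it immediately, without a reboot
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max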
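For the "any way to check this?" in result 16, a sketch using the pool name rpool as a stand-in; note that xattr=sa and dnodesize=auto only affect newly written files, so existing data has to be rewritten (copied, or sent and received) to benefit:

    # set the properties on the pool root; child datasets inherit them
    zfs set xattr=sa rpool
    zfs set dnodesize=auto rpool
    # verify what each dataset actually uses
    zfs get -r xattr,dnodesize rpool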
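For the logs lost after the reboot in result 18, a sketch assuming systemd-journald is switched to persistent storage so kernel messages survive a reboot:

    # create the on-disk journal directory; journald adopts it on restart
    mkdir -p /var/log/journal
    systemctl restart systemd-journald
    # from then on, the kernel log of the previous boot stays readable
    journalctl -k -b -1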
