Recent content by dsh

  1. Multiple pool named rpool after clean install

    Thank you so much. Silly me, I thought deleting the partitions would clear the ZFS labels. If I had cleared the ZFS labels before installation, this wouldn't have happened. Now I know, thanks to you.
  2. Multiple pool named rpool after clean install

    I deleted all partitions with fdisk before installation; this is right after a clean install. As you can see, the two disks of the healthy rpool's mirror-0 (nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3) also appear in the first and second rpool. So, if I run zpool labelclear nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3...
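Before running labelclear on anything, one way to check which pool a given partition's label actually claims to belong to (the device path below is the same elided placeholder as in the post) is:

```shell
# Print the ZFS labels stored on a partition: pool name, pool GUID and
# vdev layout. This is read-only, so it is safe to run before deciding
# whether a labelclear on that partition would hit the healthy pool.
zdb -l /dev/disk/by-id/nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3
```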
  3. Multiple pool named rpool after clean install

    The other pools named "rpool" show as degraded. The top two pools' disks are just other symlinks to the same disks used in the third pool, which is healthy and the one I want to use.
  4. Multiple pool named rpool after clean install

    Hi, I've installed Proxmox 6.1 on 2x Intel P4510 (ZFS mirror). After a successful installation it reboots and gets stuck in the initramfs console because there are multiple pools named rpool. If I manually import my pool by its ID, it boots fine. How can I delete the other pools named rpool? I've tried rpool...
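The manual workaround described above can be sketched from the initramfs prompt like this (`<pool-id>` stands for whatever numeric ID `zpool import` prints for the healthy pool):

```shell
# List every pool the initramfs can see, including duplicates,
# together with their numeric pool IDs and health state.
zpool import

# Import the healthy pool by its numeric ID under the name rpool
# (-N skips mounting; placeholder ID), then leave the shell so
# the boot process can continue.
zpool import -N <pool-id> rpool
exit
```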
  5. NVME ZFS 10, can't create rpool

    As a workaround, I created a ZFS mirror using the first two disks. After the installation completed, I added the remaining mirrors manually with the following command: zpool add rpool mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2
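The "add mirrors manually" step above generalizes to the remaining disk pairs like this (disk3 through disk6 are placeholder names for the by-id links of the other drives):

```shell
# Each `zpool add ... mirror` attaches another mirror vdev, so the pool
# becomes a stripe of mirrors (the RAID10-style layout the installer
# refused to create directly).
zpool add rpool mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4
zpool add rpool mirror /dev/disk/by-id/disk5 /dev/disk/by-id/disk6

# Confirm the new vdevs show up as mirror-1 and mirror-2.
zpool status rpool
```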
  6. NVME ZFS 10, can't create rpool

    Thank you. I've edited the post.
  7. NVME ZFS 10, can't create rpool

    Hi, I'm trying to do a clean installation with 8x Intel P4510 on ZFS RAID 10. When I try to install, the following error shows. I've executed dd if=/dev/zero of=/dev/nvmeXn1 bs=64MB count=10 on all drives. Still the same. Can anybody help? Thanks
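The dd wipe mentioned above can be looped over all eight drives (destructive; it assumes the drives enumerate as nvme0n1 through nvme7n1):

```shell
# Overwrite the first ~640 MB of every drive to destroy old partition
# tables and leftover ZFS labels before reinstalling.
# THIS ERASES DATA on all listed drives.
for i in 0 1 2 3 4 5 6 7; do
    dd if=/dev/zero of=/dev/nvme${i}n1 bs=64M count=10
done
```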
  8. Server froze, rebooted however can't boot due to mdadm failed

    Hello everyone. I have Proxmox installed on top of Debian software RAID-10. Today it froze and I restarted it manually; unfortunately it doesn't boot due to an mdadm error. A. It gives the following error on boot. B. cat /proc/mdstat C. blkid What I find weird is that the boot-error UUID doesn't...
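A sketch of the usual first diagnostics for this kind of failure, run from a rescue shell (/dev/md0 is a placeholder array name):

```shell
# Show which md arrays exist and whether they are running or degraded.
cat /proc/mdstat

# Show member disks and state of one specific array (placeholder name).
mdadm --detail /dev/md0

# Try to reassemble all arrays from the superblocks on the member disks.
mdadm --assemble --scan
```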
  9. [SOLVED] Get container data from old disk

    It seems I just can't mount it. lvscan shows: inactive '/dev/pve/swap' [8.00 GiB] inherit inactive '/dev/pve/root' [96.00 GiB] inherit inactive '/dev/pve/data' [3.52 TiB] inherit inactive '/dev/pve/vm-101-disk-0' [8.00 GiB] inherit inactive...
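LVs listed as inactive cannot be mounted until the volume group is activated; a sketch based on the lvscan output above (the mount point and the filesystem on the LV are assumptions):

```shell
# Activate every LV in the pve volume group from the old disk.
vgchange -ay pve

# The LVs should now be listed as ACTIVE instead of inactive.
lvscan

# Mount the container disk read-only to copy data off it
# (assumes the LV holds a mountable filesystem).
mkdir -p /mnt/olddisk
mount -o ro /dev/pve/vm-101-disk-0 /mnt/olddisk
```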
  10. [SOLVED] Get container data from old disk

    I rent a dedicated server in a remote datacenter and its hard disk is now failing. The disk is going to be replaced, and the old disk will be attached for a few hours. Since my storage is over 80% full, I can't back up the LXC container with the biggest disk (about 3 TB) to local storage. For small containers, I...
  11. Scaling beyond single server. Suggestion wanted.

    Hi Velocity08, the main VM in our setup is an ERP database. I have an irrational fear of bit rot, and running it on ZFS gives me some confidence; I've never trusted hardware/software RAID. I have not made any progress yet on the lab system, as it has limited memory and had some problems due to lack of...
  12. Scaling beyond single server. Suggestion wanted.

    Thank you. After reading some documentation, I have finally configured DRBD9 on ZFS with Proxmox 6 on two nodes in the lab. Now my only concern is that it's not officially supported by Proxmox and possibly has a very small user base compared to Ceph.
  13. [SOLVED] 2 Node ZFS replication not using link0

    Update: The live migration and storage replication network needs to be configured separately; it's not related to the corosync network.
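In practice, "configured separately" means pointing Proxmox at a dedicated subnet for migration traffic, e.g. in /etc/pve/datacenter.cfg (the CIDR below is an assumption for illustration):

```
# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24
```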

