Search results

  1. ZFS cache device by-id reverts to device name after reboot

    I added a cache device with `zpool add rpool cache ata-MXXXX_BYYYYYY-part1` and at first I see the device ID in the `zpool status` output. But after a reboot it reverts to the device name: `cache sdi1 ONLINE 0 0 0`. I tried to... (a re-add sketch follows after this list)
  2. [SOLVED] PBS filesystem feature table?

    There is a table in the PVE wiki (https://pve.proxmox.com/wiki/Storage) about storage types and the features they provide. Is there such a table for PBS? What do I lose if I use XFS instead of ZFS, for example? Thanks!
  3. [SOLVED] How to remove dead ZIL/ZLOG from ZFS?

    Hi @aaron It is entirely possible that I was too hasty and did not give ZFS enough time to figure out what is going on. After all, it was a test, so I pulled the drives and after a few seconds I was trying to remove the log devices :) When I pull the drives I see: root@proxmox1:~# zpool...
  4. [SOLVED] How to remove dead ZIL/ZLOG from ZFS?

    Hello, I am testing ZFS and I created a raidz2 vdev with 2 log devices. For testing purposes I pulled the log devices, and ZFS shows them as faulted, which is all fine. I then try to remove them using `zpool remove rpool device1 device2`, but it causes "rpool has encountered an uncorrectable I/O... (a removal sketch follows after this list)
  5. Is it possible to manually edit PVE->Disks->SMART settings

    I am hit by this problem: https://bugzilla.proxmox.com/show_bug.cgi?id=3270 The disks are detected as `/dev/sd*`, however SMART works through `/dev/sg*`. Is there a way to tell Proxmox to use `/dev/sg*`, even by manually setting the mapping between devices? (a smartctl check follows after this list)
  6. [SOLVED] ZFS replica ~2x larger than original

    @guletz no. I never said 16k is written on one disk. I only said data does not have to spread to "every" disk in the vdev. Please read the linked article. It explains how this works in detail. With RaidZ2 using 6 disks, 16K will be written as:

    Disk1  Disk2  Disk3  Disk4  Disk5  Disk6
    P0     P1     D0     D1     ...
  7. [SOLVED] ZFS replica ~2x larger than original

    @guletz Can you tell why you think the blocksize is divided by the number of data disks? Because data does not have to be spread onto every disk in the vdev? The only requirement is that the write size should be a multiple of (number of parity disks + 1)... (a worked example follows after this list)
  8. Proxmox 6.2 on ZFS on HPE Proliant servers

    @macleod for "3. zfs raid0 over hardware raid (i.e. raid6 onboard)" you can resolve your dilemma using `copies=2`. See: https://docs.freebsd.org/en/books/handbook/zfs/#zfs-quickstart You also get the use of the RAID cache memory (assuming you have battery backup, it is safe). On top of that, it is easy... (a copies=2 sketch follows after this list)
  9. Issues with nested virtualization on AMD Ryzen 7 CPU

    @mcdull my home PC/laptop arrived with Windows 10 Pro and Hyper-V was NOT enabled. Although I tried it at some point, enabling/disabling was as simple as going to Control Panel and enabling the Hyper-V Hypervisor by checking the checkbox, and disabling it by clearing the checkbox...
  10. Issues with nested virtualization on AMD Ryzen 7 CPU

    @mcdull my apologies. I did not know VMware already supported using Hyper-V. Anyway, I guess I got confused because you wrote when you apparently meant "hyper-v"... Just one question: how do you know that the OP is using the Hyper-V mode in Windows?
  11. pbs to backup physical machines ?

    @Yuri Weinstein Did you read the information in the 2nd link I provided?
  12. pbs to backup physical machines ?

    https://www.proxmox.com/en/proxmox-backup-server I have not used it myself, but it seems this is already possible? https://pbs.proxmox.com/docs/backup-client.html (a client sketch follows after this list)
  13. Nested Virtualization - VMware Workstation

    I could successfully run Win10->VirtualBox->PVE6.3->Windows10 using a Ryzen processor: https://pve.proxmox.com/wiki/Proxmox_VE_inside_VirtualBox
  14. Issues with nested virtualization on AMD Ryzen 7 CPU

    @mcdull that Microsoft article refers to Hyper-V support of nested virtualization. One could achieve nested virtualization in Windows using other software. @breaker_ nested virtualization on AMD has been possible for a long time, although I am not sure if VMware supports it. Here are instructions...
  15. [SOLVED] Host key verification failed when migrate

    @wolfgang that does not sound like a nice fix for somebody who has many nodes. Is there a way to automate this?
  16. Migrating VM using LVM to ZFS

    Hi @aaron and @Digitaldaz I was having the exact same problem and this solution helped me. Just out of curiosity, is it a technical limitation or a bug that only live migration can change storage? I would have thought it is easier to do things when the VM is turned off. Also, as...
  17. Recommended number of OSDs per node?

    Hello, the wiki says: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster Can somebody explain why 4 on each node? How is this calculated, and is it the same for HDD, SSD and NVMe? Does it also mean there should be 4 drives dedicated to the OSDs, or can a single drive be divided into 4...
  18. Enable Qemu Guest Agent?

    Is there any documentation for what "Run guest-trim after clone disk" does exactly?
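
For result 1, a minimal sketch of one common fix, assuming the pool and partition names from the snippet. Cache vdevs can be removed from a live pool, so the device can simply be re-added under its persistent by-id path:

```
# Remove the cache device under the short name zpool status shows,
# then re-add it via /dev/disk/by-id so the persistent name survives reboots.
zpool remove rpool sdi1
zpool add rpool cache /dev/disk/by-id/ata-MXXXX_BYYYYYY-part1
zpool status rpool   # the cache entry should now keep its ata-* id
```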
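
For results 3 and 4, a hedged sketch of the removal sequence; the device names are placeholders from the snippet. If the pool reports uncorrectable I/O errors right after the disks are pulled, clearing the error state before retrying the removal may help:

```
zpool status rpool                    # the log vdevs should show as FAULTED
zpool clear rpool                     # reset the pool's error counters
zpool remove rpool device1 device2    # use the names from `zpool status`
zpool status rpool                    # confirm the log vdevs are gone
```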
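
For result 5, a quick check that SMART responds on the sg node (the device path is an assumption). This only verifies smartctl itself; whether the PVE GUI can be pointed at /dev/sg* is the open question in the linked bug:

```
# Query SMART data through the SCSI generic node; -d scsi forces the
# SCSI command set instead of ATA passthrough.
smartctl -a -d scsi /dev/sg2
```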
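
For results 6 and 7, a worked example of the "(parity + 1) multiple" rule, assuming a 6-disk RaidZ2 with ashift=12 (4K sectors); the numbers are illustrative:

```
# 16K block:  16K / 4K = 4 data sectors (D0..D3) + 2 parity (P0 P1) = 6 sectors.
#             6 is already a multiple of (parity + 1) = 3, so no padding.
#
# 8K block:   8K / 4K = 2 data sectors (D0 D1) + 2 parity (P0 P1) = 4 sectors.
#             4 is rounded up to 6, the next multiple of 3, adding 2 padding
#             sectors: 8K of data consumes 24K raw, which is one way a replica
#             can end up roughly 2x larger than the original.
```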
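
For result 8, how `copies=2` is typically set; the dataset name is a placeholder. Note that it doubles space usage and protects against bad blocks, not against losing a whole disk of the underlying hardware RAID:

```
# Keep two copies of every block so ZFS can self-heal checksum errors
# even though the pool has no redundancy at the vdev level.
zfs set copies=2 rpool/data
zfs get copies rpool/data    # verify the property took effect
```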
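
For result 12, the client invocation from the linked documentation looks roughly like this; the user, host and datastore in the repository string are placeholders:

```
# Back up the root filesystem of a physical host as a .pxar archive;
# the repository format is user@realm@server:datastore.
proxmox-backup-client backup root.pxar:/ \
    --repository backup@pbs@pbs.example.com:datastore1
```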