Search results

  1. B

    ZFS pools over multiple VMs

    If you pass your drives to a guest directly, only that guest can access them. Access for any other VM or the physical host would have to go via FreeNAS' sharing.
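
    For reference, passing a whole physical disk to a guest typically looks like the sketch below; the VM ID and the disk ID are placeholders:

        # attach the raw disk directly to VM 100 - only this guest will see it
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_ID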
  2. B

    Starting up my first server but I have a lot of questions

    AFAIR, if you simply leave some space on the volume you're installing Proxmox to, you can also create the needed partitions for the ZIL and L2ARC afterwards. You'd use parted for that; note that the ZIL only has to be a few GB, depending on the size of your ZPOOL and the amount of...
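
    A minimal sketch of that, assuming a GPT-labelled /dev/sda with free space left by the installer, the new partitions coming up as sda4/sda5, and a pool called rpool (all placeholders):

        # carve the partitions out of the free space (device and sizes are placeholders)
        parted --script /dev/sda mkpart zil 100GiB 110GiB
        parted --script /dev/sda mkpart l2arc 110GiB 200GiB
        # attach them to the pool as SLOG (ZIL) and L2ARC
        zpool add rpool log /dev/sda4
        zpool add rpool cache /dev/sda5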
  3. B

    Starting up my first server but I have a lot of questions

    It really depends on what you actually want. Personally, I'd always put the hypervisor first, which would be Proxmox in this case. The rest of the setup depends on the amount of storage vs. the performance you want to get out of your ZPOOL - and on a single server, I'd always choose ZFS (okay...
  4. B

    Where is the local-zfs directory with all my VMs?

    ZFS on Proxmox makes use of ZVOLs instead of file-based images. So each time a new guest is created on ZFS storage, a new ZVOL is created for each disk. You can always use the qm tool to import a QEMU-compatible vdisk; qm will take care of creating a new ZVOL, and you can then attach the...
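
    A rough sketch, assuming the default PVE layout with a pool called rpool and a storage named local-zfs (VM ID, paths and names are placeholders):

        # the guest disks show up as ZVOLs, e.g. rpool/data/vm-100-disk-0
        zfs list -t volume
        # import an existing qemu-compatible image as a new ZVOL for VM 100
        qm importdisk 100 /path/to/disk.qcow2 local-zfs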
  5. B

    Resize ROOT zfs partition smaller

    Not a chance - ZFS will not accept a mirror drive which is significantly smaller than the existing one. It doesn't matter if there's still space left on the ZPOOL; the ZPOOL's parameters can't be changed. However, you can create a new ZPOOL and zfs send/recv your rpool over - then make that...
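
    The send/recv route could look roughly like this, assuming the new, smaller pool is called rpool2 and a recursive snapshot named migrate (pool and snapshot names are placeholders):

        # snapshot the old pool recursively, then replicate everything to the new pool
        zfs snapshot -r rpool@migrate
        zfs send -R rpool@migrate | zfs recv -F rpool2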
  6. B

    ZFS wheres my data?

    You don't need to do that. Just convert your XenServer vdisks to vmdk and ship them over to your PVE host. Then create a new "empty" guest and use the qm tool to import it. It will create a ZVOL for it on its own. This is the way I have moved my Xen vdisks to PVE using ZFS storage. If you want...
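
    Roughly, assuming the Xen vdisk is available as a raw image, the empty guest has VM ID 101 and the ZFS storage is named local-zfs (all placeholders):

        # convert the Xen vdisk to vmdk, then let qm create the ZVOL on import
        qemu-img convert -O vmdk /mnt/xen/disk.raw /var/tmp/disk.vmdk
        qm importdisk 101 /var/tmp/disk.vmdk local-zfs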
  7. B

    [SOLVED] Give container low level drive access

    Well, I don't think that a container is supposed to have such control over the hardware anyway. Something on the host must be in charge of the hardware, and that can't be - for obvious reasons - a container. This is what you can achieve with passthrough in a VM, with dom0 giving up control over...
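
    For the VM route, a minimal passthrough sketch, assuming IOMMU is enabled and the disk controller sits at PCI address 01:00.0 (VM ID and address are placeholders):

        # hand the whole controller to VM 100; the host gives up control over it
        qm set 100 -hostpci0 01:00.0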
  8. B

    kernel-4.15.18-8 ZFS freeze

    Well, I think you should probably head over to the ZFS discussion list and try to get some help there.
  9. B

    ZFS drive replaced and added new one, error

    Well… regarding the data redundancy… as long as this single-dev vdev is online in this zpool, ZFS will write data to it. However, as long as you're not storing any actual work data on the zpool, you should be relatively safe. But no… the data redundancy is currently not present, but this will...
  10. B

    ZFS drive replaced and added new one, error

    Then you'll probably need to update your PVE. On mine, running the latest version of PVE and thus ZFS, it says:

        root@vm:~# zpool get feature@device_removal vmpool
        NAME    PROPERTY                VALUE    SOURCE
        vmpool  feature@device_removal  enabled  local

    Once...
  11. B

    ZFS drive replaced and added new one, error

    Oops… my bad: zpool get feature@device_removal rpool
  12. B

    ZFS drive replaced and added new one, error

    Hmm… check this: get feature@device_removal rpool
  13. B

    ZFS drive replaced and added new one, error

    Yeah - sorry, I somewhat expected to see a vdev with an underlying device… What ZFS version is on your PVE host? The removal of a vdev should be possible in recent versions of ZFS.
  14. B

    ZFS drive replaced and added new one, error

    Sorry, I obviously hit the reply button while I was still researching the issue. Please get me the output of zpool status rpool from the PVE console.
  15. B

    ZFS drive replaced and added new one, error

    Please provide the output of zpool status rpool…
  16. B

    Replication failures on zfs

    No, not exactly. You can set the zpool's failmode from wait to panic, which would cause a reboot if such an issue occurred.
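
    For example, assuming the pool is called rpool (placeholder name):

        # panic (and thus reboot) instead of hanging on an unrecoverable I/O error
        zpool set failmode=panic rpool
        # verify the setting
        zpool get failmode rpool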
  17. B

    Replication failures on zfs

    The crux of SATA drives… SAS drives seem to handle these kinds of issues much better, but they're more expensive and also only come in smaller capacities. ZFS zpools are set to "wait" on error by default, which can cause such deadlocks.
  18. B

    ZFS drive replaced and added new one, error

    Sorry to have hit the nail on the head… Okay, first I'd check whether this server really doesn't have some extra option for using a DOM or other means of local storage to boot from. If it does, use that and install PVE on it. You may also get away with a durable USB thumb drive, on which you can...
  19. B

    Replication failures on zfs

    Well, ZFS actually can't get "overloaded" in the sense that it lacks CPU or RAM resources. The timeout rather happens because the zfs snapshot command does not return, and the script has a timeout "around" the zfs commands to be able to continue in a predictable manner. So, you'd first go ahead...
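
    One way to check that by hand, assuming the replicated dataset is rpool/data/vm-100-disk-0 (placeholder name):

        # time a manual snapshot to see whether the command itself stalls
        time zfs snapshot rpool/data/vm-100-disk-0@replication-test
        zfs destroy rpool/data/vm-100-disk-0@replication-test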
