Search results for query: ZFS

  1. K

    SAS pool on one node, SATA pool on the other

    Feb 19 13:56:21 node2 pvestatd[1638]: could not activate storage 'SataPool', zfs error: cannot import 'SATAPool': no such pool available
    Feb 19 13:56:31 node2 pvestatd[1638]: zfs error: cannot open 'SATAPool': no such pool
    I meant Node1 currently has SATAPool and on Node2 I created SASPool -...
  2. A

    SAS pool on one node, SATA pool on the other

    Actual errors would be useful - "pool" can refer to the ZFS pool or the storage pool in PVE. No, each pool must have a different name on the same host. The same goes for PVE storages.
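    Since two pools cannot share a name on one host, ZFS allows renaming a pool at import time. A minimal sketch of that (hypothetical new name; the echo prefix makes this a dry run, since no real pool is available here - drop it on a real host):

    ```shell
    # Rename a pool at import time: `zpool import <old-name> <new-name>`.
    # Pool name taken from the thread; the new name is an assumption.
    OLD=SATAPool
    NEW=SATAPool2
    echo zpool export "$OLD"          # export on the host that currently has it
    echo zpool import "$OLD" "$NEW"   # re-import under a host-unique name
    ```

    The PVE storage definition would then also need to point at the new pool name.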
  3. G

    Best use of 4x NVMe drives

    No, I suggest one approach. ZFS is the way to go for multiple boot disks, but it's slower because of the overhead for checksums and integrity, even more so with non-datacenter drives. ZFS is enterprise-oriented.
  4. J

    ZFS - How do I stop resilvering a degraded drive and replace it with a cold spare?

    Replacement HDD gets here tomorrow. In the meantime I'd like to spare the good mirrored drive from working overtime to resilver the failing drive. I also don't understand how to swap the new drive for the old drive, since I don't have any open drive bays to add the new drive to the pool...
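    One common approach in this no-free-bay situation, sketched with assumed pool and device names (the echo prefix turns the commands into a dry run; remove it on a real system): offline the failing disk to stop resilver I/O against it, physically swap in the spare, then replace.

    ```shell
    # Hypothetical names -- adjust to the actual pool and devices.
    POOL=tank
    FAILING=/dev/sdb
    SPARE=/dev/sdc
    echo zpool offline "$POOL" "$FAILING"            # stop I/O to the failing disk
    # ...physically swap the failing disk for the cold spare in the same bay...
    echo zpool replace "$POOL" "$FAILING" "$SPARE"   # resilver onto the spare
    echo zpool status "$POOL"                        # watch resilver progress
    ```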
  5. UdoB

    Question about Feature Replication

    Just restore the VM of your choice on the other node. The storage type on the other node does not have to be the same as it was on the source. The result is basically a "replicated" VM from/to different filesystems ;-) There are CLI tools available, so you can script the whole process to...
  6. J

    I want nothing more than encrypted push sync

    ...a non-native sync mechanism for the datastore, you need to think about some way to ensure consistency and do a thorough test of everything, including the backup, sync and restore. And the OP wants encryption, so he would also have to set up an encrypted zfs dataset on his offsite...
  7. W

    Question about Feature Replication

    Hey, thanks for your answer. Offsite sync to another PBS I know, and it's really good. I mean replication to other PVE nodes without ZFS, and from backup sources.
  8. D

    [SOLVED] ZFS scrubbing does not work anymore on Kernel 6.14.11 (and cannot upgrade to 6.17.X because of NVidia)

    I had both kernels installed, but 6.14 was pinned and I was running the 550 NVidia drivers. When moving to 6.17, I had to upgrade the NVidia drivers to a version that supports that kernel. When I had a look at this problem upon the Proxmox 9 upgrade, there were no drivers supporting 6.17 properly... Now...
  9. W

    Question about Feature Replication

    Hey, I'd like to ask: is there a plan for the Proxmox Backup Server to be usable for replication? I know there is ZFS replication, but not everyone has ZFS. Veeam (on its roadmap) and Nakivo offer replication. What about this solid product, PBS? Replication from backups to any storage, so that running...
  10. G

    Tape Backup: understanding media sets

    ...PBS "inventory" page, which reports "2.87 GB". However, in reality, the filesystem to be backed up amounts to 665 GB of data (this I get with "zfs list", but also in the "content" tab of the PBS datastore page). So it's pretty evident that what was written to the tape is an incremental...
  11. I

    Offline USB backup with PBS – removable datastore too slow, which alternatives do you use?

    ...implement well. Performance requirements would have to be clarified. 9 TB is a lot for only 40 VMs (presumably files stored in the VMs instead of in shares). With snapshots and ZFS send this could be solved much more elegantly and faster, and it would then actually be incremental. But depending on the requirements it would need other...
  12. Z

    I want nothing more than encrypted push sync

    It's very possible that this is a fantastic solution, but I have never really gotten around to learning the advanced details of ZFS... Well, I have solved it now by switching encryption on for my backups, but I'll make sure to read about zfs send.
  13. B

    [SOLVED] Another io-error with yellow triangle

    ...8.1 (which includes PVE 9), the installer sets ARC to 10% of installed physical memory, clamped to max 16 GiB. This is written to /etc/modprobe.d/zfs.conf. On a 32 GiB system: the default ARC limit is ca. 3.2 GiB, not 8 GiB. The 8589934592 (8 GiB) value in the official Proxmox docs is just a...
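    For reference, the ARC cap this post refers to is the zfs_arc_max module parameter, set in bytes in /etc/modprobe.d/zfs.conf. An illustrative fragment using the 8 GiB figure quoted in the post (a sketch only, not a recommended value - and note the following reply disputes parts of this post):

    ```
    # /etc/modprobe.d/zfs.conf
    # zfs_arc_max is given in bytes; 8589934592 = 8 GiB (the value quoted above)
    options zfs zfs_arc_max=8589934592
    ```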
  14. fba

    [SOLVED] Another io-error with yellow triangle

    Would you be so kind as to not post untested AI-generated answers? Parts like this are simply not correct. This would potentially increase the memory used for ARC.
  15. D

    I want nothing more than encrypted push sync

    I might misunderstand the issue, but couldn't you just backup locally and then `zfs send` the backups to your parents? Again, apologies if I misunderstood!
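    The `zfs send` approach suggested here can be sketched as follows, with hypothetical dataset and host names (the echo prefix makes it a dry run; an incremental follow-up send with `zfs send -i` would follow the same pattern):

    ```shell
    # Hypothetical names -- adjust to the real dataset and remote host.
    DATASET=rpool/backups
    SNAP="$DATASET@offsite-1"
    REMOTE=parents-nas
    echo zfs snapshot "$SNAP"                                  # point-in-time snapshot
    echo "zfs send $SNAP | ssh $REMOTE zfs recv -u tank/offsite"  # push it over ssh (-u: don't mount)
    ```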
  16. P

    Best use of 4x NVMe drives

    I was told ZFS is bad for NVMe drives. My question is which option? There is ZFS (RAID0), ZFS (RAID1), ZFS (RAID10), ZFS (RAIDZ-1), ZFS (RAIDZ-2) and ZFS (RAIDZ-3), so it's not really obvious. One other question: will I be able to expand by adding another drive later using ZFS, or will I need to...
  17. O

    Best use of 4x NVMe drives

    That's obvious. ZFS. Maybe this can give some insights: https://www.youtube.com/watch?v=KMNS_JoHFhg
  18. B

    [SOLVED] Another io-error with yellow triangle

    ...it was taken after the important part. The I/O error happened down in the storage path before QEMU had much to say about it. Given your stack (ZFS --> LVM --> LUKS --> NVMe soft RAID), the most likely culprit is somewhere in that layering, not an obvious QEMU userspace crash. io_uring...
  19. T

    [SOLVED] After activating SR-IOV (NIC), the PVE GUI is not reachable or pingable

    ...-well- and I have a problem with my PVE node. Starting situation: Lenovo P330 Tiny retrofitted with a dual 10GbE NIC (SM AOC-STGN-i2s). PVE (ZFS) freshly installed (v9.1.0) and SR-IOV for the NIC successfully set up according to the following thread. After a reboot the PVE environment starts without problems...