pool

  1. M

    ZFS pool is enabled but not active

    I have a Proxmox node running version 7.1-7 with 4 SSD and 2 HDD storages configured with ZFS. Everything was working perfectly until yesterday, when I rebooted the node (this was not my first reboot since setup), and now one of the SSD storages, named 'zfs_ssd_1', is not activated. The GUI shows that...
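For a pool that is configured but inactive after a reboot, a common first check (a sketch, not specific to this node; the underlying pool name may differ from the storage name 'zfs_ssd_1') is:

```shell
# Check what is imported, what is importable, and the boot-time import service.
zpool status                               # pools currently imported
zpool import                               # pools visible but not yet imported
zpool import zfs_ssd_1                     # import the missing pool by name (assumed name)
systemctl status zfs-import-cache.service  # imports pools from the cachefile at boot
```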
  2. S

    One of my 20 HDDs stopped working in Proxmox, please help.

    All of a sudden, without my having changed anything, one mounted drive appears with a "?". Every drive was mounted via the GUI, and all 20 drives were merged into one large pool through a TrueNAS VM. Now one of the drives no longer appears to be mounted and I get the error: "unable to activate storage...
  3. A

    Can't add external hdd drive to TrueNAS Scale VM

    Hi all, I am new to Proxmox and need some help with how to attach all my USB external storage; in my case I use an "ORICO-3559C3" 5-bay HDD enclosure. My goal is to provide ZFS storage to the VMs in Proxmox (with TrueNAS Scale). I have VM 100 with TrueNAS Scale 22.12 on my bare-metal...
  4. D

    Adding vdevs to an existing pool

    I can't find the answer, and maybe that's down to my poor Google skills, but regarding adding vdevs: I have a RAIDZ2 of 6x4TB created in the GUI. If I want to add more drives, would it be correct to use "zpool add -f -o ashift=12 <pool> raidz2 /dev/sd*" and then list 6x4TB drives? This would...
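As a point of reference, the quoted command can be previewed with a dry run before committing; a sketch assuming a pool named tank and six placeholder device names:

```shell
# -n performs a dry run: it prints the resulting layout without changing the pool.
zpool add -n -o ashift=12 tank raidz2 \
    /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
# If the printed layout is what you expect, rerun without -n. Avoid -f
# unless you know why zpool is objecting; it suppresses safety checks.
```

Note that this adds a second RAIDZ2 vdev striped alongside the first; it does not widen the existing one.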
  5. O

    Proxmox 6.4.14 - Creating Additional SSD Pool under default HELP!

    Hello all, we have a cluster of 9 servers with an HDD Ceph pool. We recently purchased SSD disks for an SSD pool. The situation is that when we try to create a new crush rule for ssd: ceph osd crush rule create-replicated replicated-ssd default host ssd, we get the following...
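For reference, the rule-creation syntax takes the root, failure domain, and device class as separate arguments ('ssd-pool' below is a hypothetical pool name):

```shell
# Syntax: ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated-ssd default host ssd
# Point a pool at the new rule:
ceph osd pool set ssd-pool crush_rule replicated-ssd
```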
  6. A

    Slow write speeds in ZFS pool after updating to 7.2-3

    Hello there, I have a Z440 workstation that I've been using as a Proxmox server for over a year. It has 192 GiB of ECC RAM and an Intel Xeon E5-2687W v4 CPU. Proxmox is installed on an Intel DC P3700 PCIe SSD. The main storage pool is a RAIDZ1 with 5 identical 4 TB IronWolf HDDs. The pool...
  7. N

    Add new OSD to existing CEPH POOL

    Hi all, I have 4 Proxmox nodes with Ceph; only three are monitors. Each node has 3 SSDs and 2 (normal) HDDs, and two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node to the existing HDD pool, but it's taking more time than I expected. This is the...
  8. W

    Recreating RAIDZ-1 pool to add a disk

    Dear readers, the problem I'm facing right now is adding a 4th disk to my RAIDZ-1 pool. I have read in some articles that it is not possible to expand a current RAIDZ pool. To give some context, I'm running 3x6TB Toshiba X300 drives in a RAIDZ-1 configuration (I know, 3 drive...
  9. TwiX

    ZFS Datastore - Fragmentation with SSD?

    Hi, I have recently replaced all my HDDs with 10 SSDs for my ZFS datastore. It is now extremely fast, especially verify jobs (even if we can debate the value of verifying snapshots on a ZFS pool). The weird thing is that PBS reports fragmentation on the related pool; AFAIK, on an SSD...
  10. H

    [SOLVED] Single VM volume filling 100% of ZFS pool

    Hello! I'm seeing weird behavior in a Proxmox installation with a ZFS pool (RAIDZ2, 12x10 TB) holding a single VM with 72 TB allocated. I have noticed that since September, the volume usage went from 76.89 TB to 88.55 TB, filling the pool to 100%. The GUI (as you can see in the...
  11. F

    Ceph select specific OSD to form a Pool

    Hi there! I need a little help. I have a Proxmox + Ceph cluster with 4 nodes, with the following disks in each node: 2x 900 GB SAS 15K, 2x 300 GB SAS 10K, 2x 480 GB SSD, 2x 240 GB SSD. I need to make a pool for each class and size of disk. I know how to separate the device classes, but I...
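One approach (a sketch; the OSD IDs and the class name sas15k are made up) is to assign custom device classes per disk tier and build a rule per class:

```shell
# An OSD's class must be removed before it can be reassigned.
ceph osd crush rm-device-class osd.0 osd.1
ceph osd crush set-device-class sas15k osd.0 osd.1
# Rule that only places data on OSDs of that class:
ceph osd crush rule create-replicated replicated-sas15k default host sas15k
```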
  12. L

    zfs send/receive fails to unmount pool

    Hey all, I keep my backups in a ZFS pool ("s_pool") of which I take regular snapshots. These are then streamed elsewhere with zfs send ... | zfs recv .... Today, however, it is failing to stream because ZFS can't unmount s_pool. Running an export fails as well (saying that the device is...
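Two things often help here (a sketch with an assumed target pool backup_pool): receiving with -u so the destination is never mounted, and checking what holds the source busy:

```shell
# -u on recv leaves the received filesystem unmounted, avoiding
# 'cannot unmount' errors on later streams; -R replicates the snapshot tree.
zfs snapshot s_pool@now
zfs send -R s_pool@now | zfs recv -u -F backup_pool/s_pool
# See which processes keep the source mountpoint busy before an export:
fuser -vm /s_pool
```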
  13. P

    Ceph pool resize

    Hi. I can't find any info about resizing a Ceph pool. I added a few OSDs and they're working, but how do I add these OSDs to my pool to resize it? Thanks, Lukasz. Proxmox 6.3-6, Ceph 14.2.18
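Resizing after adding OSDs usually means raising the pool's placement-group count; a common rule of thumb (not from this thread) is OSDs x 100 / replica count, rounded up to a power of two. A sketch with illustrative numbers:

```shell
# Rule-of-thumb PG count, rounded up to a power of two.
osds=12
replicas=3
raw=$(( osds * 100 / replicas ))          # 400
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                                # 512
# Apply to a pool (pgp_num should follow pg_num on Nautilus):
# ceph osd pool set mypool pg_num 512
# ceph osd pool set mypool pgp_num 512
```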
  14. Bran-Ko

    Backup server installed on ZFS

    I installed Backup Server on 2x 1 TB SATA disks. How can I move this installation to another 2x 4 TB SATA disks? After that, I want to replace the old two 1 TB HDDs with another two 4 TB disks, so that I end up using 4x 4 TB SATA disks with RAIDZ ZFS (RAID5). I thought the only thing I needed to do was add a disk...
  15. K

    [SOLVED] ZFS - Used space larger than what is actually in use

    Hello everyone. I created a RAIDZ1 pool from 4x 3 TB HDDs, which gives 7.7 TiB of usable space. I use the pool, among other things, as storage for additional disks in the VMs (currently only files from the NAS VM). A df -h inside the NAS VM shows roughly 3 TB used. The Proxmox...
  16. M

    ZFS pools not mounting correctly

    Hello, after changing the physical case of my Proxmox VE server (for upgrade reasons), my ZFS pools aren't mounted properly. I have two pools, rpool and data, as seen here: NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT data 3.62T 211G 3.42T -...
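A frequent cause after hardware changes is a non-empty mountpoint directory blocking the mount; a sketch using the pool name from the post:

```shell
zfs mount -a                          # try to mount every dataset
zfs get -r mounted,mountpoint data    # find datasets that stayed unmounted
# If a mountpoint directory is non-empty, move the stray files aside,
# or overlay-mount on top of them:
zfs mount -O data
```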
  17. M

    ZFS or something else?

    Hi guys, I have read a lot of threads about ZFS vs ext4 and ZFS in general, but I'm still looking for some answers. Can you help me get more insight into creating a good setup for my home server? Currently I use an HP EliteDesk with 3 ext4 disks: 1 HDD for root, 1 SSD for VMs, 1 HDD for data...
  18. ssaman

    Bandwidth very low - 2.3 MB/s

    Hello everyone, we have a big problem with our Ceph configuration. For the past 2 weeks the bandwidth has dropped extremely low. Does anybody have an idea how we can fix this?
  19. C

    Ceph show "slow requests are blocked" when creating / modifying CephFS

    Hi, I have noticed in the Ceph log (ceph -w) an increase in "slow requests are blocked" messages when I create a CephFS, e.g. 2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0 2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
  20. K

    API interface and pool

    Hi, I use the API to manage VMs. I have a process that clones the collected VM IDs into a new pool. Some cloned VMs end up not being in the pool, so I use a workaround: after the clone processes, I check which VMs are in the pool and add the missing ones. But the pool membership responses are sometimes not...
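Pool membership can also be set explicitly through the same API after cloning, rather than relying on the clone call; a sketch using pvesh with hypothetical IDs:

```shell
# PUT /pools/{poolid} adds the listed VMs to the pool.
pvesh set /pools/mypool --vms 105
# GET /pools/{poolid} returns the current members for verification.
pvesh get /pools/mypool
```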
