pool

  1. Ceph select specific OSD to form a Pool

    Hi there! I need a little help. I have a Proxmox + Ceph cluster with 4 nodes, each node with the following disks: 2x 900GB SAS 15K, 2x 300GB SAS 10K, 2x 480GB SSD, 2x 240GB SSD. I need to make a pool for each class and size of disk; I know how to separate the disk classes, but I...
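    For a setup like this, one common approach is to give each disk type its own CRUSH device class and a matching replicated rule; a sketch, where the class, rule, and pool names are assumptions:

    ```shell
    # Assign a custom device class to the 15K SAS OSDs
    # (an existing class must be removed before a new one is set)
    ceph osd crush rm-device-class osd.0 osd.1
    ceph osd crush set-device-class sas15k osd.0 osd.1

    # One replicated rule per device class, failure domain = host
    ceph osd crush rule create-replicated rule-sas15k default host sas15k
    ceph osd crush rule create-replicated rule-ssd    default host ssd

    # Pools bound to those rules (pg counts are placeholders)
    ceph osd pool create pool-sas15k 128 128 replicated rule-sas15k
    ceph osd pool create pool-ssd    128 128 replicated rule-ssd
    ```

    Stock device classes only distinguish hdd/ssd/nvme, so separating the two SSD sizes would likewise need custom classes.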
  2. zfs send/receive fails to unmount pool

    Hey all, I have my backups in a ZFS pool ("s_pool") of which I take regular snapshots. These are then streamed elsewhere with zfs send ... | zfs recv .... Today, however, the stream fails because zfs can't unmount s_pool. Running an export fails as well (saying that the device is...
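    zfs send itself reads from a snapshot and never unmounts the source; it is usually zfs recv -F (or an explicit zpool export) that needs a dataset idle. A sketch for checking what holds the pool busy, with s_pool taken from the post and the other names assumed:

    ```shell
    # Show which processes keep the mountpoint busy (assumed mountpoint /s_pool)
    fuser -vm /s_pool

    # Incremental replication from read-only snapshots;
    # @prev/@now and the destination dataset are placeholders
    zfs send -i s_pool@prev s_pool@now | zfs recv -F backup_pool/s_pool
    ```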
  3. Ceph pool resize

    Hi. I can't find any info about resizing a Ceph pool. I added a few OSDs and they are working, but how do I add these OSDs to my pool to resize it? Thanks, Lukasz. Proxmox 6.3-6, Ceph 14.2.18
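    Newly added OSDs join existing pools automatically once they are in the CRUSH map; what may need adjusting is the PG count. On Ceph 14.x this can be done manually or left to the autoscaler; the pool name below is a placeholder:

    ```shell
    # Manual: raise pg_num/pgp_num so data spreads over the new OSDs
    ceph osd pool set mypool pg_num 256
    ceph osd pool set mypool pgp_num 256

    # Or let Nautilus manage the PG count itself
    ceph osd pool set mypool pg_autoscale_mode on
    ```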
  4. Backup server installed on ZFS

    I installed Backup Server on 2x 1TB SATA disks. How can I move this installation to another 2x 4TB SATA disks? After that I want to replace the old 2pcs 1TB HDDs with another 2pcs 4TB. Finally I want to use 4pcs 4TB SATA with RAIDZ ZFS (RAID5). I thought the only thing I need to do is add a disk...
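    Growing a mirror onto larger disks is usually done by replacing one disk at a time and letting the pool auto-expand; note that converting a 2-disk mirror into a 4-disk RAIDZ1 afterwards requires recreating the pool and restoring the data. A sketch with assumed pool and device names:

    ```shell
    # Allow the pool to grow once all members are larger
    zpool set autoexpand=on rpool

    # Replace one 1TB disk with a 4TB disk and wait for resilver
    zpool replace rpool /dev/sda /dev/sdc
    zpool status rpool   # repeat for the second disk after resilver completes
    ```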
  5. [SOLVED] ZFS - Used space larger than actually in use

    Hello everyone. From 4x 3TB HDDs I created a RAIDZ1 pool, which gives 7.7 TiB of usable space. I use the pool, among other things, as storage for additional disks in the VMs (currently only files from the NAS VM). A df -h inside the NAS VM shows roughly 3 TB in use. The Proxmox...
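    On RAIDZ1, zvols with a small volblocksize can consume noticeably more space than the guest sees due to parity and padding overhead, and refreservation adds to USED as well. A sketch for breaking down where the space goes; the pool and dataset names are placeholders:

    ```shell
    # Break USED down per dataset
    zfs list -o name,used,usedbydataset,usedbysnapshots,usedbyrefreservation -r tank

    # Properties that commonly explain the gap on RAIDZ
    zfs get volblocksize,refreservation tank/vm-100-disk-0
    ```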
  6. ZFS pools not mounting correctly

    Hello, after changing the physical case of my Proxmox VE server (due to an upgrade), my ZFS pools aren't mounted properly. I have two pools, rpool and data, as seen here: NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT data 3.62T 211G 3.42T -...
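    When a pool imports but its datasets stay unmounted after a hardware change, checking the mount properties and the boot-time import services is a common first step; a sketch using the data pool from the post:

    ```shell
    # Inspect mount state of all datasets in the pool
    zfs get -r mounted,canmount,mountpoint data

    # Mount everything that should be mounted
    zfs mount -a

    # Make sure the pool is imported at boot via the cachefile
    zpool set cachefile=/etc/zfs/zpool.cache data
    systemctl enable zfs-import-cache.service zfs-mount.service
    ```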
  7. ZFS or something else?

    Hi guys, I have read a lot of threads about ZFS vs ext4 and ZFS in general. I'm still looking for some answers. Can you guys help me get more insight into creating a good setup for my home server? Currently I use an HP EliteDesk with 3 ext4 disks: 1 HDD for root, 1 SSD for VMs, 1 HDD for data...
  8. Bandwidth very low - 2,3 MB/sec

    Hello everyone, we have a big problem with our Ceph configuration. For two weeks the bandwidth has been extremely low. Does anybody have an idea how we can fix this?
  9. Ceph show "slow requests are blocked" when creating / modifying CephFS

    Hi, I have noticed in the Ceph log (ceph -w) an increase of "slow requests are blocked" when I create a CephFS, e.g. 2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0 2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
  10. API interface and pool

    Hi, I use the API to manage VMs. I have a process that clones the collected VM IDs into a new pool. Some cloned VMs end up missing from the pool, so I use a workaround: after the clone processes I check which VMs are in the pool and add the missing ones. But the pool membership responses are sometimes not...
  11. Cannot delete this raw disk from Ceph storage pool

    Greetings all. This is my first time playing around with the Proxmox environment. Yesterday I was trying to redo a Ceph pool and tried to remove a container from the cluster that is connected to the Ceph pool storage. Removing the container succeeded, but then I encountered a problem where I cannot...
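    If the container is gone but its disk image is still in the pool, the leftover RBD image can usually be listed and removed directly; the pool and image names below are assumptions:

    ```shell
    # List images left in the pool
    rbd -p ceph-pool ls

    # Remove the orphaned disk; this fails while watchers still hold it open
    rbd rm ceph-pool/vm-101-disk-0
    rbd status ceph-pool/vm-101-disk-0   # shows remaining watchers, if any
    ```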
  12. Add restored VM to Pool via CLI

    Hello! Please help me: how do I add a VM restored from a backup dump to an existing pool via the CLI? I need it for automated tasks in Bamboo. Thanks!
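    Pool membership can be changed through the API, for example with pvesh; a sketch where the pool name and VM ID are placeholders:

    ```shell
    # Add VM 105 to pool mypool
    pvesh set /pools/mypool -vms 105

    # Verify membership
    pvesh get /pools/mypool
    ```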
  13. [SOLVED] Ceph Dashboard - Pool/raw files are not shown in browser

    Hi all! I have a three-node cluster with Ceph; every node has 7 OSDs. I know that I have seen the OSDs in the Ceph dashboard, but today I updated PVE to the latest version and now there are no OSDs shown in the dashboard. Normally the raw files are listed in the dashboard, but now it is...
  14. creating ceph pool according to pg calc

    Hello! I am trying to create a Ceph pool through the GUI with size 3, min. size 2, and pg_num 2048 according to the PG calc. I get the following error: TASK ERROR: mon_command failed - For better initial performance on pools expected to store a large number of objects, consider supplying the...
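    The truncated message is the monitor asking for the expected_num_objects hint on pools meant to hold many objects; on the CLI this hint can be passed when creating the pool. A sketch where the pool name, rule, and object count are assumptions:

    ```shell
    # Create the pool directly, supplying expected_num_objects as the last argument
    ceph osd pool create mypool 2048 2048 replicated replicated_rule 1000000
    ```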
  15. [SOLVED] Weird problem with imported ZFS disk.

    I come from FreeNAS and had a lot of data on a ZFS pool I created (2x 8TB disks), and now I have migrated to Proxmox. When I ran zpool import it all went without error and the pool showed up under Server > Disks > ZFS. But I noticed that it didn't appear in the left column where all my other storage...
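    An imported pool appears under Disks > ZFS, but it only shows up in the storage tree once it is registered in the storage configuration; a sketch where the storage ID and pool name are placeholders:

    ```shell
    # Register the imported pool as Proxmox VM/CT storage
    pvesm add zfspool mypool --pool mypool --content images,rootdir

    # Confirm it is now listed
    pvesm status
    ```

    Note this makes the pool usable for guest disks; existing file data would instead be exposed via a directory storage on a dataset's mountpoint.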
  16. ceph misplaced objects

    Hi all, I'm new to Ceph and ran into an interesting warning message which I think I cannot interpret correctly; I would appreciate any suggestion/comment on where to start unrolling the case and/or what I am missing. First of all, the configuration is structured as follows: 3 node...
  17. CephFS: How to create with different size?

    When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (online replicas for read) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
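    The CephFS data and metadata pools are created with defaults, but their replication can be changed afterwards like any other pool; a sketch assuming the default pool names:

    ```shell
    # Default CephFS pool names assumed (cephfs_data / cephfs_metadata)
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_data min_size 2
    ceph osd pool set cephfs_metadata size 3
    ceph osd pool set cephfs_metadata min_size 2
    ```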
  18. Howto define Ceph pools for use case: central DB backup storage

    Hi, my use case for Ceph is providing central backup storage. This means I will back up multiple databases in the Ceph storage cluster, mainly using librados. There's a security requirement to consider: DB owner A may only modify the files that belong to A; other files (owned by B, C or D)...
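    One way to enforce this with plain RADOS is a cephx key per DB owner whose OSD caps are restricted to that owner's pool or namespace; a sketch in which all client, pool, and namespace names are assumptions:

    ```shell
    # Per-owner keys restricted to their own pool
    ceph auth get-or-create client.db-a mon 'allow r' osd 'allow rw pool=backup-db-a'
    ceph auth get-or-create client.db-b mon 'allow r' osd 'allow rw pool=backup-db-b'

    # Namespace variant: one shared pool, one namespace per owner
    ceph auth get-or-create client.db-c mon 'allow r' osd 'allow rw pool=backups namespace=db-c'
    ```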
  19. Error adding zfs pool on a new cluster node

    Hello, I have added a cluster node (Proxmox 5, no shared storage), then I create the pool: zpool create -f -o ashift=12 STORAGE mirror /dev/sdc /dev/sdd, which fails with: mountpoint '/STORAGE' exists and is not empty, use '-m' option to provide a different default. root@cvs7:~# zfs list NAME...
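    The error itself points at the fix: either empty /STORAGE before creating the pool, or pick a different mountpoint with -m. A sketch based on the command from the post; the alternative path is an assumption:

    ```shell
    # Option 1: the default mountpoint must be empty
    ls -A /STORAGE   # remove or relocate the contents, then re-run zpool create

    # Option 2: choose a different mountpoint with -m
    zpool create -f -o ashift=12 -m /mnt/storage STORAGE mirror /dev/sdc /dev/sdd
    ```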
  20. Ceph storage mix SSD and HDD

    Good afternoon, colleagues! I have a problem with the implementation of Ceph storage. My configuration: a Supermicro SuperBlade chassis with 20 physical servers. Each server has 2 processors, 64 GB of RAM, 1x 32GB SSD for the OS, and 1x SSD and 1x HDD for Ceph. I would like to build...

