pool

  1. Ceph: creating pool with storage

    Hello! I have created a pool + storage with the WebUI. This worked well, meaning both pool and storage are available. In the "storage view" I can see: <poolname>_ct and <poolname>_vm. Question: from the Ceph point of view, what is represented by <poolname>_ct and <poolname>_vm respectively? It's not a RBD... (see sketch 1 after the list)
  2. [SOLVED] Ceph: creating pool for SSD only

    Hi, if I want to create a pool with SSDs only, separated from the HDDs, I need to manipulate the CRUSH map and add another root. Is my assumption correct? THX (see sketch 2 after the list)
  3. Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    Hi, I have configured Ceph on a 3-node cluster and then created OSDs as follows: Node 1: 3x 1TB HDD, Node 2: 3x 8TB HDD, Node 3: 4x 8TB HDD. This results in the following OSD tree (see sketch 3 after the list):

        root@ld4257:~# ceph osd tree
        ID CLASS WEIGHT   TYPE NAME    STATUS REWEIGHT PRI-AFF
        -1       54.20874 root default
        -3...
  4. My POOLS MAX AVAIL is not full capacity in version 5.2

    Hello, I just completed a new setup of Proxmox version 5.2 with 3 hosts and 18 OSDs. This time my cluster setup was not done by manual command line as in previous installations; in 5.2 I used the GUI to complete my cluster setup, awesome :) When I finished the ceph-pool setup with the following: Size/min... (see sketch 4 after the list)
  5. can't access tty: job control turned off -> after Failed to mount rpool/ROOT/pve-1/XXXXXXXXX

    I got this error message during boot. Errors during boot: first I get the error "Failed to mount rpool/ROOT/pve-1/XXXXXXXXX", then I get the blocking error "/bin/sh: can't access tty: job control turned off". Last things done on Proxmox: I activated the Proxmox cluster installation and tried to bind... (see sketch 5 after the list)
  6. Ceph - EC-Pool Setup with 3 hosts

    Hello, in the next few days we are going to set up a small Proxmox/Ceph cluster with 3 hosts, each having 3 HDDs and 1 SSD. We want to create an EC pool using the 3 HDDs on each host. As far as I know we have to set up the EC pool using the formula n = k + m. If we use k=3 and m=3, then the... (see sketch 6 after the list)
  7. Ceph: Erasure coded pools planned?

    Ceph has provided erasure coded pools for several years now (they were introduced in 2013), and according to many sources the technology is quite stable. (Erasure coded pools provide much more effective storage utilization for the same number of drives that can fail in a pool, quite similarly to RAID5... (see sketch 7 after the list)
  8. [SOLVED] zfs/zed.d not sending email

    I have decided to automate the scrubbing of my pool following the advice provided by @tom in the post "ZFS Health", and because scrubbing is no good if you aren't notified of a problem, I looked at the suggestion at the end of that post about using /etc/zfs/zed.d/zed.rc. Information about this ZFS... (see sketch 8 after the list)
  9. Struggling with pools, storages...

    English is not my mother tongue, so I'm probably missing some evident subtleties between several word meanings. (My final goal: using Ceph storage on my 3-node cluster for containers and VMs.) I feel somewhat confused about how these are related: Ceph storage, Ceph pools, Proxmox storage, Proxmox... (see sketch 9 after the list)
  10. Installation problem with ZFS root on SD card with var mounted from another ZFS pool

    I am currently testing the installation on a DELL T-630 server. PVE is installed with root-ZFS on SD card modules (16 GB, dual cards, hardware mirrored). I am trying to move the /var folder to another ZFS file system on regular SAS HDDs. The root is on rpool on the SD card. var is on... (see sketch 10 after the list)
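
Sketch 1 (Ceph: creating pool with storage). Neither <poolname>_ct nor <poolname>_vm is a separate Ceph object: when the WebUI creates a pool "with storage", it writes two RBD storage entries into /etc/pve/storage.cfg that both point at the same Ceph pool, one for container volumes and one for VM images. A sketch of what that typically looks like, with "mypool" as a placeholder pool name:

    rbd: mypool_vm
            pool mypool
            content images
            krbd 0

    rbd: mypool_ct
            pool mypool
            content rootdir
            krbd 1

The _vm entry holds VM disk images and the _ct entry holds container volumes (mapped through the kernel RBD client because of krbd 1); the actual RBD images only appear inside the pool once guests get disks there.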
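Sketch 2 ([SOLVED] Ceph: creating pool for SSD only). Since Ceph Luminous you usually do not need to hand-edit the CRUSH map or add a second root; a CRUSH rule restricted to the "ssd" device class is enough. Rule name, pool name and PG count below are placeholders:

    # replicated rule that only chooses OSDs whose device class is "ssd"
    ceph osd crush rule create-replicated ssd_only default host ssd
    # create a pool that uses this rule (PG count picked arbitrarily here)
    ceph osd pool create ssdpool 128 128 replicated ssd_only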
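Sketch 3 (Ceph HEALTH_WARN: 512 pgs undersized). Undersized PGs usually mean the pool's replica count asks for more independent failure-domain buckets than the CRUSH rule can deliver, or that the weights are so uneven that placement fails. A few read-only commands for checking that, assuming default rule and pool names:

    ceph osd pool ls detail          # per-pool size, min_size and crush_rule
    ceph osd crush rule dump         # failure domain each rule replicates across
    ceph pg dump_stuck undersized    # which PGs are affected and how many OSDs they currently sit on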
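Sketch 4 (My POOLS MAX AVAIL is not full capacity in version 5.2). MAX AVAIL in ceph df is not raw free space: roughly, Ceph projects from the fullest OSD the pool can use, applies the full ratio, and divides by the replica count. Illustrative arithmetic with invented numbers (not taken from the thread):

    # 3 hosts x 6 x 2 TB OSDs  = 36 TB raw
    # replicated pool, size=3  -> at most 36 TB / 3 = 12 TB usable
    # then reduced further by the fullest OSD and mon_osd_full_ratio (default 0.95)
    ceph df detail    # POOLS section shows the resulting MAX AVAIL per pool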
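Sketch 5 (can't access tty: job control turned off). The busybox prompt appears because the initramfs could not mount a dataset under rpool. From that emergency shell the pool can usually be imported and inspected by hand; this assumes the default Proxmox layout with the root filesystem on rpool/ROOT/pve-1:

    # in the initramfs/busybox emergency shell
    zpool import -N rpool                              # import without mounting anything
    zfs list -o name,mountpoint,canmount,mounted       # see which dataset refuses to mount and why
    exit                                               # let the init scripts retry mounting the root dataset

If a single child dataset is the culprit, it can be dealt with after booting (for example by fixing its mountpoint or setting canmount=off); which dataset that is depends on the redacted name in the error message.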
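Sketch 6 (Ceph - EC-Pool Setup with 3 hosts). With only 3 hosts and crush-failure-domain=host, k + m cannot exceed 3, so k=3/m=3 will not place; a profile that fits is k=2, m=1 (or drop the failure domain to osd and accept that one host may hold several chunks). Profile and pool names and the PG count are placeholders:

    ceph osd erasure-code-profile set ec-3host k=2 m=1 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec-3host
    ceph osd pool set ecpool allow_ec_overwrites true   # needed before RBD can use the pool as a data pool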
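Sketch 7 (Ceph: Erasure coded pools planned?). The efficiency argument from the thread, spelled out with illustrative numbers: the usable fraction of raw capacity is 1/size for a replicated pool and k/(k+m) for an erasure coded one.

    replicated, size=3       ->  1/3            ~ 33 % of raw usable, tolerates 2 lost copies
    erasure coded, k=4, m=2  ->  k/(k+m) = 4/6  ~ 67 % of raw usable, tolerates 2 lost chunks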
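Sketch 8 ([SOLVED] zfs/zed.d not sending email). The settings in /etc/zfs/zed.d/zed.rc that usually matter for mail, shown with example values; ZED can only send mail if the configured mail program actually works on the host:

    ZED_EMAIL_ADDR="root"          # must resolve to a deliverable mailbox
    ZED_EMAIL_PROG="mail"          # any working mail/sendmail wrapper installed on the node
    ZED_NOTIFY_INTERVAL_SECS=3600
    ZED_NOTIFY_VERBOSE=1           # also notify on successful scrubs, not only on errors

After editing, restart the daemon with systemctl restart zfs-zed and confirm plain mail delivery works (for example echo test | mail -s test root) before blaming ZED.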
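Sketch 9 (Struggling with pools, storages...). A Ceph pool exists inside Ceph; a Proxmox storage is an entry in /etc/pve/storage.cfg that tells PVE how to use that pool for guest disks. Roughly, with placeholder names (the exact pveceph syntax differs slightly between PVE releases):

    pveceph createpool guestpool       # create the Ceph pool (newer releases: pveceph pool create guestpool)
    pvesm add rbd guest-rbd --pool guestpool --content images,rootdir

A single Ceph pool can back several PVE storage entries (that is exactly what <pool>_vm and <pool>_ct are), and the guest volumes themselves are RBD images created inside the pool.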
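Sketch 10 (ZFS root on SD card with var on another pool). One way to move /var onto a dataset of a second pool, assuming that pool is called datapool (placeholder) and the copy is done from a rescue shell or with the relevant services stopped so /var is not in use:

    zfs create -o mountpoint=/mnt/var-new datapool/var
    rsync -aAXH /var/ /mnt/var-new/
    zfs set mountpoint=/var datapool/var     # switch to the new location

The old /var contents on rpool should be moved aside first (older ZFS versions refuse to mount over a non-empty directory), and the main catch is boot ordering: datapool has to be imported and /var mounted before services start logging, otherwise writes land on the root filesystem underneath the mountpoint.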