We have a cluster of 9 servers with an HDD Ceph pool.
We recently purchased SSD disks for our SSD pool.
Now we need to create a new CRUSH rule for the SSDs:
ceph osd crush rule create-replicated replicated-ssd default host ssd
But we are getting the following...
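For reference, a minimal sketch of the usual device-class approach, assuming the new OSDs already report an ssd device class; the pool name below is a placeholder:
# One rule per device class, both rooted at "default" with host as the failure domain
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd crush rule create-replicated replicated-ssd default host ssd
# Point each pool at its rule
ceph osd pool set ssd-pool crush_rule replicated-ssd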
I have a Z440 workstation that I've been using as a Proxmox server for over a year. It has 192 GiB of ECC RAM and an Intel Xeon E5-2687W v4 CPU. Proxmox is installed on an Intel DC P3700 PCIe SSD. The main storage pool is a RAIDZ1 with 5 identical 4 TB IronWolf HDDs. The pool...
I have a 4-node Proxmox cluster with Ceph; only three nodes are monitors.
Each node has 3 SSDs and 2 (spinning) HDDs, and there are two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node to the existing HDD pool, but it's taking more time than I expected. This is the...
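A minimal sketch for watching the rebalance and tuning the backfill rate, assuming a reasonably recent Ceph release (the option name is the standard one, not taken from the post):
# Watch recovery/backfill progress and per-OSD fill levels
ceph -s
ceph osd df tree
# A higher osd_max_backfills speeds up backfill at the cost of client I/O
ceph config set osd osd_max_backfills 4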
The problem I'm facing right now is adding a 4th disk to my RAIDZ1 pool. I have read in some articles that it is not possible to expand an existing RAIDZ pool. To give some context, I'm running 3x 6TB Toshiba X300 drives in a RAIDZ1 configuration (I know, 3 drive...
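For context, a hedged sketch: adding a single disk to an existing RAIDZ vdev only works on OpenZFS releases that ship the raidz_expansion feature; on anything older the options are adding a whole new vdev or recreating the pool. Pool, vdev, and device names below are placeholders:
# Only with the raidz_expansion feature available on the pool
zpool attach tank raidz1-0 /dev/disk/by-id/new-6tb-disk
zpool status tank   # shows expansion progress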
I have recently replaced all my HDDs with 10 SSDs for my ZFS datastore.
It is now extremely fast, especially verify jobs (even if we can debate the value of verifying snapshots on a ZFS pool).
The weird thing is that PBS reports fragmentation on the related pool:
AFAIK, on an SSD...
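For what it's worth, the fragmentation figure comes straight from zpool and describes free-space fragmentation rather than file fragmentation; a quick way to check it per pool (pool name is a placeholder):
zpool list -o name,size,allocated,free,fragmentation,capacity tank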
I'm seeing this weird behavior on a Proxmox installation with a ZFS pool (RAIDZ2, 12x 10 TB) and a single VM with 72 TB allocated. I have noticed that since September the volume usage went from 76.89 TB to 88.55 TB, filling the pool to 100%.
The GUI (as you can see in the...
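A hedged sketch of the usual first checks; snapshots and RAIDZ2 padding overhead on zvols are the common culprits, and the pool name below is a placeholder:
# Break down where the space went (dataset data vs. snapshots vs. reservations)
zfs list -o space -r tank
# List snapshots with their individual usage
zfs list -t snapshot -o name,used,referenced -r tank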
Hi there! I need a little help.
I have a Proxmox + Ceph cluster with 4 nodes, with the following drives in each node:
2x 900GB SAS 15K RPM
2x 300GB SAS 10K RPM
2x 480GB SSD
2x 240GB SSD
I need to make a pool for each class and size of disk. I know how to separate the disks by class, but I...
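A minimal sketch of how this is usually done with one custom device class per disk type; the class and pool names below are made up for illustration:
# Tag each OSD with a class encoding type and size (repeat per OSD)
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class sas900 osd.0
# One CRUSH rule per class
ceph osd crush rule create-replicated rule-sas900 default host sas900
# Bind each pool to its rule
ceph osd pool set pool-sas900 crush_rule rule-sas900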
I have my backups in a zfs pool ("s_pool") of which I take regular snapshots. These are then streamed elsewhere with zfs send ... | zfs recv .... Today, however, it is failing to stream because zfs can't unmount s_pool. Running an export fails as well (specifying that the device is...
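A small sketch of how one would usually find what keeps the pool busy, assuming its datasets are mounted under /s_pool:
# Show processes holding open files or working directories on the pool
fuser -vm /s_pool
lsof +f -- /s_pool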
I installed Backup Server on 2x 1TB SATA disks. How can I move this installation to another 2x 4TB SATA disks? After that I want to replace the old two 1TB HDDs with another two 4TB disks. Finally I want to use four 4TB SATA disks with ZFS RAIDZ (RAID5).
I thought that the only thing I need to do is add a disk...
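For the disk-replacement part, a hedged sketch assuming a plain ZFS mirror and that the bootloader copies are handled separately; device names are placeholders. Note that a mirror cannot be converted in place to RAIDZ, so the final 4-disk RAIDZ would need a freshly created pool:
zpool set autoexpand=on rpool
# Replace one disk at a time; wait for the resilver to finish before the next
zpool replace rpool /dev/disk/by-id/old-1tb-1 /dev/disk/by-id/new-4tb-1
zpool status rpool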
From 4x 3TB HDDs I created a RAIDZ1 pool, which gives 7.7 TiB of usable space. I use the pool, among other things, as storage for additional disks in the VMs (currently only files from the NAS VM). A df -h inside the NAS VM shows just under 3 TB used. The Proxmox...
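A brief sketch for comparing what the guest reports with what the zvol actually consumes on the host (dataset name is a placeholder); refreservation, RAIDZ padding from the volblocksize, and missing TRIM in the guest usually explain the gap:
zfs list -o name,volsize,used,referenced,refreservation tank/vm-100-disk-0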
After changing the physical case of my Proxmox VE server (for upgrade reasons), my ZFS pools aren't mounted properly. I have two pools, rpool and data, as seen here:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
data 3.62T 211G 3.42T -...
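A hedged sketch of the usual recovery steps when a pool is visible but its datasets don't mount, or it fails to import at boot (pool name taken from the output above):
# If the pool did not import at all, re-import it by stable device IDs
zpool import -d /dev/disk/by-id data
# If it imported but the datasets are not mounted
zfs mount -a
# Make sure the pool ends up in the cachefile used at boot
zpool set cachefile=/etc/zfs/zpool.cache data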
I have read a lot of threads about ZFS vs ext4 and ZFS in general. I'm still looking for some answers. Can you guys help me get more insight into creating a good setup for my home server? Currently I use an HP EliteDesk with 3 ext4 disks: 1 HDD for root, 1 SSD for VMs, 1 HDD for data...
I have noticed in the Ceph log (ceph -w) an increase in "slow requests are blocked" messages when I create a CephFS, e.g.
2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0
2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
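A minimal sketch of how one would usually identify which OSDs hold the slow requests (the OSD id below is a placeholder):
# Which OSDs and PGs are implicated
ceph health detail
# Inspect in-flight operations on a suspect OSD (run on the node hosting it)
ceph daemon osd.12 dump_ops_in_flight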
I use the API to manage VMs.
I have a process that clones the collected VM IDs into a new pool.
Some cloned VMs end up not being in the pool.
I use a workaround: after the clone processes I check which VMs are located in the pool and add the missing ones.
But the pool membership responses are sometimes not...
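A hedged sketch of setting the pool directly at clone time instead of fixing it afterwards, shown here with pvesh against the same API endpoint (node, VMIDs, and pool name are placeholders):
pvesh create /nodes/pve1/qemu/100/clone --newid 9001 --pool mypool --full 1
# Verify membership afterwards
pvesh get /pools/mypool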
This is my first time playing around with a Proxmox environment. Yesterday I was trying to redo a Ceph pool storage and to remove a container from the cluster that is connected to that Ceph pool storage. Removing the container succeeded, but then I encountered this problem where I cannot...
I have a three-node cluster with Ceph; every node has 7 OSDs. I know I have seen the OSDs in the Ceph dashboard before, but today I updated PVE to the latest version, and now no OSDs are shown in the dashboard.
Normally the raw files are listed in the dashboard, but now it is...
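A quick sketch of the usual checks when the GUI stops showing OSDs that Ceph itself still reports as fine (service names are the standard Proxmox ones, not taken from the post):
# Confirm the OSDs are still up from Ceph's point of view
ceph osd tree
# Restart the PVE daemons the GUI relies on for status
systemctl restart pvestatd pveproxy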
I am trying to create a Ceph pool through the GUI with size 3, min_size 2, and pg_num 2048 according to the PG calculator,
but I get the following error:
TASK ERROR: mon_command failed - For better initial performance on pools expected to store a large number of objects, consider supplying the...
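For comparison, a hedged sketch of creating an equivalent pool from the CLI, which at least shows the monitor's full message (the pool name is a placeholder):
ceph osd pool create mypool 2048 2048 replicated
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2
ceph osd pool application enable mypool rbd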