I have read a lot of threads about ZFS vs ext4 and ZFS in general, but I'm still looking for some answers. Can you guys help me get more insight into creating a good setup for my home server? Currently I use an HP EliteDesk with 3 ext4 disks: 1 HDD for root, 1 SSD for VMs, 1 HDD for data...
I have noticed in the Ceph log (ceph -w) an increase of "slow requests are blocked" messages when I create a CephFS, e.g.
2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0
2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
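For anyone hitting the same thing, a hedged sketch of how the blocked requests can be narrowed down to specific OSDs (the OSD id osd.12 is just a placeholder, and the daemon command has to run on the host carrying that OSD):

# Show which OSDs currently report blocked/slow requests
ceph health detail
# Per-OSD commit/apply latency overview
ceph osd perf
# Dump the operations currently in flight on a suspect OSD
ceph daemon osd.12 dump_ops_in_flight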
I use the API interface to manage VMs.
I have a process that clones the collected VM IDs into a new pool.
Some cloned VMs will not be in the pool.
I use a workaround: after the clone processes I check which VMs are located in the pool and add the missing ones.
But the pool members' responses are sometimes not...
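A minimal sketch of how I would do both steps with pvesh, assuming placeholder names (node pve1, source VMID 100, new VMID 200, pool newpool):

# Clone and assign the new VM to the pool in the same API call
pvesh create /nodes/pve1/qemu/100/clone --newid 200 --pool newpool
# Afterwards, list what the cluster actually sees as pool members
pvesh get /pools/newpool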
This is my first time playing around with a Proxmox environment. Yesterday, I was trying to redo a Ceph pool storage and tried to remove a container from the cluster that is connected to the Ceph pool storage. Removing the container succeeded, but then I encountered this problem where I cannot...
I have a three-node cluster with Ceph. Every node has 7 OSDs. I know that I have seen the OSDs in the Ceph dashboard, but today I updated PVE to the latest version, and now there are no OSDs shown in the dashboard.
Normally the raw disks are listed in the dashboard, but now it is...
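A hedged sketch of the checks I would run to confirm the OSDs themselves are healthy and only the GUI view is stale; restarting the status daemons is an assumption on my part, not an official fix:

# Verify the cluster and OSDs from the CLI
ceph -s
ceph osd tree
# If the CLI looks fine, restart the PVE status/GUI services on the affected node
systemctl restart pvestatd pveproxy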
I am trying to create a Ceph pool through the GUI with size 3, min 2, pg_num 2048 according to the PG calc.
I get the following error:
TASK ERROR: mon_command failed - For better initial performance on pools expected to store a large number of objects, consider supplying the...
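The truncated part of that message is Ceph suggesting an expected object count for large pools. A hedged sketch of creating the same pool from the CLI instead (the pool name testpool is a placeholder, older PVE versions spell the first command pveceph createpool, and replicated_rule is the default CRUSH rule name on recent releases):

# Create the pool with the desired replication settings via the PVE tooling
pveceph pool create testpool --size 3 --min_size 2 --pg_num 2048
# Plain Ceph equivalent; the trailing number is the expected object count the warning refers to
ceph osd pool create testpool 2048 2048 replicated replicated_rule 1000000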
I come from FreeNAS and had a lot of data on a ZFS pool I created (2 x 8 TB disks), and now I have migrated to Proxmox. When I did zpool import it all went without error and the pool showed up under Server > Disks > ZFS.
But I noticed that it didn't appear in the left column where all my other storage...
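Importing the pool only makes it known to ZFS; Proxmox still needs a storage definition on top of it. A hedged sketch, where the pool name tank and the storage ID tank-data are placeholders for whatever the imported pool is called:

# Register the imported pool as a PVE storage for disk images and container rootfs
pvesm add zfspool tank-data --pool tank --content images,rootdir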
I'm new to Ceph and ran into an interesting warning message which I think I cannot interpret correctly, and I would appreciate any suggestion/comment on where to start rolling up the case and/or whether I'm missing something.
First of all, the configuration is structured as follows:
When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (online replicas for reads) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
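As far as I can tell, the replication settings can still be changed on the CephFS pools after they have been created; a hedged sketch, assuming the default pool names cephfs_data and cephfs_metadata:

ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2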
My use case for Ceph is providing central backup storage.
This means I will back up multiple databases into the Ceph storage cluster, mainly using librados.
There is a security requirement that should be considered:
DB owner A may only modify the files that belong to A; other files (owned by B, C or D)...
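The usual way I would try to map this onto Ceph is one cephx key per DB owner, restricted to its own pool or namespace; a hedged sketch with made-up names (client.dba, pool backups, namespace dba):

# Key for DB owner A, limited to its own namespace inside the backup pool
ceph auth get-or-create client.dba mon 'allow r' osd 'allow rw pool=backups namespace=dba'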
Hello, I have added a cluster node (Proxmox 5, no shared storage), then I create the pool:
zpool create -f -o ashift=12 STORAGE mirror /dev/sdc /dev/sdd
mountpoint '/STORAGE' exists and is not empty
use '-m' option to provide a different default
root@cvs7:~# zfs list
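The error just means the directory /STORAGE already exists and is not empty; a hedged sketch of two ways around it (the alternative mountpoint /mnt/STORAGE is only an example):

# Check what is already in the directory before removing anything
ls -la /STORAGE
# Or let the new pool mount somewhere else instead
zpool create -f -o ashift=12 -m /mnt/STORAGE STORAGE mirror /dev/sdc /dev/sdd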
Good afternoon, colleagues!
I have a problem with the implementation of Ceph storage.
There is a Supermicro SuperBlade chassis with 20 physical servers. Each server has 2 processors, 64 GB of RAM, 1x 32 GB SSD for the OS, and 1x SSD and 1x HDD for Ceph.
I would like to build...
I'd like to be able to give access to two or three people so they can install stuff on my Proxmox.
In order for these guys to be able to create a VM/LXC, they have to have Datastore.AllocateTemplate on the storage, which is only found in PVEDatastoreAdmin.
So in order for two people to create...
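Instead of handing out PVEDatastoreAdmin, a custom role with only the needed privileges can be defined and granted on the storage path; a hedged sketch with made-up names (role VMCreators, group vmusers, storage local-lvm):

# Role containing just the datastore privileges needed to create VM/CT disks and use templates
pveum roleadd VMCreators --privs "Datastore.AllocateSpace,Datastore.AllocateTemplate,Datastore.Audit"
# Grant that role to a group on the storage in question
pveum aclmod /storage/local-lvm --groups vmusers --roles VMCreators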
I have a Proxmox 4 cluster using local disks as storage (type = directory) and I'm currently testing ZFS pools on my future cluster (Proxmox 5).
On my old cluster (storage type = directory), I can cold-migrate LXC containers from one Proxmox host to another because every host has a...
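For migration the ZFS-backed storage just has to be defined with the same storage ID on every node; a hedged /etc/pve/storage.cfg sketch, with local-zfs and rpool/data as placeholder names:

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1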
I have created a pool + storage with the WebUI.
This worked well, meaning both pool and storage are available.
In the "storage view" I can see:
From a Ceph point of view, what is represented by <poolname>_ct and <poolname>_vm respectively?
It's not an RBD...
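My understanding (an assumption, not checked against the GUI code) is that the WebUI simply adds two storage entries in /etc/pve/storage.cfg pointing at the same Ceph pool, one for containers (with krbd) and one for VMs; roughly like this, with mypool as a placeholder:

rbd: mypool_ct
        pool mypool
        content rootdir
        krbd 1

rbd: mypool_vm
        pool mypool
        content images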
I have configured Ceph on a 3-node cluster.
Then I created OSDs as follows:
Node 1: 3x 1TB HDD
Node 2: 3x 8TB HDD
Node 3: 4x 8TB HDD
This results in following OSD tree:
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 54.20874 root default
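Given the mix of 1 TB and 8 TB disks, the CRUSH weight per host is quite uneven; a hedged sketch of how the distribution can be inspected and, if necessary, a single OSD's weight adjusted (osd.0 and the weight value are placeholders):

# Usage and CRUSH weight per OSD, grouped by host
ceph osd df tree
# Manually change the CRUSH weight of one OSD (the value roughly corresponds to TiB of capacity)
ceph osd crush reweight osd.0 0.93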
I just completed a new setup of Proxmox version 5.2 with 3 hosts and 18 OSDs. This time my cluster setup was not done by manual command line as before; with the 5.2 installation I used the GUI to complete my cluster setup, awesome :)
Then I finished the ceph pool setup with the following: