That worked just fine! Thanks a ton!
After zfs create rpool/data and editing /etc/pve/storage.cfg:
root@pmx1885:~# cat /etc/pve/storage.cfg
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pmx1b,pmx2b,pmx2a,pmx1a,pmx1885
        sparse 1
adding pmx1885 to the storage named...
Hi Fabian! Thanks a lot for your help and clarification! Didn't know that restriction regarding storage for replication yet. Then, the error makes total sense.
I will try to zfs create rpool/data on this machine and then add it to local-zfs again to solve this.
Thanks :)
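For readers hitting the same error, the fix described above can be sketched as two commands. This is a hedged sketch assuming the node and storage names from this thread (pmx1885, local-zfs, rpool/data); adjust to your own setup.

```shell
# On the new node (pmx1885): create the dataset the storage expects.
zfs create rpool/data

# Then extend the storage definition to include the new node
# (equivalent to editing the "nodes" line in /etc/pve/storage.cfg).
pvesm set local-zfs --nodes pmx1a,pmx1b,pmx2a,pmx2b,pmx1885
```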
I have extended a test cluster with a new machine (called pmx1885) whose ZFS pool was set up after the install via the web GUI, not by the installer during installation. In this respect it differs from my other cluster nodes. The other cluster nodes (pmx2b,pmx1b,pmx2a,pmx1a)...
As there are some similar topics scattered around this forum, I will try to collect the information and update my list in the top post with the information gathered.
Hi, how should one deal with a service that refuses to shut down while shutting down PVE/Ceph nodes in a cluster? Is it safe to just kill it or power off the node?
Background: I am currently in process of planning a relocation of a ceph cluster, see...
Hi! Recently I took administration of an existing 3-node PVE/Ceph Cluster. First task/project is to plan a relocation of a small office branch datacentre. There is some network equipment, few physical servers and the PVE/Ceph cluster. Most of this is easy stuff, Ceph is quite new. Yeah!
As I am...
Okay, thanks for the updates on the ZFS options. I was a bit confused about compression: I thought PBS backups are already zstd-compressed, so ZFS compression might be redundant overhead, but the web-based GUI wizard for ZFS on PBS suggests compression=on as a default, so I was confused...
So as I now have a ZFS pool called "backupz2", what is the "Backing Path"?
proxmox-backup-manager datastore create my-store-on-backupz2 /backup/disk1/store1 <- What is the backing path (with ZFS)?
Thanks a lot!
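A hedged sketch of what the "backing path" turns out to be in this case: it is simply the mountpoint of the ZFS pool (or of a dataset below it). The paths below are assumptions based on ZFS and PBS defaults, not taken from the original post.

```shell
# Check where the pool is mounted. A manually created pool defaults to
# /backupz2; a pool created via PBS disk management typically lands at
# /mnt/datastore/backupz2.
zfs get mountpoint backupz2

# The mountpoint is the backing path for the datastore:
proxmox-backup-manager datastore create my-store-on-backupz2 /mnt/datastore/backupz2
```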
The error comes from the underlying zfs commands, not from proxmox-backup-manager, doesn't it? And since proxmox-backup-manager does not understand "-f", I should use the zfs commands directly?
By forcing it with "-f", does it just use the smallest disk's size for every member and otherwise work as expected? This is more of a safety switch for scenarios where one would accidentally mix totally different, wrong disks, isn't it?
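The workaround discussed above can be sketched as a direct zpool invocation. The device names are hypothetical; "-f" overrides zpool's size-mismatch safety check, and raidz2 then treats every member as if it were the size of the smallest disk.

```shell
# Hypothetical devices of slightly different sizes; -f forces creation
# despite the mismatch. Capacity is limited by the smallest member.
zpool create -f backupz2 raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```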
Hi! I tried creating a ZFS pool (raidz2) and a datastore via the GUI. It just fails with an unknown error. :oops:
All disks are initialized with GPT and should be "ready" for ZFS. After issuing the corresponding command via shell, proxmox-backup-manager disk zpool create backupz2 --add-datastore...
Thanks! A lot of feedback, I will dig through on the weekend!
As a reference to better understand these figures, I used the following: https://documentation.suse.com/en-us/ses/5.5/html/ses-all/ceph-monitor.html#monitor-watch
Hi, I have a 3-node PVE/Ceph cluster currently in testing. Each node has 7 OSDs, so there is a total of 21 OSDs in the cluster.
I have read a lot about never ever getting your cluster to become FULL - so I have set nearfull_ratio to 0.66
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.66...
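The ratio change described above can be applied and verified on a running cluster roughly like this (a sketch using the value from this post; pick a threshold that fits your capacity planning):

```shell
# Lower the nearfull warning threshold from the 0.85 default to 0.66.
ceph osd set-nearfull-ratio 0.66

# Confirm all three thresholds in the OSD map.
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
```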
Hi, sorry for digging this topic up. I am also looking for a way to reset a container to a fresh-install state while keeping its config and all dependencies. Is creating a new container still the only solution?
As far as I understood by reading the above; what I would do now is saving the...