We found a post describing an issue similar to the one we have when trying to move a VM from one Ceph cluster to another.
They suggested disabling Discard, which we did, and it seems to have fixed the issue.
The source cluster has the VM on Ceph RBD and the destination is CephFS, as recommended by the Proxmox team since RBD...
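For reference, a rough sketch of how the Discard option can be turned off from the CLI, assuming the disk is scsi0 on a storage named ceph-rbd and the VMID is 100 (all placeholder names, not our actual setup); the same thing can be done by unticking Discard on the disk in the GUI:

# re-apply the existing disk with discard disabled (the default is ignore)
qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,discard=ignore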
Hello, I did a test with an image that I successfully mirrored from Cluster1 to Cluster2.
I didn't migrate the VM configuration file, so for testing purposes I created a VM with the VMID matching the image file, but I am not seeing the disk available to attach after the VM is created.
So I...
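In case it helps: if the mirrored image follows the usual vm-<vmid>-disk-N naming on the destination pool, I believe a storage rescan should register it as an unused disk on the matching VM, which can then be attached from the Hardware tab (the pool name rbd and VMID 100 below are placeholders):

# list the images that arrived on the destination pool
rbd ls rbd
# scan the storages and add matching volumes as unused disks on VM 100
qm rescan --vmid 100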
I was initially doing my test on another blade, and I forgot to run this command on the second one I was working on:
ln -s /etc/pve/site-a.conf /etc/ceph/site-a.conf
Working like a charm now, thanks Ness1602.
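For anyone else hitting this, a quick sanity check I believe works once the symlink is in place, assuming the matching keyring for site-a is also present (site-a is just the example name from the wiki):

# --cluster makes the ceph CLI read /etc/ceph/site-a.conf instead of the local ceph.conf
ceph --cluster site-a -s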
I still have the same issue. I have each cluster on the same subnet and can ping each node from each cluster.
May 30 15:20:28 cl3-bl3 rbd-mirror[2459329]: unable to get monitor info from DNS SRV with service name: ceph-mon
May 30 15:20:28 cl3-bl3 rbd-mirror[2459329]...
There is no step in the wiki to define the primary IP, so I assume it reads the configuration file copied from the primary Ceph cluster and tries to connect from one Ceph public network to the other, right?
So no, the Ceph networks are actually not reachable from each other, but each Proxmox cluster is accessible from...
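From what I understand, that "unable to get monitor info from DNS SRV" message shows up when the daemon finds no mon_host entry for the peer cluster in the conf it reads, so it falls back to a DNS lookup. A quick hedged check, with site-a.conf being the copied peer config:

# the copied peer conf should carry site-a's monitor addresses and fsid
grep -E 'mon_host|fsid' /etc/ceph/site-a.conf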
Hello all, I'm at the last step of configuring mirroring (enabled the daemon service on the site-b node).
Do I need to have each cluster on the same Ceph public network, or can it be any other range? At the moment I don't have any communication on the public network or client network between the clusters...
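My current understanding (please correct me) is that the clusters do not need to share a subnet, but the node running rbd-mirror on site-b must be able to reach site-a's monitors and OSDs on their public network addresses. A quick reachability test, with 10.0.0.11 as a placeholder for one of site-a's mon IPs:

# Ceph monitors listen on 3300 (msgr2) and 6789 (msgr1)
nc -zv 10.0.0.11 3300
nc -zv 10.0.0.11 6789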
@dcsapak
Any update would be appreciated. This is getting critical for a lot of people who have been in the same situation since 2020-2021,
and this issue is present in both PVE and PBS.
@fabian
We completed the migration of the DB to a new one through a sync job.
We then discovered that, since we use namespaces, the permissions do not follow in a sync job, so we need to define a new owner with max-depth set to full.
Our future workaround when we configure 1-2-3, for now...
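For anyone else in the same situation, a sketch of the kind of sync job we mean, run on the target PBS; the remote, store and namespace names are placeholders, and note that ACLs on the namespace still have to be re-created by hand since they are not part of the datastore content:

# pull namespace customer1 from the old server, setting the owner on synced groups
proxmox-backup-manager sync-job create job-customer1 \
    --store store-new --remote pbs-old --remote-store store-old \
    --remote-ns customer1 --ns customer1 \
    --owner backup@pbs --max-depth 7 --schedule daily
# 7 is the maximum namespace depth; leaving --max-depth out should sync the full depth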
Good to know. I didn't think you could get reasonable performance, but you seem to mention that they might not be so bad in a mirror, similar to a RAID10 I assume.
Do you suggest adding a "special device" to the HDD pools?
Being able to expand a RAIDZ2 would still be the best possible scenario, yet it is not possible, sadly.
Do you have any other recommendation for the best expandable ZFS scenario that will not consume more than 2 disks for redundancy when expanding, or do we all need to wait until the attach function...
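If I end up adding a "special device", my understanding is it would look roughly like this; tank and the disk IDs are placeholders, and the special vdev has to be mirrored because losing it loses the whole pool:

# add a mirrored special vdev (metadata, optionally small blocks) to the HDD pool
zpool add tank special mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2
# optionally send small records to the SSDs as well
zfs set special_small_blocks=16K tank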
About monitoring, sure we will, and we'll put a quota in place as well.
Correct me if I'm wrong: the only way to expand my 4-disk RAIDZ2 is currently to create another RAIDZ2 and "add it" next to the existing raidz2-0?
Or I can create another mirror / RAIDZ1 of, let's say, 2 disks and then add it to the raidz2...
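Just to make sure I read that right, the "add it" part would be something like this (tank and the disk names are placeholders), which stripes a second RAIDZ2 vdev next to the first rather than widening raidz2-0:

zpool add tank raidz2 /dev/disk/by-id/disk5 /dev/disk/by-id/disk6 /dev/disk/by-id/disk7 /dev/disk/by-id/disk8
# zpool status tank should then list both raidz2-0 and raidz2-1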
I did nothing for now, no worries, as I can do absolutely nothing...
Deleting the checkpoints folder is not freeing any space. Not a single byte.
So the datastore is full and there is nothing we can do except install a new enclosure with new drives, create the stores, the namespaces, the permissions and...
In multiple places online we now see it available.
https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/
Does it mean the feature is not part of the OpenZFS version shipped in the current Proxmox distributions?
As @Dunuin mentioned, our only workaround if we cannot attach a new disk will...
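What I plan to check on our nodes, hedged since I'm not sure in which OpenZFS release the raidz_expansion feature flag ships with Proxmox:

# show the OpenZFS version installed on the node
zfs version
# list the feature flags this build supports; raidz_expansion is the one needed
zpool upgrade -v | grep -i raidz
# once supported, expansion is supposed to be a single attach to the raidz vdev, e.g.
# zpool attach tank raidz2-0 /dev/disk/by-id/new-disk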