So I have a test cluster with 3 nodes.
I keep receiving this error from one node:
ZFS has detected that a device was removed.
impact: Fault tolerance of the pool may be compromised.
eid: 1
class: statechange
state: UNAVAIL
host: int2
time: 2023-08-07 19:53:42+0300
vpath...
So I think it's the second partition.
But this is what I get when running proxmox-boot-tool status:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
9AD2-D507 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve...
So I used detach.
zpool detach rpool ata-ST9500530NS_9SP25SRN
This removed it from the pool, and then I wiped sdw.
I then did this:
sgdisk /dev/sdv -R /dev/sdw
sgdisk -G /dev/sdw
zpool replace -f rpool ata-ST9500530NS_9SP264V3-part3 ata-ST9500530NS_9SP25SRN-part3
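If you want to follow the resilver after the replace, zpool status shows the progress:
zpool status rpool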
After it replaced the disk, are...
So I have 2 drives in a ZFS pool.
/dev/sdv is failing and I want to replace it.
I don't remember when I set this up, but the thing is I was expecting the OS to boot if one of the drives was missing.
I was expecting that if sdv is missing, the OS will boot from sdw. But it doesn't have a boot...
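If it is just the boot partition that is missing on the second disk, I think the fix would be something along these lines (sdw2 is an assumption, use whatever the boot partition actually is on your second disk):
proxmox-boot-tool format /dev/sdw2
proxmox-boot-tool init /dev/sdw2
proxmox-boot-tool status
I'm not sure how that interacts with a legacy BIOS boot, though.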
I'm gonna check this out as well.
Also, having this integrated into Proxmox would be like getting past the last boss in the ultimate level in New Game+ :D
I was talking about the fact that Ceph supports NFS.
So why use another VM to expose CephFS via NFS when ceph+ganesha can do that? Maybe just to have some isolation, but if everything is internal then I don't really see a problem using ceph+ganesha directly.
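Just as a sketch of what I mean by ceph+ganesha, a CephFS export in ganesha.conf looks roughly like this (Export_Id, the Pseudo path and the cephx user are just example values):
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
        User_Id = "ganesha";
    }
}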
So just an update.
This was easier than I was thinking.
So I installed a Debian 11 container on the backup cluster.
I installed PBS and added a mount point inside it pointing to a folder in the CephFS instance I had in the backup cluster.
I then created a datastore in a directory in...
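Roughly the kind of commands involved (container ID, paths and datastore name are made up, adjust to your setup):
pct set 200 -mp0 /mnt/pve/cephfs/pbs,mp=/mnt/datastore
# then, inside the container:
proxmox-backup-manager datastore create backup-ceph /mnt/datastore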
Well, if you have a VM connected to CephFS and then expose it via NFS (from the VM), that defeats the whole idea.
You could use OpenMediaVault or something like that.
Ideally you could expose an NFS export from CephFS directly.
So I'm guessing there is no safe way to use NFS Ganesha with Ceph and Proxmox, right?
Having the ability to expose NFS shares from a CephFS directly from the Proxmox dashboard would be heaven :)
Yes, you would need to sync the config files of the VMs and containers that you are mirroring.
You could do this with a simple cron job that runs an rsync command.
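Something along these lines, for example (the destination path is just a placeholder, and it assumes ssh keys are set up between the nodes):
# /etc/cron.d/sync-guest-configs on the backup node
*/15 * * * * root rsync -a root@node1-master:/etc/pve/qemu-server/ /root/pve-configs/qemu-server/
*/15 * * * * root rsync -a root@node1-master:/etc/pve/lxc/ /root/pve-configs/lxc/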
If node1-master comes online again and you want to keep the images that ran on the backup cluster, then you would need to get...
Like I said in some of my previous posts, I have tried Ceph mirroring before and followed the instructions available here.
This worked great, but only if you used virtual machines and not containers. It does not work with container disks, because you would need to enable the...
First, about my last post:
it's not enough to install rbd-nbd.
After you install it, you can use it to map the image to a device.
Something like this:
rbd-nbd map storage_vms/vm-102-disk-0
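rbd-nbd prints the block device it attached, and from there you can mount it like any other device (the device and mount point below are just examples):
# suppose rbd-nbd map printed /dev/nbd0
mount /dev/nbd0 /mnt/vm-102-disk-0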
Now, from my understanding, it seems containers need krbd in order to work (if using Ceph images).
The...
So I saw that the image cannot be mounted.
I get the same error if I run something like this:
rbd map vm-102-disk-0 --pool storage_vms
But if I install rbd-nbd I don't get any errors.
I'm not sure what to think of it.
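My best guess is that the kernel client (plain rbd map) refuses images that use features it does not support, like journaling, while rbd-nbd goes through librbd and handles them in userspace. You can check which features an image has with:
rbd info storage_vms/vm-102-disk-0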
So I have been testing for a while what options I have to replicate VMs and, because I was using Ceph, tried RBD mirroring, following the instructions found here.
Everything worked fine until I tried to do this on images that were used by LXC containers. Then I ran into all sorts of problems. The...
I haven't been able to find any solution to this, but the biggest problem is not the backup, it's the fact that the LXC containers don't start anymore if they have that feature enabled.
That means I cannot use mirroring with Ceph.
Did anyone manage to do something about this?
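In case it helps anyone searching, turning the feature back off on an image looks like this (pool/image name is the one from my earlier posts):
rbd feature disable storage_vms/vm-102-disk-0 journaling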
So I have configured mirroring on some images in a Proxmox cluster with Ceph.
I have enabled journaling on all images. But now I have a problem when backing up an LXC container that has journaling enabled on its images.
I get this:
I have Proxmox 6.4-13 and Ceph 14.2.22.
Has anyone encountered this?
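For reference, this is roughly how journaling gets enabled per image (pool/image name is just an example from my other posts; the image also needs the exclusive-lock feature):
rbd feature enable storage_vms/vm-102-disk-0 journaling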