So if I have a folder that has backups from another PBS instance, can't I just use it in a new PBS instance?
I mean, can't I just point the second instance to the same directory?
Hmm, I don't think this would be a problem. The datastore would be in a directory on CephFS, so it would be available on the other nodes as well.
So in the event of a complete failure of the node running PBS, I could install it on another node, point it to the same datastore, and I could use...
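Something like this is what I have in mind on the replacement node (the datastore name and path are placeholders, and I'd still have to check whether pointing the create command at an existing chunk store needs any extra steps):
proxmox-backup-manager datastore create store1 /mnt/cephfs/pbs-datastore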
Yes, I saw that, and I also understand that it's not recommended. But on this point: is the recommendation related to the fact that if the node fails you can't access the backups? Or some other problems, like package conflicts, etc.?
The plan was to use PBS as a container or VM, and inside it I would have access to all the backups (via Ceph). I would then use the tape device to write backups to it.
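If I get that far, I assume the actual tape writing would be driven with proxmox-tape, roughly along these lines (the datastore and media pool names are made up, and I haven't tested any of this yet):
proxmox-tape backup store1 tape-pool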
Thanks for the reply.
Yes, I'm aware that I could add another SAS controller and pass that through to the VM.
I'm trying to see if there is an alternative solution to this.
So yes, I'm using Debian, as it's a Proxmox host.
I don't actually know what options I have on Debian in order to use this tape drive...
So I have a tape drive connected to an LSI SAS controller.
The controller has 4 ports.
2 of them are used for an external enclosure, and 2 of them for this tape drive.
So what I need to do is somehow pass the tape drive to a VM.
I cannot pass the whole PCI device because of the external...
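In the meantime, on the host itself the drive should at least show up as a standard SCSI tape device. This is just how I'd check it from Debian (assuming the mt-st and lsscsi packages, and that the drive appears as /dev/nst0):
apt install mt-st lsscsi
lsscsi
mt -f /dev/nst0 status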
So that command returns:
root@int2:~# zpool status -v
no pools available
And I only have these drives:
I did not use ZFS on this server.
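Just to rule things out, I guess I could also check whether there's an exported pool lying around on one of those disks (zpool import with no arguments only scans and lists importable pools, it doesn't import anything):
zpool import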
By the way, I'm talking about Proxmox VE, not PBS (I just saw I posted in the PBS section).
So I have a test cluster with 3 nodes.
I keep receiving this error from one node:
ZFS has detected that a device was removed.
impact: Fault tolerance of the pool may be compromised.
eid: 1
class: statechange
state: UNAVAIL
host: int2
time: 2023-08-07 19:53:42+0300
vpath...
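To dig a bit deeper I suppose I could also dump the ZED event log on that node to see exactly which vdev triggered the statechange:
zpool events -v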
So I think it's the second partition.
But this is what I get when running proxmox-boot-tool status:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
9AD2-D507 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve...
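If the second partition really is the ESP, I understand I'd have to set it up on the new disk as well, roughly like this (assuming it's sdw2 on my system; not yet tested on my side):
proxmox-boot-tool format /dev/sdw2
proxmox-boot-tool init /dev/sdw2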
So I used detach:
zpool detach rpool ata-ST9500530NS_9SP25SRN
This removed it from the pool, and then I wiped sdw.
I then did this:
sgdisk /dev/sdv -R /dev/sdw
sgdisk -G /dev/sdw
zpool replace -f rpool ata-ST9500530NS_9SP264V3-part3 ata-ST9500530NS_9SP25SRN-part3
After it replaced the disk, are...
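For reference, after the replace the resilver progress can be watched with plain ZFS:
zpool status rpool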
So I have 2 drives in a ZFS pool.
/dev/sdv is failing and I want to replace it.
I don't remember when I set this up, but the thing was, I was expecting the OS to boot if one of the drives was missing.
I was expecting that if sdv is missing, the OS will boot from sdw. But it doesn't have a boot...
I'm gonna check this out as well.
Also, having this integrated into Proxmox would be like getting past the last boss in the ultimate level in newgame+ :D
I was talking about the fact that Ceph supports NFS.
So why use another VM to expose CephFS via NFS when Ceph+Ganesha can do that? Maybe just to have some isolation, but if everything is internal then I don't really see a problem using Ceph+Ganesha directly.
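What I have in mind is the built-in mgr NFS module, something roughly like this (cluster ID, pseudo path and fs name are placeholders, and the exact syntax of the export command differs between Ceph releases, so this is from memory and should be checked against the docs for your version):
ceph mgr module enable nfs
ceph nfs cluster create mynfs
ceph nfs export create cephfs mynfs /pbs cephfs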
So just an update.
This was easier than I thought it would be.
So I installed a Debian 11 container on the backup cluster.
I installed PBS and added a mount point inside it that was pointing to a folder in the CephFS instance I had in the backup cluster.
I then created a datastore in a directory in...
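For anyone else trying this, the gist of it was a simple bind mount into the container plus the usual datastore creation inside it (container ID, paths and datastore name below are just examples, adjust to your setup):
pct set 101 -mp0 /mnt/pve/cephfs/pbs,mp=/datastore
proxmox-backup-manager datastore create store1 /datastore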