I have a PBS instance running inside a CT with the bind mount /zpool/pbs,mp=/zpool-pbs on node-A in a PVE cluster. Is it possible to move this CT to another node, node-B, in the same cluster without data loss? My plan is the following (roughly sketched in commands below the list):
- backup the CT to local storage on node-A
- copy backup to node-B
- sync...
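Roughly what I have in mind for the first steps (a sketch only; the CT ID 100, the storage name 'local' and the archive name are placeholders, not my real values):
# on node-A: stop-mode backup of the CT to local storage
vzdump 100 --storage local --mode stop
# copy the archive over to node-B
scp /var/lib/vz/dump/vzdump-lxc-100-<timestamp>.tar.zst root@node-B:/var/lib/vz/dump/
# on node-B: restore, then make sure the bind mount entry is there again
# (vzdump does not back up bind-mounted data, hence the separate sync step)
pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-<timestamp>.tar.zst --storage local
pct set 100 -mp0 /zpool/pbs,mp=/zpool-pbs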
@fabian thank you for your hint; I followed the steps but got stuck at this point:
pvecm add 192.168.11.12
Please enter superuser (root) password for '192.168.11.12': ************
detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster...
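If I understand the check correctly, the node that joins must not contain any guests, so something like this would be needed first (just a sketch; the storage name 'local' is a placeholder):
# see which guests on the joining node trigger the check
qm list
pct list
# back them all up somewhere safe before emptying the node and retrying pvecm add
vzdump --all --storage local --mode stop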
I renamed the hostname on a Proxmox node and then added it to a cluster. For some reason it showed up in the cluster under the old name, so I deleted it from the cluster with pvecm delnode. That was a blunder on my part; I didn't read the wiki carefully.
Now my question is: apart from reinstalling the...
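What I would look at first (a sketch only; 'oldname' and 'newname' are placeholders for the real hostnames) is what is left behind in the cluster filesystem:
# both the old and the new node directory may still exist here
ls /etc/pve/nodes/
# guest configs stranded under the old name can be moved to the new node's directory
mv /etc/pve/nodes/oldname/qemu-server/*.conf /etc/pve/nodes/newname/qemu-server/
mv /etc/pve/nodes/oldname/lxc/*.conf /etc/pve/nodes/newname/lxc/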
I ran systemctl restart networking on a node, and after that networking stopped working for all VMs. There was no real change to /etc/network/interfaces, just some cosmetic edits to make it more readable and a few comments.
The workaround was to restart all VMs. However, I'd like to know whether this is the expected...
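A side note in case it is relevant: my assumption is that with ifupdown2 (the default on current PVE) a reload would have applied the cosmetic edits without tearing down the bridges the VM tap interfaces hang off:
# apply /etc/network/interfaces changes without bouncing every interface
ifreload -a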
Update (for those who might end up in a similar situation):
# export pool (make sure no datasets are in use):
zpool export zpool2
# restart multipathd:
systemctl restart multipathd.service
# check mapper devs:
ls -l /dev/mapper # ensure that all mpath devs are present
# import pool:
zpool import...
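After the import I would also do a quick sanity check (my own verification step, not from any guide) to confirm the vdevs point at the mpath devices again:
zpool status -P zpool2 # -P prints the full vdev paths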
Can it be the case that ZFS tries to import the disks and that this causes multipath to fail? In the log, right before the mpath device (mpathk) failed, there is this message: cannot import 'zpool2': no such pool or dataset
Apr 28 15:57:45 teima multipathd[1563]: mpathj: addmap [0 39063650304...
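If the two really race each other at boot, one thing I am considering (an untested sketch, not a confirmed fix) is ordering the ZFS import after multipathd with a systemd drop-in:
mkdir -p /etc/systemd/system/zfs-import-cache.service.d
cat > /etc/systemd/system/zfs-import-cache.service.d/after-multipath.conf <<'EOF'
[Unit]
After=multipathd.service
Wants=multipathd.service
EOF
systemctl daemon-reload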
It seems it was a mistake to create the pool with multipath names like dm-name-mpatha. Perhaps a better way is to use something like dm-uuid-mpath-35000c500d87d8953?
Or is it better to skip multipath altogether? I thought it was a good idea since it can increase the robustness of the setup. But now...
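For reference, this is how I would map the friendly mpath aliases to the stable WWID-based links (a sketch; nothing here changes the pool):
# show which WWID each mpath alias corresponds to
multipath -ll
# the stable names that survive renumbering live here
ls -l /dev/disk/by-id/ | grep dm-uuid-mpath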
Recently I did a reboot and 2 disks came up with different names than the rest:
pool: zpool2
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
zpool2 ONLINE 0 0 0
raidz2-0...
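If only the recorded device names changed and the pool is otherwise healthy, my understanding is that an export and a re-import against the stable ids should straighten out the naming (a sketch; the pool must not be in use):
zpool export zpool2
zpool import -d /dev/disk/by-id zpool2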
I assembled a ZFS pool from a bunch of disks that should be retired by now.
I can see the point of ZFS in HBA mode: ZFS gets a "closer look" at the individual disks and hence can potentially handle errors better than when it only has access to a single vdev backed by hw RAID. In the latter case ZFS...
We see the advice "don't use ZFS on top of hardware RAID" everywhere. But I also see opinions that it's a myth.
This question is not about the rationale for avoiding (or using) hw RAID with ZFS. I'd like to hear from the Proxmox community about first-hand experience with ZFS on top of hw RAID, especially...
I configured 4 disks as follows:
1 disk as Non-RAID
3 disks as 1 single RAID0
perccli sees them as follows:
-------------------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name...
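For reference, perccli commands along these lines should show that view (the controller index /c0 is an assumption; adjust to your setup):
# overall view of drive groups and virtual disks
perccli /c0 show
# per physical drive state (in a VD vs. exposed as non-RAID)
perccli /c0/eall/sall show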
If Proxmox can see the SMART status of the drives, does that mean they are exposed directly to the OS as raw disks and not as RAID0 drives? (I admit I don't understand the difference well enough; I've just read in the ZFS docs that one is preferred over the other.)
Or is there a reliable way to check if it's raw...
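One way to probe this, if I understand the smartmontools docs right (device names and the megaraid index are placeholders):
# on a truly raw disk this should return full SMART data directly
smartctl -a /dev/sda
# if the disk sits behind the RAID layer, smartctl typically needs the controller pass-through instead
smartctl -a -d megaraid,0 /dev/sda
# what the OS itself reports about each block device
lsblk -o NAME,MODEL,SERIAL,SIZE,TRAN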
I am trying to configure ZFS on a server with a PERC H745 controller and 4 disks. The manual says this is a RAID card that supports eHBA (Enhanced HBA) mode. I set the card to that mode and set all the disks to non-RAID mode. After that, the disks show up in Proxmox as individual disks. However...
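In case it matters for the answers: once the disks are confirmed to be raw, what I have in mind is roughly the following (a sketch; pool name, layout and by-id paths are placeholders):
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa1 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa2 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa3 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa4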