The errors from zpool status show that the container's file is corrupted, but the CTs start fine.
Also, earlier it was CT 104 and now it's 102. I'm unable to reboot the CT; I have to reboot the entire node to clear the errors. On top of that, the Proxmox host is flooded with
I have a couple of ZFS pools in my Proxmox cluster, but in one of the pools all the drives are constantly throwing errors, getting degraded and then faulted.
I have run full long SMART tests on all the drives multiple times and they all come back with 0 errors. Endurance use is also in the 2-3%...
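In case it helps anyone reading along, a rough way to tell whether those errors come from the drives themselves or from the link/HBA is to compare the pool's own event log with the kernel log (pool name is a placeholder):
# zpool status -v tank
# zpool events -v tank | tail -n 50
# dmesg | grep -iE 'ata[0-9]+|i/o error'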
It appears that importing via /dev/disk/by-id works. I'm currently using an encrypted ZFS volume that is decrypted by a keyfile on system boot. Is this the right output?
# update-initramfs -k all -u
update-initramfs: Generating /boot/initrd.img-5.3.10-1-pve
Running hook script...
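For reference, a quick way to double-check that the pool now references the disks by their stable IDs rather than sdX letters (pool name is a placeholder) would be:
# zpool status -P tank
# zdb -C tank | grep path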
So, using zdb to look at the labels, the drives are indeed mapped wrongly.
devid: 'ata-<drive serial>-part1'...
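A sketch of the usual fix for this, re-importing the pool by stable IDs so the letters can't shift again (pool name is a placeholder, and this assumes the pool can be exported briefly):
# zpool export tank
# zpool import -d /dev/disk/by-id tank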
For some reason Proxmox seems to have lost 2 of the drives in my zpool even though they are physically present. I noticed that the drives may have had their device letters shifted (the disks currently show up as /dev/sdk and /dev/sdl).
# zpool status
status: One or more...
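For context, a quick way to confirm that the "missing" disks are still visible to the kernel under stable names, given that they currently sit at sdk/sdl:
# ls -l /dev/disk/by-id/ | grep -E 'sdk|sdl'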
Would two-layer encryption incur a big performance penalty or create any other problems? Even if I move encryption to the VM level, I'll still need some form of base encryption to protect the data if I need to RMA inaccessible drives.
Thank you for the reply. Is there a way to implement boot-time decryption without putting the key itself at risk? It seems pretty problematic if the host reboots and everything stays down until someone SSHs in to decrypt the zpool.
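To make the question more concrete, what I'm picturing is something along these lines at boot, i.e. the key is fetched from somewhere off the host instead of living on the host's own disks (the URL, pool and dataset names are placeholders):
# curl -sf https://keyserver.internal/tank.key -o /run/tank.key
# zfs load-key -L file:///run/tank.key tank/encrypted
# zfs mount -a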
I'm looking at the ZoL docs and they seem a little dated. I've set up a new zpool via the GUI and there was no option to add encryption. However, when running zpool get feature@encryption, the pool already shows the encryption feature as enabled. Does this mean there is already some sort of key encryption by default...
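If I understand it correctly, feature@encryption being enabled only means the pool supports encryption; nothing is actually encrypted until a dataset is created with it turned on, along these lines (dataset name and key path are placeholders):
# zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=file:///root/tank.key tank/secure
# zfs get encryption,keystatus tank/secure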
I tried to create an OSD on an unused disk, but the process failed with "can't open exclusively".
Now the disk is marked as used by "Device Mapper", and fdisk -l lists 3 OSDs (the 2 successfully created ones plus the failed one).
I can't zap it to clear it.
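For reference, the cleanup I understand is usually suggested for this is to remove the leftover device-mapper entry first and then zap the disk; the mapper name and device below are placeholders:
# dmsetup ls
# dmsetup remove <mapper-name-from-the-list-above>
# ceph-volume lvm zap /dev/sdX --destroy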
I have a bunch of public IPs and 3 HV servers running a hyper-converged Proxmox cluster. Current network config: eth0 = public internet-facing traffic, eth1 = private storage network for Ceph traffic.
Is it a good idea to bind the management traffic to a separate VLAN on the storage...
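To illustrate what I mean, management would just become a tagged sub-interface on the storage NIC, roughly like this in /etc/network/interfaces (VLAN ID and address are placeholders, and this assumes ifupdown2 or the vlan package so the dotted notation works):
auto eth1.50
iface eth1.50 inet static
        address 10.10.50.11/24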
Will look into this
An IP gets passed during cloud-init, so surely there would be a way to store that and bind it? The way XCP-ng does it is not automatic either; there is a field in XO that lets you assign an IP to a VM and lock traffic to only that IP.
How would the iptables rules work? I assume they would reside at the HV level? I know how to configure this if there were a device sitting in between acting as a gateway, but I'm not sure how to do it when the VMs are exposed directly as the next hop on the route.
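To make that concrete, what I'm imagining on the hypervisor is a per-VM rule on the VM's tap interface that drops anything not sourced from its assigned address; the VMID, tap name and IP below are placeholders, and this assumes bridged traffic is handed to iptables via the br_netfilter sysctl:
# qm config 102 | grep ipconfig0
# iptables -I FORWARD -m physdev --physdev-in tap102i0 ! -s 203.0.113.10 -j DROP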
Consider implementing such a feature as...
Still completely new to this ecosystem. I'm trying to create a PVE-authenticated user with permission to create other users that have PVEVMUser permissions (a rough CLI equivalent is sketched after the steps below).
Steps I've taken:
1. Datacenter > Permissions > Users > Add (Realm: PVE)
2. Datacenter > Pools > Create
3. Pool_Name > Permissions >...
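For reference, my rough understanding of the CLI equivalent of the above, with placeholder names (I'm not certain /access is the right path for the user-management part):
# pveum useradd useradmin@pve
# pveum passwd useradmin@pve
# pvesh create /pools --poolid Pool_Name
# pveum aclmod /access -user useradmin@pve -role PVEUserAdmin
# pveum aclmod /pool/Pool_Name -user useradmin@pve -role PVEVMUser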
I'm looking for something that can be applied generally at the DC level so that, once an IP has been assigned to a VM at setup, changing the IP from within the VM results in no traffic being routed.
I do not even see an option to bind an IP to a VM in the GUI during setup; am I looking in...
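What I'm picturing is essentially what the per-VM firewall config can already express, just applied automatically from the datacenter level; per VM it would look roughly like this in /etc/pve/firewall/<vmid>.fw (the IP is a placeholder):
[OPTIONS]
enable: 1
ipfilter: 1

[IPSET ipfilter-net0]
203.0.113.10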
I've noticed that an authenticated session actually persists across different nodes.
Anyway, I've also managed to find a workaround for the ticket issue: I've ditched HAProxy and set up an nginx round-robin reverse proxy with keepalive 1.
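A sketch of what that config looks like, with node IPs and TLS details as placeholders (the Upgrade headers are there so the noVNC websocket keeps working):
upstream proxmox {
    server 10.0.0.11:8006;
    server 10.0.0.12:8006;
    server 10.0.0.13:8006;
    keepalive 1;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key go here
    location / {
        proxy_pass https://proxmox;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_ssl_verify off;
    }
}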