Usually this is a BIOS/UEFI setting. If not, the GPU in the first slot is typically used as the default output.
If you can figure out which output it is, you can still make it available for passthrough by setting the following GRUB parameter:
GRUB_CMDLINE_LINUX_DEFAULT="_OTHER PARAMETERS_...
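For comparison, a complete line for passthrough often ends up looking roughly like this. This is only a sketch for an Intel system booting via UEFI; AMD systems use amd_iommu=on instead, and the framebuffer parameter depends on how your host boots. Don't forget to run update-grub afterwards:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=efifb:off"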
It only needs to be on one disk (although that would be unwise in terms of disk failure).
Technically it can be on all disks in the pool, so the server can boot from any of them.
ZFS should handle the rest (I'm assuming your root is on ZFS too).
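On current Proxmox VE installs with ZFS root, the boot partitions on the other pool members can be kept in sync with proxmox-boot-tool (older releases shipped it as pve-efiboot-tool). A rough sketch, where /dev/sdb2 is just an example ESP partition:

proxmox-boot-tool status            # show which ESPs are currently configured
proxmox-boot-tool format /dev/sdb2  # format the ESP on the additional disk
proxmox-boot-tool init /dev/sdb2    # make this disk bootable as well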
Hi, it doesn't really matter.
If ZFS finds the pool anyway, you don't need to worry.
Otherwise, you can tell ZFS where to find the disks for the pool with the -d option of the import command.
for example: zpool import -d /dev/disk/by-id/ rpool
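If you're not sure about the pool name, the same command without a name only scans and lists the pools it can find, so you can check before actually importing:

zpool import -d /dev/disk/by-id/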
You'll need to delete it.
Everything can be found here: https://pve.proxmox.com/wiki/Cluster_Manager
Under "Remove a Cluster Node"
If, for whatever reason, you want this server to join the same cluster again, you have to reinstall Proxmox VE on it from scratch and then join it, as explained in...
Or you could boot an Ubuntu live system (or something similar) and try to mount the original Proxmox installation on the disk. Then you'll have access to the files in /etc/pve. But there shouldn't be a lot of relevant config in your case.
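A rough sketch of that, assuming a default LVM-based install (volume group and paths may differ on your system). Note that /etc/pve itself will look empty on the mounted disk, because it is normally a FUSE mount backed by the cluster database:

vgchange -ay pve                 # activate the volume group of the old install
mount /dev/pve/root /mnt         # mount the old root filesystem
ls /mnt/var/lib/pve-cluster/     # config.db here holds what you normally see under /etc/pve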
I just want to interrupt and add a couple of notes:
- Proxmox only wipes disks that are selected for installation during setup.
If you don't select your 4TB backup drive, its data will not be wiped.
- All the data and the VM configuration should be in the backup files (vzdump...
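And to restore from such a backup on the new install, something like the line below usually does it; the file name and target storage are just placeholders (for containers, pct restore works the same way):

qmrestore /mnt/backup/vzdump-qemu-100-2020_01_01-00_00_00.vma.zst 100 --storage local-lvm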
Hi Jamie,
generally, using ZFS with thin provisioning works as designed. Only blocks that are actually occupied on the pool count as "allocated" storage.
That means, in general, that you are able to overprovision your storage.
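A quick way to see this on the host is to compare the zvol sizes with what is actually allocated (a thin/sparse zvol has no refreservation, so only written blocks count against the pool):

zfs list -t volume -o name,volsize,used,refreservation
zpool list -o name,size,allocated,free,capacity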
First of all though: I don't get step 1. What did you do with zpool2? Is it used for...
Hi Jay,
you can't just "switch" from SeaBIOS to UEFI by changing the options for the VM.
Even after adding an EFI partition to your VM, it won't boot, since the bootloader and the OS are configured for BIOS.
Can you tell me which operating system the VM is using and which version is...
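For reference, switching the VM itself over is usually just something like the lines below (VMID 100 and the storage name are placeholders); the guest's disk layout and bootloader still have to be converted to UEFI separately, which is the part that depends on the OS and version:

qm set 100 --bios ovmf            # switch the firmware to OVMF (UEFI)
qm set 100 --efidisk0 local-lvm:1 # add an EFI vars disk on the chosen storage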
I agree.
I posted it just for the users that want to create a pool with min_size > 2.
It's useful to know how to fix the state of ending up with a 2/2 pool after trying to create a 3/3, 4/3, 5/4, etc. pool.
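For reference, that fix boils down to setting the values after the pool exists ("mypool" is just a placeholder name):

ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2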
One last hint: changing the values with the ceph commands does not break Proxmox. It...
I usually have good results with dd'ing over the first couple of blocks and sometimes re-attaching the disk.
In your case: dd if=/dev/urandom of=/dev/sdb bs=4k count=4096 status=progress && sync
That always works for me when I try to add disks that were used before, to get them into the Ceph pool as OSDs.
Also...
Hi Alwin,
thanks for the update :)
I guess this is resolved then. Everyone who needs a fix in the meantime can either swap the code blocks I mentioned above (which I do not recommend)
or just change the Ceph pool values after creation with the ceph commands found here...
@RokaKen, I can also confirm the behaviour of the pool not being removed from storage.cfg when it is destroyed.
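Until that is fixed, the leftover entry can be removed by hand ("mypool" being a placeholder storage ID), or by deleting the corresponding section from /etc/pve/storage.cfg:

pvesm remove mypool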
Update to this below.
Aaaaaand, I at least figured out why I can't create a pool with min_size = 1, or any pool whose min_size differs from 2, as seen here...
Hello everyone,
I'm running a 3-node Proxmox cluster on version 6.2-4.
I have Ceph configured as my primary storage system.
One of the nodes is a VM acting as a tiebreaker / Ceph monitor, restricted from running VMs.
I used a private 10GbE network as the cluster_network for Ceph, for syncing the...
Hi avw,
I've also tried that. Manually creating partitions with a new UUID and the same sector size, and making sure the OS sees the disk, nothing has helped so far.
I've also read that a zpool detach might help with this "bug". Sadly, since I'm running a RAID-Z, I can't detach a drive (this is only possible in...
The issue with that here is that we have multiple pools on the same disks, on different partitions.
That's why I also attempted to copy the partition table.
/dev/sd[a,b,c,d]3, for example, belongs to syspool, while the 4th partition belongs to storagepool.
And I agree about the mirror vdevs...
Hi there,
I'm having a problem that I don't really understand.
First of all, I need to mention a few things. As of right now, I'm unable to reboot any infrastructure.
We're running PVE 5.2-1.
We're using two zpools with partitions.
We have a hot-plug LSI SAS HBA.
Our storage is a RAID-Z2 based on 4...