A lot of information is missing here.
Please provide us with logs from when the NFS share is not accessible.
A "dmesg" from the Proxmox node usually helps.
"qemu-img resize file.qcow2 10G" <= This can't work like that.
You need to shrink the partitions in the OS first before you're resizing it in any case.
I don't know if it will be possible then, because i usually didn't care, because of thin provisioning, that you apparently have too.
So to...
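Just as a rough sketch of the order of operations (the 10G is only an example, and take a backup of the image first):
# 1) inside the guest: shrink the filesystem and partition so everything fits below the new size
# 2) shut the VM down, then shrink the image itself (newer qemu-img wants an explicit --shrink for this):
qemu-img resize --shrink file.qcow2 10G
# 3) verify the new virtual size:
qemu-img info file.qcow2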
I have my CEPH cluster set up as follows:
3 servers, each with 2 RJ45 1GbE connections in bond-mode failover.
Those three servers each have a dual SFP+ network card.
I connected them directly without a switch and used bond-mode broadcast for that (maybe that's not such a good idea with 100GbE).
My Cluster...
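For reference, the broadcast bond in /etc/network/interfaces looks roughly like this (interface names and the address are placeholders for your own SFP+ ports):
auto bond1
iface bond1 inet static
        address 10.10.10.1/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode broadcast
# CEPH public/cluster traffic then runs over this bond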
I think this might help:
https://unix.stackexchange.com/a/398884
Do that for every card that you don't want to use as output. Maybe it'll work; I've got no experience with it.
Usually I have a BIOS option for the primary GPU.
I honestly don't understand the logic or the question here.
The only thing I think I understand is that you want 2 replicas, which is generally a really stupid idea in case of any node failure,
since setting min_size=1 isn't really an option.
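For reference, you can check and change the replication settings per pool like this (the pool name is just a placeholder):
ceph osd pool get mypool size
ceph osd pool get mypool min_size
# the usual safe combination:
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2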
Can you please rephrase your questions and add...
You can check the newly attached disk with "dmesg".
It should show you the old disk disconnecting and the new one connecting and show you the new device name.
But you shouldn't use /dev/sd* names. It's better to go with the disk's UUID.
Another method is to execute:
ls -l /dev/disk/by-uuid/...
You need to take the disk offline, replace it with a new disk and then run:
zpool replace POOL OLDDISK NEWDISK
As seen here: https://docs.oracle.com/cd/E19253-01/819-5461/gazgd/index.html
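With placeholder pool and disk names the whole sequence looks roughly like this (the stable /dev/disk/by-id names are the safest to use here):
zpool offline tank ata-OLD_DISK_SERIAL
# physically swap the disk, then:
zpool replace tank ata-OLD_DISK_SERIAL ata-NEW_DISK_SERIAL
# and watch the resilver:
zpool status tank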
You simply need to create a new backup storage target and choose it when making backups.
"Datacenter => Storage => Add" (you need to choose a type that can hold backups: Directory, NFS or CephFS, for example).
But it looks like your storage is full anyway, so you need a network attached storage to...
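From the CLI it would look something like this (server address, storage name and export path are placeholders):
pvesm add nfs backup-nfs --server 192.168.1.50 --export /export/pve-backups --content backup
# check that it shows up and has free space:
pvesm status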
If you want, we could have a meeting so we can actually try to fix it during that time.
Would you be up for that?
I could send you a private message with details for my conference system.
If you connected the shelf with two links to the LSI, try one.
You could also try a SATA drive.
You could also do some debugging with lsiutil:
https://www.thomas-krenn.com/de/wikiDE/images/4/44/Lsi_userguide_2006_20130528.pdf
Well, if you only have 2 nodes, I hope you don't run HA.
And I hope your CEPH pool is configured with size=2 and min_size=2; otherwise you're playing a dangerous game.
Which Windows product did you have installed? 10? Server 2019?
I tested it in the same environment.
A 3-node CEPH cluster set up from the GUI with PVE 6.1 when it was installed, currently on version 6.3.3.
Created a VM with its data and EFI disk on the CEPH storage and installed Debian 10 on it. Confirmed that it booted via UEFI.
My results were that no matter what I tried...
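Roughly, that kind of test VM can be created like this (VM ID, storage name and sizes are only examples):
qm create 100 --name uefi-test --memory 2048 --bios ovmf --machine q35 --scsi0 ceph-vm:32 --efidisk0 ceph-vm:1 --net0 virtio,bridge=vmbr0
# then install Debian 10 and confirm it uses OVMF:
qm config 100 | grep bios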
Can I just ask about your cabling: how did you connect it, is IT mode enabled on the LSI controller, and which cables are you using? Are you using SATA or SAS drives?
I don't know if there's specific documentation on how to set this particular permission.
The PVEVMUser role will allow people to log on to the PVE web UI too and see a bunch of stuff.
(Obviously only their own stuff.)
You can create a new user in the pve realm and then assign this user with...
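As a sketch with placeholder names, the CLI side of it would be something like:
pveum user add alice@pve
pveum passwd alice@pve
# grant the role only on that one VM:
pveum acl modify /vms/100 --users alice@pve --roles PVEVMUser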
So it seems like you're mapping it correctly.
(Just a side note: if you don't need guest access, disable it.)
But I can't seem to figure out the other thing. I'm quite sure it's something simple though.
Maybe that helps you debug the issue...