As @bbgeek17 said, that's because that's not an option. HOWEVER, it is possible to use the storage backend to accomplish this even if it won't be host-aware. I'm not familiar with the Storwize product line, but I have to imagine there is an API available for you to call and issue a hardware...
Wrong "full." your CONTAINER disk is full, not the host store. you need to either increase the container disk size, OR mount the disk manually (pct mount vmid) and clear up stuff.
Remote hands are quite adept at following instructions, but that can only be as successful as your hardware is at being identifiable. Most enterprise-level servers have ways for you to identify and turn on ID lights.
Unless you're using a PC with onboard SATA, your SAS HBA has means to blink...
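For example, on a Linux box with the ledmon package installed and a backplane that supports it, something like this will blink the slot LED for a given disk (sketch; the device name is whatever disk you're after):

ledctl locate=/dev/sdd        # turn on the locate/identify LED for that slot
ledctl locate_off=/dev/sdd    # turn it back off once remote hands are done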
No. If you want to make use of your RAM in the capacity you intend, make sure your applications cache at their level, not at the disk subsystem level.
No. Ceph is distributed; there's no "primary" storage server.
Correct. It wasn't useful in most cases.
I guess my first reply was for this question :)...
It does. The addition of TPM into the mix means updates to your bootloader can only be made from inside the booted system, or else the system will not be allowed to boot. For home use, you probably want to just avoid it ever being on.
Herein lies the issue: you're asking for help, but have already decided the...
While that works, if you have more nodes available you'd be better served with 4-5 OSD nodes.
Spinning disks = poor performance. You'd be better off with a MUCH higher count of smaller disks, preferably SSDs.
What you didn't mention:
multiple interfaces of high bandwidth/low latency network...
GRUB needs to be installed on a non-ZFS partition. UEFI is more flexible.
I didn't see any mention of TPM in your initial post. Since BIOS mode has no provision for TPM, it would make sense that your host would boot in BIOS mode regardless of TPM.
Long story short: you need to be aware of...
As long as the VM is turned off, you can just drop to a shell and
mv /etc/pve/nodes/<currentnode>/qemu-server/<vmid>.conf /etc/pve/nodes/<destinationnode>/qemu-server/
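For example, to move VM 100 from pve1 to pve2 (hypothetical node names and VMID; make sure the VM is stopped and its disks are on storage the destination node can actually reach, or it won't start there):

mv /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve2/qemu-server/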
Understand that these ideas are mutually exclusive. A cluster requires shared storage to be effective. The reason I and others suggested you be aware of iSCSI-based shared storage limitations is NOT that shared storage isn't required, but rather that this particular method has limitations...
Rule of thumb on sizing:
1 core per daemon (mon/mgr/osd)
4GB RAM per daemon (strictly speaking, monitors need RAM proportionate to the size of the cluster, anywhere from 2-16GB, but 4 is probably safe)
Save 4 cores and ~4GB of RAM for the underlying OS. It will probably be idle, but better...
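To put numbers on it (hypothetical layout): a node running 1 mon, 1 mgr and 8 OSDs is 10 daemons, so budget roughly 10 cores and ~40GB of RAM for Ceph, plus the 4 cores / 4GB set aside for the OS, call it 14 cores and ~44GB for that node.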
I believe that old Compellent stuff is EOL'd. Entry-level Dell storage now is the ME5024; the good news is that it's WAY faster, and doesn't have any unlockable features (you get it all).
As @Blueloop mentioned, before you go the iSCSI route you should familiarize yourself with its limitations in a PVE...
Why are you using your BTRFS backing store as a filestore? Create a BTRFS store instead, and move your disk over. After that, ask yourself what problem you're trying to solve: it's a thin-provisioned filesystem, so what's the harm in leaving the logical fs at 128GB?
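Roughly like this (sketch; storage name, mount path, VMID and disk are all made up, and on older PVE the second command is qm move_disk):

pvesm add btrfs btrfs-store --path /mnt/btrfs      # register the BTRFS filesystem as a proper storage
qm disk move 100 scsi0 btrfs-store --delete 1      # move the disk onto it and drop the old file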
If you can think of a reason this would be desirable, you can submit that to the ceph development team.
If there is something you want to do that isn't provided in the existing toolset, then by definition.
This is the reason what you suggest is not desirable.
I LOVE customers who pay for stuff they don't use! My favorite!
Snapshot integration is a pretty big deal in any vSphere infrastructure I support. If OP doesn't want/need/use it, then it's not an issue. The good news is that implementing it (at least in a basic form) is a simple script that issues...
For the most part, once you set up your multipath config file, it should work the same way on all hosts using it. This isn't really a problem; usually even the default config works "well enough." Mostly, issues arise due to LVM and multipath race conditions at boot, which are solvable with the...
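For reference, a minimal /etc/multipath.conf along these lines is usually a sane starting point (sketch; array-specific device sections and blacklists left out):

defaults {
    user_friendly_names yes
    find_multipaths yes
}

Then reload and sanity-check the maps with "systemctl reload multipathd" and "multipath -ll".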
Was expecting Realtek given your symptoms ;)
It's possible that the alx driver has issues with kernel 6.8; I don't have any hardware with this chip so I can't test, but I would see if the behavior is different on 6.5.
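If you want to test that without reinstalling, something along these lines should do it (sketch; exact package and kernel version strings depend on your PVE release):

apt install proxmox-kernel-6.5             # pve-kernel-6.5 on older releases
proxmox-boot-tool kernel list              # see which versions are installed
proxmox-boot-tool kernel pin 6.5.13-6-pve  # example version string, pick one from the list above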