The "Dynamic Memory Management" Wiki page states, about ballooning :
Does that mean :
(a) that the unused memory is given back to the host to be used as it sees fit, even for non-Proxmox/KVM-related tasks
OR
(b) that the unused memory is given back to the host, but only to be used by other VMs
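Whichever of the two readings is correct, the balloon can at least be observed from the host side. Below is a minimal sketch (Python wrapping the stock `qm` CLI) that lowers the balloon target of a hypothetical VM 100 and prints the balloon-related lines from `qm status --verbose`; the VMID and the 2048 MiB target are placeholders, and the guest needs the virtio balloon driver active for the target to have any effect.

```python
#!/usr/bin/env python3
"""Watch ballooning from the PVE host: shrink a VM's balloon target and print
what `qm status --verbose` reports about it. VMID and target are placeholders;
the guest must run the virtio_balloon driver for this to do anything."""
import subprocess

VMID = "100"          # hypothetical VM
TARGET_MB = "2048"    # new balloon target, must be <= the VM's configured memory

# Ask the guest to hand pages back until it only keeps ~2 GiB.
subprocess.run(["qm", "set", VMID, "--balloon", TARGET_MB], check=True)

# The verbose status output contains the current balloon size reported by QEMU.
status = subprocess.run(["qm", "status", VMID, "--verbose"],
                        check=True, capture_output=True, text=True).stdout
for line in status.splitlines():
    if "balloon" in line.lower():
        print(line.rstrip())
```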
Yep, you are right, I overlooked that one, sorry ! :oops:
PLP SSDs really seem to be the right thing to have in an unstable electrical environment
So, to sum it all up, the best choices would be :
- Either PLP SSD + local ZFS + PVE-zsync
- Or PLP SSD + shared Ceph/RBD
I'm still...
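For the first option in that list (local ZFS + PVE-zsync), here is a minimal job sketch following the pve-zsync wiki examples, again as Python around the CLI; VMID 100, the target node `pve2`, the pool `tank` and the job name are placeholders, not anything taken from this thread.

```python
#!/usr/bin/env python3
"""Minimal PVE-zsync sketch: register a recurring replication job for a VM's
ZFS disks and run one initial sync. All IDs, hostnames and pool names are
placeholders."""
import subprocess

VMID, DEST, JOB = "100", "pve2:tank", "outage-backup"

# Register a cron-driven job that keeps the last 24 snapshots on the target.
subprocess.run(["pve-zsync", "create",
                "--source", VMID, "--dest", DEST,
                "--name", JOB, "--maxsnap", "24", "--verbose"], check=True)

# Kick off the first full transfer right away instead of waiting for cron.
subprocess.run(["pve-zsync", "sync",
                "--source", VMID, "--dest", DEST,
                "--name", JOB, "--verbose"], check=True)
```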
@spirit : thanks for the idea, I've never heard of supercapacitor SSDs before. It's worth a try.
As a side note, I am surprised that there are people experiencing ZFS problems after power loss (https://forum.proxmox.com/threads/zfs-pool-failure-after-power-outage.90400/ and...
OK thanks. Given that I have plain HDDs, the best would probably be to go with local storages
I guess my assumption that a GlusterFS solution based on a battery-backed HW RAID would prove more resilient was wrong...
What is the best storage choice for a Proxmox cluster in a hostile environment ? By hostile environment I mean an environment with frequent power outages... VERY frequent power outages ... Like 300 power outages in 2 years...
The best of course would be to eliminate these power outages by...
Almost there but not quite yet.
If I take HW RAID out of the picture, then ZFS over iSCSI becomes a candidate again.
So, between Ceph/RBD and ZFS over iSCSI, which one would you recommend in terms of :
- reliability
- performance
- learning curve (I know neither Ceph nor ZFS nor iSCSI, I did...
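To make the "learning curve" part a little more concrete, this is roughly what registering a ZFS-over-iSCSI storage looks like. The option names mirror the storage properties from the PVE storage documentation, but the portal, target IQN, pool and the LIO provider/tpg below are pure placeholders; verify against `pvesm help add` (or edit /etc/pve/storage.cfg directly) before relying on any of it.

```python
#!/usr/bin/env python3
"""Sketch: register a ZFS-over-iSCSI storage on a PVE node. Every value is a
placeholder; option names follow the documented properties of the 'zfs'
(ZFS over iSCSI) storage type."""
import subprocess

subprocess.run(["pvesm", "add", "zfs", "zfs-san",
                "--portal", "192.168.1.50",                          # iSCSI portal of the ZFS box
                "--target", "iqn.2003-01.org.linux-iscsi.san:sn.0",  # target IQN (placeholder)
                "--pool", "tank",                                    # ZFS pool on the target
                "--iscsiprovider", "LIO",                            # or comstar/istgt/iet
                "--lio_tpg", "tpg1",                                 # only needed with the LIO provider
                "--content", "images",                               # VM disks only, no containers
                "--sparse", "1"], check=True)
```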
This seems to answer my question :
And my question has already been answered by this staff member as well. So, basically, RAID1 and RAID10 aren't recommended with Ceph.
But I guess RAID 0 wouldn't hurt and would improve performance.
Great explanation, thanks !
One last question. In the end, there are only 3 possible options for setting up a shared storage that supports snapshots for both VMs and LXC containers :
CephFS
Ceph/RBD
ZFS over iSCSI
According to this discussion, CephFS is not recommended to store VM or LXC drives...
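For comparison, an RBD storage entry is fairly small. The sketch below assumes a hyperconverged, PVE-managed Ceph cluster with an existing pool (the storage ID and pool name are made up); an external Ceph cluster would additionally need `--monhost` and a keyring under /etc/pve/priv/ceph/.

```python
#!/usr/bin/env python3
"""Sketch: register an RBD storage backed by a PVE-managed Ceph pool.
Storage ID and pool name are placeholders; an external cluster would also
need --monhost and /etc/pve/priv/ceph/<storage>.keyring."""
import subprocess

subprocess.run(["pvesm", "add", "rbd", "ceph-vm",
                "--pool", "vmpool",                # existing RBD pool (placeholder name)
                "--content", "images,rootdir",     # rootdir so LXC volumes can live here too
                "--krbd", "0"], check=True)        # VMs via librbd; containers always use krbd
```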
"a shared LV is not allowed to be active on multiple nodes concurrently"
I'm still not sure if I understand
your storage page states, for LVM (not LVM-thin) :
"It is possible to use LVM on top of an iSCSI or FC-based storage. That way you get a shared LVM storage."
So, I understand that :
A...
Fabian, you are right. But LVM-thin does
Question : for LVM, your storage page states in a note "It is possible to use LVM on top of an iSCSI or FC-based storage". I guess this statement is also valid for LVM-thin, right ?
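To tie the two LVM posts together, a shared LVM-over-iSCSI setup is registered in two steps, as sketched below. Every portal, IQN and VG name is a placeholder, and the volume group is assumed to already exist on the LUN (pvcreate/vgcreate done once from any node); as far as the storage docs go, plain LVM can be marked shared across nodes while LVM-thin stays node-local.

```python
#!/usr/bin/env python3
"""Sketch: shared LVM on top of an iSCSI LUN, in two steps. All portals, IQNs
and VG names are placeholders; the VG must already exist on the LUN."""
import subprocess

# 1) The iSCSI LUN itself; content 'none' because images go onto the LVM layer.
subprocess.run(["pvesm", "add", "iscsi", "san-lun",
                "--portal", "10.0.0.10",
                "--target", "iqn.2005-10.org.example.ctl:pve",
                "--content", "none"], check=True)

# 2) Plain LVM on that LUN, marked shared so every node may activate LVs
#    (one node at a time per LV, which is what the quoted sentence describes).
#    Note: plain LVM gives shared storage but no snapshots.
subprocess.run(["pvesm", "add", "lvm", "san-lvm",
                "--vgname", "vg_san",
                "--shared", "1",
                "--content", "images,rootdir"], check=True)
```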
If I understand what I read on your website, you provide a driver that's a lot faster than Ceph/RBD. That's good to know.
But that wasn't my question. Right now I'm only doing some lab tests, so speed is not (yet) a critical factor. My question was to validate my understanding that the 4...
Just a quick question to be sure my understanding of https://pve.proxmox.com/wiki/Storage#_storage_types is right
I am looking to set up a shared storage that supports snapshots for both VMs and LXC containers.
As I understand it, the only options are :
CephFS
Ceph/RBD
ZFS over iSCSI
LVM over...
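One cheap way to validate that understanding on a test cluster is an empirical smoke test: take and delete a throwaway snapshot on one VM and one container whose disks sit on the storage in question. The sketch below uses the stock qm/pct snapshot commands; VMIDs 100 and 101 are placeholders.

```python
#!/usr/bin/env python3
"""Snapshot smoke test: verify a storage really supports snapshots for both a
VM and an LXC container by creating and deleting a throwaway snapshot on each.
VMIDs are placeholders for guests stored on the storage being evaluated."""
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("qm", "snapshot", "100", "snaptest")      # errors out if the VM's storage lacks snapshots
run("qm", "delsnapshot", "100", "snaptest")
run("pct", "snapshot", "101", "snaptest")     # same check for a container volume
run("pct", "delsnapshot", "101", "snaptest")
```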
It would be great if Proxmox integrated that feature : I have a Proxmox laptop with a container that takes a few minutes to boot up, so "hibernating" it before I have to shut down / reboot my laptop could save me some time at boot.
Right now the only solution is to manage the container directly...
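For the "manage the container directly" route, the usual DIY approach is CRIU via lxc-checkpoint. The sketch below is only an experiment: it assumes criu is installed and that the container's config survives checkpoint/restore, which is far from guaranteed on PVE. CTID 101 and the dump directory are placeholders.

```python
#!/usr/bin/env python3
"""Experimental container 'hibernation' sketch via CRIU/lxc-checkpoint.
Not officially supported by PVE; treat it as a lab experiment only.
CTID and dump directory are placeholders."""
import subprocess

CTID = "101"
DUMP_DIR = f"/var/lib/vz/dump/criu-{CTID}"

# Dump the running container's state to disk and stop it ('hibernate').
subprocess.run(["lxc-checkpoint", "-n", CTID, "-s", "-D", DUMP_DIR], check=True)

# After the laptop is back up, restore the container from that dump.
subprocess.run(["lxc-checkpoint", "-n", CTID, "-r", "-D", DUMP_DIR], check=True)
```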
@wbumiller, is your procedure still supposed to work in 2019 ?
I just followed your steps to shrink a 256GB raw file to 32GB (with less than 10GB of real data); it seemed to work fine and the container starts OK, but there is no login prompt on the console (so I guess the OS doesn't boot).
I did manually a...
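For anyone landing here later, the usual shrink sequence condenses to the steps below (not necessarily identical to wbumiller's original post). The image path, VMID and sizes are placeholders, an ext4 filesystem written directly into the raw file is assumed, and the key point is that the filesystem must be shrunk below the target before the file is cut; skipping that can manifest exactly as a container that starts but never shows a login prompt. The rootfs size in /etc/pve/lxc/<vmid>.conf also needs to be updated afterwards.

```python
#!/usr/bin/env python3
"""Condensed raw-image shrink sketch (ext4 assumed; path/VMID/sizes are
placeholders). Shrink the filesystem first, then the file, then let the
filesystem grow back to fill the new size."""
import subprocess

IMG = "/var/lib/vz/images/101/vm-101-disk-0.raw"   # placeholder path

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("pct", "stop", "101")
run("e2fsck", "-f", IMG)                                        # fs must be clean before resizing
run("resize2fs", IMG, "30G")                                    # shrink fs below the final size
run("qemu-img", "resize", "-f", "raw", "--shrink", IMG, "32G")  # now cut the file itself
run("e2fsck", "-f", IMG)
run("resize2fs", IMG)                                           # grow the fs to fill the 32G file
# Finally set size=32G on the rootfs line in /etc/pve/lxc/101.conf before starting.
```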