PVE's demands are really small - you can run it even on an N40L, given that you have enough RAM. The only thing I can advise is that you reconsider flashing your disk controller to IT mode, since both Ceph and ZFS will benefit from that. Other than that… that's quite a bunch of HW you're...
Well… no… even if the zpool is in heavy use, you should always be able to remove a cache device without issue. Since that cache is "only" L2ARC, there can be nothing on it that isn't already on stable storage on the vdevs.
Also, that rpool is obviously not made up of other parts of that NVMe...
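For reference, removing an L2ARC device is a single command; the pool and device names below are only examples, not your actual layout:

# Identify the cache vdev in the pool layout
zpool status rpool

# Remove the L2ARC device; this is safe even under load, because the
# cache only holds copies of blocks that already live on the main vdevs
zpool remove rpool nvme0n1p4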
Okay… this seems to be the problem - but why?
lxc-start 112 20200216180905.727 INFO conf - conf.c:run_script_argv:372 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "112", config section "lxc"
lxc-start 112 20200216180906.296 DEBUG conf -...
I am currently trying out my first LXC containers and wanted to rebuild a small CentOS 8 KVM guest as an LXC container. So I downloaded the CentOS 8 LXC template and created an LXC container from it. That all works, but after, inside the CentOS 8 container, I ran a yum...
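The steps were roughly the following (template file name and container ID are examples, not necessarily the exact ones I used):

# Refresh the template index and fetch the CentOS 8 template
pveam update
pveam available --section system
pveam download local centos-8-default_20191016_amd64.tar.xz

# Create and start the container from the downloaded template
pct create 112 local:vztmpl/centos-8-default_20191016_amd64.tar.xz \
    --hostname centos8 --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 112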
I have just set up this very same scenario and created a 2-node cluster just for replication. Now I wanted to perform some failure checks and wanted to know what steps would be necessary to re-create, in this case, the 2nd replication node, in case its system got corrupted or broken.
So I...
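As far as I understand it, the usual procedure for replacing a dead node looks roughly like this (node names are examples; the exact cleanup steps for re-using the old node name are in the PVE docs):

# On the surviving node: a 2-node cluster loses quorum when one node
# is down, so temporarily lower the expected vote count
pvecm expected 1

# Remove the dead node from the cluster configuration
pvecm delnode pve2

# Reinstall PVE on the replacement host, then join it to the cluster
# again from that host
pvecm add <ip-of-surviving-node>

# Finally, re-create the replication jobs for the affected guests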
I don't object to that! As long as you have managed to back up that data in time, this is a viable solution. I was wondering what the options were if one hadn't done that.
…not that I intend to let that slip, of course. ;) And yes, I will also create a disk image of my install disk…
Thanks!
Well yes, that would be an option, but what if you don't have that and are only left with the zpool holding your raw disk images? Isn't there a way to get them active again? You can't create a new guest with the same ID as the disks, that's for sure…
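One approach that might work, assuming the zvols still follow the default vm-<vmid>-disk-<n> naming scheme (the VMID, storage name and volume names below are examples):

# List the orphaned volumes on the pool
zfs list -t volume -o name | grep vm-112-

# Recreate a minimal guest config under the original VMID, then let
# PVE rescan its storages; matching volumes show up as "unused" disks
qm create 112 --name recovered --memory 2048
qm rescan --vmid 112

# Attach a recovered volume explicitly
qm set 112 --scsi0 local-zfs:vm-112-disk-0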
I am still exploring the two most interesting storage solutions for my home/lab setup, which are Ceph and ZFS. I know I can make Ceph work fine with just 3 or 4 OSDs running on the same node, if the CRUSH map gets tweaked accordingly. However, I am already a ZFS old-fart and I also looked into...
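The tweak I mean is changing the CRUSH failure domain from host to OSD, so a single node may hold all replicas; a sketch of one way to do that:

# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# In crush.txt, change the replicated rule's step
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd

# Recompile and inject the modified map
crushtool -c crush.txt -o crush-new.bin
ceph osd setcrushmap -i crush-new.bin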
So just for clarification… I'd install the base Ceph components, but not configure any monitor, MDS or manager to run on any of those PVE nodes, which should just run the guests?
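Something along these lines, if I understand it correctly (just a sketch):

# On the guest-only nodes: install the Ceph packages, but create no services
pveceph install

# i.e. no "pveceph mon create", "pveceph mgr create" or "pveceph mds create"
# on these nodes; they then act purely as Ceph clients for their guests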
Thanks,
budy
Hi,
I am just starting out with Proxmox, coming from Oracle VM, which has been EOL'ed. I installed a three-node PVE 6.1 cluster with Ceph enabled. However, I do want to keep the Ceph storage nodes free of VMs, and I was wondering if I can just install PVE on another host and have it connect to...
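From the docs, what I would expect this to come down to is an RBD storage entry plus the client keyring on the VM-only host; all values below (storage ID, monitor IPs, pool name) are made up:

# /etc/pve/storage.cfg on the VM-only node
rbd: ceph-vms
    content images
    monhost 10.0.0.1 10.0.0.2 10.0.0.3
    pool vm-pool
    username admin

# Copy the client keyring so PVE can authenticate, named after the
# storage ID:
#   /etc/pve/priv/ceph/ceph-vms.keyring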