I can't recommend them per se for Ceph. But I have also never tried such a setup, nor had a use case for it.
DB/WAL gets a lot of small writes/reads, while the block part of the OSD gets mostly big writes/reads. By separating those two onto different devices, the overall performance may increase. The...
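As a minimal sketch (device names are placeholders only), putting the DB/WAL of an OSD onto a faster device could look like this:

    # OSD data on a spinner, DB/WAL on a faster NVMe device
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
    # if no --wal_dev is given, the WAL is placed on the DB device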
It also doesn't help if we only discuss the matter AFTER the purchase through enterprise support. :)
Well, not to be underestimated. Backup and live migration also run over the management network. An additional 10/25 GbE card would certainly pay off there. Especially with live...
You could, but we don't support it. This means the first thing we will tell you is to install our packages. ;)
And our packages are almost identical to upstream, mostly with some cherry-picked patches that didn't make the cut upstream yet.
As with any software upgrade. :)
Persönlich würde ich auf einen Epyc setzen. Höherer Base-Takt und mehr PCIe Lanes. Beides good für hyper-konvergente Setups. Auch ist mit NUMA und den beiden Sockets, das Latenz optimieren schwerer. Da es durchaus sein kann, dass das Netzwerk und die NVMe nicht auf der selben Node oder Socket...
In ZFS terms, a raidz1.
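Just for illustration, a raidz1 pool could be created like this (pool name and devices are examples only):

    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
    zpool status tank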
ECC does error correction on bits in memory; without it, ZFS most likely will not be able to catch those errors and will write the corrupted data from memory to disk. But that is in principle the same with any other filesystem. The good part is that ZFS will complain about it, since it has data integrity features...
It's not a problem per se, it's how quorum works.
This means that you have created a split brain. Neither side has more than 50% of the votes and therefore neither has quorum. The Proxmox VE nodes won't have it either.
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_quorum
For Ceph and Proxmox VE (corosync)...
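A quick way to check the quorum state on either layer (run on any node):

    pvecm status        # Proxmox VE / corosync: check the 'Quorate' field
    ceph quorum_status  # Ceph: shows which monitors currently form the quorum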
We can only support our ceph packages. A pveceph install will take care of installing the Ceph packages from our repository.
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#ceph_rados_block_devices
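For example (the Ceph network below is just a placeholder):

    pveceph install                       # run on each node, pulls Ceph from the Proxmox repository
    pveceph init --network 10.10.10.0/24  # run once, writes the initial Ceph config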
What do you mean by that?
You will need to map (read-only) the snapshot on the Proxmox VE node and then copy the folder into the container.
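A rough sketch of that (pool, image, snapshot and CT IDs are made up; the container is assumed to be stopped):

    rbd map rbd/vm-100-disk-0@mysnap --read-only   # map the snapshot read-only, e.g. as /dev/rbd0
    mount -o ro /dev/rbd0 /mnt/snap
    pct mount 101                                  # mounts the CT rootfs under /var/lib/lxc/101/rootfs
    cp -a /mnt/snap/somefolder /var/lib/lxc/101/rootfs/root/
    pct unmount 101
    umount /mnt/snap && rbd unmap /dev/rbd0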
Safe in what way?
And last but not least, Proxmox VE 5.4 has been EoL for a long time.
Then can I ask, did you read the article?
The link points to the commit explaining that setting swappiness to 0 will not prevent swapping. It just makes swapping least aggressive, but the kernel will still swap if needed. And in general, swap is a good thing, but it sometimes needs some tuning.
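Tuning usually just means lowering the value instead of disabling swap; the value 10 below is only an example:

    sysctl vm.swappiness                                           # show the current value (default is 60)
    sysctl -w vm.swappiness=10                                     # less aggressive swapping, applied immediately
    echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf   # persist across reboots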
It seems that running NFS-Ganesha in a container needs some relaxation of the security settings.
https://discuss.linuxcontainers.org/t/nfs-ganesha-in-lxc/2401
https://github.com/nfs-ganesha/nfs-ganesha/issues/420
Best to create a fresh privileged container. Or restore a container from backup by...
That means no data redundancy. If one disk goes 'bye bye', then you have lost the data.
Use ZFS; ECC is not a must to run ZFS, but it is recommended. Then you can create a mirrored setup for data redundancy.
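For example, a simple two-disk mirror (pool name and devices are placeholders):

    zpool create tank mirror /dev/sdb /dev/sdc
    zpool status tank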
Well, the steep part of the curve is that there is a lot of Linux to learn.
The new virtual hardware needs to mirror the current setup on Xen.
For reference: https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
Another way to migrate the Win 2016 VM is to use Windows Backup. Create the backup including the system state, install a fresh Win 2016 on Proxmox and...
If it's just for playing around, a Proxmox VE + Ceph cluster can also be run virtually. The Ceph services can also be installed independently of one another. But whether that runs well on your hardware is another story.
This could probably be done with a VM template and the snapshot feature (contrary to its naming), if it should not keep any state.
Of course, this will only be a good option if the VM isn't running for a long time, since the temporary image is stored in /var/tmp/ (hardcoded) :/.
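A minimal sketch, assuming QEMU's '-snapshot' run mode is what is meant here (the VM ID is an example; --args passes raw options through to QEMU):

    qm set 123 --args '-snapshot'   # disk writes go to a temporary overlay instead of the real image
    qm start 123                    # run the throw-away instance; changes are discarded on stop
    qm set 123 --delete args        # remove the override again afterwards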
If state is acceptable...