Having the datastore mounted on two separate clusters can be really dangerous, since the clusters don't share file locking information. That said, if you are able to make sure they don't try to consume the same resources at the same time, what you...
Why is everyone using passive voice when recounting that "consumer drives are discouraged?" Discouraged by whom, and under what circumstances? There is nothing INHERENTLY wrong with using non-enterprise drives for ZFS pools, as long as you understand the...
We did a deep dive on this feature in this article: https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/
https://forum.proxmox.com/threads/inside-proxmox-ve-9-san-snapshot-support.169675
A couple of things:
1. If you do wish to deploy LVM on mdadm, you can; it just requires a bit of "linux" setup. But you shouldn't, because
2. a ZFS/btrfs mirror is seamless, has filesystem integration, inline compression, snapshots, etc. It is a...
If memory serves, Windows XP allows a repair install: just boot the Windows XP ISO (SATA/IDE host bus for the existing virtual disk) and reinstall.
It's also possible to use universal recovery from Acronis to reset an install onto new hardware.
Since you're in a cluster, the conversation should expand to include provisioning in general.
Measure your total provisioned RAM across your entire cluster. Assuming your cluster nodes are identical and have no additional host load (eg...
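If it helps, here's a rough sketch of one way to total up provisioned RAM cluster-wide. It's an assumption-laden example, not an official tool: it shells out to pvesh on a cluster node and assumes the JSON from /cluster/resources exposes a maxmem field per guest.

```python
#!/usr/bin/env python3
# Sketch: sum provisioned (configured) RAM across all guests in a PVE cluster.
# Assumes it runs on a cluster node and that `pvesh get /cluster/resources`
# returns JSON objects with a "maxmem" field for VM/CT entries.
import json
import subprocess

out = subprocess.check_output(
    ["pvesh", "get", "/cluster/resources", "--type", "vm",
     "--output-format", "json"]
)
resources = json.loads(out)

total_bytes = sum(r.get("maxmem", 0) for r in resources)
print(f"Total provisioned RAM: {total_bytes / 2**30:.1f} GiB "
      f"across {len(resources)} guests")
```

Compare that total against the physical RAM of the cluster minus one node (your failover headroom) to see how overprovisioned you really are.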
This is due to the stats collector being in a hung state. Make sure there are no VMs still referencing the missing datastore; if the question marks are still there:
1. Check pvesm status. There should be no unknown datastores.
2. systemctl...
The PVE install defaults are designed for "homelab." You can and should ignore them if you're using it for production; you can change this at any time, and should.
No. You have 256GB of RAM. Swap can only slow down your VMs if it's ever actually...
The placement logic is per node, not per drive. The only sane EC config possible with 3 nodes is 2+1. But bear in mind that while you CAN do this, it's not really supportable; with one node down you're operating without any redundancy at all, and under normal...
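To make the arithmetic concrete, here's a small hypothetical sketch (my own helper, not a Ceph API) of the usable-capacity and fault-tolerance math for a k+m erasure-coded pool placed one chunk per node:

```python
# Illustrative math for a k+m erasure-coded pool placed one chunk per node.
# Hypothetical helper for reasoning about capacity, not a Ceph command.
def ec_profile(k: int, m: int, nodes: int):
    assert nodes >= k + m, "need at least k+m nodes to place one chunk per node"
    usable_fraction = k / (k + m)   # fraction of raw space available for data
    failures_tolerated = m          # nodes you can lose and still serve data
    return usable_fraction, failures_tolerated

# 2+1 on 3 nodes: 2/3 of raw space is usable, but only one node failure is
# survivable, and while that node is down there is no parity protecting you.
print(ec_profile(2, 1, 3))   # -> (0.666..., 1)
```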
AFAIK multipathing is the preferred method for iSCSI over bonded interfaces, as it is completely agnostic to the network configuration and features, but bonding can work as well. If you DO want to use bonding, make sure both ends (host and...
I think this is the part of the conversation where we investigate the WHY. If your intention is to live migrate without shared storage, any CoW-backed storage (ZFS, qcow2, even LVM with snapshots) would work and doesn't require any additional steps...
In that case, the only difference is in load calculation. Since Zen4 and Rocket Lake have similar IPC, it's a simple matter of:
Relative Performance, Xeon E-2388G (3.2GHz base, 8 cores) = 25.6
Relative Performance, AMD EPYC 4244P...
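The figure above is just base clock times core count (3.2 GHz × 8 = 25.6), which only works as a shortcut because the two parts have comparable IPC. A tiny hypothetical helper, using only the Xeon numbers quoted above:

```python
# Relative performance as used above: base clock (GHz) x core count.
# Only a reasonable shortcut when IPC is comparable between the parts.
def relative_performance(base_ghz: float, cores: int) -> float:
    return base_ghz * cores

print(relative_performance(3.2, 8))   # Xeon E-2388G -> 25.6
```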
It actually can't. Keep in mind that Ceph uses networking for both host and drive traffic, which means you need roughly double the network bandwidth of the filesystem throughput you want to see.
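As a rough illustration of that rule of thumb (the numbers here are made up):

```python
# Rule of thumb from the post above: network bandwidth should be roughly
# twice the client-visible throughput, since the wire carries both the
# host (client) traffic and the drive (replication) traffic.
def required_network_gbit(target_throughput_gbit: float) -> float:
    return 2 * target_throughput_gbit

print(required_network_gbit(10))  # ~10 Gbit of client throughput wants ~20 Gbit of network
```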
"perform better" doesnt mean anything.
do your application require specific feature? do they benefit from scaling up or sideways (do your apps need clock speed?) Next, what are your IO needs? the Xeon part has 40 lanes of gen4 PCIe; the Epyc...
Your root filesystem doesn't really matter for the purposes of this discussion; only the VM storage does. Assuming you intend to use the same filesystem for your OS and payload, you can't use ZFS replication, but that doesn't mean you can't...
I've only ever used mesh in a lab scenario, and that was some time ago. I used broadcast bonds and it worked well enough. I'm fairly certain that no logical topology will result in any meaningful difference in performance, but as for stability...