Yep - I know it would work... just wondered why the driver thought it was off when it was enabled in the BIOS.
I'm in the office today, so I'll do that when I get home tonight.
I don't think this is a good or stable approach. And I know it's open source and we could contribute the technical side of the code, but this seems to be a pretty important architectural decision at the core of Proxmox and Backup Server and much...
A small follow-up for anyone who might also run into this issue... here is how I fixed it for now:
Instead of passing through the entire PCIe SATA device to the TrueNAS VM, I followed this guide: Passthrough HDDs to VM
to pass my HDDs individually...
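For anyone following along, per-disk passthrough from that guide boils down to attaching each disk by its stable /dev/disk/by-id path; the VMID, SCSI slot, and disk serial below are placeholders, adjust them for your setup:

```shell
# List stable disk identifiers on the host
ls -l /dev/disk/by-id/
# Attach one HDD to VM 100 as scsi1 (hypothetical VMID and disk serial)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-EXAMPLE1234
```

Using the by-id path (rather than /dev/sdX) keeps the mapping stable across reboots even if device enumeration order changes.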
Just upgraded myself. Went just fine, no issues. 3 OSDs. I got some interesting data after upgrading:
Ceph Squid → Tentacle Upgrade Benchmark Summary
Cluster: 3-node Proxmox (Intel NUC14, NVMe)
Pool: ceph-vms (replicated, size 2 / min_size 2)...
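In case anyone wants to collect comparable numbers on their own cluster before/after the upgrade, a simple sketch using rados bench (pool name taken from the post above; the durations are arbitrary):

```shell
# 60s sequential write test against the pool; keep the objects for the read test
rados bench -p ceph-vms 60 write --no-cleanup
# 60s random read test against the objects written above
rados bench -p ceph-vms 60 rand
# remove the benchmark objects afterwards
rados -p ceph-vms cleanup
```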
(I've dropped loads of terms that can be searched on here)
Proxmox SDN does not include a router. It looks to be designed for layer 2 and tunneled layer 2. To be honest I haven't bothered with it. However I have been fiddling with networks...
This is exactly the impression I was getting from the discussion as well. The OP wants a “simple” small-business solution that nonetheless delivers a fully featured, reliable DR setup with many characteristics typically associated with enterprise...
I think you may be obsessing about the "performance" and "cost" aspects entirely too much.
1. What is the makeup of your payload? (number of vms, function, ACTUAL performance in IOPs)
2. A backup device MUST be a separate device than the...
We're talking about the large RAID - right?
You can mount /dev/sda either via fstab plus an entry in storage.cfg, or via Directory (Disks → Directory).
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_directory
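To spell out the fstab route, a sketch for the host side (mountpoint and storage name are made up; only run mkfs if the disk is empty):

```shell
mkfs.ext4 /dev/sda                # DANGER: wipes the disk - only on an empty disk
mkdir -p /mnt/bigraid
echo '/dev/sda /mnt/bigraid ext4 defaults 0 2' >> /etc/fstab
mount /mnt/bigraid
# Register it as a Directory storage (same as Datacenter -> Storage -> Add -> Directory)
pvesm add dir bigraid --path /mnt/bigraid --content images,backup
```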
Seriously? No solution?
I also tried those commands, seen on another thread:
chmod 777 on the file
and then
Bash:
systemctl stop pve-cluster
rm -f /var/log/pve/tasks/active*
rm -f /var/log/pve/tasks/index*
systemctl start pve-cluster
...
You enable HA for a VM, not for a Node. The node where a HA-VM is running will fence (hard reboot) itself when Quorum is lost. Other nodes without a HA resource will not do that.
Yes, that's the idea. The direction of the replication switches...
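For reference, HA is indeed configured per resource, not per node; a minimal sketch (the VMID is a placeholder):

```shell
# Put VM 100 under HA management; the cluster resource manager
# will then keep it in the requested state "started"
ha-manager add vm:100 --state started
# Show the current HA resource and node status
ha-manager status
```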
I've now done it that way. This is the result. But how do I get at the capacity now? I only ever get offered the second RAID.
Got the picture.
As stated above, on Proxmox host reboot, the mdadm RAID is probably being picked up by the host and then terminated.
To avoid this, you could try to edit (host-side) :
nano /etc/mdadm/mdadm.conf #to specifically ignore these...
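A minimal host-side sketch of that edit, assuming you want the host to stop auto-assembling any array it isn't explicitly told about:

```shell
# "AUTO -all" tells mdadm not to auto-assemble arrays that are not
# listed in mdadm.conf, so the host leaves the VM's RAID alone
echo 'AUTO -all' >> /etc/mdadm/mdadm.conf
# Array assembly also happens in the initramfs, so rebuild it
update-initramfs -u
```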
BTW - all that business is outdated, I believe, since, what, 8.something? The PVE kernel has WireGuard built in, so there's no need to install anything on the host (sounds like you've got that going, but if you did install wireguard-dkms on the PVE host, maybe that's...
For me, this would be even worse.
My dev servers are 2 socket servers with 6 core CPUs, whereas the prod systems are single socket with 32 core CPUs.
Either way, all of them have subscriptions (dev Community and prod Standard); this should count...
Yes, that's true. But I was under the impression that it wasn't any better the other way around either. I'll install it again, though, and look closely at what I may have done wrong. I was under the impression that when assigning the entire...
It looks like you only assigned 50GB of the first RAID. During installation you should either let the installer decide the disk layout itself or assign it manually. But if you set 50GB as hdsize, then...