We depend on hardware RAID because we had significant performance problems on our systems with ZFS.
We ran ZFS for two months, and in our use case with up to 30 Windows VMs per host, running databases and terminal server systems...
Having spent the last few days fighting all the subtle parts of getting this working, I put together a quick guide on how to get Proxmox running on a Strix Halo machine with working GPU passthrough into LXC containers. To be clear, this is a...
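In case a rough sketch helps (the container ID 101, DRM major number 226 and the /dev/dri paths are assumptions about a typical setup, not taken from the guide), the classic bind-mount approach in /etc/pve/lxc/101.conf looks like this:

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

Inside the container, whichever user runs the GPU workload still needs read/write access to the /dev/dri nodes (group membership or an idmap, depending on the container type).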
Absolutely. You can configure in my script what you want to back up in addition to the usual config files, and the snapshot (did I mention that ZFS is just great?) catches it all.
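For illustration only (the pool name rpool and the snapshot name are assumptions, not what the script actually uses), such a recursive snapshot boils down to:

# zfs snapshot -r rpool@pve-backup-$(date +%F)
# zfs list -t snapshot -r rpool

The -r flag snapshots every dataset below the pool root atomically, so the whole state is captured at a single point in time.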
Ah, okay. Good, then all those errors are understandable and cannot be fixed. You cannot migrate a VM from one storage to another and keep the snapshots at the same time.
As @Johannes S already said, you should first...
I finally found the solution after a lot of debugging: the root cause was an improperly formatted ESP.
The partition /dev/nvme2n1p2 (ESP) only had a PARTUUID and no filesystem UUID, so proxmox-boot-tool skipped updating entries for the newer...
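For anyone hitting the same thing, a check along these lines should show the problem (the device name is the one from this post; proxmox-boot-tool format wipes that partition, so treat the last three commands as a sketch of the usual repair rather than a verbatim recipe):

# blkid /dev/nvme2n1p2
# proxmox-boot-tool status
# proxmox-boot-tool format /dev/nvme2n1p2
# proxmox-boot-tool init /dev/nvme2n1p2
# proxmox-boot-tool refresh

A healthy ESP shows a UUID= (the FAT filesystem UUID) in addition to PARTUUID= in the blkid output.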
Unfortunately it's not that easy to implement a one-size-fits-all solution. One big advantage of Proxmox VE is its open nature. Although normally you wouldn't install anything else on the host, you are free to do so (it's a Debian after all). This...
It is (obviously) aimed at small systems with only a few disks ;-)
Sure. And so does a RaidZ1 based pool with 10*4 drives = 10 vdevs ;-)
Correct! And it is great to have new possibilities.
Maybe. Probably there are some data hoarders with many...
The post is a little too absolute when it says parity layouts give you only the IOPS of a single disk. That is roughly true for a single small RAIDZ vdev, but dRAID is not exactly the same. OpenZFS says dRAID performance is similar to RAIDZ...
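To make the vdev-count part of that argument concrete (pool and disk names are placeholders, not from the thread): a single wide 8-disk RAIDZ2 vdev delivers roughly the random IOPS of one disk, while the same eight disks split into two 4-disk RAIDZ1 vdevs deliver roughly twice that, because random IOPS scale with the number of vdevs:

# zpool create wide raidz2 sda sdb sdc sdd sde sdf sdg sdh
# zpool create narrow raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh

dRAID keeps similar small-random-I/O behaviour per redundancy group, but distributes those groups and the spare capacity across all children, which is where its resilver-speed advantage comes from.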
Hi, @daves_nt_here,
Just a wild guess here: is it possible that the scheduled periodic task is set up not as a cronjob but as a systemd timer?
You could check it by:
# systemctl list-timers
and look for something suspicious.
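If something turns up, you can also inspect the timer and the service it triggers (the unit name below is just an example):

# systemctl list-timers --all
# systemctl cat example.timer example.service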
Best regards,
NT
Yes, I'm wondering that too... that can only have happened when the VM was created.
Sure... that's just how it is on a forum, no matter where you are... people always ask quickly and then want an answer right away... but there are other methods these days...
As you can see, Linux has no problems with the network. VMs work fine on node1-2 too; I haven't added VMs to node3 yet.
NIC driver bug? I previously used an Intel NIC and changed it to a Broadcom one, but the messages remain.
Changed the DAC cables too.
Removed all bonds on node3...
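The link state and error counters on node3 can be double-checked with something like the following (the interface name is a placeholder):

# ip -s link show
# ethtool eno1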
Hi Tchaikov, I think that the user's setup is hyperconverged, with Proxmox/Ceph on the 3 nodes. So I think that the RBD client is able to handle it without a hook. Could you confirm this? (I have seen other Proxmox users doing it in hyperconverged...
Exactly - Google it, don't YouTube it!
By the way, there is also a wiki here (https://pve.proxmox.com/wiki) - it is listed at the very bottom under QUICK NAVIGATION. There are lots of good guides in it!
That hint should also be enough to get you to...
It looks like the NIC of the 3rd node is going down/up or flapping.
NIC driver bug? Maybe a bad cable?
Do you have any kernel log entries on the 3rd node (# dmesg)?
Maybe also try without bonding/LACP, using 2 corosync links instead.
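Purely as a sketch of what that could look like (the node name and addresses are invented; the change belongs in /etc/pve/corosync.conf with config_version bumped so it propagates to the whole cluster):

nodelist {
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    # link 0: first dedicated NIC/subnet
    ring0_addr: 10.10.10.3
    # link 1: second dedicated NIC/subnet
    ring1_addr: 10.10.20.3
  }
}

With two links, corosync (kronosnet) fails over between them on its own, so the bond is no longer needed for redundancy on the cluster network.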