It is connected to a USB 3 port…. The error always occurs at different times…
I haven't changed anything about the boot behavior or the kernels; I don't know enough about that…
I only have 2 VMs running: Homeassistant and the Open3E CanBus
For me? Any device with PLP.
Via a German search engine: https://geizhals.de/?cat=hdssd&xf=7156_Power-Loss+Protection~7525_SATA
Most of my SSDs are "Enterprise class". Most of my NVMe drives are not. This doesn't necessarily make sense. But especially...
Thanks for the info! If a drive did fail, I would have backups, since I usually back up my VMs to an external SSD weekly. I currently plan on having a single mini PC, which is why I'm limited to the M.2 and the SATA slots. Also, what SSD would be a good...
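For what it's worth, a weekly job like that can also be scripted with vzdump; a minimal sketch, assuming a PVE storage named "external-ssd" pointing at the external drive and hypothetical VM IDs 100/101:

```bash
# Back up VMs 100 and 101 (hypothetical IDs) to the hypothetical
# "external-ssd" storage, using snapshot mode and zstd compression.
vzdump 100 101 --storage external-ssd --mode snapshot --compress zstd
```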
I've used GlusterFS for years and run 3 test labs and 3 production environments with PVE + GlusterFS; they work beautifully and are stable and fast with a low resource footprint. Removing GlusterFS support is a HUGE disappointment for me.
Both (!) are suboptimal. (You know that, right?)
But if I am forced to choose: the PBS (or any backup system) should be independent of the source it will back up. You'll probably want the backup to be available and usable when (not: if!) the...
A lot of guides suggest passing physical disks through to VMs when people want to run things like TrueNAS.
But what if you want to use your HDDs for more than just a NAS, for instance as log devices to reduce "less important" writes to SSDs or...
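For reference, the passthrough those guides describe is a single qm set call; a minimal sketch, assuming VM ID 100 and a placeholder disk ID:

```bash
# Attach a whole physical disk to VM 100 as scsi1 (IDs are placeholders).
# Use the stable /dev/disk/by-id/ path rather than /dev/sdX, which can
# change between boots.
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```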
I am trying to live-migrate a VM on shared storage between two unclustered hosts using PDM.
Both hosts access the VM through the identical NFS share.
The migration works, but the process takes several minutes. In addition to moving the...
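As far as I know, PDM builds on the (experimental) qm remote-migrate mechanism, so one way to narrow down where the time goes is to run the same migration from the CLI; a sketch with placeholder VM ID, API token, host, bridge, and storage names:

```bash
# Live-migrate VM 100 to an unclustered target host (all values are placeholders).
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=SECRET,host=target.example.com' \
  --target-bridge vmbr0 --target-storage nfs-shared --online
```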
Do you have any news?
I have to restart the pvescheduler service to resolve the issue.
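For reference, the restart plus a quick look at the recent log entries (to catch the actual crash) looks like this:

```bash
# Restart the scheduler, then inspect its last log lines for the crash.
systemctl restart pvescheduler.service
journalctl -u pvescheduler.service -n 100
```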
Now I have replication jobs that are failing because of new crashes:
Thanks.
A node on which a cluster is initially created can have guests. Any node that is joining the cluster needs to be empty.
Therefore, create the cluster on the node with the PCI-passthrough TrueNAS VM.
The other node needs to be emptied first. Either by...
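A minimal sketch of that order of operations on the CLI, with a hypothetical cluster name and IP:

```bash
# On the node that already hosts the TrueNAS VM: create the cluster here.
pvecm create homelab

# On the (emptied) other node: join via the first node's IP.
pvecm add 192.168.1.10
```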
Well... I have one cluster and a few standalone setups in my home that run proper PVE.
This specific one is going to be used as a lab setup only, using some hardware I don't have in the other setups and cannot really afford to buy for them...
You will get different (but valid!) answers to this question.
But my decision is: one service == one VM!
In particular, this means I give each Docker container a VM of its own. Yes, this seems to waste resources. But I like it this way...
I've seen several threads on the basic issues involved, but being new to Proxmox, I find them tough to decode. I'll try to get to what I'm really trying to accomplish and this specific problem.
Hardware: Cisco UCS-C220-M5SX (2 sockets, 384GB RAM, 10TB...
Sounds like every node lost quorum (with HA enabled) at the same time. This suggests a network hiccup of some kind, which appears to have been temporary. Check the system logs of the nodes (from just before the reboot) and your switches.
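A sketch of where to look first on each node (the previous boot is the interesting one, since HA fencing reboots the machine):

```bash
# Logs from the previous boot, i.e. just before the fence/reboot.
journalctl -b -1 -u corosync -u pve-cluster -u watchdog-mux

# Current quorum state, for comparison.
pvecm status
```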
Hi. This might be a shot in the dark, but I seem to remember a similar issue where the cause was a lack of space in a temporary directory, not on the destination device.
Maybe you'll be able to "google" for a similar thread...
edit: Also fleecing rings...
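If that's the suspect, a quick way to rule it out is checking free space in the usual temporary locations on the node, e.g.:

```bash
# Check free space where temporary files typically land during backup/restore.
df -h /tmp /var/tmp /var/lib/vz
```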
Hey there!
I managed to get my 5700G passthrough working; however, I am getting random segfaults in my VMs, causing either browsers to crash or the whole VM to crash. I am a novice, so I am not sure where to start troubleshooting these...
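Not a diagnosis, but a first place to look is the kernel log on both host and guest around the time of a crash; a sketch (the grep patterns are just a starting point):

```bash
# On the Proxmox host: look for GPU resets, IOMMU faults, or hardware errors.
dmesg -T | grep -iE 'amdgpu|iommu|mce|segfault'

# Inside the affected VM: kernel messages name the segfaulting process.
journalctl -k | grep -i segfault
```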
Ideal? No!
Acceptable? Maybe! It always depends on your expectations!
I have some nodes (in a cluster) which boot from a single device. (Because of the restrictions of a Mini-PC in a $Homelab.) But the hosted VMs live on a redundant one...
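For anyone copying that pattern: the redundant guest storage can be as simple as a ZFS mirror registered with PVE; a sketch with placeholder disk IDs and pool name:

```bash
# Create a mirrored ZFS pool for guests (disk IDs are placeholders).
zpool create -o ashift=12 vmdata mirror \
  /dev/disk/by-id/nvme-DISK_A /dev/disk/by-id/nvme-DISK_B

# Register it as PVE storage for VM disks and container rootfs.
pvesm add zfspool vmdata -pool vmdata -content images,rootdir
```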