During the last big breakage (LXC/Docker), I think we were dissuaded from using containers to host mission-critical workloads and advised to use VMs instead for anything that must not go down. But that is difficult right now due to RAM shortages...
How come such bugs aren't caught BEFORE the new version is released? Bloody hell, testing at Proxmox seems to be nowhere to be seen ;-(
No, I am not complaining - just stating the fact.
Anyway, thanks to all who posted the workaround.
I don't know if this was implied, but software-defined storage is also a requirement, so that you can set quotas on the storage used by your clients / customers / tenants.
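As a hedged illustration of what per-tenant quotas could look like (assuming ZFS-backed storage; the pool and dataset names are made up):

zfs create tank/tenants/customer-a          # one dataset per tenant
zfs set quota=500G tank/tenants/customer-a  # cap what this tenant can consume
zfs get quota,used tank/tenants/customer-a  # verify the limit and current usage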
What is missing to turn this into a fully-fledged setup:
RAM restrictions...
Copy the folder and all files from
/var/lib/pmg/templates
to
/etc/pmg/templates
Then edit the file /etc/pmg/templates/main.cf.in:
in the smtpd_sender_restrictions section,
add the following content:
check_sender_access regexp:/etc/postfix/Blocked_Sender_Domain...
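A minimal sketch of those steps, assuming the blocklist is a regexp map (the filename below is only an example, since the real path is truncated above, and if I remember correctly pmgconfig sync regenerates the Postfix config from the templates):

mkdir -p /etc/pmg/templates
cp -a /var/lib/pmg/templates/. /etc/pmg/templates/
# edit /etc/pmg/templates/main.cf.in and, under smtpd_sender_restrictions, add:
#   check_sender_access regexp:/etc/postfix/blocked_sender_domains
# then regenerate the configuration and restart the affected services:
pmgconfig sync --restart 1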
Hey,
Just setting up PBS with a Dell TL2000 with 2 SAS LTO-5 drives connected to the server. I have updated the firmware of the TL2000 and rebooted the server in order to troubleshoot.
When I try to run proxmox-tape barcode-label --pool I get the...
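For comparison, the full invocation I would expect looks something like this (pool and drive names are placeholders, and with two drives the drive probably has to be named explicitly; I may be misremembering the exact option names, so check the proxmox-tape help output):

proxmox-tape barcode-label --pool daily --drive drive0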
That's what I figured, but it's not what happened here. I had a node already off, lost another for an unknown reason, booted the node that I had previously shut down back up, another node went down, and then when I turned on the node that originally went...
We can't tell you what your server or your VMs are doing at 4 o'clock at night; only you can know that. You haven't even told us what kind of containers or VMs these are, what runs on them, and so on.
Why not just shut it down at night...
You can lose 2 nodes, but then the cluster no longer has quorum and thus the VMs "stand still", because no one "knows" whether what is happening is what is supposed to happen in the cluster :) That's the reason there is a quorum in the first place, so that...
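As a rough worked example (assuming a 4-node cluster, since the actual size isn't quoted here): expected votes = 4, so quorum = floor(4/2) + 1 = 3. With 2 nodes down only 2 votes remain, and 2 < 3, so quorum is lost, /etc/pve goes read-only and HA will not start or migrate VMs until enough nodes are back.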
I am currently in the process of doing that. Most of my VMs got corrupted in this incident, so I need to rebuild them all, some from backups, some I am able to repair. It's a pain. I still have no idea why this happened, but from what I've been told on...
You didn't even notice that a node rebooted and rejoined your cluster (which, after you deleted it, should NEVER happen, because this can ruin your cluster). IMHO there is more broken than I can see as of now. You could try to remove the node from...
I think a week is long enough for the drive to take in the data, lol.
I have heard it's an issue with Proxmox and SMB. But it could be something else, too. I think this just gives me an excuse to invest in a PBS. My current method only works...
I had a massive cluster failure. Long story short: after a series of failures, one node went rogue and started up VMs that were already running, so it ended up causing massive corruption across the board. All the VMs that are already running work...
You can mark a directory mapping as shared by appending ,shared=1 to the mp line like so:
mp0: /mnt/pve/nfs0/data,mp=/data,shared=1
Make sure that /mnt/pve/nfs0/data exists on all nodes.
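The same mapping can also be applied from the CLI, for example (100 is just a placeholder CT ID):

pct set 100 -mp0 /mnt/pve/nfs0/data,mp=/data,shared=1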
References:
* man pct
*...