I get that it's not great for VMs, but the IT team here used to run the servers on bare metal with no redundancy for the data. I went with Proxmox and RAID-Z1 because of the redundancy. I could do hardware RAID, since all the hosts support it, but went with...
No, unfortunately I no longer have those. Attached is the current log of my pve-cluster and corosync:
Feb 25 14:34:33 pve2 systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Feb 25 14:34:33 pve2 pmxcfs[833]...
The only way to reclaim and free up allocated space inside qcow2 files is by running qemu-img convert behind the scenes, so if the UI implements this via the VM move command, then yes. This is one of the big downsides of using file-backed disks with VMs...
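A minimal sketch of doing the same compaction manually with qemu-img; the disk filenames are illustrative, and the VM must be shut down (or you should work on a copy) while the image is rewritten:

```shell
# Rewriting a qcow2 drops unreferenced clusters, shrinking the file on disk.
# -O qcow2: output format; -p: show progress. Paths are illustrative.
qemu-img convert -O qcow2 -p vm-100-disk-0.qcow2 vm-100-disk-0-compact.qcow2

# Replace the original only after verifying the new image.
qemu-img check vm-100-disk-0-compact.qcow2
mv vm-100-disk-0-compact.qcow2 vm-100-disk-0.qcow2
```

This only helps if the guest has already discarded/zeroed the freed blocks (e.g. via fstrim with a discard-enabled virtual disk); otherwise the rewritten image stays the same size.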
If PermitRootLogin yes is already set, also check whether PasswordAuthentication is actually set to yes — Debian cloud images sometimes set it to no separately:
grep -ri PasswordAuthentication /etc/ssh/sshd_config /etc/ssh/sshd_config.d/...
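If the grep turns up a `no` in one of the drop-in files, a sketch of overriding it (the filename is illustrative; for most sshd options the first occurrence read wins, so a low-numbered drop-in takes precedence):

```shell
# Hypothetical drop-in to force password logins back on.
cat > /etc/ssh/sshd_config.d/01-enable-passwords.conf <<'EOF'
PasswordAuthentication yes
PermitRootLogin yes
EOF

# Validate the config before restarting, so a typo can't lock you out.
sshd -t && systemctl restart ssh
```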
Hi
I found this thread: https://forum.proxmox.com/threads/hw-raid-strange-issue-megaraid_sas-fw-in-fault-state.143336/
I have the problem during installation, so I don't know where to add the item.
For anyone who runs into this problem:
add pcie_aspm=off...
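During installation you can usually add the parameter by pressing `e` on the installer's boot menu entry and appending it to the line starting with `linux`. After installation, a sketch for making it permanent on a standard GRUB setup (assuming the default Debian/Proxmox paths):

```shell
# /etc/default/grub — illustrative excerpt; append the parameter to the
# existing kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"

# Regenerate grub.cfg so the change takes effect on the next boot:
update-grub

# After rebooting, confirm the running kernel picked it up:
cat /proc/cmdline
```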
I'm embarrassed to admit, I had forgotten the root password. I had to mess around a bit but eventually got the right one and am now logged in.
Can you tell me what I need to do to remove the USB entry? Linux isn't my strong point and I'd prefer...
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
So the cluster filesystem in /etc/pve is definitely in a strange state. The complete output of the first failed cluster join would be interesting here; everything after that may just be follow-on errors. Do you still have that output?
Hey all,
So I have a 3 node cluster in my environment:
PVE1 has 2 file servers and 2 containers, mostly read operations.
PVE2 has 2 file servers, a WSUS server, a data backup server, and a license manager, mostly write operations.
PVE3 has the domain...
Apologies for the incorrect spelling — as I said, I'm still relatively new.
Attached are the requested contents:
pve:
root@pve:~# ls -l /etc/pve/nodes/
total 0
drwxr-xr-x 2 root www-data 0 Jan 18 15:37 pve
drwxr-xr-x 2 root www-data 0 Feb 24 16:14...
Same problem, but mine is a bit worse. The same three rules are applied, but somehow they add more than 2 points in total, and that's a problem because it pushes almost every message to 3 points or above.
WARNING: check: dns_block_rule...
I had been ignoring this message for a long time, since creating a Validity account and registering the IP of the DNS server didn't make any difference for me. But today I noticed the spam score is much higher than 0.001...
Yes. A PVE cluster normally has an odd number of nodes, so 3 is the minimum. Yes, we all know VMware does it with two, but any proper clustering solution uses an odd number.
No, the QDevice is only for voting (and only needs to be reachable on the corosync network).
If you are going for high availability by way of redundancy, then I think you should not go for the minimal amount of redundancy.
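If a two-node setup is unavoidable, a sketch of adding a QDevice as the tie-breaking third vote (the IP below is an illustrative address of a small machine running corosync-qnetd, e.g. a Raspberry Pi or a VM outside the cluster):

```shell
# On both PVE nodes: install the qdevice client.
apt install corosync-qdevice

# From one node only: register the external qnetd host as a vote.
pvecm qdevice setup 10.0.0.5

# Verify: quorum info should now show 3 total votes for 2 nodes.
pvecm status
```

With 2 nodes + QDevice you get 3 votes, so the surviving node plus the QDevice (2 of 3) keeps quorum when one node fails.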
Single node, starting up a cluster. The node has multiple NICs, one of which is on the 10. network while the other 4 are on the 192 network:
Cluster Join Information:
I'd prefer it to use the 192 network. The 10. network was the first...
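A sketch of pinning the corosync traffic to the 192 network by giving an explicit link address when creating the cluster (addresses and the cluster name are illustrative):

```shell
# Create the cluster with link0 bound to the desired NIC's address.
pvecm create mycluster --link0 192.168.1.10

# When a second node joins, it specifies its own address on that network:
pvecm add 192.168.1.10 --link0 192.168.1.11
```

Without an explicit `--link0`, corosync tends to pick the address the node's hostname resolves to, which is why the 10. network gets chosen first.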