In fact, depending on the backup speed, VMs are not slowed down a bit: they are slowed down a lot, frozen, or even crashed.
On Windows machines, I receive the ESENT/508 error: svchost (1008) SoftwareUsageMetrics-Svc: A request to write to the file...
The main firewall problem on the node seems to be resolved by this patch.
EDIT: anyway, the VM/CT firewall still doesn't work.
I found this error in /var/log/syslog:
/var/log# pve-firewall restart
Dec 30 17:02:36 pve1 systemd: Reloading Proxmox VE firewall.
Dec 30 17:02:36 pve1 pve-firewall: send HUP to 1278
Dec 30 17:02:36 pve1 pve-firewall: received signal HUP
Dec 30 17:02:36 pve1...
I'm looking at the firewall again now, and I see that it is completely open, not blocking anything for either the nodes or the VMs.
Probably I was wrong when I created the post, or something has changed.
I tried restarting pve-firewall and disabling/re-enabling the firewall settings, without...
I have a PVE node on the Internet; I want to block any traffic between VMs and allow them to reach the Internet only.
I enabled the firewall on datacenter, node and vm level.
The node firewall works: I can only connect to it from my office's public IP address. But the VM firewall doesn't DROP...
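For context, the VM-level ruleset I'm trying to get working looks roughly like this (a sketch: the VMID 100, the 10.0.0.0/24 inter-VM subnet, and the rule itself are examples, not my exact config):

```
# /etc/pve/firewall/100.fw -- hypothetical VM, example subnet
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT

[RULES]
# block traffic toward the other VMs' subnet;
# everything else (i.e. the Internet) falls through to policy_out
OUT DROP -dest 10.0.0.0/24
```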
I just subscribed to a nested PVE VDS from Contabo.
The first thing I did was upgrading to PVE 7.1; after that I imported some VMs from my on-prem server, only to see that they don't start: Linux VMs start with kernel supported virtualization = no, while Windows VMs get stuck at boot (blue...
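In case it helps anyone chasing the same symptom, this is roughly the check I'd run on the nested host (a sketch; the default path is the standard kvm_intel sysfs parameter, on AMD hosts it is /sys/module/kvm_amd/parameters/nested instead):

```shell
#!/bin/sh
# Hedged sketch: report whether KVM exposes nested virtualization.
# Pass an alternative parameter path as $1 if needed.
check_nested() {
    param="${1:-/sys/module/kvm_intel/parameters/nested}"
    if [ -r "$param" ] && grep -qxE '(Y|1)' "$param"; then
        echo "nested virtualization: enabled"
    else
        echo "nested virtualization: disabled or kvm module not loaded"
    fi
}
check_nested "$@"
```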
Thank you for your really good answer. I have a couple of customers' Ceph clusters with enterprise SSDs, and they work with no problems.
As this is an internal/test cluster, I went with consumer SSDs for cost reasons, but I have already sunk too much time into these!
The thing is, the SSD connected...
I have a pair of HPE DL360 Gen8
dual Xeon, 64GB RAM, two 10k SAS HDDs for the system (ZFS RAID1) and four consumer SATA SSDs.
They're for internal use, and show abysmal performance.
At first I had Ceph on those SSDs (with a third node); then I had to move everything to a NAS temporarily.
I have a cluster with P420 RAID controllers, and very bad performance with SSDs.
I know I can configure the controller in HBA mode, but then I will lose the system RAID1.
I would like to switch to HBA mode, reinstall the system on ZFS RAID1, and then move the configuration from the old...
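For the last step, assuming shared storage and a stopped guest, the VM configs can be reassigned by moving their files under pmxcfs; a minimal sketch (the node names, VMID, and the helper itself are made up for illustration):

```shell
#!/bin/sh
# Hedged sketch: reassign a stopped VM's config from an old node to a
# new one in the same cluster by moving the file inside /etc/pve (pmxcfs).
# usage: move_vm_conf OLDNODE NEWNODE VMID [BASEDIR]
move_vm_conf() {
    old="$1"; new="$2"; vmid="$3"
    base="${4:-/etc/pve/nodes}"   # overridable base dir, e.g. for dry runs
    mv "$base/$old/qemu-server/$vmid.conf" "$base/$new/qemu-server/$vmid.conf"
}
```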
I didn't read correctly, sorry.
This is the first of the new servers:
root@NEWSERVER1:~# ls -al /etc/ceph/
drwxr-xr-x 2 root root 4096 Jul 15 17:12 .
drwxr-xr-x 92 root root 4096 Jul 15 22:31 ..
lrwxrwxrwx 1 root root 18 Jul 15 17:12 ceph.conf -> /etc/pve/ceph.conf
I added the second new node (it's the fifth), and used the pveceph install command.
The result is the same, "Got Timeout (500)".
The new nodes are a bit more up to date, 5.4.15 versus 5.4.13 on the older ones, but there are no ceph packages to upgrade on those.
Also, the new ones are...
Before trying that, I read this post, which said to use pveceph init on new nodes too. It didn't do any harm, but I should have asked before doing it.
It's not a link, and neither is it on the other servers. The file is correctly synced.
-rw-r----- 1 root www-data 1038 Jul 15 17:12 ceph.conf...
I'm replacing two nodes in my PVE 5.4 cluster; I will upgrade to 6.x after that.
I installed the first of the new nodes, joined it to the cluster, and reloaded the webgui; everything was OK.
Then, from another node's webgui, I clicked on the new node's "Ceph" section.
It proposed to install ceph packages...
Manually editing the greylist's whitelist is worse :)
I already have SPF check enabled, but AFAIK it doesn't work with Greylisting.
SPF tells whether a mail server can send for that domain AT ALL (or gives a bad evaluation if the SPF qualifier is "~" and not "-").
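To illustrate the qualifier difference (the domain and IP below are placeholders, and a real zone publishes only one of the two records):

```
; "~all" (softfail): mail from other servers gets a bad evaluation, not a reject
example.com.  IN TXT "v=spf1 mx ip4:203.0.113.10 ~all"
; "-all" (hardfail): mail from any other server fails SPF outright
example.com.  IN TXT "v=spf1 mx ip4:203.0.113.10 -all"
```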
Greylisting stops the first connection of SPF...