In fact, depending on the backup speed, VMs are not just slowed down a bit: they are slowed down a lot, frozen, or even crashed.
On Windows machines, I receive the ESENT/508 error: svchost (1008) SoftwareUsageMetrics-Svc: A request to write to the file...
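The ESENT 508 error is Windows complaining about slow disk writes, which fits backup I/O starving the guests. One common mitigation is to throttle vzdump at the node level. A minimal sketch of /etc/vzdump.conf — the values are illustrative assumptions, not recommendations:

```
# /etc/vzdump.conf -- node-wide vzdump defaults (illustrative values)
# Cap backup bandwidth so guests keep enough disk I/O headroom.
bwlimit: 51200        # KiB/s, ~50 MiB/s; tune to your storage
# Lower the I/O scheduling priority of the backup process (0-8).
ionice: 7
```

Per-job overrides are also possible on the vzdump command line with the same option names.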
The main firewall problem on the node seems resolved using this patch.
Ref: https://forum.proxmox.com/threads/firewall-stuck-on-pending-changes.98418/
EDIT: anyway, the vm/ct firewall still doesn't work
I found this error in /var/log/syslog:
/var/log# pve-firewall restart
Dec 30 17:02:36 pve1 systemd[1]: Reloading Proxmox VE firewall.
Dec 30 17:02:36 pve1 pve-firewall[2448715]: send HUP to 1278
Dec 30 17:02:36 pve1 pve-firewall[1278]: received signal HUP
Dec 30 17:02:36 pve1...
Looking at the firewall again now, I see that it is completely open, not blocking anything for either the nodes or the VMs.
Probably I was wrong when I created the post, or something changed.
I tried restarting pve-firewall and disabling/enabling the firewall settings, without...
Hi,
I have a PVE host on the Internet; I want to block all traffic between VMs and allow them to reach the Internet only.
I enabled the firewall on datacenter, node and vm level.
The node firewall works: I can only connect to it from my office's public IP address. But the VM-level PVE firewall doesn't DROP...
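For the "isolate VMs from each other but allow Internet" goal, the per-VM firewall file can express it with an explicit drop toward the VM subnet plus a permissive output policy. A sketch of /etc/pve/firewall/<vmid>.fw, assuming the VMs live in 192.168.100.0/24 (the subnet is my assumption):

```
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT

[RULES]
# Drop anything aimed at sibling VMs on the same subnet...
OUT DROP -dest 192.168.100.0/24
# ...everything else (i.e. the Internet) falls through to policy_out: ACCEPT
```

Note that the firewall checkbox must also be set on the VM's network device (firewall=1 on the NIC); without it, the VM rules are never applied even if the datacenter, node, and VM firewalls are all enabled.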
Hi all,
I just subscribed to a nested PVE VDS from Contabo.
The first thing I did was upgrade to PVE 7.1. After that I imported some VMs from my on-prem server, only to find they don't start: Linux VMs boot with kernel supported virtualization = no, while Windows VMs get stuck at boot (blue...
Thank you for your really good answer. I have a pair of customers' ceph clusters with enterprise ssd, and they work with no problems.
As this is an internal/test cluster, I went with consumer SSDs for cost reasons, but I have sunk too much time into these!
The thing is, the SSD connected...
I have a pair of HPE DL360 Gen8
dual Xeon, 64GB RAM, 2 hdd 10k sas for system (ZFS RAID1) and 4 consumer sata SSD
They're for internal use, and show abysmal performance.
At first I had ceph on those SSD (with a third node), then I had to move everything to NAS temporarily.
Now I...
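To put a number on the SSD performance, a synchronous 4k random-write fio run is the usual yardstick for Ceph/ZFS workloads, since consumer drives without power-loss protection often collapse on sync writes. A sketch — the test file path, size, and runtime are assumptions, and fio must be installed:

```shell
# 4k synchronous random writes, queue depth 1 -- the pattern that hurts
# consumer SSDs most (no capacitor-backed cache to absorb flushes).
fio --name=synctest --filename=/tmp/fio.test --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 --sync=1 \
    --runtime=60 --time_based --iodepth=1
```

Enterprise SSDs typically sustain thousands of sync-write IOPS here; consumer drives can drop to a few hundred or less, which would explain the difference against the customers' clusters.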
I have a cluster with P420 RAID controllers, and very bad performance with SSD.
I know I can configure the controller in HBA mode, but then I will lose the system RAID1.
I would like to switch to HBA mode, reinstall the system in ZFS raid1 and then move the configuration from the old...
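Before switching the controller to HBA mode and reinstalling, the guest definitions and storage config under /etc/pve are worth archiving to a location off the disks being wiped. A minimal sketch, assuming the standard pmxcfs paths — the helper name and the exact file list are my assumptions, adapt them to what the node actually has:

```shell
# Archive a node's guest configs and storage definition before a reinstall.
# src is normally /etc/pve (the pmxcfs mount); out must live off the disks
# that will be wiped (e.g. a USB stick or remote copy).
backup_pve_config() {
    src="$1"; out="$2"
    to_save=""
    # Only include the pieces that exist on this node.
    for p in qemu-server lxc storage.cfg firewall; do
        [ -e "$src/$p" ] && to_save="$to_save $p"
    done
    tar -C "$src" -czf "$out" $to_save
}

# Usage (on a real node):
#   backup_pve_config /etc/pve /root/pve-config-backup.tar.gz
```

After the ZFS RAID1 reinstall, the VM disks still have to be restored separately; this only preserves the configuration files.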
I just hit this problem too. As a suggestion: it's the "Block encrypted archives and documents" option that actually blocks it; the Heuristic Score setting is not very intuitive.
Thanks.
All nodes are reachable, and the cluster is ok:
These are the software versions; I did an apt upgrade of OLD3 yesterday:
pveceph status works on the old nodes, while on the new nodes it returns "got timeout".
I didn't read correctly, sorry.
This is the first of the new servers:
root@NEWSERVER1:~# ls -al /etc/ceph/
total 12
drwxr-xr-x 2 root root 4096 Jul 15 17:12 .
drwxr-xr-x 92 root root 4096 Jul 15 22:31 ..
lrwxrwxrwx 1 root root 18 Jul 15 17:12 ceph.conf -> /etc/pve/ceph.conf
-rw-r--r-- 1...
I added the second new node (it's the fifth), and used the pveceph install command.
The result is the same, "Got Timeout (500)".
The new nodes are slightly newer, 5.4.15 versus 5.4.13 on the older ones, but there are no Ceph packages left to upgrade on them.
Also, the new ones are...
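When pveceph status times out only on some nodes, the usual suspects are a missing admin keyring or MONs that aren't reachable from those nodes. A few checks worth running on a new node — this command set is a suggestion, not an official procedure, and <mon-ip> is a placeholder for a real monitor address:

```shell
# Is the config symlinked and the admin keyring present?
ls -l /etc/ceph/ceph.conf /etc/pve/priv/ceph.client.admin.keyring
# Can this node actually talk to the monitors? Bound the wait so it fails fast.
ceph -s --connect-timeout 5
# Are the MON ports reachable from here (6789 legacy, 3300 msgr2)?
nc -zv <mon-ip> 6789
nc -zv <mon-ip> 3300
```

If ceph -s hangs but the ports are open, comparing the keyring and the mon_host line of /etc/pve/ceph.conf against a working old node is the next step.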