Recent content by Stefano Giunchi

  1. VMs freezing and unreachable when backup server is slow

    From what I read, in fact, depending on the backup speed, VMs are not just slowed down a bit: they are slowed down a lot, frozen, or even crashed. On Windows machines I receive the ESENT/508 error: svchost (1008) SoftwareUsageMetrics-Svc: A request to write to the file...
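
    One common mitigation for guests starving during slow backups is to cap the backup bandwidth so they keep some I/O headroom. A minimal sketch, assuming the node-wide vzdump defaults file; the 51200 KiB/s value is an arbitrary example to tune for your storage:

    # /etc/vzdump.conf -- node-wide backup defaults
    # Limit backup read speed (KiB/s); ~50 MiB/s here.
    bwlimit: 51200

    The same limit can also be set per job with vzdump's --bwlimit option.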
  2. blocking traffic in-between VMs

    The main firewall problem on the node seems to be resolved by this patch. Ref: https://forum.proxmox.com/threads/firewall-stuck-on-pending-changes.98418/ EDIT: the VM/CT firewall still doesn't work, though.
  3. blocking traffic in-between VMs

    I found this error in /var/log/syslog:

    /var/log# pve-firewall restart
    Dec 30 17:02:36 pve1 systemd[1]: Reloading Proxmox VE firewall.
    Dec 30 17:02:36 pve1 pve-firewall[2448715]: send HUP to 1278
    Dec 30 17:02:36 pve1 pve-firewall[1278]: received signal HUP
    Dec 30 17:02:36 pve1...
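
    When the daemon reloads but the rules don't seem to apply, the generated ruleset can be inspected directly. A small sketch using pve-firewall's own subcommands:

    # Is the firewall daemon enabled and running?
    pve-firewall status
    # Compile the ruleset from the config files; syntax errors surface here
    pve-firewall compile
    # Check what actually reached the kernel (per-VM chains sit on tap/fwbr devices)
    iptables-save | grep -E 'tap|fwbr'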
  4. blocking traffic in-between VMs

    I'm looking at the firewall again now, and I see that it is completely open, not blocking anything for either the nodes or the VMs. Probably I was wrong when I created this post, or something has changed. I tried restarting pve-firewall and disabling/re-enabling the firewall settings, without...
  5. blocking traffic in-between VMs

    Hi, I have a PVE host on the Internet, and I want to block all traffic between VMs while allowing them to reach the Internet only. I enabled the firewall at the datacenter, node, and VM level. The node firewall works: I can only connect to it from my office's public IP address. The VM firewall, however, doesn't DROP...
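
    For this goal, each guest's firewall file (/etc/pve/firewall/<vmid>.fw) can drop guest-to-guest traffic while the default outbound policy stays open. A minimal sketch, assuming the guests share the hypothetical subnet 192.0.2.0/24; note that the firewall checkbox must also be enabled on the VM's network device for these rules to take effect at all:

    [OPTIONS]
    enable: 1
    policy_out: ACCEPT

    [RULES]
    # Drop guest-to-guest traffic in both directions; Internet-bound
    # traffic falls through to the ACCEPT output policy.
    OUT DROP -dest 192.0.2.0/24
    IN DROP -source 192.0.2.0/24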
  6. nesting not working with newer kernels

    No, unfortunately I don't know that. I would bet it's a 5.4, anyway. I'm using pve-kernel 5.4 on PVE 7.1 without any problems so far.
  7. nesting not working with newer kernels

    Hi all, I just subscribed to a nested PVE VDS from Contabo. The first thing I did was upgrade to PVE 7.1; after that I imported some VMs from my on-prem server, only to find that they don't start: Linux VMs boot with kernel-supported virtualization = no, while Windows VMs get stuck at boot (blue...
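
    A quick way to see at which level nesting breaks, sketched below; the VMID 100 is a placeholder:

    # On the Contabo VDS: are virtualization flags exposed to this level?
    egrep -c '(vmx|svm)' /proc/cpuinfo            # 0 = no nesting available here
    # Is nested KVM enabled (Intel; use kvm_amd on AMD hosts)?
    cat /sys/module/kvm_intel/parameters/nested
    # Guests that need the flags should get the host CPU passed through
    qm set 100 --cpu host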
  8. very poor performance with consumer SSD

    Thank you for your really good answer. I have a couple of customers' Ceph clusters with enterprise SSDs, and they work with no problems. As this is an internal/test cluster, I went with consumer SSDs for cost reasons, but I have already thrown too much time at them! The thing is, the SSDs connected...
  9. very poor performance with consumer SSD

    I have a pair of HPE DL360 Gen8 servers, dual Xeon, 64 GB RAM, with 2 10k SAS HDDs for the system (ZFS RAID1) and 4 consumer SATA SSDs. They're for internal use and show abysmal performance. At first I had Ceph on those SSDs (with a third node), then I had to move everything to a NAS temporarily. Now I...
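
    Consumer SSDs tend to collapse on exactly the workload Ceph generates: single-queue synchronous small writes. A benchmark sketch with fio; /dev/sdX is a placeholder, and since the test writes to the raw device, only run it on a scratch disk:

    fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based
    # Enterprise SSDs with power-loss protection sustain tens of thousands
    # of IOPS here; consumer drives often drop to a few hundred.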
  10. move cluster node configuration

    I have a cluster with P420 RAID controllers and very bad performance with SSDs. I know I can configure the controller in HBA mode, but then I would lose the system RAID1. I would like to switch to HBA mode, reinstall the system on ZFS RAID1, and then move the configuration from the old...
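
    The guest definitions live in the cluster filesystem, so a sketch of a pre-reinstall safety copy might look like this; NODE and the backup path are placeholders, and if the node rejoins the same cluster afterwards the configs are restored from the other members anyway:

    mkdir -p /root/pve-backup
    cp -a /etc/pve/nodes/NODE/qemu-server /root/pve-backup/   # VM configs
    cp -a /etc/pve/nodes/NODE/lxc /root/pve-backup/           # container configs
    cp /etc/pve/storage.cfg /root/pve-backup/                 # cluster-wide, but handy
    tar czf /root/pve-backup.tar.gz -C /root pve-backup       # move this off the host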
  11. encrypted archives still coming through

    I just hit this problem too. As a suggestion, I think the "Block encrypted archives and documents" option should really block them; the Heuristic Score mechanism is not very intuitive. Thanks.
  12. ceph install broken on new node

    D'oh! I still hadn't added the new nodes to the Ceph network. I feel so stupid...
  13. ceph install broken on new node

    All nodes are reachable, and the cluster is OK. These are the software versions; I did an apt upgrade of OLD3 yesterday. pveceph status works on the old nodes, while on the new nodes it returns "got timeout".
  14. ceph install broken on new node

    I didn't read correctly, sorry. This is the first of the new servers:

    root@NEWSERVER1:~# ls -al /etc/ceph/
    total 12
    drwxr-xr-x  2 root root 4096 Jul 15 17:12 .
    drwxr-xr-x 92 root root 4096 Jul 15 22:31 ..
    lrwxrwxrwx  1 root root   18 Jul 15 17:12 ceph.conf -> /etc/pve/ceph.conf
    -rw-r--r-- 1...
  15. ceph install broken on new node

    I added the second new node (it's the fifth), and used the pveceph install command. The result is the same: "Got Timeout (500)". The new nodes are a bit more up to date, kernel 5.4.15 versus 5.4.13 on the older ones, but there are no Ceph packages left to upgrade on those. Also, the new ones are...
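
    A quick reachability check from the timing-out node narrows this kind of problem down; the monitor IP below is a placeholder:

    # Which monitors does the node think it should reach?
    grep -E 'mon_host|public_network' /etc/pve/ceph.conf
    # Can it actually reach them on the Ceph network?
    ping -c 3 10.10.10.1
    # Fail fast instead of hanging on "got timeout"
    ceph -s --connect-timeout 5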
