Thanks. That could be the reason. But the network interface naming scheme is the cause of the problem. Is there any way to avoid it? I've seen that I can pin the network names (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names), but does it make sense?
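If it helps, what I have in mind is something like the .link file from the admin guide (the MAC address and interface name below are only placeholders):

/etc/systemd/network/10-enwan0.link:
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0

followed by update-initramfs -u -k all, updating /etc/network/interfaces to the new name, and a reboot.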
Regards,
Manuel
Hi,
I've experienced a problem after upgrading the kernel to 6.8 on Proxmox 8.2 on a Dell C6420 node.
After renaming the interfaces as described in the Kernel 6.8 "Known Issues & Breaking Changes" section (https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2), I've been able to use almost every network defined...
Hi,
I have a similar problem with the Debian 11 and Debian 12 LXC templates on ARM64. When I choose these templates, before completing the container creation I get the message:
extracting archive '/var/lib/vz/template/cache/debian-bullseye-20231124_arm64.tar.xz'
Total bytes read: 344780800...
I'm a bit confused about this subject.
I understand that if one node has 2 physical sockets with 2 cores each, and you set up a VM with 4 vCPUs, then it is recommended to use NUMA and configure the VM like the node, with 2 sockets and 2 cores. This way, the VM is aware of the physical specs of the...
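For example (just a sketch with a placeholder VMID, I may be missing options), I would expect the matching configuration to be something like:

qm set <vmid> --sockets 2 --cores 2 --numa 1

so that the guest topology mirrors the host's 2 sockets x 2 cores.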
Using a dedicated swap disk on the VM would also be a good way to reduce the size and time of incremental backups with PBS, so it seems like a win-win choice.
Can setting the VM disk option cache=unsafe cause it to be swapped on the host? I wasn't aware of that. What other things can cause the use...
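If cache=unsafe turns out to be the culprit, I guess the change would be something like (placeholder VMID and volume name, not my actual config):

qm set <vmid> --scsi0 <storage>:vm-<vmid>-disk-0,cache=none

to go back to cache=none and bypass the host page cache.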
Hi,
I'd like to ask about best practices for KVM guest configuration to avoid host swap usage and get the best performance.
On virtualization systems I have preferred to remove the swap partition from my VMs and give enough resources to the VMs and hosts, but this can sometimes be problematic, so I...
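For example, would something like lowering swappiness on the host make sense (the value below is only an illustration)?

# /etc/sysctl.d/99-swappiness.conf on the PVE host
vm.swappiness = 10

applied with sysctl --system.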
We have experienced similar problems lately, using Proxmox 7.1 with kernel 5.13.
We use both Ceph and NFS (FreeNAS 11) over dedicated 10Gbps networks (one network for each type of storage).
We have experienced some disk corruption on different VMs, always on the NFS FreeNAS storage. There's nothing...
No, I didn't. We decided to replace it with a C6420 with a PERC H330. It performs better than the C6320 and in most cases better than the C6220. The C6220 still outperforms the C6420 in write cache, as it has an LSI9265 with write cache and the H330 on the C6420 does not. We solve this...
To limit access to vmbridges, maybe an option could be to use pools, as in VM or storage access control. That could be an easy way to do it and consistent with the current user management.
What do you think about this?
Hi,
To use Proxmox at a school and let students manage their virtual machines and containers with a limited role, it would be nice to have a way to filter the network VLAN configuration settings available to a given network role.
The goal is to be able to do some network exercises like creating...
Hello,
I've recently set up a C6230 with an LSI2008 controller and, doing some performance tests, I've seen that it performs worse than an older C6220 with an LSI9265-8i controller.
Now we are doing some tests replacing some 10k SAS disks with SATA SSDs and we have problems with the disk order. We...
No, sorry for the confusion. It is not on ZFS; the physical server is an HP DL380 Gen7 and the containers run on a hardware RAID mounted via fstab and registered as a directory storage in Proxmox.
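Roughly, the setup looks like this (the device, mount point and storage ID are placeholders, not the real ones):

/dev/sdb1  /mnt/raid  ext4  defaults  0  2      (line in /etc/fstab)
pvesm add dir raid-dir --path /mnt/raid --content rootdir,images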
I've been using Docker in LXC with Proxmox (6.4 and earlier versions) for 3 years. It works fine. The purpose of using Debian/Ubuntu LXC containers is that I need to provide many machines to a group of about 20 students with just one physical server, and it would not be able to run 20 (or even more)...
I can confirm that I upgraded Ceph on all Ceph nodes. Some of the affected VMs were stopped during the backup, so the only possibility left is that there is some PVE service accessing Ceph on the failing nodes that I have not restarted.
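If it helps, what I plan to restart is something like:

systemctl restart pvestatd pvedaemon pveproxy

assuming those are the services that keep an old librados loaded; I'm not sure that list is complete.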
I'll try again and check it.
Thanks
Hello,
We updated our cluster from 6.2 to 6.4 a few months ago. After that, we had the warning message "mons are allowing insecure global_id reclaim". We found information about this issue in the forum...
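If I understood it correctly, the usual fix once every client and daemon has been updated is something like:

ceph config set mon auth_allow_insecure_global_id_reclaim false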
Hi,
Maybe you can just check the HA status to see if the nodes keep quorum during the backup. If I'm not wrong, if one node gets out of sync it simply reboots (if you use HA).
You can also try to reduce the network load during the backups using the bwlimit option in /etc/vzdump.conf.
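For example, something like this in /etc/vzdump.conf (the value is just an illustration, in KiB/s):

bwlimit: 100000

and the HA state can be checked with ha-manager status.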
Hope...