The cloud images for Debian 12 Bookworm also ship with this mess out of the box; I've spent the last week trying to figure out why qemu-guest-agent kept getting turned off.
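For anyone hitting the same thing, this is roughly what I checked; the agent has to be enabled on both the PVE side and inside the guest (the VMID here is a placeholder):

# On the PVE host, enable the agent for the VM
qm set 100 --agent enabled=1
# Inside the Debian guest, make sure the service is installed and stays enabled
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent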
@rwithd Did you ever create a feature request for this? I've been running into this issue almost as long as I've been running PBS. It makes it hard to know if a PBS sync has a more serious problem when you get several emails about this every day.
UPDATE
Issue seems to be related to the 6.1 kernel opt-in and 4K sector sizing on the NVMe. Another system with 512-byte sector sizing works fine on the 6.1 kernel, and reverting this machine to 5.15.85-1-pve brings storage back to working order.
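In case it helps anyone confirm which sector size they're on, this is roughly how I checked (device name is an example):

# Show physical and logical sector sizes for all block devices
lsblk -o NAME,PHY-SEC,LOG-SEC
# Or query the NVMe namespace directly for its formatted LBA size
nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"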
ORIGINAL
After PVE updates, VMs running on a specific...
I have a relatively small NVMe Ceph cluster running on a dedicated 10Gb network. RBD and CephFS performance seems to be pretty good at around 500 MB/s in various synthetic benchmarks. Performance uploading a 16 GB test file to S3 (RadosGW) from a VM is terrible at only 25 MB/s or so...
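To separate RadosGW from the underlying cluster, I'd compare a raw RADOS write benchmark against the S3 upload; a rough sketch (pool, bucket, and file names are placeholders):

# Raw RADOS write throughput against a test pool, 30 seconds
rados bench -p testpool 30 write --no-cleanup
# Compare against an S3 upload through RadosGW
s3cmd put testfile.bin s3://testbucket/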
Trying to figure out if this is a bug or just a misunderstanding of how live migrations are supposed to work with the latest enhancements. I'm on the 7.1-10 release and I'm trying to live migrate a VM from one node to another. When you do a live migrate it gives you an option to use a different zfs...
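For reference, the CLI equivalent of what I'm attempting looks something like this (VMID, node, and storage names are placeholders):

# Live migrate VM 100 to node pve2, sending local disks to a different target storage
qm migrate 100 pve2 --online --with-local-disks --targetstorage local-zfs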
I filed a bug report on it and it's related to changes in the SR-IOV exception handling in the kernel module. See https://bugzilla.proxmox.com/show_bug.cgi?id=3558 for all of the logs and such.
I'm running the latest pve-kernel that I see: 'pve-kernel-5.11.22-3-pve/stable,now...
After patching this morning, eno3 and eno4 disappeared on one of my Dell R720 servers. The card is based on the BCM57800 chipset, with two SFP+ and two gigabit ports on the same card, and it was working prior to this morning's update. Not really sure what to check. Commands like "ip link show" no longer...
It looks like this might be the ultimate cause, so maybe there is a hardware failure somewhere, but I haven't found it.
root@cloud4:~# journalctl -u systemd-udev-settle
-- Logs begin at Thu 2020-08-13 08:23:01 CDT, end at Thu 2020-08-13 09:14:33 CDT. --
Aug 13 08:23:02 cloud4.example.com...
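In case someone else hits this with the same card, these are the checks I'd run next; as far as I know the BCM57800 uses the bnx2x driver:

# See whether the driver loaded and what it logged
dmesg | grep -i bnx2x
# Confirm the card is still visible on the PCI bus and which driver is bound
lspci -nnk | grep -iA3 ethernet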
One of my PVE nodes is failing to start networking on reboot despite having identical configuration to other working nodes. I've seen https://forum.proxmox.com/threads/proxmox-6-network-wont-start.56362/ but I'm not sure why you'd want to mask the service. If I restart ifupdown2-pre.service the...
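What I've been using to poke at it so far, for what it's worth:

# Check why the pre-networking service is unhappy
systemctl status ifupdown2-pre.service
# Pull the networking logs from the current boot
journalctl -b -u networking.service
# Re-apply the interface configuration by hand (ifupdown2)
ifreload -a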
The capacity under Data Center is the raw capacity of the disks; with redundancy you'd divide that by 3 if you have 3x replication. The only way to restrict the size of a Ceph pool is via quotas, and you'd have had to do that yourself. It's probably just messing up the math because you don't have...
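If you did want to cap a pool, the quota route looks something like this (pool name and size are placeholders):

# Limit the pool to 4 TiB of stored data (4 * 1024^4 bytes)
ceph osd pool set-quota mypool max_bytes 4398046511104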
I'm wondering if that screen is incorrectly calculating the free space for some reason, because ceph status shows that you're only at around 50% total usage. If you click on Ceph under Data Center, what does it show under usage? Another option to potentially get you more space is to create a pool...
If you look at this line, objects: 796.40k objects, 3.0 TiB, it's showing that you have 3.0 TiB of actual data in the pool. It sounds like Proxmox and Ceph are reporting the actual usage, so now it's just a matter of figuring out what's using more space than we're expecting. Ceph pools by default use...
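To make the math concrete: with 3x replication, 3.0 TiB of stored objects consumes roughly 3.0 x 3 = 9 TiB of raw capacity, which is what the raw-capacity view counts against. ceph df breaks this down per pool:

# STORED is the logical data; USED is after replication (~3x here)
ceph df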
Thanks, it looks like I just need to remove pam_deny.so from common-account. Some Debian documentation also shows it removed from common-account (https://wiki.debian.org/LDAP/PAM), so that must be it.
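For anyone searching later, my common-account ended up looking roughly like this; reconstructed from memory, so treat it as a sketch rather than a verbatim copy:

# /etc/pam.d/common-account (module options trimmed)
account [success=1 new_authtok_reqd=done default=ignore]  pam_unix.so
account required                                          pam_permit.so
account [default=bad success=ok user_unknown=ignore]      pam_sss.so
# pam_deny.so removed so FreeIPA accounts unknown to pam_unix fall through to pam_sss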
I'm trying to get Proxmox PAM authentication working against FreeIPA. I've joined the Proxmox nodes to FreeIPA and I'm able to ssh into each of the nodes using both my password and ssh keys from FreeIPA. What seems to be going wrong is the order of operations in the PAM modules.
Here are two...
I'm also seeing some systemd instability in CentOS 8.2 containers. In my case it's systemd-tmpfiles-setup that's not behaving well, but I've had several occasions where systemctl just quits working altogether, returning a dbus error. Are you using an unprivileged container or a privileged one?
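(If you're not sure, the container config will tell you; the CTID here is a placeholder:)

# Unprivileged containers show "unprivileged: 1" in their config
pct config 101 | grep unprivileged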
Testing out disk passthrough to a VM from a SAS HBA, and I'm noticing a 50% reduction in write speed in the VM versus the Proxmox host. This is with the default no-cache option, as I'm trying to test direct sync speed.
pve-manager/6.2-4/9824574a (running kernel: 5.4.34-1-pve)
Here is my VM...
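Not part of the VM config above, but for reproducing the comparison, the direct-sync test I'm running on both host and guest is along these lines (device path is a placeholder; it writes to the raw disk, so it destroys data on the target):

# 4k synchronous direct writes, queue depth 1, 30 seconds
fio --name=dsync --filename=/dev/sdb --rw=write --bs=4k \
    --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
    --runtime=30 --time_based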