Recent content by absolutesantaja

  1. Live Migration Fails When Changing ZFS Pools

    Trying to figure out if this is a bug or just a misunderstanding of how live migrations are supposed to work with the latest enhancements. I'm on the 7.1-10 release and I'm trying to live-migrate a VM from one node to another. When you do a live migrate it gives you an option to use a different ZFS...
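For reference, the CLI form of an online migration that moves disks onto a different target storage looks roughly like this (the VM ID, node name, and pool name below are placeholders, not values from the post):

```shell
# Live-migrate VM 100 to node "pve2", relocating its disks onto the
# ZFS-backed storage "tank2" on the target node (all names are examples).
qm migrate 100 pve2 --online --targetstorage tank2
```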
  2. [SOLVED] xterm.js cutting off characters in Firefox

    I'm seeing the same thing. Changing the font size or line height doesn't keep it from cutting off the bottom of the text.
  3. Dell R720 BCM57800 Quad 1/10 Gigabit Missing Interfaces

    I filed a bug report on it and it's something related to changes in the SR-IOV exception handling in the kernel module. See for all of the logs and such. I'm running the latest pve-kernel that I see 'pve-kernel-5.11.22-3-pve/stable,now...
  4. Dell R720 BCM57800 Quad 1/10 Gigabit Missing Interfaces

    After patching this morning, eno3 and eno4 disappeared on one of my Dell R720s. The card is based on the BCM57800 chipset, has two SFP+ and two gigabit ports on the same card, and was working prior to this morning's update. Not really sure what to check. Commands like "ip link show" no longer...
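A few diagnostics that usually help when interfaces vanish after a kernel update; the BCM57800 is normally driven by the bnx2x module (commands are standard Linux tooling, nothing here is from the original post):

```shell
# List PCI Ethernet devices and check which driver, if any, bound to them
lspci -nnk | grep -A3 -i ethernet

# See whether the bnx2x module loaded and whether it logged any errors
dmesg | grep -i bnx2x

# Compare against the interfaces the kernel currently exposes
ip -br link show
```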
  5. [SOLVED] Network doesn't come up - ifupdown2-pre.service: Failed with result 'exit-code'.

    Turns out I have a disk that died and it's causing things to timeout during boot.
  6. [SOLVED] Network doesn't come up - ifupdown2-pre.service: Failed with result 'exit-code'.

    It looks like this might be the ultimate cause, so maybe there is a hardware failure somewhere, but I haven't found it. root@cloud4:~# journalctl -u systemd-udev-settle -- Logs begin at Thu 2020-08-13 08:23:01 CDT, end at Thu 2020-08-13 09:14:33 CDT. -- Aug 13 08:23:02
  7. [SOLVED] Network doesn't come up - ifupdown2-pre.service: Failed with result 'exit-code'.

    One of my PVE nodes is failing to start networking on reboot despite having an identical configuration to the other working nodes. I've seen but I'm not sure why you'd want to mask the service. If I restart ifupdown2-pre.service the...
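ifupdown2-pre essentially waits for udev to finish processing devices, so the underlying failure usually surfaces in the udev-settle logs rather than in ifupdown2 itself. A sketch of where to look (standard systemd tooling, not commands from the post):

```shell
# Check both units from the current boot; ifupdown2-pre waits on udev
systemctl status ifupdown2-pre.service systemd-udev-settle.service
journalctl -b -u systemd-udev-settle

# udevadm settle exits non-zero on timeout; run it by hand to reproduce
udevadm settle --timeout=30; echo "exit code: $?"
```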
  8. RBD pool size

    The capacity under Data Center is the raw capacity of the disks; with redundancy you'd divide that by 3 if you have 3x replication. The only way to restrict the size of a Ceph pool is via quotas, and you'd have had to do that yourself. It's probably just messing up the math because you don't have...
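As a quick sanity check, the raw-to-usable conversion for a replicated pool is just a division (the numbers below are made up for illustration, not taken from the thread):

```shell
#!/bin/sh
# Usable space in a replicated Ceph pool is roughly raw capacity
# divided by the replication factor (ignoring metadata overhead).
raw_gib=18000     # example: 18 TiB of raw OSD capacity, in GiB
replicas=3        # default replicated pool size
usable_gib=$((raw_gib / replicas))
echo "${usable_gib} GiB usable"
```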
  9. RBD pool size

    I'm wondering if that screen is incorrectly calculating the free space for some reason, because ceph status shows that you're only at around 50% total usage. If you click on Ceph under Data Center, what does it show under usage? Another option to potentially get you more space is to create a pool...
  10. RBD pool size

    If you look at this line objects: 796.40k objects, 3.0 TiB it's showing that you have 3 TiB of actual data in the pool. It sounds like Proxmox and Ceph are reporting the actual usage, so now it's just a matter of figuring out what's using more space than we're expecting. Ceph pools by default use...
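To see where the space is actually going, per-pool logical vs. raw usage can be compared directly (standard Ceph/RBD commands; the pool name is an example):

```shell
# STORED is logical data, USED is after replication; with size=3,
# USED should be roughly 3x STORED for a replicated pool.
ceph df detail

# Per-image usage inside an RBD pool, including snapshot overhead
rbd du -p rbd
```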
  11. RBD pool size

    Typically Ceph uses 3-way replication, so you should have had around 6 TB or so. What does ceph status show, and what does ceph osd tree show?
  12. [SOLVED] Proxmox PAM Authentication not working against SSSD

    Thanks, it looks like I just need to remove the from common-account. Reading some documentation from Debian also shows that removed from common-account, so that must be it.
  13. [SOLVED] Proxmox PAM Authentication not working against SSSD

    I'm trying to get Proxmox PAM authentication working against FreeIPA. I've joined the Proxmox nodes to FreeIPA and I'm able to SSH into each of the nodes using both my password and SSH keys from FreeIPA. What seems to be going on is the order of operations in the PAM modules. Here are two...
  14. Weird behavior with CentOS 8.2 container

    I'm also seeing some systemd instability in CentOS 8.2 containers. In my case it's systemd-tmpfiles-setup that's not behaving well, but I've had several occasions where systemctl just quits working altogether, returning a D-Bus error. Are you using an unprivileged container or a privileged one?
  15. Disk Passthrough to VM - 50% Performance Reduction

    Testing out disk passthrough to a VM from a SAS HBA, and I'm noticing a 50% reduction in speed for writes on the Proxmox host vs. the VM. This is with the default no cache option as I'm trying to test out direct sync speed. pve-manager/6.2-4/9824574a (running kernel: 5.4.34-1-pve) Here is my VM...
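For an apples-to-apples direct-write comparison, the same command can be run on the host and inside the VM (the device path below is a placeholder, and this write is destructive to whatever is on that device):

```shell
# DESTRUCTIVE: writes directly to the block device, bypassing the page
# cache (oflag=direct) and forcing a final flush (conv=fsync).
# Run identically on host and guest and compare the reported MB/s.
dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct conv=fsync
```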

