Recent content by Adam Koczarski

  1. [SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

    Same issue here upgrading from 6.4 to 7.2. I needed to install ifupdown2 prior to rebooting. Miss that step and you have to physically get onto the node to comment out the 'auto ...' lines in the interfaces file to regain access via the vmbrX interface. The upgrade documentation states, "The...
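
    A minimal sketch of both paths (the package and file names are standard Debian/Proxmox, but verify against the official upgrade guide):

        # Before rebooting into Proxmox 7, pull in the new network stack:
        apt update && apt install ifupdown2

        # If already locked out: from the physical console, comment out the
        # offending 'auto ...' stanzas, then restart networking.
        nano /etc/network/interfaces
        systemctl restart networking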
  2. [SOLVED] slow migrations

    I've noticed something similar on our 5-node cluster with a 10 Gbps migration network. When I update my nodes, I use the following process (sketched below): migrate all VMs from node 1 to node 2, update and reboot node 1, then migrate the VMs back to node 1. If my nodes have been running for a while, I see slow...
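
    For reference, the rotation with the stock CLI (the VMID and node names are placeholders):

        # Drain node 1 (online migration per VM, e.g. VMID 100):
        qm migrate 100 node2 --online

        # Update and reboot the drained node:
        apt update && apt dist-upgrade && reboot

        # Then move the VMs back:
        qm migrate 100 node1 --online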
  3. All VMs locking up after latest PVE update

    Thank you! The first command yields the same result on my production and updated POC clusters. The second command does show 5.2.0-4 versus 5.2.0-3, so it appears to have worked. I live migrated the VMs back and forth on the POC. I'll update and do the same migration on the production cluster now.
  4. All VMs locking up after latest PVE update

    How do I tell if the test pve-qemu-kvm has successfully been installed? Thx!
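
    For anyone else wondering, one way to check the installed package (plain dpkg, nothing thread-specific). Keep in mind a running VM stays on the QEMU binary it was started with until it is stopped or live migrated:

        dpkg -s pve-qemu-kvm | grep '^Version'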
  5. All VMs locking up after latest PVE update

    I've noticed the same phenomenon with one of our 2012R2 servers. It happened at the same time we had the issue reported in this thread, but it had also happened before this thread's issue appeared. I noticed it the first time after a Windows update.
  6. All VMs locking up after latest PVE update

    We were affected too: a 5-node cluster running Ceph on servers about 18 months old, with 600 TB of spinners. I've disabled Proxmox backups to NFS and a PBS until the issue is resolved, as it appeared to have been triggered during a backup.
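
    In case it helps others, on PVE of this vintage the scheduled vzdump jobs live in a cluster-wide cron file; commenting a job line out there disables it (a sketch, not thread-confirmed advice):

        # List the scheduled backup jobs, then comment out the relevant line:
        cat /etc/pve/vzdump.cron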
  7. bluestore_default_buffered_write = true

    Thank you for the reply. Doesn't sound like something I'd use. Just for reference, is there a way to tell if it's actually running? I added the following to my ceph.conf file on a POC cluster but wasn't sure if it was truly enabled. I didn't see any increase in RAM usage beyond what was...
  8. bluestore_default_buffered_write = true

    Has anyone ever tried using this feature? I've added it to the [global] section of the ceph.conf on my POC cluster but I'm not sure how to tell if it's actually working. TIA
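
    Two standard Ceph ways to verify what an OSD actually picked up (the daemon name is a placeholder; remember ceph.conf changes only apply after the daemon restarts):

        # What the running daemon has right now (admin socket, run on the OSD's host):
        ceph daemon osd.0 config get bluestore_default_buffered_write

        # On Nautilus and later, the monitor-side view of effective config:
        ceph config show osd.0 bluestore_default_buffered_write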
  9. Latest Ceph release 14.2.6

    I had heard there were issues with 14.2.5 so I've been waiting for a new release. I see Nautilus 14.2.6 has been released. Will there be a corresponding release from Proxmox? TIA
  10. After updating nodes - Use of uninitialized value $val in pattern match

    FYI - Last night I applied the latest updates from the no-subscription repository. Now I get the following when I bulk migrate VMs from node to node:

        Check VM 309: precondition check passed
        Migrating VM 309
        Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm...
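
    For context, that warning class is generic Perl rather than migration-specific: matching against an undefined variable with warnings enabled produces exactly this message. A minimal reproduction, unrelated to the PVE code itself:

        perl -we 'my $val; $val =~ m/x/;'
        # Use of uninitialized value $val in pattern match (m//) at -e line 1.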
  11. OSD keeps going down and out

    New drive installed. Since the osd was already down and out, I destroyed it, shut down the node, and replaced this non-hot-swappable drive in the mid-bay of the server. Booted it back up, tested the drive, recreated the osd, and associated it with the NVMe for db/wal. Worked like a charm! Thx...
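
    The CLI equivalent of that sequence, sketched with a placeholder OSD id and device names (pveceph supports a separate DB/WAL device at creation time; whether --cleanup also removes the DB volume is worth verifying on a test node first):

        # Destroy the dead OSD and clean up its volumes (run on the owning node):
        pveceph osd destroy 12 --cleanup

        # After swapping the disk, recreate the OSD with DB/WAL on the NVMe:
        pveceph osd create /dev/sdf --db_dev /dev/nvme0n1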
  12. OSD keeps going down and out

    Can anyone confirm whether destroying an osd via the GUI will also destroy the associated db/wal I initially created on the NVMe? Then do I just create the replacement osd on the new drive, referencing the NVMe as before? TIA!
  13. OSD keeps going down and out

    Dell confirmed the failing drive via iDRAC. The replacement is on the way. Is the process for replacing a drive with an associated DB/WAL via the version 6 GUI documented somewhere?
  14. OSD keeps going down and out

    I'm in pre-production with Proxmox/Ceph now. As for running disk tests, what would you recommend?
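
    One common combination for that, assuming smartmontools is installed and /dev/sdX is the suspect drive:

        # Non-destructive SMART long self-test, then read back the results:
        smartctl -t long /dev/sdX
        smartctl -a /dev/sdX

        # Destructive write-mode surface scan - only on a drive with no data you need:
        badblocks -wsv /dev/sdX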