Search results

  1. Many 401 Unauthorized messages logged

    I need to plan and have some time to do the upgrade; it might take a little while before I'm able to do so.
  2. Many 401 Unauthorized messages logged

    Thanks! I'll see if I can upgrade one of our machines soon and report back once I have.
  3. Many 401 Unauthorized messages logged

    It seems pretty random; here are some logs from a few PVE nodes. The backup_1a datastore was only added yesterday and is not in use yet. syslog.13.gz:Aug 19 21:26:25 pve01-dlf pvestatd[4266]: backup_vms: error fetching datastores - 401 Unauthorized syslog.18.gz:Aug 14 08:38:19 pve01-dlf...
  4. Many 401 Unauthorized messages logged

    Hi, We’re seeing quite a few (~10-20 a day) ‘401 Unauthorized’ messages logged on 4 different clusters, but it doesn’t seem to affect services (a sketch of querying the datastore endpoint by hand follows after this list). The PBS nodes would log messages such as: GET /api2/json/admin/datastore: 401 Unauthorized: [client [::ffff:172.18.xxx.xxx]:xxxxx] authentication...
  5. Freeze issue with latest Proxmox 6.3-4 and AMD CPU

    Just wanted to add a 'me too' to the list, hopefully to help pinpoint the issue. We ran into this issue as well on our development cluster running from the no-subscription repository. For us the problem was reproducible with a snapshot rollback with memory, but only if our storage was on Ceph...
  6. NUMA misses performance implications

    Thanks Stefan, just to update in case anyone else finds this useful: we've tested this on a Proxmox machine that had quite a few NUMA misses according to numastat, but after enabling NUMA emulation in Proxmox for most of the VMs on it we did not see these misses go down. For now we've decided...
  7. NUMA misses performance implications

    Hello, I'm experiencing many NUMA misses on a few of our Proxmox servers, and according to the documentation[0] this might be resolved by enabling NUMA emulation for the VMs (see the NUMA sketch after this list) so the resources are properly distributed. Before changing all the VM configs I'd like to know if this is safe to do on...
  8. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    No solution that I know of; we worked around it by turning off discard for the VM disks (see the discard sketch after this list).
  9. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I'd very much like to know as well; I'm having this issue too and am about to downgrade 10 nodes to Proxmox VE 5 because of it. Also a dual ring configuration, one ring with MTU 1500 and one with MTU 9000.
  10. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    I can confirm that with Proxmox VE 6.0, Ceph + KRBD and virtio disks with discard=on the issue remains. So to recap (please correct me if I'm wrong): when using PVE 5.4 and 6.0 with Ceph and KRBD, turning on discard results in data loss in the VM regardless of the OS used and seems to be caused...
  11. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    Thanks for the test case, testing it now to see if it's faster than my dd loop (a sketch of such a loop follows after this list). The lack of response is bothering me as well, though my initial report was somewhat incomplete. It seems many are using Ceph with librbd instead of KRBD; I was not able to reproduce the issue with librbd. I see...
  12. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    Are your virtual machines using scsi or virtio disks? Mine are scsi with a VirtIO controller so I can use discard, but I'm currently testing with virtio disks to see if I can still trigger the issue; so far 24 hours without any issues. Do you have a way to (quickly) trigger the issue? Usually...
  13. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    The issue is also present in qemu 4.0 (Proxmox 6.0). edit: changed the title of the topic as it doesn't seem to be related to the kernel.
  14. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    An update regarding this issue: my initial findings were wrong, as it was not the kernel update causing the issue but rather the update from qemu 2.12 to 3.0, and then only because we're using Ceph with KRBD instead of librbd. I've tested qemu 3.0 with librbd for a week and the issue did not return...
  15. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    On our Proxmox/Ceph clusters we're seeing multiple Debian Jessie VMs remounting their /tmp partition read-only after some logged (Buffer) I/O errors in syslog. From what I can tell this happened after updating to pve-kernel-4.15.18-16-pve, as we've not seen this issue before and we've not had...
  16. Crash any Proxmox, if you can open a TCP session

    We have a subscription and are using the enterprise repository, but it seems the updated kernel is not available yet? [edit] Sorry, I spoke too soon, it's available for me.
  17. [SOLVED] issues after HA test

    Glad you got it sorted! I still wonder if something in the update caused it and what would be the appropriate way of installing updates on a cluster; I always assumed updating nodes one by one would be the safest way.
  18. [SOLVED] issues after HA test

    I updated my Proxmox cluster today and ended up in a broken cluster state which looked similar to what you describe; coincidentally it happened after updating to the same pve-manager and pve-kernel versions. I've updated a few nodes one by one with no VMs and rebooted them without an issue...
  19. Documentation bug and I'm unable to create CephFS

    I can confirm the error wylde is getting on a new cluster I'm building; a previous installation did not give this error. We update the machines every week, and today many Ceph packages are about to be updated. The cephfs_data pool is created, however the metadata pool is not (see the CephFS sketch after this list). pveversion -v output...
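
Example sketches

The snippets above only hint at the commands involved, so the sketches below are reconstructions under stated assumptions, not the original posters' commands. For the 401 Unauthorized messages in item 4, the log shows PVE polling the PBS datastore status endpoint; a minimal way to query that same endpoint by hand, assuming a PBS API token (the host, the token name 'monitor', and the secret are placeholders):

    # -k skips TLS verification for a self-signed certificate
    curl -k -H 'Authorization: PBSAPIToken=root@pam!monitor:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
        https://pbs.example.com:8007/api2/json/admin/datastore

If this returns the datastore list while pvestatd still logs intermittent 401s, the token itself is likely fine and the failures are transient on the PBS side.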
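
For the NUMA misses discussion in items 6 and 7, a minimal sketch of checking the per-node counters and enabling NUMA for a single VM; the VMID 100 is a placeholder, and as item 6 reports, whether this actually reduces misses depends on the workload:

    # per-node numa_hit / numa_miss counters on the host
    numastat
    # enable NUMA for one VM; this writes "numa: 1" to /etc/pve/qemu-server/100.conf
    # and takes effect after a full stop/start of the VM
    qm set 100 --numa 1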
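
For the workaround in item 8 (turning off discard for the VM disks), a sketch of what that could look like; the VMID, storage name, and volume are placeholders, and the real disk line should be copied from the VM's own config:

    # a disk with discard enabled typically looks like this in /etc/pve/qemu-server/101.conf:
    #   scsi0: ceph-vm:vm-101-disk-0,discard=on,size=32G
    # re-setting the disk without discard=on (discard=ignore is the default) disables it
    qm set 101 --scsi0 ceph-vm:vm-101-disk-0,size=32G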
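
The "dd loop" mentioned in item 11 is not shown in the post; the loop below is only a guess at that kind of reproduction attempt. It repeatedly writes and deletes data, then issues a TRIM, on a filesystem that sits on a discard-enabled virtual disk (the mount point /mnt/test is a placeholder):

    # write, delete, and trim in a loop to exercise the discard path
    while true; do
        dd if=/dev/urandom of=/mnt/test/blob bs=1M count=1024 oflag=direct
        rm /mnt/test/blob
        fstrim -v /mnt/test
        sleep 5
    done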
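
For the CephFS problem in item 19, a sketch of the creation step and of checking which pools actually exist; it assumes the monitors and an MDS are already set up, and 'cephfs' is the default filesystem name:

    # create a CephFS (and its cephfs_data / cephfs_metadata pools) and add it as PVE storage
    pveceph fs create --name cephfs --add-storage
    # verify both pools exist; the post reports cephfs_data being created but not the metadata pool
    ceph osd pool ls
    ceph fs status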
