Recent content by jasgripen

  1. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    I think the problem lies in the combination of detect_zeroes and discard. I changed one of my VMs' config files in /etc/pve... to “detect_zeroes=off,discard=on” and I haven't had any problems with that VM for several days. Can anybody else confirm this? The load on my cluster isn't very high right now...
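    For reference, a change like the one described above is made on the drive line of the VM's config file. A minimal sketch, assuming a hypothetical VM ID, storage name, and disk name, and using the boolean form that the Proxmox drive-option syntax expects:

    ```
    # /etc/pve/qemu-server/100.conf  (VM ID 100 and storage "ceph-pool" are placeholders)
    # discard stays enabled; zero-detection is disabled on the drive line
    scsi0: ceph-pool:vm-100-disk-0,discard=on,detect_zeroes=0
    ```

    The same change can be applied without editing the file directly, e.g. via `qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on,detect_zeroes=0`.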
  2. Proxmox 6.2 with older VMs

    Ok, I found out that my problem seems to be this bug. When a VM is running with Ceph/KRBD, it uses a block device for the VM's image. I’ve not tried to run Proxmox with ZFS, but my guess is that it also uses a block device for the VM image. If you...
  3. Ceph with krbd is unstable on Proxmox 6.2

    Yes, I'm running the newest kernel now on all nodes. Yes, I upgraded the BIOS on all nodes. I’m pretty sure it’s the same bug as described in this thread and the bug reported here...
  4. VMs remounting partition read-only and (Buffer) I/O errors since qemu 3.0

    Has anyone found a solution to this? I started getting the same problems after upgrading to Proxmox 6.2 from 6.1. But I also migrated some VMs from other systems to my Proxmox system at the same time, so it could be that I didn’t notice it earlier because the load on the system was not as high before that...
  5. Ceph with krbd is unstable on Proxmox 6.2

    Seems like I have the same problem as described in this thread:
  6. Proxmox 6.2 with older VMs

    Yes, I started a thread about this last week. I'm running some old CentOS 5 VMs and a FreeBSD VM that have problems. I only have problems when I use Ceph with KRBD. What storage do you use?
  7. Ceph with krbd is unstable on Proxmox 6.2

    I thought I had the latest BIOS, but there was a new version out. Anyway, I applied it along with the latest microcode updates, and the problem still persists. Kernel: 5.3.18-3-pve. Of course, I’m just trying to narrow down where the problem is. Any other thoughts on what I can try?
  8. Ceph with krbd is unstable on Proxmox 6.2

    Running VMs on Ceph with KRBD has been unstable since we upgraded to Proxmox 6.2. We are running Proxmox on 5 nodes with 2 NVMe disks for Ceph in each node and around 50 VMs; we don’t use LXC. We use KRBD for our Ceph storage pool and VirtIO SCSI as the VM disk controller. After upgrading to Proxmox 6.2...
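    A storage setup like the one described in this post corresponds to an RBD entry in `/etc/pve/storage.cfg` with KRBD enabled; a minimal sketch, with a hypothetical storage and pool name:

    ```
    # /etc/pve/storage.cfg  ("ceph-vm" is a placeholder name)
    rbd: ceph-vm
            pool ceph-vm
            content images
            krbd 1
    ```

    With `krbd 1`, VM disk images are mapped as kernel block devices via the kernel RBD driver rather than accessed through QEMU's built-in librbd client, which is why kernel-side RBD bugs only surface in this configuration.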

