Recent content by Karistea

  1.

    Log files full of nfs messages

    Hi, one of my nodes occasionally generates a lot of the following messages: [Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 1 highest_used_slotid 0 [Wed Jan 15 21:05:03 2021] nfs41_sequence_process: Error 0 free the slot [Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 0...
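Before filing a report, the flood can be triaged by counting which kernel message dominates. A minimal sketch, assuming dmesg lines in the format quoted in the snippet:

```python
import re
from collections import Counter

# Sample dmesg lines in the format shown in the post (assumed).
log = """\
[Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 1 highest_used_slotid 0
[Wed Jan 15 21:05:03 2021] nfs41_sequence_process: Error 0 free the slot
[Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 0 highest_used_slotid 0
"""

# Extract the kernel function name after the "[timestamp] " prefix
# and tally how often each message fires.
pattern = re.compile(r"\] (\w+):")
counts = Counter(m.group(1) for line in log.splitlines()
                 if (m := pattern.search(line)))
print(counts)
```

Fed the real journal (e.g. piped from `dmesg`), the same tally shows whether the messages cluster around specific times, which helps correlate them with NFS server events.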
  2.

    LVM commands hang and node is marked with a question mark

    I'm not sure if this kvm trace is related: [Tue Jan 19 22:39:41 2021] INFO: task kvm:28271 blocked for more than 604 seconds. [Tue Jan 19 22:39:41 2021] Tainted: P IO 5.4.78-2-pve #1 [Tue Jan 19 22:39:41 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this...
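The kernel's hung-task detector prints these reports in a fixed format, so the blocked task, its PID, and the wait time can be pulled out of the journal programmatically. A small sketch, assuming the line quoted above:

```python
import re

# Hung-task report header in the standard kernel format.
msg = "INFO: task kvm:28271 blocked for more than 604 seconds."

# Pull out the task name, PID, and how long it has been blocked.
m = re.search(r"task (\S+):(\d+) blocked for more than (\d+) seconds", msg)
task, pid, secs = m.group(1), int(m.group(2)), int(m.group(3))
print(task, pid, secs)
```

For a live system, `echo w > /proc/sysrq-trigger` dumps all currently blocked tasks with stack traces to the kernel log, which usually shows what the kvm process is actually waiting on.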
  3.

    NFS-based VM storage grows beyond its allocated size

    # qemu-img info vm-132-disk-0.qcow2 image: vm-132-disk-0.qcow2 file format: qcow2 virtual size: 76 GiB (81604378624 bytes) disk size: 101 GiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt...
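qemu-img reports both the guest-visible virtual size and the host-side allocation, so the mismatch can be detected by parsing its output. A sketch parsing the text output quoted above (newer qemu-img versions also offer `--output=json` for this):

```python
import re

# qemu-img info output as quoted in the post.
info = """\
image: vm-132-disk-0.qcow2
file format: qcow2
virtual size: 76 GiB (81604378624 bytes)
disk size: 101 GiB
cluster_size: 65536
"""

# Guest-visible size in bytes, and host allocation in GiB.
virtual = int(re.search(r"virtual size: .*\((\d+) bytes\)", info).group(1))
disk_gib = float(re.search(r"disk size: ([\d.]+) GiB", info).group(1))

virtual_gib = virtual / 2**30
print(virtual_gib, disk_gib)
print(disk_gib > virtual_gib)  # allocation has outgrown the virtual size
```

qcow2 metadata, internal snapshots, and clusters that were written once but never discarded all count toward the disk size, which is how the file can legitimately exceed the virtual size.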
  4.

    NFS-based VM storage grows beyond its allocated size

    I understand how thin provisioning works, but I still cannot see why my OS disk is 76 GiB while the qcow2 is 102G: fdisk -l /dev/sda Disk /dev/sda: 76 GiB, 81604378624 bytes, 159383552 sectors
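The fdisk numbers are internally consistent — the guest really does see exactly 76 GiB — so the excess lives entirely on the host side. Checking the arithmetic:

```python
SECTOR = 512                 # fdisk reports 512-byte sectors
bytes_ = 81604378624         # size reported by fdisk
sectors = 159383552          # sector count reported by fdisk

# The sector count matches the byte size exactly.
assert sectors * SECTOR == bytes_

print(bytes_ / 2**30)   # binary GiB, as fdisk reports it
print(bytes_ / 10**9)   # decimal GB, for comparison
```

Blocks the guest has deleted stay allocated in the qcow2 file unless they are discarded, so the file can keep every cluster ever written plus metadata. Enabling discard on the virtual disk and running fstrim in the guest lets qemu punch holes in the image and reclaim that space.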
  5.

    LVM commands hang and node is marked with a question mark

    Hi Oguz, here is the info you have asked for: # pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve) pve-manager: 6.3-3 (running version: 6.3-3/eee5f901) pve-kernel-5.4: 6.3-3 pve-kernel-helper: 6.3-3 pve-kernel-5.4.78-2-pve: 5.4.78-2 pve-kernel-5.4.34-1-pve: 5.4.34-2 ceph-fuse...
  6.

    NFS-based VM storage grows beyond its allocated size

    Hi, I have a VM with its storage located on an NFS server, and I recently noticed that its size grew to 102 GB instead of the 76 GB that was initially allocated. The image format is qcow2: -rw-r----- 1 root root 102G Jan 20 5:08 /mnt/pve/nfs/images/132/vm-132-disk-0.qcow2 I currently have...
  7.

    LVM commands hang and node is marked with a question mark

    Hi, in a 12-node Proxmox 6.2 cluster, I often experience problems with some hosts that turn grey. This usually happens during migrations or after failed migrations. During that time I am able to browse the VMs running on this node. Trying to investigate this, I've noticed that commands that...
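A grey node usually means the status daemon's storage checks are blocking rather than failing, so it helps to distinguish a hung command from a merely slow one. A minimal sketch of a timeout probe (the `probe` helper is hypothetical; on an affected node one would point it at e.g. `lvs` or `vgs`):

```python
import subprocess
import sys

def probe(cmd, timeout=10):
    """Run cmd; return True if it finished within `timeout` seconds."""
    try:
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False

# A fast command stands in here for a healthy storage check;
# probe(["lvs"], timeout=10) would be the real-world use (assumed).
print(probe([sys.executable, "-c", "pass"]))
```

If `lvs` blocks past any reasonable timeout while other commands return, the hang points at a stuck device (often an unreachable iSCSI/NFS-backed PV) rather than at LVM itself.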