Search results

  1. LVM devices still shown in pvesm list after deleting VM

    I've found the solution: besides checking fuser and lsof, you need to check dmsetup info too. If the open count is > 0, check e.g. ls -la /sys/dev/block/251\:19/holders/ as mentioned here https://forum.proxmox.com/threads/lvm-dmsetup-nightmares.58182/. Looking at lsblk might also be a good idea...
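    A minimal shell sketch of that check, reusing the device name and major:minor pair that appear in this thread (your own values will differ):

      # Show device-mapper state for the leftover volume; the interesting
      # fields are "Open count" and "Major, minor".
      dmsetup info /dev/mapper/vg--cluster01--storage01-vm--201--disk--1

      # If the open count is > 0, list what holds the block device open.
      ls -la /sys/dev/block/251:19/holders/

      # lsblk shows the whole device stack, which makes nested LVM easy to spot.
      lsblk /dev/mapper/vg--cluster01--storage01-vm--201--disk--1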
  2. Swap usage on Proxmox node

    Hi, all servers run with vm.swappiness = 60. Memory isn't overcommitted. We do not run ZFS. It's not a big problem since we are going to extend the cluster anyway to get more resources. I was just curious because I hadn't seen this before. Thanks & Cheers
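    For reference, the swappiness value mentioned there can be checked and changed at runtime with sysctl; a small sketch, not taken from the thread:

      # Show the current value (60 is the kernel default).
      sysctl vm.swappiness

      # Lower it for the running system; persist it via a file in /etc/sysctl.d/.
      sysctl -w vm.swappiness=10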
  3. Swap usage on Proxmox node

    Hello, I've got 6 Proxmox nodes in a cluster. 5 of them do not use swap space at all. One node uses all its swap space even when enough RAM is free. root@pm-04:~# free -m total used free shared buff/cache available Mem: 385591 133802 250179...
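    One way to see where that swap is going is to list the per-process VmSwap values from /proc; a hedged sketch, not taken from the thread:

      # Print per-process swap usage in kB, largest consumers first.
      for pid in /proc/[0-9]*; do
          swap=$(awk '/^VmSwap/ {print $2}' "$pid/status" 2>/dev/null)
          [ -n "$swap" ] && [ "$swap" -gt 0 ] && echo "$swap kB $(cat "$pid/comm")"
      done | sort -rn | head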
  4. LVM devices still shown in pvesm list after deleting VM

    I'm currently on lvm2 2.02.168-pve6; apt-get dist-upgrade does not show any new lvm2 package. Guess this is Proxmox 6.x? I played with the global_filter; before, it was like #global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/vg_.*-brick_.*|", "r|/dev/mapper/vg_.*-tp_.*|"...
  5. LVM devices still shown in pvesm list after deleting VM

    Yes it also throws an error :( root@pm-01:~# lvchange -an vg-cluster01-storage01/vm-201-disk-1 Logical volume vg-cluster01-storage01/vm-201-disk-1 is used by another device. root@pm-01:~# lvchange -an vg-cluster01-storage01/vm-199-disk-1 Logical volume vg-cluster01-storage01/vm-199-disk-1...
  6. LVM devices still shown in pvesm list after deleting VM

    It's a 6-server setup; each one is connected to a SAN via FC (multipath). As backend we are using shared LVM. Working like a charm, except LVM in LVM seems to cause some trouble sometimes. The affected volume is 30T in size and hosts ~100 VM disks. I have to admit that I'm not on the latest...
  7. LVM devices still shown in pvesm list after deleting VM

    fuser and lsof do not show anything accessing those devices. Adding --force didn't help; same error, device busy. I'm unsure if a reboot of the system would help since it is shared LVM.
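    For context, the checks referred to there typically take this form (device names copied from later posts in the thread; a sketch, not the poster's exact commands):

      # Ask fuser which processes have the mapper device open.
      fuser -vam /dev/mapper/vg--cluster01--storage01-vm--199--disk--1

      # lsof can be pointed at the underlying dm node as well.
      lsof /dev/dm-107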
  8. LVM devices still shown in pvesm list after deleting VM

    Sadly that didn't help. root@pm-01:~# lvremove /dev/mapper/vg--cluster01--storage01-vm--199--disk--1 Logical volume vg-cluster01-storage01/vm-199-disk-1 is used by another device. root@pm-01:~# lvremove /dev/mapper/vg--cluster01--storage01-vm--201--disk--1 Logical volume...
  9. LVM devices still shown in pvesm list after deleting VM

    Hi, sorry for the late response. fuser -vam /dev/mapper/vg--cluster01--storage01-vm--201--disk--1 USER PID ACCESS COMMAND /dev/dm-108: fuser -vam /dev/mapper/vg--cluster01--storage01-vm--199--disk--1 USER PID ACCESS COMMAND /dev/dm-107...
  10. LVM devices still shown in pvesm list after deleting VM

    Hi, I've deleted two VMs (VMID 199 and 201) and the devices are still there. root@pm-01:~# pvesm list vg-cluster01-storage01|grep vm-201 vg-cluster01-storage01:vm-201-disk-1 raw 106300440576 201 root@pm-01:~# pvesm list vg-cluster01-storage01|grep vm-199 vg-cluster01-storage01:vm-199-disk-1...
  11. LVM: devices in /dev/mapper/ still shown but no corresponding VM with that ID

    lvscan does not show them; "dmsetup remove vg--cluster01--storage01-vm--171--disk--0", however, worked. Thanks for your help!
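    A hedged sketch of that cleanup, using the mapping name from the thread; the mapping is only removed after confirming that LVM no longer lists the volume and nothing holds it open:

      # LVM itself should no longer report the logical volume.
      lvscan | grep vm-171

      # The leftover mapping should show "Open count: 0".
      dmsetup info vg--cluster01--storage01-vm--171--disk--0

      # Drop the stale device-mapper entry; this does not touch LVM metadata.
      dmsetup remove vg--cluster01--storage01-vm--171--disk--0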
  12. LVM: devices in /dev/mapper/ still shown but no corresponding VM with that ID

    All my PVs used for virtual machines are on a SAN which is connected via Fibre Channel to the Proxmox cluster. First cluster node (pm-01): sdg 8:96 0 20T 0 disk └─pm-cluster01-storage01 253:5 0 20T...
  13. LVM: devices in /dev/mapper/ still shown but no corresponding VM with that ID

    root@pm-01:~# dmsetup info /dev/mapper/vg--cluster01--storage01-vm--171--disk--0 Name: vg--cluster01--storage01-vm--171--disk--0 State: ACTIVE Read Ahead: 256 Tables present: LIVE Open count: 0 Event number: 0 Major, minor: 253, 72 Number of...
  14. LVM: devices in /dev/mapper/ still shown but no corresponding VM with that ID

    root@pm-01:~# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert data pve twi-a-tz-- 75,87g 0,00 0,04 root...
  15. LVM: devices in /dev/mapper/ still shown but no corresponding VM with that ID

    Hi, TL;DR: no VM with ID 171 is present, but raw devices in /dev/mapper are still shown. ~$ ssh root@pm-01 qm config 171 Configuration file 'nodes/pm-01/qemu-server/171.conf' does not exist ~$ ssh root@pm-02 qm config 171 Configuration file 'nodes/pm-02/qemu-server/171.conf' does not exist ~$ ssh...
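    A small sketch of that check across the whole cluster, assuming hypothetical node names pm-01 through pm-06 (only some of them appear in the thread) and the VMID 171 from the post:

      # Ask every node whether a config file for VMID 171 still exists.
      for node in pm-01 pm-02 pm-03 pm-04 pm-05 pm-06; do
          echo "== $node =="
          ssh root@"$node" qm config 171
      done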
  16. LVM: "vgs" takes 5 minutes on one cluster node

    Sadly that didn't change the output of lvs/pvs/vgs. But I'll read up on filter syntax and try a few things. Thanks for pointing me in the right direction! Proxmox is an awesome project!
  17. LVM: "vgs" takes 5 minutes on one cluster node

    Thanks, this speeds things up. The line now looks like: global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/pm-cluster01*|", "r|/dev/vg-cluster01*|" ] I can confirm that it speeds up vgs, especially "r|/dev/vg-cluster01*|". But now I'm unable to see content in the...
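    For context, that line lives in the devices section of /etc/lvm/lvm.conf; a minimal sketch of the quoted setting, with the caveat that the last pattern also matches the shared VG's own device paths, which is presumably why its content became invisible to the host:

      devices {
          # Skip scanning zvols, the local pve mappings, the SAN multipath
          # mapping and the shared VG's logical volumes.
          global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/pm-cluster01*|", "r|/dev/vg-cluster01*|" ]
      }

      # The effective value can be verified with:
      lvmconfig devices/global_filter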
  18. LVM: "vgs" takes 5 minutes on one cluster node

    I could not migrate the VM with ID 179 to another host when it was shut down. 2019-06-26 16:44:50 starting migration of VM 179 to node 'pm-05' (192.168.52.87) 2019-06-26 16:44:50 copying disk images can't deactivate LV '/dev/vg-cluster01-s4h/vm-179-disk-1': Logical volume...
  19. LVM: "vgs" takes 5 minutes on one cluster node

    If I shut down one of the VMs, one [unknown] device disappears. It reappears when I start the VM. root@pm-05:~# pvs Couldn't find device with uuid ladH8H-TTYn-oFVY-WJ0V-ZwhU-bh5K-jHPKNa. PV VG Fmt Attr PSize PFree...
  20. LVM: "vgs" takes 5 minutes on one cluster node

    Both VMs are running and using the disks. I need to talk to our SAP guy to see whether I can recreate these VMs or whether he is still doing tests on them. From inside the VMs: 245 root@smtsrv: ~# vgs VG #PV #LV #SN Attr VSize VFree rhel 2 2 0 wz--n- 548,99g <14,00g 246 root@smtsrv: ~# pvs PV...