Why would pvesm status -storage local be so slow?

MACscr

I have an 8-node cluster with Ceph, NFS, and local storage (just for the OS). All the cluster members are online. Why would 'pvesm status -storage local' be so slow (it takes about 5 minutes to run) when all it should be doing is querying local storage on that node? No networking should be involved. I have zero virtual machines installed on this cluster so far, so it's not a load issue. Any suggestions? This is a new Proxmox 4 cluster that was set up in the last 24 hours.
 

Hi,
on my installation it's quite fast:
Code:
time pvesm status -storage local 
local    dir 1        30836604        28477020         2343200 92.90%

real    0m0.695s
user    0m0.392s
sys     0m0.052s
Any problems with other storages? How long does a df take?
Code:
time df -h
Any hints if you look with strace?
Code:
strace pvesm status -storage local
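A variant with per-syscall timing should make it easier to see exactly where the hang sits (these are standard strace flags, adjust to taste):
Code:
# -f follows forked children, -tt prints wall-clock timestamps, -T shows time spent in each syscall
strace -f -tt -T pvesm status -storage local 2>&1 | tail -n 50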
Udo
 
Code:
root@host1:~# time df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               10M     0   10M   0% /dev
tmpfs                             9.5G  9.0M  9.5G   1% /run
/dev/sda1                          15G  1.5G   13G  11% /
tmpfs                              24G   60M   24G   1% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              24G     0   24G   0% /sys/fs/cgroup
none                              512M  480K  512M   1% /var/log
cgmfs                             100K     0  100K   0% /run/cgmanager/fs
10.10.0.107:/backups/nfs/general  3.6T  1.3T  2.2T  37% /mnt/pve/nfs-general
/dev/fuse                          30M   48K   30M   1% /etc/pve

real    0m0.013s
user    0m0.000s
sys     0m0.000s
 
Code:
root@host1:~# time pvesm status -storage local
local    dir 1        15386000         1557020        13047412 11.16%

real    5m0.617s
user    0m0.620s
sys     0m0.152s
 
I think there was an issue with my Ceph storage. I removed it and then it worked fine. Now to figure out why that system wouldn't come back up properly. Gotta love the Ceph fault errors without great details. Another post for that, maybe.
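For anyone hitting the same symptom: a quick way to narrow it down (a rough sketch, assuming the default /etc/pve/storage.cfg layout and that Ceph is one of the configured storages) is to time each storage individually and then check Ceph's own health:
Code:
# time each configured storage individually to find the one that stalls
# (the awk pattern assumes the usual "type: id" lines in storage.cfg)
for s in $(awk '/^[a-z]/ {print $2}' /etc/pve/storage.cfg); do
    echo "== $s"; time pvesm status -storage "$s"
done

# if a Ceph-backed storage is the slow one, check the cluster itself
ceph -s
ceph health detail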
 
Same thing here with one of the compute nodes. On the other nodes it is working fine... How did things work out?
 
