VM shows 0.0% disk usage

Amadex

Hello, my VM shows 0.0% disk usage. It's a 50GB VM in production (sites, emails, etc.) and the QEMU Guest Agent is installed. What could be wrong? It's on a rented dedicated server.

 
Work-Around Fix for Ceph Storage Usage Reporting for VMs in a Proxmox Virtual Environment (PVE) Cluster.
I will submit the code in Bugzilla too (I will have to check the development guidelines first),
and I will be working on ZFS storage support soon, as it is another popular backend in PVE cluster setups.
I am sharing this in case anyone wants to refer to it, use it, or give feedback.

Introduction:

This document provides step-by-step instructions for implementing a
work-around fix for Ceph storage usage reporting in a Proxmox Virtual Environment (PVE) cluster.
Specifically, this fix addresses the issue of displaying boot disk usage for virtual machines (VMs),
including snapshot space, in a Ceph storage configuration.


Procedure:

1. Backup the Original Perl File:
Before making any changes, it is crucial to create a backup of the original Perl file for safety purposes.

Code:
cp /usr/share/perl5/PVE/QemuServer.pm /opt/QemuServer-original-`date +%Y-%m-%d-%s`.pm
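
You can confirm the backup was created:

Code:
ls -l /opt/QemuServer-original-*.pm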

2. Edit the Perl File:
Open the Perl file for editing. You can use any text editor; we will use vi as an example.

Code:
vi /usr/share/perl5/PVE/QemuServer.pm

Inside the editor, locate the line containing "{disk}" (approximately line 2944; the exact line varies between PVE versions).
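
If you cannot find it by line number, grep can locate the block first:

Code:
grep -n 'no info available' /usr/share/perl5/PVE/QemuServer.pm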

3. Modify the Perl Code:

You will see the following code block:
Code:
my $size = PVE::QemuServer::Drive::bootdisk_size($storecfg, $conf);
if (defined($size)) {
    $d->{disk} = 0; # no info available
    $d->{maxdisk} = $size;
} else {
    $d->{disk} = 0;
    $d->{maxdisk} = 0;
}

After the comment line "# no info available", add the following code to retrieve disk usage from the Ceph pool for VMs:


Code:
##### CODE TO FETCH VM DISK USAGE FROM CEPH POOL START #####
# Resolve the boot disk from the VM config: take the first entry in the
# boot order and split its storage definition into storage ID and volume name.
my @bootdiskorder = split('=', $conf->{boot});
my @bootdiskname = split(';', $bootdiskorder[1]);
my @bootdiskinfo = split(",", $conf->{$bootdiskname[0]});
my @bootdiskdetail = split(":", $bootdiskinfo[0]);
my $bootdiskstorage = $bootdiskdetail[0]; # storage ID
my $bootdiskimage = $bootdiskdetail[1];   # volume name, e.g. vm-100-disk-0

if (defined $storecfg->{ids}->{$bootdiskstorage}->{type}) {
    my $bootdisktype = $storecfg->{ids}->{$bootdiskstorage}->{type};
    my $bootdiskpool = $storecfg->{ids}->{$bootdiskstorage}->{pool};

    if ($bootdisktype eq "rbd") {
        # Query the actual usage (image plus snapshots) from Ceph.
        # decode_json() comes from the JSON module; add "use JSON;" at the
        # top of the file if it is not already loaded there.
        my $cephrbddiskinfocmd = "rbd disk-usage -p " . $bootdiskpool . " " . $bootdiskimage . " --format=json";
        my $cephrbddiskinfo = `$cephrbddiskinfocmd`;
        # Strip whitespace/control characters so the output is a single JSON line.
        $cephrbddiskinfo =~ s/[\n\r\t\0]//g;
        # Strip a stray leading alphanumeric character, if any (JSON starts with "{").
        $cephrbddiskinfo =~ s/^[a-zA-Z0-9,]//g;
        my $total_used_size = 0;

        if ($cephrbddiskinfo =~ /$bootdiskimage/) {
            my $cephrbddiskinfoarray = decode_json($cephrbddiskinfo);
            # Sum used_size over the image and its snapshots.
            foreach my $image (@{$cephrbddiskinfoarray->{'images'}}) {
                if (defined $image->{'used_size'}) {
                    $total_used_size += $image->{'used_size'};
                }
            }
            $d->{disk} = $total_used_size;
        }
    }
}
##### CODE TO FETCH VM DISK USAGE FROM CEPH POOL END #####
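
Before relying on the code, you can run the same command by hand to see what it will parse. The pool and image names here are only examples, and the output is illustrative of the shape the code expects:

Code:
rbd disk-usage -p rbd-pool vm-100-disk-0 --format=json
# e.g. {"images":[{"name":"vm-100-disk-0","provisioned_size":53687091200,"used_size":21474836480}], ...}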

4. Restart the pvestatd Service:

After making the necessary changes, restart the pvestatd service to apply the modifications.
Code:
systemctl restart pvestatd.service
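
You can confirm that the service came back up cleanly:

Code:
systemctl status pvestatd.service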

5. Check for Errors:

Monitor the system logs for any potential errors to ensure that the changes were applied without issues.

Code:
tail -f /var/log/syslog
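
On newer PVE releases /var/log/syslog may not exist by default (see the rsyslog note further down in this thread); the journal works just as well:

Code:
journalctl -u pvestatd.service -f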

6. Verify Disk Usage:

If everything is functioning correctly, you should now see the disk usage, including boot disk usage and the percentage used, for VMs while they are running.
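
You can also verify over the API; the node name and VMID below are placeholders:

Code:
pvesh get /nodes/<nodename>/qemu/<vmid>/status/current
# the "disk" field should now be non-zero for a running VM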

I hope it helps someone, and I look forward to feedback.
Thanks in advance.
-Deepen.
 
Hello everybody,

Thank you very much for the post and the solution.
But after carefully following it step by step, it didn't work for me.

I'm digging up this post!
Here are 3 screenshots:
- the code inserted in the "QemuServer.pm" file


- the table with all the VMs, where we can clearly see that disk usage is abnormally stuck at 0%;
we can see that, on the other hand, it still works correctly for the "LXC" containers:



- the result of "tail -f /var/log/syslog", which shows no log entries and therefore no visible error!


Thank you in advance for your help.

Sincerely,
See you!
 
I hope your storage uses Ceph RBD, as the code I wrote is currently designed exclusively for Ceph storage (I will publish ZFS storage compatibility soon). Additionally, syslog is not installed by default on PVE 8, so you will need to install the rsyslog package to get /var/log/syslog.
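
For reference, installing rsyslog is a standard Debian package install:

Code:
apt update && apt install rsyslog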
 
I hope your storage uses Ceph RBD, as the code I wrote is currently designed exclusively for Ceph storage (I will publish ZFS storage compatibility soon). Additionally, syslog is not installed by default on PVE 8, so you will need to install the rsyslog package to get /var/log/syslog.

Hi, have you been able to publish the compatibility code somewhere? I've searched for it but without success.
Thanks for your great work!
 
The complete code with Ceph RBD and ZFS support is below. Keep in mind that ZFS will show more than the actual allocated size, for example 100GB can show as 155GB if a snapshot is attached.

Code:
##### CODE TO FETCH VM DISK USAGE FROM CEPH + ZFS POOL START #####
# Resolve the boot disk from the VM config (first entry in the boot order).
my @bootdiskorder = split('=', $conf->{boot});
my @bootdiskname = split(';', $bootdiskorder[1]);
my @bootdiskinfo = split(",", $conf->{$bootdiskname[0]});
my @bootdiskdetail = split(":", $bootdiskinfo[0]);
my $bootdiskstorage = $bootdiskdetail[0]; # storage ID
my $bootdiskimage = $bootdiskdetail[1];   # volume name, e.g. vm-100-disk-0

if (defined $storecfg->{ids}->{$bootdiskstorage}->{type}) {
    my $bootdisktype = $storecfg->{ids}->{$bootdiskstorage}->{type};
    my $bootdiskpool = $storecfg->{ids}->{$bootdiskstorage}->{pool};

    if ($bootdisktype eq "zfspool") {
        # "zfs get used" includes snapshot space, so the value can exceed
        # the allocated disk size.
        my $zfsdiskinfocmd = "zfs get -H -p -o name,value used " . $bootdiskpool . "/" . $bootdiskimage;
        my $zfsdiskinfo = `$zfsdiskinfocmd`;
        $zfsdiskinfo =~ s/[\n\r]//g;
        my $total_used_size = 0;
        if ($zfsdiskinfo =~ /$bootdiskimage/) {
            # Output is "<name>\t<value>"; take the byte value.
            my @zfsdiskbytes = split("\t", $zfsdiskinfo);
            $total_used_size = $zfsdiskbytes[1];
        }
        $d->{disk} = $total_used_size;
    }

    if ($bootdisktype eq "rbd") {
        my $cephrbddiskinfocmd = "rbd disk-usage -p " . $bootdiskpool . " " . $bootdiskimage . " --format=json";
        my $cephrbddiskinfo = `$cephrbddiskinfocmd`;
        # Strip whitespace/control characters so the output is a single JSON line.
        $cephrbddiskinfo =~ s/[\n\r\t\0]//g;
        # Strip a stray leading alphanumeric character, if any (JSON starts with "{").
        $cephrbddiskinfo =~ s/^[a-zA-Z0-9,]//g;
        my $total_used_size = 0;
        if ($cephrbddiskinfo =~ /$bootdiskimage/) {
            my $cephrbddiskinfoarray = decode_json($cephrbddiskinfo);
            # Sum used_size over the image and its snapshots.
            foreach my $image (@{$cephrbddiskinfoarray->{'images'}}) {
                if (defined $image->{'used_size'}) {
                    $total_used_size += $image->{'used_size'};
                }
            }
            $d->{disk} = $total_used_size;
        }
    }
}
##### CODE TO FETCH VM DISK USAGE FROM CEPH + ZFS POOL END #####
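
To sanity-check the ZFS branch by hand, you can run the command the code shells out to; the pool and dataset names below are illustrative. The usedbysnapshots property also shows why the reported value can exceed the disk size:

Code:
zfs get -H -p -o name,value used rpool/data/vm-100-disk-0
zfs get -H used,usedbydataset,usedbysnapshots rpool/data/vm-100-disk-0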
 
So, looking through this and other threads, and at the bug report, it seems this issue has been going on for 5 years. For such basic hypervisor functionality, is there any way to get this looked at with more priority? Not to be obtuse, but we're paying full licensing for 7 servers, and this being outstanding with a weird workaround as the only fix is disappointing.
 
So, looking through this and other threads, and at the bug report, it seems this issue has been going on for 5 years. For such basic hypervisor functionality, is there any way to get this looked at with more priority? Not to be obtuse, but we're paying full licensing for 7 servers, and this being outstanding with a weird workaround as the only fix is disappointing.
Maybe the fact that it isn't implemented already should tip you off that it is not as easy as it seems. Using the disk usage from QEMU is just wrong, as pointed out here. It's the same with the memory usage.
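
For context, what the QEMU guest agent reports is filesystem usage from inside the guest, which is a different number from what the storage layer has actually allocated (think thin provisioning, snapshots, deleted-but-not-trimmed blocks). A quick way to see the guest-side view, assuming VMID 100 and a running agent:

Code:
qm guest cmd 100 get-fsinfo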
 
Maybe the fact that it isn't implemented already should tip you off that it is not as easy as it seems. Using the disk usage from QEMU is just wrong, as pointed out here. It's the same with the memory usage.
If there is a recommended workaround that customers are generally expected to perform, then I don't think "it's not as easy as it seems" holds much water. That could be automated behind the scenes.
 
If there is a recommended workaround that customers are generally expected to perform, then I don't think "it's not as easy as it seems" holds much water. That could be automated behind the scenes.
There is no workaround; that's the problem. With thick LVM, for example, it will always show 100% disk usage. I don't think that "information" will help anyone.
 
The complete code with Ceph RBD and ZFS support is below. Keep in mind that ZFS will show more than the actual allocated size, for example 100GB can show as 155GB if a snapshot is attached.

I am still learning and not yet smart enough to understand what is going on, but I was unable to get the storage info specifically for my RedHat 9 VMs; all the others seem to be fine. They all have the QEMU guest agent. Any way to troubleshoot this?

For example, for the line
Code:
my $zfsdiskinfocmd = "zfs get -H -p -o name,value used " . $bootdiskpool . "/" . $bootdiskimage;
all the VMs, including the RedHat ones, show a used-space value, but the GUI is defaulting to 0, maybe because of the earlier function:
Code:
my $size = PVE::QemuServer::Drive::bootdisk_size($storecfg, $conf);
if (defined($size)) {
    $d->{disk} = 0; # no info available
    $d->{maxdisk} = $size;
} else {
    $d->{disk} = 0;
    $d->{maxdisk} = 0;
}


Unless somehow the storage config check here
Code:
if (defined $storecfg->{ids}->{$bootdiskstorage}->{type}) {
is not being read correctly either. I am just playing with things for now, but I hope someone can offer some help.



EDIT:

THE ISSUE WAS THE BOOT ORDER.
The CD-ROM was the first boot item, which should normally be the case, right? But changing it to come after the disk, or disabling it, fixes this.
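
That matches how the snippet parses the config: it blindly takes the first entry in the boot order and treats it as the boot disk. A sketch of what happens, with illustrative device and storage names:

Code:
# boot: order=ide2;scsi0;net0          (CD-ROM first)
#   $bootdiskname[0] = "ide2"
#   $conf->{ide2}    = "local:iso/debian-12.iso,media=cdrom"  -> not a disk, lookup fails, 0% shown
# boot: order=scsi0;ide2;net0          (disk first)
#   $bootdiskname[0] = "scsi0"
#   $conf->{scsi0}   = "rbd-pool:vm-100-disk-0,size=50G"      -> storage and image resolve correctly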
 