"system" full?

kamzata

Code:
root@myhost:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               system
  PV Size               71.59 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              18328
  Free PE               0
  Allocated PE          18328
  PV UUID               qOaCR9-FAXK-pjKv-pr70-6Vas-btOr-6R5jqS


  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               system
  PV Size               840.33 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              215124
  Free PE               23124
  Allocated PE          192000
  PV UUID               y8Ekfn-2c6i-2mXK-DoV1-Cee3-s7ir-yDEp6U


root@myhost:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                PSW62Z-WMcK-nEMd-zxL4-1jNV-mRyS-htw9HP
  LV Write Access        read/write
  LV Creation host, time mail, 2014-09-30 05:58:46 +0200
  LV Status              available
  # open                 1
  LV Size                71.59 GiB
  Current LE             18328
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0


  --- Logical volume ---
  LV Path                /dev/system/pve-vz
  LV Name                pve-vz
  VG Name                system
  LV UUID                ItxDOY-e0CH-pwO1-BzHv-kVfc-mq1C-SwFzOv
  LV Write Access        read/write
  LV Creation host, time srvlive, 2014-09-30 13:24:00 +0200
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


  --- Logical volume ---
  LV Path                /dev/system/pve-backup
  LV Name                pve-backup
  VG Name                system
  LV UUID                rXo3vN-YMQa-zoLG-bPcz-BASp-DNb2-2CPDe2
  LV Write Access        read/write
  LV Creation host, time srvlive, 2014-09-30 13:25:04 +0200
  LV Status              available
  # open                 1
  LV Size                250.00 GiB
  Current LE             64000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

root@myhost:~# vgdisplay
  --- Volume group ---
  VG Name               system
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               911.92 GiB
  PE Size               4.00 MiB
  Total PE              233452
  Alloc PE / Size       210328 / 821.59 GiB
  Free  PE / Size       23124 / 90.33 GiB
  VG UUID               LD2b34-KMrf-uSWb-AWwr-hLZS-LDFc-vJ7TUa

Is sda5 system full? I've just done a brand new installation.
 
For completeness:

Code:
root@myhost:~# df -h
Filesystem                                  Size  Used Avail Use% Mounted on
udev                                         10M     0   10M   0% /dev
tmpfs                                       1.6G  336K  1.6G   1% /run
/dev/mapper/system-root                      71G  1.3G   66G   2% /
tmpfs                                       5.0M     0  5.0M   0% /run/lock
tmpfs                                       6.2G   28M  6.2G   1% /run/shm
/dev/sda1                                   3.8G   97M  3.5G   3% /boot
/dev/mapper/system-pve--vz                  493G   17G  476G   4% /var/lib/vz
/dev/mapper/system-pve--backup              247G   64G  183G  26% /backup
curlftpfs#ftp://xxx.net/  7.5T     0  7.5T   0% /mnt/xxxx
curlftpfs#ftp://xxx.net/  7.5T     0  7.5T   0% /mnt/xxxx
/dev/fuse                                    30M   24K   30M   1% /etc/pve
/var/lib/vz/private/100                     4.0G  619M  3.4G  16% /var/lib/vz/root/100
tmpfs                                       1.0G     0  1.0G   0% /var/lib/vz/root/100/lib/init/rw
tmpfs                                       1.0G     0  1.0G   0% /var/lib/vz/root/100/dev/shm
/var/lib/vz/private/101                     4.0G  768M  3.3G  19% /var/lib/vz/root/101
/var/lib/vz/private/102                     4.0G  1.8G  2.3G  45% /var/lib/vz/root/102
tmpfs                                       205M   44K  205M   1% /var/lib/vz/root/101/run
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/101/run/lock
tmpfs                                       615M     0  615M   0% /var/lib/vz/root/101/run/shm
tmpfs                                       410M   44K  410M   1% /var/lib/vz/root/102/run
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/102/run/lock
tmpfs                                       1.2G     0  1.2G   0% /var/lib/vz/root/102/run/shm
/var/lib/vz/private/107                     4.0G  908M  3.2G  23% /var/lib/vz/root/107
tmpfs                                       410M   44K  410M   1% /var/lib/vz/root/107/run
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/107/run/lock
/var/lib/vz/private/108                     4.0G  832M  3.2G  21% /var/lib/vz/root/108
tmpfs                                       1.2G     0  1.2G   0% /var/lib/vz/root/107/run/shm
tmpfs                                       205M   44K  205M   1% /var/lib/vz/root/108/run
/var/lib/vz/private/208                     4.0G  856M  3.2G  21% /var/lib/vz/root/208
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/108/run/lock
tmpfs                                       615M     0  615M   0% /var/lib/vz/root/108/run/shm
tmpfs                                       205M   44K  205M   1% /var/lib/vz/root/208/run
/var/lib/vz/private/500                     4.0G  912M  3.2G  23% /var/lib/vz/root/500
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/208/run/lock
tmpfs                                       615M     0  615M   0% /var/lib/vz/root/208/run/shm
none                                        2.0G  4.0K  2.0G   1% /var/lib/vz/root/500/dev
/var/lib/vz/private/501                     4.0G  1.1G  3.0G  26% /var/lib/vz/root/501
none                                        410M  1.1M  409M   1% /var/lib/vz/root/500/run
none                                        5.0M     0  5.0M   0% /var/lib/vz/root/500/run/lock
none                                        2.0G     0  2.0G   0% /var/lib/vz/root/500/run/shm
none                                        100M     0  100M   0% /var/lib/vz/root/500/run/user
tmpfs                                       205M   44K  205M   1% /var/lib/vz/root/501/run
tmpfs                                       5.0M     0  5.0M   0% /var/lib/vz/root/501/run/lock
tmpfs                                       615M     0  615M   0% /var/lib/vz/root/501/run/shm
/var/lib/vz/private/504                     4.0G  732M  3.3G  18% /var/lib/vz/root/504
none                                        1.0G  4.0K  1.0G   1% /var/lib/vz/root/504/dev
none                                        205M 1020K  204M   1% /var/lib/vz/root/504/run
none                                        5.0M     0  5.0M   0% /var/lib/vz/root/504/run/lock
none                                        1.0G     0  1.0G   0% /var/lib/vz/root/504/run/shm
/var/lib/vz/private/502                      50G  7.9G   43G  16% /var/lib/vz/root/502
none                                        3.0G  4.0K  3.0G   1% /var/lib/vz/root/502/dev
none                                        615M  1.1M  614M   1% /var/lib/vz/root/502/run
none                                        5.0M     0  5.0M   0% /var/lib/vz/root/502/run/lock
none                                        3.0G     0  3.0G   0% /var/lib/vz/root/502/run/shm
 
My guess would be that pvdisplay reports that all of the PV space on /dev/sda5 is allocated to your root logical volume, so LVM-wise you cannot allocate any more space from it. The actual filesystem usage on root is a separate question: that involves the filesystem on the logical volume, not the physical volume underneath it.
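
To see the difference at a glance you can compare what LVM has allocated with what the filesystem actually uses. A quick sketch (the output columns vary a bit between LVM versions):

Code:
# Free extents per physical volume and per volume group (LVM allocation)
pvs -o pv_name,vg_name,pv_size,pv_free
vgs -o vg_name,vg_size,vg_free

# Size of each logical volume vs. actual filesystem usage on root
lvs -o lv_name,vg_name,lv_size
df -h /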
 

That's my guess too. But a few days ago I had space problems with the same configuration: after around 2 months I wasn't able to write a single byte more and I had to do this brand-new installation. Checking space with df -h I saw that there was enough free space, and even after deleting some big files I still wasn't able to write. So... I don't want the same problem to happen again.
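
One thing worth checking if this happens again (an assumption on my side, it is not visible in the output above): an ext3/ext4 filesystem can run out of inodes long before it runs out of blocks, which gives exactly this symptom of df showing free space while nothing can be written, especially with huge numbers of tiny files. Quick check:

Code:
# Inode usage per filesystem; 100% IUse% means no new files can be created
df -i

# Block usage for comparison
df -h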
 
I think I killed two birds with one stone. I found a very annoying problem: software inside my CT102 is creating 2-3 small session files (about 1 KB each) every second, and these are never erased by PHP garbage collection because it never starts.

So I need to set this in php.ini:

Code:
session.gc_probability = 1

and check these values:

Code:
session.gc_maxlifetime = 1440
session.gc_divisor = 1000
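
For context (this is standard PHP behaviour, nothing Proxmox-specific): on each session start, PHP runs garbage collection with probability gc_probability / gc_divisor, and when it runs it removes session files idle for longer than gc_maxlifetime seconds. So the settings above mean:

Code:
; GC fires on roughly 1 out of every 1000 requests
; (gc_probability / gc_divisor = 1 / 1000 = 0.1%)
session.gc_probability = 1
session.gc_divisor     = 1000

; sessions idle for more than 1440 seconds (24 minutes) become eligible for removal
session.gc_maxlifetime = 1440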

and now I think the backups and the other disk space problems are OK.

Usually, other possible problems come from the webserver logs of Apache (or nginx, lighttpd, or whatever you are using). Apache, for example, keeps 52 rotated log files by default (i.e. one year of weekly logs). You can check the size of the Apache logs in /var/log/apache2/ and lower the rotate value, e.g. to 10, by editing /etc/logrotate.d/apache2:

Code:
/var/log/apache2/*.log {
        weekly
        missingok
        rotate 10
        compress
        delaycompress
        .........
}
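
If you change that, you can check the current size of the logs and force an immediate rotation instead of waiting for the weekly run (paths are the Debian defaults):

Code:
# Total size of the Apache logs
du -sh /var/log/apache2/

# Apply the logrotate policy right away, forcing a rotation
logrotate -f /etc/logrotate.d/apache2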


Just for info:

I ran this command in both CTs:
Code:
find . -type f | wc -l
to count how many files each one contains. This is the result:

CT102: 226469
CT107: 48576

So, a huge difference: around 180 MB more in 1 KB files (after just a few hours!) to handle and back up.
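
If you need to track down where those files pile up, counting files per directory helps. A small sketch (the session path is an assumption, on older Debian it is typically /var/lib/php5, so adjust it to your container):

Code:
# Count files in the PHP session directory (path is an assumption, adjust it)
find /var/lib/php5 -type f | wc -l

# List the 20 directories with the most files under the current path
find . -xdev -type f | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head -20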

To check disk space I also used this useful tool: ncdu (Ubuntu/Debian: apt-get install ncdu).
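
A minimal usage sketch; the -x flag keeps ncdu on a single filesystem, which is handy with all the container bind mounts listed above:

Code:
# Interactive per-directory disk usage, staying on the /var/lib/vz filesystem
ncdu -x /var/lib/vz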
 