No storage is shown / usable after 3 to 5 minutes

nade

Hi, I have a problem on a fresh Proxmox installation.
I used the complete Proxmox image, version 4.3-10.
I used the automatic ZFS install of Proxmox, so ZFS is used on all disks.

After boot, Proxmox works fine for about 3 to 5 minutes.
After that I can't choose any storage.

It doesn't depend on the type of storage: no storage is available when adding a hard disk, and none is available for backup or CD either.
But if I click a storage in the tree on the left, I can still see its content etc.

I can't find any log messages related to the problem.

Does anyone have an idea?

syslog:
Code:
Nov 25 20:55:50 main pvestatd[4558]: status update time (40.050 seconds)
Nov 25 20:55:58 main pvedaemon[4588]: <root@pam> successful auth for user 'nade@pam'
Nov 25 20:56:30 main pvestatd[4558]: status update time (40.049 seconds)
Nov 25 20:56:36 main pveproxy[6788]: proxy detected vanished client connection
Nov 25 20:56:46 main pvedaemon[4588]: mkdir /mnt/backup: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 59.
Nov 25 20:57:10 main pvestatd[4558]: status update time (40.047 seconds)
 

Attachments

  • pve.jpg (185.4 KB)
  • pve2.jpg (40.6 KB)
Hey fireon, the problem is persistent.
It appears 3 to 5 minutes after a reboot, and from then on it stays until I reboot again.

pveperf:

Code:
CPU BOGOMIPS: 76803.12
REGEX/SECOND: 1208377
HD SIZE: 204.41 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 359.51
DNS EXT: 32.63 ms
DNS INT: 48.62 ms (nmtp.de)

regards
 
FSYNCS/SECOND: 359.51
Looks like the server has poor performance. FSYNCS/SECOND should be at least 2000+. Can you tell me what hardware you use? HDDs/SSDs, CPU, memory, cache, SAS/SATA or RAID controller...
Example output from my ZFS machine:
Code:
pveperf /v-machines/home  
CPU BOGOMIPS:  40002.32 
REGEX/SECOND:  2860523 
HD SIZE:  5131.42 GB (v-machines/home) 
FSYNCS/SECOND:  5230.19
 
Intel Core i7-3930
SW RAID1: 2x Samsung MZ7LM240 SSD
SW RAID1: 2x 3 TB HDD (Toshiba and Seagate)
8x 8192 MB DDR3 RAM

The system is installed on the SSD RAID.
So I don't think performance should be the problem.
All disks passed the SMART check.
 
Can you also post "zpool status", please?

But one thing: you have too little RAM. ZFS needs much more RAM. From my experience, you need at least 32 GB for normal performance. It will work with 8 GB too, but I think very poorly. With 4 GB, for example, the server crashes after some time even without VMs. https://pve.proxmox.com/wiki/ZFS_on_Linux#_hardware
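If you want to see how much RAM the ZFS ARC is actually using on this box, a quick check (just a sketch; the arcstats file comes with ZFS on Linux):
Code:
# print the current ARC size and its configured maximum, in bytes
awk '$1=="size" || $1=="c_max" {print $1": "$3}' /proc/spl/kstat/zfs/arcstats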

But I think there could be another problem as well.
 
You read that wrong, it's 8x 8192 MB DDR3, so 64 GB of RAM in total.

Code:
zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors

  pool: storage
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
 
You read that wrong, it's 8x 8192 MB DDR3, so 64 GB of RAM in total.
:D Yeah sorry, you are right!
Status looks fine. Hmm... What happens when you copy a file from one RAID to the other, say an ISO of about 8 GB? How fast does the copy go? The SSD is an enterprise model, so it should work fine.
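For example, something like this (just a sketch, the ISO name is made up):
Code:
# time a big copy from the SSD pool (rpool) to the HDD pool (storage)
time cp /var/lib/vz/template/iso/some-big.iso /storage/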

Please post
Code:
pvesm status
 
Hey, yes, the SSD is an enterprise model, so I don't think it's the problem.

I can't use the built-in transfer feature because, as I said, the storage no longer shows up about 5 minutes after boot.

Also, I don't think speed is the problem: even if the speed were really slow, the storage should still be visible in PVE.

Code:
pvesm status
mkdir /mnt/backup: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 96.
Backup          dir 0               0               0               0 100.00%
images          dir 1      2741479808         2904704      2738575104 0.61%
local           dir 1       214339584         1241984       213097600 1.08%
local-zfs     zfspool 1       215106664         2009012       213097652 1.43%
local_backup    dir 1      2741479808         2904704      2738575104 0.61%
storage       zfspool 1      2828009472        89434360      2738575112 3.66%
 
you have some kind of storage misconfiguration - could you post the output of "zfs list" and "mount" and the content of "/etc/pve/storage.cfg"?
 
you have some kind of storage misconfiguration - could you post the output of "zfs list" and "mount" and the content of "/etc/pve/storage.cfg"?

Hey, here is the content you wanted to see.
Thank you for your help.

Regards

Code:
zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     11.8G   203G    96K  /rpool
rpool/ROOT                1.41G   203G    96K  /rpool/ROOT
rpool/ROOT/pve-1          1.41G   203G  1.18G  /
rpool/data                1.92G   203G    96K  /rpool/data
rpool/data/vm-100-disk-1  1.29G   203G  1.29G  -
rpool/data/vm-300-disk-1   642M   203G   642M  -
rpool/data/vm-300-disk-2   424K   203G   424K  -
rpool/swap                8.50G   212G    64K  -
storage                   85.3G  2.55T  2.77G  /storage
storage/vm-100-disk-1     51.6G  2.60T  14.2M  -
storage/vm-300-disk-1     30.9G  2.58T  3.61M  -

Code:
mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=8169199,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=13076564k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=21,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
storage on /storage type zfs (rw,relatime,xattr,noacl)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
//censored.de/backup on /mnt/backup type cifs (rw,relatime,vers=1.0,cache=strict,username=censored,domain=PUBLICBACKUP80,uid=0,forceuid,gid=0,forcegid,addr=2a01:04f8:0b21:4000:0000:0000:0000:0007,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,actimeo=1)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse

zfspool: storage
        pool storage
        content images,rootdir

dir: local_backup
        path /storage/local_backup
        maxfiles 100
        content backup

dir: images
        path /storage/images
        maxfiles 1
        content iso,images,vztmpl

dir: Backup
        path /mnt/backup/censored.nmtp.de_backup/
        maxfiles 30
        content iso,images,vztmpl,rootdir,backup
 
you should probably do the following (see the sketch after this list):
  • create datasets on ZFS for storage/local_backup and storage/images and set the is_mountpoint flag on the associated dir storages, so that the dir storages are not activated unless the ZFS datasets are mounted
  • maybe move the "storage" zfspool storage to a dataset as well instead of using the pool directly, but this is just for easier differentiation and not required
  • set mkdir to "no" or 0 on the "Backup" storage - because you don't want to create the directories if they don't exist (e.g., because the underlying Samba server was not reachable/mountable)
  • clean up any potential mess that was already caused by this - it's possible that you have content in those directory storages that was later hidden by the ZFS/CIFS mounts
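a rough sketch of what that could look like, using the names from your storage.cfg (please double-check the is_mountpoint and mkdir options against "man pvesm" for your pve-storage version before running anything):
Code:
# create real ZFS datasets behind the two directory storages
# (they inherit the /storage mountpoint, so they end up at /storage/local_backup and /storage/images;
#  move any existing files out of those directories first, see the last point above)
zfs create storage/local_backup
zfs create storage/images

# only activate the dir storages when their path is actually a mountpoint
pvesm set local_backup --is_mountpoint 1
pvesm set images --is_mountpoint 1

# don't auto-create the path of the CIFS-backed "Backup" storage
pvesm set Backup --mkdir 0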
 
Hey guys, actually I just unmounted the CIFS storage, and until now it works fine (for about 40 minutes now).

Maybe only the CIFS storage had a problem and messed up the whole system?

Regards
 
if you unmount the samba share, you should also disable the directory storage that you configured on top - otherwise PVE will write to your local disk.
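something like this should do it (a sketch, using the storage name from your storage.cfg):
Code:
pvesm set Backup --disable 1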
 
So, I've tested it for about 26 hours now, and still no problem.
But if I mount the NFS / Samba share again, I get almost the same problem instantly.

Maybe you have a solution for this?

I mounted the Samba / NFS share like this:
(fstab)
Code:
//censored.your-storagebox.de/backup /mnt/backup cifs username=censored,password=censored 0 0

On my other PVE host without ZFS it works fine without any problem.
Do you have any idea?

regards
 
I would guess that your network storage is unstable - both CIFS and NFS don't cope well with losing the connection to the server.
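if you need the share anyway, one thing you could try (just a sketch, whether it helps depends on your setup) is to let systemd mount it on demand instead of hard-mounting it at boot, so a vanished server doesn't leave a dead mount hanging around:
Code:
# /etc/fstab - hypothetical variant of your entry
//censored.your-storagebox.de/backup /mnt/backup cifs username=censored,password=censored,_netdev,soft,x-systemd.automount,x-systemd.idle-timeout=60 0 0
note that "soft" makes CIFS operations return errors instead of blocking forever when the server disappears, so a backup to that storage can fail instead of hanging.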
 
