LXC Container can't start

Discussion in 'Proxmox VE: Installation and configuration' started by TheMineGeek, Apr 15, 2019.

  1. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    Hi,

    After upgrading from 5.1 to 5.4, most of my containers can't start anymore.

    The first thing I tried:
    Code:
    root@pve:/var/lib/lxc/103# lxc-start -lDEBUG -o lxc-start.log -F -n 103
    lxc-start: 103: conf.c: run_buffer: 335 Script exited with status 2
    lxc-start: 103: start.c: lxc_init: 861 Failed to run lxc.hook.pre-start for container "103"
    lxc-start: 103: start.c: __lxc_start: 1944 Failed to initialize container "103"
    lxc-start: 103: tools/lxc_start.c: main: 330 The container failed to start
    lxc-start: 103: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
    
    And the log file content:
    Code:
    lxc-start 103 20190415193053.195 INFO     lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "init_module errno 1"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for init_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for init_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "finit_module errno 1"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for finit_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "delete_module errno 1"
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for delete_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
    lxc-start 103 20190415193053.195 INFO     seccomp - seccomp.c:parse_config_v2:970 - Merging compat seccomp contexts into main context
    lxc-start 103 20190415193053.195 INFO     conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
    lxc-start 103 20190415193053.697 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start with output: unable to detect OS distribution
    
    lxc-start 103 20190415193053.705 ERROR    conf - conf.c:run_buffer:335 - Script exited with status 2
    lxc-start 103 20190415193053.705 ERROR    start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "103"
    lxc-start 103 20190415193053.705 ERROR    start - start.c:__lxc_start:1944 - Failed to initialize container "103"
    lxc-start 103 20190415193053.705 ERROR    lxc_start - tools/lxc_start.c:main:330 - The container failed to start
    lxc-start 103 20190415193053.705 ERROR    lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options
    
    If my understanding is correct, the container can't start because the OS can't be detected (/etc/debian_version?).

    So I tried to mount the container:
    Code:
    root@pve:/var/lib/lxc/103# pct mount 103
    mounted CT 103 in '/var/lib/lxc/103/rootfs'
    root@pve:/var/lib/lxc/103# ls rootfs/
    dev
    
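    For reference, a hedged check - assuming the pre-start hook detects the distribution from files like /etc/os-release or /etc/debian_version inside the mounted rootfs - those files are obviously missing here, since the rootfs only contains dev:
    Code:
    # with CT 103 mounted via 'pct mount 103', check what the hook would read
    ls /var/lib/lxc/103/rootfs/etc/
    cat /var/lib/lxc/103/rootfs/etc/os-release
    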
    This might be useful too:
    Code:
    root@pve:/home/theophile# pveversion -v
    proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
    pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
    pve-kernel-4.15: 5.3-3
    pve-kernel-4.15.18-12-pve: 4.15.18-35
    pve-kernel-4.13.13-4-pve: 4.13.13-35
    pve-kernel-4.13.13-3-pve: 4.13.13-34
    pve-kernel-4.4.98-3-pve: 4.4.98-102
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-8
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-50
    libpve-guest-common-perl: 2.0-20
    libpve-http-server-perl: 2.0-13
    libpve-storage-perl: 5.0-41
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.1.0-3
    lxcfs: 3.0.3-pve1
    novnc-pve: 1.0.0-3
    proxmox-widget-toolkit: 1.0-25
    pve-cluster: 5.0-36
    pve-container: 2.0-37
    pve-docs: 5.4-2
    pve-edk2-firmware: 1.20190312-1
    pve-firewall: 3.0-19
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-9
    pve-i18n: 1.1-4
    pve-libspice-server1: 0.14.1-2
    pve-qemu-kvm: 2.12.1-3
    pve-xtermjs: 3.12.0-1
    qemu-server: 5.0-50
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.13-pve1~bpo2
    
    So my rootfs seems to be empty. That looks very bad to me, but I'm a beginner with little knowledge, so I'm looking for your help.

    Hope you can save me.
    Thank you
     
    #1 TheMineGeek, Apr 15, 2019
    Last edited: Apr 15, 2019
  2. oguz

    oguz Proxmox Staff Member
    Staff Member

    Joined:
    Nov 19, 2018
    Messages:
    638
    Likes Received:
    67
    Can we see your container configuration?
    Code:
    pct config CTID
    
     
  3. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    Code:
    root@pve:~# pct config 103
    arch: amd64
    cpulimit: 2
    cpuunits: 1024
    hostname: Webmin
    lock: mounted
    memory: 2048
    net0: bridge=vmbr0,gw=192.168.0.1,hwaddr=36:35:38:37:34:30,ip=192.168.0.103/24,name=eth0,type=veth
    onboot: 1
    ostype: debian
    rootfs: ssd:subvol-103-disk-2,size=8G
    swap: 512
    
    I said container 103 was not working, but about 80% of my containers have the exact same problem following the upgrade.
     
  4. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    * The output would suggest that the container is on ZFS storage?
    * If so - does the directory '/$POOLNAME/subvol-103-disk-2' exist? ($POOLNAME needs to be replaced by the ZFS pool name of the 'ssd' storage - see '/etc/pve/storage.cfg'.) If it does, what's inside that directory? (`cat /$POOLNAME/subvol-103-disk-2/etc/os-release`) See the example below.
    * If not - what's the output of `pvesm list ssd`?
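    For example, if the 'ssd' storage simply maps to a pool named 'ssd' (an assumption - please verify against your storage.cfg), the check would look roughly like this:
    Code:
    # confirm which pool the 'ssd' storage uses (assumption: the pool is also named 'ssd')
    grep -A 2 'zfspool: ssd' /etc/pve/storage.cfg
    # then look inside the subvolume the container is configured to use
    ls /ssd/subvol-103-disk-2
    cat /ssd/subvol-103-disk-2/etc/os-release
    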
     
  5. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    Yes, the container is on a ZFS pool. The directory exists but only contains a dev directory:
    Code:
    root@pve:/ssd/subvol-103-disk-2# ls
    dev
    
    Output of pvesm list ssd:
    Code:
    root@pve:/ssd/subvol-103-disk-2# pvesm list ssd
    ssd:subvol-100-disk-1 subvol 8589934592 100
    ssd:subvol-102-disk-2 subvol 15032385536 102
    ssd:subvol-102-disk-3 subvol 8697136975709 102
    ssd:subvol-103-disk-2 subvol 8589934592 103
    ssd:subvol-104-disk-1 subvol 8589934592 104
    ssd:subvol-105-disk-2 subvol 21474836480 105
    ssd:subvol-107-disk-1 subvol 8589934592 107
    ssd:subvol-108-disk-1 subvol 34359738368 108
    ssd:subvol-110-disk-1 subvol 8589934592 110
    ssd:subvol-111-disk-1 subvol 8589934592 111
    ssd:subvol-112-disk-1 subvol 8589934592 112
    ssd:subvol-113-disk-2 subvol 8589934592 113
    ssd:subvol-113-disk-3 subvol 107374182400 113
    ssd:subvol-114-disk-1 subvol 6442450944 114
    ssd:subvol-115-disk-1 subvol 34359738368 115
    ssd:subvol-116-disk-2 subvol 21474836480 116
    ssd:subvol-119-disk-2 subvol 8589934592 119
    ssd:subvol-119-disk-3 subvol 1073741824000 119
    ssd:subvol-120-disk-1 subvol 8589934592 120
    ssd:subvol-121-disk-1 subvol 8589934592 121
    ssd:subvol-122-disk-2 subvol 9663676416 122
    ssd:subvol-122-disk-3 subvol 4299090464605 122
    ssd:subvol-125-disk-1 subvol 8589934592 125
    ssd:subvol-128-disk-1 subvol 8589934592 128
    ssd:subvol-129-disk-1 subvol 34359738368 129
    ssd:subvol-131-disk-1 subvol 8589934592 131
    ssd:subvol-132-disk-1 subvol 8589934592 132
    ssd:subvol-133-disk-1 subvol 8589934592 133
    ssd:vm-106-disk-1       raw 10737418240 106
    
     
  6. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    * It seems the container's data is gone?
    * Unless you moved or migrated it, this seems odd.
    Please provide the output of:
    * `pvesm status`
    * `cat /etc/pve/storage.cfg`
    * `zpool status`
    * `zpool list`
    * `zfs list`
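    If it's easier, you can capture all of these in one go, e.g. with a simple loop (just a rough sketch):
    Code:
    # run each requested command and label its output
    for c in "pvesm status" "cat /etc/pve/storage.cfg" "zpool status" "zpool list" "zfs list"; do
        echo "===== $c"
        $c
    done
    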
     
  7. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    All I did was update my Proxmox version from 5.1 to 5.4. That's why it seems very odd to me.

    Code:
    root@pve:~# pvesm status
    Name             Type     Status           Total            Used       Available        %
    backup            dir     active        14834704         8634300         5423804   58.20%
    downloads     zfspool     active      9111757926       427305077      8684452849    4.69%
    gitlab        zfspool     active      8684902836          449986      8684452849    0.01%
    iso               dir     active        14834704         8634300         5423804   58.20%
    local             dir     active        14834704         8634300         5423804   58.20%
    local-lvm     lvmthin   disabled               0               0               0      N/A
    nextcloud     zfspool     active      9360735655       676282805      8684452849    7.22%
    other         zfspool     active      8743490307        59037457      8684452849    0.68%
    plex          zfspool     active     20450661753     11766208904      8684452849   57.53%
    ssd           zfspool     active       112754688        73943740        38810948   65.58%
    storage       zfspool     active     21964308480     13279855630      8684452849   60.46%
    

    Code:
    root@pve:~# cat /etc/pve/storage.cfg
    dir: local                         
            path /var/lib/vz           
            content vztmpl,iso,backup   
                                       
    lvmthin: local-lvm                 
            disable                     
            thinpool data               
            vgname pve                 
            content rootdir,images     
                                       
    zfspool: ssd                       
            pool ssd                   
            content rootdir,images     
                                       
    zfspool: storage                   
            pool storage               
            content rootdir,images     
                                       
    zfspool: plex                       
            pool storage/plex           
            content images,rootdir     
                                       
    zfspool: downloads                 
            pool storage/downloads     
            content rootdir,images     
                                       
    zfspool: gitlab                     
            pool storage/gitlab         
            content rootdir             
                                       
    zfspool: nextcloud                 
            pool storage/nextcloud     
            content images,rootdir     
                                       
    zfspool: other                     
            pool storage/other         
            content rootdir,images     
                                       
    dir: backup                         
            path /storage/backup       
            content backup             
            maxfiles 3                 
                                       
    dir: iso                           
            path /storage/iso           
            content images,vztmpl,iso   
            shared 0                   
    

    Code:
    root@pve:~# zpool status                                                           
      pool: ssd                                                                       
     state: ONLINE                                                                     
      scan: scrub repaired 0B in 0h5m with 0 errors on Sun Apr 14 00:29:35 2019       
    config:                                                                           
                                                                                       
            NAME                                             STATE     READ WRITE CKSUM
            ssd                                              ONLINE       0     0     0
              mirror-0                                       ONLINE       0     0     0
                ata-KINGSTON_SUV400S37120G_50026B76660105B7  ONLINE       0     0     0
                ata-KINGSTON_SUV400S37120G_50026B7666010533  ONLINE       0     0     0
                                                                                       
    errors: No known data errors                                                       
                                                                                       
      pool: storage                                                                   
     state: ONLINE                                                                     
    status: Some supported features are not enabled on the pool. The pool can         
            still be used, but some features are unavailable.                         
    action: Enable all features using 'zpool upgrade'. Once this is done,             
            the pool may no longer be accessible by software that does not support     
            the features. See zpool-features(5) for details.                           
      scan: resilvered 1.06G in 0h4m with 0 errors on Mon Apr 15 20:02:25 2019         
    config:                                                                           
                                                                                       
            NAME                                          STATE     READ WRITE CKSUM   
            storage                                       ONLINE       0     0     0   
              raidz1-0                                    ONLINE       0     0     0   
                ata-WDC_WD60EFRX-68MYMN1_WD-WX41D25L1VKJ  ONLINE       0     0     0   
                ata-WDC_WD60EFRX-68MYMN1_WD-WX41D25L1XFP  ONLINE       0     0     0   
                ata-WDC_WD60EFRX-68MYMN1_WD-WX61DA4HA3K2  ONLINE       0     0     0   
                ata-WDC_WD60EFRX-68MYMN1_WD-WX41D25L1LRR  ONLINE       0     0     0   
              raidz1-1                                    ONLINE       0     0     0   
                ata-WDC_WD20EARX-00PASB0_WD-WMAZA7006528  ONLINE       0     0     0   
                ata-WDC_WD20EZRX-00D8PB0_WD-WMC4M0406108  ONLINE       0     0     0   
                ata-WDC_WD20EARX-32PASB0_WD-WCAZAH771492  ONLINE       0     0     0   
                ata-WDC_WD20EARS-00MVWB0_WD-WCAZA5667846  ONLINE       0     0     0   
                                                                                       
    errors: No known data errors                                                     
    

    Code:
    root@pve:~# zpool list
    NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    ssd       111G  60.6G  50.4G         -    56%    54%  1.00x  ONLINE  -
    storage  29.1T  17.0T  12.0T         -    26%    58%  1.00x  ONLINE  -
    

    Code:
    root@pve:~# zfs list                                                                         
    NAME                                  USED  AVAIL  REFER  MOUNTPOINT                         
    ssd                                  70.5G  37.0G  29.5M  /ssd                               
    ssd/subvol-100-disk-1                 593M  7.42G   593M  /ssd/subvol-100-disk-1             
    ssd/subvol-102-disk-2                5.98G  8.02G  5.98G  /ssd/subvol-102-disk-2             
    ssd/subvol-102-disk-3                  96K  37.0G    96K  /ssd/subvol-102-disk-3             
    ssd/subvol-103-disk-2                1.23G  6.77G  1.23G  /ssd/subvol-103-disk-2             
    ssd/subvol-104-disk-1                1.19G  6.81G  1.19G  /ssd/subvol-104-disk-1             
    ssd/subvol-105-disk-2                7.79G  12.2G  7.79G  /ssd/subvol-105-disk-2             
    ssd/subvol-107-disk-1                1.03G  6.97G  1.03G  /ssd/subvol-107-disk-1             
    ssd/subvol-108-disk-1                 727M  31.3G   727M  /ssd/subvol-108-disk-1             
    ssd/subvol-110-disk-1                1.97G  6.03G  1.97G  /ssd/subvol-110-disk-1             
    ssd/subvol-111-disk-1                1.33G  6.67G  1.33G  /ssd/subvol-111-disk-1             
    ssd/subvol-112-disk-1                1.24G  6.76G  1.24G  /ssd/subvol-112-disk-1             
    ssd/subvol-113-disk-2                2.47G  5.53G  2.47G  /ssd/subvol-113-disk-2             
    ssd/subvol-113-disk-3                  96K  37.0G    96K  /ssd/subvol-113-disk-3             
    ssd/subvol-114-disk-1                1.57G  4.43G  1.57G  /ssd/subvol-114-disk-1             
    ssd/subvol-115-disk-1                2.28G  29.7G  2.28G  /ssd/subvol-115-disk-1             
    ssd/subvol-116-disk-2                13.2G  6.78G  13.2G  /ssd/subvol-116-disk-2             
    ssd/subvol-119-disk-2                2.07G  5.93G  2.07G  /ssd/subvol-119-disk-2             
    ssd/subvol-119-disk-3                  96K  37.0G    96K  /ssd/subvol-119-disk-3             
    ssd/subvol-120-disk-1                2.15G  5.85G  2.15G  /ssd/subvol-120-disk-1             
    ssd/subvol-121-disk-1                1.12G  6.88G  1.12G  /ssd/subvol-121-disk-1             
    ssd/subvol-122-disk-2                2.48G  6.52G  2.48G  /ssd/subvol-122-disk-2             
    ssd/subvol-122-disk-3                 498M  37.0G   498M  /ssd/subvol-122-disk-3             
    ssd/subvol-125-disk-1                1.09G  6.91G  1.09G  /ssd/subvol-125-disk-1             
    ssd/subvol-128-disk-1                1.32G  6.68G  1.32G  /ssd/subvol-128-disk-1             
    ssd/subvol-129-disk-1                 910M  31.1G   910M  /ssd/subvol-129-disk-1             
    ssd/subvol-131-disk-1                1.28G  6.72G  1.28G  /ssd/subvol-131-disk-1             
    ssd/subvol-132-disk-1                1.30G  6.70G  1.30G  /ssd/subvol-132-disk-1             
    ssd/subvol-133-disk-1                1.26G  6.74G  1.26G  /ssd/subvol-133-disk-1             
    ssd/vm-106-disk-1                    12.4G  46.9G  2.12G  -                                   
    storage                              12.4T  8.09T   140K  /storage                           
    storage/backup                       99.3G  8.09T  99.3G  /storage/backup                     
    storage/downloads                     408G  8.09T   140K  /storage/downloads                 
    storage/downloads/subvol-112-disk-1   408G   592G   408G  /storage/downloads/subvol-112-disk-1
    storage/gitlab                        439M  8.09T   140K  /storage/gitlab                     
    storage/gitlab/subvol-113-disk-1      439M  99.6G   439M  /storage/gitlab/subvol-113-disk-1   
    storage/iso                          5.70G  8.09T  5.70G  /storage/iso                       
    storage/nextcloud                     645G  8.09T   174G  /storage/nextcloud                 
    storage/nextcloud/subvol-119-disk-2   471G  8.09T   471G  /storage/nextcloud/subvol-119-disk-2
    storage/other                        56.3G  8.09T   140K  /storage/other                     
    storage/other/subvol-126-disk-1      56.3G   444G  56.3G  /storage/other/subvol-126-disk-1   
    storage/plex                         11.0T  8.09T   140K  /storage/plex                       
    storage/plex/subvol-102-disk-1       11.0T  4.11T  10.9T  /storage/plex/subvol-102-disk-1     
    storage/subvol-130-disk-1             229G  27.0G   229G  /storage/subvol-130-disk-1         
    
    Thank you for your help and your time.
     
  8. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    It seems there are 2.47G on that subvolume - either they are all inside the dev folder (`ls -laR /ssd/subvol-113-disk-2/dev`) or something got mounted over the subvolume.
    To check for this, it's probably easiest to change into the directory and run `df -h .`:
    `cd /ssd/subvol-113-disk-2/ ; df -h .`
    (Post the output if you're not sure what's mounted there.)

    Otherwise, check your journal from the boot for errors/warnings w.r.t. the zpool mount.
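    For example, something along these lines (a rough sketch, assuming the standard zfsutils-linux systemd units are installed):
    Code:
    # messages from the current boot that mention zfs/zpool/mount
    journalctl -b | grep -iE 'zfs|zpool|mount'
    # or look at the ZFS import/mount units directly
    journalctl -b -u zfs-import-cache -u zfs-import-scan -u zfs-mount
    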
     
  9. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    The dev directory is totally empty:
    Code:
    root@pve:/home/theophile# ls -laR /ssd/subvol-103-disk-2/dev/
    /ssd/subvol-103-disk-2/dev/:
    total 8
    drwxr-xr-x 2 root root 4096 Apr 15 19:58 .
    drwxr----- 3 root root 4096 Apr 15 19:58 ..
    
    Here is the output of df -h in the subvol directory:
    Code:
    root@pve:/ssd/subvol-103-disk-2# df -h
    Filesystem                           Size  Used Avail Use% Mounted on
    udev                                  48G     0   48G   0% /dev
    tmpfs                                9.5G  1.6M  9.5G   1% /run
    /dev/mapper/pve-root                  15G  8.3G  5.2G  62% /
    tmpfs                                 48G   34M   48G   1% /dev/shm
    tmpfs                                5.0M     0  5.0M   0% /run/lock
    tmpfs                                 48G     0   48G   0% /sys/fs/cgroup
    ssd/subvol-100-disk-1                8.0G  594M  7.5G   8% /ssd/subvol-100-disk-1
    ssd/subvol-102-disk-2                 14G  6.0G  8.1G  43% /ssd/subvol-102-disk-2
    ssd/subvol-102-disk-3                 37G  128K   37G   1% /ssd/subvol-102-disk-3
    ssd/subvol-108-disk-1                 32G  727M   32G   3% /ssd/subvol-108-disk-1
    ssd/subvol-112-disk-1                8.0G  1.3G  6.8G  16% /ssd/subvol-112-disk-1
    ssd/subvol-113-disk-3                 37G  128K   37G   1% /ssd/subvol-113-disk-3
    ssd/subvol-114-disk-1                6.0G  1.6G  4.5G  27% /ssd/subvol-114-disk-1
    ssd/subvol-115-disk-1                 32G  2.3G   30G   8% /ssd/subvol-115-disk-1
    ssd/subvol-116-disk-2                 20G   14G  6.8G  67% /ssd/subvol-116-disk-2
    ssd/subvol-119-disk-3                 37G  128K   37G   1% /ssd/subvol-119-disk-3
    ssd/subvol-120-disk-1                8.0G  2.2G  5.9G  27% /ssd/subvol-120-disk-1
    ssd/subvol-122-disk-3                 38G  499M   37G   2% /ssd/subvol-122-disk-3
    ssd/subvol-129-disk-1                 32G  910M   32G   3% /ssd/subvol-129-disk-1
    storage/downloads/subvol-112-disk-1 1000G  408G  593G  41% /storage/downloads/subvol-112-disk-1
    storage/gitlab/subvol-113-disk-1     100G  440M  100G   1% /storage/gitlab/subvol-113-disk-1
    storage/nextcloud/subvol-119-disk-2  8.6T  471G  8.1T   6% /storage/nextcloud/subvol-119-disk-2
    storage/plex/subvol-102-disk-1        15T   11T  4.2T  73% /storage/plex/subvol-102-disk-1
    /dev/fuse                             30M   40K   30M   1% /etc/pve
    tmpfs                                9.5G     0  9.5G   0% /run/user/1000
    
    All containers whose rootfs appears in that list are working correctly.
     
  10. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    You missed the final '.' in the df command:
    `cd /ssd/subvol-113-disk-2/ ; df -h .`

    However, it looks as if `ssd/subvol-113-disk-2` is not mounted (please post the output of the above command for verification nonetheless).

    Please also post:
    * `zfs get all ssd/subvol-113-disk-2`
    * `zfs get all ssd`
    * `zpool get all ssd`

    And carefully check your journal for messages from zfs and mount.
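    You could also try mounting the dataset directly - if it fails, it should print the exact reason (just a suggestion):
    Code:
    # attempt to mount only this dataset; an error like "directory is not empty"
    # would point at leftover files underneath the mountpoint
    zfs mount ssd/subvol-113-disk-2
    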
     
  11. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    Sorry, I totally missed it :(

    Code:
    root@pve:/ssd/subvol-103-disk-2# df -h .
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/pve-root   15G  8.3G  5.2G  62% /
    
    Code:
    root@pve:/ssd/subvol-103-disk-2# zfs get all ssd/subvol-103-disk-2        
    NAME                   PROPERTY              VALUE                   SOURCE
    ssd/subvol-103-disk-2  type                  filesystem              -    
    ssd/subvol-103-disk-2  creation              Sat Jan 13 11:21 2018   -    
    ssd/subvol-103-disk-2  used                  1.23G                   -    
    ssd/subvol-103-disk-2  available             6.77G                   -    
    ssd/subvol-103-disk-2  referenced            1.23G                   -    
    ssd/subvol-103-disk-2  compressratio         1.00x                   -    
    ssd/subvol-103-disk-2  mounted               no                      -    
    ssd/subvol-103-disk-2  quota                 none                    default
    ssd/subvol-103-disk-2  reservation           none                    default
    ssd/subvol-103-disk-2  recordsize            128K                    default
    ssd/subvol-103-disk-2  mountpoint            /ssd/subvol-103-disk-2  default
    ssd/subvol-103-disk-2  sharenfs              off                     default
    ssd/subvol-103-disk-2  checksum              on                      default
    ssd/subvol-103-disk-2  compression           off                     default
    ssd/subvol-103-disk-2  atime                 on                      default
    ssd/subvol-103-disk-2  devices               on                      default
    ssd/subvol-103-disk-2  exec                  on                      default
    ssd/subvol-103-disk-2  setuid                on                      default
    ssd/subvol-103-disk-2  readonly              off                     default
    ssd/subvol-103-disk-2  zoned                 off                     default
    ssd/subvol-103-disk-2  snapdir               hidden                  default
    ssd/subvol-103-disk-2  aclinherit            restricted              default
    ssd/subvol-103-disk-2  createtxg             8107653                 -    
    ssd/subvol-103-disk-2  canmount              on                      default
    ssd/subvol-103-disk-2  xattr                 sa                      local 
    ssd/subvol-103-disk-2  copies                1                       default
    ssd/subvol-103-disk-2  version               5                       -    
    ssd/subvol-103-disk-2  utf8only              off                     -    
    ssd/subvol-103-disk-2  normalization         none                    -    
    ssd/subvol-103-disk-2  casesensitivity       sensitive               -    
    ssd/subvol-103-disk-2  vscan                 off                     default
    ssd/subvol-103-disk-2  nbmand                off                     default
    ssd/subvol-103-disk-2  sharesmb              off                     default
    ssd/subvol-103-disk-2  refquota              8G                      local 
    ssd/subvol-103-disk-2  refreservation        none                    default
    ssd/subvol-103-disk-2  guid                  7768508220316712427     -    
    ssd/subvol-103-disk-2  primarycache          all                     default
    ssd/subvol-103-disk-2  secondarycache        all                     default
    ssd/subvol-103-disk-2  usedbysnapshots       0B                      -    
    ssd/subvol-103-disk-2  usedbydataset         1.23G                   -    
    ssd/subvol-103-disk-2  usedbychildren        0B                      -    
    ssd/subvol-103-disk-2  usedbyrefreservation  0B                      -    
    ssd/subvol-103-disk-2  logbias               latency                 default
    ssd/subvol-103-disk-2  dedup                 off                     default
    ssd/subvol-103-disk-2  mlslabel              none                    default
    ssd/subvol-103-disk-2  sync                  standard                default
    ssd/subvol-103-disk-2  dnodesize             legacy                  default
    ssd/subvol-103-disk-2  refcompressratio      1.00x                   -    
    ssd/subvol-103-disk-2  written               1.23G                   -    
    ssd/subvol-103-disk-2  logicalused           1.10G                   -    
    ssd/subvol-103-disk-2  logicalreferenced     1.10G                   -    
    ssd/subvol-103-disk-2  volmode               default                 default
    ssd/subvol-103-disk-2  filesystem_limit      none                    default
    ssd/subvol-103-disk-2  snapshot_limit        none                    default
    ssd/subvol-103-disk-2  filesystem_count      none                    default
    ssd/subvol-103-disk-2  snapshot_count        none                    default
    ssd/subvol-103-disk-2  snapdev               hidden                  default
    ssd/subvol-103-disk-2  acltype               posixacl                local 
    ssd/subvol-103-disk-2  context               none                    default
    ssd/subvol-103-disk-2  fscontext             none                    default
    ssd/subvol-103-disk-2  defcontext            none                    default
    ssd/subvol-103-disk-2  rootcontext           none                    default
    ssd/subvol-103-disk-2  relatime              off                     default
    ssd/subvol-103-disk-2  redundant_metadata    all                     default
    ssd/subvol-103-disk-2  overlay               off                     default
    
    Code:
    root@pve:/ssd/subvol-103-disk-2# zfs get all ssd                  
    NAME  PROPERTY              VALUE                  SOURCE        
    ssd   type                  filesystem             -              
    ssd   creation              Sat Oct  8 17:57 2016  -              
    ssd   used                  70.7G                  -              
    ssd   available             36.8G                  -              
    ssd   referenced            29.5M                  -              
    ssd   compressratio         1.00x                  -              
    ssd   mounted               no                     -              
    ssd   quota                 none                   default        
    ssd   reservation           none                   default        
    ssd   recordsize            128K                   default        
    ssd   mountpoint            /ssd                   default        
    ssd   sharenfs              off                    default        
    ssd   checksum              on                     default        
    ssd   compression           off                    default        
    ssd   atime                 on                     default        
    ssd   devices               on                     default        
    ssd   exec                  on                     default        
    ssd   setuid                on                     default        
    ssd   readonly              off                    default        
    ssd   zoned                 off                    default        
    ssd   snapdir               hidden                 default        
    ssd   aclinherit            restricted             default        
    ssd   createtxg             1                      -              
    ssd   canmount              on                     default        
    ssd   xattr                 on                     default        
    ssd   copies                1                      default        
    ssd   version               5                      -              
    ssd   utf8only              off                    -              
    ssd   normalization         none                   -              
    ssd   casesensitivity       sensitive              -              
    ssd   vscan                 off                    default        
    ssd   nbmand                off                    default        
    ssd   sharesmb              off                    default        
    ssd   refquota              none                   default        
    ssd   refreservation        none                   default        
    ssd   guid                  9583472617255780575    -              
    ssd   primarycache          all                    default        
    ssd   secondarycache        all                    default        
    ssd   usedbysnapshots       0B                     -              
    ssd   usedbydataset         29.5M                  -              
    ssd   usedbychildren        70.7G                  -              
    ssd   usedbyrefreservation  0B                     -              
    ssd   logbias               latency                default        
    ssd   dedup                 off                    default        
    ssd   mlslabel              none                   default        
    ssd   sync                  standard               default        
    ssd   dnodesize             legacy                 default        
    ssd   refcompressratio      1.00x                  -              
    ssd   written               29.5M                  -              
    ssd   logicalused           56.7G                  -              
    ssd   logicalreferenced     29.4M                  -              
    ssd   volmode               default                default        
    ssd   filesystem_limit      none                   default        
    ssd   snapshot_limit        none                   default        
    ssd   filesystem_count      none                   default        
    ssd   snapshot_count        none                   default        
    ssd   snapdev               hidden                 default        
    ssd   acltype               off                    default        
    ssd   context               none                   default        
    ssd   fscontext             none                   default        
    ssd   defcontext            none                   default        
    ssd   rootcontext           none                   default        
    ssd   relatime              off                    default        
    ssd   redundant_metadata    all                    default        
    ssd   overlay               off                    default      
    
    Code:
    root@pve:/ssd/subvol-103-disk-2# zpool get all ssd                        
    NAME  PROPERTY                       VALUE                          SOURCE
    ssd   size                           111G                           -    
    ssd   capacity                       54%                            -    
    ssd   altroot                        -                              default
    ssd   health                         ONLINE                         -    
    ssd   guid                           1025623658193926588            -    
    ssd   version                        -                              default
    ssd   bootfs                         -                              default
    ssd   delegation                     on                             default
    ssd   autoreplace                    off                            default
    ssd   cachefile                      -                              default
    ssd   failmode                       wait                           default
    ssd   listsnapshots                  off                            default
    ssd   autoexpand                     off                            default
    ssd   dedupditto                     0                              default
    ssd   dedupratio                     1.00x                          -    
    ssd   free                           50.3G                          -    
    ssd   allocated                      60.7G                          -    
    ssd   readonly                       off                            -    
    ssd   ashift                         0                              default
    ssd   comment                        -                              default
    ssd   expandsize                     -                              -    
    ssd   freeing                        0                              -    
    ssd   fragmentation                  56%                            -    
    ssd   leaked                         0                              -    
    ssd   multihost                      off                            default
    ssd   feature@async_destroy          enabled                        local 
    ssd   feature@empty_bpobj            active                         local 
    ssd   feature@lz4_compress           active                         local 
    ssd   feature@multi_vdev_crash_dump  enabled                        local 
    ssd   feature@spacemap_histogram     active                         local 
    ssd   feature@enabled_txg            active                         local 
    ssd   feature@hole_birth             active                         local 
    ssd   feature@extensible_dataset     active                         local 
    ssd   feature@embedded_data          active                         local 
    ssd   feature@bookmarks              enabled                        local 
    ssd   feature@filesystem_limits      enabled                        local 
    ssd   feature@large_blocks           enabled                        local 
    ssd   feature@large_dnode            enabled                        local 
    ssd   feature@sha512                 enabled                        local 
    ssd   feature@skein                  enabled                        local 
    ssd   feature@edonr                  enabled                        local 
    ssd   feature@userobj_accounting     active                         local
    
    I also found this in the journal:
    Code:
    Apr 15 20:00:37 pve zpool[30123]: no pools available to import
    
    April 15 at 20:00 is when I rebooted the server and when the LXC containers stopped working.
     
  12. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    Hmm - could be a timing issue.
    You could try a `zfs mount -a`.
    Otherwise, check out the wiki page: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks (the entry about rootdelay - 'Boot fails and goes into busybox').
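    Roughly, the wiki approach is the following (a sketch only - please follow the wiki page for the exact steps):
    Code:
    # /etc/default/grub - give the disks more time before the pool import, e.g.:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
    # then regenerate the grub configuration and reboot
    update-grub
    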

    hope this helps!
     
  13. TheMineGeek

    TheMineGeek New Member

    Joined:
    Sep 28, 2016
    Messages:
    15
    Likes Received:
    0
    Thanks a lot. I tried `zfs mount -a` but it failed:
    Code:
    cannot mount '/ssd/subvol-103-disk-2': directory is not empty
    
    So I deleted the dev folder and then mounted it successfully. Any idea what could have happened?
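    For anyone else hitting this, the recovery was roughly as follows (only do this if you are sure the directory contains nothing but empty leftover mount stubs, not your data):
    Code:
    # the leftover dev directory was empty, so rmdir is safe here
    rmdir /ssd/subvol-103-disk-2/dev
    # remount all ZFS datasets and start the container again
    zfs mount -a
    pct start 103
    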
    Thanks again
     
  14. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,377
    Likes Received:
    129
    As said - your pool could not be imported, and you need to find out why. One of the things that can cause this is the disks not being available yet when the system tries to import the pool; a delay can help there (that's explained in the wiki page).

    Anyways - glad that your data is still in place! :)
     