[SOLVED] Timed out waiting for device /dev/mapper/pve-data


Sep 11, 2017
Since installing PVE 6, each reboot requires me to manually mount the pve-data LVM volume and switch from emergency mode to init 3.
During boot up the following error shows:

systemd[1]: dev-mapper-pve\x2ddata.device: Job dev-mapper-pve\
systemd[1]: Timed out waiting for device /dev/mapper/pve-data.
-- Subject: A start job for unit dev-mapper-pve\x2ddata.device has failed

The server drops into emergency mode. When I manually mount the device and issue "init 3", the server goes into full operation.
As the servers are located in a remote datacentre, each time the system needs a reboot I have to drive 30 km to reboot it manually.
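For reference, the manual recovery from the console is roughly this (a sketch; the mount point /data-a comes from my fstab below):

```shell
# From the emergency-mode shell: mount the device manually
mount /dev/mapper/pve-data /data-a

# Leave emergency mode and continue booting to the normal target
# (on systemd hosts this is what "init 3" effectively does)
systemctl default
```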

The servers ran PVE 5 without problems for a year (older server) and 5 months (newer one), but the in-place upgrade broke both servers and I had to do fresh PVE 6 installs on both.

Thanks in advance for any idea...
Could you post the contents of /etc/fstab and /etc/pve/storage.cfg, and the outputs of:

pvs -a
vgs -a
lvs -a
/etc/fstab:

/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=BBCE-5A2D /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/mapper/pve-data /data-a xfs defaults 0 2


/etc/pve/storage.cfg:

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: data-a
        path /data-a
        content vztmpl,iso,backup,rootdir,snippets,images
        maxfiles 1
        shared 1

dir: data-b
        path /data-b
        content images,snippets,backup,rootdir,vztmpl,iso
        maxfiles 1
        shared 1

nfs: VMNFS-Store
        export /backup/VMNFS
        path /mnt/pve/VMNFS-Store
        server backup
        content rootdir,backup,iso,vztmpl,images,snippets
        maxfiles 1

lvmthin: talaxia-lvm
        thinpool talaxia-lvm
        vgname talaxia-lvm
        content images,rootdir
        nodes talaxia

pvs -a:
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda2           ---       0      0
  /dev/sda3  pve lvm2 a--  10.91t <16.38g

vgs -a:
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   3   0 wz--n- 10.91t <16.38g

lvs -a:
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- <10.77t             0.00   0.38
  [data_tdata]    pve Twi-ao---- <10.77t
  [data_tmeta]    pve ewi-ao----  15.81g
  [lvol0_pmspare] pve ewi-------  15.81g
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
Hmm - it seems that '/dev/mapper/pve-data' is provisioned as a thin pool (lvs -a output) but used as an xfs filesystem (fstab entry).

How did you create that storage?
In any case I would suggest to:
* move all data away from that directory,
* decide which of the two you want (thin pool, or a plain LV mounted as directory storage),
* remove both the filesystem and the thin pool,
* create whichever variant you chose and use only that.
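The cleanup could look roughly like this (a sketch only - the destination path is hypothetical, and the lvremove step destroys the thin pool, so run it only after everything is moved off and only if you decided against the pool):

```shell
# 1. Move all data off the directory storage first
#    (/some/other/storage is a placeholder destination)
rsync -a /data-a/ /some/other/storage/

# 2. Unmount and drop the fstab entry so boot no longer waits on the device
umount /data-a
sed -i '\#^/dev/mapper/pve-data#d' /etc/fstab

# 3. Only if you chose the directory-storage route: remove the pool
#    (DESTROYS pve/data and everything on it!)
# lvremove pve/data
```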

There was an earlier thread from a user with similar problems:

I hope this helps!
Update: Simply removing the /etc/fstab entry solved the problem.
After that I manually changed the entries in /etc/pve/qemu-server/<ID>.conf from <PATH>:<ID>/<VM-DISK> to <LVMTHIN>:<VM-DISK> and restored the machines from the previous backup.
All works as expected, after the reboot the host comes up nicely.
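For anyone finding this later, the change in /etc/pve/qemu-server/&lt;ID&gt;.conf is just the storage prefix on the disk line - a hypothetical example with made-up IDs:

```
# before: disk referenced through the directory storage
scsi0: data-a:105/vm-105-disk-0.qcow2,size=32G

# after: disk referenced through the thin-pool storage
scsi0: local-lvm:vm-105-disk-0,size=32G
```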

Thanks for the help!

