After creating VM no. 4, the other VMs are not bootable

goateam
May 24, 2018
I have 3 virtual machines on Proxmox 5.1:

1. CentOS 7 on SSD (ZFS, 130 GB), 12 GB RAM
2. pfSense on HDD (ZFS, 10 GB), 1 GB RAM
3. Windows 10 on HDD (ZFS, 100 GB), 4 GB RAM

When I create the fourth VM, the first and sometimes the second virtual machine start using too much RAM but keep running.
After a reboot, however, the VM does not boot: "No bootable device". The partitions are destroyed, and restoring them with TestDisk does not help. When I mount the restored LVM in Ubuntu, it is empty.
There are no errors in the logs. Proxmox made scheduled backups to the NAS for 4 days, but those backups already contain the damaged systems; I restored them, but they do not boot either.

I don't understand how creating one VM can destroy the others, even when they are on different physical disks.

Configuration:
HP ProLiant ML150G9
Intel Xeon E5-2609v4 (8 cores, 1.7 GHz)
32 GB RAM
2x SSD HPE 150GB SATA 6G
2x HPE HDD 1TB 6G SATA 7.2K

Can anyone help me to solve this?
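For what it's worth, the RAM figures above add up close to the host's limit once ZFS's ARC cache is counted (ZFS on Linux defaults to capping the ARC at roughly half of physical RAM unless tuned). A rough sketch of the budget, which would explain the memory pressure if not the corruption:

```shell
# Rough RAM budget in GB for the host described above.
# The ARC ceiling is an assumption: ZFS on Linux defaults to ~50% of RAM.
host=32
vms=$((12 + 1 + 4 + 2))    # CentOS + pfSense + Win10 + the new 4th VM
arc=$((host / 2))          # default ARC ceiling: ~16 GB
echo "VM allocations: ${vms} GB"
echo "VMs + ARC ceiling: $((vms + arc)) GB of ${host} GB physical RAM"
```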
 
How is your storage configured (/etc/pve/storage.cfg)?
LVM? ZFS? Directory?
 
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso
        maxfiles 5
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: ssd
        pool ssd
        content images,rootdir
        sparse 1

nfs: NAS
        export /volume1/VMbackup
        path /mnt/pve/NAS
        server 192.168.11.155
        content backup
        maxfiles 5
        options vers=3
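If it helps, which storage actually backs each VM disk can be cross-checked against this config (VMID 100 here is just an example):

```shell
# List all configured storages with their status and free space
pvesm status

# Show the disk lines of a VM's config to see which storage backs each disk
qm config 100 | grep -E '^(scsi|sata|ide|virtio)[0-9]'
```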
 
What does zfs list say?
What is the config of the 4th VM?
 
zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             211G   688G   104K  /rpool
rpool/ROOT                        104G   688G    96K  /rpool/ROOT
rpool/ROOT/pve-1                  104G   688G   104G  /
rpool/data                       98.1G   688G    96K  /rpool/data
rpool/data/vm-101-disk-1         1.38G   688G  1.37G  -
rpool/data/vm-101-state-test      242M   688G   242M  -
rpool/data/vm-102-disk-1         91.8G   688G  67.3G  -
rpool/data/vm-102-state-predABRA 2.92G   688G  2.92G  -
rpool/data/vm-102-state-         1.72G   688G  1.72G  -
rpool/swap                       8.50G   693G  3.69G  -
ssd                               134G   586M    96K  /ssd
ssd/vm-100-disk-1                 134G   119G  15.4G  -

The 4th VM was Zentyal: 2 GB RAM, 500 GB SATA disk, bridged network.
 
On which storage did you create the 4th VM? If you created it on 'ssd', please notice that
ssd 134G 586M 96K /ssd
ssd/vm-100-disk-1 134G 119G 15.4G -
says you only have ~600 MiB free there.
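Assuming the pool names from the output above, the space accounting behind those numbers can be inspected with:

```shell
# Per-dataset breakdown: shows whether space is held by live data,
# snapshots, or a refreservation on the zvol
zfs list -o space -r ssd

# A non-sparse zvol reserves its full volsize up front via refreservation,
# which would explain USED (134G) being far above REFER (15.4G)
zfs get refreservation,volsize,used ssd/vm-100-disk-1
```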
 
I know this; the ssd pool was not used. The 4th VM was created on the HDD, where there is plenty of space.
Moreover, creating a snapshot of the 3rd VM destroyed VM no. 1 today.
 
Are you sure your disks are all right? It sounds like your storage is not healthy in general.
Check the SMART values (though even if they're OK, that does not mean everything is fine).
Check the syslog for anything out of the ordinary.
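The checks suggested above look something like this (device names are examples; adjust them for your disks):

```shell
# ZFS's own error counters: checksum errors here point at bad disks or cables
zpool status -v

# SMART overall health, plus the attributes that most often predict failure
smartctl -H /dev/sda
smartctl -A /dev/sda | grep -E 'Reallocated|Pending|Uncorrectable'

# Kernel messages about I/O problems
journalctl -k | grep -iE 'i/o error|ata[0-9]'
```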
 
I have had a similar experience. I restored 3 VMs, booted them up, and they were running fine. Then, after restoring a 4th VM (all came from another Proxmox cluster), suddenly all the VMs shut down and all 4 VMs' disks presented with 'no bootable device'.
The storage for these VMs is 6x 120 GB SSDs using ZFS.