Empty conf file

svacaroaia2

New Member
Apr 24, 2014
Hi,
I just stumbled on a very strange issue - an empty conf file for a VM that is still up and running fine.
I discovered this issue because the backup complains about an "empty device list".

I am able to log in to the VM and do file-level backups.

Any help/suggestions as to how to solve this issue would be greatly appreciated.

Here are some technical details (the VM ID is 345):

root@blh02-01:~# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
330 mta.dmz.tor running 2048 6.00 239527
345 VM 345 running 0 0.00 43281

qm status 345
status: running

root@blh02-01:~# cat /etc/pve/qemu-server/345.conf
root@blh02-01:~# ls -l /etc/pve/qemu-server/345.conf
-rw-r----- 1 root www-data 0 Apr 23 21:03 /etc/pve/qemu-server/345.conf



ls -l /etc/pve/qemu-server/*
-rw-r----- 1 root www-data 244 Apr 23 21:03 /etc/pve/qemu-server/330.conf
-rw-r----- 1 root www-data 0 Apr 23 21:03 /etc/pve/qemu-server/345.conf
-rw-r----- 1 root www-data 0 Mar 5 02:37 /etc/pve/qemu-server/345.conf.tmp.869116

root@blh02-01:~# lvdisplay | grep 345
LV Path /dev/cluster01-vol/vm-345-disk-1
LV Name vm-345-disk-1


pveversion -v
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


root@blh02-01:~# pvecm status
Version: 6.2.0
Config Version: 126
Cluster Name: bl02-cluster01
Cluster Id: 29537
Cluster Member: Yes
Cluster Generation: 43944
Membership state: Cluster-Member
Nodes: 8
Expected votes: 5
Total votes: 8
Node votes: 1
Quorum: 5
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: blh02-01
Node ID: 8
Multicast addresses: 239.192.115.212
Node addresses: 10.10.19.36
root@blh02-01:~# pvecm nodes
Node Sts Inc Joined Name
1 M 43784 2014-02-26 14:26:36 blh02-14
3 M 43876 2014-03-25 22:41:14 blh02-10
4 M 43940 2014-04-03 22:52:34 blh02-11
5 M 43784 2014-02-26 14:26:36 blh02-12
6 M 43784 2014-02-26 14:26:36 blh02-08
7 M 43784 2014-02-26 14:26:36 blh02-07
8 M 43780 2014-02-26 14:26:36 blh02-01
9 M 43784 2014-02-26 14:26:36 blh02-03
 
Hello svacaroaia2,

Do you have a cluster or a stand-alone Proxmox server?
- If yes, it's recommended to have all nodes up and running.

Is the configuration for HD and network still visible in the GUI?
- If yes: there is a misunderstanding somewhere that I cannot explain, but an attempt to restore the content cannot make it worse.
- If no: the devil knows who emptied the file. Create a new one; the structure is simple and self-explanatory, see the following example for VM 101:


cat /etc/pve/qemu-server/101.conf
bootdisk: ide0
cores: 1
ide0: ebu:100/base-100-disk-1.qcow2/101/vm-101-disk-1.qcow2,format=qcow2,size=22G
ide2: ebu:iso/lubuntu-13.10-alternate-i386.iso,media=cdrom
memory: 512
name: LUBUNTU6
net0: virtio=0A:7C:F5:69:87:E0,bridge=vmbr3
net1: virtio=79:E2:92:87:C2:43,bridge=vmbr2
ostype: l26
sockets: 1
vga: vmware

As far as I understand, the config file is only used to generate the (parameters for the) kvm call.
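
That also means the running kvm process still carries all of those parameters. One way to recover them (just a sketch - 43281 is the PID shown for VM 345 in your qm list output; memory size, MAC addresses and disk paths appear as plain arguments) is to read the command line of the running process:

# print the running kvm command line, one argument per line
# (/proc/<pid>/cmdline is NUL-separated, hence the tr)
tr '\0' '\n' < /proc/43281/cmdline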

Kind regards

Mr.Holmes
 
Hi,

Thank you for your prompt response.

I do have a cluster with 8 nodes and about 60 VMs - all working well, and the cluster has quorum.

The GUI does NOT show the correct info (HD and network are missing, and the amount of RAM is not correct).

I can certainly create a new conf file - should I do it with the VM stopped?

The ultimate question is: since I have a RAW disk, there is no way I can lose data EVEN if I make mistakes in the conf file, correct?

Many thanks

Steven
 
First of all: the following is just my personal conclusion based on my experience (I have never had a cluster with more than 4 nodes, and never HA)...

"I do have a cluster with 8 nodes and about 60 VMs - all working well and the cluster has quorum"

A more complex scenario indeed; a synchronization bug cannot be excluded and should be investigated/carefully observed. Anyway, restoring the .conf would be a first-aid solution.

"I can certainly create a new conf file - should I do it with the VM stopped?"

I don't think so - as far as I know, the .conf is not read while a VM is running.

"The ultimate question is: since I have a RAW disk, there is no way I can lose data EVEN if I make mistakes in the conf file, correct?"

I don't think you can damage a virtual HD (RAW or other) with a wrong configuration in the .conf - I once defined a VM manually with an incorrect HD size and observed no problems. However, making a copy (rather than a "backup"; but do this while the VM is down) would not hurt...
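
For example, a minimal replacement config for VM 345 could look like the following - this is only a sketch; the storage name "cluster01-vol", the bus type, the memory size and the NIC settings are assumptions and must be adjusted to what the running VM really uses (take the MAC address from the running kvm process so the guest keeps its network identity):

cat /etc/pve/qemu-server/345.conf
bootdisk: virtio0
cores: 1
memory: 2048
name: vm345
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
sockets: 1
virtio0: cluster01-vol:vm-345-disk-1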
 
What happens if you shut down the VM and then try to start it again? Does it start? If it does, then it is obviously picking up conf information from somewhere. As Holmes pointed out, <vm>.conf is usually read at the time the VM is starting. Without any config the VM won't even start. When a VM is fully operational and you delete the conf file, the VM will still operate indefinitely, until it is time to restart and it calls for the conf file again.
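
Before risking a reboot, one way to sanity-check a recreated config (assuming your qemu-server version provides the showcmd subcommand) is to let it print the kvm command line it would generate from the file, and compare that with the currently running process:

qm showcmd 345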