Problem creating a cluster: lost all VM settings

Tacioandrade

Good day, everyone. My name is Tácio Andrade and I am a novice Proxmox user. Yesterday I tried for the first time to create a cluster between a Proxmox 3.0 system and a 3.1 system I had just installed. After a few problems during the join, however, a serious error occurred and the files in /etc/vz/conf/ disappeared. The VMs are still running at the moment, and the only backup I have is from Saturday. Any idea what I should do so I don't lose the VMs in the event of a reboot?


I went to the node that was supposed to be the secondary, but apparently it ended up as the master, and I checked the directory; the output was as follows:

root@isaias:~# ls -l /etc/pve/nodes/joao/openvz/
total 0

PS: I'm desperate here, because these VMs are in production. =S
 
I am not sure what you are talking about - the files are stored at /etc/pve/

# ls -lR /etc/pve

to see the whole content.
 
Hi,
looks like /etc/pve isn't mounted.

What is the output of
Code:
mount | grep etc/pve
pvecm status
If you have quorum, perhaps a restart of pve-cluster helps ("service pve-cluster restart").

Udo
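
Strung together, the checks suggested above look like this (just the commands already mentioned in this thread, in the order they would be run; not an official procedure):
Code:
mount | grep /etc/pve            # is the pmxcfs filesystem mounted at /etc/pve?
pvecm status                     # do we still have quorum?
ls -lR /etc/pve                  # is the configuration tree populated?
service pve-cluster restart      # only worth trying if quorum looks OK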
 
Sorry, I couldn't explain it properly. Originally there was a host, joao, running Proxmox 3.0, not clustered and in production. On a second host, isaias, I installed Proxmox 3.1.
During the cluster join (my first), some errors occurred which deleted the /etc/vz/*.conf files, and the qemu ones as well.
After contacting the Proxmox user group on Facebook I started trying a few things, and right now I have the following situation: the node joao shows as offline (it has been like that since the problems with the cluster creation) and the files look as follows:

root@joao # pvecm status
Version: 6.2.0
Config Version: 3
Cluster Name: clubenet
Cluster Id: 26462
Cluster Member: Yes
Cluster Generation: 4
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: joao
Node ID: 1
Multicast addresses: 239.192.103.197
Node addresses: 189.52.xxx.xxx

root@joao # cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster name="clubenet" config_version="3">

<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>

<clusternodes>
<clusternode name="joao" votes="1" nodeid="1"/></clusternodes>

</cluster>

Status in the web interface: https://fbcdn-sphotos-d-a.akamaihd.net/hphotos-ak-prn2/1233274_584818801579908_1589830377_o.jpg

I'm almost giving up; on Monday I'll restore the backup (from last Saturday) onto a borrowed host and then manually bring the files, the DBMS and so on up to date.

#LongDay
 

Yes, it is mounted. The problem is that during the join the configuration files should have been copied from isaias to joao, but the opposite happened during the transfer, an error occurred, and those configuration files were lost.
After removing the cluster on joao and restarting pve-cluster, the directory was recreated, but the OpenVZ and qemu .conf files are still gone.
 
Hi,
you wrote in the first post that the VMs are still running. Then you can recreate the settings from the running processes, like this:
Code:
ps aux | grep kvm
...
root        4383  8.9 12.7 8989672 8403096 ?     Sl   Sep06 1977:08 /usr/bin/kvm -id 210 
 -chardev socket,id=qmp,path=/var/run/qemu-server/210.qmp,server,nowait 
 -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/210.vnc,x509,password 
 -pidfile /var/run/qemu-server/210.pid -daemonize -name fileserver -smp sockets=2,cores=1 
 -nodefaults -boot menu=on -vga qxl -cpu kvm64,+x2apic,+sep -k de 
 -spice tls-port=61000,addr=127.0.0.1,tls-ciphers=DES-CBC3-SHA,seamless-migration=on 
 -device virtio-serial,id=spice,bus=pci.0,addr=0x9 -chardev spicevmc,id=vdagent,name=vdagent 
 -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -m 8192 
 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 
 -drive file=/dev/sata/vm-210-disk-1,if=none,id=drive-virtio1,aio=native,cache=none 
 -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb 
 -drive if=none,id=drive-ide2,media=cdrom,aio=native 
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 
 -drive file=/dev/b_r1/vm-210-disk-1,if=none,id=drive-virtio0,aio=native,cache=none 
 -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=101 
 -netdev type=tap,id=net0,ifname=tap210i0,script=/var/lib/qemu-server/pve-bridge,vhost=on
 -device virtio-net-pci,mac=9E:E0:FB:8C:6A:76,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300


cat /etc/pve/qemu-server/210.conf

bootdisk: virtio0
cores: 1
cpu: kvm64
ide2: none,media=cdrom
memory: 8192
name: fileserver
net0: virtio=9E:E0:FB:8C:6A:76,bridge=vmbr4
ostype: l26
sockets: 2
vga: qxl
virtio0: b_r1:vm-210-disk-1,size=10G
virtio1: sata:vm-210-disk-1,backup=no,size=5000G
You must look up your own storage naming, but that is not a real problem...

BTW, the configs live below /etc/pve/qemu-server, not /etc/qemu-server - that was for PVE 1.x!!

Udo
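
A small sketch of how the same information can be pulled out of /proc for every running guest, using the PID files qemu-server writes under /var/run/qemu-server/; the option list only covers the settings that appear in the 210.conf example above (name, smp, cpu, vga, memory, drives, network), so adjust it as needed:
Code:
# print, per running KVM guest, the command-line options that map to
# /etc/pve/qemu-server/<vmid>.conf entries
for pidfile in /var/run/qemu-server/*.pid; do
    vmid=$(basename "$pidfile" .pid)
    echo "=== VM $vmid ==="
    # /proc/<pid>/cmdline is NUL-separated; put one argument per line, then
    # show each interesting option together with the argument that follows it
    tr '\0' '\n' < "/proc/$(cat "$pidfile")/cmdline" \
        | grep -A1 -E '^-(name|smp|m|cpu|vga|drive|device|netdev)$'
done
The values printed can then be copied into a fresh /etc/pve/qemu-server/<vmid>.conf along the lines of the 210.conf shown above.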
 
I really did get the qemu-kvm configuration back, but for the container-based VMs (5 of them) I can't find anything via ps aux.

In any case, at least recovering my pfSense VM this way has been a great help. =D

On Monday I'll get another machine, install Proxmox 3.1, and try to get it right this time.
 
Hi,
the config for the OpenVZ containers is not as easy to recover, but you can find some information in the proc filesystem (and many things in the config are "standard").
See here:
Code:
cat /etc/pve/openvz/100.conf 
ONBOOT="no"

PHYSPAGES="0:512M"
SWAPPAGES="0:512M"
KMEMSIZE="232M:256M"
DCACHESIZE="116M:128M"
LOCKEDPAGES="256M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="6G:[B]6920601[/B]"
DISKINODES="[B]1200000:1320000[/B]"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="1"
HOSTNAME="vz1.domain.com"
SEARCHDOMAIN="domain.com"
NAMESERVER="localhost"
NETIF="ifname=eth0,mac=[B]9A:C0:4C:62:8C:7B[/B],host_ifname=veth100.0,host_mac=[B]AA:90:8A:AE:D9:9A[/B],bridge=vmbr4"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/100"
OSTEMPLATE="debian-7.0-standard_7.0-1_i386.tar.gz"
####################################

cat /proc/vz/veth 
Version: 1.0
aa:90:8a:ae:d9:9a        veth100.0 9a:c0:4c:62:8c:7b             eth0        100  deny

cat /proc/vz/vzquota 
qid: path            usage      softlimit      hardlimit       time     expire
100: /var/lib/vz/private/100
  1k-blocks         567440        6291456        6920601          0          0
     inodes          22389        1200000        1320000          0          0
Udo
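
If the containers are still running, the standard OpenVZ tools can also dump most of this runtime state directly; a minimal sketch, assuming CTID 100 as in the sample config (any running CTID works the same way):
Code:
vzlist -a                             # CTID, process count, status, IP address and hostname per container
cat /proc/vz/veth                     # container MAC, host interface name and host MAC per veth pair
cat /proc/vz/vzquota                  # disk quota usage and soft/hard limits per container
vzctl exec 100 hostname               # ask the running container for its hostname
vzctl exec 100 cat /etc/resolv.conf   # and for its nameserver/searchdomain settings
Combined with the defaults in the 100.conf sample above, that should be enough to rebuild a usable /etc/pve/openvz/<ctid>.conf.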
 
Thank you for the OpenVZ template. For these containers I'm thinking of importing the backup I made a week ago and then bringing the filesystem up to date, but I still have to see whether that is feasible or not, given the downtime involved.
At least I relearned something from this story: never do anything on a production machine, even if it is something "ordinary".
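
For reference, restoring a container from a vzdump archive on Proxmox 3.x is done with vzrestore; a minimal sketch, where the archive name and CTID 105 are only placeholders for the actual Saturday backup:
Code:
# restore the vzdump archive into CTID 105, then start the container
vzrestore /var/lib/vz/dump/vzdump-openvz-105-2013_09_07-01_00_01.tar.gz 105
vzctl start 105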
 
