Where are my VMs? All VMs running, but not showing in management.

Hi Udo.

I have everything working again on another server in the same datacenter, so I can use the same IP addresses for the VMs.

I'm reinstalling the OS on this one now =)

And thanks for the help; without this:
Code:
Hi,
here is a similar posting with pve: http://forum.proxmox.com/threads/9258-Deleted-image-of-running-vm

Udo

I would not have been able to do what I needed. I used it over scp =)
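For reference, the trick from that thread boils down to copying the still-open (deleted) image out of /proc while the VM keeps running. A rough sketch, where the VMID 101, the fd number and the target path are only examples, not values from this thread:
Code:
# PID of the VM's kvm process (Proxmox starts kvm with "-id <VMID>")
PID=$(pgrep -f "kvm -id 101")
# the deleted image still shows up among the open file descriptors
ls -l /proc/$PID/fd | grep deleted
# copy it out while the VM is running, e.g. fd 13 found above
cp /proc/$PID/fd/13 /root/vm-101-disk-1.raw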
 
Code:
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                 7.1G  404K  7.1G   1% /run
/dev/mapper/pve-root   60G  1.1G   56G   2% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  15G   47M   15G   1% /run/shm
/dev/mapper/pve-data   64Z   64Z  1.2T 100% /var/lib/vz
/dev/sda1             495M   35M  435M   8% /boot
/dev/fuse              30M   12K   30M   1% /etc/pve


No space left to write files on /var/lib/vz

64Z ?
Hi,
can you post the output of following commands:
Code:
pvs
vgs
lvs
Udo
 
Fine!
But now it's too late to find the cause of the /var/lib/vz issue...

But I think you have learned a lot about disaster recovery ;)

Udo

Yes, a lot. Copying from a deleted file while the VM is running... it's really fantastic.
I'm a little afraid of Proxmox now, but I'll give it a try.

I have the ModulesGarden module to work with WHMCS, and I also own a HostBill license with the IPAM module.

ModulesGarden is missing the option for customers to choose the OS when they are buying, and HostBill is missing backup control for customers. =/

Puff!

I really love Proxmox, I just hate this Debian!
 
There is a PVE wiki page that tells us that before integrating a PVE node into a cluster, the node must not have any VMs.

...well, not exactly, it says:

<<Adding nodes to the Cluster
Login via ssh to the other Proxmox VE nodes. Please note, the node cannot hold any VM that has the same ID of a VM on another node (otherwise you will get conflicts with identical VMID's - to workaround, use vzdump to backup and to restore to a different VMID after the cluster configuration)>>

http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Adding_nodes_to_the_Cluster

Marco
 
...well, not exactly, it says:

<<Adding nodes to the Cluster
Login via ssh to the other Proxmox VE nodes. Please note, the node cannot hold any VM that has the same ID of a VM on another node (otherwise you will get conflicts with identical VMID's - to workaround, use vzdump to backup and to restore to a different VMID after the cluster configuration)>>

http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Adding_nodes_to_the_Cluster

Marco

Hi Marco

Maybe I am wrong, I don't remember where I read about this, but this post from spirit (a PVE developer) tells us that the old "/etc/pve" directory of the node is overwritten when that host is added to a PVE cluster; please see the post:
http://forum.proxmox.com/threads/16...-but-not-show-in-management?p=85729#post85729

If this is correct (and I believe it is, because I had a similar or identical problem a long time ago), then the "Proxmox VE 2.0 Cluster" wiki must be modified.
 
Hi Marco

Maybe I am wrong, I don't remember where I read about this, but this post from spirit (a PVE developer) tells us that the old "/etc/pve" directory of the node is overwritten when that host is added to a PVE cluster; please see the post:
http://forum.proxmox.com/threads/16...-but-not-show-in-management?p=85729#post85729

If this is correct (and I believe it is, because I had a similar problem a long time ago), then the "Proxmox VE 2.0 Cluster" wiki must be modified.

I am always willing to update the wiki, if anyone confirms... this kind of thing confirms my doubts about the need for clearer & more complete cluster documentation...
http://forum.proxmox.com/threads/16...quot-cluster-docs-and-operations-quot-section
but nobody has said a word about it...

Marco
 
Why blame Debian and Proxmox for your own mistakes?
I'm not blaming; I really love Proxmox. I just said I'm afraid of clustering and that I hate Debian. Saying "I hate Debian" doesn't mean Debian is bad; it's me... I "hate Debian" because I don't know how to do a lot of things on Debian that I do easily on CentOS, and that doesn't mean Debian is bad. I know Debian is a very good and trustworthy distro. I just need to learn it.

...well, not exactly, it says:

<<Adding nodes to the Cluster
Login via ssh to the other Proxmox VE nodes. Please note, the node cannot hold any VM that has the same ID of a VM on another node (otherwise you will get conflicts with identical VMID's - to workaround, use vzdump to backup and to restore to a different VMID after the cluster configuration)>>

http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Adding_nodes_to_the_Cluster

Marco
If that is true, then it applies from the very start of every cluster. When I followed the wiki tutorial I had 4 VMs on one server, but the new server was a clean install with no VMs on it. My doubts about the tutorial are: (I'll post them after this post).

I am always willing to update the wiki, if anyone confirms... this kind of thing confirms my doubts about the need for clearer & more complete cluster documentation...
http://forum.proxmox.com/threads/16...quot-cluster-docs-and-operations-quot-section
but nobody has said a word about it...
Marco
Yes... I had some doubts while doing the cluster config. I'll post them now.
 
I am always willing to update the wiki, if anyone confirms... this kind of thing confirms my doubts about the need for clearer & more complete cluster documentation...
http://forum.proxmox.com/threads/16...quot-cluster-docs-and-operations-quot-section
but nobody has said a word about it...

Marco

People like you make it possible for people like me to learn, and that's good :D

And I am confirming that spirit's words are correct: I lost my VM configurations when I added a node to a PVE cluster, but since I had a previous copy of my "id.conf" and "id.qcow2" files, in my case it was not a problem to fix it quickly.
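As a simple precaution along those lines, one can keep a copy of the node's configs before joining; a minimal sketch (the backup filename is just an example):
Code:
# save the node's VM configs (and the rest of /etc/pve) before running pvecm add
tar czf /root/etc-pve-before-join.tar.gz /etc/pve
# the KVM configs themselves live here:
ls /etc/pve/qemu-server/        # e.g. 101.conf, 102.conf, ...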

Best regards
Cesar
 
I'm not blaming; I really love Proxmox. I just said I'm afraid of clustering and that I hate Debian

:) Don't hate Debian, what you need is just some experience... I rarely use Fedora/Red Hat systems, and I would surely panic there when nothing works the way it does on Debian-based systems...

To lower my fears about cluster operations, I recently used a nested test PVE cluster; I also wrote a wiki article about this: http://pve.proxmox.com/wiki/Nested_Virtualization

Maybe it will help you practice without (or with less) fear :)
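For reference, on an Intel host nested KVM can be enabled roughly like this (a sketch of the usual steps, not a quote from the wiki page; use kvm-amd on AMD hosts):
Code:
# allow KVM guests to run KVM themselves
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# reload the module (no VM may be using KVM at this moment)
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested    # should print Y (or 1)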

Marco
 
Login via ssh to the first Proxmox VE node. Use a unique name for your Cluster, this name cannot be changed later.
Create:
hp1# pvecm create YOUR-CLUSTER-NAME

1) Do I need to do this on all servers, or only on the first server? (I did it on all of them)


Add a node:
hp2# pvecm add IP-ADDRESS-CLUSTER

2) From which server do I run this? On the new server or on the old server that has the VMs?

Requirements

  • All nodes must be in the same network as it uses IP Multicast to communicate between nodes (See also Corosync Cluster Engine). Note: Some switches do not support IP multicast by default and must be manually enabled first. See multicast notes for more information about multicast.

3) OK, you need time to learn about this, but when I tried to set up the cluster servers I was really very tired, without sleep for more than 40 hours. Maybe I did something wrong. But what does this mean? Do the servers need to be in the same location? "When you have one server in the EU and another server in the USA, what is the best way to do migrations?" I asked myself that question and thought: well, it will work, maybe slowly, but it will work. (That's what I thought at the time.)

4) My server with the VMs has its SSH port changed for security reasons (whoever uses CentOS always changes the SSH port, it's a security habit). When I used the cluster commands I forgot that this server had the SSH port changed, and maybe that is the problem. Another thing: when I ran the commands I got some errors like "authorization key already exists" (something like that), and then I saw somewhere that I needed to use the --force option, and I did.

When I finished the commands and logged into the management panel, I didn't see any VMs, and my local storage showed 688978946987.78 TB with 100% in use.
Anyway, those 4 VMs are working now without losing any data.

=P
 
:) Don't hate Debian, what you need is just some experience... I rarely use Fedora/Red Hat systems, and I would surely panic there when nothing works the way it does on Debian-based systems...

To lower my fears about cluster operations, I recently used a nested test PVE cluster; I also wrote a wiki article about this: http://pve.proxmox.com/wiki/Nested_Virtualization

Maybe it will help you practice without (or with less) fear :)

Marco

Yes, at this point I don't need any commands beyond basic Linux, so it's not a problem =)
It's just some issues with the network config and some different configuration files, but that's not a problem. It's only a matter of habit.
 
Add a node:
hp2# pvecm add IP-ADDRESS-CLUSTER

2) From which server do I run this? On the new server or on the old server that has the VMs?

On the server that is not yet part of the PVE cluster, pointing at the IP of a host that is already in the PVE cluster.
But the format is:

To create a PVE cluster:
pvecm create [a new name for the cluster]

And you can have a PVE cluster of a single node now if you want to add more PVE nodes later.

Only in this case (a PVE cluster of a single node) do you never lose quorum, and you always have full control of your PVE node, so you do not need to do any extra manual work.

To add a single host to an existing "PVE Cluster":
On that host you should run:
pvecm add [IP address of a host that is already in the existing PVE cluster]

But if your PVE hosts are not in the same LAN, I believe the unicast configuration should be set up first; please see this link:
http://pve.proxmox.com/wiki/Multicast_notes

And you should know how to configure a cluster of only two PVE nodes. Please see the wiki section titled "Two nodes cluster and quorum issues":
http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
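For what it's worth, on the cman-based stack of PVE 2.x/3.x that wiki section comes down to marking the cluster as a two-node cluster in /etc/pve/cluster.conf; a rough sketch, not a complete file:
Code:
# the <cman> tag in /etc/pve/cluster.conf ends up looking roughly like
#   <cman two_node="1" expected_votes="1"> ... </cman>
grep "<cman" /etc/pve/cluster.conf   # check what is currently configured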


"When you have one server on EU and another server on USA, What is the best way to do the migrations?" I did that question to me, and think, well that will works, maybe slow, but will works. (That's what i think on this time).

In this case you must use unicast and not multicast, because multicast is only meant for use within a LAN; please see this link about multicast:
http://pve.proxmox.com/wiki/Multicast_notes
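On the same cman-based stack, the unicast variant described on that page boils down to a transport attribute on the <cman> tag; a sketch (omping is the usual tool to test whether multicast works at all, and must be run on all nodes at the same time):
Code:
# unicast UDP instead of multicast, roughly:
#   <cman ... transport="udpu"> ... </cman>
# test multicast between two nodes (run on both; IPs are examples):
omping 192.168.1.10 192.168.1.11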

But a long time ago I asked the "PVE team" about the minimum bandwidth necessary for PVE cluster communication and got no response; you need that answer to know whether your "PVE Cluster" will work well over your WAN link. My personal recommendation is: don't use a "PVE Cluster" outside of a LAN.

And migrating VMs over a WAN is a serious problem, because it requires shared storage for the VM images (the same applies to DRBD), and since this storage must be accessible to the PVE nodes, I know that in your case the migration will not be possible, because these requirements would have to be configured and working correctly beforehand.

When I finished the commands and logged into the management panel, I didn't see any VMs, and my local storage showed 688978946987.78 TB with 100% in use.
Anyway, those 4 VMs are working now without losing any data.

If your storage is "local storage", in a clean installation of PVE the images of the VMs (that is, the "virtual hard disks") are in: /var/lib/vz/images/[VM ID]/, as .raw, .qcow2, or some other format supported by qemu.

And the hardware configuration of the VMs is in: /etc/pve/nodes/[name of the node]/qemu-server/[VM ID].conf

Then you can put the "correct hardware configuration" of the VM where it should be!!!
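A quick illustration with a hypothetical VMID 101 on a node called "hp1":
Code:
ls /var/lib/vz/images/101/                      # the disk images, e.g. vm-101-disk-1.raw
cat /etc/pve/nodes/hp1/qemu-server/101.conf     # the VM's hardware configuration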

But remember that you need one of these options enabled or configured in the PVE cluster (so you do not lose quorum and control of your whole PVE cluster):

A) a minimum of 3 PVE nodes, or
B) a minimum of 2 PVE nodes with the dedicated two-node configuration, or
C) a minimum of 2 PVE nodes + a third node with a quorum disk.

I think that with these "PVE Cluster" requirements it will be difficult for you to apply this over a WAN.
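Whichever option is used, the quorum state can be checked on any node:
Code:
pvecm status    # shows votes, expected votes and whether the cluster is quorate
pvecm nodes     # lists the cluster members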

This post was re-edited; FcbInfo, please read it again.
 
Add a node:
hp2# pvecm add IP-ADDRESS-CLUSTER

2) From which server do I run this? On the new server or on the old server that has the VMs?

This must be done from the server which you want to add to the running cluster. Maybe this is where you went wrong?

Step 1) pvecm create YOUR-CLUSTER-NAME is done only on the existing/first node!!!
Step 2) pvecm add IP-ADDRESS-OF-EXISTING-NODE-ALREADY-MEMBER-OF-CLUSTER is done on every node that is not part of the cluster yet.

If you performed step 2 on the node from step 1, you might have replaced your configuration files with a clean sheet ;-)
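Spelled out with hypothetical hostnames and an example IP, the intended order looks like this:
Code:
# on hp1, the existing node that already holds the VMs:
root@hp1:~# pvecm create MYCLUSTER
# on hp2, the new and still empty node (192.168.1.10 being hp1's address here):
root@hp2:~# pvecm add 192.168.1.10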
 
I am always willing to update the wiki, if anyone confirms... this kind of thing confirms my doubts about the need for clearer & more complete cluster documentation...

Note: The pvecm tool refuses to add a node to the cluster if the node already contains VMs (unless you specify --force).
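So in the scenario discussed above, the join would only go through as something like the following, and it overwrites /etc/pve on the joining node (the IP is an example; only do this after backing up the node's configs):
Code:
pvecm add 192.168.1.10 --force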
 
Hi,
with "ps -aux | grep kvm" you see your running VM-processes and can see a lot of info to recreate the VM-config (Nic-MAC-adresses, drives and so on). Simply create an vm-config on the noder where the VM is running (with the right content ;) ) and all is fine.

OpenVZ-CTs is not so easy...

Udo
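A rough sketch of that recovery, with hypothetical values read off the kvm command line of a VM with ID 101:
Code:
ps aux | grep "kvm -id 101"     # read memory, cores, MAC address and disk path from the command line
# then recreate a minimal config (all values below are examples):
cat > /etc/pve/qemu-server/101.conf <<'EOF'
name: recovered-vm
memory: 2048
cores: 2
ostype: l26
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
virtio0: local:101/vm-101-disk-1.raw
EOF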

For FcbInfo:

Udo is a grand master for many people on this forum, even for me. And I think you should follow his advice.
 
