Where are my VMs? All VMs are running, but not showing in management.

FcbInfo

Renowned Member
Dec 21, 2012
Hi. I don't know what I did wrong while trying to create a cluster, but now none of my VMs show up in the web management, even though they are all running and working without problems.

I'm afraid to restart the server to see whether the VMs come back in the web management.

Another thing I noticed: my local storage shows 68719476735.97 TB of space (lol).

Look at the screenshot.

ss.jpg

If someone can help, I'll be happy!

While looking for the problem, I remembered that I had changed the SSH port for security reasons. I always move port 22 to another port for security.

Thanks!
 
Did you join a node that already had VMs on it to the cluster?

The PVE wiki tells us that a node must not have any VMs on it before it is joined to a cluster.

But how to correct it, I don't know. :-(
 
When you join a node to a cluster, the directory /etc/pve/ is replicated from the cluster to the node, so the node's old directory is overwritten and the VM configs are lost.

So I hope you have a backup of /etc/pve/, or you will need to recreate the VM config files manually.

(And you need to check that there are no VMID conflicts between the old node configs and the cluster.)
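
For reference, a rough sketch of where those config files live, assuming a default PVE install:

Code:
# KVM guest configs, one <vmid>.conf per VM
ls /etc/pve/qemu-server/
# OpenVZ container configs
ls /etc/pve/openvz/
# a recreated config for e.g. VM 102 would be saved as /etc/pve/qemu-server/102.conf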
 
Hi. I don't know what I did wrong while trying to create a cluster, but now none of my VMs show up in the web management, even though they are all running and working without problems. [...]
Hi,
with "ps -aux | grep kvm" you see your running VM-processes and can see a lot of info to recreate the VM-config (Nic-MAC-adresses, drives and so on). Simply create an vm-config on the noder where the VM is running (with the right content ;) ) and all is fine.

OpenVZ CTs are not so easy...

Udo
 
Did you join a node that already had VMs on it to the cluster? [...]

I did read something about not having duplicate VM IDs. Maybe I did something wrong, because I've been working hard: long days with 3 or 4 hours of sleep.

with "ps -aux | grep kvm" you can see your running VM processes and a lot of the info needed to recreate the VM config [...]

Udo

Nice, this will help. Anyway, I think it's better to reinstall Proxmox.

Do you have a backup of these VMs?

Nope. But the VMs are working. I can't lose these VMs; if I do, it will be the end of my life (lol).
I normally always work with backups, I just don't have them right now, because I'm reinstalling the OS on all servers, and that includes the backup server.


Sorry for the late reply; I was tired and got some sleep.


I have only 4 VMs on it, and the VMs (KVM) are online and working.


All hard drive files are raw; maybe it's better to copy these raw files to another server and reinstall Proxmox? It's only 4 KVMs, and I know all the VM configs (in my head). I just need to copy the MAC addresses of the NICs.

=-=-=-=-=-=

Let's do the job... I'm going to try to compress these raw files while the VMs are running. All 4 VMs are CentOS running cPanel.

What is the best way to compress these raw files with light compression and high speed?


Thanks!
 
Using Udo's suggestion could save you from making backups, moving images, etc.

An example:
Code:
root        8199  0.8  1.8 1085964 299628 ?      Sl   Nov09  18:21 /usr/bin/kvm -id 117 -chardev socket,id=qmp,path=/var/run/qemu-server/117.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/117.vnc,x509,password -pidfile /var/run/qemu-server/117.pid -daemonize -name proxy -smp sockets=1,cores=1 -nodefaults -boot menu=on -vga qxl -cpu kvm32,+x2apic,+sep -k da -spice tls-port=61004,addr=127.0.0.1,tls-ciphers=DES-CBC3-SHA,seamless-migration=on -device virtio-serial,id=spice,bus=pci.0,addr=0x9 -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -m 512 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -drive if=none,id=drive-ide0,media=cdrom,aio=native -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200 -drive file=iscsi://172.16.2.2/iqn.2010-09.org.openindiana:vshare/0,if=none,id=drive-virtio0,cache=writeback,aio=native -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap117i0,script=/var/lib/qemu-server/pve-bridge,vhost=on -device virtio-net-pci,mac=1A:1D:E8:65:38:8C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -machine type=pc-i440fx-1.4 -incoming tcp:localhost:60000 -S
To recreate the vm.conf
id: 117
vga: qxl (spice)
cpu: kvm32, 1 core (l26)
drive: virtio0, file=iscsi://172.16.2.2/iqn.2010-09.org.openindiana:vshare/0 (lun0, find name in storage.conf), cache=writeback, boot (for size find the image on the device)
nic: tap (bridge. find number using ifconfig), net0, mac=1A:1D:E8:65:38:8C, virtio (driver)
memory: 512M using balloon driver

converts to:
Code:
bootdisk: virtio0
cores: 1
cpu: kvm32
ide0: none,media=cdrom
memory: 512
name: restored-117
net0: virtio=1A:1D:E8:65:38:8C,bridge=vmbr10
onboot: 1
ostype: l26
sockets: 1
startup: order=any
vga: qxl
virtio0: omnios:vm-117-disk-1,cache=writeback,size=4G
 
Using Udo's suggestion could save you from making backups, moving images, etc.

Thanks for trying to help, mir.
Anyway, I believe it's better to reinstall Proxmox on this server. Look at my local disk space (something like): 6876545644.99 TB.
I'd really rather reinstall everything and be sure it's working 100% without problems.

Anyway, that helps a lot, because I know that when a MAC address changes it's hard to fix inside the VPS, and with this I can get the old MAC address back.
I'm going to restore this server and will never try clustering again.

And for sure I did something wrong, because when I was trying to set up this cluster I had been working up to 40 hours without sleep and was very tired.

I have an 800 GB raw file, but only 30 GB of it is actually in use.
Does anyone know the best way to compress this raw file with light compression and high speed? Gzip? The server has 24 threads.
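
If speed matters more than ratio, maybe something like pigz (a parallel gzip) could use all 24 threads; a rough sketch, assuming pigz is installed and the image can be read consistently (ideally with the VM stopped):

Code:
# fastest gzip level, 24 compression threads, keep the original file
pigz -1 -p 24 -k vm-102-disk-1.raw    # writes vm-102-disk-1.raw.gz
# a mostly-empty raw image can also be re-sparsified before transfer:
# qemu-img convert -O raw vm-102-disk-1.raw vm-102-disk-1-thin.raw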

Thanks =)
 
About the clusters:

I have now read that the nodes need to be on the same network. Does that mean I can only build a cluster when I have a private network?
E.g.: if the servers are in different locations, is clustering not possible?

If you have a VM on your EU servers and want to move it to the USA servers, is it better to do that manually?

Thanks!
 
WOW... I got a surprise now...

In the kvm process command line I see...

-drive file=/var/lib/vz/images/102/vm-102-disk-1.raw

but...

Code:
root@node-nc01:/var/lib/vz# ls -lah
total 0
root@node-nc01:/var/lib/vz#

o_O

The VM is still running... wow... so where is vm-102-disk-1.raw?
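
A deleted file that a running process still holds open can still be reached through /proc; a minimal sketch of how to check (the PID is a placeholder, and lsof is an alternative only if it is installed):

Code:
# find the kvm process for VM 102, then look at its open file descriptors
ps aux | grep 'kvm -id 102'
ls -l /proc/<pid>/fd | grep deleted   # deleted-but-open files show up as "(deleted)"
# or, with lsof:
lsof -p <pid> | grep deleted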
 
Looking into this.

I see now that there are some mounted partitions that are not usual when we install Proxmox:

/dev/fuse on /etc/pve

Omg... really afraid of losing these VMs.

Another thing: I have 2 Windows VM templates. They were hard work to build; maybe I can recover them.
 
Code:
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                 7.1G  404K  7.1G   1% /run
/dev/mapper/pve-root   60G  1.1G   56G   2% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  15G   47M   15G   1% /run/shm
/dev/mapper/pve-data   64Z   64Z  1.2T 100% /var/lib/vz
/dev/sda1             495M   35M  435M   8% /boot
/dev/fuse              30M   12K   30M   1% /etc/pve


No space left to write files on /var/lib/vz

64Z ?
 
Code:
scp /proc/50894/fd/14 root@192.XXX.XXX.XXX:/var/lib/vz/images/vm-107-disk-1.raw

Let's see if this helps.

Transferring the file to the other server.

Something like 800 GB to transfer.
 
Well, that works.
I'll update the post when I finish; if anyone else runs into a problem like this, maybe it will help.

The first VM is running and working on another server.
 
