Add node that contains VMs to cluster

What about if I
- back up the VM first and delete it afterwards, also checking "Purge from job configurations" and "Destroy unreferenced disks owned by guest"
- join the node to the cluster and restore the VM afterwards?

What if the backed-up VM on node B has the same ID as another VM on node A? Would a workaround be to create a backup of the guest (vzdump) and restore it under a different ID after the node has been added to the cluster?

PS: Is this info valid, that the storage on the node to be added will be wiped? Why is that? So the node not only has to have no VMs, but also no storage configured at all?
I believe you just described the Proxmox-recommended way of doing it. Yes - you can back up and then restore to a different ID. You can also clone the current VM; that way you get a new clone with a new ID and can delete the old one (if you have the storage space). As a last resort you can edit the .conf files to new IDs and also rename the storage disks (I have done this a few times, but it needs good attention to detail).
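
For the backup-and-restore route, a rough sketch of the commands would look like this; the IDs (100 old, 200 new), the storage names and the archive path are placeholders for illustration, not taken from your setup:

Bash:
# on the standalone node, before joining: back up the guest (placeholder ID 100)
vzdump 100 --storage local --mode stop --compress zstd
# after the node has joined the cluster: restore the archive under a new, free ID (placeholder 200)
# the actual archive name contains the backup timestamp
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 200 --storage local-lvm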
 
Under /etc/pve/qemu-server there are the configs for the VMs. If you want to change the ID you can rename the files.

Bash:
root@phoenix:/etc/pve/qemu-server# ls -ll
total 25
-rw-r----- 1 root www-data 392 Feb 22 01:39 1001.conf
-rw-r----- 1 root www-data 445 Feb 22 01:39 1002.conf
-rw-r----- 1 root www-data 491 Feb 22 01:45 1003.conf
-rw-r----- 1 root www-data 412 Feb 22 01:46 1005.conf
-rw-r----- 1 root www-data 385 Feb 22 01:46 1015.conf
-rw-r----- 1 root www-data 539 Feb 22 01:48 1017.conf
-rw-r----- 1 root www-data 419 Feb 22 01:05 103.conf
-rw-r----- 1 root www-data 364 Dec 23 01:09 105.conf
-rw-r----- 1 root www-data 493 Feb  4 06:23 106.conf
-rw-r----- 1 root www-data 323 Feb 22 01:07 107.conf

Inside the files:

INI:
root@phoenix:/etc/pve/qemu-server# cat 1001.conf
agent: 1,fstrim_cloned_disks=1
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
keyboard: sv
memory: 4096
name: myserver
net0: virtio=5A:44:1D:B9:32:EB,bridge=vmbr2
numa: 0
onboot: 1
ostype: l26
scsi0: zf_samsung:vm-1001-disk-0,discard=on,size=12G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=3d6e3bc9-3d6a-4e6c-8897-be7e7bb22da9
sockets: 1
vmgenid: 212a09c1-979c-42f5-a1f5-e808356f2dc5

scsi0 holds the storage name and the volume. So depending on whether it's ZFS or LVM or whatever, you can rename it in the config, and you need to rename it in the real storage as well, since the volume name contains the VM ID. It still works without renaming, but you could clash on IDs later on migration etc.
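
If you are not sure which backend a given storage uses (ZFS pool, LVM-thin, directory, ...), you can check from the shell; nothing here is specific to the example above:

Bash:
# list all configured storages with their type and status
pvesm status
# or look at the storage definitions directly
cat /etc/pve/storage.cfg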

So, to recap:
1) rename the *.conf file to whatever ID you like
2) change scsi0 (or whatever storage line) in that new *.conf file
3) rename the LVM/ZFS volume/dataset

I have done it on really big storage where it's senseless to copy/backup/clone 50 TB of data, etc.
But if you are asking like this, it's better for you not to go down this path if it's not absolutely necessary.
 
Under /etc/pve/qemu-server there are the configs for the VMs.
Well-known path; I didn't know, though, that only one step is required to rename the ID of a VM. I haven't had to do it up until now.

2) change scsi0 (or whatever storage line) in that new *.conf file
Why would I want to change the storage name, like for instance scsi0 to scsi1 or something? This name is within the VM ID. I can't see the reason for this to change, even on the same node. The first storage for the VM is scsi0 or scsi1 like always. Probably you mean something else.
3) rename the LVM/ZFS volume/dataset
...Same here. I can't get why and how I would change that. Give an example, don't generalize it.

But if you are asking like this, it's better for you not to go down this path if it's not absolutely necessary.
I get why you are saying this. Probably you are one of the lucky ones where the IT department consists of several people, or you only have to supervise a few. In my case I have 80 people under me and the IT department consists of the following number of people... hm... me. That's it.
- Consulting
- Rack installation
- Cable installation/management
- Monitoring
- Server installation (WinServ 2012-2019)
- Active Directory (rules, groups, users, printers, etc.)
- SQL deployment
- Router, switch and access point configuration
- CRM user guidance
- ERP user guidance
Shall I continue? So the fact that I have a million things on my head (which I have documented in hundreds of pages, a process that exhausts me daily) doesn't make me naïve for asking plain questions like the one above. I just can't keep all the info anymore and need answers without having to reinvent the wheel from scratch. Simple as that. Proxmox isn't 10-20 things you need to know and you're OK; it's hundreds of them, and it's difficult to keep track with so many more running in parallel.
 
I think you are totally overthinking this, or not thinking it through.
I did not say to rename scsi0. I said to rename the scsi0 storage volume name, since it contains your VM ID.

So, like in my case, I have a machine with ID 1001 and would like it to be 5001. So:
1) rename 1001.conf => 5001.conf
2) edit the config:
INI:
scsi0: zf_samsung:vm-1001-disk-0,discard=on,size=12G,ssd=1
to:
INI:
scsi0: zf_samsung:vm-5001-disk-0,discard=on,size=12G,ssd=1
3) rename dataset (I use zfs):
Bash:
zfs rename zf_samsung/vm-1001-disk-0 zf_samsung/vm-5001-disk-0

Congrats!
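
For completeness, if that disk lived on LVM-thin storage instead of ZFS, step 3 would be a logical-volume rename; the volume group name "pve" below is only an assumption, check the real names first:

Bash:
# find the actual volume group and logical volume names
vgs
lvs
# rename the LV that backs the disk (assuming the VG is called pve)
lvrename pve vm-1001-disk-0 vm-5001-disk-0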

What does supervising anybody have to do with anything? The only one I'm supervising today is a user called ieronymous.
I'm a freelance sysadmin and service provider who likes Proxmox and the idea of open-source projects like this. There is no need for such a rant.
 
I'm also experiencing an issue with this. My nodes are physically separated, and the management network where the nodes need to be joined is aggregated into my SD-WAN fabric by an enterprise firewall VM that runs on the node. I really want to take advantage of getting these into a cluster, but I can't join them together without that VM running. Has anyone found a workaround to this yet?
 
I'm also experiencing an issue with this. My nodes are physically separated, and the management network where the nodes need to be joined is aggregated into my SD-WAN fabric by an enterprise firewall VM that runs on the node. I really want to take advantage of getting these into a cluster, but I can't join them together without that VM running. Has anyone found a workaround to this yet?
Unfortunately I couldn't find a reasonable workaround, so I ended up having to deploy a temporary host into each colocation that housed the firewall, rebuild the permanent node, join the cluster, and then rebuild the firewall on the permanent node. This was a fair amount of additional overhead (cost and effort). I'm hoping somebody can come up with a better way to do this in the future.
 
