Ok, first, before I get chewed out for doing this: I know what I did was wrong and I will not be doing it again. At least, not by accident.
So what I did was rebuild all of my machines, destroying my old cluster and migrating guests to new hosts as needed so I could reformat and reinstall fresh on the others.
Once I was done, I started creating a new cluster. I went to add my second machine (I have 8 total right now), but it already had VMs running and the join complained.
I checked the documentation and it said something like: make sure you don't have any VMs running... blah blah... something or other... and then something about how it could cause a conflict in IDs.
Well, *all* of my IDs are unique across *all* of my hosts, so I figured that wouldn't be a problem for me. So I -force'd the node into the cluster and then, *BAM*, noticed that *ALL* of my host's configuration files were magically gone (which, in the big scheme of things, makes sense).
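For reference, the join was just the standard add with the force flag tacked on. Roughly this, from memory (the IP is made up, and the exact flag spelling may differ between PVE versions):

    # forcing the node into the cluster despite the running-guests warning
    pvecm add 192.168.1.10 -force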
Why did I do this? Well, to be honest, it's a production environment and I couldn't afford the time to migrate or backup/restore all of the guests (about 18-25 of them) on every server I needed to add to the cluster. As long as my IDs don't conflict, I don't understand why that's necessary...
Anyway... the containers appear to still be running, but I can't do anything with them because pvectl wants the /etc/pve/<id>.conf files to exist first.
I have some older backups of most of the containers; there has to be some way of generating these config files from the backups without actually restoring every one, right? How/where do vzdump and vzrestore keep the container information?
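In case it helps whoever answers: if I remember right, vzdump for OpenVZ stashes a copy of the container config inside the archive itself, so something like this might pull a config back out without a full restore. The archive name is made up, and the internal path ./etc/vzdump/vps.conf is from memory, so please correct me if that's wrong:

    # peek inside the backup to find the stashed config file
    tar -tf vzdump-openvz-101-2013_06_01-00_00_00.tar | grep vzdump
    # extract just that file to stdout and park it where pvectl expects it
    tar -xOf vzdump-openvz-101-2013_06_01-00_00_00.tar ./etc/vzdump/vps.conf > /etc/pve/openvz/101.conf

Or does vzrestore rewrite things like VE_ROOT/VE_PRIVATE on the way back in, meaning I'd have to fix those paths up by hand afterwards?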
Also, on a side note: would this have worked okay if I had backed up my /etc/pve/*.conf files and then restored them to /etc/pve/nodes/<node>/ or wherever after adding the node to the cluster?
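In other words, roughly this sequence, assuming the layout where /etc/pve/openvz is a symlink into the node's own directory (paths from memory, so correct me if the layout is different):

    # before joining: stash the per-container configs somewhere outside /etc/pve
    mkdir -p /root/pve-conf-backup
    cp /etc/pve/openvz/*.conf /root/pve-conf-backup/
    # join the cluster (the step that wipes the local configs)
    pvecm add <cluster-ip>
    # after joining: copy them back under this node's directory in the cluster filesystem
    cp /root/pve-conf-backup/*.conf /etc/pve/nodes/$(hostname)/openvz/

Or would the cluster filesystem reject that for some reason?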
Thanks in advance!
				
 
	 
	 
 
Knowledge gained from mistakes is the best kind, but painful to learn nonetheless.