I have a converged system consisting of just one node. My version of Proxmox is 5.4-13 with Ceph Luminous.
I have probably made the fatal mistake of changing the IP directly on the host. I also made sure that the Ceph cluster and public network IPs were changed. But no matter what I do, I...
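(For anyone hitting the same thing, a sketch of the places that usually have to agree after an IP change on a converged node; 192.0.2.10 below is a stand-in for the old address, and note that moving a Ceph monitor's IP additionally requires a monmap change, not just an edit of ceph.conf:)

# check that no config still references the old address
grep -H "192.0.2.10" /etc/network/interfaces /etc/hosts /etc/pve/corosync.conf /etc/pve/ceph.conf
# restart the cluster stack so the new address is picked up
systemctl restart pve-cluster corosync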
I want to add a new node to my cluster. The cluster already has nine nodes, all of which were added without issue. The node I want to add is a fresh installation, and I have checked that all the pve services are running correctly.
When I log onto node 1 and run:
pvecm add 10.10.10.10
I get...
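(Side note, in case it helps: pvecm add is meant to be run on the node that is joining, with the address of an existing cluster member as its argument, so 10.10.10.10 should be one of the nine existing nodes, not the fresh one. A minimal sketch:)

# on the fresh node that should join:
pvecm add 10.10.10.10
# afterwards, verify from any member:
pvecm status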
I have a swap disk and I can see it mounted in fstab. But in the Proxmox host dashboard summary it does not show the swap size, only N/A.
Is this normal, or have I missed something?
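(A quick sanity check, since the dashboard reports what the kernel sees rather than what fstab declares; if swapon --show prints nothing, the swap was never activated:)

swapon --show
free -h
# activate everything marked as swap in fstab
swapon -a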
Hi,
I added a new host to my cluster, which went fine with no errors. The OS is Debian 9 stable.
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
corosync: 2.4.4-pve1
criu...
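(That list is what pveversion -v prints; running the same command on the existing members and comparing the output is a quick way to confirm the new host matches the cluster's package versions:)

pveversion -v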
Hi there,
I am trying to migrate a VM template from one cluster to another. I wanted to just scp the conf file, but it will not let me write in the qemu-server folder. Any other VM conf file is no problem, but the templates are a big problem for me. Any ideas?
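(One workaround sketch via backup and restore, which recreates the conf on the target for you; 100 is a placeholder VMID, and /mnt/backup and target-node are hypothetical names. If the template flag does not survive the restore, qm template re-marks the guest as a template:)

vzdump 100 --dumpdir /mnt/backup --compress lzo
scp /mnt/backup/vzdump-qemu-100-*.vma.lzo root@target-node:/tmp/
# then, on the target node:
qmrestore /tmp/vzdump-qemu-100-*.vma.lzo 100
qm template 100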
Hi,
I have two Ceph clusters and two Proxmox clusters.
The clusters are as follows:
One:
A Ceph cluster on version Hammer, running separately from the Proxmox cluster. The storage.cfg contains the config for the pools that should be added to the Proxmox cluster as storage.
Two:
Ceph...
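(For reference, an external-RBD entry in /etc/pve/storage.cfg normally has the shape below; the storage ID ext-ceph, the pool name, and the monitor addresses are placeholders for your setup, and the matching keyring is expected at /etc/pve/priv/ceph/<storage-id>.keyring:)

rbd: ext-ceph
        pool rbd
        monhost 192.168.1.10 192.168.1.11 192.168.1.12
        content images
        username admin
        krbd 0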
Hi there,
I have two Proxmox clusters, each with its own Ceph cluster attached. I want to move VMs one by one from one cluster to the other. I understand this can normally be achieved with a backup and restore to and from a mounted disk, but what if I do not have enough space on the backup...
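(One sketch that avoids intermediate storage entirely, assuming root SSH access between the clusters: vzdump can stream the archive to stdout and qmrestore can read it from stdin, so the dump is piped straight into the restore. 100 is a placeholder VMID, and target-node and target-store are hypothetical names:)

vzdump 100 --stdout | ssh root@target-node "qmrestore - 100 --storage target-store"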