I have 2 nodes in a cluster:
#node1 - has an available network port, but no free compute resources
#node2 - has available compute resources, but no free network port
I need to connect a dedicated VPN network to the available port on #node1
and link it directly to a KVM VM on #node2...
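One approach that might work here (a minimal sketch, assuming the two nodes can reach each other over e.g. 10.0.0.1 / 10.0.0.2, that the VPN uplink port sits on a bridge vmbr2 on #node1, and that an empty bridge vmbr2 exists on #node2): stretch that bridge between the nodes with a VXLAN tunnel and attach the VM's NIC to vmbr2 on #node2.
on #node1:
# ip link add vxlan42 type vxlan id 42 local 10.0.0.1 remote 10.0.0.2 dstport 4789
# ip link set vxlan42 up
# ip link set vxlan42 master vmbr2
on #node2:
# ip link add vxlan42 type vxlan id 42 local 10.0.0.2 remote 10.0.0.1 dstport 4789
# ip link set vxlan42 up
# ip link set vxlan42 master vmbr2
All names, IDs and addresses above are placeholders, not a tested recipe.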
maybe it's a bit early, but it looks like Proxmox 6 beta1 is here: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.0_beta1
is it possible to add Proxmox 6 nodes to an existing Proxmox 5.x cluster?
we plan to order hardware with 3 network adapters in the enclosure:
2x 10Gbps switches
1x 100Gbps switch
this is the maximum the enclosure takes, so we can't add an additional switch; now:
we are using the 2x 10Gbps for a redundant Proxmox network and Internet connection via link aggregation
and we plan to use the 100Gbps...
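For the link-aggregation part on the 2x 10Gbps side, a minimal sketch of what this could look like in /etc/network/interfaces (the interface names eno1/eno2, the bridge address and the hash policy are assumptions, and an LACP-capable switch configuration is required):
auto bond0
iface bond0 inet manual
    slaves eno1 eno2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0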
I have a cluster with 11 nodes, each node with 2 disks of 2TB SSD
all disks are in a Ceph pool with size = 2
now I want to move to size = 3
is there any downtime / problem we should expect during this change?
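For reference, a minimal sketch of the change itself (the pool name 'ceph-vm' is a placeholder); the extra replica is backfilled in the background, so the main impact is the recovery traffic and the additional space used:
# ceph osd pool set ceph-vm size 3
# ceph osd pool set ceph-vm min_size 2
# ceph -s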
I have a MicroBlade MBI-6219G-T8HX node
I added an additional M.2 drive to this node, a Samsung 970 EVO NVMe M.2 2TB
but I don't see this new drive, neither in fdisk nor in the Disks list in the Proxmox GUI
if I boot from the installation CD, I can see the new drive in the installer's drive list
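A few checks that might narrow this down (a sketch, assuming the running kernel simply isn't seeing the device; nvme-cli may need to be installed first for the last command):
# lspci | grep -i -e nvme -e 'non-volatile'
# dmesg | grep -i nvme
# uname -r
# nvme list
Comparing the running kernel version with the one on the installation CD would show whether this is a kernel/driver difference.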
during the daily backup we rebooted one of the servers
because of that reboot, all other nodes stopped the backup process
INFO: Starting Backup of VM 117 (lxc)
INFO: status = running
ERROR: Backup of VM 117 failed - unable to open file '/etc/pve/nodes/server13nvme/lxc/117.conf.tmp.7676'...
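That error usually means /etc/pve was not writable at that moment (pmxcfs goes read-only when the cluster loses quorum); a few hedged checks on the nodes where the backup failed:
# pvecm status
# systemctl status pve-cluster corosync
# touch /etc/pve/test && rm /etc/pve/test
The last line just verifies that /etc/pve is writable again.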
I have a Ceph cluster working on multiple nodes with the 10.10.10.0/24 network
now I added new nodes to the Proxmox cluster, but these nodes had no access to 10.10.10.0/24
and I ran: pveceph init --network 10.10.10.0/24
on these nodes; now, after I saw there was no connection, I added the network...
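As a sanity check (a sketch, not a fix for the truncated part above), the network settings written to the cluster-wide ceph.conf and the node's actual addresses can be compared before creating monitors/OSDs on the new nodes:
# grep -e 'public network' -e 'cluster network' /etc/pve/ceph.conf
# ip -br addr | grep 10.10.10.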
I have a cluster with 4 nodes; when I set a backup job on node1, it backs up only node1
when I go directly to the node2 GUI -> Datacenter -> Backup
I don't see the backup job that was set on node1
any suggestion / solution for that?
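For what it's worth, backup jobs created under Datacenter are stored in a cluster-wide file, so a quick check is whether the job actually shows up there on node2 as well (a sketch):
# cat /etc/pve/vzdump.cron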
I removed a node from the cluster before the node was shut down
now, when I check from another node I see:
# pvecm status
Expected votes: 5
Highest expected: 5
Total votes: 4
but I have only 4...
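If the removed node is simply still being counted, a minimal sketch of the usual cleanup (the node name 'node5' is a placeholder for the node that was removed):
# pvecm delnode node5
# pvecm expected 4
The second command only adjusts the expected vote count temporarily, until the next cluster configuration change recalculates it.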
I want to create multiple LXC containers using linked clones; the problem is that if I set some network / IP settings inside one container, they apply to all the other linked-clone containers
any solution for that?
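One option that might avoid this (a sketch; the VMID, bridge and addresses are placeholders): set the IP on each clone's Proxmox network device instead of inside the guest, so every container carries its own configuration:
# pct set 201 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.201/24,gw=192.168.1.1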
I have 2 issues with LXC container backups:
1. sometimes the backup of a container shows as processing for days without completing, and I need to stop the task manually.
2. some containers get this error:
command 'mount -o ro,noload /dev/rbd5 /mnt/vzsnap0//' failed: exit code...
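For issue 2, a couple of hedged checks for leftovers from a previously aborted snapshot backup (the pool name 'Ceph1', the disk name and the VMID are placeholders):
# rbd showmapped
# rbd -p Ceph1 snap ls vm-117-disk-1 | grep vzsnap
# ls /mnt/vzsnap0/
A stale /dev/rbd mapping, an old vzsnap snapshot or a non-empty temporary mountpoint would be worth cleaning up before the next run.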
I have LXC containers working over Ceph shared storage with the quota=1 option for the disk
now, when I try to run:
# quotacheck -vguma
quotacheck: Scanning /dev/rbd1 [/]
quotacheck: error (1) while opening /dev/rbd1
how can I add quota support?
LXC container OS: CentOS 7.x
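A first check worth doing inside the container (a sketch): the quota tools only work if the root filesystem was actually mounted with quota options, which can be verified like this:
# cat /proc/mounts | grep ' / '
Look for usrquota/grpquota (or usrjquota/grpjquota) on the / entry.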
Hello to the Proxmox team,
first, thanks from the community for your great work!
some ideas / feedback for features in the next releases:
1. more options for LXC
a. option to select the target storage for migration (both CLI and GUI)
b. option to do live migration and not only restart mode (CRIU?)
c. better CPU...
Hello to the community,
when I have very high load on one LXC container, all the containers show the same CPU load
so how can I see, from the node, a list of the containers with the real CPU status of each container,
so I can turn off the container causing the high CPU, or just handle it properly?
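From the node itself, each container's own cgroup accounting can be read, e.g. (VMID 117 is a placeholder):
# pct list
# lxc-info -n 117
The 'CPU use' field of lxc-info is the cumulative CPU time of that container's cgroup, so it should reflect that container only.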
I have Ceph storage; now I want to copy my VM disk from disk-1 to disk-3
rbd -p Ceph1 -m 10.10.10.1 -n client.admin --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring --auth_supported cephx cp vm-110-disk-1 vm-110-disk-3
it shows me the error:
rbd: error opening default pool 'rbd'
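That error suggests the destination image is being resolved against the default 'rbd' pool; a hedged variant that names the pool explicitly on both image specs (same 'Ceph1' pool as above) might behave differently:
# rbd -m 10.10.10.1 -n client.admin --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring cp Ceph1/vm-110-disk-1 Ceph1/vm-110-disk-3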
I have a cluster with a 2x10Gbps network via bond, using LACP
but almost every day, and sometimes even a few times a day, I connect to the GUI and see all servers and nodes marked with a question mark in the server view
after I logged in to server202 and ran:
I am running lsblk from inside one of the containers and I see it shows the list of disks for all the containers under the physical node
how can I prevent the container owner from running that command or viewing this info?