Over the last few days, my latest Proxmox installations on AMD CPUs have had a VM freezing issue.
The CPUs in my servers are of two models:
1. AMD Ryzen 9 3900 12-Core Processor
2. AMD EPYC 7401P 24-Core Processor
On both I can see Windows VMs freeze after a few hours, and stopping/starting them will temporarily...
As far as I have tested with more than a few servers, whenever I install Proxmox on a Hetzner server (for example the AX or PX lines), the server goes down, and when I check it over remote KVM the screen is blank.
The only temporary solution is to reboot the server, after which it comes up fine.
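In case it helps others with the same symptom: a workaround often suggested for idle freezes on Ryzen/EPYC is to limit deep C-states via the kernel command line. This is an assumption about the cause, not a confirmed fix, and the exact flags may need tuning:

```shell
# /etc/default/grub -- assumes the freezes are the known AMD deep-C-state idle issue
GRUB_CMDLINE_LINUX_DEFAULT="quiet idle=nomwait processor.max_cstate=5"
# apply with: update-grub && reboot
```

Updating the CPU microcode (the amd64-microcode package) and the mainboard BIOS is also commonly recommended alongside this.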
For the purpose of fast provisioning, we need to be able to clone a base LVM volume that has a raw OS image on it.
Is that possible in a shared-storage environment, as the link below describes? (Of course we have a SAN storage device.)
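To illustrate what I mean, a minimal sketch of cloning a raw base LV (the VG name pve, the LV names, and the size are placeholders; on a shared SAN the LV must only ever be active on one node at a time):

```shell
# Create a target LV the same size as the base, then copy the raw image block-for-block
lvcreate -L 32G -n vm-102-disk-0 pve
dd if=/dev/pve/base-9000-disk-0 of=/dev/pve/vm-102-disk-0 bs=4M status=progress
```

On LVM-thin, making the base VM a template and using `qm clone` would give thin snapshots instead of a full copy, but that's a different storage type.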
For better Ceph performance, I need to disable all kernel protections for CPU vulnerabilities.
Is there any guide on how to tell the kernel in Proxmox to do so?
The servers are used only for Ceph, so I have no security concerns about disabling the CPU-vulnerability mitigations.
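For reference, on kernels 5.2 and later (Proxmox VE 6+) all of these mitigations can be switched off with a single boot parameter; on older kernels the flags have to be listed individually (e.g. nopti nospectre_v1 nospectre_v2 spec_store_bypass_disable=off):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# apply and verify after reboot:
#   update-grub && reboot
#   grep . /sys/devices/system/cpu/vulnerabilities/*
```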
I set up an OVS switch on 10 nodes which are interconnected via GRE tunnels, but Proxmox is not letting us create more than 4094 VLANs, because tag IDs larger than 4094 fail validation in the Proxmox interface and API.
Is there any restriction in the GRE overlay that prevents us from having more VLANs?
In the Proxmox documentation I can see it's advised to set rstp_designated_path_cost for physical ports, but since I want to create a mesh network with Open vSwitch containing 20 nodes, all connected to each other via GRE, is it necessary to set a path cost for the GRE interfaces too?
I've tried the command below, but it...
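(For reference, the usual way to set an RSTP path cost on an OVS port, whether physical or GRE, is via other_config; the port name gre1 and the cost value here are placeholders, not my actual command:)

```shell
# Set the RSTP path cost on the OVS port carrying the GRE tunnel
ovs-vsctl set Port gre1 other_config:rstp-path-cost=150
```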
I wonder if there's any exporter that gathers guest usage information from inside the VMs via the Guest Agent and exports it to Prometheus?
If not, is it reasonable to use the Guest Agent for gathering VM resource usage?
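To show what I have in mind, the Guest Agent can be queried per VM from the host; a couple of the available queries (VMID 101 is a placeholder, and the guest must be running qemu-guest-agent):

```shell
qm agent 101 get-fsinfo     # per-filesystem usage, JSON output
qm agent 101 get-osinfo     # guest OS details (needs a reasonably recent agent)
```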
I've used a simple iptables rule to test an idea, but I can see it's not matching any packets on the tap interface.
Is that normal?
root@node01:~# iptables -I FORWARD -i tap101i0
root@node01:~# iptables -L FORWARD -v -n
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target...
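One explanation consistent with this output: bridged tap traffic is switched at layer 2, so it only traverses the FORWARD chain when bridge netfilter is enabled, and the tap device is then matched as a bridge port via physdev rather than with -i. A sketch, using the same tap101i0 as above:

```shell
# Make bridged traffic visible to iptables, then match the bridge port
sysctl -w net.bridge.bridge-nf-call-iptables=1
iptables -I FORWARD -m physdev --physdev-in tap101i0 -j ACCEPT
```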
I have a cluster of 40 Proxmox nodes and I want to start offering private networks to my clients.
Each of my clients may have a few VMs on different nodes, and it's not static: they may add or remove VMs.
My question is: if I want to set up OVS-based private networking, is it necessary to add GRE for each two...
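To illustrate what adding GRE between nodes looks like: a full mesh needs one tunnel port per remote node on every node (the bridge name, port name, and IP below are placeholders):

```shell
# On node01: one GRE port towards node02; repeat for every other node in the mesh
ovs-vsctl add-port vmbr1 gre-node02 -- \
    set Interface gre-node02 type=gre options:remote_ip=10.0.0.2
```

With 40 nodes that would be 39 GRE ports per node, which is why I'm asking whether this is really required.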
I've built a cloned VM using a linked clone from the Proxmox interface. But consider that when I want to rebuild that VM, I will need to remove its primary disk and add another one based on a different base image (for example, changing the VM's OS from Ubuntu to CentOS).
I'm connecting two Proxmox servers with ConnectX-3 cards. They can see each other in Ethernet mode, but in IB mode the link remains in the Initializing state despite the physical link being up.
Also, in the opensm logs I can see that it can't detect the IB ports even though they are active, and both cards are in the...
I have an SSD-based Ceph cluster with 3 nodes. Read IOPS is about 250K with 96 parallel fio jobs (running from 3 different nodes); those results are fine.
But write performance is no more than 2K IOPS when more than one parallel job is running. (With only one job I can reach 15K write...
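For context, a sketch of the kind of fio job that produces numbers like these, via fio's rbd engine (the pool, image, client name, and parameters are illustrative assumptions, not my exact command):

```shell
fio --name=randwrite --ioengine=rbd --clientname=admin --pool=ssdpool --rbdname=bench \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
    --direct=1 --group_reporting
```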
I'm going to use a Mellanox ConnectX-3 with Proxmox 5.4, but the kernel is not detecting it, even though it is detected fine by Windows on the same server.
Any help on how to make it work?
lspci | grep Mellanox
returns an empty result
proxmox-ve: 5.4-1 (running kernel...
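A few checks that may help narrow this down (15b3 is the Mellanox PCI vendor ID; if lspci shows nothing at all, the device isn't visible on the PCIe bus, so it's not a driver problem):

```shell
lspci -nn | grep -i -e mellanox -e 15b3   # -nn prints vendor:device IDs
dmesg | grep -i -e mlx4 -e mellanox       # any driver/firmware messages at boot
```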
I had a two-node Ceph cluster that was working fine, but after adding a new node to the Ceph cluster (and therefore joining it to the Proxmox cluster), one of my previous monitors went into unknown status even though it's running.
The new monitor is in unknown status too.
Please check the attached picture...
I just tried to create a VM on Ceph via the GUI, but it timed out on RBD. After that I tried creating the VM via the command line, and that seems to be OK. So I wonder: is there any way to find the command the GUI runs to create a VM on Ceph?
TASK ERROR: unable to create VM 102 - error with cfs...
I'm planning a Proxmox cluster consisting of 3 nodes for compute and 3 different nodes for Ceph with SSDs.
HA is important for me, but I'm concerned about the possibility of the HA controller starting an already-running VM on a different node in case of a network issue.
Recently I can see that the latest Proxmox version creates disk files starting from index 0, while previously it started from 1.
Is there any way to ask it to create disks starting from number 1?
The command I'm using for creating new VMs via SSH is:
/usr/sbin/qm create 100 --name s100 --net0...