Thanks for the response - I don't see any logs
pvesh delete /nodes/<nodename>/qemu/<vmid> shows nothing - they just hang
When I stop the destroy task (it just spins and never ends) and then try to delete again,
it logs this:
disk image '/var/lib/vz/images/33058/vm-33058-disk-0.qcow2' does...
We have been using Proxmox for years and have a 3-node cluster that had been healthy for a while. Recently we noticed that VMs can't be deleted (destroyed) by any means (GUI, CLI, or the API).
We don't see any storage issues that would point to a file-locking problem. The VM disk...
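For reference, this is roughly how I checked for stale locks (using VMID 33058 from the log above; paths per the standard qemu-server layout):

# check whether a lock is set in the VM config
qm config 33058 | grep lock

# clear a stale lock left behind by the hung destroy task
qm unlock 33058

# qemu-server also takes a flock on this file during destroy -
# see whether any process still holds it
fuser -v /var/lock/qemu-server/lock-33058.conf

# then retry the destroy
qm destroy 33058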
Hello,
This has been asked several times, but after extensive searching there is still no real answer that makes sense, nor a formula I could follow to assign memory - sometimes it works and sometimes it doesn't (guessing!).
Up to 48GB works fine after adding this workaround "options...
Basically I have been trying and spending hours on this setup and can't get it to work. Any help here would be appreciated.
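For context, and assuming this is hugepage-related (which I'm not sure of), this is the kind of setup I've been experimenting with - values are examples only:

# host side: reserve 48 x 1G hugepages at boot
# (append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then update-grub and reboot)
default_hugepagesz=1G hugepagesz=1G hugepages=48

# VM side: tell Proxmox to back the guest with 1G hugepages
qm set <vmid> -hugepages 1024

# verify the pool after the VM starts
grep -i huge /proc/meminfo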
Note: my host management network is on VLAN 2 - the switch ports facing the hosts are trunk ports... without native VLANs.
The problem is that I want a VM with access to either a trunk or specific...
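For reference, the direction I've been trying is a VLAN-aware bridge with the management IP on a VLAN 2 sub-interface (ifupdown2 syntax; addresses are placeholders):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management IP tagged on VLAN 2, matching the trunk
auto vmbr0.2
iface vmbr0.2 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1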
Can the same interface be used in multiple bonds?
For example:

auto eth0.3
iface eth0.3 inet manual

auto eth1.3
iface eth1.3 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0.3 eth1.3

auto bond1
iface bond1 inet manual
    bond-slaves eth1.3
If yes, do I need to repeat the VLAN...
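For comparison, the layout I keep seeing in examples is the other way around - one bond over the physical NICs, with the VLAN on top of the bond (untested sketch; the bond mode depends on the switch side):

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100

# VLAN 3 rides on the bond instead of bonding eth0.3/eth1.3
auto bond0.3
iface bond0.3 inet manual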
eth1 is the backup router link, so eth0 should be the primary. Also, when we connect the CSR on bridge vmbr0, SW2 learns the CSR's MAC address over the EtherChannel and not on the directly connected layer-2 port.
On ESXi, by comparison, we are able to create as many port groups as we need...
The CSR doesn't need redundancy - I have another router on the other switch for that purpose - but having it on a bond would route traffic over the EtherChannel.
I tried ifupdown2 and the network didn't come up, so I uninstalled it.
What is the proper configuration for my setup?
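For what it's worth, what I have in mind is roughly this (active-backup so eth0/SW1 stays primary, with the CSR attached to vmbr0; untested):

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-primary eth0
    bond-miimon 100

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0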
Hello, I have been struggling for a while to get this configuration to work. I have searched the forums and compared against existing configs multiple times, without luck.
2 x Cisco switches with an EtherChannel trunk for HSRP.
Proxmox hosts connected to port 5 on each switch: eth0 to SW1 and eth1 to SW2...
Thanks for the quick response - is there any way to configure this without making changes to the /etc/network/interfaces file? I would like to use one bridge for all guests in the interfaces file, then handle the rest over OVS.
My manual flow is like this:
ovs-vsctl add-br sw
ip link set sw up
ovs-vsctl...
For KVM machine creation the Proxmox API works, but what about creating the OVS elements such as bridges (ovs-vsctl add-br)? Is there an API way to do that, or any recommendation on how to automate it over an API?
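The closest thing I've found so far is the node network API, which appears to accept OVS types - something like this (parameter names taken from the API viewer; I haven't confirmed it covers everything ovs-vsctl can do):

# create an OVS bridge on the node
pvesh create /nodes/<nodename>/network -iface vmbr1 -type OVSBridge

# attach a physical uplink to it as an OVS port
pvesh create /nodes/<nodename>/network -iface eth2 -type OVSPort -ovs_bridge vmbr1

# apply the pending network changes
pvesh set /nodes/<nodename>/network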
Thanks in advance
Is there any way to manage multiple hosts (not part of a cluster) from one interface - free or paid?
Similar to vCenter, in terms of the ability to monitor/access all hosts in a single-pane-of-glass interface.
Thanks in advance.
Thanks again for the continued help... How can I take advantage of the VLAN tag setup at the VM level, from the Hardware menu -> add VLAN tag?
A valid example of that working would help... where I set the tag at the VM level rather than presetting the VLANs in the host interfaces file.
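To be concrete, this is the CLI equivalent of the GUI field I mean (VMID and bridge name are examples):

# attach net0 to vmbr0 and tag the guest's traffic with VLAN 100
qm set <vmid> -net0 virtio,bridge=vmbr0,tag=100

As I understand it, the uplink behind vmbr0 still needs to be a trunk carrying VLAN 100, but there would be no per-VLAN stanza in the interfaces file.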
Thanks
Thanks for the prompt reply...
I know your example works because that's how I have the LAN VLAN assigned on the first NIC (eth0) over the main trunk link.
My issue is:
I would like to take advantage of the VLAN tag setup at the VM level, from the Hardware menu -> add VLAN tag...
The scenario would be...
Ok, I have made some changes based on your recommendations...
1- Changed from a VMware virtual machine to a physical host - latest version.
2- Connected the host with 2 interfaces... eth0 and eth1 to a Cisco switch.
3- Configured trunk ports on both switch ports.
4- Configured a VLAN on eth0 and added...
Hello everyone... This is my first post... I'm actually asking about the same thing as a previous post...
I have searched the forum for every related thread and couldn't find an answer.
I have 3 nodes in a cluster... physical hosts... I have:
VM1 on Node1 VLAN ID 100
VM2 on Node2 VLAN ID 200...
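In config terms, the setup I'm describing would look like this (VMIDs are examples; assuming the same vmbr0 bridge name exists on both nodes):

# on Node1: VM1 tagged with VLAN 100
qm set 101 -net0 virtio,bridge=vmbr0,tag=100

# on Node2: VM2 tagged with VLAN 200
qm set 102 -net0 virtio,bridge=vmbr0,tag=200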