Thanks, I will try that when I can have the server offline later.
However, I have restarted the VM after applying the firewall and it was still blocked, so would that not suggest that it's nothing to do with established connections?
Hi,
I have just enabled the firewall on my Proxmox machine, and when I do, all guests and VMs lose network connectivity. Any idea why this would be? From what I've read, the datacenter/node level firewall should not affect the VMs/CTs? (They all currently have their firewalls turned off.)
Also after...
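For reference, this is roughly what I enabled in /etc/pve/firewall/cluster.fw (the rule lines below are placeholders for illustration, not my exact config):

[OPTIONS]
# turn the datacenter firewall on
enable: 1

[RULES]
# keep the web UI and SSH reachable
IN ACCEPT -p tcp -dport 8006
IN ACCEPT -p tcp -dport 22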
I don't really see why that would be an issue. The ID would stay as-is; it could easily parse the filename to get the VM ID or disk ID and ignore the label - it would just be a simple regex...
I think a lot of people would agree with me that the current naming convention is pretty dangerous when doing...
Is this likely to be implemented any time soon?
Also, I think it should go further than just backup names; it should be the same for the disk image names for containers and VMs.
So instead of just vm-120-disk-0 it would be vm-120-debian7mysql-disk-0 or similar.
Looking at the snapshots in "zfs...
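To illustrate the "simple regex" point, here is a quick sketch in shell (the labelled filename is just the hypothetical format suggested above):

# extract the numeric VM ID whether or not a label is present
for n in vm-120-disk-0 vm-120-debian7mysql-disk-0; do
  echo "$n" | sed -E 's/^vm-([0-9]+)(-[^-]+)?-disk-[0-9]+$/\1/'
done
# both print: 120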
I updated to the latest Dell BIOS and also turned on SR-IOV - still the same issue. I have also tried the pcie=on flag for the PCI device.
It's strange; I have had it working fine on VMware.
I have just read about using the pci-stub module as per here; is that still something that might work...
Seems this is the problem:
failed to setup container for group 16: Failed to set iommu for container: Operation not permitted
Just so you know, I have used passthrough with the same FC card on this machine in VMware ESXi and it worked fine. (Hardware is a Dell R710, so pretty standard I assume.)...
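In case it helps anyone hitting the same thing: that exact "Failed to set iommu for container: Operation not permitted" error often shows up on platforms without interrupt remapping support. I am not certain that is the cause here, but the commonly cited workaround (which does weaken isolation, so understand the risk first) is:

# check whether the kernel reports interrupt remapping support
dmesg | grep -i remapping
# if it is missing, allow VFIO to use unsafe interrupts
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
update-initramfs -u -k all
# then reboot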
After starting the VM it does say this:
05:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA [1077:2432] (rev 03)
Subsystem: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA [1077:0138]
Kernel driver in use: vfio-pci...
Yes, I just copied that from your help.
In my config file (/etc/modprobe.d/vfio-pci.conf) I actually have:
options vfio-pci ids=1077:2432
lspci -nnk
05:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA [1077:2432] (rev 03)
Subsystem: QLogic...
I am trying to get my QLogic Corp. ISP2432-based 4Gb Fibre Channel card passed through to a CentOS 7 guest. I have the PCI passthrough set up and I can see the qla device in CentOS, so the passthrough all seems to be working.
However, when the CentOS guest starts up and initialises the fibre target...
Sorry, only tested in the latest version. From the logs it's authenticating fine, which is good. It's just failing to get the SPICE ticket.
Could it be that the node or VM setting is not correct? They are not used until that stage.
node=pve vm=121.
In my case pve matches up with what is shown in the...
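One way to check both values independently of the launcher (assuming you can run pvesh on the host; the node and VMID below are the ones from this thread) is to request a ticket straight from the API:

# list the exact node names the cluster knows about
pvesh get /nodes
# request a SPICE ticket for VM 121 on node pve
pvesh create /nodes/pve/qemu/121/spiceproxy

If that also fails, the problem is with the node/VMID or permissions rather than the launcher itself.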
New version here:
https://moxhamconsultants.com/proxmox/ProxmoxSpiceLauncher-NoConsole.zip
It now by default does not show the command window, but if you add the argument "debug=on" to the command line it will show it (so you can see any errors if it's failing).
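For example (assuming the exe is named after the zip):

ProxmoxSpiceLauncher-NoConsole.exe debug=on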
cheers
Interestingly, I can also SSH in remotely to the container from another machine.
So it appears just the Proxmox console is broken for it; I have tried VNC/SPICE and xterm - same result.
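For reference, the host-side ways in that I can compare against (using 120 as a placeholder CT ID):

# attach to the container's console from the Proxmox host
pct console 120
# or open a shell directly, bypassing the console/getty entirely
pct enter 120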
Hi,
I have a number of containers and VMs which all work fine.
However, if I create a container using the built-in CentOS 6 template centos-6-default_20161207_amd64.tar.xz, on first install it works fine and I can start/stop the container with no problems.
If I do a yum update and install all...