Has anyone tried to get a Palo Alto FW VMx00/PanOS 7.x working on Proxmox/KVM? My netadmin claims it fails to find a UUID+CPUID to generate the unique serial number used for licensing. But on other VMs running Linux I can see the assigned VM SMBIOS UUID with dmidecode.
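For what it's worth, on the Proxmox side you can at least pin an explicit SMBIOS UUID on the VM and confirm it from a Linux guest; a minimal sketch, assuming VM ID 102 as an example:

# set an explicit SMBIOS system UUID on the VM (102 is just an example ID)
qm set 102 --smbios1 uuid=$(uuidgen)
# inside a Linux guest this should then show the same UUID
dmidecode -s system-uuid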
Palo Alto support was...
My mistake, it is v2.3.2-2 ;) and the network config was/is okay; I never set the network config from the GUI, only manually. A reboot fixed the issue.
Yes, this was the node that seemed to have lost its subscription key, but now that's also fixed. Hopefully this won't happen again...
This morning I found that one of our hypervisor nodes had reverted openvswitch 2.3.2-1 to the default MTU of 1500 instead of the configured 9000. The underlying NICs still had 9000 for their MTUs, but the openvswitch bridge vmbr1 only had 1500 and I couldn't configure it from the CLI with: ifconfig vmbr1 mtu 9000...
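In case it helps, a sketch of how the MTU can be pinned for an OVS bridge in /etc/network/interfaces (the address and ovs_ports are placeholders from my setup, adjust to yours):

allow-ovs vmbr1
iface vmbr1 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        ovs_type OVSBridge
        ovs_ports bond1
        mtu 9000
        # the OVS ifupdown scripts don't always honour the mtu line,
        # so force it again once the bridge interface exists
        post-up ip link set dev vmbr1 mtu 9000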
I only tried stopping the WD, mistaking the mux for the server side of the WD device, as the first box got an NMI whilst rebuilding the grub config. Two-step patching in future should spare me NMIs, I believe.
Right, okay, I thought the mux was the server side of the WD device :)
I'll assume dist-upgrade took down the WD mux before patching and held it down while rebuilding grub, only it took more than the default 60 seconds the WD is configured for, hence the NMI. In future I'll run kernel patching...
I got hpwdt blacklisted last week, but then when I patched to 4.1 a few days ago by running 'apt-get dist-upgrade' it also pulled in a new 4.2.6 kernel alongside the new PVE versions, and while the server was running the grub update (which seems to take a long time, i.e. minutes, possibly probing all...
With hpwdt blacklisted it seems to use the SW WD (might be a good idea to blacklist it by default, as suggested elsewhere here); we'll see how this works out for us...
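For reference, the blacklisting itself is just a modprobe.d entry plus an initramfs rebuild; a minimal sketch (the file name is my own choice):

echo "blacklist hpwdt" > /etc/modprobe.d/blacklist-hpwdt.conf
update-initramfs -u -k all
# after the next reboot, check which watchdog module is actually loaded
lsmod | grep -E 'hpwdt|softdog'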
We currently have similar issues after a network failure on one of our 7 nodes today, which caused brief network issues on various nodes, though they recovered from them. This also caused brief issues with our iSCSI SAN :(
We've got two corosync rings, each on their own bonded NIC across dual hw...
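For context, the totem section of such a two-ring setup looks roughly like this (a sketch only; cluster name and network addresses are placeholders, not our real ones):

totem {
    version: 2
    cluster_name: pvecluster
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 10.10.1.0
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.10.2.0
    }
}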
and waste a support ticket on how to use their support portal rather than on our licensed product :/
Anyway, maybe someone else here knows this and wants to share.
Just subscribed our cluster and added sub-accounts for our billing email address and a colleague, but how does a sub-account acquire a password to be able to log in on maurer-it?
Try to map the devices through sata:
virtio0: lvm:vm-102-disk-1,size=16G
virtio1: lvm:vm-102-disk-2,size=64G
sata0: /dev/sdb,size=457344M
sata1: /dev/sdc,size=457344M
or all through sata:
sata0: lvm:vm-102-disk-1,size=16G
sata1: lvm:vm-102-disk-2,size=64G
sata2: /dev/sdb,size=457344M
sata3...
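On a side note, for the pass-through disks it may be safer to reference stable /dev/disk/by-id paths instead of /dev/sdb and /dev/sdc, since the sdX letters can change between boots; just a sketch, with made-up IDs and VM ID 102:

qm set 102 -sata2 /dev/disk/by-id/scsi-36000c29example0001
qm set 102 -sata3 /dev/disk/by-id/scsi-36000c29example0002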
What I did was, after adding the iSCSI storage and creating a VG based on it, I edited /etc/pve/storage.cfg and removed the iSCSI entry and the base line pointing to it from under the VG definitions. Then I extended the VGs as required from the command line on one PVE node, and voilà, PVE doesn't care any...
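To illustrate, the relevant part of /etc/pve/storage.cfg looks roughly like this before such an edit (a sketch; storage names, portal and target strings are placeholders, not our real ones):

iscsi: san0
        portal 10.0.0.1
        target iqn.2001-05.com.example:storage.lun0
        content none

lvm: san0-lvm
        vgname vg_san0
        base san0:0.0.0.scsi-xxxxxxxx
        shared 1
        content images

After dropping the iscsi: section and the base line, only the lvm: section with vgname/shared/content remains, and the VG can then be grown manually on one node with something like:

pvcreate /dev/sdX
vgextend vg_san0 /dev/sdX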
Created a VM with a virtio-mapped image of an LV in an iSCSI-attached VG.
Then, when trying to install inside it, I discovered that the OS I needed didn't support the virtio driver, so I tried changing virtio to a scsi driver, only to discover that this seemed to map the hypervisor's iSCSI device directly...
Whenever we do a live migration, we always see events like this at the start, even though the migration succeeds okay.
...
<date & time> starting ssh migration tunnel
bind: Cannot assign requested address
<date + time> starting online/live migration on localhost:60000
...
Wondering if this can/should be...
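In case it helps anyone hitting the same message: the "bind: Cannot assign requested address" appears to come from the ssh tunnel's local port forward on the source node, so a couple of quick checks (just a sketch, the port is the one from the log above):

# confirm that localhost resolves sanely on both nodes (IPv4 and IPv6)
getent hosts localhost
# check whether something is already listening on the migration port
ss -ltnp | grep 60000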