I am in the process of updating the nodes from 5.3 to 5.4. When I try to add firewall rules from the 5.4 node's web interface to a VM on a 5.3 node, the rules never get applied. When I activate or deactivate the firewall, they simply disappear from the GUI.
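To narrow this down, I would check whether the rules actually land in the cluster filesystem and whether the old node can compile them; a minimal sketch, assuming VM ID 100 as a placeholder:

cat /etc/pve/firewall/100.fw   # per-VM rules live in the cluster filesystem
pve-firewall compile           # run on the 5.3 node hosting the VM: does the ruleset compile?
pve-firewall status

If the .fw file is there but the 5.3 node fails to compile it, the problem is presumably a rule option the older firewall does not understand yet.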
I tried to add a second ring address as described. Directly after I rebooted the first node, the cluster lost communication. Maybe there are problems in the second network with multicast and so on...
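For reference, this is how I understand the second ring is supposed to look in /etc/pve/corosync.conf, trimmed to the ring-related keys (addresses and the node entry are placeholders for my network):

totem {
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.10.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.10.0
  }
}
nodelist {
  node {
    name: node1
    ring0_addr: 192.168.10.1
    ring1_addr: 10.10.10.1
  }
}

To check whether both rings are actually healthy after the reboot, I would run corosync-cfgtool -s on each node.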
pvecm status always says that the nodes are in quorum, but pmxcfs fails. Rebooting a node does...
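What I use to look at pmxcfs separately from the quorum state, assuming a systemd-based release (on older releases the service tooling differs):

pvecm status
systemctl status pve-cluster    # pmxcfs runs as the pve-cluster service
journalctl -u pve-cluster -b    # look for mount or database errors
systemctl status corosync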
With "reinstall cluster node" I can set up a new server and write all the configs back. Is it possible to do this from 4.x to 5.x, or do the configs change? Is mixing Proxmox 4 and 5 possible at the moment?
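For reference, this is roughly what I save before wiping a node (whether these files can be written back unchanged across a 4.x to 5.x jump is exactly what I am unsure about):

tar czf /root/pve-config-backup.tar.gz /etc/pve /etc/network/interfaces /etc/hosts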
I would like to use the new ZFS replication feature in the future, so I want to install ZFS instead of ext4. As we have ProLiants with hardware RAID controllers, I would like to use their cache. Can I install ZFS with RAID0 with only one disk? Does it make sense?
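For my own understanding, "RAID0 with one disk" is just a pool with a single top-level vdev; a minimal sketch of what that would look like if created by hand (device name and ashift=12 are assumptions, the installer would do this for me):

zpool create -o ashift=12 rpool /dev/sda   # the hardware RAID volume presented as one disk
zpool status rpool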
Hello, we want to install a program on the server, but the installation is not possible because it needs a unique server UID.
Found this link https://nuanceimaging.custhelp.com/app/answers/detail/a_id/18329
which says Proxmox/KVM does not support this.
I am using the latest Proxmox version, pve-manager/4.3-3/557191d3.
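As far as I can tell, a fixed SMBIOS system UUID can be set per VM, which is what such license checks usually read; a sketch (VM ID and UUID are placeholders):

qm set 101 -smbios1 uuid=564d9e9a-1a2b-4c3d-8e4f-000000000001   # set a fixed SMBIOS UUID for VM 101
qm config 101 | grep smbios                                      # verify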
I want users to be able to change or add their firewall rules, but not to activate or deactivate the firewall in general, or to deactivate the IP filter.
With the PVEVMAdmin role, a user can change ALL settings including the IP filter, which is not...
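My idea would be a custom role with only the privileges needed for editing rules; a sketch with pveum (I am not sure which exact privileges gate the firewall options versus the rules, so the privilege list is an assumption):

pveum roleadd VMFirewallEdit -privs "VM.Audit VM.Config.Network"
pveum aclmod /vms/100 -users someuser@pve -roles VMFirewallEdit   # VM ID and user are placeholders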
We want to upgrade from PVE 3.4 to 4.2 and from Ceph Firefly to Hammer. Jewel is not a good idea at the moment; I read on the Ceph homepage that the update is non-trivial.
We did a PVE upgrade with our test cluster with no problems. What's the best way to update Ceph? Directly all machines at...
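What I have gathered so far about the order, as a sketch (the noout flag and the monitors-before-OSDs order are from the general Ceph upgrade notes, not specific to our setup):

ceph osd set noout     # avoid rebalancing while daemons restart
# upgrade and restart the monitors first, one node at a time, then the OSDs node by node,
# checking the cluster state in between:
ceph -s
ceph osd unset noout   # once everything is back to HEALTH_OK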
We are still using Proxmox 3.4 (we want to update to 4 in one month) and Ceph Firefly.
But we cannot install new OSDs anymore...
ceph4 pvedaemon: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 820e9ff3-7d60-4244-a606-0d11e96b9504 /dev/sdc' failed...
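What I would try next, as a sketch (destructive for whatever is still on /dev/sdc, so only after double-checking the device name):

ceph-disk zap /dev/sdc       # wipe the old partition table / leftovers on the disk
pveceph createosd /dev/sdc   # then recreate the OSD through the PVE tooling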
I have a VM (Linux Ubuntu 14.04 server) that has been running for some time with 24 NICs... perfect.
Now I have two new VM (KVM) servers, Ubuntu 14.04, up to date. But after some days they freeze (100% CPU).
They have 30 NICs each; some NICs are e1000 and some virtio.
Could virtio be the problem, or the 30 NICs...
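To narrow it down, I would dump which model each NIC uses and switch one of the freezing machines entirely to one model as a test; a sketch (VM ID and MAC are placeholders):

qm config 110 | grep ^net                               # list the NIC models of VM 110
qm set 110 -net0 e1000=DE:AD:BE:EF:00:01,bridge=vmbr0   # re-define net0 as e1000 for a test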
For years we have been experiencing kernel panics on our host machines, roughly once a year. It does not matter which hardware or which Proxmox version it was.
This time one of our virtual machines always had a BSOD. It could not be restarted. After the third reset, the node machine also had a...
I am running pve-manager/3.4-1/3f2d890e (running kernel: 3.10.0-7-pve) on one node.
Some virtual machines consume a lot more memory than they should.
For example, I set a fixed 16 GB for machine ID 212, and top shows me RES 22.3 GB and VIRT even 29 GB!
I am not able to start more VMs as the...
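As far as I understand, RES/VIRT of the kvm process also cover QEMU itself, video RAM and virtio buffers on top of the guest memory, and ballooning changes what is actually reserved; a sketch of what I would check, using my VM 212:

qm config 212 | grep -Ei 'memory|balloon'   # configured guest memory and ballooning
qm set 212 -balloon 0                       # test: give the guest exactly the fixed amount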
I have some servers where I use LVM for the virtual disks.
As it is not possible to set up an LVM for the existing disk on installation (PVE occupies 100% with its LVM), I only have "local" storage now. (On the old servers I first installed Ubuntu and afterwards the Proxmox installation.)
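If I ever convert one of these boxes, my rough plan would be to shrink the installer's big data LV and point an LVM storage at the freed space; only a sketch (size and storage name are placeholders, LV/VG names as on a stock install, and shrinking needs a backup first):

vgs && lvs                                  # how the space is currently allocated
umount /var/lib/vz                          # the big "data" LV backs the local storage
lvreduce --resizefs -L 100G /dev/pve/data   # shrink filesystem and LV together
mount /var/lib/vz
pvesm add lvm vmstore --vgname pve --content images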
After upgrading to 3.4, the logs are full of:
Mar 4 23:57:37 node7 pveproxy: problem with client 192.168.11.8; ssl3_read_bytes: ssl handshake failure
Mar 4 23:57:37 node7 pveproxy: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
Yesterday all nodes went red.
Each node only shows itself as green.
I already tried to restart cman, pvestatd, and pvedaemon; all work without error on every node, but nothing changes.
I even tried to reboot one node...
I can also write to /etc/pve/...
All NFS shares (images and backups) are...
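For reference, this is the restart sequence I know of for the 3.x stack, per node; I am not sure whether pve-cluster also needs a kick:

service pve-cluster restart   # pmxcfs
service pvedaemon restart
service pveproxy restart
service pvestatd restart      # this one feeds the status / green-red display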
For some time now, it happens during backups that a node loses quorum. (Searching for the why is another task.)
Afterwards, the node (which lost quorum) is itself the only node that shows all other nodes in green and with data.
I checked that all nodes have quorum at this time.
We are losing quorum on one node during backup. Another node, which is the default management node, shows no statistics for any node machines (and all other nodes red). I have to restart the pve-manager on that node to get statistics and green icons again.
1) Question one is: why is the node...
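To rule out multicast trouble during the backup window, my plan is to run omping between all nodes while a backup runs, and to see whether restarting only pvestatd already brings the statistics back (node names are placeholders):

omping -c 600 -i 1 -q node1 node2 node3   # run simultaneously on every node
pvecm status                              # quorum view while the backup is running
service pvestatd restart                  # the daemon that feeds the status display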
After some time I now get this message all the time and the firewall will not reload anymore. How can I debug that?
pve-firewall: status update error: command '/usr/sbin/ipset restore' failed: exit code 1
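My plan for debugging, unless someone has a better idea: dump what the firewall wants to load and compare it with the sets already in the kernel.

pve-firewall compile   # prints the generated ruleset (including the ipsets, as far as I know)
pve-firewall status
ipset list -n          # which sets currently exist in the kernel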