as reported here:
In Proxmox 5.*, VMs do not get tagged traffic passed to them if...
where does pvecm get data about node name when doing pvecm nodes?
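For context on where those names come from: `pvecm nodes` reports the corosync cluster membership, and the node names are the ones defined in the corosync configuration. A quick way to cross-check, with paths as on a stock PVE install (a sketch, not an official diagnostic procedure):

```shell
# Show the membership as pvecm sees it (a wrapper around corosync's view)
pvecm nodes

# Node names and ring addresses are defined in the corosync config;
# on PVE this lives on the clustered filesystem:
grep -A3 "node {" /etc/pve/corosync.conf

# corosync's own list of known nodes, for comparison
corosync-quorumtool -l
```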
Why would a single command to add a node to a cluster with "pvecm add IP"...
On a cluster with nodes running:
Linux 4.15.18-16-pve #1 SMP PVE 4.15.18-41 (Tue, 18 Jun 2019 07:36:54 +0200)
on two Proxmox (proxmox-ve: 5.4-1 (running kernel: 4.15.18-16-pve)) servers, I see messages like:
[Thu Jun 27 23:02:34 2019] INFO: task...
Are there any technical obstacles to doing an offline migration of a suspended VM?
I think this would be an awesome feature, which...
I destroyed and removed a node (pvecm delnode nodename).
Then I installed a new node, but used the same IP and name.
After doing operations like...
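A cleanup sequence that is commonly suggested before re-adding a node under the same name and IP; paths are from a stock PVE install, and this is a sketch, not an official procedure:

```shell
# On an existing cluster member, after `pvecm delnode <nodename>`:

# Remove leftover configuration of the old node from the clustered filesystem
rm -rf /etc/pve/nodes/<nodename>

# Drop the old node's stale SSH host key so the fresh install's key is accepted
ssh-keygen -R <nodename>
ssh-keygen -R <node-ip>
```

Depending on the version, the cluster-wide known_hosts file at /etc/pve/priv/known_hosts may also still hold the old node's key.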
Just installed a new node with the latest ISO and upgraded it from the test repos. Here is the log:
Starting system upgrade: apt-get dist-upgrade...
I have a Proxmox cluster where KVM VMs get replicated with ZFS using the GUI.
What would be the official way to recover a VM on another host from its...
One of my ZFS nodes crashed:
[DATE59:15 2019] ------------[ cut here ]------------
[DATE59:15 2019] kernel BUG at mm/slub.c:296!
Do we have a fix in 5.*?
Will you make a patch for...
What are the default values for:
Why is pve-zsync not syncing ZFS attributes, and why is it not fixed, if the fix entails only adding two characters (-p)?
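For reference, the difference sits at the `zfs send` level: without `-p`, locally set dataset properties (compression, xattr, and so on) are not included in the replication stream. A minimal illustration, with made-up pool and dataset names:

```shell
# Without -p: the received dataset gets default/inherited properties
zfs send tank/data@snap1 | zfs recv backup/data

# With -p: properties set on the source dataset travel with the stream
zfs send -p tank/data@snap1 | zfs recv backup/data2
```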
Bug with no response has...
Some time ago I migrated LXCs from Proxmox 4.4 to 5.*.
Recently I noticed that ZFS subvols that were transferred via pve-zsync do have xattr set...
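To see where such a property actually comes from (set locally, inherited, or received with a stream), the SOURCE column of `zfs get` helps; the dataset name below is hypothetical:

```shell
# SOURCE shows whether xattr was set locally, inherited, or received
zfs get -o name,property,value,source xattr rpool/data/subvol-100-disk-1
```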
xterm.js only works for Proxmox hosts for me.
Does having a working xterm.js on KVM VMs require some modification to the VM like described here:...
Should pve-zsync work for LXC containers where the container has multiple disks across multiple storages?
I'm in the process of migrating some LXCs and just noticed...
I was doing a routine offline migration of VMs. It worked for each and every VM except a CloudLinux VM.
After I set up replication this is...
I just wanted to let you know that online migration of a VM to a different node, onto a differently named local storage, seems to work perfectly for us....
When I move a VM disk on ZFS, I see high IO wait.
Reasons aside, I want to reduce the IO wait by reducing the IO strain of the disk-move operation.
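One knob for this, assuming a PVE version that supports bandwidth limits in datacenter.cfg: a `move` bandwidth limit caps the throughput of move-disk operations cluster-wide (value in KiB/s). A sketch of the config fragment:

```
# /etc/pve/datacenter.cfg
# limit move-disk operations to ~50 MiB/s (value is in KiB/s)
bwlimit: move=51200
```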
I am backing up the PVE root (rpool/ROOT/pve-1) via zfs send.
It works fine, so I mount it via zfs mount on the target node and it mounts.
I can see...
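A sketch of that send-and-mount flow, with a hypothetical target pool and mountpoint; `zfs recv -u` keeps the received dataset from auto-mounting over the target's own filesystems:

```shell
# On the source node: snapshot the root dataset and stream it off-host
zfs snapshot rpool/ROOT/pve-1@backup1
zfs send rpool/ROOT/pve-1@backup1 | ssh target-node zfs recv -u backup/pve-1

# On the target node: give it a safe mountpoint and mount it for inspection
zfs set mountpoint=/mnt/pve-1-backup backup/pve-1
zfs mount backup/pve-1
```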
Because ZFS in a ZFS RAID setup needs BIOS booting (can't do EFI), can the disks from which Proxmox boots be bigger than 2 TB?
For example, install...