Just upgraded to the latest release, 0.8-11.
When trying to run a prune, the following error pops up in PBS:
unable to parse active worker status
'UPID:pbs:0000021F:000**MASK**:0000000A:**MASK**:termproxy::root:' - not a valid user id
Please advise ...
The output is as below:
root@pbs:~# proxmox-backup-client garbage-collect --repository vmbackup
starting garbage collection on store vmbackup
TASK ERROR: unable to get exclusive lock - EACCES: Permission denied
root@pbs:~# ls -lha /vmbackup
drwxr-xr-x 2 backup backup 0 Jul 16...
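For reference, an EACCES on the exclusive lock usually means the datastore directory is not usable by the user the PBS services run as. A hedged sketch of what one might check first (assuming the datastore really lives at `/vmbackup`; adjust the path for your setup):

```shell
# The PBS daemons run as the 'backup' user, so the datastore
# directory tree must be owned by backup:backup:
ls -ld /vmbackup

# A size of 0 and only 2 links can also indicate the underlying
# filesystem is not mounted at all -- worth verifying:
findmnt /vmbackup

# If ownership has drifted (e.g. after a restore or a manual copy),
# reset it recursively; this can take a while on large datastores:
chown -R backup:backup /vmbackup
```

The mount check matters because locking an empty mountpoint directory on the root filesystem will also fail in confusing ways.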
I tried to run GC and encountered the following error:
root@pbs:~# proxmox-backup-client garbage-collect
Error: unable to get (default) repository
In the PBS GUI, when I click on GC, it shows:
unable to get exclusive lock - EACCES: Permission denied
May I know what causes this error and how to...
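On the "unable to get (default) repository" part: `proxmox-backup-client` has to be told which repository to operate on, either with `--repository` (as in the first command) or via the `PBS_REPOSITORY` environment variable. A hedged sketch — the datastore name `vmbackup` is taken from the post, user and host are placeholders:

```shell
# Repository syntax is [[user@]host:]datastore
export PBS_REPOSITORY="root@pam@localhost:vmbackup"
proxmox-backup-client garbage-collect

# Equivalent one-off form without the environment variable:
proxmox-backup-client garbage-collect --repository root@pam@localhost:vmbackup
```

Note this only fixes the "no default repository" error; the EACCES lock error is a separate (permission/ownership) problem.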
A VM with 2 GB RAM and 2 vCPUs, with the CPU type changed to host (E5-2640v3), seems to improve a lot.
But it is still relatively slow compared to the figures others have posted.
Uploaded 573 chunks in 5 seconds.
Time per request: 8800 microseconds.
TLS speed: 476.62 MB/s
SHA256 speed: 330.90 MB/s
Compression speed: 806.28 MB/s
It works for me too, thank you.
But backup speed seems very slow when a CIFS share is attached via a 10G network to a VM for testing purposes.
Uploaded 132 chunks in 5 seconds.
Time per request: 38461 microseconds.
TLS speed: 109.05 MB/s
SHA256 speed: 135.45 MB/s
Compression speed: 554.75 MB/s
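For anyone wanting to compare numbers: these figures look like output of the client's built-in benchmark. A hedged sketch of how to reproduce them (the repository string is a placeholder):

```shell
# Local-only benchmark: reports SHA256, compression, and crypto speeds
proxmox-backup-client benchmark

# Include the TLS/upload test by pointing it at a real repository:
proxmox-backup-client benchmark --repository root@pam@<pbs-host>:vmbackup
```

Comparing the local-only run against the repository run helps separate CPU limits (SHA256/compression) from network or storage limits (chunk upload over CIFS).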
I saw that the latest release of pve-firewall (3.0-19) includes "ebtables: add arp filtering".
My cluster is running in multicast mode, with no VLANs implemented on my upstream router. The host and VM networks use Open vSwitch on different physical interfaces.
When using iptraf-ng either...
Just an existing VM on the upgraded node (with the latest no-subscription repo), on an 8-node Ceph cluster running Luminous.
Steps to reproduce:
1) Shut down the existing VM with HA using the GUI.
2) Start the VM with HA using the GUI.
The VM with HA is unable to boot, and the following logs appear in the task viewer:
I manually edited Cloudinit.pm as per the patch in https://pve.proxmox.com/pipermail/pve-devel/2019-April/036621.html, and live migration started to work fine.
But after the respective HA-managed VM is shut down, it is unable to boot up again. The following error is shown:
task started by HA resource...
Please look at https://pve.proxmox.com/wiki/Storage_Replication
Replication is not a full HA solution right now.
In order to move your container or VM back to the default node, you need to disable HA for it, or remove the container or VM from HA first.
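A hedged sketch of those steps on the CLI (VMID 100 and the node name are hypothetical; the same can be done via Datacenter → HA in the GUI):

```shell
# Either disable the HA resource...
ha-manager set vm:100 --state disabled
# ...or remove the guest from HA management entirely:
ha-manager remove vm:100

# Then migrate it back to the original node yourself:
qm migrate 100 <target-node>       # for a VM
# pct migrate 100 <target-node>    # for a container
```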
I just tried to update one of the nodes with dist-upgrade.
The bug seems to be in the recent iproute2 (4.13.0-1) update. A VM with a rate limit is unable to start now; it can only start after removing the rate limit. Changing the rate limit via hotplug is also not possible now.
=== Output ===
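For anyone hitting the same issue: the rate limit lives in the guest's network device line, so a hedged workaround sketch looks like this (VMID 100 and the NIC settings are placeholders; re-specify the existing MAC address, or qm will generate a new one):

```shell
# Show the current net0 line, e.g. "virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,rate=50"
qm config 100 | grep ^net0

# Re-set net0 without the rate=... option so the VM can start again:
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```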
What is ":1"?
It is weird: an nginx reverse proxy with ip_hash works fine on a machine with Java 7 Update 80 installed.
Unfortunately, most of the time it does not work on a machine with Java 8 Update 144 installed.
How about others?
I'm currently running a single OS disk in LVM and want to convert to ZFS RAIDZ2 (I just don't want to reformat the test server). May I know if this is possible?
Any working reference would be much appreciated.
Thanks in advance.