Greetings,
The Proxmox VM GUI shows 'RAM usage' as used + buff/cache instead of just used. It would be nice if this were fixed, as it was for CTs.
https://forum.proxmox.com/threads/memory-usage-graph-in-gui.30112/
https://bugzilla.proxmox.com/show_bug.cgi?id=1139
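In the meantime, I compare against what the guest itself reports; MemAvailable in /proc/meminfo (kernel 3.14+) already excludes reclaimable buffers/cache:
free -m                                        # buff/cache is shown as its own column
grep -E 'MemTotal|MemAvailable' /proc/meminfo  # MemAvailable excludes reclaimable cache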
I performed a backup test on a 20GB LXC to compare the newly added ZSTD with LZO (I used 'Mode: Stop' for both):
LZO:
INFO: Total bytes written...
ZSTD:
INFO: Total bytes written: 6240829440 (5.9GiB, 100MiB/s)
INFO: archive file size: 2.80GB
INFO: Finished Backup of VM 8114 (00:01:14)
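In case anyone wants to reproduce this, the equivalent runs from the node shell would be along these lines (the storage name here is just an example):
vzdump 8114 --mode stop --compress lzo --storage local
vzdump 8114 --mode stop --compress zstd --storage local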
When using the 'Backup now' feature in the Container GUI, the following parameter in /etc/vzdump.conf does not work:
remove: <boolean> (default = 1)
Remove old backup files if there are more than maxfiles backup files.
I have the parameter set to 'remove: 1'.
If the set maxfiles limit is reached...
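For reference, the relevant part of my /etc/vzdump.conf looks like this (the maxfiles value shown here is an example):
# /etc/vzdump.conf
remove: 1
maxfiles: 3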
I've been seeing this error since upgrading to 6.2 when I try to change CPU resources while the Container is running. I have to stop the container, change the CPU settings, and restart. I didn't have any problem changing CPU resources with 6.1....
Package Versions:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)...
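For what it's worth, the equivalent change from the CLI is just (CTID is an example):
pct set 101 --cores 4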
Is there a way to use a hook script with Restart Mode Migration of Containers, as can be done with backups? I need to properly shut down a running process before the Container is stopped.
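For backups I do this with a vzdump hook attached via the --script option; a minimal sketch of that pattern, assuming a hypothetical service name, is:
#!/bin/bash
# vzdump hook: called with $1=phase, $2=mode, $3=vmid
if [ "$1" = "backup-start" ]; then
    pct exec "$3" -- systemctl stop myservice   # 'myservice' is a placeholder
fi
I'm looking for the same kind of hook point during a restart-mode migration.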
Greetings,
I can't find the gear icon you are referring to for fixing the storage calculation shown in the graph under 'Datacenter/Summary/Resources'.
Wolfgang said in a different post:
Hi,
you can set which storages are included in the calculation.
Click on the gear and then you can select the storages...
I have a script in a container that I want to execute when doing a backup.
If I manually execute the script from the node CLI using:
pct exec <id> -- ./script.sh
The script runs as expected.
The problem is that when I put the exact same command in my hook script, I get an execution error:
INFO: Stop...
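What I'm testing now is spelling out absolute paths in the hook, in case it runs with a reduced PATH (the script path is an example):
#!/bin/bash
# vzdump hook: $1=phase, $3=vmid
if [ "$1" = "backup-start" ]; then
    /usr/sbin/pct exec "$3" -- /root/script.sh   # absolute paths on both sides
fi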
I have a 4-node Proxmox cluster with Ceph, 4 OSDs per node.
When I run 'cat /sys/kernel/debug/ceph/*/osdmap' on each node, I get the following on 3 of the 4 nodes.
epoch 7125 barrier 0 flags 0x588000
pool 1 'Ceph-CT-VM' type 1 size 3 min_size 2 pg_num 256 pg_num_mask 255 flags 0x1 lfor 0 read_tier...
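For comparison with the kernel client's view, the cluster's own copy of the map can be pulled on any node with:
ceph osd stat                # current osdmap epoch and OSD counts
ceph osd dump | head -n 10   # pool flags and settings as the monitors see them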
It's the same port on all 4 nodes, and the report is far longer than I'm able to paste here. This port is used for the Ceph Public Network VLAN...
lsmod | grep -i i40e
i40e 385024 0
root@pve14:~# cat /var/log/messages | grep -i i40e
Jan 2 06:25:54 pve14 kernel: [560724.602777] i40e 0000:81:00.2...
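Alongside that log I've been checking the driver and firmware details (the interface name here is an example):
ethtool -i eno1          # driver, firmware, and NVM versions for the port
dmesg -T | grep -i i40e  # the same i40e messages with readable timestamps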
I'm looking for ideas on tracking down the cause of this seemingly random high IO that happens on varying nodes and lasts for 30 minutes to a few hours and then goes away. I thought this problem went away with the last large update, but it's back... The only other coincidence I see that it seems...
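So far, when it happens, I watch it live with the usual tools:
iostat -xm 5    # extended per-device stats every 5 seconds
iotop -oPa      # only processes actually doing I/O, with accumulated totals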
I did upgrades today that included Ceph 14.2.5 and had to restart all OSDs, Monitors, and Managers.
After restarting all Monitors and Managers, I was still getting errors every 5 seconds:
Dec 17 21:59:05 pve11 ceph-mon[3925461]: 2019-12-17 21:59:05.214 7f29ff2c5700 -1 mon.pve11@0(leader) e5...
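The restart sequence I used was roughly this on each node (unit names follow the hostname and OSD id, so the OSD id here is an example):
ceph versions                              # confirm everything is on 14.2.5
systemctl restart ceph-mon@pve11.service
systemctl restart ceph-mgr@pve11.service
systemctl restart ceph-osd@0.service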
After updating my 4-node cluster today, I can no longer start any of my CTs.
Corosync Cluster and Ceph show as healthy.
I created a new Unprivileged CT after the updates and it works fine.
I hope there's a way to fix this and not have to rebuild this cluster...
I get an error when running...
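To get more detail than the GUI task log shows, I've been trying a foreground debug start (CTID and log path are examples):
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log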
There is a person on YouTube who has made a few nice tutorials about Proxmox Ceph setup. The latest video, for Proxmox 6 and Ceph Nautilus, is:
https://www.youtube.com/watch?v=GgliWaOfvsA
The Proxmox 5.1 Ceph Luminous tutorial recommended separate pools for VMs and CTs; the current tutorial for...
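If a single shared pool is now the recommendation, I assume the storage definition in /etc/pve/storage.cfg would look something like this (the storage ID is an example; one RBD storage serving both content types):
rbd: ceph-ct-vm
        pool Ceph-CT-VM
        content images,rootdir
        krbd 0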