Search results

  1. LZO vs ZSTD

    I performed a backup test on a 20GB LXC to compare the newly added ZSTD with LZO ( I used 'Mode: Stop' for both): LZO ZSTD INFO: Total bytes written: 6240829440 (5.9GiB, 100MiB/s) INFO: archive file size: 2.80GB INFO: Finished Backup of VM 8114 (00:01:14) INFO: Total bytes written...
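The compression ratio implied by those log lines can be sanity-checked with a little shell arithmetic. The byte counts below are taken from the log excerpt above; treating the "2.80GB" archive size as decimal gigabytes is an assumption:

```shell
# Figures from the vzdump log excerpt above
total_bytes=6240829440                 # "Total bytes written" (5.9 GiB uncompressed)
archive_bytes=$((2800 * 1000 * 1000))  # "archive file size: 2.80GB", assumed decimal GB

# Integer percentage of the original stream size kept after ZSTD compression
ratio=$((archive_bytes * 100 / total_bytes))
echo "archive is ${ratio}% of the uncompressed stream"
```

Running the same arithmetic against the LZO log line would give a like-for-like comparison of the two compressors on this container.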
  2. /etc/vzdump.conf parameter remove: <boolean> (default = 1)

    When using the 'Backup now' feature in the Container GUI, the following parameter in /etc/vzdump.conf does not work: remove: <boolean> (default = 1) Remove old backup files if there are more than maxfiles backup files. I have the parameter set to 'remove: 1' If the set max files is reached...
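For reference, the pruning behaviour discussed in that thread is driven by a pair of keys in /etc/vzdump.conf; the values here are illustrative, not a recommendation:

```
# /etc/vzdump.conf (illustrative values)
# keep at most 3 backup files per guest on the target storage
maxfiles: 3
# remove old backup files once maxfiles is exceeded (1 is the default)
remove: 1
```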
  3. Parameter verification failed. (400) Error

    Seeing this error since upgrading to 6.2 when I try to change CPU resources while Container is running. I have to stop container, change CPU and restart. I didn't have any problem changing CPU resources with 6.1.... Package Versions: proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)...
  4. Restart Mode Migration of Containers

    Is there a way to use a hook script with Restart Mode Migration of Containers like can be done with backups? I need to properly shut down a running process before Container is stopped.
  5. Choosing Drives that are calculated for GUI at Datacenter/Summary/Resources

    Greetings, I can't find the gear you are referring to for fixing the storage calculation in the graphic display at 'Datacenter/Summary/Resources'. Wolfgang said in a different post: Hi, you can set what storages are included in the calculation. Click on the gear and then you can select the storages...
  6. Run a script residing in a container using 'pct exe'

    I have a script in a container that I want to execute when doing a backup. If I manually execute the script from the node CLI using: pct exe <id> -- ./script.sh The script runs as expected. The problem is when I put the exact same command in my hook script, I get an execution error: INFO: Stop...
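A minimal sketch of the hook-script pattern being attempted (the full subcommand name is `pct exec`; vzdump invokes the hook with the phase as its first argument and, for the backup-* phases, the VMID as its third). The script path is an assumption, and the `pct` call is only echoed here so the sketch can run without a Proxmox host; in a real hook you would execute it instead:

```shell
#!/bin/sh
# Hedged sketch of a vzdump hook script. In real use, replace the echo
# with the actual `pct exec` invocation.
run_backup_hook() {
    phase="$1"   # e.g. job-start, backup-start, backup-end, ...
    vmid="$3"    # $2 is the backup mode (snapshot/suspend/stop)
    if [ "$phase" = "backup-start" ]; then
        # Use an absolute path inside the container; a relative
        # ./script.sh often fails here because the hook does not run
        # from the directory an interactive shell would.
        echo "pct exec $vmid -- /root/script.sh"
    fi
}

run_backup_hook backup-start snapshot 101
```

The hook is wired up with vzdump's `--script` option (or a `script:` line in /etc/vzdump.conf), and the case above only fires on the backup-start phase so other phases pass through silently.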
  7. Ceph OSD Map

    I have a 4-node Proxmox cluster with Ceph, 4 OSDs per node. When I run 'cat /sys/kernel/debug/ceph/*/osdmap' on each node I get the following on 3 of 4 nodes. epoch 7125 barrier 0 flags 0x588000 pool 1 'Ceph-CT-VM' type 1 size 3 min_size 2 pg_num 256 pg_num_mask 255 flags 0x1 lfor 0 read_tier...
  8. Flapping Network NICs on Ceph Public Network VLAN

    Same port on all 4 nodes; the full report is far longer than I can paste here. This port is used for the Ceph Public Network VLAN... lsmod | grep -i i40e i40e 385024 0 root@pve14:~# cat /var/log/messages | grep -i i40e Jan 2 06:25:54 pve14 kernel: [560724.602777] i40e 0000:81:00.2...
  9. High IO delay on one node of cluster

    I'm looking for ideas on tracking down the cause of this seemingly random high IO that happens on varying nodes and lasts for 30 minutes to a few hours and then goes away. I thought this problem went away with the last large update, but it's back... The only other coincidence I see that it seems...
  10. Ceph 14.2.5 - get_health_metrics reporting 1 slow ops

    Did upgrades today that included Ceph 14.2.5, Had to restart all OSDs, Monitors, and Managers. After restarting all Monitors and Managers was still getting errors every 5 seconds: Dec 17 21:59:05 pve11 ceph-mon[3925461]: 2019-12-17 21:59:05.214 7f29ff2c5700 -1 mon.pve11@0(leader) e5...
  11. Can't Start any CTs in cluster after performing latest updates

    After updating my 4-node cluster today, I can no longer start any of my CTs. The Corosync cluster and Ceph show as healthy. I created a new unprivileged CT after the updates and it works fine. I hope there's a way to fix this and not have to rebuild this cluster... I get an error when running...
  12. Proxmox 6 Ceph Nautilus Pool Setup

    There is a person on YouTube who has made a few nice tutorials about Proxmox Ceph setup. The latest video for Proxmox 6 and Ceph Nautilus is: https://www.youtube.com/watch?v=GgliWaOfvsA The Proxmox 5.1 Ceph Luminous tutorial recommended separate pools for VMs and CTs, the current tutorial for...
  13. Ceph Cluster Bonded 10GBe error

    Seeing the following errors on Proxmox Node Terminal screen: ens1f2 and ens1f3 are LACP bonded with Layer 2 Hash policy, using XS728T smart switch. ethtool -i bond0 driver: bonding version: 3.7.1 firmware-version: 2 expansion-rom-version: bus-info: supports-statistics: no...
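When chasing errors on an LACP bond like that, the kernel's bonding status file is usually more informative than running `ethtool` against the bond itself. The commands below are a hedged diagnostic checklist to run on the affected node (interface names taken from the post above):

```shell
# Per-slave LACP state, link-failure counts, and aggregator IDs
cat /proc/net/bonding/bond0
# Driver/firmware details for the physical slaves, not the bond
ethtool -i ens1f2
ethtool -i ens1f3
# Error and drop counters on each slave
ip -s link show ens1f2
```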
