Search results

  1. Stopped VM raises zabbix notification on the network interface due to backup

    Hi, we run backups of a stopped VM regularly, and whenever a backup coincides with a zabbix check it raises a notification, because the VM interface on PVE goes up/down: PVE: INFO: Finished Backup of VM 101 (00:00:10) INFO: Backup finished at 2022-02-11 02:00:35 INFO: Starting Backup of VM 102 (qemu) INFO...
  2. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    We run all our HP DL3xx G8 servers on PVE 7.1; package versions from the last upgrade are below: proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve) pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe) pve-kernel-helper: 7.1-8 pve-kernel-5.13: 7.1-6 pve-kernel-5.13.19-3-pve: 5.13.19-7 ceph: 15.2.15-pve1 ceph-fuse...
  3. [SOLVED] PVE 7.1.8 - notes formatting

    The Notes tab has broken formatting. I restored a VM from PVE 6.4 to 7.1 with these notes: in the edit panel the lines appear line by line. root IP vg0 - root 8G, swap 2G v20210914 In the view panel the same lines are all on one line. root IP vg0 - root 8G, swap 2G v20210914 Clearing notes to empty->save->re-enter...
  4. Upgrade 6.2 to 6.4 - high disk utilization in VMs

    Hi, we upgraded our PVE cluster (very old HP G7 and 3-year-old Dell R940) from 6.2 to 6.4 and disk utilization in the VMs rose sharply from its previous near-idle level. The problem is the same for VMs on: - nfs ssd storage (raw files), default (no cache) - local ssd disks (LVM thick), default (no cache) The change depends on VM...
  5. [SOLVED] Failed node and recovery in cluster

    Hi, one of my cluster nodes hard-failed due to failed disks in its RAID. Since the cluster is on 6.2, we decided to upgrade to 6.4 (a required step for 7). The reinstalled node will have the same FQDN as the failed node. Now I have two possible ways: 1] remove the failed node from the cluster (aka cleanup) and add the reinstalled...
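    For the cleanup path in 1], a minimal sketch of the usual node-removal commands (node name pve-03 is a placeholder; run on a healthy cluster member, and only after the failed node is powered off for good):

      # check current cluster membership and quorum first
      pvecm status
      # remove the failed node from the corosync cluster (hypothetical node name)
      pvecm delnode pve-03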
  6. Feature hint? VMs in multiple permission pools, disk backup include/exclude difference

    Hello Proxmox team, based on testing the PBS solution with our PVE server and several types of backed-up VMs, I have some use cases that don't fit current PVE versions (at least via the GUI): 1] permissions->pools - a VM can be in only one pool. If I want to use multiple backup scenarios, it's not...
  7. [SOLVED] qemu-ga - is it possible to reboot via PVE from inside the VM?

    For example, when I change a VM's name, the PVE side keeps using the old name in the qm process until a reboot. Or the VM may be using some swap on PVE. Can I use qemu-ga to send a reboot (of itself) command from the VM to PVE, and if yes, what's the correct syntax?
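    One way to get a PVE-side reboot from inside the guest is to call the PVE API rather than qemu-ga itself; a minimal sketch, assuming an API token with power-management rights on the VM (host name, node, VMID and token are placeholders, not from the thread):

      # hypothetical token and IDs; POST to the qemu status/reboot endpoint
      curl -k -X POST \
        -H "Authorization: PVEAPIToken=automation@pve!reboot=00000000-0000-0000-0000-000000000000" \
        https://pve-01.example.com:8006/api2/json/nodes/pve-01/qemu/101/status/reboot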
  8. PVE restore file from PBS inconsistency

    Hi, we are running with Apt::Install-Recommends "0"; and it triggers some inconsistency when testing file restore. 1] file restore: proxmox-file-restore failed: Error: cannot run file-restore VM: package 'proxmox-backup-restore-image' is not (correctly) installed (500) 2]...
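    With Install-Recommends disabled, the recommended package named in the error has to be pulled in explicitly; a minimal sketch on the PVE node (package name taken from the error message above):

      apt update
      apt install proxmox-backup-restore-image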
  9. [SOLVED] Mail notifications

    Hi, is there any way to set mail notifications to a sane setup like in PVE? Currently it sends mail on every job execution, but in PVE, for example, I can set it to notify only when a job fails. So if I schedule a sync hourly, I don't really want hourly mails when the sync job ran without errors. This is...
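    Newer PBS releases expose a per-datastore notification setting that, as far as I can tell, covers this; a sketch, assuming a datastore named store1 and that the --notify option exists in your PBS version:

      # send mail only on errors for sync, verify and GC jobs (assumption: option present in this release)
      proxmox-backup-manager datastore update store1 --notify "sync=error,verify=error,gc=error"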
  10. [SOLVED] PMG 6.2.5 sa-update doesn't run automatically

    Hi, based on the timers: Thu 2020-08-27 04:21:58 CEST 17h left Wed 2020-08-26 03:57:57 CEST 6h ago pmg-daily.timer pmg-daily.service I would expect sa-update to run automatically. But: root@pmg-01:/var/log/pve/tasks/C# cat...
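    While debugging this, a quick sketch for checking the timer shown above and running the update by hand (sa-update exit codes are from its manual: 0 = rules updated, 1 = no update available, higher = error):

      systemctl status pmg-daily.timer
      sa-update
      echo $?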
  11. Backup error - user mismatch

    1] back up a VM to PBS as some user to repository A (acl assigned) 2] add another user to repository A (acl assigned) 3] change the user in PVE for the backup 4] backup error: other_user@pbs != some_user@pbs
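    The mismatch comes from the backup group on the PBS side still being owned by the first user; as far as I know the owner can be changed with the client's change-owner command. A sketch with placeholder group, user and repository names:

      # hypothetical names; adjust the group and repository string to yours
      proxmox-backup-client change-owner vm/101 other_user@pbs --repository some_user@pbs@pbs-host:repositoryA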
  12. [SOLVED] bug: vzdump error limit backups

    client: pve-manager/6.2-9/4d363c5b (running kernel: 5.4.44-2-pve) -> pbs: Linux pbs 5.4.44-2-pve #1 SMP PVE 5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200) x86_64 GNU/Linux 1st backup from GUI ok. 2nd backup from GUI failed - error is limit (1) no vzdump There is no possibility to edit...
  13. [SOLVED] ACL for accessing info about cluster

    Hi, I am playing with ACL lists to check what is possible for a specific user: role: sys.audit, vm.audit allowed access: list of nodes -> out of the box list of vms -> acl /vms list of HA section -> HOW? /cluster/ha or /ha doesn't work in GUI testing. The goal is to create a specific user for creating...
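    A minimal sketch of wiring up such an audit-only user from the CLI (user, role and path names are placeholders; Sys.Audit and VM.Audit are standard PVE privileges):

      # hypothetical read-only monitoring user
      pveum user add monitor@pve
      pveum passwd monitor@pve
      pveum role add AuditOnly --privs "Sys.Audit,VM.Audit"
      # grant the role on the whole tree, or on narrower paths such as /vms
      pveum acl modify / --users monitor@pve --roles AuditOnly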
  14. [SOLVED] Backup via pool

    Hi, pve-manager/6.0-11/2140ef37 (running kernel: 5.0.21-5-pve), configs: storage.cfg dir: local-lvm-data path /mnt/pve/local-lvm-data content iso,backup maxfiles 1 shared 0 user.cfg pool:production:::data: vzdump 0 1 * * * root vzdump --mailto root@domain.tld...
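    If the goal is to back up everything in the pool shown above, vzdump can select guests by pool; a minimal sketch of such a job line (storage and mail address taken from the excerpt, schedule unchanged):

      # back up all guests in pool 'production' daily at 01:00
      0 1 * * * root vzdump --pool production --storage local-lvm-data --mode snapshot --mailto root@domain.tld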
  15. PVE6 bug ceph osd configuration

    Hi, a real example: 3x PVE nodes (P1, P2, P3) with Ceph installed. I want to add P1's disks as OSDs (P2 & P3 are done). Datacenter -> P3 -> Ceph -> OSD -> Create: OSD -> /dev/sda The OSD is created on a different host than expected: it ends up on P3 instead of P1. Can we get a fix, and even better - in the dropdown...
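    The GUI creates the OSD on whichever node's Ceph panel is open; a way to be explicit is to run the CLI on the intended node (device path is just an example):

      # run locally on P1 (e.g. via ssh) so the OSD lands on the right host
      ssh root@P1 pveceph osd create /dev/sda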
  16. PVE6 ceph ipv6

    Hi, I am trying to make Ceph work over IPv6. The hosts can ping each other on their IPv6 subnets, but I am stuck in Ceph... #ipv6 cluster_network = fd8d:7868:1cfd:6443::/64 public_network = fd8d:7868:1cfd:6444::/64 mon_host =...
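    Besides the networks shown, Ceph also has to be told to bind to IPv6; a minimal ceph.conf sketch (the ms_bind options are standard Ceph settings, the networks are the ones from the excerpt):

      # /etc/pve/ceph.conf, [global] section - sketch
      cluster_network = fd8d:7868:1cfd:6443::/64
      public_network  = fd8d:7868:1cfd:6444::/64
      ms_bind_ipv6 = true
      ms_bind_ipv4 = false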
  17. ceph failure scenario

    Hi, we are testing some failure scenarios and spotted an unexpectedly long delay before disk access became available again: Scenario: 3 x PVE hosts, each with mgr, mon, 2 OSDs, replica 3/2 1] hard power loss on one node result: ~24s before disks are available again (grace period >= 20 seconds) I thought that node fail...
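    The ~20s floor matches Ceph's OSD heartbeat grace; a quick sketch for inspecting the relevant knobs (option names are standard Ceph settings, defaults quoted from memory and worth verifying):

      # how long peers wait before reporting an OSD down (default 20s)
      ceph config get osd osd_heartbeat_grace
      # how often OSDs ping each other (default 6s)
      ceph config get osd osd_heartbeat_interval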
  18. NFS VM over Ceph

    Hi, I am testing PVE6 performance with Ceph on 2 nodes (replica 2/2) and 2 SSD OSDs per node; the network is a shared 1x10Gbps link for all traffic on the PVEs (just for a small test). HW: 2x HP DL380p G8, every node has 2x E5-2690 2.9GHz, 96GB RAM, SmartArray P430, 1x Intel S4610 2TB, 1x Kingston DC500m, 10Gbps...
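    Before layering an NFS VM on top, a raw-RADOS baseline helps separate Ceph performance from the VM/NFS path; a sketch using the standard rados bench tool (pool name is a placeholder):

      # 60s of 4 MiB writes kept for the read pass, then a sequential read, then cleanup
      rados bench -p testpool 60 write --no-cleanup
      rados bench -p testpool 60 seq
      rados -p testpool cleanup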
  19. [SOLVED] PVE6 unsupported sfp+ modules

    Hi, Intel ixgbe module. With no SFP+ in the slot, ip link shows ens1f0 ens1f1. When I add an SFP+ module to the slot and reboot PVE, this is logged: Jul 31 12:36:56 pve-01 kernel: [ 255.607753] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k Jul 31 12:36:56 pve-01 kernel: [ 255.607755]...
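    For third-party SFP+ modules the ixgbe driver has a well-known allow_unsupported_sfp module parameter; a sketch of enabling it (verify that your driver version honours it):

      # allow non-Intel SFP+ modules in ixgbe NICs
      echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
      update-initramfs -u -k all
      # then reboot, or reload the module: rmmod ixgbe && modprobe ixgbe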
  20. PVE6 bnx2x vlan-aware failing

    Hi, I have an HP DL380p G8 with 2x HP 10G modules. When trying the vlan-aware network setup as in the documentation (vmbr0 without an IP, vmbr0.vlan with an IP), the module is failing: root@pve-01:/tmp# ethtool -i eno1 driver: bnx2x version: 1.712.30-0 storm 7.13.1.0 firmware-version: mbi 7.14.79 bc 7.13.75...
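    For reference, a minimal /etc/network/interfaces sketch of the vlan-aware layout described above (interface name, VLAN ID and addresses are placeholders):

      auto vmbr0
      iface vmbr0 inet manual
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094

      auto vmbr0.100
      iface vmbr0.100 inet static
              address 192.0.2.10/24
              gateway 192.0.2.1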
