We run a regular backup of a stopped VM, and it trips a Zabbix check every time, because the VM's interface on the PVE host goes up/down during the backup (a possible Zabbix-side workaround is sketched after the log):
INFO: Finished Backup of VM 101 (00:00:10)
INFO: Backup finished at 2022-02-11 02:00:35
INFO: Starting Backup of VM 102 (qemu)
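If the flapping interface is the VM's tap device that the stop-mode backup briefly creates and tears down, one workaround on the Zabbix side is to exclude such transient PVE interfaces from network interface discovery. A minimal sketch, assuming the stock Linux-by-Zabbix-agent template, which filters discovery through the {$NET.IF.IFNAME.NOT_MATCHES} macro (macro name and default value may differ per template version):
# host- or template-level macro override; tap<vmid>i<n>, fwbr/fwln/fwpr and veth
# are the transient interface names PVE creates for guests
{$NET.IF.IFNAME.NOT_MATCHES} = (^tap\d+i\d+$|^fwbr|^fwln|^fwpr|^veth)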
All our hosts are HP DL3xx G8 on PVE 7.1; the versions after the last upgrade are below:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
The Notes tab has broken formatting. I restored a VM from PVE 6.4 onto 7.1 with notes like these:
In the edit panel the lines appear one per line:
vg0 - root 8G, swap 2G
In the view panel the same lines are all joined on one line:
root IP vg0 - root 8G, swap 2G v20210914
Clearing the notes to empty -> save -> re-enter...
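For what it's worth, PVE 7.1 renders the notes field as Markdown in the view panel, and Markdown joins adjacent lines into one paragraph. Assuming that is what is happening here, separating the entries with blank lines in the edit panel should restore the line-by-line view, e.g.:
root IP

vg0 - root 8G, swap 2G

v20210914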
We upgraded our PVE cluster (very old HP G7 and a 3-year-old Dell R940) from 6.2 to 6.4, and disk utilization in the VMs rose noticeably from its previous floor.
The problem is the same for VMs on:
- NFS SSD storage (raw files), default (no cache)
- local SSD disks (LVM thick), default (no cache)
The size of the change depends on the VM...
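To narrow down whether the guests are really issuing more I/O after the upgrade or only the accounting changed, it may help to compare guest-side and host-side statistics for the same disk; a rough sketch with standard sysstat tools (device names are examples):
# inside the VM
iostat -x 5 sda
# on the PVE host, against the device backing that VM disk
iostat -x 5 sdb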
One of my cluster nodes hard-failed due to failed disks in its RAID. Since the cluster is on 6.2, we decided to upgrade to 6.4 (a prerequisite for 7). The reinstalled node will have the same FQDN as the failed one. I see two possible ways:
1] remove the failed node from the cluster (i.e. clean it up) and add the reinstalled...
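For reference, option 1] would look roughly like this, assuming the failed node was named pve-03 (name and IP are placeholders), and remembering to clean up the leftover node directory before re-adding a node under the same FQDN:
# on any healthy cluster member: drop the dead node
pvecm delnode pve-03
rm -r /etc/pve/nodes/pve-03    # leftover config of the removed node
# on the freshly reinstalled node: join the cluster
pvecm add <ip-of-an-existing-member>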
Hello Proxmox team,
based on testing the PBS solution with our PVE server and several types of backed-up VMs, I have some use cases that current PVE versions don't cover (at least via the GUI):
1] permissions -> pools: a VM can be in only one pool. If I want to use multiple backup scenarios, it's not...
For example, when I change a VM's name, the qm process on the PVE side keeps using the old name until the VM is restarted. Or when the VM's process has picked up some swap on the PVE host.
Can I use qemu-ga to send a reboot command for the VM itself, from the VM to PVE, and if so, what is the correct syntax?
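As far as I know the guest agent channel only carries commands from PVE to the guest, not the reverse, so from inside the VM the PVE API is the way to request this. A hedged sketch (node name, VM ID, and token are placeholders); the full stop/start cycle that qm reboot performs is what makes the process pick up the new name:
# on the PVE node
qm reboot 101
# or from inside the VM, via the API with a token
curl -k -X POST \
  -H "Authorization: PVEAPIToken=automation@pve!reboot=<secret>" \
  https://pve-01:8006/api2/json/nodes/pve-01/qemu/101/status/reboot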
We are running with
and it triggers some inconsistency when testing file restore.
1] file restore
proxmox-file-restore failed: Error: cannot run file-restore VM: package 'proxmox-backup-restore-image' is not (correctly) installed (500)
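In case it helps others: the message itself names the missing package, so installing it on the PVE node should suffice (assuming the standard PVE repositories are configured):
apt update
apt install proxmox-backup-restore-image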
Is there any way to set mail notifications to a sane setup like in PVE? Currently a mail is sent on every job execution, whereas in PVE, for example, I can restrict it to failed jobs only.
So, for example, with an hourly sync I don't really want hourly mails when the sync job finished without errors. This is...
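For the record, newer PBS releases let you tune this per datastore via the notify property; a hedged sketch for a datastore named store1 (the property keys may differ between PBS versions):
proxmox-backup-manager datastore update store1 --notify gc=error,sync=error,verify=error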
Based on the timers:
Thu 2020-08-27 04:21:58 CEST 17h left Wed 2020-08-26 03:57:57 CEST 6h ago pmg-daily.timer pmg-daily.service
I would expect sa-update to have run automatically. But:
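To see what the timer's service actually did on its last run, the unit's journal is the quickest check; a minimal sketch:
systemctl status pmg-daily.service
journalctl -u pmg-daily.service --since yesterday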
client: pve-manager/6.2-9/4d363c5b (running kernel: 5.4.44-2-pve)
pbs: Linux pbs 5.4.44-2-pve #1 SMP PVE 5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200) x86_64 GNU/Linux
1st backup from GUI ok.
2nd backup from GUI failed - error is limit (1) no vzdump
There is no possibility to edit...
I am playing with ACL lists to check what is possible for a specific user:
role: Sys.Audit, VM.Audit
list of nodes -> works out of the box
list of VMs -> ACL on /vms
list of the HA section -> HOW? Neither /cluster/ha nor /ha works in GUI testing.
The goal is to create a specific user for creating...
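For the parts that already work, the CLI equivalent would be roughly the following (user and role names are placeholders; which path covers the HA section remains the open question above):
pveum role add AuditOnly -privs "Sys.Audit VM.Audit"
pveum user add audit@pve
pveum aclmod /vms -user audit@pve -role AuditOnly    # grants the VM list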
Real example: 3x PVE nodes (P1, P2, P3) with Ceph installed. I want to add P1's disks as OSDs (P2 & P3 are done):
Datacenter -> P3 -> Ceph -> OSD -> Create: OSD -> /dev/sda
The OSD is created on a different host than expected: on P3 instead of P1. Can we get a fix, and even better - in the dropdown...
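Until that is fixed, a workaround that removes the ambiguity entirely is to create the OSD from a shell on the node that should own the disk (device path is an example):
# run this on P1 itself
pveceph osd create /dev/sda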
I am trying to make Ceph work on IPv6. The hosts can ping each other over IPv6 on their subnets, but I am stuck in Ceph (a possible fix is sketched after the config)...
cluster_network = fd8d:7868:1cfd:6443::/64
public_network = fd8d:7868:1cfd:6444::/64
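If the daemons fail to bind, it may be the messenger still preferring IPv4; the standard Ceph options below are the usual fix for IPv6-only clusters (verify against your Ceph release):
[global]
     ms_bind_ipv6 = true
     ms_bind_ipv4 = false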
We are testing some failure scenarios and spotted an unexpectedly long delay before disk access becomes available again:
Scenario: 3x PVE hosts, each with a mgr, a mon, and 2 OSDs; replica 3/2
1] hard power loss on one node
result: ~24 s before disks are available again (grace period >= 20 seconds)
I thought that a node failure...
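The ~20 s floor matches Ceph's default osd_heartbeat_grace of 20 seconds, so the delay looks like expected detection time rather than a malfunction. If faster failover matters more than the risk of flapping OSDs during short hiccups, the grace period can in principle be lowered; a sketch, not a recommendation:
[osd]
     osd_heartbeat_grace = 10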
I am testing PVE 6 performance with Ceph on 2 nodes (replica 2/2) and 2 SSD OSDs per node; the network is one shared 10 Gbps link for all traffic on the PVEs (just for a small test).
HW: 2x HP DL380p G8; each node has 2x E5-2690 2.9 GHz, 96 GB RAM, a SmartArray P430, 1x Intel S4610 2 TB, 1x Kingston DC500M, 10 Gbps...
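Before measuring inside VMs, a baseline at the RADOS layer makes the numbers easier to interpret; a minimal sketch, assuming a throwaway pool named testbench:
rados bench -p testbench 60 write -b 4M -t 16 --no-cleanup
rados bench -p testbench 60 seq -t 16
rados -p testbench cleanup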
Intel ixgbe module. No SFP+ in the slot:
ip link shows
When I add an SFP+ module to the slot and reboot PVE, this gets logged (a possible workaround is sketched after the log):
Jul 31 12:36:56 pve-01 kernel: [ 255.607753] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
Jul 31 12:36:56 pve-01 kernel: [ 255.607755]...
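If the truncated log continues with the usual "unsupported SFP+ module type was detected" message, the common workaround is the ixgbe option for unsupported modules (an assumption on my part, since the log is cut off here):
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
update-initramfs -u
# then reboot, or reload the ixgbe module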
I have an HP DL380p G8 with 2x HP 10G modules. When trying the VLAN-aware network setup as in the documentation (vmbr0 without an IP, vmbr0.vlan with the IP), the module is failing:
root@pve-01:/tmp# ethtool -i eno1
version: 1.712.30-0 storm 22.214.171.124
firmware-version: mbi 7.14.79 bc 7.13.75...