We don't want to use rsyslog for logfiles: only journald.
This breaks the "Tracking Center": it shows nothing when rsyslog is removed. To fix it, I've renamed pmg-log-filter to pmg-log-filter.real and wrote a tiny script as a replacement for pmg-log-filter:
#!/bin/sh
journalctl -u "postfix*" -u...
We've upgraded PVE from 7 to 8 and also the kernel from 6.2 to 6.5.11-6-pve: since these changes, the CPU usage of the kvm process is much higher than before. Before the upgrade, the idle CPU usage was ~2% and never above ~50%; after the upgrade, the idle CPU usage is at ~20% and it goes...
I wanted to calculate the fingerprint of the encryption key in Python; this is what works for me:
import base64
import hashlib
import hmac
import json
import sys
def get_fingerprint(encryptionkey):
    b = base64.b64decode(encryptionkey)
    id_key = hashlib.pbkdf2_hmac('sha256', b...
It seems the Linux kernel shipped with PVE doesn't have lockdown support:
$ cat /sys/kernel/security/lsm
capability,yama,apparmor
(output of pve-kernel-6.2.9-1-pve)
Is there a reason why it's disabled at compile time?
I couldn't find any info about it; the default Ubuntu kernel seems to...
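For reference, when the LSM is compiled in (CONFIG_SECURITY_LOCKDOWN_LSM), the current lockdown state shows up under securityfs; a small check, written as a sketch:

```shell
# Sketch: report the kernel's lockdown state, if the LSM was compiled in.
# Kernels built without CONFIG_SECURITY_LOCKDOWN_LSM expose no such file.
lockdown_state() {
    if [ -f /sys/kernel/security/lockdown ]; then
        cat /sys/kernel/security/lockdown   # e.g. "[none] integrity confidentiality"
    else
        echo "lockdown not available"
    fi
}
lockdown_state
```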
Since the upgrade to PVE 7.2, Linux VMs hang after live migration, see the screenshot. The VM is pingable, but SSH login doesn't work any more.
An example VM config:
agent: 1,fstrim_cloned_disks=1
balloon: 16000
boot: order=scsi0
cores: 14
machine: q35
memory: 12000
name: serverhang
net0...
What I read about it in several posts: yes. But when looking into it:
i.e.:
cd .chunks/683c
for i in *; do echo $i; hexdump -C $i | head; done
Some of the data seems uncompressed, as I can read plain data!
Are there exceptions, or are all chunks uncompressed and only the filesystem is...
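One way to check this from the shell (a sketch, not an official tool): unencrypted zstd-compressed data starts with the zstd frame magic 28 b5 2f fd, so looking at the first four bytes of each chunk gives a rough classification. Encrypted chunks use their own format, so this is only a heuristic:

```shell
# Heuristic sketch: a zstd frame begins with the magic bytes 28 b5 2f fd.
# Files without that magic are likely stored uncompressed (or encrypted).
chunk_kind() {
    if head -c4 "$1" | od -An -tx1 | grep -q '28 b5 2f fd'; then
        echo "zstd-compressed"
    else
        echo "no zstd magic"
    fi
}
# usage: for i in .chunks/683c/*; do echo "$i: $(chunk_kind "$i")"; done
```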
# du -hs /var/log/proxmox-backup/*
119M /var/log/proxmox-backup/api
3,9G /var/log/proxmox-backup/tasks
# find /var/log/proxmox-backup |wc -l
50269
# find /var/log/proxmox-backup -mtime +400 |wc -l
250
How can we shrink the size of the directory? Is a "find /var/log/proxmox-backup...
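Something along these lines might work (a sketch; the 400-day cutoff is just taken from the find above, and it dry-runs with -print so nothing is deleted until you've reviewed the list):

```shell
# Sketch: list task logs older than 400 days; review the output before
# replacing -print with -delete (or piping to xargs rm).
old_task_logs() {
    find "$1" -type f -mtime +400 -print
}
# dry run:
# old_task_logs /var/log/proxmox-backup/tasks
```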
To me it seems there are few reasons to use ZFS as the datastore for PBS:
PBS already does:
- verification
- deduplication
- compression
So, what's the benefit of using ZFS as a datastore for PBS? IMHO it just makes things slower. The only reason for using ZFS is when you use multiple disks /...
Since the upgrade to kernel 5.13, a kworker process uses one core at 100% from time to time, even when no PBS task is active:
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
54147 root 20 0 0 0 0 R 100.0 0.0 73:38.66...
How can I connect the "real network" to the SDN?
i.e.
router 192.168.1.254
proxmox node 1: 192.168.1.1
proxmox node 2: 192.168.1.2
proxmox node 3: 192.168.1.3
evpn vnet on all nodes: "test" with subnet 192.168.2.0/24 with gateway 192.168.2.254
What needs to be configured/set so the...
For testing purposes we bought the card from the topic. It seems to work out of the box with PVE 7, as two interfaces show up and can be configured, but I don't get a link using an FS SFP28 module and a switch: when I try to bring the interfaces up, it just says link down and the LED indicator on...
The services fail to start at system boot:
Sep 03 10:42:58 servername pmgbanner[42970]: Use of uninitialized value $ip in concatenation (.) or string at /usr/share/perl5/PVE/Network.pm line 645.
Sep 03 10:42:58 servername pmgbanner[42970]: hostname lookup 'servername' failed - got local IP...
Is it possible to limit concurrent tasks?
We use a large spinning disk as an archive, and it becomes really slow when the prune/GC/sync/verify jobs run at the same time.
It would be nice if these tasks could run sequentially: if a task is already running, wait until it's finished and then...
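As a workaround until something built-in exists, the jobs could be scheduled externally and serialized with flock(1); a sketch (the lock path and the PBS command are examples, not the only option):

```shell
# Sketch: run maintenance jobs under one lock, so a second job blocks
# until the first has finished instead of running concurrently.
run_serialized() {
    flock /tmp/pbs-maintenance.lock sh -c "$1"
}
# e.g. from cron, instead of the built-in schedules:
# run_serialized 'proxmox-backup-manager garbage-collection start mystore'
```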
I've tried a live restore of a VM, but it failed:
new volume ID is 'cephrbd:vm-105-disk-0'
new volume ID is 'cephrbd:vm-105-disk-1'
new volume ID is 'cephrbd:vm-105-disk-2'
rescan volumes...
VM is locked (create)
Starting VM for live-restore
kvm: -drive...
What filesystems are supported?
I guess this error happens because the partition is an LVM partition, but it contains an ext4 LVM volume.
Is LVM + ext4 support planned?
An ext2 (boot) partition on the same VM without LVM seems to work.
pve 6.4-4
pbs 1.1-5
We are backing up a Windows domain controller using Proxmox Backup. The ActiveDirectory_DomainService event source complains with Event 2089: it thinks that no backup is made: https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/ntds-replication-event-2089-backup-latency-interval
"Use QEMU...
Sometimes when I restart a cluster node, it seems this triggers a reboot of all other nodes ( https://forum.proxmox.com/threads/proxmox-crash-with-ceph-clock_skew-when-time-synchronsation-is-started-chrony.84663/ )
The same happened again:
Nodes b + c were running, I rebooted node a and...
When I go to Administration -> Storage/Disks -> ZFS -> select the pool and click on Detail, I get this error message.
How can this be fixed without recreating the pool? I guess it fails because I've created the ZFS pool manually and something is missing.
Proxmox Backup Server 1.0-8 is used.
I got a strange, reproducible crash in a Ceph cluster with 3 nodes:
Ceph had the state HEALTH_WARN with clock_skew detected. To fix this, I installed chrony and started it manually. A few seconds after starting chrony, the node instantly reset. This was reproducible on another running...
I've set up three nodes with direct connections to each other:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Method_2_.28routed.29
Ceph already works fine with it, and migration via the (slow) public network works too.
I want the live migrations to use the fast connections...
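If I remember the docs correctly, PVE lets you pin a dedicated migration network in /etc/pve/datacenter.cfg; a sketch, where the CIDR 10.15.15.0/24 is a placeholder for the mesh network:

```
# /etc/pve/datacenter.cfg
migration: secure,network=10.15.15.0/24
```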