Recent content by RRR

  1. Debian 12 Turnkey templates

    So far, the TurnKey CT templates have been released quite quickly after each Debian release, but currently there are no new TurnKey templates. Does anyone know when the new Debian 12 templates will be released?
  2. Backup restore from an already deleted container.

    All backups from this CT are verified, so that cannot be the issue. Here is the log from the latest restore: Header Proxmox Virtual Environment 8.0.4 Storage 'pxbu-local' on node 'px01' Search: Logs recovering backed-up configuration from 'pxbu-local:backup/ct/139/2023-08-22T19:08:22Z' /dev/rbd4...
  3. Backup restore from an already deleted container.

    Hi! We have a Proxmox cluster and 2 backup servers: PVE 8.0.4, PBS 3.0-1, with the internal PBS syncing to an external PBS 2.4-1. I have made many different restore attempts (changing the VM ID, different servers from the cluster, both backup servers) but always get the same error message. Do you have any...
  4. CPU Flags

    Hello! I have a follow-up question about the CPU flags. Our hosts show the following flags in cpuinfo: flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon...
  5. Ceph speed as a pool and as a KVM

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a rados benchmark test, the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write: BW 900 MB/s, IOPS 220; read: BW 1400 MB/s, IOPS 350. SSD pool: write: BW 190 MB/s, IOPS 50...
  6. usage data VM

    Hello! We use an external tool for setting up KVMs. The tool shows graphs for CPU, RAM, traffic etc., which are linked to the VMID. If I delete a VM and another VM later gets the same VMID, the statistics of the already deleted VM are displayed. Does anyone know where this data is stored? I...
  7. Ceph Performance Understanding

    I am running the fio tests inside the virtual machines; they should not know anything about the Ceph filesystem, nor should they be concerned about performance or CPU time used for Ceph operations. I don't understand why I see 4.5 times higher write speeds for the NVMe pool in the rados...
  8. Ceph Performance Understanding

    Rados Write Benchmark:
    rados bench -p <SSDPool|NVMePool> 600 write -b 4M -t 16 --run-name `hostname` --no-cleanup
    Rados Read Benchmark:
    rados bench -p <SSDPool|NVMePool> 600 seq -t 16 --run-name `hostname`
    Rados Random Read Benchmark:
    rados bench -p <SSDPool|NVMePool> 600 rand -t 16 --run-name...
  9. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 Gbit network while the cluster_network uses a dedicated 40...
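
As a quick sanity check on the write-speed gap discussed in the Ceph posts above, the ratio follows directly from the rados bench write bandwidths quoted in the pool comparison (a minimal sketch using only the MB/s figures quoted above; the variable names are illustrative):

```python
# Write bandwidth (MB/s) reported by "rados bench ... write" for each pool,
# as quoted in the benchmark posts above.
nvme_write_bw_mb_s = 900  # NVMe pool write bandwidth
ssd_write_bw_mb_s = 190   # SSD pool write bandwidth

ratio = nvme_write_bw_mb_s / ssd_write_bw_mb_s
# Roughly 4.7x at the rados level, the same ballpark as the ~4.5x
# difference observed from inside the VMs with fio.
print(f"NVMe pool writes are about {ratio:.1f}x faster than the SSD pool")
```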