Something else I noticed: above a certain queue size (--iodepth), performance drops off sharply.
READ: bw=3876MiB/s (4064MB/s), 3876MiB/s-3876MiB/s (4064MB/s-4064MB/s), io=37.9GiB (40.6GB), run=10001-10001msec
READ: bw=1396MiB/s...
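To see roughly where the drop-off begins, one way is to sweep --iodepth in a loop. This is only a sketch; the original fio invocation isn't shown here, so the test file path, block size, and read pattern below are placeholder assumptions:

    for qd in 1 4 8 16 32 64 128; do
        # sequential read benchmark at the current queue depth; adjust to match your original test
        fio --name=qd-sweep --filename=/path/to/testfile --size=10G \
            --rw=read --bs=1M --direct=1 --ioengine=libaio \
            --iodepth=$qd --runtime=10 --time_based --group_reporting \
            | grep READ:
    done

Comparing the summary lines per queue depth should make it obvious at which --iodepth the bandwidth collapses.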
FWIW, I had a stall on backups a couple of days ago after upgrading and unpinning my PBS server but not upgrading PVE to the newest kernel. I then upgraded the PVE cluster to the new kernel, and so far it's working as it should.
Hi All,
@dpearceFL (and everyone else), have you perhaps considered wolfSSL's FIPS 140-3 offerings?
Recently we've done work to override the cryptography underlying OpenSSL, gnuTLS, NSS and libgcrypt while keeping all their interfaces...
Check your /etc/hosts and /etc/network/interfaces. Please note that Proxmox does not support DHCP out of the box, and it's best to use a static IP address outside the DHCP range of the router.
Please share the output of:
grep -sR "172.16.1" /etc
The pvebanner command suggestion above was made under the assumption that /etc/hosts is configured correctly.
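For reference, a minimal static setup usually looks something like the following; the interface name eno1, the 192.0.2.x addresses, and the hostname pve01 are placeholders rather than values from this thread:

    # /etc/network/interfaces
    auto lo
    iface lo inet loopback

    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    # /etc/hosts
    127.0.0.1 localhost
    192.0.2.10 pve01.example.com pve01

The important part is that the hostname resolves to the node's static bridge address, not to 127.0.1.1 or a stale DHCP lease.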
I think there may indeed be some overlap with bug 6905.
In both cases, the issue seems to stem from devices being added unconditionally in PVE::QemuServer.pm, based on the assumption of a PCI-based “classic PC” machine.
With machine: microvm...
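For anyone trying to reproduce this, the machine type is just a single line in the VM config; the VMID below is a placeholder:

    # /etc/pve/qemu-server/<vmid>.conf (excerpt)
    machine: microvm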
You can also try curl -k https://[fqdn or IP]:8006; this will show not only that the port is open, but also that you get a full response.
I am looking for a configuration to "pre-stage" VM disks as DR for a primary storage failure. I understand the Proxmox limitation; thank you for confirming.
It looks like I'm going to have to look at using the TrueNAS replication tool for this... or...
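If it ends up being ZFS-level replication (whether via the TrueNAS tool or by hand), the underlying mechanism is just snapshots plus zfs send/receive. A rough sketch, with the pool/dataset names and the target host purely as placeholders:

    # initial full copy of a VM disk to the DR box
    zfs snapshot tank/vm-100-disk-0@presync
    zfs send tank/vm-100-disk-0@presync | ssh dr-host zfs receive -F backup/vm-100-disk-0

    # later incremental updates on a schedule
    zfs snapshot tank/vm-100-disk-0@sync2
    zfs send -i @presync tank/vm-100-disk-0@sync2 | ssh dr-host zfs receive -F backup/vm-100-disk-0

The pre-staged copies then only need a matching VM config on the DR side to be attachable after a failover.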
Exactly, I had manually capped the ARC at 8 GB, because at its default it was taking around 25 GB back then. Until recently that didn't matter, since the host never acted up.
I have since disabled ballooning on all VMs. But...
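For completeness, the usual way to pin the ARC at 8 GiB is via a module option (the value is 8 GiB in bytes):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592

    # apply immediately without a reboot, and rebuild the initramfs so it sticks at boot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    update-initramfs -u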
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
That is a somewhat critical piece of information for your particular situation.
You can create a trusted self-signed certificate for the IP if you wanted.
Without looking at the code I suspect that it is hard-coded. You may want to try to...
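Regarding the self-signed certificate suggestion above: one way to generate a certificate that is valid for an IP address is OpenSSL with a subjectAltName of type IP; the 192.0.2.10 address and the file names are placeholders:

    openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
        -keyout pve-ip.key -out pve-ip.crt \
        -subj "/CN=192.0.2.10" \
        -addext "subjectAltName=IP:192.0.2.10"

The certificate then still has to be installed on the node (Certificates panel in the GUI) and trusted on the client side.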
I've reinstalled Proxmox 8.4 and restored the backups I had. Now everything works fine. I don't know if it's because there's some kind of incompatibility between 8.4 and 9.1 backups or because 9.1 is still "young". I resolved it by reinstalling the...
Hello, I'm sorry to write here so long after the last message in this thread, but I have the same problem. I've set up the SDN, gave it a DHCP range, and enabled DHCP, but when I create an LXC or VM using the SDN zone name as the bridge with...
Hello everyone,
over the weekend I upgraded our PVE cluster from PVE 8.4 to 9.1.
Ceph was already on Squid beforehand and was updated from 19.2.1 to 19.2.3.
The update itself went smoothly, but since then Ceph has been reporting the following...
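The complete text of that warning should be visible with:

    ceph -s
    ceph health detail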
Hi,
I think I've made a mess of my storage settings:
I have two 1 TB disks, one SSD and one spinning disk; a screenshot of the node's disks is attached.
I have ZFS mounted as tank.
The problem is that, on VM creation:
the local storage shows only 25 GB out of 100 GB.
cannot...
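To untangle this, it would help to see how the storages are actually defined and how much space the pools really have, for example:

    cat /etc/pve/storage.cfg
    pvesm status
    zpool list
    zfs list -o name,used,avail,mountpoint
    df -h /var/lib/vz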