Probably, but in my case it's only 1TB of backups, and this gave me both a read improvement and a third copy. Eventually it will also span multiple locations, as you pointed out.
My backups are on two different media (due to the mix of SSD and HDD...
Hello, I'm facing a problem adding a new node to an existing PVE cluster. After adding the node through the web interface, it shows as offline:
And on the new node all other nodes are shown offline:
And the error I'm getting is:
I've...
Yup, that upgrade guide is the one I followed to perform the upgrade. Didn't run into any strange issues during the upgrade either.
Just double-checked my repos and there are no references to bookworm.
hello, did you run this?
# https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
and that?
# https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#Update_the_configured_APT_repositories
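For reference, a quick way to check for leftover Debian 12 (bookworm) entries after the 8-to-9 upgrade is to grep the standard APT source locations (a minimal sketch; run it on the upgraded node):

```shell
# Search the standard APT source locations for leftover "bookworm" entries;
# after the 8 -> 9 upgrade these should all say "trixie" instead.
grep -rn 'bookworm' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null \
    || echo "no bookworm references found"
```

Any hit points at a repository file that still needs updating.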
You need to add the shared=1 option on this line. That way the container can be migrated, and the local mount point is skipped during migration. You need to make sure yourself that the local mount point is available on every node (e.g...
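For illustration, a minimal sketch of what such a line looks like in the container's config (/etc/pve/lxc/<vmid>.conf); the paths and the mp0 index are assumptions:

```
# Hypothetical bind mount; shared=1 tells PVE the path exists on every node,
# so migration does not try to transfer it.
mp0: /mnt/shared-data,mp=/mnt/shared-data,shared=1
```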
Yes, that did it. Thank you so much narrateourale
My mistake was that I didn't adjust the Proxmox VM Options -> boot order. I kept going around in circles trying to adjust the boot order in the Windows repair screen, which of course didn't work...
A bit of good news to kick off the new year: our pull request addressing the iSCSI DB consistency/compatibility issue has been accepted by the Open-iSCSI maintainers. This means the fix will be included upstream and should make its way into a...
Good find!
Experienced the same thing today; cleaned out old snapshots, and we'll see.
However I do rely on frequent snapshotting a lot, as Sanoid runs hourly on several machines. IO delay isn't seriously impacted on those.
Any thoughts?
After upgrading from 8.x to 9 (currently on 9.1.4), my logs are getting flooded with the same error every 5 seconds, where the PID shown cycles through the three PIDs pveproxy is running under. I've restarted the service and this is the...
It's kinda sad that it has to be done, but after hours or even days of debugging I found this thread and can confirm that this is still a bug as of 01/2026, on Proxmox 9.1.4, kernel 6.17.4-2-pve, with an HP ProDesk 400 G5 Desktop Mini in my case.
for the...
This isn't valid for ZFS. ZFS will simply repair any read that fails its checksum and rewrite it on the affected vdev.
EXCEPT this didn't actually work, which is why you don't see these anymore.
Abstraction doesn't change the underlying device...
Hey @softworx , did you mean µs (microseconds)? 20 ns is less than DRAM latency :-)
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
No, he also explains it later in the video as a separate CT without Home Assistant. And that's the guide I followed to install it.
I run Home Assistant as a VM and paperless as a separate CT.
I experienced similar issues during the install on a Dell R730xd. After editing the install script to add nomodeset, and enabling SR-IOV and the I/OAT DMA Engine in the R730xd BIOS, the Proxmox VE 9.1 installer ran successfully.
It would be easier via "qm monitor", which provides access to the Human Monitor Interface (HMP),
but I haven't had success with "block_resize device size -- resize a block image":
(qemu) block_resize drive-scsi0 107374182400
Error: Cannot grow device files...
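For what it's worth, a sketch of the size math and the usual alternative (the VMID 100 and disk name scsi0 below are assumptions, not from the post above): block_resize takes its size in bytes, and the "Cannot grow" error typically means the backing volume itself needs growing first, which qm resize handles in one step.

```shell
# block_resize takes a size in bytes; 100 GiB works out to:
SIZE_GIB=100
SIZE_BYTES=$(( SIZE_GIB * 1024 * 1024 * 1024 ))
echo "$SIZE_BYTES"   # prints 107374182400, matching the block_resize call above

# The usual route is to let Proxmox grow the underlying volume itself
# (hypothetical VMID 100 and disk scsi0):
#   qm resize 100 scsi0 +10G
```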
Also, check to make sure TCP Delayed Ack is disabled.
Delayed ACKs will sometimes hold TCP ACKs back to piggyback them on other data. This is normally fine, and the impact is not that dramatic for regular network traffic. For iSCSI it can do bad things to your...