Hi,
Most likely you installed PBS on top of a vanilla Debian installation? In that case the vanilla Debian kernel should be uninstalled, e.g. via apt remove linux-image-amd64 'linux-image-6.1*'
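A minimal sketch of that cleanup (the 6.1 glob matches the current Debian stable kernel series; adjust it to the kernels actually installed on your system):

```shell
# List installed kernels first so you don't remove the Proxmox one by accident
dpkg -l 'linux-image*' 'proxmox-kernel*' 'pve-kernel*'

# Remove the Debian meta-package and the stock Debian 6.1 kernels
apt remove linux-image-amd64 'linux-image-6.1*'

# Refresh the boot loader entries afterwards
update-grub
```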
Hi Victor,
One other 'temporary' thing that you may configure if there is a critical need for all OSDs to be up is to change the allocation_size for each OSD from 64k to 4k using the 'bluestore_shared_alloc_size' parameter [0], which you can...
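As a sketch (the OSD id is hypothetical, and note the option is only read at OSD startup, so each OSD has to be restarted for it to take effect):

```shell
# Set the allocation size to 4k for all OSDs (or target a single osd.N)
ceph config set osd bluestore_shared_alloc_size 4096

# The option is read at startup, so restart the affected OSDs
systemctl restart ceph-osd@3.service   # hypothetical OSD id

# Verify the value the OSD actually runs with
ceph config get osd.3 bluestore_shared_alloc_size
```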
Hi, yep, it's currently not implemented. Would you mind opening a bug report on https://bugzilla.proxmox.com (maybe check whether there already is one) so we can keep better track of it?
The failure domain must never be the OSD.
With failure domain = host, each host holds at most one copy or one chunk of the erasure-coded object. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for...
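As a sketch, a replicated CRUSH rule with host as the failure domain can be created and inspected like this (the rule and pool names here are made up):

```shell
# Create a replicated rule that spreads copies across hosts,
# using the 'default' CRUSH root
ceph osd crush rule create-replicated rep_by_host default host

# Inspect the rule; 'type host' in the chooseleaf step is the failure domain
ceph osd crush rule dump rep_by_host

# Assign it to a pool (hypothetical pool name)
ceph osd pool set mypool crush_rule rep_by_host
```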
Only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (it has been a while):
https://docs.ceph.com/en/latest/cephfs/nfs/
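If the orchestrator really isn't available, one alternative sketch is to mount CephFS with the kernel client on a gateway host and re-export it via the kernel NFS server (monitor address, paths, and subnet below are placeholders; re-exporting a network filesystem needs an explicit fsid in the export options):

```shell
# Mount CephFS with the kernel client (monitor address and keyring are examples)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Re-export it over NFS; re-exports require a fixed fsid
echo '/mnt/cephfs 192.168.1.0/24(rw,no_root_squash,fsid=101)' >> /etc/exports
exportfs -ra
```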
Hi!
It looks like there are general read/write errors on the storage medium. Which filesystem and storage device are used under / and /mnt/pve/backup-opnsense/? Does the syslog give more insight into the I/O errors beyond the ones here...
The summary of individual VMs shows over 100% RAM usage, but when you log into the systems, the usage is normal (e.g. 45%). It only affects some Windows VMs.
Hardware properties:
SCSI Controller: VirtIO SCSI
Machine type...
Could you try upgrading to pve-container version 6.1.2, which is currently available on the pve-no-subscription repo? This makes the attribute preservation code opt-in via the "Keep attributes" flag on mountpoints and should resolve this issue...
Good morning,
after a full backup of a CT I wanted to perform a restore, but I got the following message!
extracting archive '/mnt/pve/backup-opnsense/dump/vzdump-lxc-121-2026_02_12-04_00_14.tar.zst'
tar: .pfad zur Datei -...
LVM is a PVE-integrated way to use FC as shared storage. You can read this article to get a high-level understanding of the components involved:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it references iSCSI as...
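A rough sketch of such a setup on a multipathed FC LUN (the device, VG, and storage names below are placeholders):

```shell
# Initialize the FC LUN (via its multipath device) as an LVM physical volume
pvcreate /dev/mapper/mpatha
vgcreate vg_fc /dev/mapper/mpatha

# Register the volume group as a shared LVM storage in PVE
pvesm add lvm fc-lvm --vgname vg_fc --shared 1 --content images,rootdir
```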
One other issue that could be happening is that allocating the memory of the Windows VM just takes an (absurd) amount of time. We saw such behavior in the past, especially when the memory is fragmented. Could you try to reduce the amount of memory...
I gave you one possible answer, but you chose to ignore it.
As a funny coincidence, you answered one of my questions by posting this picture: your volblocksize is 16k.
So my theory was right.
So again, every 1TB VM disk will not only use 1TB...
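To make the space-amplification argument concrete, here is a small sketch of the standard raidz allocation rules (this is my own illustration; the thread only confirms volblocksize=16k, so the pool layout, ashift, and disk count below are assumptions):

```python
import math

def raidz_inflation(volblocksize, ashift, nchildren, parity):
    """Raw-to-logical allocation ratio for one zvol block on raidz.

    Follows the usual raidz allocation rules: the block is split into
    2**ashift sectors, parity sectors are added per stripe row, and the
    total allocation is padded up to a multiple of (parity + 1) sectors.
    """
    sector = 1 << ashift
    data = math.ceil(volblocksize / sector)          # data sectors per block
    rows = math.ceil(data / (nchildren - parity))    # stripe rows needed
    total = data + rows * parity                     # data + parity sectors
    pad = parity + 1
    total = math.ceil(total / pad) * pad             # pad to multiple of p+1
    return total / data

# Assumed layout: 6-disk raidz2, ashift=12, volblocksize=16k.
# Each block allocates 4 data + 2 parity sectors -> 1.5x,
# i.e. a 1 TB zvol would consume roughly 1.5 TB of raw space.
print(raidz_inflation(16 * 1024, 12, 6, 2))
```

With a larger volblocksize the ratio drops, because the fixed per-block parity and padding cost is amortized over more data sectors.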
This is the problem: the tape job sees a tape that is already part of the media set, writable, and still available (for a standalone drive, 'offline' means not in the drive but still on-site).
I'd bet that if you mark this one as vaulted too, it would...
Interesting problem. I am testing the restore of a Windows server with 4 SATA disks, running Proxmox 8.4.1 and Backup Server 3.4.1. The backup is made with the Backup Server, but when I restore the VM and the restore finishes, it starts again...
Dear Community,
I struggled last weekend to restart a VM that stopped during a power outage (this is running on a homelab server). I managed to open the Proxmox web UI once the power was back, but the VM (OpenMediaVault) cannot restart and...
Many people enable it even without RAID 0 or 1 because they expect a performance gain. Rapid Storage Technology (new, hot) sounds so much better than AHCI (old, over-50...).