only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (has been a while)
https://docs.ceph.com/en/latest/cephfs/nfs/
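If you go the NFS route without cephadm or the orchestrator, the classic approach is a manually configured nfs-ganesha with the CephFS FSAL. A minimal sketch of the export block (all names, IDs and paths here are placeholders, not taken from any real setup — see the linked docs for the current syntax):

```
EXPORT {
    # placeholder export id and paths
    Export_ID = 100;
    Path = /;            # path inside the CephFS
    Pseudo = /cephfs;    # path clients mount
    Access_Type = RW;
    FSAL {
        Name = CEPH;           # use the CephFS FSAL
        Filesystem = "myfs";   # placeholder filesystem name
        User_Id = "ganesha";   # cephx user ganesha authenticates as
    }
}
```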
The dashboard and smb modules are, as the term suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a Ceph installation as a component of PVE.
Hello, thanks for your answer. I did of course block port 25 in the firewall, but my mail config uses a different port for that. I also don't understand why Proxmox uses port 25 in the first place, and I can't find any lead on this anywhere...
Thanks, but no: other VMs and CTs restore without problems. The backup server pulls the backup via NFS from the main server.
What puzzles me a bit: I have 4 Proxmox servers that do not run in a cluster, i.e. single hosts...
He is likely struggling with the stupid Google Drive quota where downloads are limited.
I ran into it several times myself.
I sent @avluis86 a PM with a link (for me the download from Google Drive worked, btw; just tested).
Hi,
most likely you installed PBS on top of a vanilla Debian installation? In that case the vanilla Debian kernel should be uninstalled, e.g. via apt remove linux-image-amd64 'linux-image-6.1*'
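After removing them and rebooting, it's worth double-checking which kernel is actually running and whether any stock Debian images are left (plain Debian tooling, nothing PBS-specific):

```shell
# Print the currently booted kernel; on a healthy PBS install this should
# be a Proxmox kernel, not the Debian stock one
uname -r
# List any kernel image packages still installed
dpkg -l 'linux-image*' 2>/dev/null | awk '/^ii/ {print $2}' || true
```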
Hi Victor,
One other 'temporary' thing that you may configure if there is a critical need for all OSDs to be up is to change the allocation_size for each OSD from 64k to 4k using the 'bluestore_shared_alloc_size' parameter [0], which you can...
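For reference, the change would look roughly like this (the OSD id is a placeholder; depending on the Ceph release, the OSD may need a restart or even re-provisioning for allocation-size-related options to take effect, so treat this as a sketch, not a recipe):

```
# Hypothetical: lower the shared allocation size for osd.3 from 64k to 4k
ceph config set osd.3 bluestore_shared_alloc_size 4096
# Verify the value now stored in the config database
ceph config get osd.3 bluestore_shared_alloc_size
```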
hi, yep, it's currently not implemented. Would you mind opening a bug report on https://bugzilla.proxmox.com (maybe check if there already is one) so we can keep better track of it?
The failure domain must never be the OSD.
With failure domain = host you only have one copy or one chunk of the erasure coded object in one host. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for...
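To make that concrete: with an erasure-coded profile of k=2 data and m=1 coding chunks and failure domain host, each chunk lands on a different host, so you need at least k+m = 3 hosts. A sketch (profile/pool names, k/m and PG counts are just example values):

```
# Example EC profile: one chunk per host
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
# Create a pool using that profile
ceph osd pool create ecpool 32 32 erasure ec-2-1
```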
Hi!
It looks like there are general read/write errors on the storage medium. Which filesystem and storage medium are used under / and /mnt/pve/backup-opnsense/? Does the syslog give more insight into the I/O errors beyond the ones here...
The summary of individual VMs shows over 100% RAM usage, but when logging into the systems themselves, usage is normal (e.g. 45%). Only some Windows VMs are affected.
Hardware properties:
SCSI Controller: VirtIO SCSI
Machine type...
Could you try upgrading to pve-container version 6.1.2, which is currently available on the pve-no-subscription repo? This makes the attribute preservation code opt-in via the "Keep attributes" flag on mountpoints and should resolve this issue...
Good morning,
after a full backup of a CT I wanted to perform a restore, but I got the following message!
extracting archive '/mnt/pve/backup-opnsense/dump/vzdump-lxc-121-2026_02_12-04_00_14.tar.zst'
tar: .pfad zur Datei -...
LVM is a PVE-integrated way to use FC as shared storage. You can read this article to get a high-level understanding of the components involved:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it references iSCSI as...
One other issue that could be happening is that allocating the memory of the Windows VM just takes an (absurd) amount of time. We had such behavior in the past, especially if the memory is fragmented. Could you try to reduce the amount of memory...
I gave you one possible answer, but you chose to ignore it.
As a funny coincidence you answered one of my questions by posting this picture; your volblocksize is 16k.
So my theory was right.
So again, every 1TB VM disk will not only use 1TB...
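For anyone following along: if the pool is a raidz, the padding overhead behind that statement can be estimated. The sketch below assumes a 3-disk raidz1 with ashift=12 (4 KiB sectors); those pool details are assumptions for illustration, not read from the screenshot:

```shell
# Estimate raidz1 allocation for one 16k volblock (assumed 3-wide, ashift=12)
volblocksize=16384
sector=4096              # 2^ashift
width=3                  # disks in the (assumed) raidz1 vdev
parity=1                 # raidz1

data_sectors=$(( volblocksize / sector ))
# one parity sector per row of (width - parity) data sectors, rounded up
rows=$(( (data_sectors + width - parity - 1) / (width - parity) ))
total=$(( data_sectors + rows * parity ))
# raidz rounds every allocation up to a multiple of (parity + 1) sectors
alloc=$(( (total + parity) / (parity + 1) * (parity + 1) ))
echo "$(( alloc * sector )) bytes allocated for $volblocksize bytes of data"
```

With these assumed numbers, every 16 KiB block occupies 24 KiB on disk, a factor of 1.5 — which is where "a 1TB VM disk will not only use 1TB" comes from.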