Search results

  1.

    Resize ZFS pool after running out of space (PBS datastore full)

    Over the weekend my PBS datastore ran out of space and now I'm unable to run a garbage collection. This is the output of discus: and this is the output of zpool list: What is the meaning of FREE, and second question: how could this help me out of my dilemma? Is it possible to use the...
  2.

    Visibility! Feature request for backup reports (or more details in email reports)

    What I can't see (neither in PBS nor in the email reports) is detailed information on which VM has backed up how many GB per day / week / month... Is this planned for future versions?
  3.

    Long time span with no update in log while gc is running...

    Is it normal that there are huge time spans where no update seems to happen in logfile (last three lines): 2024-10-25T08:21:40+02:00: starting garbage collection on store remote_backupstorage_nas07 2024-10-25T08:21:40+02:00: Start GC phase1 (mark used chunks) 2024-10-25T08:23:12+02:00: marked...
  4.

    Push sync job from PBS to NAS (nfs)

    My backup plan syncs the backup on the local storage of the PBS server to a remote NFS share on a NAS. If I set up a sync job for this, I think this scenario isn't envisaged by PBS, as I can only do this if I turn the remote storage (NAS) into a local storage by mounting the NFS share on PBS. So...
  5.

    Deactivate Sync Job

    Is it possible to add a checkbox to deactivate a scheduled sync job, as is already available for prune jobs? It would make testing easier (or emergency tasks ;-) ) Thanks in advance...
  6.

    Single SAS Port Passthrough (Dual Port HBA)

    Hello guys. Is it possible to pass through the ports of a dual-port SAS HBA to two different VMs? root@prox11:~# lspci -s 19:00.0 -v 19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) Subsystem: Broadcom / LSI SAS9300-8e Flags: bus...
  7.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    Hello guys! I'm setting up our new cluster at the moment. The cluster network is a 25 GBit full-mesh configuration between the nodes (up and running! ;-) ) To follow the KISS principle and reduce the point(s) of failure, I thought about a second mesh for corosync (with fallback over public...
  8.

    BackupExec (Windows VM) - Best practice Backup2Disk Storage

    Hello guys. I plan to change the hard disks of the B2D storage in our BackupExec VM. Currently this is a ZFS mirror configured on the PVE host, which is connected to the VM via a VirtIO block device because of problems with the VirtIO SCSI driver at installation time. (see...
  9.

    VM: Same name of disks on different storages

    Hello. I have a running VM on ProxVE 8 with 3 disks on 3 different storages. They all have the same (file-) name. That makes it a bit confusing if you check the content: Second problem: There is no "notes" field or similar that shows the name of the corresponding VM. This could be a...
  10.

    KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    Hello! We bought a new backup server last year. The file system is ZFS. The memory usage is always high, which seems to be normal with zfs. The server has 64GB RAM and there is one virtual machine with 16GB RAM (Windows Server 2019). So after one day (since a reboot) the memory usage of...
  11.

    High RAM Load PVE Host 8.0.4

    Hello. I have a PVE host with several virtual machines. The host always consumes a large amount of the installed RAM, even if no machine is started. Some details: at the moment there are 3 virtual machines running which are configured with 38 GB of RAM in total. But as you can see, the host...
  12.

    Drive errors in Windows Server VM (2019/2022)

    Good morning. I have two relatively new servers here (Server1: 2 months old / PVE 8.0.4 / VM Server 2019 | Server2: 1 year old / PVE 7.4.3 / VM Server 2022) that show drive errors in the Windows VM. Server 2 has already been restarted (with a repair performed), but it still shows...
  13.

    Problem BackupExec (B2D)2Tape Proxmox VE8

    Hello everyone, we run a standalone Proxmox server with a Windows Server 2019 VM in which BackupExec 20.6 is installed. The system disk (C: ) of the VM resides on a ZFS storage with 2 SSDs (mirror). The backup storage (D: ) also resides on a (separate) ZFS storage with...
  14.

    CPU Sockets / NUMA

    Hello guys, can somebody clear things up a little about the best core/socket setting with NUMA enabled? The manual says you should enable NUMA and set the number of sockets equal to the "real" sockets on the mainboard. That means to me (e.g. a VM with 4 cores) that I have to set the...
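    A minimal sketch of that advice, assuming a host with 2 physical sockets and a VM that should get 4 vCPUs (the VM ID 100 is hypothetical): mirror the host's socket count and enable NUMA in the VM config.

    ```
    # /etc/pve/qemu-server/100.conf (excerpt, hypothetical VM 100):
    # match the 2 physical host sockets, 2 cores each = 4 vCPUs,
    # and expose the NUMA topology to the guest
    sockets: 2
    cores: 2
    numa: 1
    ```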
  15.

    Backup the PBS (VM) itself?

    Hello everyone! I have the problem that the backup of the VM with PBS constantly fails with the error: INFO: creating Proxmox Backup Server archive 'vm/101/2023-04-14T06:01:12Z' INFO: issuing guest-agent 'fs-freeze' command INFO: issuing guest-agent 'fs-thaw' command ERROR: VM 101 qmp command...
  16.

    Minimal PVE Version for proxmox-backup-client

    We have a Proxmox v5 cluster up and running for years. Because of the upgrade complexity we plan the upgrade (i.e. a new installation) for the next cluster (maybe next year!?). In the meantime we are testing alternatives for our backup environment to meet "modern" expectations. We also evaluated the...
  17.

    Diskspace "incremental" backups

    Good morning users, as I read in the forum, Proxmox backups are sliced into chunks and only changed chunks are saved by follow-up backups. I defined a backup job for a single VM to SMB/CIFS storage (NAS), and after three days I got an alarm from my storage that the disk space is filling...
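    The chunk-based deduplication described in that last snippet can be illustrated with a toy sketch. This is not PBS's actual implementation (PBS uses content-addressed chunks of roughly 4 MiB in a datastore); the fixed chunk size, function names, and dict-as-chunk-store here are all illustrative assumptions.

    ```python
    import hashlib

    CHUNK_SIZE = 4  # toy size; real backup tools use much larger chunks


    def backup(data: bytes, store: dict) -> list:
        """Split data into fixed-size chunks and store each chunk under its
        digest. Chunks already in the store cost no additional space."""
        index = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # dedup: one copy per digest
            index.append(digest)
        return index


    store = {}
    first = backup(b"AAAABBBBCCCC", store)   # first backup: 3 new chunks
    second = backup(b"AAAABBBBDDDD", store)  # follow-up: only 1 new chunk
    print(len(store))                        # 4 unique chunks stored, not 6
    ```

    The second backup shares its first two chunks with the first one, so only the changed chunk consumes new space; that is why incremental backups can still fill a datastore over time as unique chunks accumulate.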