Search results

  1. Jackobli

    Ceph Managers Seg Faulting Post Upgrade (8 -> 9 upgrade)

    We are also seeing this issue in our TEST environment. Actually, we had planned to upgrade our PROD next week (there we have «Basic» support, not only Community like on TEST). We will hold the upgrade until this issue is well understood and resolved! I'd suggest to send out a warning for other...
  2. Jackobli

    Upgrade PVE 8 to 9, Migration of VM no longer possible due to key error

    Gnaaah, I was always looking into sshd_config.d while searching, but the configuration was also in ssh_config.d. I removed the files and now the migration works. Thank you for pointing at it again.
  3. Jackobli

    Upgrade PVE 8 to 9, Migration of VM no longer possible due to key error

    Hello. Migrated our LAB cluster from 8 to 9 following the documentation. The migration seemed ok, but after the last upgrade I was unable to move (migrate) VMs from one node to the other. I thought I had answered the question about changing sshd_config with "enter" (keep). Tried to renew the keys but still...
  4. Jackobli

    Display of storage capacity in the PVE WebGUI

    Why? Would we have to? We have tried to equip all nodes as uniformly as possible, i.e. no nodes with very many very large disks and, conversely, none with only a few small ones.
  5. Jackobli

    Default Host Type

    Sorry for jumping in, I got here while searching for other things. That would have to "lock" older nodes out of joining a cluster unless the CPU type of all VMs is changed. I think it's the usual case: individual mileage may vary. At least the standard type is now set to x86-64-v2-AES, before it was...
  6. Jackobli

    Dirty bitmap becomes invalid after storage migration.

    Hi again / Bonsoir. Maximilio from the support team writes: We also found that changing the disk size (Disk Action --> Resize) will lead to a loss of the dirty bitmap table (or perhaps the later manipulation of the partition table does). Regards, Urs
  7. Jackobli

    Dirty bitmap becomes invalid after storage migration.

    That would be another case, as already mentioned here: https://forum.proxmox.com/threads/backup-to-pbs-very-long-due-to-missing-dirty-bitmap.162551 I will open a case because we are losing bitmaps randomly on large VMs, which makes backups very unpredictable. I'll keep you updated if there are more...
  8. Jackobli

    Synchronize/Move Backups on the same server into another namespace

    Ok, tried it on the test PBS. 1st attempt: mv vm/108 /ns/MoveTest; logged into the test PBS; vm/108 is no longer visible under root; ns/MoveTest is empty; a reload didn't work either. 2nd attempt: created directory vm under ns/MoveTest, moved ns/MoveTest/108 to ns/MoveTest/vm, logged into the test PBS, vm... (see the sketch after these results)
  9. Jackobli

    Synchronize/Move Backups on the same server into another namespace

    Thank you guys (@Hannes Laimer and @alietz) for your reply. Do I understand correctly: I go into the shell of the *pbs* itself and may move a single or multiple backups from vm to ns? E.g. cd /zpool/where/we/store/our/backups; mv vm/100/ ns/NewHotAndCoolNameSpace/ Kind regards, Urs (who...
  10. Jackobli

    Synchronize/Move Backups on the same server into another namespace

    Sorry @Hannes Laimer to nag you, but should I open an incident, as this does not work for me, or am I just not able to understand the concept? Kind regards, Urs
  11. Jackobli

    Synchronize/Move Backups on the same server into another namespace

    Thank you. What I did: created a namespace "Target", then tried to create a pull sync job with the job location set to Local, pulling from the local datastore to "Target", but I'm unable to choose a source datastore; there is no root or other namespace available. The other way, to push, would mean I would...
  12. Jackobli

    PBS Expected speeds and performance

    I thought the recommendation for PBS on ZFS is to use mirrors. We run an AMD EPYC 9124 16-core with 384 GB RAM and 15 NVMe drives (7.68 TB, read intensive) organized in 7 mirrors and one spare (a layout sketch follows after these results). No ZFS tuning, in particular no compression or dedup (dedup kills it, really). Running...
  13. Jackobli

    Synchronize/Move Backups on the same server into another namespace

    Hello guys, I would like to "move" several existing backups on the same PBS from the root namespace into a newly created one. This has to be done because we have to use a different retention for these backups. I tried to do this through a sync job but ran into errors. Is there a...
  14. Jackobli

    Backup to PBS very long due to missing dirty bitmap

    Ok, I can confirm that resizing the disk in Proxmox *or* resizing the partition/volume and filesystem also makes the bitmap invalid. So this should be kept in mind. Regards, Urs
  15. Jackobli

    Backup to PBS very long due to missing dirty bitmap

    Thank you both (aaron & spirit). I don't think that any of the three actions occurred. The VM has an uptime of 8 days and the backup that started from scratch ran yesterday. We only back up to one server / namespace. And the last verify (yesterday at 10am) ended without any errors. IMHO...
  16. Jackobli

    Backup to PBS very long due to missing dirty bitmap

    Hi guys, I know there are many threads about this, but perhaps I haven't found the one that answers my question. What is the behavior, or where do I find the PVE documentation, for handling the "dirty blocks bitmap" used for incremental backups? We know: if a VM has been stopped, the bitmap is...
  17. Jackobli

    More a feature request than a question: please add an AVX CPU type to Proxmox

    As mentioned in this thread, MongoDB and perhaps other applications rely on AVX being available on the host CPU. The current standard / default is x86-64-v2-AES. We know that we could change the type to "host" or another appropriate type (we run mostly AMD Epyc Rome; see the sketch after these results), but for migrations it...
  18. Jackobli

    Adding NFS Share as Datastore in Proxmox Backup Server

    PBS receives data over HTTPS in very small chunks. HTTPS, or rather PBS, uses synchronous operations for such writes, meaning every file or chunk must be written before the next one gets processed. Due to the latency and overhead of NFS it is terribly slow, as you mention.
  19. Jackobli

    Ceph x daemons have recently crashed

    Ceph 18.2.4, everything on BlueStore. Perhaps interesting: it only happens on the nodes with rotating disks (SAS 10K RPM) and WAL/DB on (SATA) SSD; the other nodes are SSD-only (SAS SSD) and are not affected.
  20. Jackobli

    Ceph x daemons have recently crashed

    Sure, nearly on every reboot after patching. But only on some of our nodes that still have traditional hard disks. Dunno if the fact that they also have a WAL/DB on a separate SSD is the key. Would be nice to have that fixed.
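
Sketch for the namespace move discussed in the "Synchronize/Move Backups" thread (item 8): a minimal, hedged example of moving a backup group on the PBS datastore filesystem itself. The datastore path and VM ID are taken from the quoted posts; the exact on-disk layout and whether this is a supported procedure are exactly what the thread is trying to clarify, so treat it as an illustration only.

    # assumption: the datastore root is /zpool/where/we/store/our/backups
    # and the namespace "MoveTest" has already been created in PBS
    cd /zpool/where/we/store/our/backups
    mkdir -p ns/MoveTest/vm          # inside a namespace, groups again live under <type>/<id>
    mv vm/108 ns/MoveTest/vm/108     # move the whole group; the chunks in .chunks are shared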
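For the pool described in the "PBS Expected speeds and performance" post (item 12), a rough zpool layout sketch with 7 mirror vdevs and one hot spare. The pool name, device names and the ashift value are placeholders, not taken from the post.

    zpool create -o ashift=12 backup \
      mirror nvme0n1 nvme1n1 \
      mirror nvme2n1 nvme3n1 \
      mirror nvme4n1 nvme5n1 \
      mirror nvme6n1 nvme7n1 \
      mirror nvme8n1 nvme9n1 \
      mirror nvme10n1 nvme11n1 \
      mirror nvme12n1 nvme13n1 \
      spare nvme14n1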
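For the AVX feature request (item 17), changing a single VM's CPU type is the workaround the post alludes to; a hedged example using qm, with a made-up VM ID:

    # set the CPU type of VM 100 to "host" (exposes the physical CPU's AVX,
    # but ties live migration to identical or compatible hosts)
    qm set 100 --cpu host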