Search results

  1. EFI VM's won't start under 7 Beta with Writeback Cache

    Yes, I typically use the kernel RBD module for performance. If I use librbd instead (shut down the VM, uncheck KRBD, change the cache setting from none to writeback, boot the VM), everything works as expected. The problem does not occur using librbd but appears to be isolated to the kernel RBD module when...
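    A minimal CLI sketch of that librbd test sequence, assuming the storage is named CephRBD and the VM is 111 as elsewhere in the thread (the post describes doing this through the GUI):
      qm shutdown 111                      # stop the VM before changing storage settings
      pvesm set CephRBD --krbd 0           # uncheck KRBD so disks are attached via librbd
      qm set 111 --scsi0 CephRBD:vm-111-disk-1,cache=writeback,discard=on,iothread=1,ssd=1
      qm start 111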
  2. I've really screwed up bad, PLEASE HELP!

    Your prior post was deleted, not edited.
  3. New R210ii 3 node cluster

    1GbE will limit you to a maximum of 125MB/s before any overhead. In reality you will top out around 100MB/s. Ceph is going to be communicating across all three nodes simultaneously, and you also have VM traffic. Hopefully Corosync is not on this network. Don't expect to achieve greater than 30-40MB/s...
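    The arithmetic behind those figures (my gloss, not part of the post): 1 Gbit/s ÷ 8 bits per byte = 125 MB/s raw line rate; Ethernet and TCP framing typically leave roughly 100 MB/s usable; and with replication traffic between the three nodes plus client and VM traffic sharing the same link, 30-40 MB/s of effective Ceph throughput is a realistic ceiling.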
  4. I've really screwed up bad, PLEASE HELP!

    Probably a stupid question… you took a recursive snapshot. Did you use the -R flag to send?
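    For reference, a minimal sketch of the pairing being asked about; the pool, dataset, and host names are placeholders, not from the thread:
      zfs snapshot -r tank/data@backup                                          # recursive snapshot
      zfs send -R tank/data@backup | ssh otherhost zfs receive -Fu backup/data  # -R sends the whole dataset tree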
  5. Can't Reshard OSD's under Ceph Pacific 16.2.4

    @t.lamprecht thank you for promptly updating the docs to warn about the bug. I was going to suggest it, but you already did it! (I knew exactly what I was risking when I tested out resharding... worst case scenario on a 3x replication test cluster was having to backfill one OSD... but it could...
  6. Proxmox VE 7.0 (beta) released!

    The usual three repos (pve-no-subscription, pvetest, and pve-enterprise) will all be available after the release. During the beta period, only the pvetest repo exists. As stated in the documentation, it should be possible to transition from pvetest to a subscription or no-subscription repo after...
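    For illustration, the corresponding APT entries under the standard Proxmox repository layout for PVE 7 on Debian Bullseye (a sketch, not quoted from the post):
      # beta / testing
      deb http://download.proxmox.com/debian/pve bullseye pvetest
      # after release, without a subscription
      deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
      # after release, with a subscription key
      deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise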
  7. Proxmox 7 Cluster and Backups questions

    I haven't had to test a restore yet, but my backups from 7 Beta to PBS are working as expected.
  8. EFI VM's won't start under 7 Beta with Writeback Cache

    Boots:
      scsi0: CephRBD:vm-111-disk-1,cache=none,discard=on,iothread=1,size=153601M,ssd=1,aio=io_uring
      scsi0: CephRBD:vm-111-disk-1,cache=none,discard=on,iothread=1,size=153601M,ssd=1,aio=native
      scsi0: CephRBD:vm-111-disk-1,cache=writeback,discard=on,iothread=1,size=153601M,ssd=1,aio=threads...
  9. EFI VM's won't start under 7 Beta with Writeback Cache

    I wanted to see if this was related to io_uring, so I ran "qm set VMID --aio native". I received an error.
      qm set 111 --aio native
      Unknown option: aio
      400 unable to parse option qm set <vmid> [OPTIONS]
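    For context (my note, not from the thread): aio is a per-drive property rather than a top-level qm option, so the equivalent change would be made inside the drive definition, e.g.:
      qm set 111 --scsi0 CephRBD:vm-111-disk-1,cache=none,discard=on,iothread=1,ssd=1,aio=native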
  10. EFI VM's won't start under 7 Beta with Writeback Cache

    I have waited several minutes between each iterative test. Tested using virtio-gpu, VMware, and SPICE. No change; the results are the same, so the graphics setting seems unrelated. I also turned off SSD emulation, IO thread, and discard to test those settings. No effect. What did work was changing the...
  11. Can't Reshard OSD's under Ceph Pacific 16.2.4

    Original Post Ceph Pacific introduced new RocksDB sharding. Attempts to reshard an OSD using Ceph Pacific on Proxmox 7.0-5 Beta result in corruption of the OSD, requiring the OSD's deletion and a backfill. The OSD can't be restarted or repaired after the failed reshard. I first stopped...
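    A sketch of the reshard invocation being described, assuming OSD id 0 and the default Pacific sharding spec (neither is given in the snippet):
      systemctl stop ceph-osd@0
      ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
          --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
      systemctl start ceph-osd@0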
  12. EFI VM's won't start under 7 Beta with Writeback Cache

    Original Post Here I can't get any virtual machines to start on 7.0-5 Beta when using writeback cache with Ceph-backed raw virtual disks. The console hangs before the EFI stage begins. See attached image. At the advice of @t.lamprecht, I narrowed the problem down to the cache setting. Won't...
  13. Proxmox VE 7.0 (beta) released!

    Ceph Pacific introduced new RocksDB sharding. Attempts to reshard an OSD using Ceph Pacific on Proxmox 7.0-5 Beta result in corruption of the OSD, requiring the OSD's deletion and a backfill. The OSD can't be restarted or repaired after the failed reshard. root@viper:~#...
  14. Proxmox VE 7.0 (beta) released!

    I can't get any virtual machines to start on 7.0-5 Beta. The console hangs before the EFI stage begins. See attached image.
      proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
      pve-manager: 7.0-5 (running version: 7.0-5/cce9b25f)
      pve-kernel-5.11: 7.0-3
      pve-kernel-helper: 7.0-3
      pve-kernel-5.4...
  15. Ceph OSD disk replacement: Prevent multiple re-balancing of data

    Assuming this is 3x replication (it isn't explicitly stated):
      1. Set the norebalance and nobackfill flags.
      2. Stop the OSDs of the faulty disks, set them out, then destroy the OSDs to remove them from the CRUSH map.
      3. Install the new disks and create new OSDs.
      4. Clear the global OSD flags and let the rebalance begin (command sketch below).
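    A command-level sketch of those steps; the OSD id and device name are placeholders, not from the thread:
      ceph osd set norebalance
      ceph osd set nobackfill
      systemctl stop ceph-osd@12            # repeat for each OSD on the faulty disk(s)
      ceph osd out 12
      pveceph osd destroy 12                # add --cleanup to also wipe the old partitions
      # swap in the replacement disk, then:
      pveceph osd create /dev/sdX
      ceph osd unset norebalance
      ceph osd unset nobackfill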
  16. Proxmox VE 7.0 (beta) released!

    Thank you @spirit and @dcsapak for your detailed and thorough answers!
  17. Proxmox VE 7.0 (beta) released!

    Awesome! An observation and a related question. I see that "EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages." Is "writeback" the recommended caching-mode for Ceph-backed disks? I was under the impression "no...
  18. ZFS Special Device question

    It will spill over. You can add a metadata special device to a pool at any time, but only newly created files will have their metadata stored on the special device; existing files will remain entirely on the previously existing vdevs. If a pool already exists with sample data, it's very easy to run commands to check...
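    A sketch of the relevant commands, with placeholder pool and device names (not from the thread):
      zpool add tank special mirror /dev/sdc /dev/sdd   # attach a mirrored special (metadata) vdev
      zpool list -v tank                                # per-vdev ALLOC shows what has landed on the special vdev
      zdb -bbb tank                                     # block statistics broken down by type, including metadata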