Recent content by EPM

  1. [SOLVED] Ceph OSD not start on reboot

    I restarted it again and now it seems to be working fine! I ask again: do I need to recreate all the OSDs? Thank you!
  2. [SOLVED] Ceph OSD not start on reboot

    I re-created the OSD. Unfortunately, it still doesn't start after reboot. There is a problem with the owner of the sde device:
    brw-rw---- 1 root disk 8, 32 Jul 26 16.48 sdc
    brw-rw---- 1 root disk 8, 33 Jul 26 16.48 sdc1
    brw-rw---- 1 ceph ceph 8, 34 Jul 26 16.49 sdc2
    brw-rw---- 1 root disk 8...
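    A quick way to confirm and work around the ownership problem (a sketch only; the partition names are placeholders and the OSD id 8 is taken from the error later in this thread, so adjust both to your setup; note that a manual chown does not survive a reboot):

        # hand the affected OSD partitions back to the ceph user, then retry the OSD
        chown ceph:ceph /dev/sde1 /dev/sde2
        systemctl restart ceph-osd@8.service
        journalctl -u ceph-osd@8.service -b   # did the permission error go away?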
  3. [SOLVED] Ceph OSD not start on reboot

    proxmox-ve: 7.0-2 (running kernel: 5.11.22-2-pve)
    pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
    pve-kernel-5.11: 7.0-5
    pve-kernel-helper: 7.0-5
    pve-kernel-5.4: 6.4-3
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.11.22-2-pve: 5.11.22-4
    pve-kernel-5.11.22-1-pve: 5.11.22-2
    pve-kernel-5.4.119-1-pve...
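    For anyone comparing versions: a listing in this format is printed on the node by

        pveversion -v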
  4. [SOLVED] Ceph OSD not start on reboot

    Hi, I use 3 Ceph nodes with 3 OSDs per node. After rebooting the nodes, the 3rd OSD on the 3rd node cannot start:
    bluestore(/var/lib/ceph/osd/ceph-8/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-8/block: (13) Permission denied
    ls -l /dev/sd*
    brw-rw---- 1 root disk 8, 32 Jul 21 13.57 /dev/sdc...
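    A way to pin down which device the failing OSD actually points at (a sketch; the paths follow the ceph-8 error above):

        # the block symlink shows which device backs this OSD ...
        ls -l /var/lib/ceph/osd/ceph-8/block
        # ... and its resolved target must be owned by ceph:ceph, not root:disk
        ls -l "$(readlink -f /var/lib/ceph/osd/ceph-8/block)"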
  5. Proxmox best local system and storage allocation

    Thank you for the reply. In total: 3x 240GB (2+1), 4x 960GB (3+1).
  6. Proxmox best local system and storage allocation

    Hi, I want to ask for help: I am planning to install one Proxmox server and I am looking for the best allocation of storage space. 3 pcs 240GB SSD for the system (1 pc spare). 4 pcs 960GB SSD for local storage (1 pc spare). All SSDs are Intel D3-S4510. What would be the optimal solution (raid, filesystem...
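    One possible layout, purely as an illustration (device names are hypothetical, not from this thread): install the system on a ZFS mirror of two 240GB disks via the installer, keep the third as a cold spare, and build the 960GB disks into a raidz1 with a hot spare:

        # 3 active 960GB disks in raidz1, the 4th attached as hot spare
        zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd spare /dev/sde
        # register the pool as VM storage in Proxmox
        pvesm add zfspool tank-vm --pool tank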
  7. V 6.4 Live Migration Limitations?

    I set 1024 (Datacenter -> Options):
    2021-04-29 16:48:58 migration active, transferred 1018.0 MiB of 4.0 GiB VM-state, 1.4 GiB/s
    2021-04-29 16:48:59 migration active, transferred 2.0 GiB of 4.0 GiB VM-state, 1.0 GiB/s
    2021-04-29 16:49:00 migration active, transferred 3.0 GiB of 4.0 GiB...
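    For reference, the same limit can also be set in /etc/pve/datacenter.cfg directly (a sketch; as far as I know the value is read as MiB/s, which matches the roughly 1 GiB/s ceiling in the log above):

        # /etc/pve/datacenter.cfg
        bwlimit: migration=1024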
  8. V 6.4 Live Migration Limitations?

    Hi,
    Before upgrade:
    2021-04-29 08:39:58 migration speed: 1024.00 MB/s - downtime 88 ms
    2021-04-29 08:39:58 migration status: completed
    2021-04-29 08:40:01 migration finished successfully (duration 00:00:09)
    After upgrade:
    2021-04-29 08:43:21 average migration speed: 136.8 MiB/s - downtime 56 ms...
  9. PVE Backup Speed Optimization

    https://forum.proxmox.com/threads/vzdump-speed-improvment.63500/
  10. VZDump slow on ceph images, RBD export fast

    Hi, I also had problems with vzdump speed when backing up from Ceph to NFS: https://forum.proxmox.com/threads/vzdump-speed-improvment.63500/ The pigz parameters "-b 8192 -3" worked for me. Worth a try. L,
  11. Vzdump speed improvment

    Hi! I am testing vzdump backup and restore. I wrote a little wrapper script that adds the pigz parameter "-b 8192". Normal backup with the original pigz:
    INFO: status: 100% (652835028992/652835028992), sparse 3% (25770594304), duration 4795, read/write 117/115 MB/s
    INFO: transferred 652835 MB in 4795 seconds...
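    A minimal sketch of such a wrapper, assuming vzdump picks up whichever pigz comes first in PATH once pigz is enabled in /etc/vzdump.conf (the wrapper path is hypothetical):

        #!/bin/sh
        # hypothetical /usr/local/bin/pigz: forward all arguments to the real
        # pigz, but force a larger block size
        exec /usr/bin/pigz -b 8192 "$@"

    In /etc/vzdump.conf, pigz: 1 uses half of the cores and any value above 1 is taken as the thread count.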
  12. Upgrade ceph to Nautilus (14.2.2)

    Hi,
    ceph config set mon mon_crush_min_required_version hammer
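    If I recall the Nautilus release notes correctly, this relates to the warning Nautilus monitors raise when the CRUSH tunables are older than the configured minimum version; whether such a warning is active can be checked with:

        ceph health detail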