Search results

  1. [SOLVED] after node failure, local storage VM cannot be migrated

    Hi Fabian, it happened again; too bad the GUI does not allow it. Thanks again for the help. Regards, Kifeo
  2. [SOLVED] after node failure, local storage VM cannot be migrated

    Many thanks, the move operation did resolve the situation.
  3. [SOLVED] after node failure, local storage VM cannot be migrated

    Hello Fabian, I'm sorry, it was indeed on shared storage; I moved it to local storage and forgot to remove the HA resource. Here is the pveversion -v output:
    proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
    pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
    pve-kernel-5.4: 6.3-3
    pve-kernel-helper: 6.3-3...
  4. [SOLVED] after node failure, local storage VM cannot be migrated

    Hi everyone, I had a node hosting a VM on local storage; the node failed and was shut down for some time. Because of HA, the VM was migrated to another node and ended up in a failed state, since its local storage could not be migrated. Now the node is up again. I've put the HA state to disabled for...
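
    The "move operation" referenced in the replies above usually amounts to moving the VM's configuration file back to the node that still holds its local disks. A minimal sketch, assuming a hypothetical VM ID 100, node1 as the recovered node, and node2 as the node HA placed the VM on:

      # Remove the VM from HA management first so HA does not interfere
      ha-manager remove vm:100
      # Move the config back to the node that owns the local disks (pmxcfs makes this cluster-wide)
      mv /etc/pve/nodes/node2/qemu-server/100.conf /etc/pve/nodes/node1/qemu-server/
      # The VM then reappears under node1 and can be started there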
  5. Web UI cannot create CEPH monitor when multiple public nets are defined

    Hi, me again. Is the patch available in a particular pve version? I'm on 6.3-3 and the issue is still present. EDIT: Looking at the code, it seems it only covers mon, and not osd?
  6. [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    I had the same issue; unmounting the directory and waiting a little while resolved it. Thanks @wolfgang
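
    The unmount mentioned here essentially releases the stale mountpoint so pvestatd can remount the storage on its next activation cycle; a minimal sketch, assuming the storage is the default 'cephfs' entry mounted at /mnt/pve/cephfs:

      # Lazily unmount the stale CephFS mountpoint (lazy in case it is hanging)
      umount -l /mnt/pve/cephfs
      # pvestatd should remount it shortly; verify with:
      mountpoint /mnt/pve/cephfs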
  7. Upgraded to VE 6.3, ceph manager not starting

    I also encountered this issue after upgrading today.
  8. [SOLVED] cannot use ceph after joining cluster if already installed

    root@proxmox2:~# ls -la /etc/ceph/ceph.conf
    lrwxrwxrwx 1 root root 18 Nov 19 15:13 /etc/ceph/ceph.conf -> /etc/pve/ceph.conf
    Thanks for mentioning the network; the node was on the same network before, but these ports need to be allowed in:
    TCP 6800:7100 for the OSDs
    TCP 6789 and TCP 3300 for the monitors
    This is working...
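
    The port list above maps onto Ceph's standard listeners: 3300 and 6789 for the monitors, and the 6800:7100 range for OSDs and managers. A minimal sketch of equivalent Proxmox firewall rules in /etc/pve/firewall/cluster.fw, assuming a hypothetical Ceph network of 192.168.1.0/24:

      [RULES]
      # Ceph monitors (msgr2 and legacy msgr1)
      IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 3300
      IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 6789
      # Ceph OSD / MGR port range
      IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 6800:7100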
  9. [SOLVED] cannot use ceph after joining cluster if already installed

    Thanks Alwin, however this did not work. I also wanted to remove all ceph* packages to start over, but then apt wanted to remove proxmox-ve and everything :(
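
    Resetting a node's Ceph setup without uninstalling packages (which would drag proxmox-ve along, as described above) is usually done with pveceph purge; a minimal, destructive sketch, assuming no monitors or OSDs on this node are still needed:

      # Stop any remaining Ceph services on this node
      systemctl stop ceph-mon.target ceph-mgr.target ceph-osd.target
      # Remove this node's Ceph configuration and state; the packages stay installed
      pveceph purge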
  10. [SOLVED] cannot use ceph after joining cluster if already installed

    Hi Team! I reconfigured a server from scratch, then installed the Ceph packages but cancelled the configuration step after the install, so it would pick up the configuration of the already set-up cluster. Then I made it join the cluster. Now I cannot configure it from the GUI, and I get 'got timeout (500)'...
  11. bad ceph performance on SSD

    The network is not saturated; the first spike is with hdd (around 100M), the second with ssd (around 50M), and the maximum seen on the interface is 400Mb during VM migrations.
    ceph osd dump:
    osd.0 up in weight 1 up_from 31272 up_thru 32880 down_at 31271 last_clean_interval [30448,31269)...
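
    The "network is not saturated" observation above is typically confirmed with a node-to-node throughput test; a minimal sketch, assuming iperf3 is installed on both nodes and 192.168.1.12 is the peer's address on the Ceph network:

      # On the receiving node
      iperf3 -s
      # On the sending node, run a 30-second test against the peer
      iperf3 -c 192.168.1.12 -t 30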
  12. bad ceph performance on SSD

    I tried multiple times and on different days; I always get the same results. Yes, the crush rules are different: the difference is the hdd/ssd device class.
    rule replicated_hdd {
        id 1
        type replicated
        min_size 2
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0...
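
    Device-class rules like the one above are normally generated with the crush rule helper rather than written by hand; a minimal sketch, assuming the default root, a host failure domain, and illustrative pool names hddpool and ssdpool:

      # One replicated rule per device class
      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd crush rule create-replicated replicated_ssd default host ssd
      # Point each pool at the matching rule
      ceph osd pool set hddpool crush_rule replicated_hdd
      ceph osd pool set ssdpool crush_rule replicated_ssd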
  13. bad ceph performance on SSD

    Thanks for the reply; however, if the network were the cause, shouldn't it affect both pools during the benchmark?
  14. bad ceph performance on SSD

    SanDisk SDSSDP06, SanDisk SDSSDP12, Samsung SSD 850, and two NVMe drives on PCIe ports.
  15. bad ceph performance on SSD

    It is not clear to me why the HDDs perform much faster than the SSDs; the hddpool is almost unused. I would expect roughly the same speed from both kinds of disk.
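
    The per-pool comparison discussed in this thread is typically run with rados bench against each pool in turn; a minimal sketch, assuming pools named hddpool and ssdpool:

      # 60-second write benchmark per pool, keeping the objects for the read test
      rados bench -p hddpool 60 write --no-cleanup
      rados bench -p ssdpool 60 write --no-cleanup
      # Sequential read benchmark using the objects written above
      rados bench -p hddpool 60 seq
      rados bench -p ssdpool 60 seq
      # Remove the benchmark objects afterwards
      rados -p hddpool cleanup
      rados -p ssdpool cleanup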
  16. bad ceph performance on SSD

    proxmox1 is a NUC8i3; proxmox2 and proxmox3 are N54Ls; proxmox4 and proxmox5 are Dell 8200 SFF i7s. The SSDs are all 1 TB. These PCs are all connected to the same Gigabit switch and are up to date. What other info would you like?
  17. bad ceph performance on SSD

    Hello! I have this Proxmox and Ceph setup:
    root@proxmox1:~# ceph osd tree
    ID CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
    -1       18.19080 root default
    -7        0.90970     host proxmox1
     5   ssd  0.90970         osd.5...
  18. e1000e eno1: Detected Hardware Unit Hang:

    Same issue here on a NUC and an 8200 SFF.
  19. e1000 driver hang

    Same issue here on a NUC and an HP 8200 SFF.
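
    Both e1000/e1000e threads above concern the "Detected Hardware Unit Hang" message; a workaround commonly discussed for it is disabling segmentation offload on the affected NIC. A minimal sketch, assuming the NIC is eno1:

      # Turn off TCP segmentation offload and generic segmentation offload (resets on reboot)
      ethtool -K eno1 tso off gso off
      # To make this persistent, a post-up hook can be added to the interface or bridge-port
      # definition in /etc/network/interfaces, for example:
      #   post-up /sbin/ethtool -K eno1 tso off gso off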
