Search results

  1. Fail to backup some containers

    I have the latest proxmox installed. proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve) pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754) pve-kernel-5.4: 6.2-2 pve-kernel-helper: 6.2-2 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.41-1-pve: 5.4.41-1 pve-kernel-4.15: 5.4-18...
  2. [SOLVED] lxc container failed to start

    Making a backup fixed the issue (without any restore).
  3. [SOLVED] lxc container failed to start

    Currently trying to back it up (running without an error so far); afterwards I'll try what you suggested.
  4. [SOLVED] lxc container failed to start

    I think I found what caused it (I disabled the NFS share by mistake). I have re-enabled it, but the container still fails to start. I have access to the container's raw file. Can I restore it?
  5. [SOLVED] lxc container failed to start

    I noticed one of my LXC containers was down, and it failed to start with the following error: /usr/bin/lxc-start -F -n 143 lxc-start: 143: conf.c: run_buffer: 352 Script exited with status 13 lxc-start: 143: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "143" lxc-start...
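
    A minimal debugging sketch (not quoted from the thread; the log path and DEBUG level are arbitrary choices): rerun the container in the foreground with logging enabled so the output of the failing lxc.hook.pre-start script is captured:

      lxc-start -n 143 -F -l DEBUG -o /tmp/lxc-143.log   # run in the foreground, log at DEBUG level
      less /tmp/lxc-143.log                               # the pre-start hook's failure is logged near the end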
  6. [SOLVED] python: get lxc allocated ram/cores

    When running a Python script inside the LXC, all the Python functions I found return the host's core count, but they should return the amount of resources allocated to the LXC. Anyone got an idea?
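
    A minimal sketch of one approach (assuming cgroup v1 paths, as typically mounted inside a Proxmox 6.x LXC container): read the cgroup limit files, which a Python script can open like any ordinary file; the CPU quota divided by the period gives the allocated cores:

      cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us          # CPU time quota in microseconds, -1 means unlimited
      cat /sys/fs/cgroup/cpu/cpu.cfs_period_us         # scheduling period; quota / period = allocated cores
      cat /sys/fs/cgroup/memory/memory.limit_in_bytes  # RAM allocated to the container, in bytes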
  7. ceph rebalance osd

    Unfortunately we don't have a test environment :( We are a small company; this is all we have. I think we will just wait until the new SSDs arrive.
  8. ceph rebalance osd

    We use it in our production environment; what are the risks?
  9. ceph rebalance osd

    Just looked again: I have 5 clients (that mounted the CephFS) outside of Proxmox. They are Ubuntu 18.04 LTS (GNU/Linux 4.15.0-88-generic x86_64), ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable), so it is luminous. As far as I know this is the setup we...
  10. ceph rebalance osd

    { "mon": [ { "features": "0x3ffddff8ffacffff", "release": "luminous", "num": 3 } ], "mds": [ { "features": "0x3ffddff8ffacffff", "release": "luminous", "num": 3 } ], "osd"...
  11. ceph rebalance osd

    What are the jewel clients? I have a basic CephFS and pool.
  12. ceph rebalance osd

    I got an error: root@pve-srv3:~# ceph balancer mode upmap Error EPERM: min_compat_client "jewel" < "luminous", which is required for pg-upmap. Try "ceph osd set-require-min-compat-client luminous" before enabling this mode
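
    The error message itself names the fix; a sketch of the likely sequence (assuming every connected client already speaks luminous or newer, as the client-version snippet above suggests; the final "on" step is not quoted in the thread):

      ceph osd set-require-min-compat-client luminous   # raise the minimum client release so pg-upmap is allowed
      ceph balancer mode upmap                          # the command that failed above should now succeed
      ceph balancer on                                  # enable automatic rebalancing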
  13. Reduce ceph pool from replication 3/2 to 2/2

    We have a working production pool and we are running out of space; until we can get more SSDs (due to covid everything is slow), what is the best approach to do this?
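
    For reference, the replication settings are changed per pool; a minimal sketch with a hypothetical pool name ("vm_pool"), keeping in mind that dropping to two replicas on production data reduces redundancy:

      ceph osd pool set vm_pool size 2       # number of replicas to keep
      ceph osd pool set vm_pool min_size 2   # replicas required before the pool accepts I/O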
  14. ceph rebalance osd

    How do I set it in Proxmox, just from a root shell on one of the main Ceph hosts?
  15. ceph rebalance osd

    A slower pool is not relevant for this, because this storage has two main tasks: host our VMs and provide data for our computational grid.
  16. ceph rebalance osd

    I know, we are in the process of ordering more. I am still looking for the best performance/value for our company; currently there are no deals on fast SAS3 drives.
  17. [SOLVED] can I make two separate clusters?

    I see, thanks, now I know what to look for. BTW I have another question (https://forum.proxmox.com/threads/ceph-rebalance-osd.68168/); hopefully you can take a look.
  18. [SOLVED] can I make two separate clusters?

    I would like to add another volume (the same as "default").
  19. [SOLVED] can I make two separate clusters?

    I don't want to add hosts, just to add HDDs and make a second volume: one for SSD (already exists) and one for HDD.
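
    If those volumes are Ceph pools (the snippet does not say which storage backend is used), one common way to keep SSDs and HDDs in separate pools without adding hosts is a CRUSH rule per device class; a sketch with hypothetical rule and pool names:

      ceph osd crush rule create-replicated replicated_hdd default host hdd   # rule restricted to hdd-class OSDs
      ceph osd pool create hdd_pool 128 128 replicated replicated_hdd         # new pool placed on that rule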