Search results

  1. [SOLVED] Ceph - slow recovery/rebalance on fast SAS SSDs

    We suffered a server failure, and when the server came back Ceph had to restore/rebalance around 10-30TB of data. The SSDs are relatively high-end SAS SSDs (4TB Seagate Nytro and HP/Dell 8TB PM1643-based); the network is 2x40Gb Ethernet (one dedicated to sync/replication and one for...
  2. [SOLVED] rpool import issue - recovery after power failure

    I found a workaround: after receiving the "rpool not found" error, I wait at least 1-2 minutes and then run zpool import -N rpool -f. After that worked, I changed the ZFS sleep time, and now it is more stable.
  3. [SOLVED] rpool import issue - recovery after power failure

    Anyone got another idea? I am not willing to reinstall the server. I tried to recover the boot via the installation CD, but it failed.
  4. [SOLVED] rpool import issue - recovery after power failure

    It gives the error: cannot import 'rpool': no such pool available
  5. [SOLVED] rpool import issue - recovery after power failure

    How can I get to a menu to update the ZFS sleep args ("ZFS_INITRD_PRE_MOUNTROOT_SLEEP")?
  6. [SOLVED] rpool import issue - recovery after power failure

    For some reason I thought it worked and tried another reboot; now, after running zpool import, I get "no pools available".
  7. [SOLVED] rpool import issue - recovery after power failure

    The GRUB files do not exist. I updated the ZFS pre-mount sleep, but how do I "commit" the config change? update-initramfs does not exist.
  8. [SOLVED] rpool import issue - recovery after power failure

    *Yes, from the ISO. Output follows for zpool import, lsblk, ls -la /dev, blkid, and the end of dmesg:
  9. [SOLVED] rpool import issue - recovery after power failure

    This morning we had a power outage in our server room; all nodes recovered except one. This node is part of a Ceph cluster (one of three nodes), and pools are set to replication 3, so the data is safe and the cluster is stable (apart from a Ceph warning). Any idea how I can fix it?
  10. RAM usage sharing between LXC (Ubuntu), VM (Windows), and host

    The Proxmox host had the same RAM usage as before: the maximum RAM available to the VM was marked as used, even when the VM used around 1% of it.
  11. RAM usage sharing between LXC (Ubuntu), VM (Windows), and host

    I installed VirtIO and enabled the flag, but the system still uses all the RAM and it is not shared. I see "BalloonService" running on Windows.
  12. RAM usage sharing between LXC (Ubuntu), VM (Windows), and host

    I assume this is impossible, but I'll ask anyway. I need to put one LXC (Ubuntu) and one VM on each host (for large computational tasks); sometimes we need Windows and sometimes we need Linux. The problem is that the VM needs all its RAM preallocated, and it is not freed when not in use...
  13. LXC backup slow (how can I check what limits the backup speed?)

    NFS (QNAP-based) over a 40Gb network. The tmp dir is the default (what comes preconfigured with Proxmox).
  14. backup failure exit code 2

    How can I pull only the relevant file? libpve-common-perl >= 6.2-3. I did not see instructions.
  15. backup failure exit code 2

    Proxmox 6.2-12, any idea? INFO: starting new backup job: vzdump 140 --storage vqfiler1-lxc --remove 0 --node pve-srv1 --compress zstd --mode snapshot INFO: Starting Backup of VM 140 (lxc) INFO: Backup started at 2020-10-02 15:37:29 INFO: status = running INFO: CT Name: grid-master INFO...
  16. LXC backup slow (how can I check what limits the backup speed?)

    I have LXC containers hosted on Ceph storage (with speeds over 5GB/s). When making a backup using zstd, the speed is between 20-90MB/s, while the storage I am writing the backup to has a read/write speed of around 1GB/s. Proxmox 6.2.12.
  17. [SOLVED] lxc backup error when ZSTD selected

    My mistake, the repository was on stretch instead of buster. Updating... the upgrade fixed the issue, thanks.
  18. [SOLVED] lxc backup error when ZSTD selected

    Most nodes are 6.2-11 (the oldest is 6.1-2), upgraded but not rebooted yet.
  19. [SOLVED] lxc backup error when ZSTD selected

    On some hosts the ZSTD backup compression is not working; any idea why? This is the error: Parameter verification failed. (400) compress: value 'zstd' does not have a value in the enumeration '0, 1, gzip, lzo'. Solution: upgrade Proxmox to the latest version on all nodes.
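The rpool threads above boil down to one recovery sequence. A minimal sketch, assuming a Proxmox VE host booting root-on-ZFS with the default pool name rpool; the sleep value of 15 seconds and the /etc/default/zfs location are Debian/Proxmox conventions, not details taken from the threads:

```shell
# From the initramfs emergency shell (or a rescue ISO) after "rpool not found":
zpool import                 # list the pools the system can currently see
zpool import -N rpool -f     # import without mounting; -f overrides the hostid check
exit                         # resume the normal boot

# To make the fix stick, give slow controllers more time before the import.
# On Debian-based systems ZFS_INITRD_PRE_MOUNTROOT_SLEEP lives in /etc/default/zfs:
echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
update-initramfs -u -k all   # rebuild the initramfs so the new sleep takes effect
```

If update-initramfs is unavailable (as in thread 7, booted from a rescue ISO), mount the installed system and chroot into it first so the rebuild runs against the real root filesystem.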
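For the RAM-sharing threads, a Windows guest returns idle memory only when ballooning is active on both sides: the VirtIO balloon driver plus BalloonService inside the guest (as the poster confirmed), and a minimum/maximum memory range on the host. A hedged sketch; VM ID 140 and the sizes are illustrative, not taken from the threads:

```shell
# Host side: let VM 140 float between 4GiB and 16GiB.
# With --balloon set below --memory, Proxmox can reclaim idle guest RAM;
# with them equal (or balloon 0), the full allocation stays pinned.
qm set 140 --memory 16384 --balloon 4096

# Check that the guest driver is responding (type "info balloon" at the prompt):
qm monitor 140
```

Note that a fixed --memory with no balloon range matches the symptom in thread 10: the host reports the VM's full allocation as used regardless of what the guest actually consumes.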
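The zstd enumeration error in the last threads is a version mismatch: zstd was added to vzdump's compress options by a newer libpve-common-perl, so nodes still running the old package reject it. A sketch of the check-and-upgrade, assuming Proxmox VE 6.x nodes with a correct apt repository (the "stretch instead of buster" mistake in thread 17 is exactly a broken repo line):

```shell
# See which version each node has; per thread 14 the fix needs >= 6.2-3:
pveversion -v | grep libpve-common-perl

# Bring the node fully up to date (repeat on every node in the cluster):
apt update
apt full-upgrade
```

Upgrading every node, as the thread's posted solution says, keeps the enumeration consistent wherever the backup job is scheduled.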