Search results

  1. Ceph Storage question

    My question was about replica 1 (not replica 3). I am looking for a fast, scalable storage solution where no redundancy is needed (once the task is done, the data inside is no longer needed).
  2. Ceph Storage question

    What will happen to the pool (replication 1) when one of the OSDs dies? The data is not important (it is just scratch/tmp), but the pool must stay up and running for future tasks.
  3. Random host crashes after upgrade to 6.2-9

    It occurred again today. I found a few more details about the flow: a scheduled backup task started (for 8 LXC containers); 3 LXC backups finished successfully, then the next container backup started: INFO: Starting Backup of VM 114 (lxc) (this is the end of the task log; after this, all the relevant host...
  4. Random host crashes after upgrade to 6.2-9

    It happens on multiple hosts and, as far as I know, always on the same hosts (all hosts that have the latest kernel). We have a few more hosts which I did not reboot after the upgrade because they also work as Ceph storage. (I'll reboot them in a few weeks once we get a few more servers and increase...
  5. Random host crashes after upgrade to 6.2-9

    Yes, it is from the task with the error. From the syslog: Jul 19 22:30:02 pve-srv1 vzdump[13451]: INFO: starting new backup job: vzdump 110 112 115 119 114 101 126 129 --mode suspend ... ... ... Jul 19 23:17:00 pve-srv1 systemd[1]: Starting Proxmox VE replication runner... Jul 19 23:17:01 pve-srv1...
  6. Random host crashes after upgrade to 6.2-9

    I looked there but found nothing. Here are the last logs in the task (it was a task that backed up multiple containers and VMs): INFO: Matched data: 4,001,484 bytes INFO: File list size: 65,521 INFO: File list generation time: 0.001 seconds INFO: File list transfer time: 0.000 seconds INFO: Total bytes sent...
  7. Random host crashes after upgrade to 6.2-9

    We have 10 hosts in the cluster, and once every few days (usually adjacent to a backup task) there are some random host reboots, always at the same time. proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve) pve-manager: 6.2-9 (running version: 6.2-9/4d363c5b) pve-kernel-5.4: 6.2-4 pve-kernel-helper...
  8. [SOLVED] what is the best place to post feature requests?

    What is the best place to post feature requests?
  9. Ceph Storage question

    I am looking for a storage solution for scratch space (it needs to be fast, shared across the whole cluster, and can be unreliable; the data is used only for computational tasks), and after the computation the data is deleted. Currently I have the following setup: a Proxmox cluster consisting of 10 nodes, Ceph...
  10. ceph performance estimation calculation?

    There will be only read requests (no writes while the data is in use). The 50 clients is just an estimate; in total we have around 500 clients, but each needs a chunk of data only once every 10-40 seconds. Caching is useless because the data that needs to be read is not repeated. The data is used...
  11. ceph performance estimation calculation?

    We are planning to add a new Ceph pool that will consist of 60 HDDs across 5 servers with dual 40Gb networks (one for Ceph sync and one for clients). In the future all the HDD slots will be populated. How far is this assumption from reality: an HDD has a read speed of 100 MB/s, the data is...
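The arithmetic behind such an estimate can be sketched in a few lines. This is a back-of-envelope sketch only, using the figures from the posts above (60 HDDs at ~100 MB/s, 500 clients, one chunk every 10 seconds in the worst case); real Ceph throughput will be lower due to seeks, replication, and network overhead:

```python
# Back-of-envelope read-throughput estimate; all figures are assumptions
# from the thread, not measured results.
num_osds = 60
mb_per_s_per_hdd = 100                # optimistic sequential figure; random reads are far lower
aggregate_mb_s = num_osds * mb_per_s_per_hdd  # theoretical ceiling: 6000 MB/s

clients = 500
requests_per_client_per_s = 1 / 10    # worst case: one chunk every 10 s
total_requests_per_s = clients * requests_per_client_per_s  # 50 req/s

# Largest chunk size the raw ceiling could sustain at that request rate:
max_chunk_mb = aggregate_mb_s / total_requests_per_s
print(max_chunk_mb)  # → 120.0 (MB per chunk, before any overhead)
```

Even this ceiling assumes every read is sequential and perfectly spread across all 60 spindles; concurrent readers turn HDD access into seek-bound random I/O, so the usable figure is typically a fraction of it.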
  12. [SOLVED] Ceph - hardware raid compatibility question

    I am planning to add a few more servers to our cluster, most of them with the intention of extending our Ceph cluster. The chassis in plan is https://www.supermicro.com/en/products/system/2U/6028/SSG-6028R-E1CR24L.cfm: this motherboard comes with a Broadcom 3008 SAS3 IT-mode controller. Is it...
  13. [SOLVED] Can i disable the swap on the proxmox host?

    I have some Proxmox servers installed, but on some of them swap is enabled and always full (I think it might reduce overall performance), even though there is lots of RAM available. Is it safe to disable swap on a Proxmox host? (It has around 20 containers and 4 VMs running.)
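For reference, disabling swap on a Linux host generally looks like the sketch below. This is a generic outline using standard util-linux/sysctl commands, not Proxmox-specific advice, and it assumes there is enough free RAM to absorb the pages currently swapped out:

```shell
# Sketch only: turn swap off now and keep it off after reboot (run as root).
swapon --show                      # list the active swap devices first
swapoff -a                         # fold swapped pages back into RAM (needs free RAM)
# Comment out the swap line in /etc/fstab so it does not return on reboot:
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Gentler alternative: keep swap but discourage its use.
sysctl vm.swappiness=10
```

Whether this is wise depends on workload: with containers and VMs on the host, a small amount of swap can still act as a safety valve against the OOM killer.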
  14. Fail to backup some containers

    I have the latest Proxmox installed: proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve) pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754) pve-kernel-5.4: 6.2-2 pve-kernel-helper: 6.2-2 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.41-1-pve: 5.4.41-1 pve-kernel-4.15: 5.4-18...
  15. [SOLVED] lxc container failed to start

    Making a backup fixed the issue (without any restore).
  16. [SOLVED] lxc container failed to start

    Currently trying to back it up (running without an error so far); afterwards I'll try what you suggested.
  17. [SOLVED] lxc container failed to start

    I think I found the issue that caused it (I disabled the NFS share by mistake). Now I have re-enabled it, but it still fails to start. I have access to the container's raw file. Can I restore it?
  18. [SOLVED] lxc container failed to start

    I noticed one of my LXC containers was down, and it failed to start with the following error: /usr/bin/lxc-start -F -n 143 lxc-start: 143: conf.c: run_buffer: 352 Script exited with status 13 lxc-start: 143: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "143" lxc-start...
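When a pre-start hook exits with a non-zero status like this, a common next step is to re-run the container in the foreground with debug logging. A generic sketch, where the container ID 143 comes from the error above and the log path is an arbitrary choice:

```shell
# Sketch: capture a debug log from the failing start (ID 143 from the error above).
lxc-start -F -n 143 -l DEBUG -o /tmp/lxc-143.log
# Look for the failing hook and any mount errors in the captured log:
grep -Ei 'pre-start|error|fail' /tmp/lxc-143.log
# Since a disabled NFS share was suspected in this thread, confirm storages are online:
pvesm status
```

The pre-start hook is where Proxmox activates the container's storage, so a missing mount (such as the disabled NFS share mentioned above) typically shows up here.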
