Search results

  1. to satadom or not to satadom

    we are planning to add a few more servers. Due to our growing storage demands I would like to use all the 2.5" ports with high-capacity drives and add them to Ceph. The server will mostly be Ceph storage (read intensive) and a compute grid (the containers will be hosted on the Ceph). Does the...
  2. [SOLVED] proxmox does not see some ssds

    I just added some SSDs to grow our Ceph. We bought two batches of SAS SSDs: KPM51RUG7T68, HP branded (works, integrated into our Ceph); ARFX7680S5xnNTRI MZ-ILT7T60 (link for eBay listing for the same model) is not recognized (cannot be added to Ceph). Both are connected to an LSI 3008 in IT mode. The HDD is found...
  3. do you have an estimation when the beta stage will end?

    In general we are very happy, but since upgrading from 5.4 we have a random crash (every 1-2 months) on 4-7 nodes (simultaneously) due to an unknown backup failure. I tried to investigate but I couldn't find anything, so I'll wait for Proxmox Backup Server to replace the current backup schedules.
  4. do you have an estimation when the beta stage will end?

    Not to 6.2-10, but all nodes are at 6.2-9. It has occurred since I updated from 5.4.
  5. do you have an estimation when the beta stage will end?

    Bummer, currently the backup is giving us some problems (random crashes/reboots of multiple nodes in our cluster, and I cannot find the reason). I'll keep waiting. Thanks.
  6. do you have an estimation when the beta stage will end?

    I have read about it, and I want to use it on our core cluster to back up all LXC containers and VMs, but I am waiting for the "beta" to finish. Has anyone got an estimate?
  7. Ceph Storage question

    My question was about replica 1 (not replica 3). I am looking for a solution for fast, scalable storage where no redundancy is needed (after the task is done, the data inside is no longer needed).
  8. Ceph Storage question

    What is going to happen to the pool (replication 1) when one of the OSDs dies? The data is not important (it is just scratch/tmp), but it is required that the pool stays up and running for future tasks.
  9. Random host crashes after upgrade to 6.2-9

    It occurred again today. I found a few more details on the flow: a scheduled backup task started (for 8 LXC containers); 3 LXC backups finished successfully; the next container backup was starting: INFO: Starting Backup of VM 114 (lxc) (this is the end of the task log, after this all the relevant host...
  10. Random host crashes after upgrade to 6.2-9

    It happens on multiple hosts and, as far as I know, always on the same hosts (all hosts that have the latest kernel). We have a few more hosts which I did not reboot after the upgrade because they also work as Ceph storage. (I'll reboot them in a few weeks once we get a few more servers and increase...
  11. Random host crashes after upgrade to 6.2-9

    Yes, it is from the task with the error. From the syslog: Jul 19 22:30:02 pve-srv1 vzdump[13451]: INFO: starting new backup job: vzdump 110 112 115 119 114 101 126 129 --mode suspend ... ... ... Jul 19 23:17:00 pve-srv1 systemd[1]: Starting Proxmox VE replication runner... Jul 19 23:17:01 pve-srv1...
  12. Random host crashes after upgrade to 6.2-9

    I looked there but nothing. Here are the last logs in the task (it was a task that backs up multiple containers and VMs): INFO: Matched data: 4,001,484 bytes INFO: File list size: 65,521 INFO: File list generation time: 0.001 seconds INFO: File list transfer time: 0.000 seconds INFO: Total bytes sent...
  13. Random host crashes after upgrade to 6.2-9

    We have 10 hosts in the cluster, and once every few days (usually adjacent to a backup task) there are some random host reboots, all at the same time. proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve) pve-manager: 6.2-9 (running version: 6.2-9/4d363c5b) pve-kernel-5.4: 6.2-4 pve-kernel-helper...
  14. [SOLVED] what is the best place to post feature requests?

    What is the best place to post feature requests?
  15. Ceph Storage question

    I am looking for a storage solution for a scratchpad (I need fast storage, shared across the whole cluster; it can be unreliable, the data is used only for computational tasks) and after the computation the data is deleted. Currently I have the following setup: a Proxmox cluster consisting of 10 nodes, Ceph...
  16. ceph performance estimation calculation?

    There will be only read requests (no writes while the data is used). The 50 clients is just an estimate; in total we have around 500 clients, but each needs a chunk of data once every 10-40 seconds. Caching is useless because the data that needs to be read is not repeated. The data is used...
  17. ceph performance estimation calculation?

    We are planning to add a new Ceph pool that will consist of 60 HDDs across 5 servers with dual 40Gb networks (one for Ceph sync and one for clients). In the future all the HDD slots will be populated. How far is this assumption from reality: each HDD has a read speed of 100 MB/s, the data is... (a rough throughput sketch based on these numbers follows after the results list)
  18. [SOLVED] Ceph - hardware raid compatibility question

    I am planning to add a few more servers to our cluster, and most of them are intended to extend our Ceph cluster. The chassis in plan is https://www.supermicro.com/en/products/system/2U/6028/SSG-6028R-E1CR24L.cfm: this motherboard comes with a Broadcom 3008 SAS3 IT mode controller. Is it...
  19. [SOLVED] Can i disable the swap on the proxmox host?

    I have some Proxmox servers installed, but on some of them swap is enabled and always full (I think it might reduce overall performance). There is lots of RAM available. Is it safe to disable the swap on the Proxmox host? (It has around 20 containers and 4 VMs running.) A quick check of actual swap usage is sketched below.
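
For the swap question in result 19, here is a minimal sketch that only reports how much swap is actually in use versus how much RAM is still available, by parsing the standard Linux /proc/meminfo interface. It changes nothing on the host; the follow-up options mentioned in the comments (lowering vm.swappiness, swapoff -a plus removing the /etc/fstab entry) are generic Linux steps, not something confirmed by the thread.

```python
# Sketch: check swap vs. available RAM on a PVE host before deciding whether
# disabling swap is worth it. Read-only: it just parses /proc/meminfo.

def meminfo_kib():
    """Parse /proc/meminfo into a {field: value-in-KiB} dict."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])   # values are reported in kB
    return info

m = meminfo_kib()
swap_used_gib = (m["SwapTotal"] - m["SwapFree"]) / (1024 * 1024)
mem_avail_gib = m["MemAvailable"] / (1024 * 1024)

print(f"swap in use:         {swap_used_gib:.1f} GiB")
print(f"RAM still available: {mem_avail_gib:.1f} GiB")

# If swap stays full while plenty of RAM is available, the usual generic
# options are lowering vm.swappiness or disabling swap entirely (swapoff -a
# and removing the /etc/fstab entry). Testing on a single node first seems
# prudent, since containers with memory limits may behave differently
# without any swap.
```

Running this on one node before and after a backup window would show whether the "always full" swap is actively being touched or just holds stale pages.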
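For the throughput estimate in result 17, the sketch below is a back-of-the-envelope calculation using only the numbers stated in the post (60 HDDs, ~100 MB/s per disk, 5 servers, 40Gb client links). The chunk size and the 10-second request interval are hypothetical values used to illustrate the demand side; real Ceph read throughput will land well below these raw ceilings because of HDD seeks, replication and per-OSD overhead, none of which are modelled here.

```python
# Back-of-the-envelope Ceph read-throughput estimate (a sketch, not a benchmark).

num_hdds        = 60     # 60 HDDs across 5 servers (from the post)
hdd_read_mb_s   = 100    # assumed sequential read per HDD (from the post)
num_servers     = 5
client_net_gbit = 40     # one 40 Gb link per server for client traffic

# Raw aggregate disk read bandwidth.
disk_bw_mb_s = num_hdds * hdd_read_mb_s                   # 6000 MB/s

# Aggregate client-facing network bandwidth (40 Gb/s ~= 5000 MB/s per server).
net_bw_mb_s = num_servers * client_net_gbit * 1000 / 8    # 25000 MB/s

# The cluster can at best serve the smaller of the two ceilings.
ceiling_mb_s = min(disk_bw_mb_s, net_bw_mb_s)

# Hypothetical demand side: ~500 clients, one chunk every 10-40 s.
clients     = 500
chunk_mb    = 64     # hypothetical chunk size, not given in the thread
interval_s  = 10     # worst case: every client fetches a chunk every 10 s
demand_mb_s = clients * chunk_mb / interval_s              # 3200 MB/s

print(f"disk ceiling:    {disk_bw_mb_s} MB/s")
print(f"network ceiling: {net_bw_mb_s:.0f} MB/s")
print(f"upper bound:     {ceiling_mb_s} MB/s")
print(f"client demand:   {demand_mb_s:.0f} MB/s (assuming {chunk_mb} MB chunks)")
```

With these assumed inputs the disks, not the 40Gb links, are the first ceiling, which is why the per-HDD read-speed assumption matters most in the original question.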