Search results

  1. Getting to BIOS

    that's it, thank you
  2. Getting to BIOS

    I am trying to boot into the BIOS on the console so I can boot from another source, such as an ISO, and I need to do it that way for various reasons. I cannot send the break signal "esc" because the console is delayed and the system boots first. Is this still the best way of doing it: qm sendkey vmid esc perhaps...
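The timing problem in result 2 (the Esc arriving after the boot loader has already run) is usually worked around by sending the key repeatedly right after starting the VM. A minimal sketch, assuming a placeholder VMID of 100 and printed as a dry run; drop the `echo` prefix on a real PVE host:

```shell
#!/bin/sh
# Dry-run sketch: start the VM, then send Esc repeatedly so at least one
# keypress lands during the short firmware window. VMID 100 is a placeholder.
VMID=100
echo "qm start $VMID"
i=1
while [ "$i" -le 20 ]; do
    echo "qm sendkey $VMID esc"   # repeat to beat the console delay
    i=$((i + 1))
done
```

Once inside the firmware setup, raising the boot-menu timeout makes future attempts less of a race.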
  3. Proxmox 6.1 ISO

    I need it for testing, specifically to check what the ceph client version is and whether it gets updated during upgrades from 6.1.x to 6.4.x, which my current cluster is on. It is regarding another post...
  4. Proxmox 6.1 ISO

    Anybody on this?
  5. Proxmox 6.1 ISO

    Is there an ISO of PVE 6.1 available? I searched the downloads but cannot find it. The archives don't have a link. Thank you
  6. Proxmox GUI freaks out after adding ceph storage

    Thank you for the prompt response. In the affected 5-node cluster, none of the nodes has ceph installed. They are just connecting to the existing/older ceph cluster, which consists of 4 nodes. They are using ceph-fuse client version 12.2.11+dfsg1-2.1+b1, and the new cluster on 16.2.7 says...
  7. Proxmox GUI freaks out after adding ceph storage

    I have a 5-node PVE cluster (ver 6.4-1, running kernel 5.4.124-1-pve) with existing 4-node ceph storage installed under Proxmox (ver 14.2.6) - working great for at least 700 days. Today I added a secondary 4-node ceph cluster running under Proxmox, ver 16.2.7. This cluster was working in the lab...
  8. Ceph pool size (is 2/1 really a bad idea?)

    Empirically tested: it works with 2 out of 4 nodes if they are the right ones :-) I cannot change the number of OSDs, as the servers have only 4 bays and need two for the system RAID - it is kind of a small cluster with limited RAM for non-demanding VMs. No issues with OSDs. I have relatively large...
  9. Ceph pool size (is 2/1 really a bad idea?)

    OK, that makes sense. I was just hopeful that I had missed something, based on aaron's post referencing 4 nodes with 2 nodes down. Perhaps there is a way to rig it, just like we can do pvecm expected -1 to keep PVE working when it loses quorum; is there something similar that can be done for ceph...
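There is a rough Ceph analogue to forcing PVE quorum: temporarily lowering a pool's min_size so I/O continues with fewer surviving replicas. A dry-run sketch; the pool name mypool is a placeholder, drop the `echo` on a real cluster, and restore min_size afterwards, since running at 1 replica risks data loss:

```shell
#!/bin/sh
# Dry-run sketch: let a degraded pool keep serving I/O with a single
# surviving replica. 'mypool' is a placeholder pool name.
POOL=mypool
echo "ceph osd pool set $POOL min_size 1"
# ...recover the failed nodes, wait for backfill to finish, then restore:
echo "ceph osd pool set $POOL min_size 2"
```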
  10. Ceph pool size (is 2/1 really a bad idea?)

    I am experimenting with my new cluster, still in pre-production: 4 nodes, 8 OSDs (2 per node, 2.2GB each), replicas 3/2, PG autoscale set to warn, ceph version 16.2.7. What you described above did not work. First, I simulated a 2-node failure, one node after the other, which as you described would...
  11. One OSD wears out faster than others

    I have a 4-server Proxmox cluster dedicated to ceph. I have an ssd pool with 8 ssds (2 per server); these are enterprise ssds with a 10 DWPD rating. I noticed that one ssd, osd.1, wears out faster than all the others. All the other ssds show a 3% used endurance indicator, but osd.1 shows 6% - all...
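Uneven wear like that in result 11 usually tracks uneven write load, so comparing per-OSD utilization is a reasonable first check before blaming the drive. A dry-run sketch of the relevant commands; /dev/sda is a placeholder device, and the `echo` would be dropped on a real node:

```shell
#!/bin/sh
# Dry-run sketch: commands to compare load and wear across OSDs.
CMD_DF="ceph osd df tree"        # data and PG count per OSD - look for skew
CMD_DEV="ceph device ls"         # maps physical devices to OSDs with health info
CMD_SMART="smartctl -A /dev/sda" # endurance counter; /dev/sda is a placeholder
echo "$CMD_DF"
echo "$CMD_DEV"
echo "$CMD_SMART"
```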
  12. An osd goes offline

    Put the new hard drive in and it has been running for 3 weeks with no issues. Thank you
  13. Ceph cluster connected with two separate Proxmox nodes

    I got it to work thanks to this post: https://forum.proxmox.com/threads/laggy-ceph-status-and-got-timeout-in-proxmox-gui.50118/#post-429902 It was an MTU setting: on one node it was 9000, while the cluster default was 1500. Thank you
  14. [SOLVED] Laggy 'ceph status' and 'got timeout' in proxmox gui

    I had the same problem, getting "error with 'df': got timeout" when trying to either install a VM with ceph storage or move an existing disk to ceph storage; otherwise it looked "good". I had the MTU size set to 9000 on one interface and to the default 1500 on all the rest. Once I changed it to 1500...
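The failure mode in results 13-14 is worth sketching: with mismatched MTUs, small packets (pings, health checks) get through while large frames are dropped, so everything "looks good" until a bulk operation like a disk move times out. A minimal illustration with placeholder values mirroring the forum case; on a real host the MTUs come from `ip link`:

```shell
#!/bin/sh
# Sketch: detect a jumbo-frame mismatch. The 9000/1500 values mirror the
# forum thread; on a real host read them with: ip -o link show
MTU_LOCAL=9000
MTU_PEER=1500
if [ "$MTU_LOCAL" -ne "$MTU_PEER" ]; then
    echo "MTU mismatch: $MTU_LOCAL vs $MTU_PEER"
    # align every hop to the same value (vmbr0 is a placeholder interface):
    echo "ip link set dev vmbr0 mtu 1500"
fi
```

Either lowering the one interface to 1500 or enabling jumbo frames end to end fixes it; a half-jumbo path is the worst of both.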
  15. Ceph cluster connected with two separate Proxmox nodes

    This is all testing in a lab environment. I have a 4-node ceph cluster (installed on Proxmox) and 1 Proxmox node (based on a Dell server) connected to it, working perfectly fine. I also have a secondary Proxmox node running under Hyper-V with nested virtualization enabled that I have problems with. The...
  16. An osd goes offline

    I understand, that was my plan. I thought I might have missed something obvious, but the fact that the cluster has been up for two years and this is the only drive crashing made me think it is the drive, or perhaps the server's bay somehow getting affected. I will get the disk and run some tests...
  17. An osd goes offline

    Anybody on this?
  18. An osd goes offline

    I have one osd that goes out every 3-7 days. It is an osd in a 4-node ceph cluster running under Proxmox and a member of a 16-osd pool (4 osds per node). The issue is recent; the pool has been up for almost 2 years. It has happened 3 times in the last two weeks. I checked the drive with SMART but it did not...
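For a single OSD that drops out every few days (result 18), a common next step is correlating the OSD daemon log with the kernel log at the time it went down, since SMART can pass while a bay, cable, or controller is flaky. A dry-run sketch; osd.1 and /dev/sdc are placeholders for the affected OSD and its disk:

```shell
#!/bin/sh
# Dry-run sketch: commands to correlate an OSD flap with kernel-level
# I/O errors. OSD id 1 and /dev/sdc are placeholders.
OSD_ID=1
echo "journalctl -u ceph-osd@$OSD_ID --since '7 days ago'"  # why the daemon exited
echo "dmesg -T | grep -i sdc"                               # link resets, I/O errors
echo "ceph osd find $OSD_ID"                                # confirm host and location
```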
  19. Updating from 6 to 7 possible issue

    I was testing the update on my test server with a no-subscription license, fully updated within the 6.x revision, and I got this: Processing triggers for initramfs-tools (0.140) ... update-initramfs: Generating /boot/initrd.img-5.11.22-3-pve Running hook script 'zz-proxmox-boot'.. Re-executing...