Search results

  1. Proxmox 6.1 ISO

    Anybody on this?
  2. Proxmox 6.1 ISO

    Is there an ISO of PVE 6.1 available? I searched the downloads but cannot find it. The archives don't have a link. Thank you.
  3. Proxmox GUI freaks out after adding ceph storage

    Thank you for the prompt response. In the 5-node cluster that was affected, none of the nodes have ceph installed. They are just connecting to the existing/older ceph cluster that consists of 4 nodes. They are using ceph-fuse client version 12.2.11+dfsg1-2.1+b1 and the new cluster on 16.2.7 says...
  4. Proxmox GUI freaks out after adding ceph storage

    I have a 5-node PVE cluster (ver 6.4-1, running kernel 5.4.124-1-pve) with existing 4-node ceph storage installed under Proxmox (Ceph ver 14.2.6) - working great for at least 700 days. Today I added a secondary 4-node ceph cluster running under Proxmox with Ceph ver 16.2.7. This cluster was working in the lab...
  5. Ceph pool size (is 2/1 really a bad idea?)

    Empirically tested: working with 2 out of 4 nodes if they are the right ones :-) Cannot change the number of OSDs as the servers have only 4 bays and need two for the system RAID - it is kind of a small cluster with limited RAM for non-demanding VMs. No issues with OSDs. I have relatively large...
  6. Ceph pool size (is 2/1 really a bad idea?)

    OK, that makes sense. I was just hopeful that I had missed something, based on aaron's post referencing 4 nodes with 2 nodes down. Perhaps there is a way to rig it, just like we can do pvecm expected -1 to keep PVE working when it loses quorum. Is there something similar that can be done for ceph... [a hedged sketch of the closest Ceph knob, min_size, follows after this list]
  7. Ceph pool size (is 2/1 really a bad idea?)

    I am experimenting with my new cluster, still in pre-production: 4 nodes, 8 OSDs (2 per node), 2.2GB each OSD, replicas 3/2, PG autoscale set to warn, ceph version 16.2.7. What you described above did not work. First, I simulated a 2-node failure, one node after the other, which as you described would...
  8. One OSD wears out faster than others

    I have a 4-server Proxmox cluster dedicated to ceph. I have an ssd pool with 8 ssds (2 per server); these are enterprise ssds with a 10 DWPD rating. I noticed that one ssd, osd.1, wears out faster than all the other ssds. All other ssds show a 3% used-endurance indicator but osd.1 shows 6% - all... [a sketch of how to compare wear and per-OSD load follows after this list]
  9. An osd goes offline

    Put the new hard drive in and it has been running for 3 weeks with no issues. Thank you
  10. Ceph cluster connected with two separate Proxmox nodes

    I got it to work thanks to this post: https://forum.proxmox.com/threads/laggy-ceph-status-and-got-timeout-in-proxmox-gui.50118/#post-429902 It was an MTU setting: on one node it was 9000, while the cluster default was 1500. Thank you.
  11. [SOLVED] Laggy 'ceph status' and 'got timeout' in proxmox gui

    I had the same problem, getting "error with 'df': got timeout" when trying to either install a VM with ceph storage or move an existing disk to ceph storage; otherwise it looked "good". I had the MTU set to 9000 on one interface and all the rest at the default 1500. Once I changed it to 1500... [a sketch of checking and aligning MTUs follows after this list]
  12. Ceph cluster connected with two separate Proxmox nodes

    This is all testing in a lab environment. I have a 4-node ceph cluster (installed on Proxmox) and 1 Proxmox node (based on a Dell server) connected to it, working perfectly fine. I also have a secondary Proxmox node running under Hyper-V with nested virtualization enabled that I have problems with. The...
  13. An osd goes offline

    I understand, that was my plan. I thought I might have missed something obvious, but the fact that the cluster has been up for two years and this is the only drive crashing made me think it is the drive, or perhaps the server's bay somehow getting affected. I will get the disk and run some test...
  14. An osd goes offline

    Anybody on this?
  15. An osd goes offline

    I have one osd that goes out every 3-7 days. It is an osd in a 4-node ceph cluster running under Proxmox and a member of a 16-OSD pool (4 OSDs per node). The issue is recent; the pool has been up for almost 2 years. It has happened 3 times in the last two weeks. I checked the drive with SMART but it did not...
  16. Updating from 6 to 7 possible issue

    I was testing the update on my test server with the no-subscription license, fully updated within the 6.x revision, and I got this: Processing triggers for initramfs-tools (0.140) ... update-initramfs: Generating /boot/initrd.img-5.11.22-3-pve Running hook script 'zz-proxmox-boot'.. Re-executing... [a pre-upgrade check sketch follows after this list]
  17. Can I backup to a network store/share

    I will give it a try. Otherwise, what do you recommend for 160+ VMs in terms of local drives, not capacity but performance-wise? SAS vs. SATA, SSD vs. HDD spinners? I think in our case the bottleneck is Ceph storage, so the saving in backup speed just comes from incremental backups and not... [a sketch of attaching a network share as a backup target follows after this list]
  18. Can I backup to a network store/share

    Is 10Gbps fast enough for this? You referenced storage also. I am testing PBS on a VM running on Proxmox and the incremental backup makes a difference already, using regular hard drives for this. I think my limitation/bottleneck is the Ceph storage on which my VMs run and not necessarily the...
  19. Can I backup to a network store/share

    Good job with the server; based on my tests it works great. I already have a backup solution with TBs of storage. It is kind of hard to justify the additional investment in the storage itself for an additional backup server. I have about 160 VMs and growing, so the incremental backup is what I need, the...
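
The quorum question in result 6 has no exact Ceph counterpart, but the closest knob is the pool's min_size. A minimal sketch, assuming a replicated pool named vm-pool (a placeholder, not a name from the thread); lowering min_size to 1 lets placement groups keep serving I/O with a single surviving replica, at the cost of redundancy for anything written in the meantime:

```
# Hedged sketch: inspect and temporarily relax min_size on a
# replicated pool. "vm-pool" is a placeholder pool name.

# Show size/min_size for every pool
ceph osd pool ls detail

# Allow I/O with only one surviving replica (risky: new writes have
# no redundancy while this is in effect)
ceph osd pool set vm-pool min_size 1

# Restore the safer default once the failed nodes are back
ceph osd pool set vm-pool min_size 2
```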
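
For the uneven SSD wear in result 8, the endurance counters and the per-OSD load can both be read from the shell. A minimal sketch; /dev/sdb is a placeholder device path, and the exact SMART attribute names vary by vendor:

```
# Hedged sketch: compare SSD endurance and per-OSD utilisation.
# "/dev/sdb" is a placeholder; "osd.1" is the OSD named in the thread.

# SMART endurance counters ("Percentage Used" on NVMe, vendor-specific
# wear/endurance attributes on SATA)
smartctl -a /dev/sdb

# Compare data, PG count and utilisation per OSD; a CRUSH imbalance can
# make one OSD take noticeably more writes than its peers
ceph osd df tree
```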
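
The MTU mismatch behind results 10 and 11 can be confirmed before touching the Ceph config. A minimal sketch, assuming the storage traffic runs over an interface called ens18 and a peer node at 10.0.0.2 (both placeholders); the point is simply that every host and switch on that network must agree on the MTU:

```
# Hedged sketch: check and align MTU across nodes.
# "ens18" and "10.0.0.2" are placeholders.

# Show the MTU currently configured on this interface
ip link show ens18

# Verify that jumbo frames pass end-to-end without fragmentation
# (8972 bytes of payload + 28 bytes of headers = 9000)
ping -M do -s 8972 10.0.0.2

# Either raise every hop to 9000 or fall back to 1500; persist the
# choice with an "mtu" line in /etc/network/interfaces
ip link set ens18 mtu 1500
```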
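
Before the 6-to-7 upgrade touched on in result 16, the checklist script shipped with pve-manager on 6.4 can flag most blockers ahead of time. A minimal sketch, not a substitute for the official upgrade guide:

```
# Hedged sketch: pre-upgrade sanity checks on a PVE 6.4 node.

# Make sure the node is fully up to date within 6.x first
apt update && apt dist-upgrade

# Run the packaged 6-to-7 checklist; it prints a pass/warn/fail
# summary per check
pve6to7 --full
```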
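
For the network-share question behind results 17-19, one way on PVE is to register the share as a storage entry with content type "backup" and point vzdump (or PBS) at it. A minimal sketch, assuming an NFS export at 192.168.1.50:/export/backups (placeholder values):

```
# Hedged sketch: attach an NFS share as a backup target.
# Server address, export path and storage ID are placeholders.
pvesm add nfs backup-nfs \
    --server 192.168.1.50 \
    --export /export/backups \
    --content backup

# Confirm the new storage is online and has free space
pvesm status --storage backup-nfs
```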