Search results

  1. Ceph pool size (is 2/1 really a bad idea?)

    I am experimenting with my new cluster, still in pre-production: 4 nodes, 8 OSDs (2 per node), 2.2GB each OSD, replicas 3/2, PG autoscale set to warn, Ceph version 16.2.7. What you described above did not work. First, I simulated a 2-node failure, one node after the other, which as you described would... (see the pool-size sketch after this list)
  2. One OSD wears out faster than others

    I have a 4-server Proxmox cluster dedicated to Ceph. I have an SSD pool with 8 SSDs (2 per server); these are enterprise SSDs with a 10 DWPD rating. I noticed that one SSD, osd.1, wears out faster than all the other SSDs. All other SSDs show a 3% used-endurance indicator, but osd.1 shows 6% - all... (see the wear-check sketch after this list)
  3. An osd goes offline

    Put the new hard drive in and it has been running for 3 weeks with no issues. Thank you
  4. Ceph cluster connected with two separate Proxmox nodes

    I got it to work thanks to this post: https://forum.proxmox.com/threads/laggy-ceph-status-and-got-timeout-in-proxmox-gui.50118/#post-429902 It was an MTU setting: on one node it was 9000, while the cluster default was 1500. Thank you (see the MTU sketch after this list)
  5. [SOLVED] Laggy 'ceph status' and 'got timeout' in proxmox gui

    I had the same problem, getting "error with 'df': got timeout" when trying to either install a VM with Ceph storage or move an existing disk to Ceph storage; otherwise it looked "good". I had the MTU size set on one interface to 9000 and all the rest to the default 1500. Once I changed it to 1500...
  6. Ceph cluster connected with two separate Proxmox nodes

    This is all testing in a lab environment. I have a 4-node Ceph cluster (installed on Proxmox) and 1 Proxmox node (based on a Dell server) connected to it, working perfectly fine. I also have a secondary Proxmox node running under Hyper-V with nested virtualization enabled that I have problems with. The...
  7. An osd goes offline

    I understand, that was my plan. I thought I might have missed something obvious, but the fact that the cluster has been up for two years and this is the only drive crashing made me think it is the drive, or perhaps the server's bay somehow getting affected. I will get the disk and run some test...
  8. An osd goes offline

    Anybody on this?
  9. An osd goes offline

    I have one OSD that goes out every 3-7 days. It is an OSD in a 4-node Ceph cluster running under Proxmox and a member of a 16-OSD pool (4 OSDs per node). The issue is recent; the pool has been up almost 2 years. It happened 3 times in the last two weeks. I checked the drive with SMART but it did not... (see the SMART sketch after this list)
  10. Updating from 6 to 7 possible issue

    I was testing the update on my test server with the no-subscription license, fully updated on the 6.x revision, and I got this: Processing triggers for initramfs-tools (0.140) ... update-initramfs: Generating /boot/initrd.img-5.11.22-3-pve Running hook script 'zz-proxmox-boot'.. Re-executing...
  11. Can I backup to a network store/share

    I will give it a try. Otherwise, what do you recommend for 160+ VMs in terms of local drives, not capacity- but performance-wise? SAS vs. SATA, SSD vs. HDD spinners? I think in our case the bottleneck is Ceph storage, so the saving in backup speed just comes from incremental backups and not...
  12. Can I backup to a network store/share

    Is 10Gbps fast enough for this? You referenced storage also; I am testing PBS on a VM running on Proxmox, and the incremental backup makes a difference already, using regular hard drives for this. I think my limitation/bottleneck is the Ceph storage on which my VMs run, and not necessarily the...
  13. Can I backup to a network store/share

    Good job with the server; based on my tests it works great. I already have a backup solution with TBs of storage. It is kind of hard to justify the additional investment in the storage itself for an additional backup server. I have about 160 VMs and growing, so the incremental backup is what I need, the...
  14. Slow live migration for hosts with long uptime

    I know there was already a post about it, but it was unresolved. I had to live migrate a lot of VMs recently (and still have to migrate more) and noticed the relationship between long uptime and the live migration time. I have a Proxmox cluster on 6.1-5 and an external Ceph cluster for external...
  15. Updating from 6.1-5 to 6.3

    No, I have not started yet, just prepping. Wanted to make sure that if I update node 1, I will not have to update nodes 2, 3 and 4 right away and will be able to do this the next day. One node at a time; I need to move the VMs.
  16. Updating from 6.1-5 to 6.3

    Is there a time limit to update all nodes? I can update one node, in two days update another, the 3rd node two days after that, etc. Should I anticipate any issues, or is it like the update where the corosync version changed (which I don't see in the release notes) and there was a limited time to update...
  17. Failed to migrate disk (Device /dev/dm-xx not initialized in udev database even after waiting 10000

    Had the same issue on PVE 6.2-4 while updating, which coincidentally was right after I imported a drive via the "qm importdisk" command. "udevadm trigger" worked for me. Thx (see the udev sketch after this list)
  18. ZFS on SSDs with replication

    Thanks, finally finished that project. Everything works great.
  19. Migrating from QNAP Virtualization Station to PVE

    I am trying to migrate a virtual machine, a Windows Server, from QNAP to PVE. The QNAP backup is just one file, backup.img; there are two drives there. When I open this backup with an app like PowerISO I see 3 files: 0.ntfs, 1.ntfs (which seems to be the main file, taking 99.999% of space), and 2 just say... (see the import sketch just below)
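Pool-size sketch (result 1): a minimal sketch of inspecting and setting a pool's replication and autoscale settings with the stock Ceph CLI. The pool name "vm-pool" is a hypothetical placeholder.

    # Inspect the current size/min_size of every pool
    ceph osd pool ls detail

    # 3 replicas, with a minimum of 2 available to keep serving I/O
    # ("vm-pool" is a placeholder; substitute your pool name)
    ceph osd pool set vm-pool size 3
    ceph osd pool set vm-pool min_size 2

    # Keep the PG autoscaler in warn-only mode, as described in result 1
    ceph osd pool set vm-pool pg_autoscale_mode warn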
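Wear-check sketch (result 2): a hedged way to compare endurance across drives. /dev/sda is a placeholder device, and the exact SMART attribute name varies by vendor.

    # NVMe drives report "Percentage Used"; SATA SSDs typically expose a
    # vendor attribute such as Media_Wearout_Indicator
    smartctl -a /dev/sda | grep -iE 'percentage used|wear|lifetime'

    # Check whether osd.1 simply holds more PGs/data than its peers,
    # which would explain the faster wear
    ceph osd df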
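MTU sketch (results 4 and 5): both threads trace the timeouts to a 9000-vs-1500 MTU mismatch. A minimal sketch for diagnosing it; 10.0.0.2 is a placeholder for another node on the Ceph network.

    # Check the MTU of each interface on every node; look for a mismatch
    # between nodes on the same network
    ip link

    # Verify jumbo frames actually pass end-to-end before relying on MTU 9000.
    # -M do forbids fragmentation; 8972 = 9000 minus 28 bytes of IP/ICMP headers.
    ping -M do -s 8972 10.0.0.2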
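SMART sketch (result 9): a hedged checklist for a periodically flapping OSD. /dev/sdc and OSD id 7 are placeholders.

    # Overall health and logged errors for the disk behind the flapping OSD
    smartctl -H /dev/sdc
    smartctl -l error /dev/sdc

    # Kick off an extended self-test, then read the result once it completes
    smartctl -t long /dev/sdc
    smartctl -l selftest /dev/sdc

    # The OSD's own log usually records why it was marked down
    # (heartbeat failures vs. actual I/O errors)
    journalctl -u ceph-osd@7 --since "7 days ago"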
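udev sketch (result 17): the fix quoted there, spelled out. Both are standard udev commands.

    # Replay kernel uevents so the /dev/dm-* nodes get registered in the
    # udev database, then wait for the event queue to drain before retrying
    udevadm trigger
    udevadm settle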
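Import sketch (result 19): a hedged sketch assuming backup.img is a raw disk image, which needs verifying first. VM ID 100 and storage "local-lvm" are placeholders.

    # Inspect the image format and size before importing
    qemu-img info backup.img

    # Attach the image to an existing VM as an unused disk
    qm importdisk 100 backup.img local-lvm

If the image really is a raw dump of the whole disk, both NTFS partitions seen in PowerISO should appear inside the guest once the imported disk is attached and marked bootable.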
