I am still experimenting with my new pre-production cluster: 4 nodes, 8 OSDs (2 per node), 2.2GB each OSD, replicas 3/2, PG autoscale set to warn, Ceph version 16.2.7. What you described above did not work. First, I simulated a 2-node failure, one node after the other, which as you described would...
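For reference, a rough sketch of how I check the pool settings around these failure tests (the pool name "rbd" is just a placeholder for my pool):

ceph osd pool get rbd size        # expect 3
ceph osd pool get rbd min_size    # expect 2; PGs pause I/O when replicas drop below this
ceph osd pool set rbd pg_autoscale_mode warn
ceph osd pool autoscale-status
ceph -s                           # watch overall health while a node is down

With size 3 / min_size 2 on only 4 nodes, taking two nodes out can leave some PGs with a single surviving replica, so they fall below min_size and their I/O pauses until recovery.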
I have a 4-server Proxmox cluster dedicated to Ceph. I have an SSD pool with 8 SSDs (2 per server); these are enterprise SSDs with a 10 DWPD rating. I noticed that one SSD, osd.1, wears out faster than all the others. All other SSDs show a 3% used-endurance indicator, but osd.1 shows 6% - all...
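Roughly how I am comparing them, in case it helps (the device path /dev/sdb is just a placeholder for whatever osd.1 maps to, and the SMART wear attribute name differs per vendor):

ceph osd metadata 1 | grep -i device     # find which block device backs osd.1
smartctl -a /dev/sdb | grep -i -e wear -e percentage
ceph osd df tree                         # compare DATA / %USE / PGS per OSD

If osd.1 shows noticeably more PGs or data in the ceph osd df output, the extra wear would just be uneven PG distribution rather than a failing drive.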
I got it to work thanks to this post: https://forum.proxmox.com/threads/laggy-ceph-status-and-got-timeout-in-proxmox-gui.50118/#post-429902
It was an MTU setting: on one node it was 9000, while the cluster default was 1500.
Thank you
I had the same problem, getting "error with 'df': got timeout" when trying to either install a VM on Ceph storage or move an existing disk to Ceph storage; otherwise it looked "good". I had the MTU set to 9000 on one interface and all the rest at the default 1500. Once I changed it to 1500...
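Roughly what I did to find and fix it (ens18 and 10.10.10.x are placeholders for the Ceph-facing interface and network):

ip link show ens18 | grep mtu        # run on every node and compare
ping -M do -s 8972 -c 3 10.10.10.2   # 8972 + headers = 9000; only passes if every hop allows jumbo frames
# then in /etc/network/interfaces set the same mtu on that interface on all nodes
# (9000 everywhere or 1500 everywhere, just consistent) and apply with:
ifreload -a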
This is all testing in a lab environment.
I have a 4-node Ceph cluster (installed on Proxmox) and 1 Proxmox node (based on a Dell server) connected to it, working perfectly fine. I also have a secondary Proxmox node running under Hyper-V with nested virtualization enabled that I have problems with. The...
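A quick sanity check I run inside the nested Proxmox VM to see whether Hyper-V actually exposes the virtualization extensions (nothing Proxmox-specific, just standard Linux checks):

egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no VT-x/AMD-V is passed through
lscpu | grep -i virtualization
lsmod | grep kvm                     # kvm_intel or kvm_amd should be loaded
dmesg | grep -i kvm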
I understand, that was my plan. I thought I might have missed something obvious, but the fact that the cluster has been up for two years and this is the only drive crashing made me think it is the drive, or perhaps the server's bay somehow getting affected. I will get the disk and run some test...
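Roughly the tests I plan to run once the disk is pulled (the device path /dev/sdX is a placeholder; I am sticking to read-only checks so nothing is destructive):

smartctl -t long /dev/sdX     # extended self-test
smartctl -l selftest /dev/sdX # read the result once it finishes
smartctl -a /dev/sdX          # full attribute dump
badblocks -sv /dev/sdX        # read-only surface scan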
I have one osd that goes out every 3-7 days.
It is an OSD in a 4-node Ceph cluster running under Proxmox and a member of a 16-OSD pool (4 OSDs per node). The issue is recent; the pool has been up almost 2 years. It happened 3 times in the last two weeks. I checked the drive with SMART but it did not...
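For the record, what I check each time it happens (osd.7 and /dev/sdX are placeholders for the affected OSD and its device):

ceph osd tree | grep -i down                 # confirm which OSD dropped
ceph health detail
journalctl -u ceph-osd@7 --since yesterday   # OSD daemon log around the event
dmesg -T | grep -i -e sdX -e 'I/O error'     # look for kernel-level disk errors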
I was testing the update on my test server with the no-subscription repository, fully updated within the 6.x release, and I got this:
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-5.11.22-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing...
I will give it a try. Otherwise, what do you recommend for 160+ VMs in terms of local drives, not capacity- but performance-wise? SAS vs. SATA, SSD vs. HDD spinners? I think in our case the bottleneck is Ceph storage, so the saving in backup speed just comes from incremental backups and not...
Is 10Gbps fast enough for this? You referenced storage as well; I am testing PBS on a VM running on Proxmox and the incremental backup already makes a difference, using regular hard drives for this. I think my limitation/bottleneck is the Ceph storage on which my VMs run and not necessarily the...
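For what it's worth, a rough sketch of how I am measuring it (the repository string and test path are placeholders): 10 Gbps tops out around 1.1-1.2 GB/s of payload, so with incremental backups the network has not been my limit; the random-read speed of the source storage matters more.

proxmox-backup-client benchmark --repository backup@pbs@192.168.1.50:datastore1
fio --name=randread --filename=/mnt/datastore/testfile --size=4G --bs=4k --rw=randread --ioengine=libaio --direct=1 --runtime=60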
Good job with the server; based on my tests it works great.
I already have a backup solution with TBs of storage. It is kind of hard to justify the additional investment in the storage itself for an additional backup server. I have about 160 VMs and growing, so the incremental backup is what I need, the...
I know there was already a post about it, but it was unresolved. I had to live-migrate a lot of VMs recently (and still have to migrate more) and noticed the relationship between long uptime and the live migration time. I have a Proxmox cluster on 6.1-5 and an external Ceph cluster for external...
No, I have not started yet, just prepping. I wanted to make sure that if I update node 1 I will not have to update nodes 2, 3 and 4 right away and will be able to do this the next day. One node at a time; I need to move the VMs.
Is there a time limit to update all nodes? Can I update one node, in two days update another, two days after that the 3rd node, etc.? Should I anticipate any issues, or is it like updating when the corosync version changed (which I don't see in the release notes) and there was a limited time to update...
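For clarity, the per-node sequence I'm planning (node and VM names are placeholders), repeated one node at a time:

qm migrate <vmid> node2 --online   # per VM, or bulk migrate from the GUI
apt update
apt full-upgrade
reboot
# then wait until pvecm status shows the node back and ceph -s is HEALTH_OK before starting the next node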
Had the same issue on PVE 6.2-4 while updating, which coincidentally was right after I imported a drive via the "qm importdisk" command.
udevadm trigger worked for me.
Thx
I am trying to migrate a virtual machine, a Windows Server, from QNAP to PVE. The QNAP backup is just one file, backup.img, and there are two drives in there. When I open this backup with an app like PowerISO I see 3 files: 0.ntfs, 1.ntfs (which seems to be the main file taking 99.999% of the space), and 2 just says...
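In case it helps, roughly how I would inspect and import it, assuming backup.img is a raw full-disk image (the VM ID 100 and the storage name local-lvm are placeholders):

losetup -fP --show backup.img   # exposes the image's partitions as /dev/loopXpN
lsblk /dev/loop0                # confirm the NTFS partitions are visible
losetup -d /dev/loop0           # detach again
qm importdisk 100 backup.img local-lvm   # import the whole raw disk into the VM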