How would a drive with a hardware issue behave differently with and without another partition on it? You'll still have throughput contention if both partitions are in use, but that's the case whether or not the drive is bad (and I've already considered this and I find the benefits...
What experiences does everyone else have with failed drives in Ceph? Does Ceph keep a drive running until it fails completely and is marked down/out? Are there any other places where failures may crop up before a complete failure?
Thanks!
A drive on one of my nodes is constantly throwing ZFS write errors on my rpool (triple mirror, so I'm not currently worried about data loss). Based on a SMART self-test, the age of the drive, and the fact that I think it started when I was moving some cabling around, I'm pretty confident the issue is due...
So even with large sequential writes, RocksDB generates a bunch of random write IO? What gives the biggest bang for the buck, more nodes or more OSDs?
So it sounds like even though I plan on using this mostly for cold storage/sequential writes I need to test out reserving some of my SSD storage...
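If it comes to that, the usual way to put RocksDB on faster storage is to point the OSD's block.db at the SSD when the OSD is created. A sketch, assuming device paths that are placeholders for your actual HDD and SSD partition/LV:

```shell
# Create a BlueStore OSD with data on the HDD and the RocksDB
# metadata (block.db, and implicitly the WAL) on an SSD partition.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```

Existing OSDs can't gain a separate DB device in place this way; they'd typically be destroyed and recreated one at a time, letting Ceph rebalance in between.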
They will be used primarily for cold storage and for some large continuous writes so I don't need them to be super speedy. I have the SSD based pool for anything I need to be particularly fast.
However, currently with 9 HDD OSDs across 4 nodes (3/3/2/1), I am getting a max of ~55 MB/s on a large...
What is the typical commit latency I should expect from HDDs in a cluster?
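For anyone wanting to measure rather than guess, a rough sketch (pool name is a placeholder; run from any node with admin keyring access):

```shell
# Per-OSD commit/apply latency in ms, as reported by the OSDs themselves;
# a single slow HDD often stands out as an outlier here.
ceph osd perf

# Baseline write throughput/latency against a pool: 30s of 4MB writes,
# then a sequential read pass, then clean up the benchmark objects.
rados bench -p testpool 30 write -b 4M --no-cleanup
rados bench -p testpool 30 seq
rados -p testpool cleanup
```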
I'm currently in the process of migrating from my ZFS pools to Ceph pools. For the initial migration I have a random mix of hardware. Once the migration is complete, I'll shut down the ZFS pools and migrate that...
I'm starting to play around with Ceph pools to replace one of my ZFS mirrored pools. One of my biggest questions is how to detect errors from flaky drives/cables/etc.
zpool status can provide the following information, and I want to see if similar data is available for Ceph pools (using...
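For comparison, a rough Ceph-side equivalent of the zpool status error counters can be assembled from a few commands. A sketch; `<devid>` is a placeholder for an ID from the device listing:

```shell
# Cluster-wide health plus details of any warnings (slow ops, failing scrubs, etc.)
ceph health detail

# Devices Ceph knows about, and their collected SMART data
# (requires the device health monitoring module to be enabled)
ceph device ls
ceph device get-health-metrics <devid>

# Scrub-detected errors show up as inconsistent placement groups
ceph pg dump | grep -i inconsist
```

Unlike zpool status, per-drive read/write/checksum counters aren't in one place; slow or erroring drives tend to surface first as health warnings, latency outliers, or scrub inconsistencies.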
Replication for the container that failed migration typically runs in under 15 seconds, and at least one of the times it failed I retried a few minutes later and got the same error message.
In any case, I just removed replication for the container and there are no pending tasks for it, but there is still a...
Well another migration failed for the same issue.
Can someone please at least do an ls on the /var/lock/pve-manager folder and let me know if you see any files? It would be helpful to see this on nodes with and without replication running.
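For convenience, the check being asked for (plus, as an assumption on my part, a look at whether any process still holds a migrate lock) might look like:

```shell
# List any lock files pve-manager is holding
ls -la /var/lock/pve-manager/

# If a pve-migrate lock file exists, show which process (if any) has it open
fuser -v /var/lock/pve-manager/pve-migrate-* 2>/dev/null
```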
Thanks!
I was doing some server maintenance today and migrating some containers/VMs and twice I ran into issues with migration due to the following error:
TASK ERROR: can't lock file '/var/lock/pve-manager/pve-migrate-xxx' - got timeout
This happened with 2 different servers and 2 different...
I have 4 nodes that are all currently set up with vmbr0, and all VMs/containers are connected to vmbr0. Migration works well between all of the nodes.
I am planning to convert one of the nodes to a bonded network, so VMs/containers on that node will be connected to bond0. How will migration...
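For what it's worth, the common pattern is to keep the bridge named vmbr0 and just move it on top of bond0, so guest configs that reference vmbr0 are unchanged and migration between nodes keeps working. A sketch of /etc/network/interfaces under that assumption (NIC names, address, and bond mode are placeholders to adjust):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```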
I tried installing Proxmox using the 7.0 installer and I received the following error:
bootloader setup errors:
- unable to init ESP and install proxmox-boot loader on '/dev/sdb2'
Hardware
- ASUS H97M-E/CSM
- i7-4790t
- 32GB RAM
- 1TB SATA drive
- 250G M.2 Samsung 850 EVO
Installation details...
Thanks for the info. I'm surprised that there is no support for this SSH best practice.
If I set PermitRootLogin to without-password instead of no, how can I configure Proxmox replication to use key-based auth? EDIT: Never mind, it's already configured for this.
Also, is there a list of...
I am setting up a new Proxmox cluster for my homelab and I would like to use replication for some of my critical VMs.
However, for all of my servers, one of the first changes I make is to configure the SSH server to disable root logins and password authentication. This seems to break...
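For context, since the cluster tooling connects between nodes as root over SSH, the compromise most setups land on (a sketch of the relevant sshd_config lines, not cluster-specific advice) is to allow root with keys only rather than disabling root logins outright:

```
# /etc/ssh/sshd_config
PermitRootLogin prohibit-password   # keys only; 'without-password' is the older alias
PasswordAuthentication no
```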