Search results

  1. Write Errors in ZFS, but not in Ceph

    How would a drive with a hardware issue behave any differently with and without another partition on it? You will still have throughput contention if both partitions are in use, but that is the case whether or not the drive is bad (and I've already considered this and I find the benefits...
  2. Write Errors in ZFS, but not in Ceph

    What experiences does everyone else have with failed drives in Ceph? Does Ceph keep a failing drive running until a complete failure, when it is marked as down/out? Are there any other places where failures may crop up before a complete failure? Thanks!
  3. Write Errors in ZFS, but not in Ceph

    A drive on one of my nodes is constantly throwing ZFS write errors on my rpool (a triple mirror, so I'm not currently worried about data loss). Based on the SMART self-test, the age of the drive, and the fact that I think it started when I was moving some cabling around, I'm pretty confident the issue is due...
  4. Typical Ceph OSD Commit Latency for HDDs?

    So even with large sequential writes, RocksDB generates a bunch of random write IO? What has the biggest bang for the buck, more nodes or more OSDs? So it sounds like even though I plan on using this mostly for cold storage/sequential writes, I need to test out reserving some of my SSD storage...
  5. Typical Ceph OSD Commit Latency for HDDs?

    They will be used primarily for cold storage and for some large continuous writes, so I don't need them to be super speedy. I have the SSD-based pool for anything I need to be particularly fast. However, currently with 9 HDD OSDs across 4 nodes (3/3/2/1), I am getting a max of ~55 MB/s on a large...
  6. Typical Ceph OSD Commit Latency for HDDs?

    What is the typical commit latency I should expect from HDDs in a cluster? I'm currently in the process of migrating from my ZFS pools to Ceph pools. For the initial migration I have a random mix of hardware. Once the migration is complete, I'll shut down the ZFS pools and migrate that... (A latency-check sketch follows this list.)
  7. pveceph status vs zpool status

    I'm starting to play around with Ceph pools to replace one of my ZFS mirrored pools. One of my biggest questions is how to detect errors from flaky drives/cables/etc. zpool status can provide the following information, and I want to see if similar data is available for Ceph pools (using... (A Ceph status-check sketch follows this list.)
  8. Should there always be lock files in the /var/lock/* folders?

    Replication for the container that failed migration typically runs in under 15 seconds, and at least one of the times it failed I retried a few minutes later and got the same error message. In any case, I just removed replication for the container and there are no pending tasks for it, but there is still a...
  9. Should there always be lock files in the /var/lock/* folders?

    Well, another migration failed with the same issue. Can someone please at least do an ls on the /var/lock/pve-manager folder and let me know if you see any files? It would be helpful to see this on nodes both with and without replication running. Thanks! (A lock-file check sketch follows this list.)
  10. Should there always be lock files in the /var/lock/* folders?

    I was doing some server maintenance today, migrating some containers/VMs, and twice a migration failed with the following error: "TASK ERROR: can't lock file '/var/lock/pve-manager/pve-migrate-xxx' - got timeout". This happened with 2 different servers and 2 different...
  11. Migrating between nodes with different network configurations

    Ah, so I can set bond0 as a Bridge Port for vmbr0? I thought I would have to remove vmbr0 and replace it with bond0. (An interfaces sketch follows this list.)
  12. Migrating between nodes with different network configurations

    I have 4 nodes that are all currently set up with vmbr0, and all VMs/containers are connected to vmbr0. Migration works well between all of the nodes. I am planning to convert one of the nodes to a bonded network, so VMs/containers on that node will be connected to bond0. How will migration...
  13. Proxmox VE 7.0 Installation Error

    I tried installing Proxmox using the 7.0 installer and I received the following error: "bootloader setup errors: - unable to init ESP and install proxmox-boot loader on '/dev/sdb2'". Hardware: ASUS H97M-E/CSM, i7-4790t, 32GB RAM, 1TB SATA drive, 250GB M.2 Samsung 850 EVO. Installation details...
  14. Replication with secure SSH configuration (root login disabled/pw auth disabled)

    Thanks! I've never heard of the Match Address stanza before. It looks like a reasonable mitigation. (A Match Address sketch follows this list.)
  15. Replication with secure SSH configuration (root login disabled/pw auth disabled)

    Thanks for the info. I'm surprised that there is no support for this SSH best practice. If I set PermitRootLogin to without-password instead of no, how can I configure Proxmox replication to use key-based auth? EDIT: Never mind, it's already configured for this. Also, is there a list of...
  16. Replication with secure SSH configuration (root login disabled/pw auth disabled)

    I am setting up a new Proxmox cluster for my homelab, and I would like to use replication for some of my critical VMs. However, on all of my servers one of the first changes I make is to modify the SSH server to disable root logins and password authentication. This seems to break...
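
On the error-detection question in result 7: below is a minimal sketch of the closest Ceph equivalents to zpool status, assuming a standard Proxmox/Ceph install with the ceph CLI available on an admin node. Note that Ceph reports scrub and read errors at the PG/OSD level rather than as per-device counters, so drive-level error details usually come from the kernel log and SMART data; sdX is a placeholder for the suspect drive.

    # Cluster-wide health, roughly analogous to 'zpool status -x'
    ceph health detail

    # Per-OSD up/down/in/out state, loosely analogous to zpool's per-device lines
    ceph osd tree

    # Recent OSD daemon crashes, often the first visible sign of a flaky drive
    ceph crash ls

    # Drive-level error details come from the kernel log and SMART, not Ceph
    dmesg | grep -i sdX
    smartctl -a /dev/sdX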
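For the commit-latency results (4-6): Ceph exposes per-OSD commit/apply latency directly, so it can be watched while a large write is in flight. A minimal sketch, again assuming the ceph CLI on an admin node:

    # Per-OSD commit and apply latency, in milliseconds
    ceph osd perf

    # Refresh every 2 seconds while a test write runs
    watch -n 2 ceph osd perf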
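For the lock-file results (8-10), a quick check sketch for comparing nodes with and without replication; the pve-migrate-* glob matches the lock named in the error, and fuser only prints a process if something still holds the file:

    # List any pve-manager lock files on this node
    ls -la /var/lock/pve-manager

    # See whether any process still holds a migrate lock
    fuser -v /var/lock/pve-manager/pve-migrate-*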
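For the bonding results (11-12): the usual approach is to enslave the physical NICs to bond0 and make bond0 the bridge port of vmbr0, so the bridge keeps the same name on every node and migrated guests still find their network. A minimal /etc/network/interfaces sketch; the NIC names, bond mode, and addresses are placeholders:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2        # physical NICs; names are placeholders
        bond-miimon 100
        bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24      # example address, adjust to your network
        gateway 192.168.1.1
        bridge-ports bond0           # the bond, not a physical NIC, is the bridge port
        bridge-stp off
        bridge-fd 0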
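For the SSH hardening results (14-16), a minimal sshd_config sketch of the Match Address approach mentioned there: root logins stay disabled globally and are allowed key-only from the cluster subnet. The 10.0.0.0/24 subnet is a placeholder; prohibit-password is the current spelling of without-password.

    # /etc/ssh/sshd_config -- global hardening
    PermitRootLogin no
    PasswordAuthentication no

    # Key-only root login, allowed only from the cluster network
    Match Address 10.0.0.0/24
        PermitRootLogin prohibit-password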
