Search results

  1. A

    Scrub won't complete on degraded ZFS pool

    Post the output of smartctl --all /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC. If the disk passes a long test, you have a problem with your SATA host/cabling.
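
    For reference, a long self-test can be started and checked along these lines (a sketch only, reusing the same disk id as above; run as root):

      smartctl -t long /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC    # start the extended self-test
      smartctl -a /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC         # review the self-test log once it completes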
  2. A

    Low disc performance with CEPH pool storage

    I can't give you any feedback on the results without the test strings, which appear to contain contradictory directives.
  3. A

    Proxmox Cluster and Nimble iSCSI issues

    According to https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006071en_us&page=GUID-C1B11878-9AB6-45BB-9F58-B89C0CEEE262.html&docLocale=en_US the recommended method is group_by_prio. Post your multipath.conf for further analysis. You didn't ask, but best practice suggests keeping...
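
    Purely as an illustration (the exact stanza should follow the HPE document above for your array and firmware), a group_by_prio device entry in multipath.conf might look roughly like:

      devices {
          device {
              vendor                "Nimble"
              product               "Server"
              path_grouping_policy  "group_by_prio"
              prio                  "alua"
              path_checker          "tur"
              failback              "immediate"
          }
      }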
  4. A

    Ceph - Which is faster/preferred?

    Larger NVMes tend to perform better, BUT Ceph performance increases more linearly with more OSDs. It would probably be a wash if you create twice as many OSDs on the larger drives (one faster disk vs. two slower ones), but the disk failure domain would be a multiple of OSDs.
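
    If you want to try the two-OSDs-per-NVMe route, ceph-volume can split devices for you; a sketch (device names are placeholders):

      ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1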
  5. A

    Risk of data corruption with iSCSI and ZFS over iSCSI

    Yes, but not exclusively. The ZFS FILE system is not, but ZFS can export zvols (block devices), which are no different than any other block device for the purposes of cluster operations. When used as ZFS over iSCSI, iSCSI just facilitates the transport of zvols, and the cluster manages contention.
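
    For context, a ZFS over iSCSI entry in /etc/pve/storage.cfg looks broadly like the sketch below (portal, target, pool, and provider are placeholders for your setup):

      zfs: san-zvols
          portal 192.0.2.10
          target iqn.2003-01.org.example:target0
          pool tank
          iscsiprovider LIO
          content images
          sparse 1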
  6. A

    Low disc performance with CEPH pool storage

    A couple of things jump out. 1. Retest with cache disabled. 2. The memory allocation is concerning. Is this a single or dual socket host, and how much total system RAM? Limit the VM to a MAXIMUM of the core count of a single socket, and at most half the host RAM. Disable ballooning if it's on. Also turn...
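
    A sketch of those limits from the CLI (vmid 100 and the numbers are placeholders for a 16-core-per-socket host with 256 GB RAM):

      qm set 100 --sockets 1 --cores 16    # no more cores than one physical socket
      qm set 100 --memory 131072           # at most half the host RAM, in MiB
      qm set 100 --balloon 0               # disable ballooning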
  7. A

    PVE Cluster with Netapp storage

    https://docs.netapp.com/us-en/netapp-solutions/proxmox/proxmox-ontap.html
  8. A

    PVE Cluster with Netapp storage

    I stand corrected. Thank you @spirit.
  9. A

    PVE Cluster with Netapp storage

    /pedantic. nconnect has been supported on ONTAP since version 9.9.1 (nconnect IS multithreaded NFS.) Unfortunately, since pvesm doesn't support nconnect, you'd need to manage the mounts manually to use it.
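
    Managing such a mount by hand would look something like this (filer address, export, and mount point are placeholders; nconnect also needs a reasonably recent kernel on the client):

      mount -t nfs -o vers=4.1,nconnect=8 filer01:/vol/pve_images /mnt/pve-manual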
  10. A

    Ceph reading and writing performance problems, fast reading and slow writing

    It's literally a dialog box. I think you mean hot-tier cache, which is true but pointless. The storage subsystem either works for your use case or it doesn't. Getting caught up in how the filesystem works isn't likely to produce anything useful for you.
  11. A

    Moving to Proxmox questions

    The LUNs themselves can be thin provisioned on the storage; it would just not be apparent to PVE (similar to how a thin-provisioned virtual disk is not apparent to the guest).
  12. A

    Moving to Proxmox questions

    A node is a member of a cluster. It's not Proxmox-speak. Yes. I think this was already answered, but in principle it will work either as iSCSI or NFS. There are caveats to both; see https://pve.proxmox.com/wiki/Storage. iSCSI is block storage, so it stands to reason the storage device has no...
  13. A

    Low disc performance with CEPH pool storage

    What is the type (NTFS/ReFS) and block size of the guest file system? Also, you mentioned you set up the VM using best practices, but it might be useful to validate that. Post its vmid.conf.
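
    Inside a Windows guest, the cluster (block) size can be read with fsutil, assuming an NTFS C: volume (use refsinfo instead for ReFS) and looking at the "Bytes Per Cluster" line:

      fsutil fsinfo ntfsinfo C: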
  14. A

    Ceph reading and writing performance problems, fast reading and slow writing

    ceph-volume create is the command to create OSDs. It has nothing to do with how you use the filesystem. What does "power hungry" mean in this context? Do you have metrics (block size, latency, potential IOPS per core, ...)? So... why not do that? Why? You're already doing it; all you need to do...
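
    For completeness, the simplest one-OSD-per-device form looks like this (device name is a placeholder):

      ceph-volume lvm create --data /dev/sdb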
  15. A

    Underlying Debian OS

    So the answer is: it depends. If you are using Proxmox as a hypervisor, it completely circumvents your hypervisor tooling and resource awareness, which means you introduce elements external to its C&C. If you are deploying a virtualization environment in production, this is OBVIOUSLY a bad idea...
  16. A

    Ceph reading and writing performance problems, fast reading and slow writing

    Caching is a method to provide performance consistency for a given set of parameters. Its inclusion (or absence) isn't a feature in and of itself. Define your minimum performance criteria and then proceed to tune your storage subsystem with its available tunables. Ceph exposes a LOT MORE tunable...
  17. A

    Low disc performance with CEPH pool storage

    Not 50 Gbit, 2x25. A single IO request cannot exceed 25 Gbit (a single channel on a LAG), and Ceph transactions are still single threaded. The good news is that it wouldn't really make a difference anyway, since each of your OSD nodes needs two transactions per IO anyway (one on the public interface...
  18. A

    Low disc performance with CEPH pool storage

    Are you SURE? When benchmarking 4k performance, note that: - MB/s is irrelevant; what are the IOPS? - Data patterns (sequential/random) will have a large impact on the perceived performance. Sequential large read/write performance numbers give the warm and fuzzies but are largely inconsequential...
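
    Something along these lines reports 4k random IOPS and latency directly rather than MB/s (a sketch; the test file path and sizes are placeholders, and --direct=1 bypasses the page cache):

      fio --name=rand4k --filename=/mnt/test/fio.dat --size=4G \
          --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio \
          --direct=1 --runtime=60 --time_based --group_reporting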
  19. A

    Fibre Channel (FC-SAN) support

    Understandable. In my view, deploying a product that is effectively unsupported (by anyone) is a bad solution, regardless of budgetary requirements. Outages, loss of service, or loss of data are more expensive than upfront spending. Given your set of constraints, I'd probably be looking at...