Search results

  1. A

    issue with rocky linux lxc

    If you can ping the containers from each other, your issue exceeds the scope of this forum and is a normal Linux admin question. Have you installed and enabled openssh-server on the destination?
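
    A minimal sketch of what that would look like inside a Rocky Linux container, assuming the stock package and service names:

      # inside the destination Rocky Linux container
      dnf install -y openssh-server
      systemctl enable --now sshd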
  2. A

    mount cephfs into a lxc

    Can confirm: not in the PVE implementation ;) at least not out of the box. Maybe you want to add it? More seriously, this has little purpose. Any NFS-capable client would be better served by attaching CephFS directly (Windows can and should use SMB.)
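
    For illustration only, a hedged sketch of attaching CephFS directly with the kernel client; the monitor address, client name, and paths are placeholders, not values from the thread:

      # mount CephFS directly on the client; adjust monitor, user, and keyring path
      mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret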
  3. A

    Scrub won't complete on degraded ZFS pool

    A SMART test doesn't put any significant load on the drive, and will typically not impact disk performance at all. A disk under SMART test does not generate noticeably more heat. You can cook a disk drive AT IDLE without adequate cooling. Don't believe me? Pull the data sheet.
  4. A

    Scrub won't complete on degraded ZFS pool

    There is no record of a test being performed. smartctl --test=long /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC If the disk is good, the test should take about a day to complete, but do you see all those errors trapped by your HDD firmware? Chances are it will fail in the first 10...
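
    As a sketch, progress and results of the long test can be checked afterwards via the self-test log (same device path as above):

      # show self-test progress/history for the drive
      smartctl -l selftest /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC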
  5. A

    Scrub won't complete on degraded ZFS pool

    Post the output of smartctl --all /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC. If the disk passes a long test, you have problems with your SATA host/cabling.
  6. A

    Low disc performance with CEPH pool storage

    I can't give you any feedback on results without the test strings, which appear to have contradictory directives.
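
    Since the actual test strings were not posted, here is only a hedged example of an unambiguous fio invocation; the target file, sizes, and job counts are placeholders, not the poster's test:

      # unbuffered 4k random-write baseline; adjust filename, size, and jobs to the environment
      fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
          --bs=4k --iodepth=32 --numjobs=4 --size=4G --runtime=60 \
          --time_based --group_reporting --filename=/mnt/testfile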
  7. A

    Proxmox Cluster and Nimble iSCSI issues

    According to https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006071en_us&page=GUID-C1B11878-9AB6-45BB-9F58-B89C0CEEE262.html&docLocale=en_US the recommended method is group_by_prio. Post your multipath.conf for further analysis. You didn't ask, but best practice suggests keeping...
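
    A hedged sketch of what a group_by_prio device stanza for a Nimble array might look like; every value here is an assumption and should be verified against the HPE document linked above:

      devices {
          device {
              vendor               "Nimble"
              product              "Server"
              path_grouping_policy group_by_prio
              prio                 alua
              hardware_handler     "1 alua"
              path_selector        "service-time 0"
              path_checker         tur
              failback             immediate
              no_path_retry        30
          }
      }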
  8. A

    Ceph - Which is faster/preferred?

    Larger NVMes tend to perform better, BUT Ceph performance scales more linearly with more OSDs. It would probably be a wash if you created twice as many OSDs on the larger drives (one faster disk vs. two slower ones), but each disk failure would then take out multiple OSDs.
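
    If you did split the larger NVMe drives into multiple OSDs, a hedged sketch using ceph-volume (the device path is a placeholder):

      # carve two OSDs out of a single NVMe device
      ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1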
  9. A

    Risk of data corruption with iSCSI and ZFS over iSCSI

    Yes, but not exclusively. The ZFS file system is not, but ZFS can export zvols (block devices), which are no different than any other block device for the purposes of cluster operations. When used as ZFS over iSCSI, iSCSI just facilitates the transport of the zvols, and the cluster manages contention.
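
    For context, a hedged sketch of a ZFS-over-iSCSI entry in /etc/pve/storage.cfg; the portal, target IQN, pool name, and provider are placeholders to adapt to the actual target:

      zfs: zfs-iscsi
          portal 192.168.1.20
          target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.example
          pool tank
          iscsiprovider LIO
          blocksize 4k
          sparse 1
          content images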
  10. A

    Low disc performance with CEPH pool storage

    A couple of things jump out. 1. Retest with cache disabled. 2. The memory allocation is concerning. Is this a single- or dual-socket host, and how much total system RAM? Limit the VM to a MAXIMUM of the total cores of a single socket, and at most half the host RAM. Disable ballooning if it is on. Also turn...
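
    A hedged sketch of those adjustments using qm; the vmid, core/memory numbers, storage name, and disk name are placeholders to adapt to the actual host:

      # cap the VM at one socket's worth of cores and half the host RAM, and disable ballooning
      qm set 100 --sockets 1 --cores 8 --memory 32768 --balloon 0
      # retest with the virtual disk cache disabled (cache=none)
      qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none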
  11. A

    PVE Cluster with Netapp storage

    https://docs.netapp.com/us-en/netapp-solutions/proxmox/proxmox-ontap.html
  12. A

    PVE Cluster with Netapp storage

    I stand corrected. Thank you @spirit
  13. A

    PVE Cluster with Netapp storage

    /pedantic. nconnect has been supported on ONTAP since version 9.9.1 (nconnect IS multithreaded NFS). Unfortunately, since pvesm doesn't support nconnect, you'd need to manage the mounts manually to use it.
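
    A hedged sketch of such a manual mount (Linux kernel 5.3 or newer; the server, export, and mount point are placeholders):

      # manual NFS mount with nconnect; pvesm-managed NFS storages won't add this option
      mount -t nfs -o vers=4.1,nconnect=8 filer.example.com:/vol/pve_datastore /mnt/pve/ontap_nfs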
  14. A

    Ceph reading and writing performance problems, fast reading and slow writing

    It's literally a dialog box. I think you mean a hot-tier cache, which is true but pointless. The storage subsystem either works for your use case or it doesn't. Getting caught up in how the filesystem works isn't likely to produce anything useful to you.
  15. A

    Moving to Proxmox questions

    The LUNs themselves can be thin provisioned on the storage; it would just not be apparent to PVE (similar to how a thin-provisioned virtual disk is not apparent to the guest.)
  16. A

    Moving to Proxmox questions

    A node is a member of a cluster. It's not Proxmox-speak. Yes. I think this was already answered, but principally it will work either as iSCSI or NFS. There are caveats to both; see https://pve.proxmox.com/wiki/Storage. iSCSI is block storage, so it stands to reason the storage device has no...
  17. A

    Low disc performance with CEPH pool storage

    What is the type (NTFS/ReFS) and block size of the guest file system? Also, you mentioned you set up the VM using best practices, but it might be useful to validate that. Post its vmid.conf.
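
    For illustration, hedged commands to gather both pieces of information; the drive letter and vmid are placeholders:

      # inside the Windows guest: file system details, including "Bytes Per Cluster"
      fsutil fsinfo ntfsinfo C:
      # on the PVE host: dump the VM configuration (the vmid.conf contents)
      qm config 100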
  18. A

    Ceph reading and writing performance problems, fast reading and slow writing

    ceph-volume create is the command to create OSDs. It has nothing to do with how you use the filesystem. What does "power hungry" mean in this context? Do you have metrics (block size, latency, potential IOPS per core, ...)? So... why not do that? Why? You're already doing it; all you need to do...
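
    A hedged example of that OSD creation command (the device path is a placeholder):

      # create a Bluestore OSD on a raw device
      ceph-volume lvm create --data /dev/sdb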
  19. A

    Underlying Debian OS

    So the answer is: it depends. If you are using Proxmox as a hypervisor, doing this completely circumvents your hypervisor tooling and resource awareness, which means you introduce elements external to its C&C. If you are deploying a virtualization environment in production, this is OBVIOUSLY a bad idea...
  20. A

    Ceph reading and writing performance problems, fast reading and slow writing

    Caching is a method to provide performance consistency for a given set of parameters. Its inclusion (or absence) isn't a feature in and of itself. Define your minimum performance criteria and then proceed to tune your storage subsystem with its available tunables. Ceph exposes a LOT MORE tunable...
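
    As a hedged illustration of adjusting one such tunable (the value is an example only, not a recommendation):

      # raise the per-OSD memory target to 8 GiB, then read it back
      ceph config set osd osd_memory_target 8589934592
      ceph config get osd osd_memory_target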