There is no record of a self-test being performed. Run a long test:
smartctl --test=long /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC
If the disk is good, the test should take about a day to complete, but do you see all those errors trapped by your HDD...
Post the output of smartctl --all /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC
If the disk passes a long test, you have problems with your SATA host/cabling.
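Once the long test completes, the self-test log will show either a pass or the LBA of the first failure; it is included in the --all output, or can be pulled on its own (same device path as above):
smartctl --log=selftest /dev/disk/by-id/ata-WDC_WUH721414ALN6L4_9RHJR6WC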
According to https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006071en_us&page=GUID-C1B11878-9AB6-45BB-9F58-B89C0CEEE262.html&docLocale=en_US
the recommended method is group_by_prio. Post your multipath.conf for further analysis.
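For reference, a minimal sketch of the relevant stanza, assuming an ALUA-capable HPE array (the vendor/product strings below are placeholders; take the exact values from the HPE document for your model):
devices {
    device {
        vendor                 "HPE"
        product                "<your array product string>"
        path_grouping_policy   "group_by_prio"
        prio                   "alua"
        failback               "immediate"
        no_path_retry          18
    }
}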
You...
Larger NVMe drives tend to perform better, BUT Ceph performance scales more linearly with more OSDs. It would probably wash if you create twice as many OSDs on the larger drives (one faster disk vs. two slower ones), but the disk failure domain would...
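For what it's worth, splitting a large NVMe into two OSDs is straightforward with ceph-volume (device path is just an example):
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1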
Yes, but not exclusively. The ZFS FILE system is not, but ZFS can export zvols (block devices), which are no different than any other block device for the purposes of cluster operations. When used as ZFS over iSCSI, iSCSI just facilitates the...
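For illustration, a zvol is created with a one-liner (pool and volume names are made up):
zfs create -V 100G tank/vm-100-disk-0
The resulting block device shows up under /dev/zvol/tank/vm-100-disk-0 and can be exported over iSCSI like any other block device.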
A couple of things jump out.
1. Retest with cache disabled (a sample command is below).
2. The memory allocation is concerning. Is this a single- or dual-socket host, and how much total system RAM? Limit the VM to a MAXIMUM of the total cores of a single socket, and a maximum of half...
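For point 1, something along these lines should do it, assuming a scsi0 disk (vmid, storage, and volume name are placeholders; keep your existing volume spec and just change the cache option):
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none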
/pedantic. nconnect has been supported on ONTAP since version 9.9.1 (nconnect IS multithreaded NFS). Unfortunately, since pvesm doesn't support nconnect, you'd need to manually manage the mounts to use it.
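For anyone who wants to try it anyway, a manual mount would look roughly like this (server, export, and mount point are examples; nconnect also requires a reasonably recent kernel on the client):
mount -t nfs -o vers=4.1,nconnect=8 filer.example.com:/export/vmstore /mnt/pve/vmstore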
It's literally a dialog box. I think you mean hot-tier cache, which is true but pointless. The storage subsystem either works or it doesn't for your use case. Getting caught up in how the filesystem works isn't likely to produce anything...
The LUNs themselves can be thin provisioned on the storage; it would just not be apparent to PVE (similar to how a thin-provisioned virtual disk is not apparent to the guest).
A node is a member of a cluster. It's not Proxmox-speak.
Yes.
I think this was already answered, but principally it will work either as iSCSI or NFS. There are caveats to both; see https://pve.proxmox.com/wiki/Storage
iSCSI is block storage, so...
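For context, the two end up as different entries in /etc/pve/storage.cfg; a rough sketch with placeholder names, addresses, and target:
nfs: filer-nfs
    server 192.168.1.10
    export /export/vmstore
    path /mnt/pve/filer-nfs
    content images,iso

iscsi: filer-iscsi
    portal 192.168.1.10
    target iqn.2000-01.com.example:target0
    content images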
What is the type (NTFS/ReFS) and block size of the guest file system? Also, you mentioned you set up the VM using best practices, but it might be useful to validate it. Post its vmid.conf.
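If it helps, the guest-side numbers can be pulled with (drive letter is an example):
fsutil fsinfo ntfsinfo C:
(look at Bytes Per Cluster), and the VM config can be dumped with:
qm config <vmid>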
ceph-volume create is the command to create OSDs. It has nothing to do with how you use the filesystem.
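In its simplest form (device path is an example):
ceph-volume lvm create --data /dev/sdb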
What does "power hungry" mean in this context? do you have metrics? (block size, latency, potential iops per core, ?)
so... why not do that...
So the answer is: it depends.
If you are using Proxmox as a hypervisor, it completely circumvents your hypervisor tooling and resource awareness, which means you introduce elements external to its C&C. If you are deploying a virtualization...
Caching is a method to provide performance consistency for a given set of parameters. Its inclusion (or absence) isn't a feature in and of itself. Define your minimum performance criteria and then proceed to tune your storage subsystems with its...
Not 50 Gbit, 2x25. A single IO request cannot exceed 25 Gbit (a single channel on a LAGG), and Ceph transactions are still single-threaded. The good news is that it wouldn't really make a difference anyway, since each of your OSD nodes needs two...