In the case of OP, he set up the entirety of his only disk(s) as a single ZFS pool. He is using it for both his root partition AND his virtual disk space. Since ZFS zvols deployed by PVE are thin provisioned by default, it's possible to...
this can happen ;)
Since there is no room for further writes, you have to "create" some. The only way to do that is to overwrite an existing file with a smaller one. Find a large candidate (like in /var/log) and overwrite it like so:
dd...
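For illustration only (I'm using /var/log/syslog.1 as a stand-in for whatever large file you found), one form of that trick is:

dd if=/dev/null of=/var/log/syslog.1

That truncates the file in place rather than unlinking it, which is why it works when a normal rm would fail for lack of free space.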
High availability is a function of failure tolerance within a given failure domain. If you want/need a host failure domain that can sustain two node failures, just size for it: you'd need 5 nodes and shared storage. If your tolerance is a single...
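Rough math for where the 5 comes from: corosync needs a strict majority of votes to stay quorate, so surviving f node failures means N - f > N/2, i.e. N >= 2f + 1. One failure needs 3 nodes, two failures need 5.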
The reason is pretty clear in his network file. vmbr0 uses bond0, which in turn has the NICs labelled "#Daughter Card - NIC1 10G to network switch".
If you want to use your 25Gb NICs, use them.
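A rough sketch of what that looks like in /etc/network/interfaces, with ens2f0/ens2f1 standing in for your 25G ports and the address as a placeholder (bond-mode depends on what your switch is configured for):

auto bond0
iface bond0 inet manual
        bond-slaves ens2f0 ens2f1
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Then ifreload -a (or reboot) to apply.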
On an unprivileged CT, you won't be able to run fstrim AFAIK. However (& depending on how you've mounted that disk inside the CT) you could possibly try pct fstrim <ctid> from the host node as shown here:
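For example, with 101 as a placeholder container ID:

pct fstrim 101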
What's the point of doing this?
Fix your filesystem problems first. Your ZFS layout isn't the issue, but the subsystem is somehow defective.
Once we get past the "don't store your payload and backups on the same storage" part: if you really want to use PBS...
What I would do in your situation:
1. Since you have a limited number of SSDs, I'd install PVE on disks 0+1. Yes, that means 2.4TB of raw disk for such a small use, but the alternative would be more impactful.
2. "Slow" Pool: disks 2-13, striped...
You can, but shouldn't, for multiple reasons. Keep your OS on local disks.
Ceph is designed to provide the same storage services as your SAN. It will reduce your usable capacity, will obviously be substantially slower than the native storage, and is...
Reducing risk means engineering fault potential out of the solution and providing means of mitigation. At no point does anyone say that a fault CANNOT result in failure. Your so-called "super reliable" data storage vendors invest...
It seems like this question is asked daily. I don't know who to @mention, but it might be a good idea to post a sticky.
Yes. See https://pve.proxmox.com/wiki/ISCSI_Multipath for the multipathing setup. As for storage pool setup, you have 2 options (3...
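The wiki walks through it, but the gist is roughly this (the WWID below is a placeholder for your LUN's actual ID):

apt install multipath-tools
multipath -a 3600a098038303634722b4d59646c4436
systemctl restart multipathd
multipath -ll

multipath -ll should show both paths grouped under a single mpath device; that device is what you put your storage pool on.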
Everything is relevant relative to what you do with it. For a homelab you're unlikely to "quickly" get to wearout, if ever.
It uses as much or as little as you allocate. If it makes you feel any better, I ran a 5-drive raidz on a 2006 vintage...
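If the worry is RAM (I'm assuming that's the question), the ARC only grows to whatever cap you give it; e.g., to pin it at 4GiB:

echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u

That takes effect on the next reboot; echoing the same value into /sys/module/zfs/parameters/zfs_arc_max changes it live.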
If you can ping the containers from each other, your issue exceeds the scope of this forum and is a normal Linux admin question. Have you installed and enabled openssh-server on the destination?
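Assuming a Debian/Ubuntu based CT, that's just:

apt install openssh-server
systemctl enable --now ssh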
Can confirm
Not in the PVE implementation ;) at least not OOB. Maybe you want to add it? More seriously, this has little purpose. Any NFS-capable client would be better served by attaching CephFS directly (Windows can and should use SMB).
A SMART test doesn't put any significant load on the drive, and will typically not impact disk performance at all. A disk under SMART test isn't generating any extra heat. You can cook a disk drive AT IDLE without adequate cooling. Don't...