Pretty please, describe the solution to me then, using iSCSI without ZFS over it. Never mind, you mentioned OCFS2 and GFS2, but if there are any already employed by PVE, please let me know.
Yes, that's entirely true but I did not mention those as they are not within the scope of our available options...
No, because you specify the target you want to remove, e.g. "iqn.2005-10.org.freenas.ctl:LUN". Also, this methodology seems to still apply in 2022, but I'd love to hear that I am wrong, or will soon be wrong, about that.
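To make that concrete, here is a minimal sketch of the removal, assuming a hypothetical PVE storage ID of "freenas-iscsi" and the example target above (adjust both for your environment):

```bash
# Remove the storage definition from the PVE cluster config (storage ID is hypothetical)
pvesm remove freenas-iscsi

# Then log out of the session and delete the node record with open-iscsi
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:LUN -u
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:LUN -o delete
```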
Tmanok
Hi Everyone,
We've been struggling to move away from a poorly performing CEPH cluster at a customer site for a little while now. Our NAS supports iSCSI and NFS, but not CEPH or ZFS over iSCSI. Is there truly no solution that includes HA, Live Migration, and PBS live backups using an iSCSI...
Hi Oguz,
Yes, that is what we have been doing in the meantime: going back to 5.11 without issue, though we have noticed poor disk performance.
The BIOS is not the latest, but admittedly we don't have a copy of the latest BIOS or the latest HBA firmware.
No HW changes at all.
Hi Neobin,
You may be on the right track...
Hi Proxrob,
Two things immediately come to mind:
1. Check the IPMI logs for hardware failures (fan, CPU, PSU, etc.). For HP ProLiant this would be iLO, for Dell PowerEdge it would be iDRAC; otherwise, refer to your owner's manual for the out-of-band management logs.
2. Linux has a few important logs...
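To give a rough idea of where I usually start, a short sketch (this assumes ipmitool is installed and the BMC is reachable; these are illustrative starting points, not an exhaustive list):

```bash
# Out-of-band: dump the BMC's system event log (requires ipmitool)
ipmitool sel list

# In-band: kernel ring buffer and systemd journal, filtered for problems since last boot
dmesg -T | grep -iE 'error|fail|fault'
journalctl -p err -b
```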
Lifeboy, licensing and freedom still clash with reality: those who do not follow the requirements will not receive community or professional support.
Anyone can do what they want, but their systems are likely to be unstable, insecure, or poorly performant if they do not meet a product's...
Please consider that Ceph needs somewhere around:
1-2GB/monitor
1-2GB/manager
3-4GB/OSD
This may also increase per TB by default... You should not be running CEPH with 8GB per machine; this is enterprise software, and I would strongly recommend against anything under 48GB.
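As a rough worked example using the upper end of those ranges: a node running 1 monitor, 1 manager, and 4 OSDs wants roughly 2 + 2 + 4x4 = ~20GB for Ceph alone, before the OS and any guests. If you want to check or adjust what each BlueStore OSD aims for, the knob is osd_memory_target (4GiB by default); only raise it if you actually have the RAM to spare:

```bash
# Show the current per-OSD memory target (defaults to 4 GiB with BlueStore)
ceph config get osd osd_memory_target

# Example only: raise it cluster-wide to 6 GiB (6442450944 bytes)
ceph config set osd osd_memory_target 6442450944
```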
Tmanok
Hi Michael,
I've also been interested in improving my available CEPH performance. Do the Proxmox engineers have any guidance here?
There are many performance tuning guides, but I am not certain of their compatibility; for instance, would modifying the following help...
Why Hello There!
This evening I performed a regular upgrade to a client's long-standing (~2 years) hyperconverged PVE cluster. This cluster has some history but has for the most part been trouble-free. The cluster is made up of homogeneous hardware, all Gen8 HPE ProLiant servers with one node...
Super excited to upgrade to 7.2! All of the new features look fantastic!
Question about VirGL:
Does this mean that VM graphics processing can be sent to the host and processed as OpenGL commands? If so, does this mean that the host's GPU(s) can process guest graphics??
Thanks a ton for your...
Restarting the pvestatd daemon and disabling remote storages that were offline have helped me a few times with this. The issue often arises when one node believes that NFS is active but then "panics" when an administrator queries the remote storage that is in fact unreachable.
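For reference, this is roughly what I do, assuming a hypothetical storage ID of "nas-nfs" for the unreachable share:

```bash
# Mark the unreachable storage as disabled so nodes stop probing it (storage ID is hypothetical)
pvesm set nas-nfs --disable 1

# Restart the status daemon on the affected node
systemctl restart pvestatd

# Re-enable once the NAS is reachable again
pvesm set nas-nfs --disable 0
```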
Thanks!
Tmanok
Labynko!
Thank you so much! This helped me restore two mislabeled OSDs! The wrong serial was printed on the caddy/tray, and I only noticed about 2 minutes later! Safe to say that if you can run these commands within 5 minutes you won't crash your OSD; I crashed one after about the 5-8 minute mark.
Tmanok
Zapping the disk preserves the partitioning but not the data. The --destroy flag removes both, effectively making it a full wipe. Either way, consider your data toast.
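For anyone searching later, the commands in question look roughly like this; the device name is only an example, so double check it before running anything destructive:

```bash
# Wipe the data but keep the LVM/partition structure for reuse
ceph-volume lvm zap /dev/sdX

# Full wipe: also tears down the LVs/VG/partitions on the device
ceph-volume lvm zap /dev/sdX --destroy
```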
This was my solution...
On the topic of file systems, with APFS, snapshots are a native feature of the file system, so the solution might be a lot easier. Additionally, Time Machine has been designed to make use of these snapshots, so perhaps it could be leveraged with the macOS client? Might be a good idea to...
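Just to illustrate what is already there on the macOS side, nothing PBS-specific, only the built-in tmutil tooling:

```bash
# Ask Time Machine to create a local APFS snapshot
tmutil localsnapshot

# List the local snapshots now present on the root volume
tmutil listlocalsnapshots /
```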