Yeah, that's a bad way to go. Keep the logs where they are and mount your ramdisk as a union filesystem over them at boot. I would still do a periodic commit back to disk.
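As a minimal sketch of what I mean, using overlayfs (all paths and the commit interval here are mine, adjust to taste):

# keep the on-disk logs reachable so the periodic commit has a target
mkdir -p /run/log-overlay/upper /run/log-overlay/work /var/log.lower
mount --bind /var/log /var/log.lower

# tmpfs-backed upper layer (/run is tmpfs) over the persistent logs
mount -t overlay overlay \
    -o lowerdir=/var/log.lower,upperdir=/run/log-overlay/upper,workdir=/run/log-overlay/work \
    /var/log

# periodic commit from cron, e.g.:
# */30 * * * *  rsync -a /run/log-overlay/upper/ /var/log.lower/

The bind mount matters: once the overlay covers /var/log, it is the only path left to the on-disk copy.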
Don't do that. Back up /etc/pve/storage.cfg, /etc/pve/lxc, and /etc/pve/qemu-server and reinstall as new. You can...
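For illustration, grabbing those paths before the reinstall could be as simple as (the tarball path is arbitrary, and copy it off the host since you're wiping it):

tar czf /root/pve-config-backup.tar.gz \
    /etc/pve/storage.cfg /etc/pve/lxc /etc/pve/qemu-server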
osd.1 appears to be unreachable. If this is a networking issue I can't really help you, since you are paranoid about posting your actual IP addresses (or at least their respective subnets).
That said, check the host of osd.1 to see...
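For example, on that host I would start with something like:

ceph -s                           # overall cluster health
ceph osd tree                     # is osd.1 marked down/out?
systemctl status ceph-osd@1       # is the daemon actually running?
journalctl -u ceph-osd@1 -n 50    # recent OSD log entries
ss -tlnp | grep ceph-osd          # listening on the expected interfaces?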
So it is. Apologies.
Your hypothesis is flawed. For one thing, what makes you think you SHOULD get a specific number on THIS specific test? Who are these "others" you are comparing yourself to? Maybe ask them what they are doing differently.
That is patently impossible on a single channel. 16 Gbit/s yields a maximum THEORETICAL ~409.6 kIOPS at 4k per IOP (roughly 1.6-1.7 GB/s of usable payload after encoding and protocol overhead, divided by 4 KiB per I/O). In PRACTICE your actual performance depends on multiple factors and will never reach the theoretical max.
All your IOPS test...
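To make numbers comparable at all, everyone needs to run the same job; a 4k random-read fio run along these lines would be a reasonable baseline (device, iodepth, and runtime are illustrative):

fio --name=randread4k --filename=/dev/sdX --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting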
Why does any of this matter? Does your use case require a specific throughput, or is it just a matter of "that's what the specs say, so should the benchmarks!"
You cannot overstate this. A fast disk subsystem that can fail and take your data with...
You do not want PVE to be mapped to the same LUNs and Zones as your VMware. You need to create new LUNs, new Zones, new mappings.
If the LUNs are properly mapped, you should see the disks in "lsblk" and "lsscsi" output. If you don't, the...
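For example (rescan-scsi-bus.sh ships with the sg3-utils package):

rescan-scsi-bus.sh                      # pick up LUNs mapped while running
lsscsi                                  # the new LUNs should appear here
lsblk -o NAME,SIZE,VENDOR,MODEL,WWN     # ...and here, as block devices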
I think we may be experiencing a semantic difference.
"SAN" or Storage Area Networks typically mean shared block storage- eg iSCSI, FC, NVOF or even SAS/SCSI.
"Directory Storage" means (to me) locally mounted filesystem, either on...
Let's begin with the obvious: what are you using now? I presume it's some sort of NAS, since you're using "directory-based storage."
Is this besides the above? In addition to it?
Either a NAS or a SAN can handle this, so it looks like you already have...
I think you might want to re-evaluate the basis of your troubleshooting, from belief to, ahem, actual troubleshooting.
Well, that is the problem, isn't it? Maybe post those too?
Neat idea, but I think you may want to pick EITHER homelab/SMB use OR actual enterprise use, as the two use cases dictate widely different requirements.
IF you want to target home/SMB use, you need a GUI for user and ACL management; maybe...
Hi... Here is a systemd unit that I use for ocfs2.
I hope you can adapt it to your needs.
# /etc/systemd/system/data.mount
[Unit]
Description=Data mount
After=drbd.service
After=o2cb.service
After=ocfs2.service
[Mount]
# What= is an example; point it at your actual DRBD-backed ocfs2 device
What=/dev/drbd0
Where=/data
Type=ocfs2

[Install]
WantedBy=multi-user.target
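Note that with mount units, Where= has to match the unit file name (data.mount -> /data). After dropping the file in place:

systemctl daemon-reload
systemctl enable --now data.mount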
"Best Practices" in this context are largely dependent on your use case.. While performance is an important part of your strategy, its not the sole objective.
As you should be aware, LVM on shared storage is supported with thick LVM only. Snapshot...
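For reference, a thick-LVM setup on a shared LUN could look like this (device and names are illustrative):

pvcreate /dev/mapper/mpatha
vgcreate shared_vg /dev/mapper/mpatha
pvesm add lvm shared-lvm --vgname shared_vg --shared 1 --content images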
Hello @baalkor, this really depends on the capabilities of your backend storage and your specific configuration. Personally, I would consider multiple LVM pools if:
You are dealing with very high-performance storage, where LVM itself can be a...
That probably depends on your customers more than it does on you. Having some exposure to the industry, I can tell you most studios will effectively hand you their policies when you submit a vendor security questionnaire. "Proxmox" isn't really...
payload on the host. You can share directories to your containers as mount points. Instead of your data living INSIDE a VM logical volume, it can live on your normal filesystem (e.g. /srv/my_important_data).
That makes things easier and safer...
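For example, with a hypothetical container 101 (the ID and in-container path are made up; /srv/my_important_data is from above):

pct set 101 -mp0 /srv/my_important_data,mp=/data

The data then outlives the container, and host-side backup tools see it as plain files.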
OK, but what happens when you run the workload on host B for a while? Do you have any form of check to make sure you're synchronizing the right source to the right destination?
The above procedure is fairly simple to set up with two PVE hosts...
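One possible shape for such a check, as a sketch (marker file, paths, and host name invented for the example): keep an "active" marker on whichever host currently owns the workload, and have the sync script refuse to push without it:

#!/bin/sh
# refuse to sync unless this host is the designated active side
if [ -f /srv/data/.active-here ]; then
    rsync -a --delete --exclude=.active-here /srv/data/ hostB:/srv/data/
else
    echo "refusing to sync: this host is not marked active" >&2
    exit 1
fi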