So, I was thinking about this a bit more.
As far as I understand, you would have been happy if you could have specified the partition label that should be searched for the answer.toml file?
Because it would be one possibility to make that configurable when preparing the ISO for the auto...
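For reference, a sketch of how the partition-based fetch is prepared today (exact options may differ between versions, so please double check the docs):
proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from partition
The installer then searches for the answer.toml on a partition with the expected label.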
Does the network for Ceph actually give you the expected speeds? iperf can be used to benchmark the network.
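For example, a quick check between two nodes (assuming iperf3 is installed on both; use the IPs of your Ceph network):
iperf3 -s (run on the first node)
iperf3 -c <IP-of-first-node> (run on the second node)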
If recovery / rebalance brings the cluster to a point where you see performance issues in the guests, it could also be that the cluster does not have enough reserves to handle the...
If the ZFS storage is not thin provisioned, the datasets for the disk images will be created with a reservation.
I suggest you read up on man zfsprops to see what each property does. The reservation and/or quota properties might be of interest. Be careful though if you modify the system, there...
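For example, to just inspect the relevant properties without changing anything (the dataset name is only an example, use one of yours):
zfs get reservation,refreservation,quota,refquota rpool/data/vm-100-disk-0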
If you set the autoscale mode to "warn" you will at least know if you should change the pg_num :)
So yes, I would suggest you switch the .mgr pool to the main rule as well, to get that.
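As a sketch (replace the pool and rule names with the ones you actually have):
ceph osd pool set <pool> pg_autoscale_mode warn
ceph osd pool set .mgr crush_rule <your-main-rule>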
timing ;)
What you can see is that each dataset in a ZFS pool shares the free space (AVAIL), unless you make reservations.
Therefore, for each storage, the total space is usually calculated as currently used + free.
Actual usage on ZFS is a bit less, most likely due to compression.
Check zfs...
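For example (available columns may vary with your ZFS version):
zfs list -o name,used,avail,refer,compressratio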
Besides my earlier comment regarding different OSD sizes and number of OSDs per node, these screenshots show a few more issues.
CRUSH rules: if you start using device-specific rules, all pools need to be assigned one, as the default replicated_rule does not distinguish between device classes...
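A sketch of how such a rule can be created and assigned (rule and pool names are just examples):
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set <pool> crush_rule replicated_ssd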
How are your pools configured with regard to PGs per pool? Did you configure target_ratios for the pools so that the autoscaler knows what size you expect them to grow to?
Also, your cluster nodes have very different numbers of OSDs and differently sized OSDs. This makes it quite a bit more...
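Regarding the target_ratios mentioned above, a sketch (the ratio value is just an example):
ceph osd pool autoscale-status
ceph osd pool set <pool> target_size_ratio 0.8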
No, but OSDs will issue a lot of sync writes, which these SSDs handle best in terms of performance, as they can ACK the write operation once the data is in the SSD's local cache/RAM.
Which is why the Ceph docs have a section on how to enable the cache on HDDs to improve...
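If I remember correctly, checking and toggling the volatile write cache is done with hdparm; please verify against the Ceph docs section mentioned above before changing anything:
hdparm -W /dev/sdX (show the current write cache setting)
hdparm -W 1 /dev/sdX (enable the write cache)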
Not much info to go on. To get good performance with Ceph, you need a few things:
fast datacenter SSDs with PLP
low latency network with enough bandwidth. Fast SSDs can quickly make the network the bottleneck, see the Ceph 2023 Benchmark Paper
configure the BIOS for low latency / high performance...
You want to pass through a physical serial port to the Windows VM?
That cannot be done through the web UI, but via the CLI:
qm set {vmid} --serialX /dev/ttyUSB0
This is just one example. Check which of the /dev/tty devices you want to pass through.
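For example, for VM 100 and the first serial slot (Proxmox VE supports serial0 through serial3; the VM ID and device are just placeholders):
ls -l /dev/ttyS* /dev/ttyUSB*
qm set 100 --serial0 /dev/ttyUSB0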
Instead of going through the pains of modifying the automated installer, wouldn't it be easier to set up a small server that hosts the answer files and use just one ISO that fetches them?
With access to the iDRAC you might even be able to automate it further, as you may be able to fetch certain...
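A sketch of that approach (flag names from memory and the URL is just a placeholder, so please double check against the current docs):
proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from http --url "https://answers.example.com/answer"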
True, but that doesn't make RFM (Read (the) Fucking Manual?) a more helpful response to work with in my opinion :)
At least some pointers to where in the documentation one might find the information would already be a lot more helpful.
IIUC, you want to pass through a physical disk directly to a VM?
Not sure if VMware might be doing more, but to pass through a physical disk, you can run:
qm set {vmid} --scsiX /dev/disk/by-id/...
Instead of --scsiX you can use any other bus type like ide or sata.
See...
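A concrete sketch (the VM ID and disk ID are placeholders, pick yours from the ls output):
ls -l /dev/disk/by-id/
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLEDISK_SERIAL123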