Those are your disks and partitions. The sizes don't align with your df -h output, so there's still a discrepancy between your block devices and your mounted filesystems.
What does this command say: findmnt -lo source,target,fstype,used,avail
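If it helps, run it next to the block-device view so the comparison is direct (just a sketch, nothing here is specific to your host):

Code:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
findmnt -lo source,target,fstype,used,avail
df -h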
Edit: Just to add, this is an old post. The OP's...
It sounds like you may have installed a 1 TB drive for VM hosting, but that is not where PVE is installed or where your ISO repository is configured. Nothing in your df -h output is anywhere near that size.
If you cannot write a reasonably sized file directly to the directory via scp or ssh, I...
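As a concrete check (a sketch only; the path below is the default for the stock "local" storage, substitute whatever your ISO storage actually points at):

Code:
cat /etc/pve/storage.cfg     # where each storage, including the ISO one, actually points
pvesm status                 # type and free space per storage
scp some.iso root@<pve-host>:/var/lib/vz/template/iso/
# or test with a throwaway file of known size:
ssh root@<pve-host> 'fallocate -l 2G /var/lib/vz/template/iso/upload-test && rm /var/lib/vz/template/iso/upload-test'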
This is just a dead issue. Novices expect to be able to freely upload files via the GUI with no situational awareness or any regard for how much space is available versus the size of the payload.
Numerous solutions have been provided in the thread. I don't expect the PVE team to provide a...
Haswell-EP is what you have in your host. Use Haswell-noTSX-IBRS for now, and see if there is any unknown hardware in device manager, or how Windows identifies the VM CPU.
All the generic ("fake") CPU models have similar performance but lack instruction set extensions; kvm64 should be one of the oldest/slowest.
I have 4 options:
Haswell
Haswell-IBRS
Haswell-noTSX
Haswell-noTSX-IBRS
I would just use Haswell; that is what your physical processor is.
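For reference, on the CLI this is just the VM's cpu option (VMID 100 is only an example, and whether you need the noTSX/IBRS variant depends on the guest):

Code:
qm set 100 --cpu Haswell
# or, if the guest balks at TSX or you want the IBRS variant:
qm set 100 --cpu Haswell-noTSX-IBRS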
What was the processor of the...
The PVE-packaged Ceph versus regular Ceph distinction does not apply in this discussion.
One of the most important best practices you might want to consider: separating the "storage" network from your production VM network is not, by itself, sufficient. Your storage network must (or should at least be seriously considered to) be...
I just want to be sure that when you say "Data and storage networks are separated of course." we are referring to the Ceph front (public) and Ceph back (cluster) networks being separate, each on dedicated 25 GbE pairs, right?
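In ceph.conf terms, something like this, where both subnets are just placeholders for your two dedicated 25 GbE pairs:

Code:
# /etc/pve/ceph.conf
[global]
    public_network  = 10.10.10.0/24   # Ceph front: client/MON traffic
    cluster_network = 10.10.20.0/24   # Ceph back: OSD replication and recovery traffic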
What device class do you have on your existing pools? If a different device class is assigned to the new, there...
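A quick way to check (output details vary a little by release):

Code:
ceph osd crush class ls      # which device classes exist
ceph osd df tree             # CLASS column per OSD
ceph osd pool ls detail      # which crush_rule each pool uses
ceph osd crush rule dump     # which device class, if any, each rule targets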
It's unfortunate, but uploads are spooled to a local dir and then moved to your destination.
If your root storage is not sufficient, which is often the case, do not use the GUI to upload content; use SSH/SCP directly to your desired CephFS directory.
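Assuming the CephFS storage is named "cephfs" and has ISO content enabled, PVE mounts it under /mnt/pve/<storage>, so the upload is just:

Code:
scp some.iso root@<pve-host>:/mnt/pve/cephfs/template/iso/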
All I'm saying is that I have seen it on Quincy, with sufficient network and NVMe drives and a modest 3k steady IOPS from fewer than 150 VMs, and I told myself I probably won't do it again, purely in terms of risk/reward.
For what appears to be a relatively green user to undertake a 2x on their...
You can absolutely operate on a shared net; that is why it is still permitted and merely not recommended. But the impact of recovery on client operations is likely to be higher.
And even on a dedicated dual 40 GbE OSD network, with recovery at the lowest priority, I have seen slow ops during a...
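"Recovery at the lowest priority" here means the usual throttles, roughly along these lines (the exact knobs depend on whether your release uses the mclock scheduler):

Code:
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# on mclock-based releases (Quincy and later), the profile is the supported lever:
ceph config set osd osd_mclock_profile high_client_ops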
The docs say to split the nets! And I do, but most people don't, or at least not in the majority of PVE setups I've seen.
Wherever they do discuss recovery loads, it's not getting through to the users.
The 100 GbE examples given in the 2023/12 PVE document were also not split, I imagine because the...
https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/
Ehh, the trade-off of recovery performance against client performance is not discussed.
The documentation is incomplete relative to the number of scenarios that can be encountered, especially with respect to split...
Where do people even get the term "best practice" from? This isn't a cartel like Microsoft or Cisco where they give exams and awards to their little disciples. It reeks of escapism, like nobody ever got fired for following a best practice, regardless of outcome.
There are currently only 3...