We recently uploaded a 6.14 kernel into our repositories. The current 6.8 kernel will stay the default on the Proxmox VE 8 series; the newly introduced 6.14 kernel is available as an opt-in option.
The 6.14 based kernel may be useful for some (especially newer)...
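If you want to give it a try, installing the opt-in kernel should look roughly like this (package name assumed from the usual opt-in kernel naming scheme, please double-check against the announcement and your repositories):
apt update
apt install proxmox-kernel-6.14
reboot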
Proxmox VE is the newest addition to the NVIDIA vGPU supported hypervisors, beginning with NVIDIA vGPU Software v18.0, released today.
NVIDIA vGPU software enables multiple virtual machines to share a single supported physical GPU; learn more at...
Proxmox Mail Gateway 8.2 is available! The new version of our email security solution is based on Debian 12.9 “Bookworm”, but defaults to the newer Linux kernel 6.8 and allows opt-in use of kernel 6.11. The latest versions of ZFS 2.2.7...
Should I use ZFS at all?
For once, this requires a disclaimer first: I use ZFS nearly everywhere it is easily possible, though exceptions do exist. In any case, I am definitely biased in favor of ZFS.
That said..., the correct answer is...
Keep 10% spare; you should never overprovision storage space, as that could result in data loss.
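One rough way to keep that headroom visible on ZFS (pool and dataset names are just examples) is to watch the capacity column and optionally park a reservation you can release in an emergency:
zpool list -o name,size,alloc,free,capacity rpool
zfs create rpool/reserved
zfs set refreservation=50G rpool/reserved    # emergency headroom, shrink or destroy it if the pool ever fills up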
I think you can unmount it while the server is running. You need to take the volume offline in Windows, detach it via Proxmox, and recreate it with...
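For the detach part, something like this works from the CLI (VM ID, bus/slot and storage name are placeholders, and I would take a backup first): after taking the volume offline in Windows, detach it so it shows up as an unused disk, then re-attach it with the options you want:
qm set 100 --delete scsi1                                  # detaches the disk, it shows up as "unused"
qm set 100 --scsi1 local-zfs:vm-100-disk-1,discard=on      # re-attach the same volume with new options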
Do you have a link for me to where this problem is being discussed?
I was able to reproduce similar behavior on a very slow Ceph storage that is based on HDDs, where the queue for the HDDs rose to an unbearable...
You should never set a VM's disk size equal to or larger than the size of the underlying storage; always leave at least 10% free. In addition, your VM does not have “Discard” enabled, so deleted blocks are not reported back to the host, which means your storage is now...
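Discard can be turned on from the CLI as well, something like this (VM ID, bus and storage are examples; the guest also needs to actually trim, e.g. via Windows' "Optimize Drives"):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on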
In my opinion that is completely absurd; something like that might be feasible in very small companies with small VMs, but for anything larger than 200 GB you have to expect a considerable loss of time, not to mention...
Great news..!
This is most probably due to fixes included with v266.
The change in cache makes sense. This should really be left to the "lowest" level possible, i.e. closest to bare metal.
(Oh no, I hope I didn't start a flame war with that...
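On the Proxmox side that would roughly translate to keeping the host page cache out of the way and leaving the default on the virtual disk, e.g. (VM ID and disk are placeholders):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none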
For those interested, in the last few days there have been many PRs raised for fixes in viostor (the VirtIO Block driver).
These are still pending review, but if approved and merged it will likely mean that the block driver will outperform the...
A lot of things here...
You are running 6 monitors, which isn't recommended (an even number gains you nothing for quorum). Use either 3 or 5 (preferred, as it allows 2 mons to fail while still keeping quorum).
You have a 3/2 pool set to drive class "hdd", but only have 2 servers with "hdd"...
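If one of the extra mons has to go, it can be removed roughly like this (node name is a placeholder), then verify the count:
pveceph mon destroy pve6
ceph mon stat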
Are the time settings of each cluster member correct, and are the mons and the manager up?
Can you provide the following details:
ceph status
ceph osd tree
ceph osd df
ceph pg dump pgs_brief
You can try to:
ceph osd set norecover
ceph osd set nobackfill
ceph...
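And once things look stable again, remember to clear the flags, otherwise recovery and backfill stay paused:
ceph osd unset norecover
ceph osd unset nobackfill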
I think we have several problems here:
1. Uneven distribution of HDDs, which sit on only 2 nodes.
2. 83 remapped PGs (are those being worked on right now?)
pool 4 'hdd-pool' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 120...
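To see how the HDD OSDs are spread over the nodes and what those remapped PGs are doing, the following should give a quick overview:
ceph osd df tree
ceph pg ls remapped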
I think he does, because of:
data: pools: 3 pools, 289 pgs
1. .mgr
2. hdd
3. ssd
Correct me if I'm wrong.
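You can double-check which pools exist and how full each one is with:
ceph osd pool ls detail
ceph df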
This could help you, but it should really only be the very last resort, with complete backups...
yes, works flawlessly
The only thing I noticed with v266 is that HDD-backed Ceph under a very high queue load kills the specific volume; it becomes unresponsive and stuck forever. The best way to trigger that is having a file server with deduplication...
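If someone wants to check whether they are hitting the same thing, watching the slow ops and the HDD queues while such a job runs is a quick way to tell (the OSD id is a placeholder; run the daemon command on the node hosting that OSD):
ceph health detail
ceph daemon osd.12 dump_ops_in_flight
iostat -x 1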