systemd-userdbd, where the change (adding an optional field you can set to any date you like) occurred, isn't even used in Proxmox VE. But even if it were used: the whole outcry is way overblown.
Even if one gets rid of systemd (imho a bad idea...
Hey all.
We have put together a storage plugin for S3 storage for PVE.
This comes from us having used a local directory and s3fs (fuse) with mixed results - bad things happen when the S3 storage goes offline, either via a proxy outage or...
Why would multipathing not pass the discard command through?
Besides, the discard should make the qcow2 file on OCFS2 smaller. At that layer this has nothing to do with multipathing to the SAN at all.
If the SAN storage itself ... the LUN...
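For discard to shrink the qcow2 file, it has to be enabled on the virtual disk and actually issued inside the guest. A sketch of what that could look like (VM ID 100, disk scsi0 and the storage name are placeholders from my own setup, not from this thread):

```shell
# Enable discard (and SSD emulation) on the VM disk so TRIM reaches qcow2
qm set 100 --scsi0 ocfs2store:100/vm-100-disk-0.qcow2,discard=on,ssd=1

# Inside the guest: trim unused blocks so the qcow2 on OCFS2 can punch holes
fstrim -av
```

Whether the freed space then propagates further down to the SAN LUN is a separate question from the qcow2 layer.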
@spirit: Thank you for the concrete numbers and references — that's very helpful. I was wrong to be dismissive earlier about the allocator angle.
The Ceph blog you linked (QEMU/KVM Tuning) reports a ~50% improvement on 16KB random reads (53.5k →...
Thank you for your clarification.
It's a bit impractical that the event is being skipped and not executed after the time change,
but mostly it's no big deal to avoid the problem by setting another time.
Even though this is not a PBS related...
This is another example of systemd "Knows Better" dogma :-(.
Cron is more sensible:
"Daylight Saving Time and other time changes
Local time changes of less than three hours, such as those caused by the start or end of Daylight Saving Time, are...
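For comparison, on the systemd side a timer can at least catch up runs it missed while the machine was down, via Persistent=true. A minimal sketch (the unit name and the 02:00 schedule are just illustrative; note this does not change how a wall-clock hour skipped by DST is handled):

```ini
# backup.timer (hypothetical example unit)
[Unit]
Description=Nightly backup (example)

[Timer]
OnCalendar=*-*-* 02:00:00
# If the scheduled time passed while the timer was inactive (e.g. the
# machine was off), trigger the service once immediately on activation.
Persistent=true

[Install]
WantedBy=timers.target
```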
Welcome, @Astrodata
I have the same output on a system that has nothing in common with Proxmox:
@kubuntu:~$ systemd-analyze calendar "Sun *-*-1..31 02:00:00"
Original form: Sun *-*-1..31 02:00:00
Normalized form: Sun *-*-01..31 02:00:00
Next...
In a 3-node cluster with size=3, min_size=2 and the default replica rule, Ceph won't rebalance anything if one node fails. To comply with the default rule "three replicas on three OSDs located in three different servers" you need 3 servers, if...
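You can verify those settings yourself on a node; a sketch ("mypool" is a placeholder pool name):

```shell
# Replication settings of the pool
ceph osd pool get mypool size        # number of replicas (3 here)
ceph osd pool get mypool min_size    # minimum replicas to keep serving I/O (2)

# The default rule's failure domain: "type": "host" means one copy per server,
# which is why a 3-node cluster has nowhere to rebalance a lost third copy
ceph osd crush rule dump replicated_rule
```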
Yeah, you can switch the network in the Datacenter menu under Options. Just keep in mind that 2x10G isn't a lot for Ceph; if a migration starts running, you can hit bottlenecks pretty fast. I think there's also an option to limit the migration...
The package cloud-init is part of Debian. Its version is normally fixed for each Debian release, so with the advent of Debian 14 "Forky" and the upcoming PVE 10, the version currently available is 25.3, so not that much newer. That may for Debian...
What is FTT in this context? The memory target for OSDs should be 8 GB, i.e. 8 GB per daemon. Anything lower usually sacrifices performance during recovery. With the memory target, using EC or replicas doesn't make a difference.
Yes...
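To put that 8 GB per daemon in concrete terms, here's a sketch of setting it via the Ceph config DB (osd_memory_target takes bytes; the ceph config set line is shown commented out, as it changes the setting cluster-wide for all OSDs):

```shell
# 8 GiB expressed in bytes for osd_memory_target (applies per OSD daemon)
bytes=$((8 * 1024 * 1024 * 1024))
echo "$bytes"   # prints 8589934592

# Apply cluster-wide to all OSDs via the Ceph config DB (illustrative):
# ceph config set osd osd_memory_target 8589934592
```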
By default /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf which makes it the same on each cluster node.
AFAIK it is easier to use ceph config set to set the values in the config db for each Proxmox node.
ceph config set client.HOSTNAME...
I misspoke.
PVE does not have any such functionality. What I MEANT was to do it at the local Debian level (and since that's where the entirety of the PVE stack resides, it was convenient shorthand).
There are a bunch of tutorials for "setting up...
Proxmox VE is a bare-metal hypervisor platform aimed at IT professionals and system administrators. It is not a product or service directed at children ...
"Overrun" describes it pretty well, when people, instead of trying to understand their problem and describing it in their own words, have the AI pre-formulate it for them and don't even check anymore whether that actually captures their problem correctly...
I agree with you there. I use it, for example, for spell-checking longer texts and for translating English posts. Sometimes I also use it to gather ideas or to do some initial clarification of what the...
Looking at the graphs you posted, a few things stand out.
The Ceph pool metrics show almost no writes — IOPS and throughput on `pve_ceph_prod_3az` are nearly 100% read. Cross-AZ write latency is not what's hurting you here. The good news is that...
I was thinking about PBR (policy-based routing), i.e. forcing SDN traffic out through a different NIC.
Right now, in my cluster the management VLAN/interface has the default route, which means the SDN traffic would break out into my management...
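A minimal sketch of what such a PBR setup could look like with iproute2, assuming the SDN traffic originates from 10.10.10.0/24 and should leave via gateway 192.168.20.1 on eth2 (all addresses, devices and the table number are illustrative, not from this thread):

```shell
# Separate routing table for SDN traffic, with its own default route
ip route add default via 192.168.20.1 dev eth2 table 100

# Send anything sourced from the SDN subnet through that table
ip rule add from 10.10.10.0/24 table 100

ip route flush cache
```

That way the management VLAN keeps the main table's default route, while SDN traffic is matched by source address and routed out the other NIC.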