The bare operating system doesn't need more than seven GB on any of my nodes. Templates, ISO images and the like can be offloaded to NFS, but I would keep my hands off doing that with the OS itself. In the end, a service can't reach the NFS...
I would offload /var/lib/vz completely. That's where templates and ISOs live, for example. In my case that's 50 GB; without this directory my PVE install isn't even 7 GB.
But then stop uploading ISO files through the PVE GUI and always put them directly on the NFS instead.
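As a sketch, the NFS share could be registered as a dedicated ISO/template storage in /etc/pve/storage.cfg (server address, export path and storage name here are made up):

```
# hypothetical entry in /etc/pve/storage.cfg
nfs: iso-store
        server 192.168.1.10
        export /export/pve-iso
        path /mnt/pve/iso-store
        content iso,vztmpl
```

With `content` limited to iso,vztmpl, no VM disks end up on the share, so an NFS outage only affects template/ISO access, not running guests.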
Proxmox has the ability to incorporate patches and build updated packages as needed. Given that Proxmox VE is frequently deployed as an appliance, and is relied upon by many enterprise environments, there may be cases where, depending on the...
Oh, indeed Wazuh is saying so. It looks very much like an inaccuracy on Wazuh's side.
The original message from the researchers clearly states that the vulnerability is in telneld (the server)...
The inetutils-telnet package contains only the client, so it doesn't make the machine vulnerable.
The telnet server is shipped in a different package: inetutils-telnetd
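To double-check which of the two packages is actually installed, a quick dpkg query works (package name as discussed above; on a system without dpkg or without the package this simply reports "absent"):

```shell
# Check for the telnet *server* package; only it would make the box vulnerable.
if dpkg -s inetutils-telnetd >/dev/null 2>&1; then
    echo "inetutils-telnetd: present"
else
    echo "inetutils-telnetd: absent"
fi
```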
systemd-userdbd, where the change occurred (adding an optional field you can set to any date you like), isn't even used in Proxmox VE. But even if it were used: the whole outcry is way overblown.
Even if one gets rid of systemd (imho a bad idea...
Hey all.
We have put together a storage plugin for S3 storage for PVE.
This comes from us having used a local directory and s3fs (fuse) with mixed results - bad things happen when the S3 storage goes offline, either via a proxy outage or...
Why would multipathing not pass the discard command through?
Besides, the discard should shrink the qcow2 file on the OCFS2. At that layer this has nothing to do with multipathing to the SAN yet.
If the SAN storage itself ... the LUN...
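For the discard path to shrink the qcow2 at all, the virtual disk needs discard enabled and the guest has to actually issue trims. A sketch of the relevant disk line in a VM config (storage name, VM ID and disk name are made up):

```
# hypothetical disk line in /etc/pve/qemu-server/100.conf
scsi0: ocfs2-store:100/vm-100-disk-0.qcow2,discard=on,ssd=1
```

Inside the guest, `fstrim -av` (or mounting with the discard option) then triggers the trims that qcow2 can turn into freed space.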
@spirit: Thank you for the concrete numbers and references — that's very helpful. I was wrong to be dismissive earlier about the allocator angle.
The Ceph blog you linked (QEMU/KVM Tuning) reports a ~50% improvement on 16KB random reads (53.5k →...
thank you for your clarification.
It's a little impractical that the event is being skipped and not being executed after the time change.
But mostly it's no big deal to avoid the problem by setting another time.
Even though this is not a PBS-related...
This is another example of systemd "Knows Better" dogma :-(.
Cron is more sensible:
"Daylight Saving Time and other time changes
Local time changes of less than three hours, such as those caused by the start or end of Daylight Saving Time, are...
Welcome, @Astrodata
I have the same output on a system that has nothing in common with Proxmox:
@kubuntu:~$ systemd-analyze calendar "Sun *-*-1..31 02:00:00"
Original form: Sun *-*-1..31 02:00:00
Normalized form: Sun *-*-01..31 02:00:00
Next...
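As noted above, simply picking a time outside the window that DST can skip avoids the problem. A minimal systemd timer sketch (unit contents hypothetical; Persistent= additionally catches runs missed while the machine was powered off):

```
# hypothetical backup.timer — schedule outside the 02:00-03:00 window DST can skip
[Timer]
OnCalendar=Sun *-*-01..31 03:30:00
Persistent=true
```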
In a 3 node cluster with size=3, min_size=2 and the default replica rule, Ceph won't rebalance anything if one node fails. To comply with the default rule "three replicas on three OSDs located on three different servers" you need 3 servers, if...
Yeah, you can switch the network in the Datacenter menu under Options. Just keep in mind that 2x10G isn't a lot for Ceph; if a migration starts running, you can hit bottlenecks pretty fast. I think there's also an option to limit the migration...
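Both the migration network and a cluster-wide migration bandwidth cap can be set in /etc/pve/datacenter.cfg; a sketch (subnet and limit are placeholder values, bwlimit is in KiB/s):

```
# hypothetical /etc/pve/datacenter.cfg
migration: network=10.10.10.0/24,type=secure
bwlimit: migration=1048576
```

1048576 KiB/s caps migrations at roughly 1 GiB/s, leaving headroom on a 10G link for Ceph traffic.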
The package cloud-init is part of Debian. Its version is normally fixed for each Debian release, so with the advent of Debian 14 Forky and the upcoming PVE 10, the current version available right now is 25.3, so not that much newer. That may for Debian...
What is FTT in this context? The memory target for OSDs should be 8 GB, which means 8GB per daemon. Anything lower usually sacrifices performance during recovery. With the memory target using EC or replica doesn't make a difference.
Yes...
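As a quick back-of-the-envelope check of the 8 GB-per-daemon figure (the OSD count is a made-up example):

```shell
# Rough RAM budget for OSD daemons on one node, at 8 GiB osd_memory_target each.
osds_per_node=6   # example value
osd_mem_gib=8     # recommended target per daemon, as above
echo "OSD RAM per node: $((osds_per_node * osd_mem_gib)) GiB"
```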
By default /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf which makes it the same on each cluster node.
AFAIK it is easier to use ceph config set to set the values in the config db for each Proxmox node.
ceph config set client.HOSTNAME...
I misspoke.
PVE does not have any such functionality. What I MEANT was to do it at the local Debian level (and since that's where the entirety of the PVE stack resides, it was convenient shorthand).
there are a bunch of tutorials for "setting up...