You don't. I just hope most people run those servers air-gapped and aren't using them to host webservers. But it's probably more a case of "never change a running system".
Node uptime. See some people's posts asking for support:
Here, for example, 1264 days of node uptime running kernel 4.13 in 2023 ;)
https://forum.proxmox.com/threads/permission-denied-error-for-vzdump-command.130885/#post-574612
Why not? Did you test it? It's not uncommon here that I'm able to read/write faster to/from my pool than the hardware would allow. You just need lots of well-compressible data. See for example some LZ4 compression benchmarks:
If my SATA protocol can handle 550MB/s but my CPU can decompress with...
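A worked example with made-up numbers (the 2:1 ratio is an assumption, not taken from the benchmarks above): if the data compresses 2:1 with LZ4 and the CPU decompresses faster than the disk can deliver, the logical throughput exceeds the link speed:
```bash
# Hypothetical: 550 MB/s SATA link, data that LZ4 compresses 2:1.
# The disk ships compressed blocks; the CPU expands them on the fly.
echo "550 * 2" | bc   # -> 1100 MB/s of logical (uncompressed) data
```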
No experience here with that specific hardware. But as a rule of thumb: if the hardware supports the latest Ubuntu LTS and Debian, it should also run PVE, because PVE is based on Debian but uses the Ubuntu LTS kernel. So you might want to have a look if that server's hardware is on the Ubuntu...
One problem might be missing RAM defragmentation when you never swap data out of RAM to disk and later back in reordered/compacted form. You might end up with "free" RAM that can't be used because it is too fragmented, and then the OOM killer kicks in while there is still lots of free RAM left that can't...
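A general Linux check (not from the post above) to see how fragmented free memory is:
```bash
# Columns are free blocks per order (4K, 8K, ..., 4M pages).
# Lots of low-order blocks but no high-order ones = fragmented free RAM.
cat /proc/buddyinfo
```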
Is there dedicated storage for the virtual disks (so they weren't stored on the failed system disk)? Did you create a backup of your "/etc/pve/qemu-server" folder on your OS disk's root filesystem? Or is that PVE server part of a cluster? Or do you have some VZDump/PBS backups of those VMs? Because the VMs...
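For reference, a minimal sketch of such a config backup (the archive path is made up):
```bash
# Archive the VM config files somewhere off the system disk so they
# survive a system-disk failure:
tar czf /mnt/backup/pve-vm-configs.tar.gz /etc/pve/qemu-server
```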
It's not always about whether your data is still there after 10 years. Another question is whether the data is still the same after all those years. Without ECC and some checksumming filesystem like ZFS you will never know how much of your data got corrupted over all those years. Let's say you stored 100...
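A rough back-of-the-envelope with assumed numbers (the data size and error rate below are hypothetical, not figures from the post):
```bash
# Assume 10 TB stored and a silent error rate of 1 flipped bit per
# 10^14 bits per year. Expected undetected flips over 10 years:
echo "10 * 8 * 10^12 * 10 / 10^14" | bc   # -> 8 bits
```
Without checksums, none of those flips would ever be noticed, let alone repaired.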
No idea. I personally would have coded it with such an option, and also with a MAYBE useful default blocksize preselected depending on the pool layout and number of disks (like TrueNAS does).
Only possible by destroying and recreating that zvol (so for example backup+restore of a VM).
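To see what a zvol was created with (the dataset name below is hypothetical):
```bash
# volblocksize is fixed at creation time and read-only afterwards,
# hence the destroy-and-recreate (e.g. backup + restore) requirement:
zfs get volblocksize rpool/data/vm-100-disk-0
```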
4K is...
See the first post where I quoted @wbumiller (one of the PVE staff members):
But I never got an answer whether this only happens to swap on virtual disks or to swap on physical disks as well.
And you should use something like this to tell PVE that "/mnt/backup" is a mountpoint, so it won't try to write backups to the local storage in case that folder isn't mounted: pvesm set <IdOfYourDirectoryStorage> --is_mountpoint /mnt/backup
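For example, with a hypothetical directory storage ID of "backup-dir":
```bash
# PVE will then mark this storage as unavailable whenever /mnt/backup
# isn't mounted, instead of silently filling the root filesystem:
pvesm set backup-dir --is_mountpoint /mnt/backup
```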
Maybe your root filesystem is full as well? This could cause the webUI to fail because of a read-only filesystem. You could check that via CLI with "df -h" and look for the line with the single "/" to see if it is at 100% utilization.
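A quick sketch of that check:
```bash
df -h /   # look at the "Use%" column of the root filesystem
# 100% means full: free up space (old logs, stray backups) so
# services can write again.
```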
You can bring any folder from the host into different LXCs via bind mounts. Both LXCs could then access the same folder at the same time. It only makes sense if you don't have a cluster, though, since otherwise you pick up annoying dependencies. Also, permissions can...
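A minimal sketch of such a bind mount (container IDs and paths are made up):
```bash
# Mount the host folder /tank/shared at /mnt/shared in containers 101 and 102:
pct set 101 -mp0 /tank/shared,mp=/mnt/shared
pct set 102 -mp0 /tank/shared,mp=/mnt/shared
```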
Not so sure about the audio quality. The HDMI audio output of my PCIe passed-through GPU got crackling noises while the image quality is perfectly fine. I guess you simply have to test it with your hardware.
As a workaround I have now put a "@reboot root /usr/sbin/qm start 10001 --timeout 120" into /etc/crontab and set the "Start on Boot Delay" in the node options to 240 seconds. With that, the OPNsense VM now starts roughly 2 minutes before my other VMs that need network...
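The crontab line from that workaround (VMID 10001 is this setup's OPNsense VM):
```bash
# /etc/crontab: start the firewall VM right at boot, ahead of the
# VMs held back by the node's "Start on Boot Delay":
@reboot root /usr/sbin/qm start 10001 --timeout 120
```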
ZFS just isn't simple software RAID; it's software RAID + volume management + filesystem in one. But once you have really worked your way into it and know how to appreciate and use all its features, you won't want plain software RAID anymore. ;)
Yes, it would also be totally fine for me if I could edit that into the config file manually. It would just be nice if there were an option like with the "qm start" command (because apparently that doesn't exist yet?).
Yes
That's one of the reasons why I prefer to have dedicated system disks (or at least partitions) and dedicated VM storage disks. That way, when backing up the PVE system, you don't have to back up all those VMs you already have backups of.
So one option with single-disk nodes would be to tell the...
If you install the QEMU guest agent in the VM, snapshot-mode backups are consistent too, because the PVE host then tells the guest OS to flush the write cache before the backup. Software running inside the VM still won't be in a defined state, though, since the backup...
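To verify from the host that the agent is actually reachable (VMID 100 is hypothetical):
```bash
# "ping" just checks the QEMU guest agent channel; PVE uses the same
# channel for the fsfreeze-freeze/-thaw calls around snapshot backups.
qm agent 100 ping
```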