That's "just" a counter. What content is inside the log?
man smartctl
...
smartctl -l error /dev/nvme1
You may also want to run a (long) self-test; see that man page.
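A quick sketch of how that could look, assuming a reasonably recent smartmontools with NVMe self-test support:
smartctl -t long /dev/nvme1      # start the extended self-test in the background
smartctl -l selftest /dev/nvme1  # check the result once the test has finished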
Of course there may be sources of trouble outside of that NVMe, like on...
Hello, let me explain:
1. No, ZFS pools (like local-zfs or rpool) are local storage, tied to the disks physically attached to a node. You cannot directly share a ZFS pool across Proxmox nodes unless you're using a networked ZFS (like over...
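To illustrate the "local" part, a zfspool entry in /etc/pve/storage.cfg is normally restricted to the node(s) that actually have the pool; the node name below is just an example:
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
        nodes pve1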
Windows has very specific driver support requirements. You either need to provide the VirtIO drivers during the install phase, or install on a controller type that Windows supports out of the box.
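A hedged CLI example of both routes (VMID 100 is a placeholder; adjust ISO name and storage to what you actually have):
qm set 100 --scsihw virtio-scsi-single                   # VirtIO SCSI, needs the virtio-win drivers during setup
qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom   # attach the driver ISO as a second CD-ROM
qm set 100 --sata0 local-lvm:32                          # alternative: a SATA disk, which the Windows installer handles out of the box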
This article/chapter may be helpful...
There are also best-practices articles in the wiki, for example:
https://pve.proxmox.com/wiki/Windows_11_guest_best_practices
https://pve.proxmox.com/wiki/Windows_2025_guest_best_practices
We'd need to see the whole "no boot device" message and the...
Never tried it but based on this page, I guess this ought to work:
sun *-1..7,15..21 02:00
It should run the backup on the 1st and the 3rd Sunday of each month at 2 AM.
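For instance, in a month whose 1st falls on a Wednesday, the Sundays are the 5th, 12th, 19th and 26th; the day ranges 1..7 and 15..21 then only match the 5th and the 19th, i.e. the first and third Sunday.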
Hi badsectorlabs, DaveFisher,
I could reproduce the issue. It's not directly caused by the code in qemu-server v8.3.14 (which IS an enhancement); rather, we missed starting the services `qmeventd.service` and `pve-query-machine-capabilities.service` on install...
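If you are affected, a manual workaround sketch until a fixed package is in place (assuming the units exist but simply were not started):
systemctl status qmeventd.service pve-query-machine-capabilities.service
systemctl start qmeventd.service pve-query-machine-capabilities.service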
Your server is named prox2, but the name in /etc/hosts is prox. Fix that (nano /etc/hosts), reboot, and check again whether some of the errors are gone now.
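For reference, a minimal /etc/hosts sketch; the IP address and domain are made up, use your node's real ones:
127.0.0.1 localhost.localdomain localhost
192.168.1.10 prox2.example.local prox2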
If possible, use code blocks instead of pictures of text.
You are using a copy-on-write filesystem, so you need some free space on the filesystem even to delete things, and right now there isn't enough.
cat /sys/module/zfs/parameters/spa_slop_shift   # probably shows 5 or 3
echo 2 > /sys/module/zfs/parameters/spa_slop_shift
Remove unneeded qcow2...
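Once enough space has been freed, it is probably wise to put the parameter back to the value you saw before (the OpenZFS default is 5):
echo 5 > /sys/module/zfs/parameters/spa_slop_shift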
Removable datastores are officially supported since Proxmox Backup Server 3.3. Instead of manually toggling maintenance mode or suppressing logs, you can define the datastore as removable and let PBS handle it. Full documentation: PBS Backup...
Don't always be so fundamentalist... ;-)
...says the man who runs five independent PBS instances. (One primary with a short-term memory and one secondary with a long retention time. And then three more - only because enough old...
Well, because you cannot verify whether the data has changed without you noticing.
Most likely you are not using ZFS and have no idea what bit rot means.
The next thing is: just one internal storage location for the data is no...
Maybe. I've been there when I changed the members of an existing cluster.
Your nodes have names. You did not tell us anything about your cluster, so let's assume there are three nodes named pveh / pvei / pvej. Now this must run without any...
There are certainly quite a few acceptable answers to this question.
I use (in my homelab) "Zamba": https://github.com/bashclub/zamba-lxc-toolbox. With it you get a mature, AD-compatible file server that, for the Windows users...
Well, without plenty of ports - I mean 8x SATA3 and 2x NVMe PCIe 4.0 x4 - that's not a server.
I always want to be able to rely on my systems and my data, so I go for a self-build with good hardware.
ZFS as filesystem and volume manager...
It does not hurt to link to the other (German) post: https://forum.proxmox.com/threads/pve-installation-auf-neuem-sys-rtl8126-als-nic-keine-treiber-somit-kein-netz.168380/
That would be a little bit surprising, but everything is possible...
On my systems I can test a connection, showing successful connectivity:
$ curl http://51.91.38.34
<html>
<head><title>Index of /</title></head>
<body>
<h1>Index of...
If you're running a server, you should have at least ~1 GB of swap, and resize it as needed. You left the kernel no headroom, so it killed a RAM-hungry process instead of panicking the whole server.
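A sketch for adding a 1 GiB swap file, assuming the root filesystem is ext4/xfs (swap files on ZFS are not recommended):
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # make it permanent
swapon --show                                     # verify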
I'd also suggest maxing out the RAM that your...