Actual error messages would be useful - "pool" here can refer to either the ZFS pool or the storage pool in PVE.
No. Each pool must have a different name on the same host; the same goes for PVE storages.
Of course, no one can test every variation. But there is a middle ground between exhaustive testing and echoing AI outputs without the necessary critical oversight.
This is my last OT post here as this discussion is not helping the OP.
On a default PVE 9 installation with 32 GiB of RAM, the ARC is much lower than that. So the example is missing clear directions and could be phrased much better to be helpful.
Would you be so kind as to not post untested AI-generated answers?
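For context: recent PVE installers cap the ZFS ARC with a module option rather than letting it grow to half of RAM. A minimal sketch for checking and adjusting that cap (the 3 GiB value below is only an illustration, not a recommendation):

```shell
# Show the current ARC size and its configured ceiling, in bytes
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Example: cap the ARC at 3 GiB (3 * 1024^3 bytes) via a modprobe option
echo "options zfs zfs_arc_max=3221225472" > /etc/modprobe.d/zfs.conf

# On a ZFS root, refresh the initramfs so the limit applies at boot
update-initramfs -u
```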
Parts like this
are simply not correct. This would potentially increase the memory used for the ARC.
Hi,
anything unusual in the logs (dmesg -T or journalctl -xe) before or at the time of the crash?
You could connect to the qemu monitor with qm monitor <vmid> (or via GUI) and run commands like info status and info block to retrieve more details...
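A rough sketch of that debugging session (the VMID 100 and the one-hour window are just placeholders):

```shell
# Kernel and journal messages around the crash window
dmesg -T | tail -n 50
journalctl -xe --since "-1 hour"

# Attach to the QEMU monitor of VM 100, then type the info commands
qm monitor 100
# qm> info status
# qm> info block
```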
Great. I'll try that out and then report back.
Many thanks.
I did see the warning in OPNsense under the "global options": "Warning: This option is also saved when the DHCP server is disabled. Only the machines that...
That seems to have been it. :) I can now reach the RustDesk server immediately -- without first having to do anything in the console to generate traffic. Great!
No, I'm suggesting one approach.
ZFS is the way to go for mirrored boot disks, but it's slower because of the overhead of checksumming and integrity checks, even more so with non-datacenter drives.
ZFS is enterprise-oriented.
I feel so inadequate....
For years, I have just run a post-install script that sits on an NFS mount. Yes, I could have done more automation, but in the end it seemed the better part of valor to just type mount followed by /path/to/mount/postinstall.sh...
Replacement HDD gets here tomorrow. In the meantime I'd like to spare the good mirrored drive from working overtime resilvering onto the failing drive. I also don't understand how to swap the new drive with the old drive, since I don't have any open...
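For reference, the usual in-place swap looks roughly like this; the pool name rpool and the by-id device names are placeholders for your setup:

```shell
# Take the failing disk out of service so the mirror stops resilvering to it
zpool offline rpool /dev/disk/by-id/ata-OLD_DISK

# After physically swapping the drives (reusing the same port is fine),
# tell ZFS to rebuild onto the new device
zpool replace rpool /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK

# Watch resilver progress
zpool status -v rpool
```

Note that on a PVE boot mirror the new disk also needs its partition layout and bootloader restored (proxmox-boot-tool handles the ESP side); check the PVE docs for the exact steps for your layout.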
This has nothing to do with the network between guests.
Triple-check within your guests.
No third-party app within the guest?
Do you have multiple Windows accounts? Windows 11 locks out an account after multiple failed attempts.
edit: do you use the hostname to connect? if...
Reviewing previous posts to see if there was anything I overlooked.
None of the VMs have had static IP addresses assigned to them before or after the subnet change; they all use dynamic IPs. The issues are all inside one host/node. Physical...
I am on the latest Proxmox 9.1.5 with Ceph 19.2.3 Squid. These warnings are bothering me a lot, and I am scared to touch anything related to Ceph.
systemd-sysv-generator: SysV service '/etc/init.d/ceph' lacks a native systemd unit file…
Please...
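If that generator message is the only symptom, it often just means an old /etc/init.d/ceph script was left behind by an earlier Ceph package while the services actually run from native systemd units. A cautious sketch for confirming that before touching anything:

```shell
# Confirm the native systemd units are what actually runs Ceph
systemctl status ceph.target
systemctl list-units 'ceph*'

# Check whether any installed package still owns the legacy init script
dpkg -S /etc/init.d/ceph

# Only if nothing owns it and the units above are healthy, the leftover
# script can be removed to silence the warning:
# rm /etc/init.d/ceph && systemctl daemon-reload
```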