I just got it working this week, and after hitting 401 errors myself, I found out that in some tutorials the value of the macro '{$PVE.TOKEN.ID}' was wrong.
It has to be 'USER@REALM!TOKENID' (as mentioned on the template page by Zabbix), and some...
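For anyone hitting the same 401s, here is roughly how the pieces fit together (the user, realm and token names below are made up, adjust to your setup):

# on the PVE host: create an API token for a monitoring user
pveum user token add monitoring@pam zabbix --privsep 0
# the macro in Zabbix then combines all three parts:
#   {$PVE.TOKEN.ID} = monitoring@pam!zabbix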
Why is everyone using the passive voice when recounting that "consumer drives are discouraged"? By whom? And under what circumstances? There is nothing INHERENTLY wrong with using non-enterprise drives for ZFS pools, as long as you understand the...
Just read and follow both guides side by side. That worked flawlessly, at least on my little mini PC. :-)
https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
https://pbs.proxmox.com/wiki/Upgrade_from_3_to_4
Huh???
You...
Did you test https://forum.proxmox.com/threads/proxmox-datacenter-manager-0-9-beta-released.171742/ ?
While it may copy more data than technically required, it is made for multi-cluster management...
That would be the job of a router or a firewall.
Networks should always be kept separate and only talk to each other through firewall rules.
Create a VM with OpnSense ;-)
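As a rough sketch from the CLI (VM ID, bridge names and ISO file are placeholders for whatever your setup uses):

# one NIC on the WAN bridge, one on the LAN bridge
qm create 150 --name opnsense --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
  --scsi0 local-lvm:32 --cdrom local:iso/OPNsense-installer.iso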
You can also use crontab -e on your Proxmox host and add this as a daily job:
# VM 200 (daily reboot at 07:30)
30 7 * * * /usr/sbin/qm reboot 200
200 is the ID of your VM.
If the "NonRAID" is HBA mode then yes.
I would go for ZFS mirror (like RAID-10), not RAID-Z1 (like RAID-5) because that will perform way better.
Are this enterprise grade ssd's? If you use consumer grade ssd's ZFS is not a good idea, then you...
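For illustration, creating such a striped-mirror pool looks roughly like this (pool name and disk paths are placeholders):

# two mirror vdevs striped together - the RAID-10-like layout
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4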
@UdoB You are 100% correct and thanks for calling it out! Tables without units are incomplete. I've updated the table headers to clarify appropriate units.
Cheers
Blockbridge: Ultra-low latency all-NVMe shared storage for Proxmox -...
"shutdown" ;-)
Actually, "systemd" is the current tool for this these days - see systemctl list-timers. But classic cron often feels simpler.
If you power on the PBS "at some point" during the day and roughly know when...
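As a sketch of the systemd route: you don't even have to write unit files, a transient timer works too (reusing the qm reboot example from above; note that transient units do not survive a reboot):

systemd-run --on-calendar='*-*-* 07:30:00' /usr/sbin/qm reboot 200
# check it afterwards with:
systemctl list-timers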
I am sure some people do.
Three nodes is the absolute minimum, as the official documentation says. While I do not run Ceph currently, I did use it last year in my homelab; some findings...
Each K and M chunk must land on a different host, because you want your fault domain to be host (the default), not disk: i.e., if the fault domain were disk, you could end up with too many K or M chunks (or both!) for some PGs on the same host, and if that host goes down...
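For example, pinning the fault domain to host when creating the erasure-code profile looks roughly like this (profile name and k/m values are just examples):

ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
# each of the 6 chunks (4 K + 2 M) is then placed on a different host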
The best practice is to have a dedicated HBA controller for your storage devices (e.g. a SATA HBA or a RAID controller in HBA/IT mode) and pass it through via PCI...
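A minimal sketch of the passthrough step itself (VM ID and PCI address are placeholders - look the address up with lspci first):

# find the controller's PCI address
lspci | grep -i -e sata -e sas -e raid
# then hand it to the VM (here: VM 100, device 0000:01:00.0)
qm set 100 --hostpci0 0000:01:00.0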
For reference: @UdoB explained why RAIDZ is a bad idea for VM storage here:
https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
High peak speeds on consumer NVMe look great in benchmarks but don’t matter in real workloads. Enterprise SSDs are built for consistency, with stable performance even under sustained load, while desktop drives quickly drop off once their cache is...
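If you want to see that drop-off yourself, something like this fio run will show it (file path and size are placeholders; don't point it at a disk that holds data you care about):

# ten minutes of sustained 1M sequential writes, bypassing the page cache
fio --name=sustained-write --filename=/tank/fio-test --size=50G \
    --rw=write --bs=1M --direct=1 --time_based --runtime=600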
I really appreciate your posts about storage testing, even though it is one (or two) levels above my world.
But please, a bare numerical value without a known unit is not just problematic, it is incomplete.
Hi @pmvemf,
Following up on this, I asked our performance team to review the kernel iSCSI stack (what you referred to as the "Proxmox native initiator," which is in fact the standard Linux initiator).
Our testing with PVE9 showed no functional...
Well, PBS will need less storage space and can be leveraged for ransomware protection:
https://pbs.proxmox.com/docs/storage.html#ransomware-protection-recovery
It also allows live-restore; if I recall correctly, that's not possible the other way...
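For the live-restore part, roughly from memory (storage name, snapshot path and VM ID below are placeholders, and the exact flag may depend on your PVE version - check man qmrestore):

# start the VM directly from the PBS snapshot while the restore runs in the background
qmrestore pbs:backup/vm/200/2024-01-01T00:00:00Z 200 --live-restore 1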