I agree, especially since TrueNAS not only supports the Apps from the app store but also allows setting up containers with docker-compose files. Most applications in the self-hosted sphere already provide ones for copy/paste. Or, like you said, directly using Debian or another Linux distribution...
Is the internal flash an SD (MMC) card? Then you won't be able to install Proxmox VE with the regular ISO image; see https://forum.proxmox.com/threads/unable-to-get-device-for-partition-1-on-device-dev-mmcblk0.42348/
In theory you could first install Debian stable and afterwards Proxmox VE (see...
Thanks a ton for such an amount of information. I'll go into it and post what I find.
My interest in ZFS comes from threads like the ones I've read around here, but it is true that I have to find how to make it work.
I would rethink this part. Unlike older ESX versions or pfSense/OPNsense, Proxmox VE is NOT designed to be run from a flash drive in a kind of read-only mode. The operating system writes a lot to the operating system drive (logging data, the configuration database of the Proxmox cluster file system...
Yes, if only for things like snapshots. Those are possible with LVM too, but they cost performance there if you keep them for too long. Plus the points Udo mentioned in his linked post (compression, scalability, etc.)...
The log files will be rotated (meaning that old files will be removed) after some time, so your disk normally won't run out of space. This doesn't help with SSD wear-out though; for that, you can go with these options:
Get used enterprise SSDs with power-loss protection
Rotating HDDs shouldn't...
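If you want to track wear-out over time rather than eyeball the SMART output, something like the following sketch can extract it. The sample strings are made-up excerpts in the shape `smartctl -A` typically prints for a Samsung SATA SSD (`Wear_Leveling_Count`, normalized value counting down from 100) and for an NVMe drive (`Percentage Used`); other vendors may report wear under different attribute names.

```python
import re

# Hypothetical sample excerpts in the shape of `smartctl -A` output.
SAMPLE_SATA = (
    "177 Wear_Leveling_Count     0x0013   099   099   000    "
    "Pre-fail  Always       -       8\n"
)
SAMPLE_NVME = "Percentage Used:                    3%\n"

def wear_percent(smartctl_output: str):
    """Return estimated wear in percent, or None if no known attribute is found."""
    # NVMe SMART log reports wear directly as "Percentage Used".
    m = re.search(r"Percentage Used:\s+(\d+)%", smartctl_output)
    if m:
        return int(m.group(1))
    # Samsung SATA: the normalized VALUE of Wear_Leveling_Count starts at 100
    # and counts down, so wear is 100 minus that value.
    m = re.search(r"Wear_Leveling_Count\s+\S+\s+(\d+)", smartctl_output)
    if m:
        return 100 - int(m.group(1))
    return None

print(wear_percent(SAMPLE_SATA))  # → 1
print(wear_percent(SAMPLE_NVME))  # → 3
```

Logging that number once a week gives you a trend line instead of a single scary snapshot.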
What is the point?
Install PVE on the HDD, because no speed is required for PVE itself; rpool will be the slow zpool.
A second zpool for the whole SSD.
Balancing data between the two disks is done by moving vDisks between the two storages.
(don't forget ZFS can quickly wear out consumer-grade SSDs...)
Would love to, but there's only one NIC on the mainboard. It's a homelab setup using consumer hardware.
There is also nothing indicating the cause of the outage; journalctl on both nodes, ernie & bert, looks okay:
ernie (the remaining one):
The failed connect on bert after reboot was...
I had many problems related to VERY HIGH iowait on my Proxmox VE systems using a Crucial MX500, which is a consumer SSD (and one with very little DRAM, so once that fills up, performance drops like a rock).
Now I got 2 x Intel DC S3610 1.6 TB SSDs which should be very good for VM Storage or also...
Please note that there are several reports here that ZFS RAIDZ isn't good at providing performance. For performance with ZFS it's best to set up the devices as mirrors. For improving performance on HDDs it might be a good idea to set up two SSD partitions as a special device to improve the...
That's an interesting way to test it, but keep the following things in mind:
If there are any other IOPS going on in your system, they might affect how both caches behave.
The best way to ensure that the ARC is definitely cleared is to export your pool first and then unload the ZFS kernel...
This is - essentially - an intentional anti-feature, but it's much worse in a cluster scenario:
https://forum.proxmox.com/threads/proxmox-and-ssds.153914/#post-700255
ZFS is a filesystem that was never designed for SSDs; any copy-on-write filesystem will do poorly. I would use XFS on mdadm...
Proxmox can wear out consumer SSDs quickly (also depending on your VMs), but 1% per month will still last you eight years. Wear tends to increase more at the beginning because the pool is empty and you are writing new VMs to it. Give it some time before drawing conclusions.
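The "eight years" figure above is just simple extrapolation, sketched here, assuming the SMART wear rate stays linear (which, as noted, it usually doesn't early on):

```python
# Back-of-the-envelope check of the "1% per month lasts eight years" claim.
wear_per_month = 1.0                        # percent per month, from SMART
months_to_100 = 100.0 / wear_per_month      # months until 100% wearout
years = months_to_100 / 12
print(f"~{years:.1f} years")                # → ~8.3 years
```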
Yes, it has high write...
Interesting side note: I have a new Proxmox box that I configured with ZFS mirrors using new SSDs (consumer-class Samsung 870 EVO) and, after maybe one month of use, SMART reports 1% wearout. That seems soon for any wearout.
I've read that ZFS writes a lot to disk - would use of Debian...
Hello,
I currently have Proxmox running my home servers but have been doing a bit of storage maintenance.
I upgraded my VM (ZFS mirrored pool) storage to 2 x consumer SSDs last year due to some crazy IO delays I was facing at the time, and for a while it worked well enough, but recently they...
because they use the HW cache (that's why the disk's own cache is disabled), plus many spinning disks and/or SSDs provided/recommended by the vendor, which are datacenter SSD drives.
Consumer SSD drives like your Samsung 860 aren't designed for any HW RAID; even if the cache is enabled, the missing TRIM...
You might want to have a look at this thread here:
https://forum.proxmox.com/threads/2-node-cluster-with-the-the-least-amount-of-clusterization-how.140434/#post-628788
Or just use a cache vdev, if you have enough RAM in that system.
See above.
Have a look at 2024 consumer NVMes, find a...
Hi everyone,
Happy new year :)
I have begun to see a disturbing trend in both my Proxmox VE nodes, that the M2 disks are wearing out rather fast.
Both nodes are identical in terms of hardware and configuration.
6.2.16-12-pve
2 x Samsung SSD 980 Pro 2TB (Only one in use on each node for...
I wouldn't say exorbitant, but given that it is normal for Proxmox VE to write 30 GB/day when idle (as I have read on the forum, and also the reason why consumer-grade disks are not recommended), and possibly having just one SSD for Proxmox with no redundancy, it is best to minimize the wear on...
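To put that 30 GB/day idle figure in perspective, here is a rough estimate sketch. The 1200 TBW endurance rating is an assumption (a common spec for a 2 TB consumer NVMe), and the write-amplification factor is a made-up placeholder for ZFS/cluster overhead; the point is only how the numbers scale.

```python
# Rough SSD lifetime estimate from the forum's ~30 GB/day idle write figure.
idle_gb_per_day = 30          # from the forum discussion
write_amplification = 5       # assumed multiplier for ZFS/cluster overhead
tbw_rating_tb = 1200          # assumed endurance rating (TB written)

tb_per_year = idle_gb_per_day * write_amplification * 365 / 1000
years_to_rated_tbw = tbw_rating_tb / tb_per_year
print(f"{tb_per_year:.1f} TB/year written, "
      f"~{years_to_rated_tbw:.0f} years to rated TBW")
```

Even under pessimistic amplification the idle writes alone don't kill a drive overnight; it's VM workloads on top of this baseline that do the real damage.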
Why not? None of my PVE machines boots from an NVMe. They all have "just" a SATA SSD (with whatever defaults PVE sets up during installation). Only my old HP DL 360 boots from a mirrored HW RAID of two 15k SAS disks (they simply came with the used box). On the...