As soon as OPNsense is running, you have two DHCP servers running at the same time.
No wonder the IPs are then being handed out inconsistently.
Once OPNsense is running, you have to configure your network there: network address, DHCP range, etc.
And right away / in the same step, in the...
Both EXT4 and ZFS have reserves you can reduce in case of emergency. You can also let your system notify you when a threshold is reached.
Creating a hidden 1-10 GB or so file that you can delete when you run out of space is also a simple and...
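To make that concrete, here is a rough sketch, assuming an ext4 root on /dev/sda1 and a ZFS pool called rpool (adjust the names to your setup):

    # ext4: lower the reserved-blocks percentage from the default 5% to e.g. 1%
    tune2fs -m 1 /dev/sda1

    # ZFS: park some space in a dedicated dataset, release it in an emergency
    zfs create -o refreservation=10G rpool/emergency-reserve
    zfs set refreservation=none rpool/emergency-reserve   # run this when the pool is full

    # ext4: ballast file you can simply delete when space runs out
    fallocate -l 5G /root/.ballast

Note that on ZFS with compression enabled, a zero-filled ballast file compresses away to almost nothing, which is why the reservation trick is the safer variant there.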
One possible direction could be external key sources for PBS storage entries.
Instead of storing the actual PBS encryption key in `/etc/pve`, PVE could fetch it when needed from some configured external source, for example SFTP or an API...
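Purely as a hypothetical sketch of how this could be approximated today (nothing like it exists as a built-in option; the key server, path and storage name below are made up, and I'm assuming the key otherwise lives under /etc/pve/priv/storage/<storeid>.enc): a vzdump hook script could fetch the key right before the job and remove it afterwards.

    #!/bin/sh
    # hypothetical vzdump hook script: keep the PBS encryption key on the node
    # only for the duration of the backup job
    phase="$1"
    case "$phase" in
        job-start)
            # keyserver.example.com, /keys/pbs.enc and "mypbs" are placeholders
            scp root@keyserver.example.com:/keys/pbs.enc /etc/pve/priv/storage/mypbs.enc
            ;;
        job-end|job-abort)
            rm -f /etc/pve/priv/storage/mypbs.enc
            ;;
    esac
    exit 0

Hooked in via the script option in /etc/vzdump.conf. Of course that only keeps the key off the node between jobs; while a backup runs it is present anyway, which is also what your proposal implies.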
@fabian thank you for the answer. I agree - once I looked at the whole setup, it made sense. What surprised me was the practical effect of adding PBS backups - I simply did not realize that unattended backups would weaken/bypass my previous...
The point is: you cannot know that.
Of course you are free to use any available technology you want - and a lot of installations are using hardware Raid (edit: ... without a checksumming filesystem). No fight here; if it works for you/them...
No, it depends on the single NVMe (PCIe 3.0/4.0) with random 4k read/write access.
On consumer hardware this will maybe be 50 MB/s. That is only about 400 Mbit/s on the wire,
so a NIC with 1 Gbit/s or 2.5 Gbit/s is enough.
Glad it's working for you.
A little preview: For Sanoid / Samba users, I started another project as a modern Sanoid replacement. It's still rough and not feature complete.
- policies live on the dataset (like zpbs-backup)
- independent - no...
Hi Waltar,
Thank you very much for your quick response. I will try RAID 1; if that doesn't work, I'll start planning some kind of storage upgrade for the PVE hosts.
Best regards
Claude
I have mirrored ZFS SSDs in my Proxmox nodes. What I did was:
- disconnect one SSD at a time using ZFS
- attach the SSD to a Windows VM
- use Samsung Magician in the VM to upgrade the SSD
- disconnect the SSD from the VM
- reattach the SSD to the ZFS mirror
This worked...
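For reference, the ZFS side of that could look roughly like this - pool and device names are just examples, check yours with zpool status first:

    zpool status rpool                                    # identify the mirror members
    zpool offline rpool ata-Samsung_SSD_870_EVO_1TB_XYZ   # take the first SSD out of the mirror
    # ...pass the disk to the Windows VM, run Samsung Magician, detach it from the VM again...
    zpool online rpool ata-Samsung_SSD_870_EVO_1TB_XYZ    # bring it back; a short resilver runs
    zpool status rpool                                     # wait for the resilver to finish before touching the second SSD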
Unfortunately not. I had an extra NIC on the motherboard, so I put that on passthrough for the VM, which hides the problem.
The problem NIC is an AQtion AQC113CS.
Would love an actual fix.
That is the greylisting feature of PMG - see the documentation:
https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#pmgconfig_mailproxy_greylisting
The mail should then simply be accepted on the next delivery attempt.
With some...
RAID set auto-scrubs would not match data vs. parity if you got corruption on a disk - so the controller would know, and you would be able to read the events and logs - but it would only be usefully auto-repairable in a RAID 6 config, as in RAID 5 the parity would...
Hi!
Thanks for your work. I FULLY agree with you!
For about 20 years, all my customers’ OSs have run on top of HPE RAID controller logical drives, and I have NEVER LOST a single bit/file/block.
About 5 years ago I started using Proxmox (new...
The DE2000H is an old, low-cost external HA RAID storage system (with very old NetApp E-Series controllers; tech specs: max 3 GB/s sequential read, max 100k read + 35k write IOPS), and you are running it in RAID 5 as mentioned. Assuming you run iSCSI in 10 Gbit and not 1 Gbit mode...
Hi everyone,
I’m seeing strange CPU scheduling/load balancing behavior with Windows Server 2022 Terminal Server VMs on Proxmox VE 9.1.
Environment:
Proxmox VE 9.1
AMD EPYC 9375F hosts
Windows Server 2022 RDS/Terminal Server VMs
Multiple VMs...