Hello,
I’m looking for feedback from the community regarding an architecture decision I’m currently considering after encountering several issues with Windows Server 2025.
Context
The server was initially installed with Windows Server 2025...
The main question is not "will this work?", which we cannot judge with certainty, as too many factors are involved.
The real question is why your statement was:
Technically speaking, your idea might work. With the emphasis on the word "might"...
I have done what you are describing before, on two separate builds (office machines for my partner and me), and while they were successful in having a NAS as well as two workstations "built" within a single computer tower, there were some...
Hmm, quick question: have you enabled backup fleecing for the PBS backups, possibly onto the internal NVMe? That could speed up the PBS backups considerably. (https://pve.proxmox.com/wiki/Backup_and_Restore)
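For example, something like this for a one-off run, where "local-nvme" is just a placeholder for a storage living on that internal NVMe (fleecing needs PVE 8.2 or newer, if I remember correctly):

vzdump 123 --fleecing enabled=1,storage=local-nvme

The same option can also be set per backup job under the job's advanced options.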
In a Node Down scenario, you are left with two nodes that can still communicate directly with each other, since this is a mesh network. Additional load only becomes a factor in a Link Down scenario. In that case, depending on the topology and...
Yes, but that might only happen with a routed setup, not in a broadcast setup, and should be tested before using it in production.
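For context, this is roughly what one leg of the routed variant looks like in /etc/network/interfaces on one node; interface names and addresses are made up, see https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server for the full recipe (the broadcast variant bonds the mesh NICs in broadcast mode instead):

auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32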
Anyway, FS switches or MikroTik (for a low budget), or any other switches you have configuration experience with, might...
We understand and can accept the risk of a 3-node cluster and losing a single node. Our current cluster is 2 nodes, and can function on a single node, so the risk profile is about the same. I'd like to run 5 nodes to have more fault tolerance...
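For reference on the quorum math: corosync needs floor(N/2)+1 votes, so a 5-node cluster stays quorate with 2 nodes down (3 of 5 votes), a 3-node cluster tolerates only 1 (2 of 3), and a plain 2-node cluster tolerates none unless the expected votes / two_node settings are adjusted, which is presumably how your current cluster keeps running on a single node.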
IMHO, some of the best and cheapest options may be the Intel 82599EN based NICs, like the X520-DA1 and X520-DA2. Being so old, they usually have great driver support for almost any OS (FreeBSD, Linux and Windows alike, although for Windows 11...
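For what it's worth, on Linux/Proxmox those cards are handled by the ixgbe driver; a quick way to check what you have and which driver is bound (the bus address is just an example):

lspci -nn | grep -i 82599
lspci -k -s 01:00.0       # should show "Kernel driver in use: ixgbe" once the card is picked up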
Hello, exact same issue here, but even for 120 TB
update VM 123: -scsi1 Truenas-HDD:120000,format=qcow2,cache=none
Rounding up size to full physical extent <117.21 TiB
Logical volume "vm-123-disk-1.qcow2" created.
Formatting...
well, proxmox-boot-tool here thinks it's driving systemd-boot, and so does the UEFI setup. You could re-enable it (install the tooling, then make sure it boots the proper kernel). But I'd suggest going back to GRUB if you don't need...
yeah, re-init with grub using proxmox-boot-tool, as I suggested, that should do it. Or switch to UEFI entry 0007 (cue the James Bond theme..), possibly with efibootmgr, but more likely in your UEFI/BIOS setup.
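Roughly what I mean, with /dev/sdX2 standing in for your ESP partition (adjust to your layout; the optional "grub" argument needs a reasonably recent proxmox-boot-tool):

proxmox-boot-tool status                # shows which ESPs are configured and in which mode
proxmox-boot-tool init /dev/sdX2 grub   # re-initialise the ESP, forcing GRUB mode
proxmox-boot-tool refresh               # regenerate the boot entries / kernel list

Or, to just try the existing 0007 entry once without touching anything:

efibootmgr -n 0007                      # BootNext=0007 for the next reboot only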
Also hit this: I misconfigured 4 disks as top-level vdevs alongside a raidz vdev, and after upgrading to PVE 9 with zfs-2.3.4-pve1, zpool remove still responds that it is not supported.
invalid config; all top-level vdevs must have the same sector size and not be raidz...
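For anyone searching for this later, the situation looks roughly like this (pool and disk names are made up):

zpool status tank         # shows raidz1-0 plus the stray single-disk top-level vdevs
zpool remove tank sdb     # fails with the "invalid config" error above

As far as I know, device removal simply refuses to run while a raidz top-level vdev is present, so the only clean way out is to rebuild the pool (zfs send/receive to a new pool, or restore from backup); plain zpool remove won't do it.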
As ZFSPoolPlugin.pm gained some additional lines during the major update from Proxmox 8 to 9, the most current patch is as follows (only the line numbers have changed):
820,826c820
< my $cmd = ['zfs', 'send'];
< my $encrypted =...
I’m curious about how you’re creating those kernels! Google seems to be having trouble finding “6.18.6-pbk,” so it looks like you might be using a custom build.
Proxmox typically uses a modified and patched version of Ubuntu kernels. I have...
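For comparison, this is roughly how I'd expect such a kernel to be produced; the repo is the usual Proxmox kernel packaging, but I haven't verified the exact build steps for the current branch, so treat it as a sketch:

uname -r                                             # the stock kernel reports something like 6.x.y-z-pve
git clone git://git.proxmox.com/git/pve-kernel.git   # Ubuntu kernel plus the Proxmox patches and packaging
cd pve-kernel
make                                                 # should produce the kernel .deb packages, given the build deps

A "-pbk" suffix would then presumably come from changing the version/naming in that packaging, or from a completely separate out-of-tree build.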
understandable, the only mitigation I can currently think of is using a hook script, but that won't catch every case in the guest lifecycle.
I'll look into creating patches that resolve this problem by making the guest wait for a firewall...
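In the meantime, here is a minimal sketch of the hook-script idea, assuming that waiting for the pve-firewall service is an acceptable proxy for "firewall is ready" (that assumption, the file name and the VMID are made up):

#!/bin/bash
# Proxmox calls guest hookscripts as: <script> <vmid> <phase>
vmid="$1"
phase="$2"

if [ "$phase" = "pre-start" ]; then
    # wait up to 30s for pve-firewall; a non-zero exit aborts the guest start
    for _ in $(seq 1 30); do
        systemctl is-active --quiet pve-firewall && exit 0
        sleep 1
    done
    echo "pve-firewall not active, refusing to start VM $vmid" >&2
    exit 1
fi
exit 0

Registered with something like qm set 123 --hookscript local:snippets/wait-for-firewall.sh (the storage needs the snippets content type enabled). As said above, this still won't cover every case in the guest lifecycle.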