What kind of SSD is it? The NUC only supports PCIe 3.0 x4; the SSD may be too new for it.
But as a test you could also install an M.2 SATA or a 2.5" SATA drive. ;-)
Actually, that does work (at least as of ZFS 2.3.3, which ships with Proxmox VE 9), see: https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration -> Extend RAIDZ-N
In your case it would therefore be (not tested) zpool attach datengrab raidz1-0...
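The RAIDZ expansion the wiki describes boils down to a single attach of one new disk to the existing raidz vdev. A hedged sketch under those assumptions (pool and vdev names are taken from the post, the disk path is a placeholder, and the echo keeps it a dry run):

```shell
# Sketch of RAIDZ expansion (requires ZFS >= 2.3): attach one new disk
# to the existing raidz1 vdev. Pool/vdev names are from the post; the
# disk path is a placeholder — replace it with your actual device.
pool=datengrab
vdev=raidz1-0
new_disk=/dev/disk/by-id/NEW-DISK-HERE   # placeholder, adjust

echo zpool attach "$pool" "$vdev" "$new_disk"   # drop 'echo' to apply
# Expansion progress afterwards shows up under: zpool status "$pool"
```

Note that expansion adds capacity but keeps the existing parity level (raidz1 stays raidz1).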
1. After a reboot the ip a command is not working; it was working earlier.
2. The interface is still visible after the reboot.
3. Unable to restart networking; it says "Permission denied", even though the interfaces file under /etc/network has root permissions.
4. I...
Environment
Proxmox VE cluster with 2 nodes (node94: 10.129.56.94, node107: 10.129.56.107)
Ceph cluster running on the Proxmox nodes (public_network: 10.129.56.0/24)
Proxmox SDN EVPN zone (madp) for VM networking
VMs are on the EVPN overlay...
Thanks for your in-depth explanation :).
Maybe to add yet another attack surface related to mount points: what about the case of a GPU shared via one or more of the following?
dev0: /dev/dri/card0,mode=0660
dev1...
By default all unprivileged containers map to the same host range (typically 100000:65536). The actual isolation relies on multiple kernel layers, not just UIDs:
1. Mount namespaces: each container has its own filesystem view. A process inside...
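The default mapping mentioned above can be checked arithmetically: with the common map 0 → 100000 (length 65536), an in-container UID simply appears on the host shifted by the base. A minimal sketch (the values mirror the typical Proxmox default, not a live query of a running container):

```shell
# Typical unprivileged idmap on Proxmox: container UID 0 -> host 100000,
# for a range of 65536 IDs.
base=100000
range=65536
container_uid=1000                   # an ordinary user inside the container
host_uid=$((base + container_uid))   # the UID the kernel writes on disk

echo "container uid $container_uid appears on the host as uid $host_uid"
echo "last mapped host uid: $((base + range - 1))"
# On a real host, the live mapping is readable from /proc/<container-pid>/uid_map.
```

This is also why files created by two different unprivileged containers with the default map end up owned by the same host UIDs, and why the namespace layers listed above carry the real isolation.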
FYI: the Proxmox stop-gap was applied: https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=6960b5e033fa911f9882751950df28a193255683
I'm waiting for 10.1.2-7+ to test the patch on my community-subscription nodes.
Edit: I've become one of those people...
I'm running Proxmox 9.1.5 on an Intel 10th-gen setup where the iGPU is split via GVT-g (mediated PCI device) between a permanently running TrueNAS VM and an occasionally running Windows 11 VM. Everything works in principle albeit the Win 11 being...
This just started biting me for the first time ever in the past week.
First event was in a newly provisioned virtual machine running LXQt on Debian 13. Okay, I haven't used that software combo on Proxmox before, not too surprising.
Second...
Now I'm baffled. I removed eno1np0 from the Networking GUI and rebooted, but no change. I still see "Timed out waiting for device sys_subsystem-net-device-eno1np0.device - /sys/subsystem/net/devices/eno1np0." in the console during Proxmox boot...
Thanks for the detailed response! I believe you are right. I do see a "Timed out waiting for device sys_subsystem-net-device-eno1np0.device - /sys/subsystem/net/devices/eno1np0." in the console during Proxmox boot. I removed eno1np0 in the...
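If removing the NIC from the GUI doesn't clear that boot timeout, a leftover reference to eno1np0 in a config file is a likely cause, since ifupdown2 generates a systemd device dependency for every interface it finds. A hedged diagnostic sketch — demonstrated here against a temporary copy so it is safe to run anywhere; on a real host you would grep the listed paths directly:

```shell
# Find leftover references to a removed NIC that can keep systemd
# waiting for the device at boot.
nic=eno1np0

# Demo against a temporary directory (safe anywhere):
tmp=$(mktemp -d)
printf 'auto eno1np0\niface eno1np0 inet manual\n' > "$tmp/interfaces"
grep -rl "$nic" "$tmp"
# On a real Proxmox host, the usual suspects would be:
#   grep -rl eno1np0 /etc/network /etc/systemd/network
rm -rf "$tmp"
```

Any file the grep reports still mentions the interface and is worth inspecting before the next reboot.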
I have found myself in this identical situation after a recent drive failure. Can you clarify "what" you moved to 6.17? I am assuming the kernel, but you mentioned several things.
If so, curiously, I'm on kernel 6.17 and ZFS 2.4.0 and still...
All of what I am about to say is just a matter of preference, not necessarily the right way or the best way. I use NVMe drives for my VMs for much better performance. I tend not to put a lot of data inside my VMs, which keeps them small, and as...
There's no reason not to take up space in the thread for stuff that might help out some other folks in the future. Just know Proxmox is very forgiving. I have run it on a DIY server based on a Ryzen 5 Pro chip, an HP Elite Mini 800 G9 with an Intel...
Sorry for the late response. It seems your PF's priv-flags are not set to allow the VFs to handle any MAC settings:
VF trust should be true for each VF:
ip link set dev enpXsXfXnpX vf N trust on
OR using Trust=true under each of your VF [SR-IOV]...
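Applying the ip link command to every VF in turn can be scripted. A hedged sketch (the PF name enp65s0f0np0 and the VF count are placeholders for illustration, and the echo keeps it a dry run):

```shell
# Enable "trust" on every VF of a physical function. The PF name and
# VF count below are placeholders — on a real host, read the count from
# /sys/class/net/<pf>/device/sriov_numvfs and drop the 'echo' to apply.
pf=enp65s0f0np0   # placeholder PF name
num_vfs=4         # placeholder VF count

for vf in $(seq 0 $((num_vfs - 1))); do
  echo ip link set dev "$pf" vf "$vf" trust on
done
```

The systemd-networkd alternative mentioned above (Trust=true in an [SR-IOV] section per VF) has the advantage of being reapplied automatically on boot, whereas the ip link settings do not persist on their own.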