So we have a PBS with 8x NVMe.
It still has 24 slots, but only the first 8 are populated, so no PCIe switch should be involved here. It is just as slow.
What kind of setup do you have where it runs really fast, and how...
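To put numbers on "slow", a raw random-read test directly against the datastore filesystem is a reasonable baseline; a minimal sketch only, where /mnt/datastore/pbs/fio-test is a placeholder path on the PBS datastore:

# Hedged example: 4k random reads on the datastore disk; adjust the path to your setup.
fio --name=pbs-randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --size=4G --runtime=60 --time_based \
    --filename=/mnt/datastore/pbs/fio-test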
I was able to make this work with an NFS share plus a bind mount; however, neither Jellyfin nor Plex detects when new files/folders are added to the library, so I had to resort to scanning the library every 15 minutes. Do you know which mount options...
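For what it's worth, the bind mount itself is a one-liner; a minimal sketch, assuming the NFS export is mounted on the host at /mnt/pve/media and the container is CT 101 (both placeholders):

# Bind-mount a host directory (here the NFS mount) into the container as /media.
pct set 101 -mp0 /mnt/pve/media,mp=/media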
Thanks for your reply, but the issue sort of solved itself....
I did an 'apt update; apt upgrade' and ~20 packages or so got updated; when I rebooted the PVE host that was not getting proper IPv6 addresses, it suddenly did.
Btw, when it wasn't...
Note: I only quickly reviewed your post.
Per your network config, your PVE GUI is at 192.168.1.250, with your gateway at 192.168.1.253. The first step is to verify that these are correct.
One thing that happened to me when PVE had network...
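A quick way to check both from the host shell; just a sketch, assuming vmbr0 is the management bridge (placeholder name):

# Show the addresses actually assigned and the default route.
ip -br addr show vmbr0
ip route show default
# Confirm the gateway from the post answers.
ping -c 3 192.168.1.253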
With regards to the E610 you might be interested in this Intel Community Support thread.
https://community.intel.com/t5/Ethernet-Products/E610-XT2-No-Longer-Seen-in-Proxmox-9-PC/m-p/1725698
This isn't a question about PVE; it is about configuring your thin client.
From the thin client's point of view, the VMs look the same as 40 separate machines. It might be more productive to ask on that vendor's forum.
No worries...thanks!
I have opened a ticket with Intel. I am betting they will tell me they do not support Proxmox and they will do nothing. I am hopeful but also realistic.
If your configuration is identical, how about the physical connection? Some switches can drop multicast, etc. Also, what type of network cards?
Does your non-working host have a link-local address? Hosts should ALWAYS have a link-local address...
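A sketch of how to check, assuming the management bridge is called vmbr0 (placeholder):

# A link-local IPv6 address (fe80::/10) should appear here even without router advertisements.
ip -6 addr show dev vmbr0 scope link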
Here are the steps:
root@pve1:~# wget https://github.com/intel/ethernet-linux-ixgbe/releases/download/v6.3.4/ixgbe-6.3.4.tar.gz
root@pve1:~# tar xvpf ixgbe-6.3.4.tar.gz
root@pve1:~# cd ixgbe-6.3.4/src/
root@pve1:~/ixgbe-6.3.4/src# apt install...
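The truncated step is the toolchain/headers install; a typical continuation would look roughly like this (an assumption, not the exact commands from the post):

root@pve1:~/ixgbe-6.3.4/src# apt install build-essential pve-headers
root@pve1:~/ixgbe-6.3.4/src# make install
root@pve1:~/ixgbe-6.3.4/src# update-initramfs -u
root@pve1:~/ixgbe-6.3.4/src# reboot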
Somehow my adapter is coming up uninitialized. Here is the log from the failure:
Jan 10 12:15:00 pve01 kernel: ixgbe 0000:02:00.0 0000:02:00.0 (uninitialized): ixgbe_check_options: FCoE Offload feature enabled
Jan 10 12:15:00 pve01 kernel: ixgbe...
OMG this worked perfectly! Thank you for this. I see in the new driver package that there is an updated NVM for the E610. I will get that updated also.
Ever feel like tech is two steps forward and one step back? Well, I certainly do. Upon...
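Before and after flashing the NVM it's worth noting the reported firmware version; a sketch, assuming the E610 port shows up as enp2s0f0 (placeholder interface name):

# "firmware-version" in the output reflects the NVM image on the adapter.
ethtool -i enp2s0f0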
Ok, I can confirm. I just made another privileged LXC, same Ubuntu 24.04.
#1: IPv4 - DHCP
#2: IPv6 - static/blank
#3: nesting=1
This does work.
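For reference, that test container is roughly equivalent to the following CLI call; only a sketch, and the VMID, storage names and template filename are placeholders:

# Privileged LXC (unprivileged=0), IPv4 via DHCP, no IPv6 config, nesting enabled.
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname lxc-test --unprivileged 0 --features nesting=1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8 --memory 2048 --cores 2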
I don't know, I was probably just staring at it too long last night.
The original guide worked for me when my primary (Dell R420) PVE server was on Proxmox 8 (8.3, iirc). I've since upgraded that server to PVE 9 and it's still working, but I've noticed some errant behavior that might be related to this. Since...
Actually, the thin client does not have a Windows OS installed; it is a zero-client thin machine. So how can I get to my specific VM (with its unique IP) from a thin client running a customized Linux, or by some other means, without using mstsc? When the thin client turns on, it goes into...
I literally just woke up, so please bear with me.
But I just tried to create a privileged LXC with Ubuntu 24.04-2_amd64.tar.zst, IPv4 DHCP (IPv6 static/blank), set nesting=1 and it's working fine.
Due to my network settings, VLANs and stuff, I...
I don't really mind. I don't create and restore LXCs that often.
I just wanted to know why it behaves like that in PVE 9 when it was OK in PVE 8.
The HW is exactly the same.
I accidentally installed PVE8 in ZFS RAID0.
Now in PVE9 I have it...
Is editing both /etc/network/interfaces and /etc/network/interfaces.d/sdn the correct way to go?
/etc/network/interfaces
iface nic3 inet manual
        mtu 9000

iface nic4 inet manual
        mtu 9000
/etc/network/interfaces.d/sdn...
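For context, if the goal is jumbo frames end to end, any bridge built on top of those NICs normally needs the larger MTU as well; a sketch of such a stanza in /etc/network/interfaces, where vmbr1 is a placeholder name, not from the original post:

auto vmbr1
iface vmbr1 inet manual
        bridge-ports nic3
        bridge-stp off
        bridge-fd 0
        mtu 9000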