Hi JacobHM,
welcome to the forum!
Can you add the output of:
## Host network configuration
cat /etc/network/interfaces
and
## Container configuration
cat /etc/pve/nodes/<nodename>/lxc/<vmid e.g. 100>.conf
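For comparison, a typical default Proxmox host network config with one bridge looks roughly like this (interface name and addresses here are placeholders, not taken from any actual system):

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0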
BR, Lucas
https://forum.proxmox.com/threads/jellyfin-lxc-with-nvidia-gpu-transcoding-and-network-storage.138873/
that should get you the rest of the way, and applies to other containers as well.
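As a rough sketch, the container config additions from guides like that one usually look like the lines below. The character device major numbers are assumptions and vary per system (check ls -l /dev/nvidia* on the host first; nvidia-uvm in particular gets a dynamic major):

# allow access to the NVIDIA character devices (195 is the usual driver major)
lxc.cgroup2.devices.allow: c 195:* rwm
# bind-mount the device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file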
OK, so I was trying to use ChatGPT earlier and it was confusing me, so I tried Copilot, and it gave me something simpler. I'll see if this works.
1. Update Proxmox and reboot
apt update
apt full-upgrade -y
reboot
2. Install kernel...
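Once the driver steps are done and the host is back up, a quick sanity check on the host (assuming the NVIDIA driver installed cleanly; nvidia-smi only exists after the driver package is in place):

uname -r
nvidia-smi
ls -l /dev/nvidia*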
Alright. It appears to be working. I think I was not giving it enough time to re-establish the connections when I was pulling the cables out to test and then plugging them back in.
You can see in my second attempt to ping below it fails 8...
I've added a Notification target "arkthis" type "sendmail" - and configured my SMTP server login.
I presume this is necessary for "mail-to-root" to even leave the localhost's sendmail, right?
I've now also enabled (=check) the "arkthis"...
Great, thanks for sharing.
You can mark the thread as solved by editing the first post and selecting the appropriate subject prefix.
Cheers
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I figured it out. There was a symlink missing under /etc/systemd/system/multi-user.target.wants for mnt-pve-dir1.mount. Once it was created, dir1 shows up after a reboot.
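Rather than creating that symlink by hand, systemctl can usually do it for you (a sketch; this assumes the unit file exists under /etc/systemd/system and has an [Install] section with WantedBy=multi-user.target, which is what "enable" turns into the .wants symlink):

systemctl enable mnt-pve-dir1.mount
systemctl start mnt-pve-dir1.mount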
Hey guys, I am kind of new to Linux in general. I am trying to get my NVIDIA 3060 Ti installed so I can pass it through to a couple of LXCs.
Are there any good guides on what specifically to do? I tried some but they didn't work. I think I removed...
@Lukas Wagner : Thank you so much!
That's quite likely exactly what I was looking for.
I'll check the notification matching configuration.
PS: And thanks for assuming absolutely right, in my interest and with the best intentions! I'm used...
Oops! I have been in IT for 29 years. Almost as long as you ;-)
Anyway, let me just say I have been keen on the Minisforum brand for as long as I can remember. I landed my UM790 Pro a few months before hell broke loose with RAM and storage...
Then check why the disk is not mounted on boot, if indeed it is not mounted:
journalctl -u mnt-pve-dir1.mount
systemctl status mnt-pve-dir1.mount
You should check /etc/fstab on the old nodes and match it on the new node.
The filesystem will not be mounted by PVE; it has to be mounted already when PVE starts.
If you are mounting through systemd, you should check the log for any errors related to...
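For illustration, an fstab entry for such a directory storage might look like the line below (the UUID placeholder and the ext4 filesystem are assumptions; use blkid to find the real UUID of the disk):

UUID=<uuid-of-your-disk> /mnt/pve/dir1 ext4 defaults 0 2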
Hi, every new unmodified LXC on the debian-12-standard_12.12-1 template will not connect to my network, local or otherwise.
With a static IP:
root@CT100:~# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
From 192.168.0.101...
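For reference, a static IP like that can be set on a container from the host with pct (a sketch; the bridge name vmbr0 and the addresses are assumptions based on the ping output above):

pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.0.101/24,gw=192.168.0.1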
The hardware-RAID topic is moot anyway. The price increases for controllers in recent years are almost as bad as the current ones for RAM and SSDs. ;-)
I have a 4 node 9.1.5 cluster where each node has a 300GB boot disk and second disk (1TB SSD). The second disk on all nodes is defined as DIR storage named dir1. /etc/pve/storage.cfg shows the following
dir: dir1
path /mnt/pve/dir1
contents...
A ZFS pool consists of one or more vdevs. Vdevs implement the known topologies: single drive, mirror, or RAID-Zx. When a pool has more than one vdev, the combination is called "striped". (Nearly) every ZFS pool I have has more than one vdev -->...
Everything HDD
I want to prevent striping.
That's the reason why I'm thinking about a setup similar to the current one, but using ZFS instead of mdadm.
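To avoid a striped layout, the whole pool can be built as a single RAID-Z vdev, for example (disk names are placeholders; using /dev/disk/by-id paths is generally more robust than sdX names):

zpool create tank raidz2 sda sdb sdc sdd sde sdf
zpool status tank

Striping only appears when you later add a second vdev (another "zpool add tank raidz2 ..."); as long as the pool has exactly one vdev, nothing is striped.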