I have a server with 5 NVMe disks, 1 for the OS and 4 for data.
# lsblk /dev/nvme{0..4}n1
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1       259:0    0 894.3G  0 disk
├─nvme0n1p1   259:1    0 894.2G  0 part
└─nvme0n1p9   259:2    0     8M...
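For what it's worth, that p1 + 8M p9 layout looks like a ZFS member disk. If the four data disks are meant to end up in one pool, a minimal sketch would be something like the following; the pool name "tank" and the raidz1 layout are my assumptions, not something from the original post:
# zpool create -o ashift=12 tank raidz1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
# zpool status tank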
You can do that. You create a VM, put the operating system disk on sdb (local-lvm), and put an additional drive for your data on the extra storage you created on sda.
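A minimal sketch of the same thing on the CLI; the VM ID 100, the storage name "sda-storage" for the extra space on sda, and the disk sizes are placeholders for illustration:
# qm set 100 --scsi0 local-lvm:32
# qm set 100 --scsi1 sda-storage:500
The first line puts a 32 GB OS disk on local-lvm (the sdb storage), the second adds a 500 GB data disk on the storage created on sda.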
Not quite. I wanted to use the "small" RAID for Proxmox and the VM (however that ends up being done) and the large RAID only for the VM's data. I have tried both variants. During the Proxmox installation I used the entire small RAID with...
From your first post I gathered that you want to use the small RAID for Proxmox and the larger one for the virtual machines. Or did I misunderstand you? If so, what exactly do you want?
You now have PVE on the smaller RAID...
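As a rough sketch of how the larger RAID can then be attached for the VM data, assuming it exists as a ZFS pool (the pool name "bigraid" and the storage ID "bigraid-vmdata" are placeholders):
# pvesm add zfspool bigraid-vmdata --pool bigraid --content images,rootdir
# pvesm status
After that the storage shows up in the GUI and VM disks can be placed on it.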
I updated via the UI from pve-manager/9.1.5/80cf92a64bef6889 (running kernel: 6.17.4-2-pve) to pve-manager/9.1.5/80cf92a64bef6889 (running kernel: 6.17.9-1-pve).
Yes.
I am used to using "Virtio-GPU" with SPICE. Now I switched a Windows 11 VM to "VirGL GPU", just for this test. It works the same for me, in my environment. I don't see or feel any subjective difference, and I won't bother running performance tests.
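For anyone who wants to reproduce the switch without the GUI, the display type can be changed per VM; the VM ID 101 is a placeholder:
# qm set 101 --vga virtio
# qm set 101 --vga virtio-gl
The first line is the Virtio-GPU display I had before, the second switches the same VM to VirGL.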
Something is odd after the upgrade: ifupdown2 DHCP entries do not work; I had to assign a static IP via the LXC network entry, otherwise the containers won't start. I know DHCP is not the recommended way, but for many services, or even for fresh users, this is the way...
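For reference, the same static assignment can be done from the shell; the container ID 105 and the addresses are made up for illustration:
# pct set 105 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1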
FYI, you can attach text files to your posts; that makes sharing a lot of text simpler. Since your ethernet controller is in the same IOMMU group as the audio device, I'd not try to pass through the whole group. There is one option [0] to isolate the audio...
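To double-check which devices share a group before deciding, a plain sysfs one-liner (nothing Proxmox-specific) works:
# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/}; echo "group ${n%%/*}: $(lspci -nns ${d##*/})"; done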
Reads like a problem with the display resolution.
Try a vga parameter at boot:
https://pierre.baudu.in/other/grub.vga.modes.html
Does the problem also show up on the GRUB screen?
If not, just press "e" to open the editor...
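If one of the modes from that list helps, it can be made permanent in /etc/default/grub; a sketch, assuming 1024x768 turns out to work (adjust to whatever the link above gives you):
GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX=keep
Then run:
# update-grub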
Yep - I know it would work... just wondered why the driver thought it was off when it was enabled in the BIOS.
I'm in the office today, so I'll do that when I get home tonight.
I don't think this is a good or stable approach. And I know it's open source and we could contribute the technical side of the code. But this seems to be a pretty important architectural decision about the core of Proxmox and Backup Server and much...
A small follow-up for anyone who might also run into this issue... here is how I fixed it for now:
Instead of passing through the entire PCIe SATA device to the TrueNAS VM, I followed this guide: Passthrough HDDs to VM
to pass my HDDs individually...
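The core of that guide is attaching each disk by its stable device ID instead of the controller; a sketch with a placeholder VM ID (100) and a placeholder disk ID:
# ls /dev/disk/by-id/
# qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
One qm set line per HDD, each on its own SCSI slot.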
Just upgraded myself. It went just fine, no issues. 3 OSDs. I got some interesting data after upgrading:
Ceph Squid → Tentacle Upgrade Benchmark Summary
Cluster: 3-node Proxmox (Intel NUC14, NVMe)
Pool: ceph-vms (replicated, size 2 / min_size 2)...
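In case anyone wants to compare before/after numbers on their own cluster, a common way to collect this kind of pool benchmark is rados bench; the pool name is the one from the summary above, the runtime and the exact invocation are my assumption:
# rados bench -p ceph-vms 60 write --no-cleanup
# rados bench -p ceph-vms 60 seq
# rados bench -p ceph-vms 60 rand
# rados -p ceph-vms cleanup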
(I've dropped loads of terms in here that can be searched for)
Proxmox SDN does not include a router. It looks to be designed for layer 2 and tunneled layer 2. To be honest, I haven't bothered with it. However, I have been fiddling with networks...
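For anyone who wants to poke at the layer-2 side anyway, an SDN setup boils down to a zone plus a vnet; a minimal sketch of what /etc/pve/sdn/zones.cfg and /etc/pve/sdn/vnets.cfg can end up containing (names, tag and peer addresses are placeholders, and these files are normally written by the GUI/API rather than edited by hand):
zones.cfg:
vxlan: z1
        peers 10.0.0.1,10.0.0.2,10.0.0.3
vnets.cfg:
vnet: vnet1
        zone z1
        tag 100000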