I currently have a 2-node PVE 6 cluster (and 3rd node QDevice), with the 2 nodes operating headless with PVE installed as ZFS on EFI (systemd-boot). If I want to test the new PVE Linux 5.3 kernel on one of the nodes, I understand that I just need to do:
apt update && apt install...
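A minimal sketch of that kernel install, assuming the opt-in kernel ships as a package named pve-kernel-5.3 (verify the exact name with `apt search pve-kernel` first):

```shell
# Refresh the package index, then install the opt-in 5.3 kernel package
# (package name is an assumption; check `apt search pve-kernel` first)
apt update
apt install pve-kernel-5.3

# Reboot into the new kernel, then confirm the running version
reboot
# after reboot:
uname -r
```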
I'm trying to install Proxmox on a new build PC via bootable USB and as soon as it hits "Starting Proxmox installation" the text garbles and becomes unreadable.
Things I've tried so far:
Ctrl + Alt + F(N) to switch to a different virtual console; all show the same garbled mess.
Booting with debug options...
I was getting this error in syslog:
nf_conntrack: nf_conntrack: table full, dropping packet
To solve this issue I found this:
CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (ARCH / 32)
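As a worked example of that formula (assuming a 64-bit host with 16 GiB of RAM):

```shell
# CONNTRACK_MAX = RAMSIZE (bytes) / 16384 / (ARCH / 32)
RAMSIZE=$((16 * 1024 * 1024 * 1024))   # 16 GiB in bytes
ARCH=64                                 # pointer width in bits
CONNTRACK_MAX=$((RAMSIZE / 16384 / (ARCH / 32)))
echo "net.netfilter.nf_conntrack_max = $CONNTRACK_MAX"
# prints: net.netfilter.nf_conntrack_max = 524288
```

The resulting value can then be set persistently via sysctl (e.g. in /etc/sysctl.conf).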
Having a Mellanox NIC installed in my server, I followed the recommendation to improve performance.
I have a system that had been working awesome for about a year now. Yesterday I performed a shutdown correctly to install a power strip; when I turned it back on, the system went through all its checks but would die when it came time to boot Proxmox. I checked the server out, all checks said it...
I installed PVE 6 with ZFS on two SSD drives on an IBM x3650 M3 server with a H200 SAS controller cross-flashed to IT mode.
The installation process ended correctly, but after the reboot the server does not boot, neither in UEFI mode nor in legacy mode.
If the USB drive is plugged into the server...
We have a three-node cluster; the storage for the VMs is Ceph. I have migrated a lot of physical servers to PVE with Clonezilla, and I have also converted around 15 VMware VMs to PVE, in the past without issues.
Now we hit this problem (the third problem/server after a while) - an Ubuntu...
I've got Proxmox installed on a ZFS RAID1. I've also noticed that the two root disks have 3 partitions: "BIOS boot", "EFI system" and "ZFS".
I wonder what happens, when one of the root disks fails. For ZFS it should not be a problem, but what about booting the system? Are the first 2...
Running into some problems getting the system to work properly now that the release is out, and I want to make a ZFS mirror instead of just a single drive. For clarity, the system showed no issue booting using ext4 or xfs on NVMe or SATA SSDs (if used by themselves) but now that I'm...
In my home lab, I have 2 Intel NUCs (both only allow 1 * SSD and 1 * NVMe) and I’m currently booting them (UEFI) from the internal SSD. The 2 NUCs are set up in a cluster with an external QDevice for corosync quorum votes.
I would like to use these internal SSDs as VM storage to maximise...
I have a strange issue with PVE 5.3 and 5.4.
I can install Proxmox without any issue, but the PVE kernel will not boot. After the GRUB loader I can see "normal" boot messages, and after that (at the moment where normally the prompt appears) the screen stays black and nothing...
I have freshly installed Proxmox (EFI boot, Debian 9 upgraded to Proxmox).
There are 2 graphics cards installed in the server:
- OnBoard (BMC/IPMI): ASPEED AST2400
- PCIe: Nvidia GTX 970
When the server starts now, after the selection in GRUB, the boot output & the console...
I recently tried to install Proxmox VE 5.3 on an older SuperMicro Node:
SuperMicro H8SGL-F Motherboard with integrated Matrox G200E Graphics
AMD Opteron 6348
128GB DDR3 ECC Ram
The problem: Proxmox wouldn't boot, neither the installer nor a finished install from another node (black screen...
Today I'm confronting an issue: I have followed the tutorial step by step, but I can't use my Nvidia GeForce in the VM with x-vga, and I don't know why.
dmesg | grep ecap to check whether remapping is supported or not
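For reference, interrupt remapping is reported through the DMAR extended capability (ecap) register: it is supported when bit 3 of the ecap value printed in dmesg is set. A small sketch of the check (the ecap value 0xf020df is only an example):

```shell
# dmesg prints lines like:  DMAR: ... ecap f020df
ECAP=0xf020df
# Bit 3 (mask 0x8) of ecap indicates interrupt remapping support
if [ $(( ECAP & 0x8 )) -ne 0 ]; then
    echo "interrupt remapping supported"
else
    echo "interrupt remapping NOT supported"
fi
```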
Recent OVH SYS servers are now configured to use UEFI, and their Proxmox 5 (ZFS) installer template manages the post-install setup so Proxmox can boot.
The EFI system partition doesn't appear to be synced across multiple disks, and it's unclear how this will deal with a kernel update. Asking...
I've made a mistake today.
I was trying to modify l2arc_write_max and l2arc_write_boost.
What I did was (CAREFUL, THIS IS WRONG):
options zfs l2arc_write_max 67108864
options zfs l2arc_write_boost 134217728
It should be:
options zfs l2arc_write_max=67108864
options zfs l2arc_write_boost=134217728
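For completeness, a sketch of applying the corrected options, assuming they live in a file such as /etc/modprobe.d/zfs.conf (the file path is an assumption): after fixing the syntax, regenerate the initramfs and verify the live values via sysfs.

```shell
# /etc/modprobe.d/zfs.conf (corrected parameter=value syntax):
#   options zfs l2arc_write_max=67108864
#   options zfs l2arc_write_boost=134217728

# Rebuild the initramfs so the options are picked up at boot
update-initramfs -u
# After a reboot, confirm the values the zfs module actually uses
cat /sys/module/zfs/parameters/l2arc_write_max
cat /sys/module/zfs/parameters/l2arc_write_boost
```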
I have a Proxmox server that has been running some VM images with GPU passthrough for some time. I want to access an old Linux installation from my old desktop PC running CentOS 5. I have now installed the hard disk into my server and was able to add the disk to an existing Ubuntu VM in...
We want to install some VMs automatically.
For this we have created a shell script which uses the qm create command to create the VM.
qm create $ID -name $NAME -bootdisk scsi0 -scsi0 NVME:$DISK1SIZE -scsihw virtio-scsi-pci -scsi1 HDD:$DISK2SIZE -memory 8192 -ostype l26 -sockets $SOCKETS...
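A minimal self-contained sketch of such a script; the storage names NVME and HDD come from the snippet above, and every variable value here is a hypothetical placeholder:

```shell
#!/bin/sh
# Hypothetical example values; adjust to the target node
ID=101
NAME=test-vm
DISK1SIZE=32        # size on the NVME storage, in GiB
DISK2SIZE=100       # size on the HDD storage, in GiB
SOCKETS=2

# Create the VM with two SCSI disks on the virtio-scsi controller
qm create "$ID" \
    -name "$NAME" \
    -bootdisk scsi0 \
    -scsi0 "NVME:$DISK1SIZE" \
    -scsi1 "HDD:$DISK2SIZE" \
    -scsihw virtio-scsi-pci \
    -memory 8192 \
    -ostype l26 \
    -sockets "$SOCKETS"
```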
I am using Proxmox VE 5.2-1 and installed a Windows Server 2016 VM. Everything works fine except restarting the VM, e.g. after a Windows update. If the VM is restarted, it keeps rebooting at the very beginning of the boot process, just after finding the hard disk. The same happens if I select...
I have been fighting with this for 2 days now. To get more free space for more HDDs inside my whitebox cluster, I bought external disk caddies that I thought I'd boot from.
Booting from the Proxmox USB stick and installing it on the external USB drive, with the old drive...