Are you booting the host using UEFI or legacy BIOS boot? If using UEFI, you need to add the kernel command line parameters to /etc/kernel/cmdline rather than /etc/default/grub, then run pve-efiboot-tool refresh and reboot.
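A minimal sketch of that UEFI flow, assuming a systemd-boot based install (the parameters shown are examples, not taken from your setup):

    # append the parameters to the single line in /etc/kernel/cmdline, e.g.:
    #   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
    nano /etc/kernel/cmdline
    # copy the updated command line onto the ESP(s), then reboot
    pve-efiboot-tool refresh
    reboot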
The following wiki should be useful (note the link under...
I've tested this on both Coffee Lake (NUC8i5BEH) and Gemini Lake (NUC7PJYH) and both exhibit the same issue: the Ubuntu 18.04.3 LTS VM boots using the i440fx Machine Type but fails to boot on either NUC using the q35 or pc-q35-3.1 Machine Types.
I've raised this as a bug at...
Hi Guys,
Anyone got any suggestions on how to fix this since I'm stumped?
The VM (Ubuntu 18.04.3 LTS Server) fails to boot (hangs at the following screen in the console) when I set the Machine Type to Q35:
agent: 1
bios: ovmf
boot: dcn
bootdisk: scsi0
cores: 4
cpu: host
efidisk0...
I don’t believe there’s any solution to this other than Proxmox fixing it so that the Mobile UI prompts for the 2FA token as the full UI does. The only workarounds are (A) disable 2FA, or (B) request the full UI rather than the Mobile UI from the mobile device.
I’ve already raised this as a bug...
I ended up monitoring the package 0 temperatures with Grafana/SNMP and they seemed to be above what I would expect as “normal”... I therefore raised a support case with Intel, who identified a hardware fault with my particular NUC and replaced the unit. The replacement NUC (same...
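For a quick local check before setting up full monitoring, a minimal sketch using lm-sensors (the “Package id 0” label applies to Intel platforms; output varies by hardware):

    apt install lm-sensors    # one-off install on the Proxmox host
    sensors-detect            # answer the prompts to load the right sensor modules
    sensors                   # look for the "Package id 0" temperature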
Anyone got any ideas on this? I've since tried the following combinations (see the CLI sketch after this list):
Adding root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on video=efifb:off to the kernel command line and running pve-efiboot-tool refresh before rebooting
Changing the machine type between q35 and pc-q35-3.1 using qm set...
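For reference, those machine type changes can be made from the CLI like this (VM ID 100 is a placeholder):

    qm set 100 --machine q35
    qm set 100 --machine pc-q35-3.1
    # revert to the default i440fx type
    qm set 100 --delete machine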
Hi There,
I currently have a headless Intel NUC7PJYH and can pass through the Intel IGP to a VM running Ubuntu 18.04.3 LTS as long as the machine type is set to “Default (i440fx)”. I don’t use the passed-through Intel IGP for graphics output; I just pass it through as a secondary graphics card (the...
I’ve seen the same type of issue before where I’ve had two noVNC windows open to different servers and one of them thinks the Shift key is always pressed (capitalising all letters and turning 1-9 into their shifted symbols). I never got to the bottom of why this was the case and, if I remember...
“Note: Manual mounting of the NFS share over NET-2 still works, but via Proxmox storage it fails (see error in the first post)”
So, I assume from the above that you can mount the Synology NFS share via the Proxmox host CLI/SSH (for each host) but it doesn’t work from the Proxmox GUI? If so, this...
I assume the IPs on your PVE hosts didn’t change from PVE5.4 to PVE6? For example, is the NFS export of the shared folder on the Synology configured to allow the share for both the incoming NET-1 IP address/subnet (e.g. x.x.x.0/24) and the NET-2 IP address/subnet (e.g. y.y.y.0/24)?
I also assume that the...
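A quick way to verify what the Synology is actually exporting, and to which networks, from each PVE host (the IP is a placeholder):

    # list the NFS exports and the clients/subnets allowed to mount them
    showmount -e y.y.y.10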
I had an NFS 4.1 mount to my Synology NAS working under PVE5.4. When I upgraded (actually rebuilt) to PVE6, the same NFS 4.1 mount still works to the Synology, so I assume it must be some config issue in your environment...
I added the NFS mounts through the GUI for both PVE5.4 and PVE6 using the IP of...
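For comparison, adding the mount through the GUI is roughly equivalent to this CLI command (the storage name, IP, and export path here are placeholders for illustration):

    pvesm add nfs synology-nfs --server x.x.x.10 --export /volume1/proxmox --content images --options vers=4.1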
Hi There,
I have a NUC8i5BEH (Intel Core i5-8259U CPU) node and noticed that my Proxmox syslog (on PVE 6, though the same occurred on PVE 5.4) often outputs the following set of critical errors throughout the day:
23/07/2019 23:26:56 crit pve-host1.local kern kernel...
I can’t really recall now. I was only getting crashes every so often on 5.4 and, after I put netconsole on, I didn’t see any further crashes (I’m not suggesting netconsole affected this), even after the kernel upgrade to the last 5.4 version I mentioned above. I do recall doing some BIOS...
Hi All,
I did a clean install, moving from Proxmox 5.4 (single-disk ext4) to Proxmox 6 (single-disk ZFS), and I don’t seem to be able to get IOMMU enabled under PVE6.
I followed the following instructions (as I had with PVE5.4) but suspect that because I’m now booting ZFS, it’s...
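A quick way to confirm whether IOMMU actually came up after a reboot (assuming an Intel platform; this just checks the running kernel and its log):

    # confirm the parameter made it onto the running kernel
    cat /proc/cmdline
    # on Intel systems, look for "DMAR: IOMMU enabled"
    dmesg | grep -e DMAR -e IOMMU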
Hi,
I currently have a Proxmox 5.4 homelab cluster (2 nodes + external qdevice) and plan to rebuild the cluster as Proxmox 6 via a complete re-install of each Node (since I want to move the underlying local storage to ZFS on UEFI).
I’ve read the PVE6 upgrade guide...
I’ve had no problems for the last few weeks and, just before reading this post, I updated to the following kernel.
Linux pve-host1 4.15.18-18-pve #1 SMP PVE 4.15.18-44 (Wed, 03 Jul 2019 11:19:13 +0200) x86_64 GNU/Linux
I’ve got netconsole running in case of any similar kernel panics and will...
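For anyone wanting to do the same, a minimal netconsole sketch (the IPs, interface name, and MAC address are placeholders; the receiver just listens on UDP port 6666):

    # on the Proxmox host: stream kernel messages to a remote listener
    modprobe netconsole netconsole=6666@192.168.1.10/eno1,6666@192.168.1.20/aa:bb:cc:dd:ee:ff
    # on the receiving machine: capture whatever arrives (syntax varies by netcat variant)
    nc -u -l 6666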
Thanks for the detailed response!
This sounds like a good alternative. If this is not being worked on currently, should I raise it as a bug/feature request through bugzilla?
Thanks!
Hi There,
Is there a technical reason why online/live migration of VMs that are replicated across local disks (from Node 1 local-zfs to Node 2 local-zfs) does not currently seem to be permitted? If it is not currently available, is it planned to be made available in Proxmox 6...
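For context, this is one form of the operation I would expect to work (the VM ID and node name are placeholders):

    # live-migrate a running VM whose disks live on local-zfs
    qm migrate 100 pve-node2 --online --with-local-disks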