Here's part of the syslog from after the reboot (note: I set HA to "ignored"). It is VM 118 that gets started, while it was set to stop just before the reboot of the PVE node; note that VM 117, which is always on, should be started...
My entire dmesg output is actually just these 8 messages repeating:
I don't know if this points to an issue or if this is the expected behaviour for my multipath iSCSI connection.
Edit:
I've rebooted my secondary Proxmox Node in...
Hello,
It's because you're using this functionality the wrong way.
You have to set the maximum number of vCPUs you want on your VM; this value is the total core count (sockets * cores). You need to restart the VM to apply this value.
Next you activate the...
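As a sketch of the steps described above (the VM ID 100 and the core counts are assumptions, not from the post): first set the maximum via sockets and cores, then enable CPU hotplug and pick the currently active vCPU count with the `vcpus` option.

```shell
# Assumed VM ID 100: the maximum is sockets * cores (here 2 * 4 = 8 vCPUs).
# Changing sockets/cores requires a VM restart to take effect.
qm set 100 --sockets 2 --cores 4

# Enable CPU hotplug, then set the currently active vCPU count (must be <= 8).
qm set 100 --hotplug cpu
qm set 100 --vcpus 4
```

The active `vcpus` value can then be raised or lowered at runtime without rebooting the guest, as long as it stays within the sockets * cores maximum.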
Hi,
Do you get the same error as on the thread:
https://forum.proxmox.com/threads/opt-in-linux-6-17-kernel-for-proxmox-ve-9-available-on-test-no-subscription.173920/page-5
Awesome, you can mark the thread solved by editing the first post and selecting appropriate subject prefix.
Cheers
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
If you want to create a new volume as part of VM creation, you specified the syntax wrong; from "man qm":
Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
It should be: data:vm-2114-disk-0:10
The command only accepts...
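For reference, a minimal sketch following the man-page syntax quoted above (VM ID 2114 and storage `data` are taken from the post; the 10 GiB size and the `scsi1` slot are assumptions):

```shell
# Allocate a NEW 10 GiB volume on storage "data" (STORAGE_ID:SIZE_IN_GiB):
qm set 2114 --scsi1 data:10

# Attach an EXISTING volume instead, by its full volume ID:
qm set 2114 --scsi1 data:vm-2114-disk-0
```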
The reply from support was that VMs using NFS can freeze.
That is why the new version implements the "snapshot-as-volume-chain" feature.
It is still only a preview, but the initial feedback has been consistently positive.
I think ASRock is a sister company of ASUS, but there may of course still be differences.
By the way the installation of 6.8 was easy - but of course needs a reboot.
I temporarily added the repo for Proxmox 8 to a new file...
Not yet. I have Home Assistant and the NAS on this server, so I need to find some time when I can reboot without affecting others.
My mobo is ASRock, though - not ASUS.
I found the root cause: I restored the LXC on a different machine on which the attached disk (mp0: /mnt/pve/smb/share,mp=/mnt/share) had not been mounted, and that caused the issue. When I removed it, it worked fine.
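As a hedged sketch of that fix (the container ID 118 is an assumption): either drop the stale mount point from the container config, or mount the share on the target host before starting the container.

```shell
# Remove the mp0 mount point entry from the container config:
pct set 118 --delete mp0

# Or, alternatively, mount the SMB share on the new host first
# (server, share, and credentials file are placeholders):
# mount -t cifs //server/share /mnt/pve/smb/share -o credentials=/root/.smbcred
```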
@fabian We are currently building a Proxmox ManageIQ Provider together with the ManageIQ Team.
But once again we face the issue of reusing the VMIDs.
As you mentioned above, something outside of PVE should keep track of it. But PVE does not...
I have run Proxmox VE and Proxmox Backup Server with ZFS for some years and had no problems with in-place ZFS upgrades.
But not all my desktop systems run the same ZFS version needed to access the newer ZFS disks.
If your path is to go to Proxmox VE 9.x, then this is only one step.
Hello.
I have solved my issue by setting mtu=1500 on all NICs in all VM *.conf files, using sed in-place. This is just to understand whether there is something strange in my environment or whether this is expected.
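The sed change described above can be sketched like this (the config path is the standard PVE location; take a backup first; the regex is an assumption and only appends `mtu=1500` to net lines that don't already carry an mtu):

```shell
# In-place sketch: append ",mtu=1500" to every "netX:" line lacking an mtu
# in each VM config file under the standard PVE qemu-server directory.
for f in /etc/pve/qemu-server/*.conf; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  sed -i -E '/^net[0-9]+:/{/mtu=/!s/$/,mtu=1500/}' "$f"
done
```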
If a VM, started on a Proxmox 8.x...