Hi @LG-ITI, welcome to the forum.
First, let’s make sure we are aligned on what “ZFS-over-iSCSI” means: This approach allows you to programmatically expose virtual disks as iSCSI LUNs. These virtual disks are backed by ZFS. The consumers of...
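As an illustration only (pool name, target, and address below are made up), a ZFS-over-iSCSI entry in /etc/pve/storage.cfg might look like this; iscsiprovider has to match what the storage box actually runs (comstar, istgt, iet, or LIO):

  zfs: san-zfs
          portal 192.0.2.10
          pool tank
          iscsiprovider LIO
          target iqn.2003-01.org.linux-iscsi.san:target1
          content images
          sparse 1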
There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device):
1. ZFS over iSCSI - as @bbgeek17 explained.
2. qcow2 over NFS - install nfsd, map the dataset into exports... (a sketch follows below)
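A minimal sketch of option 2, with made-up addresses, dataset, and storage ID (tank/vmstore, nfs-vmstore); the export options are just a common starting point:

  # on the storage box:
  echo '/tank/vmstore 192.0.2.0/24(rw,sync,no_root_squash)' >> /etc/exports
  exportfs -ra

  # on the PVE node:
  pvesm add nfs nfs-vmstore --server 192.0.2.10 --export /tank/vmstore --content images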
Same here - a freshly installed server with Proxmox 9.1.4.
After rebooting, commands like "qm list" are very fast - around 0.7 seconds.
But the longer the uptime, the slower the commands become.
Uptime 3 days:
time qm list
real 0m21.239s
user...
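If you want to chart how it degrades with uptime, a rough sketch using GNU time (one sample per hour; GNU time prints to stderr, so the figure still shows with stdout discarded):

  while true; do
      echo -n "$(uptime -p): "
      /usr/bin/time -f '%e s' qm list > /dev/null
      sleep 3600
  done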
I have upgraded several hosts from Proxmox 8 to 9, and since then I see very long loading times when I use the command
pvesh get /nodes/{node}/qemu
and the same for
qm list
The problem is that it takes 40-60 seconds to get a result on different hosts...
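One way to see where those seconds actually go (a sketch; assumes strace is installed) is a syscall summary of the slow call:

  strace -c -f pvesh get /nodes/$(hostname)/qemu > /dev/null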
Hi Chris,
Thanks for your input. I used ncdu to check for the large sparse file, but the only one of any size like that I could find was the kcore file, which is 120TB and normal as far as I can tell.
I did change this CT from unprivileged to...
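For what it's worth, the kcore size can be confirmed as harmless by comparing allocated vs. apparent size; /proc/kcore always looks enormous but occupies no disk:

  du -h /proc/kcore                   # allocated size: effectively zero
  du -h --apparent-size /proc/kcore   # apparent size: huge, and that is normal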
My experience tells me that Proxmox has a higher wearout than plain Debian because of pvestatd, not because of the filesystem.
If need be, pvestatd can be configured so that it keeps its data in RAM only. One could...
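To my knowledge there is no pvestatd switch for this; the usual mount-level workaround is to put the RRD database that pvestatd feeds on a tmpfs, accepting that the metrics are lost on reboot. The path below is the Proxmox default, verify before using:

  # /etc/fstab - keep pvestatd's RRD data in RAM (lost on every reboot)
  tmpfs /var/lib/rrdcached tmpfs defaults,size=128M,mode=0755 0 0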
So I consulted uncle Chatty "GPT", and he gave me this.
Is this what is happening?
This part:
Means:
Chris has already submitted a NEW patch
That patch tells PBS:
This patch is not yet released in a stable package
It’s currently in...
Hello,
I can't find the right answer to my question right away, so I'm opening a new post.
I've had my Proxmox host suddenly become unreachable several times now.
The log shows the following message:
Dec 31 19:55:58 Proxmox01 kernel: e1000e...
@Glowsome, thanks for the 'ansible-fix';
we're using SaltStack (slightly different architecture - the client connects to the server, not vice versa), which is why we currently work around it using the hook script.
BTW, I reported the bug and it seems to be...
That's the new VM.
The old one was installed in UEFI mode. The new one clearly said it couldn't boot from the ISO because it wasn't UEFI-compatible, so I had to use SeaBIOS.
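For reference, the firmware can also be switched from the CLI; VMID 100 and storage local-lvm below are placeholders, and OVMF additionally needs an EFI vars disk:

  qm set 100 --bios seabios
  # or, for UEFI:
  qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m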
What is your NIC? I have two computers with Intel e1000e NICs and learned about the offloading bug, which produces exactly the conditions you describe; if so, search around on here for the fix.
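For anyone who lands here later: the workaround usually posted for the e1000e "Detected Hardware Unit Hang" resets is to disable segmentation offloading on that port. A sketch for /etc/network/interfaces, assuming the interface is named eno1 (adjust to yours):

  iface eno1 inet manual
          post-up /sbin/ethtool -K eno1 tso off gso off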
Ah, thank you for finding that. I now see in the original bug tracker for implementing S3 support that object locking was not included initially but may be considered in the future.
Press any key to enter the Boot Manager menu, then enter the EFI Firmware Setup.
Open Device Manager > Secure Boot Configuration and uncheck "Attempt Secure Boot".
Edit: Oh. If that screen doesn't even display, I can't tell.
I wrote this because...
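If the firmware screen never shows, an alternative (a sketch; VMID 100 and storage local-lvm are placeholders, and this wipes the VM's stored EFI variables) is to recreate the EFI disk without the pre-enrolled Secure Boot keys:

  qm set 100 --delete efidisk0
  qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0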
I could see that being handy… maybe add it to the Bugzilla site as a suggestion if it's new.
It's not what you asked, but I expect you should be able to grant permission in PVE on just their VM so they can do the shutdown themselves.
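A sketch of how that could look, with a made-up user alice@pve and VMID 100; VM.PowerMgmt is the PVE privilege covering start/stop/shutdown:

  pveum role add PowerOnly --privs VM.PowerMgmt
  pveum acl modify /vms/100 --users alice@pve --roles PowerOnly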