I have one node that reboots on its own.
I haven't pinned down what's causing the system to shut down/reboot.
I've replaced all of the memory (which tested fine before replacing it), temps appear okay on the CPUs; otherwise I have a 10 GbE fiber card and an InfiniBand card that will be next...
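In case it helps anyone following along, the journal from the previous boot is where I'd look to tell a kernel panic from a hard power cut (this assumes persistent journaling is enabled on the node):
# journalctl -b -1 | tail -n 50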
noVNC does not sync Num Lock and Caps Lock with the host.
In every situation it appears to be exactly the opposite.
I wonder if this information might be useful:
https://www.cendio.com/bugzilla/show_bug.cgi?id=400
I was trying to mount an LXC raw disk from another container as read-only.
In this case, my originating container is acting as a Let's Encrypt SSL generator with RW access to a raw container disk. I want to share that disk as an RO mount point across several containers.
I believe this used...
It's impossible to mount a read-only mount point on NFS storage.
The workaround is removing the read-only option, which allows the container to boot.
I really wish to mount read-only, and this used to work in PVE 5.x.
● pve-container@20005.service - PVE LXC Container: 20005
Loaded: loaded...
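For context, the mount-point line in /etc/pve/lxc/20005.conf looks roughly like this (the storage and volume names below are placeholders for my actual ones); it is the ro=1 flag that stops the container from starting on NFS storage, and dropping it lets it boot:
mp0: nfs-storage:20005/vm-20005-disk-1.raw,mp=/etc/letsencrypt,ro=1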
This does appear to be resolved in pve-qemu-kvm: 4.0.1-3.
I earlier reported that it wasn't working in this latest version, but after rebooting I am no longer seeing the problem.
I believe 3~6 seconds is more accurate with Left-Shift.
It would seem the Left-Ctrl key is doing the same thing. This is even more annoying than Left-Shift, as it will start interpreting your keystrokes as hotkeys.
I've tested an LXC container whose mount point is stored on the same synchronous NFS share on a PVE 5.4-1 node, and it is not suffering from the synchronous write errors that I am seeing on PVE 6.0-2.
Writes are hitting my SLOG/ZIL and at expected speeds.
I've also created a bug report #2409...
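For anyone who wants to check the same thing, one way to see whether writes are actually hitting the log device is to watch per-vdev activity during a test run (the pool name here is just an example):
# zpool iostat -v tank 1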
I can't seem to get any attention on this thread.
If someone could verify my findings, it would be helpful in troubleshooting.
I've created a bug report #2411.
UPDATE:
This applies to PVE 6. My nodes are on the latest 6.0-2.
I just tested PVE 5.4-1 and QCOW2 disks are properly syncing to SLOG on the...
I have purchased two NVMe disks with PLP that I am using for my ZFS SLOG/ZIL. I have room for several more if it turns out to be beneficial to do so.
While I have successfully added two log devices, my question is whether multiple log devices will stripe or give me the performance of more than...
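For reference, the two layouts I'm weighing look like this on the command line (pool and device names are placeholders); my understanding is that separate log vdevs are striped, while a mirrored log trades that for redundancy, but I'd appreciate confirmation:
# zpool add tank log /dev/nvme0n1 /dev/nvme1n1         # two separate log vdevs (striped)
# zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1  # one mirrored log vdev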
I have now tested all caching modes.
The caching mode is completely ignored in all circumstances.
RAW will cache as writeback, regardless of the config setting.
Likewise, QCOW2 caches as writethrough, none, or directsync every time, regardless of the disk caching mode.
RAW disk set to...
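For completeness, this is the sort of command used to switch the cache mode between runs (the VMID, storage, and volume names are placeholders); as far as I know, the VM needs a full stop and start for the new mode to take effect:
# qm set 100 --scsi0 nfs-storage:100/vm-100-disk-0.qcow2,cache=writeback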
I have been testing my VM disk performance on NFS synchronous shared storage. The results have left me scratching my head trying to figure out what's going on. They may be the expected behavior, but even so, I am lost as to how that could be.
On my NFS synchronous share I created a VM (Linux...
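A sync-write run along these lines from inside the guest is enough to exercise the same path (fio here is just one option; the file path and size are arbitrary):
# fio --name=synctest --filename=/root/fio-test --size=1G --bs=4k --rw=write --ioengine=libaio --direct=1 --fsync=1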
I am mounting several NFS shares. For my LXC and QEMU images I wish to mount the NFS share as synchronous.
QEMU guests are working well on the NFS sync share.
For LXC, however, I noticed my sync writes dropped below 10 MB/s and would hang for several minutes after writing test files.
The...
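For anyone following along, a synchronous NFS setup of this sort would look roughly like the following; the export path, subnet, storage name, and server IP are only placeholders, and the sync in the client-side options line is an assumption about how to force synchronous mounts from PVE:

/etc/exports on the NFS server:
/export/pve 192.168.1.0/24(rw,sync,no_subtree_check)

/etc/pve/storage.cfg on the PVE node:
nfs: nfs-sync
	export /export/pve
	server 192.168.1.50
	path /mnt/pve/nfs-sync
	content images,rootdir
	options vers=3,sync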
Is the IP of your PVE node 192.168.1.18?
I am assuming your last modification restricted the export to your NFS server only (it might as well have been localhost), if it is 192.168.1.142.
Try opening the export to your entire network to troubleshoot:
/volume1/Ordination...
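Something along these lines, with your actual share name and subnet substituted (the path and network below are placeholders), then re-read the exports if the NAS gives you exportfs:
/volume1/yourshare 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
# exportfs -ra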
Well, I was able to get those commands to work by first issuing:
iscsiadm -m discovery -t st -p 10.0.0.100:3260
The other commands then worked. This makes me believe, then, that the ZFS over iSCSI LIO plugin is not using iscsiadm at all.
Which leaves me wondering how it's mounting the storage...
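For illustration, these are the sorts of follow-up queries I mean (the portal IP is the one from my setup):
# iscsiadm -m discovery -t st -p 10.0.0.100:3260
# iscsiadm -m node
# iscsiadm -m session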
I have a mounted ZFS over iSCSI storage device using the LIO plugin.
It is successfully mounted on my nodes.
I took a look into /Storage/ISCSIPlugin.pm to see how the storage is being mounted. It looks like it is using iscsiadm.
But when I try to see the devices:
# /usr/bin/iscsiadm --mode...
Another place you might need to specify an IP is on the NAS, as it may be restricting access to specific clients.
Do you have shell access to the NAS? It looks like Synology runs Linux. If so, show what's in /etc/exports on the NAS.
Run # pvesm nfsscan 192.168.1.142
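If nfs-common is installed on the PVE node, showmount gives a second opinion on what the NAS is exporting and to whom:
# showmount -e 192.168.1.142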