Hi, I had this problem too after upgrading my TrueNAS from 12 to 13.
I noticed that it only affected one share, which I had created back when it was still FreeNAS version 10.
A share I created later (under FreeNAS version 11) worked fine after the update...
Hi, I have noticed a similar issue with a Fedora (GNOME) and a Manjaro (KDE) VM.
The SPICE display becomes slow and sluggish; sometimes it takes 3-4 seconds between a mouse click and the corresponding action.
I know that without a GPU in the VM and with only moderate hardware power you can't expect to watch 4K...
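One thing that can sometimes help with SPICE sluggishness is giving the virtual display adapter more video memory. A minimal sketch, assuming the VM ID 100 and the 64 MiB value are just placeholders to adjust:

```shell
# Switch VM 100 (hypothetical ID) to the qxl display type with more VRAM.
# Larger values can help with higher resolutions or multiple monitors.
qm set 100 --vga qxl,memory=64
```

The VM needs to be fully stopped and started again for the display change to take effect.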
I am seeing the exact same issue with PVE 6.4.8
It always happens after one of the VMs in the job finishes (not always the same one):
INFO: transferred xxx GiB in xxx seconds (xxx MiB/s)
Did anyone find a solution for this issue?
Hi! I am still not sure whether I can upgrade my ZFS pool safely. I think my PVE was initially installed with 6.0, but I'm not 100% sure.
"efibootmgr -v" shows the following:
Timeout: 1 seconds
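Before running the pool upgrade I would first check how the host actually boots, since a legacy-GRUB system that boots directly from the ZFS pool can become unbootable after new pool features are enabled. A minimal check, assuming a PVE 6.4-or-later host (none of these commands change anything):

```shell
# Directory exists only on UEFI-booted systems; absent means legacy BIOS boot
ls -d /sys/firmware/efi

# Show which bootloader the ESPs are set up for (available since PVE 6.4)
proxmox-boot-tool status

# With no arguments this only LISTS pools lacking newer features,
# it does not upgrade anything
zpool upgrade
```

If the host boots via legacy GRUB from the pool itself, I'd hold off on `zpool upgrade` until the boot setup is migrated.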
Hi all!
thanks for your help!
@Rassillon: I am already using the Veeam agent for the Windows VMs. I also used Veeam to migrate the VMs from XenServer to PVE :-) The downside is that I have to manage and monitor the jobs on each VM individually. It's OK as there are only a few, but a complete VM...
I have an issue with backups from my Proxmox PVE system to a file share. The problem is that the file share is on an older-model HPE MicroServer. If I write to it at gigabit speed over a long period, it tends to "take a break" to flush all the data to the disks. When copying from...
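For what it's worth, throttling the backup write speed might give the MicroServer enough headroom to keep flushing. vzdump reads a global bandwidth limit from `/etc/vzdump.conf`; the value below is only an example to tune:

```shell
# /etc/vzdump.conf -- global backup bandwidth limit, in KiB/s
bwlimit: 50000
```

The same limit can also be set per job via the `--bwlimit` option of vzdump instead of globally.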
thanks for your reply!
In my home lab I use ZFS on a small server (ASRock Rack X470D4U) that has only an onboard HBA. It works great; I am a big ZFS fan!
But the HPE server offers the iLO Interface that can monitor the system from outside the OS and trigger email alerts. It...
I am planning to set up a new Proxmox VE on a standalone HPE DL380 Gen9 that has a Smart Array hardware RAID controller for storing the VMs (no SAN or NAS storage).
I know I could set the controller to HBA mode, but I'd like to use the hardware RAID features for easier RAID management and...
Hm. I moved the container's disk to a new storage (CIFS).
After that, the container won't start:
run_buffer: 323 Script exited with status 255
lxc_init: 797 Failed to run lxc.hook.pre-start for container "108"
__lxc_start: 1896 Failed to initialize container "108"
TASK ERROR: startup for...
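When I hit this kind of pre-start failure, running the container in the foreground with debug logging usually reveals the underlying error. A sketch, with the container ID taken from the log above and an arbitrary log path:

```shell
# Start container 108 in the foreground with verbose LXC logging
lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log

# Then look for the actual failure behind the status-255 pre-start hook
grep -i error /tmp/lxc-108.log
```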
OK, so the next thing I will try is to move my container to a storage that does not need to be unlocked after reboot (a network share or a local unencrypted ZFS pool).
Then I will see whether the storage being unavailable during the boot process of the PVE host is the reason...
Just to make sure: do you have the same issue because you are also using an encrypted ZFS dataset as storage for the LXC?
Because I haven't been able to get the container running after a reboot.
My workaround is still to delete and restore the container from backup :-/
I just did an apt-get upgrade and a reboot, but still no luck. The LXC won't start after reboot.
Here is the output of the command:
root@pve01:~# ls -lart "/$(zfs get -H -o name mountpoint datapool-01/vm-crypt/subvol-108-disk-0)"
drwxr-xr-x 3 root root 3 Sep 12 09:57 ..
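The nearly empty directory listing suggests the encrypted dataset simply isn't mounted after the reboot. What I'd try next, as a sketch assuming the key for `datapool-01/vm-crypt` can be supplied interactively (dataset and container names taken from the commands above):

```shell
# Load the encryption key for the parent dataset and all children
zfs load-key -r datapool-01/vm-crypt

# Mount every dataset whose key is now available
zfs mount -a

# Verify the subvolume is actually mounted, then start the container
zfs get -H -o value mounted datapool-01/vm-crypt/subvol-108-disk-0
pct start 108
```

If that works, the root cause would indeed be that the container autostarts before the encrypted storage is unlocked.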