Retested now after removing the lxc.mount.auto: sys:rw option, and the drive still gets recognized and is still working correctly.
I've also filed a bug report, though I wouldn't deem it vital for operations considering I've gone away from your guidelines (it has to be running on the host)...
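In case anyone wants to re-check the same way: after restarting the container I simply listed the configured drives again. A rough sketch (going from memory, proxmox-tape drive list reads the entries from tape.cfg):

# inside the PBS container: list the drives configured in tape.cfg
proxmox-tape drive list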
Hi,
I managed to get around it by adding this to the container config. For some reason it wouldn't detect with sys:ro, or sys:mixed for that matter. I will try and remove it, as the tape.cfg file has now been created.
lxc.mount.auto: sys:rw
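In case it helps: besides that mount option, the tape device nodes themselves have to be visible inside the container. A sketch of what that part of the config could look like (the device nodes /dev/st0, /dev/nst0, /dev/sg0 and the major numbers 9 and 21 assume a standard SCSI tape drive - adjust to your hardware):

# allow char devices: SCSI tape (major 9) and generic SCSI (major 21)
lxc.cgroup2.devices.allow: c 9:* rwm
lxc.cgroup2.devices.allow: c 21:* rwm
# bind the host device nodes into the container
lxc.mount.entry: /dev/st0 dev/st0 none bind,optional,create=file
lxc.mount.entry: /dev/nst0 dev/nst0 none bind,optional,create=file
lxc.mount.entry: /dev/sg0 dev/sg0 none bind,optional,create=file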
Hi,
For specific reasons I've separated out PBS into a separate LXC container - this is mainly due to the host being used for virtualization, among other reasons.
I've been trying to get proxmox-tape to recognize my tape drive to no avail. Please note that the cephfs bind mount is only being...
Sure, you can check out an example here. This is from one of the NVMe drives which kept failing until I changed the BlueStore allocator function (thanks a lot for explaining that part).
root@sh-prox04:~# ceph crash info 2021-07-14T23:35:41.251654Z_7f7bd234-3dbe-4b33-a769-49a8d0c1928d
{...
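For anyone wanting to do the same, the allocator change itself is just a config switch plus a restart of the affected OSDs, something along these lines (a sketch; bitmap replaces the hybrid default that Octopus ships with, and osd.2 is only an example id):

# check what the OSDs currently use (hybrid is the Octopus default)
ceph config get osd bluestore_allocator
# switch to the bitmap allocator, then restart the affected OSD
ceph config set osd bluestore_allocator bitmap
systemctl restart ceph-osd@2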
In my specific case it didn't seem to matter if the OSD was previously created on Nautilus or Octopus.
In my specific case it was only my pure NVMe (Intel P3500 U.2/PCIe) drives which failed. These were created while on 15.2.4. I could try to delete and recreate it. Considering it seems to be...
For what it's worth, I had a similar issue which presented itself with:
Jul 15 00:28:53 sh-prox02 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 6.
Jul 15 00:28:53 sh-prox02 systemd[1]: Stopped Ceph object storage daemon osd.2.
Jul 15 00:28:53 sh-prox02 systemd[1]...
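If anyone hits the same restart loop, a sketch of how to inspect and retry (osd.2 is from the logs above; plain systemd tooling, nothing Ceph-specific - the restart counter in the log suggests systemd has hit its start limit, so it has to be cleared before a manual start):

# full journal around the failed starts
journalctl -u ceph-osd@2 --since "2021-07-15"
# clear systemd's restart counter, then try a manual start
systemctl reset-failed ceph-osd@2
systemctl start ceph-osd@2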