Search results

  1. PVE OIDC callback URL doesn't seem to work for me.

    Hi. I'm currently trying to set up SSO with my cosmos-cloud OpenID. It seems that the login event on Cosmos Cloud works: 2025/12/06 20:24:39 [REQ] GET https://cosmos.domain.tld/cosmos/api/users HTTP/2.0 from 192.168.88.106:39178 - 200 317B in...
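
    For reference (not part of the post), a minimal sketch of registering an OpenID Connect realm in PVE from the CLI; the realm name, issuer URL, and client credentials below are illustrative assumptions that must match the provider's configuration:

      # Hypothetical values throughout. PVE derives the callback URL from
      # the GUI origin, so the provider must allow https://<pve-host>:8006
      # as a redirect target.
      pveum realm add cosmos-oidc --type openid \
          --issuer-url https://cosmos.domain.tld/oidc \
          --client-id pve --client-key 'SECRET' \
          --username-claim username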
  2. Missing ~20 GB of the 32 GB USB stick

    I installed PVE on a 32 GB stick, but pve-root is barely 10 GB:

    Filesystem            Size  Used  Avail Use%  Mounted on
    udev                   63G     0    63G   0%  /dev
    tmpfs                  13G  3.0M    13G   1%  /run
    /dev/mapper/pve-root   13G  9.6G   2.7G  79%  /
    tmpfs...
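
    As an aside (a sketch, not from the post): on a default LVM install the "missing" space usually sits in the local-lvm thin pool (pve/data). If that storage is not needed, it can be folded into root roughly like this, assuming the stock volume names:

      # DANGER: destroys the local-lvm thin pool and any guest disks on it.
      # Assumes the default volume group "pve" with LVs "root" and "data".
      lvs                                   # inspect the current layout first
      lvremove /dev/pve/data                # drop the thin pool
      lvextend -l +100%FREE /dev/pve/root   # grow root into the freed space
      resize2fs /dev/mapper/pve-root        # grow the ext4 filesystem

    The matching local-lvm entry would then also have to be removed under Datacenter > Storage.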
  3. Reverse proxy stopped working

    Hi. I have cosmos-server in an LXC for reverse proxying stuff. I added a wildcard cert to it, then pointed pve.domain.tld -> https://192.168.88.3:8006/ and set it to accept invalid certs (PVE has a self-signed one). For a few hours I could access pve.domain.tld, and now, without any change to my...
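
    As a quick first check (a sketch, not from the post), one can verify from inside the proxy LXC that the PVE backend still answers despite its self-signed certificate:

      # -k skips certificate verification, -v shows the TLS handshake;
      # a response here points the problem at the proxy config,
      # a timeout points it at networking or pveproxy itself.
      curl -vk https://192.168.88.3:8006/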
  4. Mass storage for LXC

    Thank you, this helped a lot. It made it much easier to share the folders between different privileged CTs.
  5. Mass storage for LXC

    Hi. I have a zpool with 3 mirrors, giving me about 50 TB. On this one I want to put, among other things, my media library, and have selected LXCs share access to the same folder(s) (Arr, Jellyfin, NFS). The data zpool is set for disk images and containers (I can't select other content types here). How/where...
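
    A minimal sketch of the usual approach (bind-mounting a host dataset into each container); the dataset path and container IDs below are assumptions:

      # Assumes a dataset mounted at /tank/media on the host and
      # privileged containers 101 and 102.
      zfs create tank/media
      pct set 101 -mp0 /tank/media,mp=/mnt/media
      pct set 102 -mp0 /tank/media,mp=/mnt/media

    For unprivileged containers the host UIDs/GIDs would additionally need to be mapped so both sides agree on file ownership.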
  6. Disk layout for mostly LXC?

    Hi. I have a server I'm about to back up and reinstall, and I'm thinking Proxmox. Proxmox would go on either the 2 TB SSD or something with the two 500 GB NVMes. Then I have 6 mechanical disks (4 x 14.6 TB, 2 x 16.4 TB); on these I would put LXC stuff with a NAS (Samba or NFS, or both, serving the network). I was...
  7. Replaced failed NVMe boot RAID; one (old) is by-id, new is /dev/nvme1n1

    Yes, but I have already attached the new NVMe as /dev/nvme1n1. I guess I have to remove it from the RAID and reattach it by-id? Or does it not matter now (I'm not going to add more drives to this RAID, only replace one when/if it fails)?
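
    One way to do the swap (a sketch, assuming a pool named rpool, a finished resilver, and an illustrative by-id name):

      # Find the stable by-id alias of the new disk.
      ls -l /dev/disk/by-id/ | grep nvme1n1

      # Detach the device-name entry, then re-attach it by-id against the
      # surviving mirror member; ZFS resilvers once more.
      zpool detach rpool nvme1n1
      zpool attach rpool <surviving-mirror-member> /dev/disk/by-id/nvme-EXAMPLE_SERIAL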
  8. Replaced failed NVMe boot RAID; one (old) is by-id, new is /dev/nvme1n1

    Hi, I just replaced a DOA-ish NVMe. The old drives were ZFS-mirrored by-id; the new drive shows up as /dev/nvme1n1. Is there any way to turn this nvme1n1 into by-id now that it's resilvered back into the ZFS pool?
  9. [SOLVED] pve node summary never loads / forever spinner

    Ah yeah. I forgot I had tried to show the host temperature on the PVE page (using this): https://new.reddit.com/r/homelab/comments/rhq56e/displaying_cpu_temperature_in_proxmox_summery_in/ Removed the stuff and PVE loads as expected.
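
    A related sketch (not from the thread): since that hack edits files shipped by pve-manager, reinstalling the package is one way to restore the stock web UI files without touching the configuration under /etc/pve:

      # Restores the packaged UI files; any hand edits are overwritten.
      apt-get install --reinstall pve-manager
      systemctl restart pveproxy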
  10. [SOLVED] pve node summary never loads / forever spinner

    Looks normal in both places. Uptime is about 35 minutes because of the reboot test.
  11. [SOLVED] pve node summary never loads / forever spinner

    root@pve:~# date
    Sun Jul 21 02:30:24 PM CEST 2024

    That seems correct to me. There is nothing in the PVE updates; everything seems up to date and fine.
  12. [SOLVED] pve node summary never loads / forever spinner

    Also, is there a way to perhaps reacquire the summary page files? Perhaps mine have gone bonkers (without losing other config files and such).
  13. [SOLVED] pve node summary never loads / forever spinner

    Still spins:

    root@pve:~# systemctl status pvestatd
    ● pvestatd.service - PVE Status Daemon
         Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; preset: enabled)
         Active: active (running) since Sat 2024-07-20 23:39:49 CEST; 12h ago
        Process: 1981 ExecStart=/usr/bin/pvestatd...
  14. [SOLVED] pve node summary never loads / forever spinner

    I just tried Firefox, which had never visited the PVE page before; it still spins there. I'm currently using Vivaldi, and the spinner thing worked earlier yesterday.
  15. [SOLVED] pve node summary never loads / forever spinner

    I'm not sure what happened, but this part of the summary has stopped loading; the spinner spins forever.
  16. 10-11% I/O delay

    After hours and hours of googling and testing, what I'm 98% sure fixed it was changing the hardware to VirtIO and the disks to "Default (no cache)"; now it's speedy speedy.
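
    For reference (a sketch, not from the post), the equivalent CLI change, assuming VM ID 100 and a disk named vm-100-disk-0 on local-lvm:

      # Switch the controller to the VirtIO SCSI driver and set the disk's
      # cache mode to none ("Default (no cache)" in the GUI). Values are
      # illustrative; the VM must be restarted for them to take effect.
      qm set 100 --scsihw virtio-scsi-pci
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none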
  17. 10-11% I/O delay

    Hi. The Debian guest with Nextcloud: when I start to sync (I have like 8 TB to sync), that guest becomes quite unresponsive, with a load average around 8.x. The guest has 12 GB RAM and 4x4 cores/threads, and resides on the mechanical RAID10 (4 drives). My previous server, rented at Hetzner, had 2...
  18. 10-11% I/O delay

    I see that my node has an I/O delay of around 10%; is this normal? Root is mirrored NVMe; all the VMs are on RAID10 7200 rpm mechanical drives. -- Also, the "status" part (top) keeps showing the "loading" spinner,
  19. 4 x 16 TB - Better setup?

    Where did flash come from? Or do you mean the M.2 NVMe as flash?