Search results

  1. Proxmox 8 to 9 - NFS on ZFS not working anymore

    I have now tested some things. 1) all my nodes have been restarted 3) it mounts, but it shows all contents with ???: root@blake:/mnt/pve/BackupData35# ls -lah ls: cannot access 'template': Stale file handle ls: cannot access 'longtime': Stale file handle ls: cannot access 'backup': Stale file...
  2. Proxmox 8 to 9 - NFS on ZFS not working anymore

    I have a 4-node setup (one node is just a mini PC for management). One node has a ZFS volume with a folder shared over NFS that the other nodes use. After the upgrade the other nodes cannot access the NFS share anymore. They sort of mount it and show files with "?" at the beginning, but I can't read or write on the volume. GUI...
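The stale-handle symptom above can often be cleared by force-unmounting the share on each client node and letting Proxmox remount it. A rough sketch, assuming the storage ID and mount path from the post (adjust to your setup; `pvesm set --disable` toggling is one way to trigger a clean remount):

```shell
# On an affected client node: force/lazy unmount clears the stale handles.
umount -f -l /mnt/pve/BackupData35

# Toggle the storage off and on so pvestatd remounts it cleanly.
pvesm set BackupData35 --disable 1
pvesm set BackupData35 --disable 0

# Verify the contents are readable again.
ls -lah /mnt/pve/BackupData35
```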
  3. Recommended amount of swap?

    For the last 15 years I have been running all servers swapless. There is no point delaying the inevitable: if you are out of RAM, you are out of RAM. Swapping makes everything slow and thrashes the drives (and today's SSD/NVMe drives actually wear out significantly). Better a horrible end than endless...
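For anyone who wants to follow this approach, a minimal sketch of running a Debian-based PVE host swapless (the sysctl file name is arbitrary; this is an assumption about your setup, not something from the post):

```shell
# Turn off all active swap immediately.
swapoff -a

# Comment out swap entries in fstab so it stays off across reboots.
sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Alternatively, keep swap configured but make the kernel avoid it:
echo 'vm.swappiness = 0' > /etc/sysctl.d/99-noswap.conf
sysctl --system   # reload sysctl settings from all config files
```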
  4. Windows 10 VM: Stop Code Hypervisor Error

    If you run nested virtualization and GPU passthrough you are supposed to set cpu = host, so that the guest sees all the needed CPU features.
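As a concrete example, the CPU type can be set from the Proxmox CLI; the VMID 100 is a placeholder:

```shell
# Set the CPU type to "host" so the guest sees all host CPU features
# (needed for nested virtualization).
qm set 100 --cpu host

# For nested virt on Intel, the kvm_intel module must also allow it;
# this should print Y (or 1):
cat /sys/module/kvm_intel/parameters/nested
```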
  5. [SOLVED] Proxmox stuck after pve kernel reboot

    Thanks, this was it: "indeed stuck" made me rethink everything. I switched to an AMD 6600 XT as a quick way to find out, and it worked straight away. There was an additional problem going on: after installing the pve kernel my network adapter changed from ens7s0 to ens6s0. This was the reason for no LAN...
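If the same rename bites you, updating the NIC name in /etc/network/interfaces and reloading is usually enough. A sketch using the names from the post (the blanket sed assumes the old name appears nowhere else in the file):

```shell
# Point the bridge at the renamed NIC (ens7s0 -> ens6s0, from the post).
sed -i 's/ens7s0/ens6s0/g' /etc/network/interfaces

# Apply without rebooting (ifupdown2, the PVE default).
ifreload -a
```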
  6. [SOLVED] Proxmox stuck after pve kernel reboot

    X470 Prime Pro, AMD Ryzen 3900, 4x32 GB DDR4 3600 MHz, RTX 4060/GT 730/RTX 4070 (tried multiple). It gets stuck after installing the pve kernel: list of kernels. If I boot any other kernel it works. This is a Debian 12 install, fully updated, following the wiki.
  7. Node with question mark

    It means something intermittently takes too long for the pvestatd daemon to query (a mountpoint, or some other info shown in the GUI). Usually slow disks or an unreliable NFS mount.
  8. "No IOMMU detected, please activate it." I did AFAIK.

    So I tested all the BIOSes up to 3.1 and still no go. I would say something was broken in the kernel rather than motherboard support. Is there anything out of the ordinary left to try? EDIT: Got it working with the 3.4 BIOS. Under PCI/PCIe settings there is SR-IOV. This needs to be...
  9. "No IOMMU detected, please activate it." I did AFAIK.

    Same problem. I know it worked before (and it was the 3.1 BIOS indeed). The machines are now in production and did not need IOMMU until now. The BIOS has been updated to 3.4 in the meantime. X10DRi-T4+ motherboard, which is basically the same BIOS. Did all the same things as the OP. root@blake:~# dmesg | grep -e DMAR -e...
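For reference, a typical way to enable and verify IOMMU on an Intel board like this (the DMAR grep in the post suggests Intel). This assumes GRUB with the stock "quiet" cmdline; the SR-IOV BIOS toggle mentioned earlier still has to be enabled:

```shell
# Add IOMMU options to the kernel cmdline (GRUB; Intel shown).
sed -i 's/quiet/quiet intel_iommu=on iommu=pt/' /etc/default/grub
update-grub
reboot

# After reboot, verify; look for "DMAR: IOMMU enabled":
dmesg | grep -e DMAR -e IOMMU
```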
  10. Proxmox VE 8.1 released!

    If you follow the Ceph upgrade guide exactly, it is all good. I made the upgrade. Yes, it's OK to have some nodes running the older version; the Ceph status page will report it, but it works. Do the upgrade as the guide says and restart the daemons.
  11. Proxmox VE 8.1 released!

    Same here on my node, a little 1L server running NVMe + SSD 1 TB ZFS. The machine is just idle, no VMs running. Something seems to add +1 to the load average on top of what already happens. CPU usage is nonexistent. On my other nodes (44-core, 88-thread CPUs) I don't see any difference.
  12. Proxmox VE 8.1 released!

    Check if apt update and apt dist-upgrade will give you these 3 packages. It seems they were added later on.
  13. Proxmox VE 8.1 released!

    Well, initially the fix to get the managers running seemed to be: mkdir /usr/lib/ceph/mgr. But now it complains that modules are not available: HEALTH_ERR: 10 mgr modules have failed; Module 'balancer' has failed: Not found or unloadable; Module 'crash' has failed: Not found or unloadable; Module...
  14. Proxmox VE 8.1 released!

    After the 8.0.4 to 8.1 upgrade my Ceph managers won't start anymore: Nov 25 05:35:02 quake systemd[1]: Started ceph-mgr@quake.service - Ceph cluster manager daemon. Nov 25 05:35:02 quake ceph-mgr[166427]: terminate called after throwing an instance of 'std::filesystem::__cxx11::filesystem_error' Nov...
  15. Network drops on new VMs, not old

    For me it was an RTL8111/8168/8411 driver issue. I saw quite a few people with the same problem, and it seems to boil down to power management of the network card. If this was turned off, it would keep working normally (at the expense of slightly higher power consumption at idle). I moved on from this...
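A sketch of turning off that power management without replacing the card. The PCI address and interface name are placeholders (find yours with `lspci | grep -i ethernet` and `ip link`):

```shell
# "on" here means the device is kept always on, i.e. runtime power
# management is disabled for the Realtek NIC (r8169 driver).
echo on > /sys/bus/pci/devices/0000:03:00.0/power/control

# Also disable Energy-Efficient Ethernet at the link level:
ethtool --set-eee enp3s0 eee off
```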
  16. storage is not online (cifs)

    I have the same problem, except I'm trying to add storage and get the access-denied error. I can connect from the command line with smbclient just fine. Running Proxmox 8.0.4.
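One way to narrow this down: verify credentials with smbclient first (which works per the post), then add the storage with an explicit SMB protocol version, since a version mismatch between client and server is a common cause of access denied here. Server, share, storage ID, and credentials are all placeholders:

```shell
# Confirm the share is reachable with these credentials.
smbclient //nas/share -U username -c 'ls'

# Add the CIFS storage with an explicit protocol version.
pvesm add cifs mycifs --server nas --share share \
    --username username --password 'secret' --smbversion 3.0
```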
  17. [SOLVED] Remove node from cluster

    I believe the permission-denied error came from you not having quorum. You should have at least 3 nodes so that quorum works. If you have two, then as soon as one breaks the whole cluster goes read-only, because the remaining node has no confirmation whether it is "in" or "out". A third node can...
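To check quorum, and in an emergency override it on the surviving node (the override is a temporary recovery measure, not a permanent setup):

```shell
# Shows "Quorate: No" when quorum is lost.
pvecm status

# Emergency override for a 2-node cluster that lost a member:
# tell corosync one vote is enough, making the node writable again.
pvecm expected 1
```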
  18. ZFS on Proxmox and VM

    Putting this down as experience: I saw around a 2x worse compression ratio when the guest system used XFS on top of ZFS storage in Proxmox. Simply making an LVM storage on Proxmox and using ZFS in the guest yielded around 2x better compression. This difference probably comes from...
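To reproduce this comparison, the compression ratio can be read on both sides; the dataset names below are placeholders:

```shell
# On the Proxmox host: ratio of the dataset backing the guest disk.
zfs get compression,compressratio rpool/data/vm-100-disk-0

# Inside the guest (when the guest runs its own ZFS on an LVM-backed disk):
zfs get compression,compressratio tank
```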
  19. Running CEPH? on cheap NVME

    I'm running 10 GbE Ethernet. All NVMe drives are in dual or quad carriers that go into PCIe x8 or x16 slots (using PCIe bifurcation). They either have separate forced cooling or a full double-sided aluminium block heatsink on the whole assembly. Temps are also monitored to ensure that there is no...
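For the temperature monitoring mentioned, a quick per-drive check (requires the nvme-cli package; the device paths are placeholders):

```shell
# Print the controller temperature reported by each NVMe drive.
for dev in /dev/nvme0 /dev/nvme1; do
    echo "== $dev =="
    nvme smart-log "$dev" | grep -i '^temperature'
done
```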