Search results

  1. jsterr

    Proxmox shuts down by itself

    Hello, this is an English-speaking forum, please post in English. There are multiple reasons why a PVE system could shut down or reboot. Is this a cluster or a single-node system? Have you checked your logs (journalctl)? Is the network stable? Any errors on the server/IPMI/BMC?
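    For the log check mentioned above, a minimal sketch of where to start (assuming a systemd-based PVE host; the time window is a placeholder):

    ```shell
    # Errors from the previous boot, i.e. the one that ended in the shutdown
    journalctl -b -1 -p err

    # Full log around the (hypothetical) time of the incident
    journalctl --since "2025-01-01 12:00" --until "2025-01-01 13:00"
    ```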
  2. jsterr

    Open VSwitch Broadcast bond mode?

    Why would you prefer broadcast mode over an FRR or routed setup? Why do you need Open vSwitch for your Ceph meshed network?
  3. jsterr

    Firewall Not working

    Please post the firewall rules in your security group so people can check them in more detail. Did you apply the security group at a specific level (datacenter / host / VM)?
  4. jsterr

    [SOLVED] Proxmox VE 9.0 BETA iSCSI Thick-LVM Shared-Storage VM Snapshot not working

    Your VM-100-Disk-2 is raw, but only qcow2 is allowed. It seems to be a vTPM disk. I personally also did not set "Use LUNs directly"; not sure if it's a requirement, though.
  5. jsterr

    Proxmox VE 9.0 BETA released!

    I'll add my forum post to this list: https://forum.proxmox.com/threads/pve-9-beta-pve-nested-upgrade-from-8-to-9-breaks-boot.168662
  6. jsterr

    [PVE-9 BETA] PVE Nested Upgrade from 8 to 9 breaks boot

    I just tested the PVE-8 to PVE-9 upgrade on a freshly installed PVE-8 installation (nested, VM), so not on physical hardware. After upgrading and rebooting, the VM does not boot anymore and drops into the VM BIOS. That's the proxmox-boot-tool status output before rebooting: root@pve-2:~#...
  7. jsterr

    Proxmox VE 9.0 BETA released!

    After upgrading to the beta, I get an error on all my VMs regarding cloud-init, and also for my CTs that are on an lvm-thin pool. All VMs and CTs can't start: TASK ERROR: activating LV 'pve/vm-103-cloudinit' failed: Check of pool pve/data failed (status:64). Manual repair required! TASK ERROR...
  8. jsterr

    Proxmox VE 9.0 BETA released!

    The wiki post says: How to do that? Running apt policy seems not to be the full command? It might be useful to add an example of how to check. Also, apt update brings up the following, which might be related to the check for whether the new repo format is used or not. My sources.list still contains pvetest...
  9. jsterr

    Ceph does not recover on second node failure after 10 minutes

    Yes, that's it! Thanks, it was not much of a problem; I marked them out manually. At least we now know why that happened, thanks!
  10. jsterr

    Ceph does not recover on second node failure after 10 minutes

    Hi, I tested a scenario with 5 pveceph nodes: * 5 PVE Ceph nodes * 4 OSDs per node * 5 Ceph MONs * size 3 / min_size 2. If I shut off one of the 5 pveceph nodes, Ceph will automatically recover after 10 minutes and set the OSDs down & out; everything is green again. After shutting off another one, Ceph...
  11. jsterr

    App used for photo viewing

    Yes, Immich works fine; I'm running it on a home server.
  12. jsterr

    pve-firewall no logs even on level debug

    Is this still valid in 2025? Can we somehow show accepts as well?
  13. jsterr

    Ceph Public Network without PVE Management IP

    You can define which network(s) should be used for the Ceph public and Ceph cluster networks. This has nothing to do with where your Proxmox VE management IP is configured. If you want to limit web UI access, check: https://pve.proxmox.com/pve-docs/pveproxy.8.html#pveproxy_host_acls
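    The split described above comes down to the `public_network` and `cluster_network` options in `ceph.conf`; a minimal sketch, with placeholder subnets that need not include the PVE management subnet:

    ```ini
    [global]
        # Client and MON traffic
        public_network = 10.10.10.0/24
        # OSD replication and heartbeat traffic
        cluster_network = 10.10.20.0/24
    ```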
  14. jsterr

    How To Passthrough All VLAN To Guest Nic

    Have you tried not tagging the virtual adapter? AFAIK, if you do so, you can (and must) tag inside the VM (OS) to actually get VLANs working.
  15. jsterr

    Hardware [HPE DL 380Gen 12 with Dual CPU, controller] compatibility with Proxmox

    Just know that if you want to use FC storage with Proxmox VE, you can't snapshot VMs and CTs that are placed on FC-based storage LUNs. https://pve.proxmox.com/wiki/Storage
  16. jsterr

    how to achieve a ceph pool allocating osds to a specific ceph pool

    Hello, I would recommend testing this in a virtual PVE environment first! First, you need to assign a different device class to the 5 different OSDs, like hdd-2 for example. Then create two custom CRUSH rules for the setup, as the default CRUSH rule takes all hosts/OSDs into consideration. After...
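    The steps above can be sketched with the stock Ceph CLI; the OSD ids, rule names, and pool names here are placeholders for illustration:

    ```shell
    # Reassign a custom device class to the chosen OSDs (ids assumed)
    for id in 5 6 7 8 9; do
        ceph osd crush rm-device-class osd.$id
        ceph osd crush set-device-class hdd-2 osd.$id
    done

    # One replicated rule per device class, failure domain = host
    ceph osd crush rule create-replicated rule-hdd   default host hdd
    ceph osd crush rule create-replicated rule-hdd-2 default host hdd-2

    # Point each pool at its rule
    ceph osd pool set pool-a crush_rule rule-hdd
    ceph osd pool set pool-b crush_rule rule-hdd-2
    ```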
  17. jsterr

    removed node still shows on Datacenter Server View

    https://www.thomas-krenn.com/de/wiki/Proxmox_Node_aus_Cluster_entfernen_und_erneut_hinzuf%C3%BCgen (you might use your browser's translation); this is a step-by-step tutorial.
  18. jsterr

    [SOLVED] Issue with Ceph 5/3 configuration and corruption

    I can't see any problem; you should be able to lose 2 nodes without interrupting VMs that are NOT running on those 2 now-offline nodes. 8.1d1 1204 0 0 0 302431820 0 0 2629 3000 active+clean 11m 34115'48729 34298:60681...
  19. jsterr

    [SOLVED] Issue with Ceph 5/3 configuration and corruption

    Thanks and also: ceph osd pool ls detail