Search results

  1. After installation with Ventoy rdinit=/vtoy/vtoy

    Thanks, that helped me a lot. I changed /etc/kernel/cmdline and ran proxmox-boot-tool refresh.
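A minimal sketch of that fix (assuming a systemd-boot install managed by proxmox-boot-tool; the sample root= arguments below are placeholders, not taken from this thread):

```shell
# Strip the Ventoy argument from a kernel command line string.
# On the real system the single line lives in /etc/kernel/cmdline.
cmdline='root=ZFS=rpool/ROOT/pve-1 boot=zfs rdinit=/vtoy/vtoy'
cleaned=$(printf '%s\n' "$cmdline" | sed 's| rdinit=/vtoy/vtoy||')
echo "$cleaned"

# On the Proxmox host itself (as root), the equivalent would be:
#   sed -i 's| rdinit=/vtoy/vtoy||' /etc/kernel/cmdline
#   proxmox-boot-tool refresh
```

proxmox-boot-tool refresh rewrites the boot entries so the cleaned command line takes effect on the next boot.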
  2. After installation with Ventoy rdinit=/vtoy/vtoy

    I got stuck removing "rdinit=/vtoy/vtoy". I am able to start the system by removing "rdinit=/vtoy/vtoy" from the boot menu. I followed this post: https://forum.proxmox.com/threads/ventoy-install-of-proxmox-8-1-halts-at-loading-initial-ramdisk.143196/#post-686898 Making the settings permanent...
  3. Had to remove 1 Node from Proxmox/Ceph

    Thanks for your help, I figured out there is redundancy exactly as you described. But what is this command used for: ceph osd ok-to-stop 2? I thought it is useful to check whether the redundancy is actually working; what is it really used for?
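For context, ok-to-stop is a dry-run safety check rather than a redundancy test. A sketch of how these cluster commands are typically used (run on a monitor node; osd id 2 comes from the snippet above):

```
# Ask the monitors whether osd.2 could be stopped right now without any
# placement group losing availability (i.e. dropping below min_size).
ceph osd ok-to-stop 2

# The stricter companion check: can osd.2 be removed permanently,
# i.e. are all of its PGs fully replicated elsewhere?
ceph osd safe-to-destroy 2
```

So it answers "is it safe to take this OSD down at this moment", which is why it is run before maintenance or node removal.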
  4. Had to remove 1 Node from Proxmox/Ceph

    Once I had 4 nodes running with Ceph and one could fail. Now that one has failed and is removed. Is it possible to establish redundancy with the remaining 3 nodes? From ceph -s data: volumes: 2/2 healthy pools: 7 pools, 193 pgs objects: 139.48k objects, 538 GiB usage: 1.6...
  5. Adding cluster_network to an existing ALL public_network configuration

    I planned for 2.5Gb, but 3 of the 5 USB 2.5Gb adapters were not able to operate correctly. After setting it up again with the faulty USB Ethernet adapters removed it is running, but on 1Gb Ethernet. Recovery doesn't seem to use even the full 1Gb bandwidth; I am curious if the remaining USB Ethernet is...
  6. Adding cluster_network to an existing ALL public_network configuration

    Since new hardware has arrived I wanted to configure a separate network for the OSDs. It's 4 hosts, each with one OSD. # ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 3.72595 root default -3 0.93149 host aegaeon...
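For reference, the split the thread title describes usually ends up as two subnets in /etc/pve/ceph.conf; a sketch with placeholder subnets (not from this thread):

```
[global]
    public_network  = 192.168.1.0/24   # monitor and client traffic
    cluster_network = 10.10.10.0/24    # OSD replication and recovery traffic
```

After the change, the OSDs have to be restarted so they bind to the new cluster network.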
  7. [SOLVED] Proxmox deleted unreferenced disks

    Could some text be added where "Referenced disks will always be destroyed" is shown when the host is deleted? In a second line, something like "Unused disks/mounts count as referenced"? I know it's my fault, but could you add something, or a question mark to be clicked on, describing the...
  8. [SOLVED] Proxmox deleted unreferenced disks

    Sounds logical to me. The solution was so easy: just give the disks to any arbitrary container... but I didn't.
  9. [SOLVED] Proxmox deleted unreferenced disks

    The host had several mount points; I added another one to copy over new data before getting my backup. Then I deleted the host. The sad thing is that the disk with my new data got deleted too, even though it was detached. THAT'S a BUG! Here is how it looked before deletion: ZFS gives me the confidence...
  10. 4-Node-Ceph-Cluster went down

    That's only true to the point that it doesn't meet the requirement of surviving 2 hosts failing. If only one fails, all is good.
  11. 4-Node-Ceph-Cluster went down

    Yes, I put most of my hosts in HA mode, but what do you mean by "ignore" specifically?
  12. 4-Node-Ceph-Cluster went down

    So the conclusion for me is that I enjoy Ceph as storage more than BTRFS or ZFS. Still waiting for the 2.5G Ethernet switch in my lab to arrive.
  13. 4-Node-Ceph-Cluster went down

    A look into the Proxmox web interface showed it was not on auto. And of course, in the mentioned file an "auto vmbr" line was missing :rolleyes:
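For reference, a bridge only comes up at boot when it has an auto stanza in /etc/network/interfaces; without it the bridge stays down until started manually. A placeholder sketch (bridge name, addresses and port are assumptions, not from this thread):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```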
  14. 4-Node-Ceph-Cluster went down

    I think I figured out what caused the unexpected shutdown, but some guessing is included. First: it all happened because of a broken network cable; its clip was missing. But here comes the guessing, as I didn't find evidence: because the cable wasn't seated properly, the connection speed might have...
  15. Limit Ceph Network speed

    Yeah, I am waiting for a new switch, so things will improve then; I just hoped for a solution till then.
  16. Limit Ceph Network speed

    Is it possible to artificially limit throughput for each Ceph network (public and cluster) independently? My problem is that I have only one connection of 1Gb. To take pressure off the other networks I want to limit Ceph's public and/or cluster network, so that corosync has no problems.
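One common approach (not Proxmox- or Ceph-specific) is egress shaping with tc and an HTB qdisc. A sketch with placeholder interface, subnets and rates, assuming the Ceph public and cluster networks sit on distinct subnets:

```
# Run as root; tc shapes outgoing traffic only.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1:  classid 1:1  htb rate 1gbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 300mbit ceil 600mbit   # Ceph public
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300mbit ceil 600mbit   # Ceph cluster
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 100mbit ceil 1gbit     # rest, incl. corosync
# Classify by destination subnet (each Ceph network = one subnet)
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 192.168.1.0/24 flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.10.10.0/24  flowid 1:20
```

Note that this only caps traffic leaving each node; corosync is usually better protected by a dedicated link, since shaping cannot remove latency caused by incoming bursts.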
  17. 4-Node-Ceph-Cluster went down

    My plan is to add one additional 2.5Gb Ethernet connection. But is a connection just for corosync kind of mandatory? That would mean I need two additional connections: one for data transfer, the other for corosync?
  18. 4-Node-Ceph-Cluster went down

    From what you write I have the impression that one network for everything is not sufficient? # pvecm status Cluster information ------------------- Name: saturn Config Version: 4 Transport: knet Secure auth: on Quorum information ------------------ Date: Wed...
  19. 4-Node-Ceph-Cluster went down

    Okay, it happened again. Here is what I did: 1. Shut down one node: atlas (yesterday) - all worked as expected, HA did its job. 2. Starting node: atlas - working. 3. Nodes stopped working; even after restarting it took like 30 minutes until all nodes and VMs came back. Between steps 2 and 3: pvecm status showed...
  20. 4-Node-Ceph-Cluster went down

    How do I check if it was done correctly? ceph status gives me the following: services: mon: 4 daemons, quorum aegaeon,anthe,atlas,calypso (age 33h) mgr: anthe(active, since 33h), standbys: atlas, aegaeon, calypso mds: 2/2 daemons up, 2 standby osd: 4 osds: 4 up (since 33h), 4 in...