Search results

  1. 4-Node-Ceph-Cluster went down

    Is it possible to give each of my nodes a QDevice? I want to avoid having another machine just for the QDevice. (See the QDevice setup sketch after these results.)
  2. 4-Node-Ceph-Cluster went down

    It took me a while to figure out what went wrong. At first it was me who did things wrong. But let me explain: after the initial setup of my cluster consisting of 3 nodes I made a test run and found it had no problem stopping one node. Then for some reason I thought with the fourth node I...
  3. 4-Node-Ceph-Cluster went down

    I haven't yet figured votes out completely. Here are my thoughts from a storage perspective: systems with a RAID 1 equivalent get 4 votes, systems with single disks get 2 votes, systems with a RAID 0 equivalent get 1 vote. Is this approach feasible? (See the quorum_votes sketch after these results.)
  4. 4-Node-Ceph-Cluster went down

    All nodes stopped their VMs and LXCs. Some nodes were even unreachable.
  5. 4-Node-Ceph-Cluster went down

    # pvecm status
    Cluster information
    -------------------
    Name: saturn
    Config Version: 4
    Transport: knet
    Secure auth: on
    Quorum information
    ------------------
    Date: Mon Nov 6 23:26:38 2023
    Quorum provider: corosync_votequorum
    Nodes: 4...
  6. 4-Node-Ceph-Cluster went down

    # ceph status
      cluster:
        id: ddfe12d5-782f-4028-b499-71f3e6763d8a
        health: HEALTH_OK
      services:
        mon: 4 daemons, quorum aegaeon,anthe,atlas,calypso (age 12h)
        mgr: anthe(active, since 12h), standbys: atlas, calypso, aegaeon
        mds: 2/2 daemons up, 2 standby
        osd: 4 osds...
  7. 4-Node-Ceph-Cluster went down

    On my 4-node cluster with Ceph I shut down one system to make some BIOS changes. The issue is the cluster came to a complete stop while doing this. What I checked beforehand on the node being shut down: no HA rules are applied to any of the VMs or LXCs, all are on Ceph storage, no backup is running on...
  8. [TUTORIAL] MTU - Jumbo Frames - Bridge Interface - EXAMPLE Post #3

    I want to use jumbo frames. From my understanding I have to set the MTU to a value supported by the hardware. But which interface has to be changed, the bridge interface or eth0? Once the changes are set, how can it be tested? (See the jumbo-frame sketch after these results.)
  9. Ceph mount a PG/pool for "Images & ISOs"

    I managed to get it to work, but not with the name "proxmox"; it was complaining it is already used. But I was unable to figure out where cleanup is needed. By the way, many thanks!
  10. Ceph mount a PG/pool for "Images & ISOs"

    I couldn't figure out what to prepare to make this guide work and have CephFS mounted into a local folder on each node: https://pve.proxmox.com/wiki/Storage:_CephFS
  11. Ceph mount a PG/pool for "Images & ISOs"

    My conclusion is it makes sense to have multiple MDS and managers on standby in case one dies because its node is down.
  12. Ceph mount a PG/pool for "Images & ISOs"

    Does it make sense to have multiple MDS on each node? How about the managers, does it make sense to have multiple in standby? (See the MDS/manager sketch after these results.)
  13. Ceph mount a PG/pool for "Images & ISOs"

    Yes I did, but I am not sure if I did it right. Note: "(ceph1)" happened while trying to do something different: "ceph fs volume create test1". Later I renamed it. Is that needed? Before, both were in standby. cluster: id: ddfe12d5-782f-4028-b499-71f3e6763d8a health: HEALTH_OK...
  14. Ceph mount a PG/pool for "Images & ISOs"

    I wanted to mount a Ceph pool for images and ISOs, just to have all images and ISOs the same on every node. I named the pool "proxmox". To check and put the mount somewhere I edited the file "/etc/pve/storage.cfg":
    cephfs: proxmox
        path /mnt/pve/proxmox
        content iso,images
        fs-name...
    (See the CephFS storage sketch after these results.)
  15. Proxmox Ceph - ITX Mainboard 2x SATA

    Last time I had to go back to a snapshot there was something wrong with the identity of that host. But it's too long ago to remember correctly. How are you doing your snapshots, just creating a snap of "rpool"? Is there something special about later re-using that snap? I am only regularly using snapshot...
  16. Proxmox Ceph - ITX Mainboard 2x SATA

    My biggest problem with a single node is: what if a hypervisor update goes wrong? On the other hand, the 2-node solution doesn't feel like a solution to the problem. For example, I didn't figure out how to enable "pvecm expected 1" before rebooting the other node. (See the expected-votes sketch after these results.)
  17. Proxmox Ceph - ITX Mainboard 2x SATA

    I bet the old CPU is the bottleneck; the network I'll fix with USB 2.5 Gb Ethernet: https://www.biostar-europe.com/app/de/mb/introduction.php?S_ID=950 In general I have to be energy-conscious. My PV can only compensate when the sun is shining.
  18. Proxmox Ceph - ITX Mainboard 2x SATA

    Somebody gave me 5 ITX mainboards with 2x SATA each. Can I build Proxmox with Ceph on them where a total of 2 hosts can fail? Do I really need 2x SATA ports, isn't one enough? The nice thing with these boards is they all come equipped with 32 GB of RAM and an old AMD Bulldozer CPU. Two of them... (See the pool sizing sketch after these results.)
  19. Playing with experimental features: btrfs

    That guide helped a lot, thanks for the link. In addition there is an obvious command which showed me it must be working: "btrfs subvolume list /btrfs1". In my case it outputs:
    ID 261 gen 1817 top level 5 path images/10011/vm-10011-disk-0
    ID 262 gen 1817 top level 5 path...
  20. Playing with experimental features: btrfs

    I tried that, but I couldn't give it the correct partition intended for use.
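
Command sketches (referenced from the results above)

Result 1 (QDevice): a QDevice is a single external arbiter vote, not something installed on every cluster node, so the usual layout is one small external host running corosync-qnetd. This is a minimal sketch; the arbiter IP 192.0.2.10 is an assumption.

    # On the external arbiter host:
    apt install corosync-qnetd

    # On every cluster node:
    apt install corosync-qdevice

    # On one cluster node, register the arbiter for the whole cluster:
    pvecm qdevice setup 192.0.2.10

    # Verify the additional vote:
    pvecm status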
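
Result 3 (vote weighting): per-node vote counts live in the nodelist section of corosync.conf; the sketch below only shows where that knob is, not whether weighting votes by storage redundancy is advisable. Node name and address are examples.

    # Inspect the current per-node votes:
    grep -A 4 'node {' /etc/pve/corosync.conf
    #   node {
    #     name: atlas
    #     nodeid: 3
    #     quorum_votes: 1      # raise this to give the node more weight
    #     ring0_addr: 192.0.2.13
    #   }

    # To change it: edit a copy of /etc/pve/corosync.conf, bump config_version
    # in the totem section, then move the copy back so corosync reloads it
    # cluster-wide.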
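
Result 8 (jumbo frames): both the physical port and the bridge on top of it need the larger MTU, and every switch in the path must support it. The interface names (eno1, vmbr0) and the peer address are assumptions.

    # Temporary, for testing:
    ip link set eno1 mtu 9000
    ip link set vmbr0 mtu 9000

    # Persistent: add "mtu 9000" to both the eno1 and vmbr0 stanzas in
    # /etc/network/interfaces, then apply with ifreload -a.

    # Test with an unfragmentable ping; 8972 = 9000 - 20 (IP) - 8 (ICMP):
    ping -M do -s 8972 -c 3 192.0.2.20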
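
Result 12 (standby daemons): the common layout is one active MDS per CephFS plus standbys on one or two other nodes, and the same idea for managers; several MDS on the same node add little. A sketch with the stock PVE commands:

    # Run on each node that should be able to take over the MDS role:
    pveceph mds create

    # Standby managers are created the same way:
    pveceph mgr create

    # Check which daemons are active and which are standby:
    ceph fs status
    ceph mgr stat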
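
Results 10 and 14 (shared ISO/template store): a CephFS registered in /etc/pve/storage.cfg is mounted under /mnt/pve/<storeid> on every node automatically. The storage name "cephfs-iso" and the content types below are assumptions.

    # Create a CephFS (data + metadata pool; an MDS must exist) and register
    # it as storage in one step:
    pveceph fs create --name cephfs-iso --add-storage

    # Or register an existing CephFS by hand; storage.cfg is cluster-wide:
    pvesm add cephfs cephfs-iso --fs-name cephfs-iso --content iso,vztmpl

    # Check the automatic mount on any node:
    pvesm status
    df -h /mnt/pve/cephfs-iso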
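
Result 16 (two-node quorum): on the node that stays up, the expected vote count can be lowered temporarily so it remains quorate while the peer is down. This is a runtime-only override that goes away once the peer is back or corosync restarts; a QDevice is the cleaner long-term fix.

    # On the surviving node, before or while the other node is down:
    pvecm expected 1

    # "Expected votes: 1" should now show up and the node stays quorate:
    pvecm status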
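
Result 18 (two host failures): with 5 hosts and the default per-host failure domain, a replicated pool with size 4 and min_size 2 keeps serving I/O after losing two hosts, at the cost of 4x raw space. Pool name and numbers are illustrative, not a recommendation.

    # Replicated pool that stays writable with two hosts down:
    pveceph pool create vmdata --size 4 --min_size 2

    # Watch placement and health while testing host failures:
    ceph osd tree
    ceph -s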