Search results

  1.

    4-Node-Ceph-Cluster went down

    # pvecm status
    Cluster information
    -------------------
    Name:             saturn
    Config Version:   4
    Transport:        knet
    Secure auth:      on
    Quorum information
    ------------------
    Date:             Mon Nov 6 23:26:38 2023
    Quorum provider:  corosync_votequorum
    Nodes:            4...
  2.

    4-Node-Ceph-Cluster went down

    # ceph status
      cluster:
        id:     ddfe12d5-782f-4028-b499-71f3e6763d8a
        health: HEALTH_OK
      services:
        mon: 4 daemons, quorum aegaeon,anthe,atlas,calypso (age 12h)
        mgr: anthe(active, since 12h), standbys: atlas, calypso, aegaeon
        mds: 2/2 daemons up, 2 standby
        osd: 4 osds...
  3.

    4-Node-Ceph-Cluster went down

    On my 4-node cluster with Ceph, I shut down one system to make some BIOS changes. The issue is that the cluster came to a complete stop while doing this. What I checked beforehand on the shut-down node: no HA rules are applied to any of the VMs or LXCs; all are on Ceph storage; no backup is running on...
  4.

    [TUTORIAL] MTU - Jumbo Frames - Bridge Interface - EXAMPLE Post #3

    I want to use jumbo frames. From my understanding I have to set the MTU to a value supported by the hardware. But which interface has to be changed, the bridge interface or eth0? Once the changes are set, how can they be tested?
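    The usual answer is: both. A Linux bridge will not use an MTU larger than that of its member ports, so the physical NIC and the bridge each need the larger value. A minimal sketch of /etc/network/interfaces, assuming interface names eth0/vmbr0, an example address, and an MTU of 9000 (check what your NIC and switch actually support):

    ```
    auto eth0
    iface eth0 inet manual
        mtu 9000            # physical NIC must carry the large MTU

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
        mtu 9000            # bridge MTU must not exceed its ports'
    ```

    To test, send a packet that forbids fragmentation at the full size: `ping -M do -s 8972 <peer>` (8972 = 9000 minus 20 bytes IPv4 header and 8 bytes ICMP header). If a hop on the path does not support the MTU, the ping fails with a "message too long" / fragmentation-needed error.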
  5.

    Ceph mount a PG/pool for "Images & ISOs"

    I managed to get it to work, but not with the name "proxmox"; it complained the name was already in use. I was unable to figure out where cleanup is needed, though. By the way, many thanks!
  6.

    Ceph mount a PG/pool for "Images & ISOs"

    I couldn't figure out what to prepare to make this guide work and have Ceph mounted into a local folder on each node: https://pve.proxmox.com/wiki/Storage:_CephFS
  7.

    Ceph mount a PG/pool for "Images & ISOs"

    My conclusion is that it makes sense to have multiple MDS daemons and managers on standby, in case one dies because its node is down.
  8.

    Ceph mount a PG/pool for "Images & ISOs"

    Does it make sense to have multiple MDS daemons on each node? And how about the managers: does it make sense to have multiple on standby?
  9.

    Ceph mount a PG/pool for "Images & ISOs"

    Yes I did, but I am not sure if I did it right. Note: "(ceph1)" happened while trying something different: "ceph fs volume create test1". Later I renamed it. Is that needed? Before, both were in standby. cluster: id: ddfe12d5-782f-4028-b499-71f3e6763d8a health: HEALTH_OK...
  10.

    Ceph mount a PG/pool for "Images & ISOs"

    I wanted to mount a Ceph pool for images and ISOs. It's just to have all images and ISOs the same on every node. I named the pool "proxmox". To check and place the mount somewhere, I edited the file "/etc/pve/storage.cfg": cephfs: proxmox path /mnt/pve/proxmox content iso,images fs-name...
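    For reference, a complete cephfs entry in /etc/pve/storage.cfg typically looks like the sketch below. The storage ID, path, and fs-name "proxmox" are taken from the post; the content line is an assumption. Note that, if I recall the Proxmox storage model correctly, the cephfs storage type serves file content (iso, vztmpl, backup, snippets) but not VM disk "images" — those normally live on an RBD pool instead:

    ```
    cephfs: proxmox
        path /mnt/pve/proxmox
        content iso,vztmpl
        fs-name proxmox
    ```

    With this in place, each node mounts the CephFS under /mnt/pve/proxmox, so uploaded ISOs and templates are identical cluster-wide.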
  11.

    Proxmox Ceph - ITX Mainboard 2x SATA

    Last time I had to go back to a snapshot, there was something wrong with the identity of that host. But it's too long ago to remember correctly. How are you doing your snapshots? Just create a snap of "rpool"? Is there something special about later re-using that snap? I am only regularly using snapshot...
  12.

    Proxmox Ceph - ITX Mainboard 2x SATA

    My biggest problem with a single node is: what if a hypervisor update goes wrong? On the other hand, the two-node solution doesn't feel like a solution to the problem. For example, I didn't figure out how to run pvecm expected 1 before rebooting the other node.
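    For the two-node situation described here, the quorum requirement can be lowered by hand before the second node goes down. A sketch, run on the node that stays up (this is a temporary override of corosync's votequorum expectation, not a persistent setting):

    ```
    # tell votequorum to expect only 1 vote, so this node stays quorate alone
    pvecm expected 1

    # verify: "Expected votes" should now read 1
    pvecm status
    ```

    The override reverts once the other node rejoins the cluster; it is a maintenance workaround, not a substitute for a proper third vote (e.g. a QDevice).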
  13.

    Proxmox Ceph - ITX Mainboard 2x SATA

    I bet the old CPU is the bottleneck; the network I'll fix with USB 2.5 Gb Ethernet: https://www.biostar-europe.com/app/de/mb/introduction.php?S_ID=950 In general I have to be energy-conscious. My PV system can only compensate while the sun is shining.
  14.

    Proxmox Ceph - ITX Mainboard 2x SATA

    Somebody gave me 5 ITX mainboards with 2x SATA each. Can I build Proxmox with Ceph on them such that a total of 2 hosts can fail? Do I really need both SATA ports? Isn't one enough? The nice thing about these boards: they all come equipped with 32 GB of RAM and an old AMD Bulldozer CPU. Two of them...
  15.

    Playing with experimental features: btrfs

    That guide helped a lot, thanks for the link. In addition there is an obvious command which showed me it must be working: btrfs subvolume list /btrfs1. In my case it outputs:
      ID 261 gen 1817 top level 5 path images/10011/vm-10011-disk-0
      ID 262 gen 1817 top level 5 path...
  16.

    Playing with experimental features: btrfs

    I tried that, but I couldn't point it at the correct partition intended for use.
  17.

    Playing with experimental features: btrfs

    Right now I am playing with LXC machines. Their snapshots do work, but how should I add my btrfs partition to make full use of its features? Is creating a subvolume the way to go?
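    One common approach is to hand the mounted btrfs filesystem to Proxmox as a storage of type btrfs; the (experimental) storage plugin then creates a subvolume per guest disk itself, which is what enables cheap snapshots. A sketch of an /etc/pve/storage.cfg entry, where the mount point /btrfs1 is taken from the earlier post and the storage ID is an assumption:

    ```
    btrfs: btrfs1
        path /btrfs1
        content images,rootdir
    ```

    After adding this, new LXC root disks created on that storage should show up as subvolumes under the path, which can be confirmed with btrfs subvolume list /btrfs1.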
  18.

    Playing with experimental features: btrfs

    Playing around with btrfs I noticed replication doesn't work. Here are my questions: Is it not working because it's not fully implemented? Is it not working because I am using a mounted btrfs partition (I didn't think about a possible subvolume approach, if there is any)? Is this where the experimental...
  19.

    Proxmox & Ceph

    So I have only one disk for Ceph and three nodes. It sounds like the fourth node is needed to keep up with a traditional three-node ZFS cluster in case of a disk failure?
  20.

    Proxmox & Ceph

    Even with no VMs/containers running there?