gurubert's latest activity

  • gurubert
    gurubert reacted to leesteken's post in the thread Proxmox user base seems rather thin? with Like.
    Proxmox's Linux kernel (6.14) is based on Ubuntu instead of Debian, and since drivers come with the kernel, maybe try an Ubuntu installer (as a live environment, without installing it) with the same kernel version (25.04). EDIT: The user-space is indeed based on Debian...
  • gurubert
    You already found the answer. The fact you're moving the goalposts isn't helping you. I'd advise getting rid of your "wants": the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and...
  • gurubert
    U = FOS Lots of us here are in tech-support related positions, so forum support starts to feel like "more work" after a while. A) Watch Proxmox-related YouTube videos B) Read the last 30 days of forum posts, here and on Reddit (free education)...
  • gurubert
    gurubert reacted to pvps1's post in the thread Proxmox user base seems rather thin? with Like.
    The forum is community-run, so it is a highlight that staff members are even present and answer questions patiently. What do you expect (seriously, literally)?
  • gurubert
    gurubert reacted to SteveITS's post in the thread Ceph Storage question with Like.
    To be (much) clearer, I was referencing 3 hosts and assuming multiple OSDs on each, with at least one left running, not 3 hosts with only 1 OSD. For the former, Ceph will use any other OSD on the same host (technically any unused host, but there...
  • gurubert
    gurubert reacted to UdoB's post in the thread Ceph Storage question with Like.
    Does it? With the failure domain being "host" this does not make sense...? I am definitely NOT a Ceph expert, but now I am interested in the actual behavior: I have a small, virtual test cluster with Ceph. For the following tests, three nodes...
  • gurubert
    gurubert reacted to alexskysilk's post in the thread Ceph Storage question with Like.
    No. If you lose three disks on three separate nodes AT THE SAME TIME, the pool will become read-only and you'll lose all payload that had placement groups with shards on ALL THREE of those OSDs. But here's the thing: the odds of that happening...
  • gurubert
    gurubert replied to the thread Ceph Storage question.
    You will only lose the affected PGs and their objects. This will lead to corrupted files (when the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted, you may not be able... (a sketch for checking which pools the damaged PGs belong to follows at the end of this list)
  • gurubert
    You may be able to extract the cluster map from the OSDs following this procedure: https://docs.ceph.com/en/squid/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds But as you also changed the IP addresses, you will have to change... (see the monmap-editing sketch after this list)
  • gurubert
    IMHO you do not need pool separation between VMs for security reasons. You may want to configure multiple pools for quotas, for multiple Proxmox clusters, or if you want to set different permissions for users in Proxmox (see the quota sketch after this list). AFAIK Proxmox does not show...
  • gurubert
    gurubert reacted to ness1602's post in the thread Single node ceph with Like.
    I wouldn't even run it in test.
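
For the "Ceph Storage question" reply above about losing only the affected PGs: a minimal, hedged sketch (Python wrapping the ceph CLI) of how one might list PGs stuck in incomplete/down states and map them to their pools, to see whether the CephFS data or metadata pool is hit. It assumes admin credentials on the node; the JSON field names can vary slightly between Ceph releases.

#!/usr/bin/env python3
import json
import subprocess

def ceph_json(*args):
    # Run a ceph CLI command and parse its JSON output.
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

# Map numeric pool IDs to names; the pool ID is the part of a PG ID before the dot.
pools = {p["pool"]: p["pool_name"] for p in ceph_json("osd", "dump")["pools"]}

# PGs in problematic states; newer releases wrap the list in a "pg_stats" key.
result = ceph_json("pg", "ls", "incomplete", "down")
pg_stats = result["pg_stats"] if isinstance(result, dict) else result

for pg in pg_stats:
    pool_id = int(pg["pgid"].split(".")[0])
    print(f'{pg["pgid"]:>8}  {pg["state"]:<40}  pool={pools.get(pool_id, "?")}')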
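
For the monitor-recovery reply above: after rebuilding the mon store per the linked procedure, the recovered monmap still carries the old IP addresses. This is a hedged sketch of editing it with monmaptool before injection; the monitor names and addresses are placeholders, not taken from the thread, and the monitors must be stopped while you do this.

#!/usr/bin/env python3
import subprocess

MONMAP = "/tmp/monmap"  # monmap file obtained via the linked recovery procedure

# Hypothetical monitor names and their NEW addresses (placeholders).
NEW_MONS = {
    "mon-a": "192.168.10.11:6789",
    "mon-b": "192.168.10.12:6789",
    "mon-c": "192.168.10.13:6789",
}

# Show the recovered map first (it still lists the old addresses).
subprocess.run(["monmaptool", "--print", MONMAP], check=True)

for name, addr in NEW_MONS.items():
    # Drop the stale entry and re-add the monitor under its new address.
    subprocess.run(["monmaptool", "--rm", name, MONMAP], check=True)
    subprocess.run(["monmaptool", "--add", name, addr, MONMAP], check=True)

subprocess.run(["monmaptool", "--print", MONMAP], check=True)

# With the mon daemons stopped, inject the edited map on each monitor node, e.g.:
#   ceph-mon -i mon-a --inject-monmap /tmp/monmap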
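
For the pool-separation reply above: if the goal is quotas rather than security, separate pools can simply be capped with `ceph osd pool set-quota`. A small illustrative sketch; the pool names and sizes are invented, and on Proxmox the pools would more typically be created via the GUI or pveceph.

#!/usr/bin/env python3
import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI; raises if a command fails.
    subprocess.run(["ceph", *args], check=True)

# Hypothetical pool names and byte quotas (2 TiB and 500 GiB) for illustration only.
POOLS = {
    "vm-prod": 2 * 1024**4,
    "vm-lab": 500 * 1024**3,
}

for name, max_bytes in POOLS.items():
    ceph("osd", "pool", "create", name)                        # pg_num left to the autoscaler
    ceph("osd", "pool", "application", "enable", name, "rbd")
    ceph("osd", "pool", "set-quota", name, "max_bytes", str(max_bytes))
    ceph("osd", "pool", "get-quota", name)                     # print the resulting quota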