Search results

  1. Why I don't see DROP logs at HOST level?

    The log level you set is emerg; for the DROP rules, try changing it to another level: https://pve.proxmox.com/wiki/Firewall
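    For reference, the host log levels live in the node's firewall config. A minimal sketch, assuming a node named pve1 (valid levels include nolog, emerg, alert, crit, err, warning, notice, info, debug):

    ```
    # /etc/pve/nodes/pve1/host.fw -- node name is an assumption
    [OPTIONS]
    enable: 1
    log_level_in: info     # was emerg; at info the dropped packets show up in the firewall log
    log_level_out: info
    ```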
  2. Odd Ceph Issues

    Do you need the data on that pool? If you delete the PGs you'll be deleting small chunks of a good percentage of your data, corrupting most of the data on that pool.
  3. Split temporarily cluster in 2 clusters due to network instability

    Proxmox won't let you join a node that already has VMs on it. So yes, technically you can break up the cluster; however, you would then have to back up every VM and remove it from the node before you could add the node to the cluster, and then restore the VMs, as in the sketch below.
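    A rough sketch of that cycle, with the VMID, the storage name, and the cluster address all placeholders:

    ```
    # on the node you want to join: back up the VM, then remove it
    vzdump 100 --storage backupstore --mode snapshot
    qm destroy 100

    # with the node now empty, join it to the cluster
    pvecm add 192.0.2.10

    # then restore the VM from the dump file vzdump created
    qmrestore /mnt/backupstore/dump/vzdump-qemu-100-....vma 100 --storage local-lvm
    ```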
  4. Ceph OSD

    I don't think Proxmox supports it even via the CLI; you would have to do it using your own separate Ceph cluster and ceph-deploy. But I may be wrong, and it may since have been added to / supported in the Proxmox CLI.
  5. fault tolerance of ceph

    What others are saying is that in the small likelihood you end up with all 4 servers up but 2 unable to talk to the other two, you would have two separate halves of your cluster, both working and both making changes. When the 4 servers are then able to start talking again you would end up with...
  6. Ceph OSD

    Yes, Bluestore is definitely the way to go; it has been around long enough now to be pretty stable, and it is the default for any new clusters. However, as previously said, if you can't give each OSD at least a 30ish GB partition, Bluestore will end up moving all the DB data to the slow disk anyway...
  7. Ceph OSD

    It provides much less of a benefit, and the DB works differently from how the journal did. Ideally the minimum space you will want for a DB partition is 31GB (RocksDB steps its levels at roughly 3/30/300GB, so anything below the ~30GB tier spills onto the slow disk). I guess your node / replication can survive a whole host's OSDs failing? The fact is, a single SSD failure will cause that whole host's OSDs to go offline.
  8. Ceph OSD

    Yes, by default Bluestore will try to use the whole physical disk for both the OSD and the DB. If you want to use just a partition, you need to do this via the CLI, as from my understanding/experience it's not supported through the Proxmox GUI. However, one thing to note: with Bluestore the benefit of an SSD...
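    A minimal sketch of the CLI route, assuming /dev/sdc as the data disk and a pre-created SSD partition /dev/sdb1 for the DB (both device names are placeholders):

    ```
    # create a Bluestore OSD whose RocksDB lives on the SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1
    ```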
  9. Odd problems: storage space being consumed & UI errors / timing out, etc.

    Have you checked the browser console (normally F12) when this happens to see what errors, if any, are being shown? On the apt error: the /run folder has 1.6GB on a ramdisk; there must have just been a lot of temp files in the /run folder that had maxed the tmpfs out. Hence a reboot fixed this due to...
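    To see what had actually filled it, something like this from a shell on the host:

    ```
    # largest directories under /run, staying on the tmpfs itself (-x)
    du -xh /run | sort -h | tail
    ```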
  10. Setting up HOST only firewalls in my cluster

    Make sure you have the firewall disabled/unticked on the VM NICs; then the rules will only be applied to the hosts.
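    The per-NIC flag can also be flipped from the CLI; a hypothetical example for VMID 100, rewriting its net0 line with firewall=0 (the MAC and bridge must match the VM's real config):

    ```
    qm config 100 | grep net0     # check the current NIC line first
    qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=0
    ```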
  11. Mixed Cluster 5.4 / 6.x during upgrade?

    If by machines you mean VMs? Then yes, you can always move a VM from an older node to a newly upgraded node during the rolling upgrade of a cluster. I think earlier people understood that you wanted to add a 5.4 node to a 6.x cluster.
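    For example, a live migration off the old node (the VMID and target node name are placeholders):

    ```
    qm migrate 100 pve-upgraded-node --online
    ```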
  12. New 6.1 install on previous pve hardware - backup drive mount missing

    ```
    mkdir /mnt/backups
    mount /dev/sdb /mnt/backups
    ```

    If the above fails then try /dev/sdb1; however, from your output it looks like the filesystem is directly on the disk and not on a partition.
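    If the mount works and you want it to survive reboots, a hypothetical /etc/fstab line, assuming ext4 on the whole disk:

    ```
    # /etc/fstab -- safer to use UUID=... from blkid than the /dev/sdb name
    /dev/sdb  /mnt/backups  ext4  defaults  0  2
    ```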
  13. [SOLVED] Ceph create OSD on unused disk failed, disk marked as used now

    Have you tried restarting the server? It sounds like a process still has a lock on /dev/sdd.
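    Before rebooting, it may be worth finding what holds the device; a few checks, the last because leftover ceph-volume LVM metadata is a common reason a disk shows as used:

    ```
    fuser -v /dev/sdd   # processes with the device open
    lsof /dev/sdd       # same information via lsof
    dmsetup ls          # stale device-mapper/LVM entries pinning the disk
    # if it is stale Ceph LVM data: ceph-volume lvm zap /dev/sdd --destroy
    ```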
  14. Odd problems: storage space being consumed & UI errors / timing out, etc.

    Is this server running on a very small amount of RAM? Quite a few of the /run folders are on tmpfs, which is storage backed by a RAM disk.
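    A quick check, since the tmpfs size is derived from RAM:

    ```
    free -h       # total RAM in the box
    df -h /run    # the /run tmpfs, sized as a fraction of that RAM
    ```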
  15. Booting from hard disk hang.....

    To me that sounds like the filesystems of the disks are corrupt. What exactly did you do to cause the issue? And what did you do to fix it? Just because you can see the disks again does not mean the data on them is readable/functioning.
  16. Booting from hard disk hang.....

    So the VM is showing as running in Proxmox?
  17. Booting from hard disk hang.....

    At the bottom of the GUI you can see previous tasks and their output. What error does it show on the start command?
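    Failing that, starting it from a shell on the node tends to print the underlying error directly:

    ```
    qm start 100    # VMID is a placeholder
    ```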
  18. Since 6.0 backup hang vms

    Also make sure you're running the latest QEMU guest agent in the VM; update it via apt / yum if you installed it from a repo.
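    For example, inside the guest:

    ```
    # Debian/Ubuntu guests
    apt update && apt install qemu-guest-agent

    # RHEL/CentOS guests
    yum install qemu-guest-agent

    # make sure the service is running after the update
    systemctl enable --now qemu-guest-agent
    ```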
  19. Proxmox/Ceph/Cache Tiering/Bug?

    Correct, hence why I said it's heading that way. It does not get as much love and support as the rest of the Ceph code base, and I know they keep talking about replacing it with something new. It was more just a heads-up to the OP.