Search results

  1. Ceph RBD Storage Shrinking Over Time – From 10TB down to 8.59TB

    And maybe you are using CephFS to store something.
  2. Best practice for replacing all OSDs in CEPH cluster

    Destroy all OSDs per node; that would be easiest.
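
    A minimal sketch of that per-OSD loop, assuming OSD 0 lives on the node being drained and /dev/sdX is its replacement disk (both hypothetical):

      # see which OSDs sit on which node
      ceph osd tree
      # take the OSD out, stop it, and destroy it (repeat per OSD on the node)
      ceph osd out osd.0
      systemctl stop ceph-osd@0.service
      pveceph osd destroy 0 --cleanup
      # recreate the OSD on the new disk
      pveceph osd create /dev/sdX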
  3. New Proxmox Cluster with Ceph

    This is an okay setup for me; usually we don't have more than 4 disks per NVMe for caching. If you need more than that, we usually recommend adding more NVMe caching drives. Yes, if this NVMe dies, all OSDs cached on it die with it, but this is acceptable in Ceph.
  4. Can I run vmware in proxmox?

    I have had ESXi running under Proxmox for more than 5 years, currently version 7.x.
  5. [SOLVED] Help! Ceph access totally broken

    That's okay if the backups are good. Who cares? Just demolish the Ceph cluster and bring it up again.
  6. [SOLVED] Help! Ceph access totally broken

    Can you take backups of the current VMs?
  7. [SOLVED] Help! Ceph access totally broken

    Reinstall Ceph. Not the whole Proxmox install, just Ceph inside it, and restore backups from PBS or whatever you have. This is usually the procedure for it: systemctl stop ceph-mon.target; systemctl stop ceph-mgr.target; systemctl stop ceph-mds.target; systemctl stop ceph-osd.target; rm -rf...
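
    For reference, a sketch of that teardown on one node; the rm -rf target is an assumption (the snippet is cut off), and pveceph purge/install are the Proxmox-side steps you'd wrap around it:

      # stop every Ceph daemon on the node
      systemctl stop ceph-mon.target ceph-mgr.target ceph-mds.target ceph-osd.target
      # wipe the local Ceph state -- assumed target; the original post is truncated here
      rm -rf /var/lib/ceph/*
      # remove the Ceph configuration and reinstall the packages via Proxmox tooling
      pveceph purge
      pveceph install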
  8. Monitoring proxmox and hardware?

    Yeah, usually with Zabbix; there are also some great Nagios scripts, e.g. https://github.com/nbuchwitz/check_pve
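
    From memory, check_pve is called roughly like this; treat the flags as an assumption and check the repo's README for the real usage:

      # query cluster health through the Proxmox API (hypothetical credentials/host)
      ./check_pve.py -u monitoring@pve -p secret -e proxmox.example.com -m cluster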
  9. how to check the current IOPS usage on Ceph cluster

    The easiest way is to look at the Ceph dashboard inside Proxmox.
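
    If you prefer the CLI over the dashboard, the stock Ceph tools report client IOPS as well:

      # cluster-wide client I/O (including op/s) in the status summary
      ceph -s
      # per-pool read/write op rates
      ceph osd pool stats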
  10. Tasked with Proxmox optimization

    You didn't specify the SSD models; I would like that before making any recommendation about Ceph.
  11. Proxmox Metric Server: InfluxDB 1 versus 2?

    Use Influx v1 or v2; usually it depends on the pre-built dashboards on Grafana. v3 is, let's say, open source but crippled, so we are waiting for an Influx v1 or v2 fork in the future.
  12. Project: Fileserver on Proxmox

    As for storage replication: HA is working, yes, but automatic failover isn't, because it is not HA storage, and as you can see you can lose data compared to regular HA storage (Ceph etc.).
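
    For context, those replication jobs are per guest and interval-based; a minimal sketch, assuming VM 100 and a target node named pve2 (both hypothetical):

      # replicate VM 100 to node pve2 every 15 minutes
      pvesr create-local-job 100-0 pve2 --schedule "*/15"
      # check the last sync; on failover you lose whatever was written since then
      pvesr status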
  13. Project: Fileserver on Proxmox

    I'll answer some misconceptions from Johannes and give some recommendations: 1) Ceph demands 3+ nodes; with more than 100 TB of data I would probably recommend 5, 7, or 9 nodes, so that's out of your scope. 2) Clustering doesn't add complexity since it is already included, and adding quorum nodes is just fine...
  14. Backup physical desktops/laptops with PBS?

    I usually work with Duplicati on Linux/Windows workstations. It has a really easy GUI with numerous restore options.
  15. Project: Fileserver on Proxmox

    Strange and interesting idea. I would use Ceph for that; it seems it would suit better. I've never seen a single-node ZFS pool of that size on one machine in Proxmox, but if you stick 20x20 TB drives in RAIDZ2/3, I guess you could do it. And replication only works on ZFS, not on LVM...
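
    A sketch of what that pool could look like, assuming hypothetical /dev/disk/by-id device names:

      # 20x20 TB in one RAIDZ3 vdev: 17 data + 3 parity disks, ~340 TB raw before overhead
      zpool create tank raidz3 /dev/disk/by-id/ata-DISK{01..20}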
  16. HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    That's why I said: do the backups, and afterwards just delete Ceph and reinstall it.
  17. HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    If you have more monitors, do not restart them before backing everything up!
  18. HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    Are the backups working? If they are, do backups of all VMs/CTs.
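
    A minimal vzdump sketch for that, assuming a configured backup storage named pbs (hypothetical name):

      # snapshot-mode backup of every VM and container to the 'pbs' storage
      vzdump --all --mode snapshot --storage pbs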
  19. HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    Just delete that monitor from the other nodes, and start clean.
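
    The removal itself is short; a sketch run from a surviving node, assuming the broken monitor is named pve3 (hypothetical):

      # drop the dead monitor from the monmap (raw Ceph command)
      ceph mon remove pve3
      # or let Proxmox clean up its config too, then recreate the monitor
      pveceph mon destroy pve3
      pveceph mon create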