Recent content by Lucian Lazar

  1.

    [SOLVED] CEPH keeps recreating a pool named .mgr

    Thank you, that makes perfect sense. Since the cluster is new and no data is stored on Ceph yet, could deleting the .mgr pool have caused something that can bite me in the long run? The .mgr pool has been recreated automatically, but I am not sure whether the previously deleted one will cause issues.
  2.

    [SOLVED] CEPH keeps recreating a pool named .mgr

    Hi all, as per the title: I have created a new 7-node Ceph cluster and noticed that there was a default pool named ".mgr" there. I deleted that pool and created a new one. After some restarts of the managers and monitors, I saw that the ".mgr" pool was recreated all by itself. Is this intended...
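For anyone landing on this thread: the .mgr pool is created automatically by the active ceph-mgr daemon to store its module data, which is why it reappears after manager restarts. A minimal sketch of how you might confirm the recreated pool is present and the cluster is healthy, using standard Ceph CLI commands on a monitor/admin node (not runnable outside a live cluster):

```shell
# List all pools; the auto-created .mgr pool should be back in the list.
ceph osd pool ls

# Show the pool's replication settings to confirm it looks sane.
ceph osd pool ls detail

# Check which manager daemon is active (it owns the .mgr pool).
ceph mgr stat

# Overall cluster health; a leftover deleted pool would surface here.
ceph health detail
```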
  3.

    Migrate to a new cluster

    Thank you, I will try with a test VM just to be sure. Good point about the backups; that is one thing I could easily have missed!
  4.

    Migrate to a new cluster

    One last question: isn't there some kind of lock or other issue with pmxcfs, which Proxmox uses, when scanning storage? I just want to be extra cautious before attempting to mount the same storage ID and path on two clusters.
  5.

    Migrate to a new cluster

    Thanks, much appreciated
  6.

    Migrate to a new cluster

    Thank you, so it is safe even if the same mount point is mounted and active on both clusters at the same time? Good. Is removing the VM/CT config file from /etc/pve/xx/100.conf sufficient, or must it be removed from some other paths too? Thank you again.
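A sketch of what that removal amounts to, using a temporary directory as a stand-in for the pmxcfs mount (on a real PVE node, VM configs live under /etc/pve in per-node qemu-server directories; the paths and file contents below are illustrative assumptions, not the exact layout of any given cluster):

```shell
# Stand-in for the old cluster's /etc/pve guest-config directory.
PVE_DIR=$(mktemp -d)
mkdir -p "$PVE_DIR/qemu-server"
printf 'name: testvm\n' > "$PVE_DIR/qemu-server/100.conf"

# Move the config aside instead of deleting it outright: the guest
# disappears from the old cluster, but can be restored if the
# migration to the new cluster fails.
mv "$PVE_DIR/qemu-server/100.conf" "$PVE_DIR/qemu-server/100.conf.migrated"

ls "$PVE_DIR/qemu-server"   # prints 100.conf.migrated
```

Renaming rather than deleting is just a cautious habit for a one-way move like this; once the VM boots on the new cluster, the leftover file can be removed.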
  7.

    Migrate to a new cluster

    Hi all, We are currently running an outdated PVE cluster (version 6.4) consisting of 5 nodes. All VMs and containers are using an external NFS share for both disk storage and backups, mounted at /mnt/pve/NFS. We’ve recently acquired 2 new nodes, and instead of adding them to the existing...
  8.

    Tracking center no successful delivered logs

    I confirm, it is working as intended now, thank you again for your support. Any timeframe for allowing this option?
  9.

    Tracking center no successful delivered logs

    Thank you, will disable this and report back if solved. Stay safe!
  10.

    Tracking center no successful delivered logs

    Thank you for your reply, please find the screenshots attached. I have selected the sender and a timeframe starting today at 10 AM, as the email arrived at around 14:58, as you can see from the console log. Here is the complete mail-processed line: Jun 25 14:58:47 antispam.myvdc.it...
  11.

    Tracking center no successful delivered logs

    Hi all, as per the title, in the tracking center I am unable to find any successfully delivered messages anymore. It was working fine about 4 months ago, but after the latest update it only shows the rejected/bounced emails. Searching /var/log/syslog, the successfully delivered emails are logged just...
  12.

    Ceph 75% degraded with only one host down of 4

    Thank you so much, this clarified my doubts. In order to tolerate 2 nodes down, I should have at least 5 nodes for Ceph, if I understood correctly. Thank you again.
  13.

    Ceph 75% degraded with only one host down of 4

    Thank you very much, it makes sense. My concern is about filling up that storage pool (10 TB / 3 = 3.33 TB usable total): once I have, let's say, the maximum ~3 TB provisioned, what will happen when one node goes down?
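The back-of-the-envelope capacity math behind that concern can be sketched as follows (assuming the thread's numbers: 10 TB raw across 4 equal nodes, a pool with 3 replicas; this is rough arithmetic, not exact Ceph space accounting):

```shell
# Rough usable capacity for a replicated Ceph pool.
raw_tb=10
replicas=3
nodes=4

# With all nodes up, usable space is roughly raw / replicas.
usable=$(awk -v r="$raw_tb" -v s="$replicas" 'BEGIN { printf "%.2f", r/s }')
echo "usable, all nodes up: ${usable} TB"      # 3.33

# If one node fails, its share of raw capacity is gone and Ceph tries
# to re-replicate its data onto the remaining nodes.
raw_left=$(awk -v r="$raw_tb" -v n="$nodes" 'BEGIN { printf "%.2f", r - r/n }')
usable_left=$(awk -v r="$raw_left" -v s="$replicas" 'BEGIN { printf "%.2f", r/s }')
echo "usable, one node down: ${usable_left} TB" # 2.50
```

So with ~3 TB provisioned, losing one of the four nodes leaves only ~2.5 TB of usable space: recovery would push the surviving OSDs toward the full ratio and Ceph would block writes, which is why leaving generous headroom (or adding nodes) matters.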
  14.

    Ceph 75% degraded with only one host down of 4

    Hi all, I am struggling to find the reason why my Ceph cluster goes 75% degraded (as seen in the screenshot above) when I reboot just one node. The 4-node cluster is new, with no VM or container, so the used space is 0. Each node contains an equal number of SSD OSDs (6 x 465 GB)...