Search results

  1. Mount LTO 7 drive to PBS backup server

    Thanks for the reply. 1/ As you said, I reinstalled a PBS server on the external backup server (actually a Windows server). 2/ I physically moved all the HDDs from the backup server into the PBS server to increase my datastore volume as needed.
  2. Mount LTO 7 drive to PBS backup server

    Hello, I am using Proxmox Backup Server. I created a mounted datastore (/mnt/datastore) to back up all my VMs on an external backup server. It works very well. Now I want to back up this storage to an LTO 7 drive. I tried to install my LTO drive physically in the PBS server. But... (see the drive-detection sketch after the results)
  3. CEPH - Add SSD pool and SATA pool

    Thank you. I changed the rule and Ceph is rebalancing (active+remapped state), but it is very, very slow (about 4 days left!?). Is that normal?
  4. CEPH - Add SSD pool and SATA pool

    Hello, I am using Ceph 17.2.7 with 3 hosts (CEPH01, CEPH02, CEPH03) and only 1 pool (named rpool in my example). This pool uses the default crush rule. All my disks (x12) were SATA HDDs only. Now I want to have 2 new pools: SATAPOOL (for slow storage) and SSDPOOL (for fast storage)... (see the crush-rule sketch after the results)
  5. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Thanks, it's done. Health OK is back. I will enable it again if I see any issues.
  6. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Hello, I am running two Proxmox clusters: one PVE cluster used only to run a Ceph cluster with the RBD pool (so no VMs on that one), plus my actual VM cluster connected to this RBD pool. I updated my PVE VM cluster to 6.4.13, then I updated my PVE Ceph cluster (3 nodes) to PVE 6.4.13 and...
  7. Questions about Ceph Nautilus Security Update "insecure global_id reclaim"

    Hello, thank you. I've just updated my 3 Ceph cluster nodes to 14.2.22. I only have the warning: mons are allowing insecure global_id reclaim. (No client warnings, presumably because I use an external RBD pool?) At this step, can I set the: ceph config set mon... (see the global_id sketch after the results)
  8. Questions about Ceph Nautilus Security Update "insecure global_id reclaim"

    Hello, I am running two Proxmox clusters: one PVE cluster used only to run a Ceph cluster with the RBD pool (so no VMs on that one), plus my actual VM cluster connected to this RBD pool. I updated my PVE VM cluster to 6.4.13; now I want to update my PVE Ceph cluster (3 nodes). I am...
  9. Backup very slow

    I forgot to say that I tried backing up a VM created on the local disk of a PVE node. No problem, the speed is correct: 1 Gbit/s (snapshot mode), whereas the speed for VMs stored in my Ceph pool is 30 Mbit/s. That's not normal...
  10. Backup very slow

    1/ My PBS uses local storage, so my mirror of 15,000 RPM SAS HDDs. 2/ I am using VM backup in snapshot mode. I have tried the "stop" mode and it is indeed faster, but that's not what I want: I don't want to stop VMs during backup. I know people who have a similar environment... (see the vzdump-mode sketch after the results)
  11. Backup very slow

    Hello, I am using a PVE cluster of 4 nodes (v6.2.15) with an external Ceph storage of 3 nodes. My Proxmox nodes and Ceph nodes run in a 10 Gbit/s environment. I have tried several backup destinations (NAS, Proxmox Backup Server), but the result is the same: my backups are very, very slow...
  12. Migration PVE 6.x to 7.x and Ceph Nautilus to Octopus

    Hello, I am currently using a PVE cluster 6.2.15 with 4 nodes and an external Ceph storage based on a Ceph Nautilus cluster 14.2.11 with 3 nodes, where the VM disks are stored. I would like to upgrade my PVE cluster from 6.x to 7.x and my Ceph cluster nodes from Nautilus to Octopus. What I've... (see the upgrade-order sketch after the results)
  13. PBS infrastructure recommendations

    Hello, I am currently using a PVE cluster 6.2.15 with 4 nodes and an external pool storage based on a Ceph Nautilus cluster with 3 nodes, where the VM disks are stored. (Plus, I have a physical HP Windows backup server with 8 SATA 4 TB disks in RAID 5.) This infrastructure runs on a 10 Gbit/s...
  14. pve5to6 questions

    OK, thank you. I upgraded from PVE 5 to 6 on the first Ceph node. It works, but I get an error in the PVE interface: mon_command failed - command not known (500). I think that's normal because I'm mid-migration; Luminous cannot fully work with PVE 6, can it? Can I upgrade my 2 last nodes...
  15. pve5to6 questions

    Hi, I am using three PVE nodes with a Ceph cluster (PVE 5.4.13 and Ceph Luminous 12.2.12). I would like to upgrade PVE 5.x to 6.x and then upgrade Luminous to Nautilus. I read the upgrade procedure: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 When I run the pve5to6 script, I... (see the pve5to6 sketch after the results)
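
Notes on results 1-2: a minimal sketch, assuming a SCSI/SAS-attached LTO 7 drive, of checking that the PBS host sees the drive before configuring tape backup. The drive name "lto7" and the by-id path are placeholders, and the proxmox-tape call follows the PBS tape-backup documentation but should be verified against your PBS version.

    # Check that the kernel sees the drive (it should be listed with type "tape")
    lsscsi -g
    # Prefer the stable by-id device links over /dev/nst0
    ls -l /dev/tape/by-id/
    # Register the drive in PBS (name and path are placeholders)
    proxmox-tape drive create lto7 --path /dev/tape/by-id/scsi-XXXXXXXX-sg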
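Notes on results 3-4: a sketch of the device-class approach to splitting one cluster into an HDD pool and an SSD pool, using the pool names from the post. The hdd/ssd class labels are auto-assigned by Ceph and worth confirming with ceph osd tree first.

    # One replicated rule per device class (root "default", failure domain "host")
    ceph osd crush rule create-replicated sata_rule default host hdd
    ceph osd crush rule create-replicated ssd_rule default host ssd
    # Point each pool at its rule; existing data then rebalances (active+remapped)
    ceph osd pool set SATAPOOL crush_rule sata_rule
    ceph osd pool set SSDPOOL crush_rule ssd_rule
    # Days of rebalancing on 12 HDDs is plausible; watch progress with:
    ceph -s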
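Notes on results 5-8: the command truncated in result 7 is, per the CVE-2021-20288 advisory, the monitor setting that disables insecure global_id reclaim. It should only be set once every client and daemon is patched:

    # Disable insecure reclaim (only after all clients/daemons are updated)
    ceph config set mon auth_allow_insecure_global_id_reclaim false
    # The AUTH_INSECURE_GLOBAL_ID_RECLAIM* health warnings should then clear
    ceph health detail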
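Notes on results 9-11: a sketch for narrowing down where the slowdown sits, comparing vzdump modes against the same target and taking a raw Ceph baseline. VMID 100 and the storage name "pbs" are placeholders.

    # Same VM, same target, two modes; compare the throughput vzdump reports
    vzdump 100 --mode snapshot --storage pbs
    vzdump 100 --mode stop --storage pbs
    # Raw pool throughput baseline (writes test objects, then cleans them up)
    rados bench -p rpool 30 write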
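Notes on result 12: a sketch of the order the wiki procedures describe (PVE 6-to-7 first, then Ceph Nautilus-to-Octopus), not the full procedure; the pve6to7 checklist script ships with up-to-date PVE 6.4.

    # Run the built-in checklist on every node before touching packages
    pve6to7 --full
    # For the Ceph step, keep OSDs "in" while daemons restart, node by node
    ceph osd set noout
    # ... upgrade mons, then mgrs, then OSDs ...
    ceph osd unset noout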
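Notes on results 14-15: the "mon_command failed - command not known (500)" error is most likely the PVE 6 GUI calling a command the still-Luminous monitors do not know, which should resolve once the Ceph upgrade completes. Two checks worth running during the rolling upgrade:

    # Checklist script from the wiki procedure; rerun until it reports no failures
    pve5to6
    # Show which daemons still run Luminous vs. Nautilus during the upgrade
    ceph versions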
