Thanks for the reply.
1/ As you said, I reinstalled a PBS server on the external backup server (actually a Windows server).
2/ I physically moved all the HDDs from the backup server into the PBS server to increase my datastore volume as needed.
Hello,
I'm using Proxmox Backup Server. I've created a mounted datastore (/mnt/datastore) to back up all my VMs on an external backup server.
It works very well.
Now I want to back up this storage to an LTO-7 drive. I've tried installing the LTO drive physically in the PBS server. But...
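For context, once the LTO drive is cabled and visible to the OS, PBS should be able to see it too. A quick check, assuming the stock `proxmox-tape` CLI shipped with PBS (the drive name `lto7` and the `by-id` path are placeholders, not values from the post):

```shell
# List tape drives detected by the kernel; the LTO-7 drive should
# appear once the HBA, cabling and SCSI IDs are OK.
proxmox-tape drive scan

# If it shows up, register it in PBS under a name of your choice.
# Use the stable /dev/tape/by-id/ path rather than /dev/nstX.
proxmox-tape drive create lto7 --path /dev/tape/by-id/scsi-XXXXXX-nst
```

If `proxmox-tape drive scan` shows nothing, the problem is below PBS (driver, HBA or cabling), not in the datastore configuration.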
Hello,
I am using Ceph 17.2.7 with 3 hosts (CEPH01, CEPH02, CEPH03) and only 1 pool (named rpool in my example).
This pool uses the default CRUSH rule. All my disks (x12) are SATA HDDs only.
Now I want to have 2 new pools:
- SATAPOOL (for slow storage)
- SSDPOOL (for fast storage)...
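For reference, the usual way to split HDD and SSD pools in recent Ceph releases is with device classes rather than a hand-edited CRUSH map. A sketch, reusing the pool names from the post (the rule names and PG counts are just example values):

```shell
# Ceph auto-detects device classes; verify them first.
# Each OSD should show a CLASS column of hdd or ssd.
ceph osd tree

# Create one replicated CRUSH rule per device class.
ceph osd crush rule create-replicated satarule default host hdd
ceph osd crush rule create-replicated ssdrule default host ssd

# Create the two pools, each bound to its rule.
ceph osd pool create SATAPOOL 128 128 replicated satarule
ceph osd pool create SSDPOOL 128 128 replicated ssdrule
```

An existing pool such as rpool can also be re-pinned later with `ceph osd pool set rpool crush_rule satarule`; Ceph then rebalances its data onto the matching OSDs.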
Hello,
I am running two Proxmox clusters: one PVE cluster solely for the benefit of having a Ceph cluster running the RBD pool, so no VMs on that one, plus my actual VM cluster connected to this RBD pool.
I updated my PVE VM cluster to 6.4.13, then I updated my PVE Ceph cluster (3 nodes) to PVE 6.4.13 and...
Hello ,
Thank you. I've just updated my 3 Ceph cluster nodes to 14.2.22.
I only have the warning: mons are allowing insecure global_id reclaim. (No client warnings, probably because I use an external RBD pool?)
At this step, can I set the: ceph config set mon...
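(For reference: that health warning is normally cleared, but only after every mon, OSD and client runs a patched release, with the well-known setting below; this is presumably what the truncated command refers to, so double-check client versions before running it:)

```shell
# Confirm no unpatched clients are still connected.
ceph health detail

# Then stop accepting insecure global_id reclaim.
ceph config set mon auth_allow_insecure_global_id_reclaim false
```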
Hello,
I am running two Proxmox clusters: one PVE cluster solely for the benefit of having a Ceph cluster running the RBD pool, so no VMs on that one, plus my actual VM cluster connected to this RBD pool.
I updated my PVE VM cluster to 6.4.13; now I want to update my PVE Ceph cluster (3 nodes). I am...
I forgot to say that I've tried creating a VM on the local disk of a Proxmox node. No problem, the speed is correct: 1 Gbit/s (snapshot mode).
Whereas the speed on VMs stored in my Ceph pool is 30 Mbit/s. That's not normal...
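To narrow down whether the bottleneck is the Ceph pool itself or the backup path on top of it, a raw pool benchmark is a useful first step. A sketch using the stock `rados` tool (the pool name `rpool` is borrowed from an earlier post; substitute the real pool):

```shell
# 60-second sequential write benchmark against the pool;
# keep the objects so the read test has something to read.
rados bench -p rpool 60 write --no-cleanup

# Sequential read benchmark, then remove the benchmark objects.
rados bench -p rpool 60 seq
rados -p rpool cleanup
```

If `rados bench` already shows throughput far below the 10 Gbit link, the problem is in Ceph (network, OSDs, replication), not in the backup software.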
1/ My PBS uses local storage, so it sits on my mirrored 15,000 RPM SAS HDDs.
2/ I am using VM backup in snapshot mode. I've tried the "stop" mode; it is indeed faster, but that's not what I want. I don't want to stop VMs during backup.
I know people who have a similar environment...
Hello,
I'm using a PVE cluster of 4 nodes (v6.2.15) with an external Ceph storage of 3 nodes. My Proxmox nodes and Ceph nodes run on a 10 Gbit network.
I've tried several backup destinations (NAS, Proxmox Backup Server) but the result is the same: my backups are very, very slow...
hello,
I'm currently using a PVE cluster 6.2.15 with 4 nodes and an external Ceph storage based on a Ceph Nautilus 14.2.11 cluster with 3 nodes, where the VM disks are stored.
I would like to upgrade my PVE cluster from 6.x to 7.x and my Ceph cluster nodes from Nautilus to Octopus. What I've...
hello,
I'm currently using a PVE cluster 6.2.15 with 4 nodes and an external pool storage based on a Ceph Nautilus cluster with 3 nodes, where the VM disks are stored.
(I also have a physical HP Windows backup server with 8 SATA 4 TB disks in RAID 5.)
This infrastructure runs on a 10 Gbit...
OK, thank you. I upgraded the first Ceph node from PVE 5 to 6. It works, but I get an error in the Proxmox interface:
mon_command failed - command not known (500)
I think that's normal because I'm mid-way through the migration procedure; Luminous cannot fully work with PVE 6, can it?
Can I upgrade my 2 last nodes...
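During a mixed-version window like this, it helps to confirm what each daemon is actually running before touching the remaining nodes; assuming standard Ceph tooling available on any node:

```shell
# Show which Ceph release each mon/mgr/osd group reports;
# during the migration you will see both versions listed.
ceph versions

# Overall cluster state; HEALTH_WARN entries are expected mid-upgrade.
ceph -s
```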
Hi ,
I'm using three Proxmox nodes with a Ceph cluster (PVE 5.4.13 and Ceph Luminous 12.2.12).
I would like to upgrade PVE 5.x to 6.x and then upgrade Luminous to Nautilus. I read the upgrade procedure: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
When I run the pve5to6 script, I...