So hopefully this is an easy question someone can explain to me; I am not the best when it comes to networks and networking.
I have four NICs on each server: 2x 1G and 2x 10G.
I have a vmbr0 for Proxmox management, 192.168.x.x.
Now I am going to be setting up Ceph on the two 10G ports. My...
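For what it's worth, a dedicated Ceph setup on the two 10G ports usually comes down to giving each one a static address in /etc/network/interfaces. A minimal sketch, assuming placeholder interface names and subnets that are not from the original post:

    # Ceph public network on the first 10G port (names/subnets are examples)
    auto enp5s0f0
    iface enp5s0f0 inet static
        address 10.10.10.11/24

    # Ceph cluster (OSD replication) network on the second 10G port
    auto enp5s0f1
    iface enp5s0f1 inet static
        address 10.10.20.11/24

Each node would get its own address in those subnets, while vmbr0 stays on the 1G ports for management.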
So I have never had to do this, so I thought I would ask to make sure it will work and I don't lose data on the CephFS pool.
First, I have been replacing my HDD OSDs with SSDs. I have moved my VM/CT storage to the new CRUSH rule for the SSDs. Now the part I am not 100% certain about is that...
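For anyone following along, device-class CRUSH rules are what make this kind of split work. A rough sketch of the commands involved, with placeholder rule and pool names:

    # Create a replicated rule that only targets SSD-class OSDs
    ceph osd crush rule create-replicated replicated_ssd default host ssd

    # Point an existing pool at the new rule; Ceph migrates the data itself
    ceph osd pool set vm-storage crush_rule replicated_ssd

The data moves in the background while the pool stays online, so nothing should be lost as long as the cluster returns to HEALTH_OK between steps.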
So currently I am running a 4-node Proxmox Ceph cluster (adding a 5th in two months). The hardware is the same on all the servers. I have seen a lot of different posts about the Ceph network / management network / corosync network / VM network / and last but not least the backup network.
I am wanting...
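Since corosync keeps coming up in those posts: the usual advice is to give it its own quiet link, with a second link as fallback. A hypothetical excerpt of /etc/pve/corosync.conf (the addresses are placeholders, and config_version in the totem section has to be bumped when editing):

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.10.11   # dedicated corosync network
        ring1_addr: 192.168.1.11    # fallback, e.g. the management network
      }
      # ...one node block per server...
    }

Corosync cares about latency far more than bandwidth, which is why a dedicated 1G link is generally preferred over sharing the busy 10G Ceph ports.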
So I have a template file called makevm.sh. Inside I have all my qm set and import commands to create VMs. It has worked great for a long time, but now I want to add an EFI disk so I can change the BIOS type. I have the BIOS set with qm set 188 --bios ovmf and it works. Then I am trying to...
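In case it is useful, qm can allocate the EFI vars disk for you; something like the following should slot into the script, assuming local-lvm is the target storage (that part is a guess):

    # Switch the VM to OVMF and give it an EFI vars disk; the ":1" tells
    # qm to allocate a new volume on the named storage
    qm set 188 --bios ovmf
    qm set 188 --efidisk0 local-lvm:1,efitype=4m

On newer Proxmox releases you can also append pre-enrolled-keys=1 if you want Secure Boot keys preloaded.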
Checking to see if anyone has tried using an mSATA SSD to USB 3.0 converter as the Proxmox boot drive.
1. If so, have you had any problems?
2. Any endurance issues with this type of SSD?
I am trying to free up another drive bay in several servers if this is a viable approach.
Looking to see if anyone uses these disks in a Ceph cluster: SanDisk LB406S 400GB 2.5" SAS SSD 6Gbps.
If so, can you tell me whether they are performing well for you, or if these drives should be avoided?
So I am moving my Ceph OSDs from SAS drives to SSDs. I was able to move the VMs to the new SSD Ceph pool with ease using Move Disk in the VM hardware tab. But there is no option to move the CloudInit drive (ide2) over to the new SSD Ceph pool.
So right now in my HDD Ceph pool...
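One workaround, since the cloud-init drive is generated from the VM config rather than holding unique data: delete it and recreate it on the new pool. A sketch with placeholder IDs (<vmid> and ssd-pool are not from the original post):

    # Remove the old cloud-init drive from the HDD pool
    qm set <vmid> --delete ide2

    # Recreate it on the SSD pool; the special "cloudinit" volume name
    # tells qm to generate a fresh cloud-init image there
    qm set <vmid> --ide2 ssd-pool:cloudinit

The user data, network config, and other cloud-init settings live in the VM config, so recreating the drive should not lose anything.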
I might have missed the mark, but I have a simple question to see if I'm correct or if I need to get a few more parts.
Currently I have a three-node Ceph cluster that was deployed through the Proxmox GUI. I have 2x 10Gb ports, one for public and the other for private Ceph traffic. Then I have 2...
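For context, that public/private split usually shows up as two subnets in /etc/pve/ceph.conf; a hypothetical excerpt with placeholder subnets:

    [global]
        public_network  = 10.10.10.0/24   # monitors and client traffic
        cluster_network = 10.10.20.0/24   # OSD replication and heartbeats

Everything else follows from which subnet each 10Gb port is addressed in.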
I am curious if you can point me in the right direction for this, or if I am even looking for the right thing.
In short, I have a Ceph cluster with two different rule sets for HDD and SSD. I use my SSD pool to run all my VMs and big workloads. Since I have transitioned half of my SAS HDDs to...
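For the drives being retired, the usual drain-then-remove sequence looks roughly like this (osd.7 is a placeholder ID):

    # Stop new data landing on the OSD and let Ceph rebalance off it
    ceph osd out osd.7

    # Once the cluster is back to HEALTH_OK, confirm it is safe to remove
    ceph osd safe-to-destroy osd.7

    # Stop the daemon on its node, then remove it from the cluster
    systemctl stop ceph-osd@7
    ceph osd purge osd.7 --yes-i-really-mean-it

Waiting for HEALTH_OK between each removal is what keeps the data safe during the transition.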
Question to see if this will work or if it works only one way.
Right now, if I want to use the user data in snippets, I set it with qm set 900 --cicustom "user=proxmoxnfs-iso:snippets/user.yaml" after I clone a template, and it works fine.
What I have been trying to do, and it does not seem...
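For anyone comparing notes, --cicustom accepts several sections in one call; the extra snippet name here is a placeholder, not from the original post:

    # User and network data can each point at their own snippet file
    qm set 900 --cicustom "user=proxmoxnfs-iso:snippets/user.yaml,network=proxmoxnfs-iso:snippets/network.yaml"

Each referenced file has to be a valid cloud-init YAML document on a storage with the snippets content type enabled.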