So with the help of a forum member I was able to bond my two 10G NICs and set it up on the switch! I also created two VLANs for the Ceph public and Ceph cluster networks, and then a third VLAN on the same bond for a migration network!
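For anyone trying the same thing, here is a minimal sketch of what the `/etc/network/interfaces` setup could look like. The interface names, VLAN tags, and addresses are all assumptions; adjust them for your hardware and switch config:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1   # the two 10G NICs (names are assumptions)
    bond-miimon 100
    bond-mode 802.3ad               # LACP; must match the switch-side config
    bond-xmit-hash-policy layer3+4

auto bond0.50
iface bond0.50 inet static          # VLAN 50: Ceph public (tag is an assumption)
    address 10.10.50.11/24

auto bond0.51
iface bond0.51 inet static          # VLAN 51: Ceph cluster
    address 10.10.51.11/24

auto bond0.52
iface bond0.52 inet static          # VLAN 52: migration network
    address 10.10.52.11/24
```

The 802.3ad mode needs LACP configured on the matching switch ports; if your switch can't do that, balance-xor or active-backup are alternatives.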
So hopefully this is an easy question someone can explain to me; I am not the best when it comes to networking.
I have four NICs on each server: 2x 1G and 2x 10G.
I have a vmbr0 for Proxmox management (192.168.x.x).
Now I am going to be setting up Ceph on the two 10G ports. My...
So I have never had to do this before, so I thought I would ask to make sure it will work and I don't lose data on the CephFS pool.
First, I have been replacing my HDD OSDs with SSDs. I have moved my VM/CT storage to the new CRUSH rule for the SSDs. Now the part I am not 100% certain about is that...
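For reference, the SSD-only CRUSH rule part can be sketched roughly like this; the rule name and pool name here are assumptions, not from the original post:

```shell
# Create a replicated CRUSH rule that only places data on SSD-class OSDs
ceph osd crush rule create-replicated ssd-only default host ssd

# Point an existing pool at the new rule (pool name is an assumption)
ceph osd pool set vm-storage crush_rule ssd-only

# Watch the rebalance finish before pulling any HDD OSDs
ceph -s
```

Data migrates in the background once the pool's rule changes, so it's worth waiting for HEALTH_OK before removing the old OSDs.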
So currently I am running a 4-node Proxmox Ceph cluster (adding a 5th in two months). The hardware is the same on all the servers. I have seen a lot of different posts about the Ceph network, management network, Corosync network, VM network, and last but not least the backup network.
I am wanting...
So I was looking through my InfluxDB that is collecting the stats from my Proxmox cluster, and wondering if I am missing something or the data just doesn't get sent over. I am looking for the Ceph metrics but am not seeing them. Is there an alternative way to collect them if they are not being sent...
So based on your statement, unprivileged containers are not safe to use for any forward-facing web services? Care to elaborate? I do disagree, but maybe I just don't know enough, so if you could expand on why, that would be great.
Just create an LXC container and use a mount point to the zpool. It will be much easier to manage the data in the long run. It truly runs great; I have been doing it like this for a long time, and there are fewer abstraction layers!
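As a rough example of the mount point approach (the container ID, dataset path, and target path here are assumptions):

```shell
# Attach a host directory/dataset to an existing container as a bind mount point
pct set 101 -mp0 /tank/media,mp=/mnt/media
```

The data stays in the pool on the host, so you can still work with it directly there, and the container just sees it at `/mnt/media`.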
It may not be what @bbgeek17 described, but it's just strange that the issue popped up after the NAS change; it could be related to the issue you linked above. It's hard to say more unless we can look at some logs.
I think it somehow got messed up when you changed around the NAS stuff, just from reading through these posts. Hope you were able to reload and take another stab at it. Proxmox is solid and does just about anything you could ever need!
So I am assuming you have a PCIe card that has four M.2 slots? If that is the setup, then to run more than one M.2 drive on that card, your BIOS must support PCIe bifurcation (e.g. splitting a x16 slot into x4/x4/x4/x4) and must have it enabled for it to work.
So I have a template file called makevm.sh. Inside I have all my qm set and import commands to create VMs. It has worked great for a long time, but now I want to add an EFI disk so I can change the BIOS type. I now have the BIOS set with `qm set 188 --bios ovmf` and it works. Then I am trying to...
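In case it helps anyone with the same template approach, a minimal sketch of pairing the OVMF BIOS setting with an EFI vars disk (the storage name `local-lvm` is an assumption; use whatever storage your script targets):

```shell
# Switch the VM to UEFI firmware
qm set 188 --bios ovmf

# Allocate the EFI vars disk on the target storage (storage name is an assumption)
qm set 188 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
```

Without an efidisk, OVMF still boots but loses its settings on shutdown, so adding both lines to the template keeps the firmware config persistent.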
Checking to see if anyone has tried using an mSATA SSD in a USB 3.0 converter as the Proxmox boot drive.
1. If so, have you had any problems?
2. Any endurance issues with this type of SSD?
I am trying to free up another drive bay in several servers if this is a viable approach.