Great, thank you for the information! The config for Ceph shows:
osd_pool_default_min_size = 2
osd_pool_default_size = 3
So that means at least 2 replicas need to be available for Ceph to keep working properly?
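Roughly, yes: with size = 3 and min_size = 2, Ceph keeps three copies of each object, keeps serving I/O (degraded) while at least two copies are reachable, and blocks I/O once a placement group drops below min_size. A small sketch of that rule (this helper is hypothetical, not part of Ceph; it just illustrates how the two settings interact):

```python
# Hypothetical helper illustrating osd_pool_default_size /
# osd_pool_default_min_size behaviour for a replicated pool.
def pool_state(available_replicas: int, size: int = 3, min_size: int = 2) -> str:
    """Return a rough health label for a replicated pool.

    size     -- replicas Ceph tries to keep (osd_pool_default_size)
    min_size -- replicas required before I/O is blocked
                (osd_pool_default_min_size)
    """
    if available_replicas >= size:
        return "healthy"                # all replicas present
    if available_replicas >= min_size:
        return "degraded-but-writable"  # I/O continues while recovery runs
    return "io-blocked"                 # below min_size: PGs go inactive

print(pool_state(3))  # healthy
print(pool_state(2))  # degraded-but-writable
print(pool_state(1))  # io-blocked
```

So losing one node out of four leaves the pool degraded but writable; losing a second node can push some placement groups below min_size and freeze I/O on them.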
Hi there,
We have a cluster that has 4 nodes with Ceph storage. One of those nodes died yesterday, which puts the cluster in danger of freezing if another node goes down. I'm working on getting the dead node repaired and back online, but it will take a few weeks to get parts for it and get it...
I have been updating my Proxmox cluster over the past couple of weeks, going from Proxmox 5.x up to its current level, Proxmox 7.
I had some issues when going to Proxmox 7 where I basically had to rebuild my Ceph OSDs on each of my nodes because they weren't mounting properly. I basically...
Hi,
This is more of a quick and probably easily answered question. When I start a backup of a VM in snapshot mode, it freezes the filesystem, makes a snapshot, thaws the filesystem, and then writes a backup of the VM to a file. Is that correct? Would files changing during that backup...
Hi all,
I have a node in one of my clusters whose disk is completely full, and I can't find out what's taking up the space, so I'd like to reformat it, reinstall Proxmox, and rejoin it to my cluster.
It also has 8 OSDs which are part of the cluster's Ceph storage.
What is the best way to...
Hi there,
My ZFS filesystem for / is showing as completely full, but I can't figure out where the space went. We had an unexpected power loss the other day right when this system was running backups, so I suspect that might be the cause.
Here's the output from zfs list:
NAME USED AVAIL...
OK, I got Proxmox installed and did the network this way:
(Not exact format, just ad-libbing it)
# LAN side on pfSense (172.16.0.1 = pfSense VM)
vmbr0:
    address 172.16.0.2/24
    gateway 172.16.0.1
    bridge-ports none
    bridge-stp off
    bridge-fd 0
# WAN (eno2 is connected to WAN switch)
vmbr1...
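Since the above is ad-libbed, here is a fuller sketch of the same idea in /etc/network/interfaces syntax. This is an assumption on my part, not the poster's exact config: the interface name eno2 comes from the post, but the vmbr1 stanza and the auto/iface lines are hypothetical.

```
# LAN bridge with no physical port -- the pfSense VM's LAN NIC attaches here
auto vmbr0
iface vmbr0 inet static
    address 172.16.0.2/24
    gateway 172.16.0.1
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# WAN bridge on eno2; the Proxmox host itself gets no IP on the WAN side
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```

With this layout, all LAN traffic from the host and guests goes through the pfSense VM at 172.16.0.1, and only pfSense's WAN NIC sits on vmbr1.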
Hi all,
I've read a bunch of articles about this, but I can't seem to find a specific article for my use case. Here it is:
I will be hosting a web server on public WAN and would like pfSense to be the firewall that only allows traffic from ports 80/443 into the DMZ to that web server. The...
Go ahead and mark this SOLVED. I ran rbd ls -l RBD and saw that an unused disk image was left over from a VM I had created new disks on. I then issued rbd rm <disk image name> -p RBD and that cleaned it up.
I can now do migrations again. :)
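The cleanup above boils down to spotting images in the pool that no VM config references, then removing them with rbd rm. A small sketch of that check (the image names here are hypothetical; the real lists would come from rbd ls -l RBD and from the disks referenced in each VM's config):

```python
# Hypothetical orphan check: compare images present in the RBD pool
# (as `rbd ls -l RBD` would report) against the disks referenced by
# VM configs, and flag the leftovers for `rbd rm <image> -p RBD`.
def find_orphans(pool_images, referenced_disks):
    """Return pool images that no VM config references, sorted."""
    return sorted(set(pool_images) - set(referenced_disks))

pool_images = ["vm-101-disk-0", "vm-101-disk-1", "vm-102-disk-0"]
referenced = ["vm-101-disk-0", "vm-102-disk-0"]  # gathered from VM configs

print(find_orphans(pool_images, referenced))  # ['vm-101-disk-1']
```

Double-check each candidate against every node's VM configs before removing it, since rbd rm is not reversible.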
Hi all,
I just recently updated my cluster from 6.0 to 6.1-7. I currently have 4 nodes, each with 8 OSDs and storage called RBD which contains all of my disk images. The VMs are working fine, but when I try to migrate one to another node, I get the following error:
2020-02-06 12:36:11 starting...
Hi,
I configured my servers in a new Proxmox cluster we're building to use the BIOS-based watchdog, which in turn enabled the softdog kernel module in Proxmox VE. Is this watchdog better/worse/same than using the IPMI watchdog?
Also, my system gives me three options for what do when the...
I need to free up one of my 1G NICs on my cluster for a new network, however, I only have 1 free NIC and it's a 10G NIC. Currently I have my cluster network on a separate network. Would putting the cluster network on the Ceph network (10G) cause many issues? I can see it causing issues if it was...