Search results

  1. Adding a node to a cluster with Ceph storage

    Great, thank you for the information! The config for Ceph shows: osd_pool_default_min_size = 2, osd_pool_default_size = 3. So that means at least 2 nodes need to be up for Ceph to work properly?
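The values quoted in that snippet correspond to a ceph.conf fragment like the following (a sketch for illustration; the `[global]` placement is an assumption, though it is where pool defaults normally live):

```ini
[global]
# Keep 3 replicas of every object...
osd_pool_default_size = 3
# ...but keep serving I/O as long as at least 2 replicas are active.
# Placement groups that fall below min_size pause reads and writes.
osd_pool_default_min_size = 2
```

With these defaults, a pool keeps serving I/O while at least two replicas remain reachable and pauses once it drops below two.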
  2. Adding a node to a cluster with Ceph storage

    Hi there, We have a cluster that has 4 nodes with Ceph storage. One of those nodes died yesterday, which puts the cluster in danger of freezing if another node goes down. I'm working on getting the dead node repaired and back online, but it will take a few weeks to get parts for it and get it...
  3. Is my ceph.conf file correct

    I have been updating my Proxmox cluster over the past couple of weeks, going from Proxmox 5.x up to its current level of Proxmox 7. I had some issues when going to Proxmox 7 where I basically had to rebuild my Ceph OSDs on each of my nodes because they weren't mounting properly. I basically...
  4. VM Backup with snapshot mode

    Hi, This is more of a quick and probably easily answered question. When I start a backup of a VM in snapshot mode, it freezes the filesystem, makes a snapshot, and then thaws the filesystem and writes a backup of the VM to a file. Is that correct? Would files changing during that backup...
  5. Removing a node from cluster properly from command line

    Hi all, I have a node in one of my clusters whose disk is completely full, and I can't find out what's taking up the space, so I'd like to reformat it, reinstall Proxmox, and rejoin it to my cluster. It also has 8 OSDs which are part of the cluster's Ceph storage. What is the best way to...
  6. My rpool/ROOT/pve-1 filesystem is full

    Results from both of those commands:

        root@alderaan:/var# du -h -x -d1 /
        512     /media
        715M    /usr
        8.9M    /bin
        1.0K    /lib64
        3.9M    /etc
        8.2M    /sbin
        512     /home
        357M    /lib
        173K    /root
        28K     /tmp
        1.0K    /mnt
        512     /srv
        512     /opt
        344M    /var
        95M     /boot
        1.5G    /...
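A hedged sketch of the same kind of investigation, piping du through sort so the largest directories land at the bottom (GNU coreutils flags; the starting path is just an example):

```shell
# Summarize disk usage one level deep (-d1), staying on this
# filesystem (-x), in human-readable units (-h), then sort by
# human-readable size (-h) so the biggest entries print last.
du -h -x -d1 / 2>/dev/null | sort -h
```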
  7. My rpool/ROOT/pve-1 filesystem is full

    Hi there, My ZFS filesystem for / is showing as completely used up, but I can't find where the space went. We had an unexpected power loss the other day right when this system was running backups, so I suspect that might be the cause. Here's the output from zfs list: NAME USED AVAIL...
  8. Proxmox hosting pfSense as a firewall to DMZ web server

    OK, I got Proxmox installed and did the network this way: (Not exact format, just ad-libbing it)

        # LAN side on pfSense (172.16.0.1 = pfSense VM)
        vmbr0:
            address 172.16.0.2/24
            gateway 172.16.0.1
            bridge-ports none
            bridge-stp off
            bridge-fd 0

        # WAN (eno2 is connected to WAN switch)
        vmbr1...
  9. Proxmox hosting pfSense as a firewall to DMZ web server

    Hi all, I've read a bunch of articles about this, but I can't seem to find a specific article for my use case. Here it is: I will be hosting a web server on public WAN and would like pfSense to be the firewall that only allows traffic from ports 80/443 into the DMZ to that web server. The...
  10. Upgraded from PVE 6.0.x to 6.1-7, cannot migrate VMs due to Ceph error?

    Go ahead and mark this SOLVED. I ran this command: rbd ls -l RBD and saw that an unused disk image was out there from a VM I had created new disks on. I went ahead and issued an rbd rm <disk image name> -p RBD and that cleaned it up. I can now do migrations again. :)
  11. Upgraded from PVE 6.0.x to 6.1-7, cannot migrate VMs due to Ceph error?

    Hi all, I just recently updated my cluster from 6.0 to 6.1-7. I currently have 4 nodes, each with 8 OSDs and storage called RBD which contains all of my disk images. The VMs are working fine, but when I try to migrate one to another node, I get the following error: 2020-02-06 12:36:11 starting...
  12. Is softdog better/worse/same as watchdog_ipmi?

    Hi, I configured my servers in a new Proxmox cluster we're building to use the BIOS-based watchdog, which in turn enabled the softdog kernel module in Proxmox VE. Is this watchdog better/worse/same than using the IPMI watchdog? Also, my system gives me three options for what to do when the...
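For context, Proxmox VE falls back to the softdog kernel module unless a hardware watchdog module is configured; per the Proxmox VE HA documentation, the module is selected in /etc/default/pve-ha-manager. A sketch of switching to the IPMI watchdog (verify that ipmi_watchdog is the right module for your BMC):

```
# /etc/default/pve-ha-manager
# select watchdog module (default is softdog)
WATCHDOG_MODULE=ipmi_watchdog
```

A reboot (or reloading the watchdog-mux service) is needed for the change to take effect.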
  13. Cluster and Ceph network on 10G

    I need to free up one of my 1G NICs on my cluster for a new network; however, I only have one free NIC, and it's a 10G NIC. Currently I have my cluster network on its own separate network. Would putting the cluster network on the Ceph network (10G) cause many issues? I can see it causing issues if it was...
