Search results

  1. iSCSI inconsistency in 3-node cluster

    What does iscsiadm -m discovery -t st -p freenas:3260 -l show?
  2. Using external Ceph cluster

    So the biggest issue was that I had not created the RBD pool on the storage side ... duh! After the pool was created, all I needed to do was put the keyring in /etc/pve/priv/ceph/ and use the GUI to add the RBD storage. I'm now able to use my external Ceph cluster as an RBD storage backend for VMs.
  3. Using external Ceph cluster

    So it looks like the Ceph binaries installed on the Proxmox hosts are 12.2. Would the recommended action be to install Nautilus from the croit or Proxmox deb repo (a repo sketch follows these results)? root@prdopspve02:~# ceph osd get-require-min-compat-client 2020-12-18 09:51:09.288536 7f1535b4e700 -1 Errors while...
  4. Using external Ceph cluster

    Our external Ceph cluster is Nautilus.
  5. Using external Ceph cluster

    We have a Ceph cluster running outside of our Proxmox cluster and I wanted to add this storage to our Proxmox cluster. I tried to follow the instructions, which were: add the keyring as /etc/pve/priv/ceph/[storageID].keyring and then add an rbd: [storageID] entry to /etc/pve/storage.cfg (a config sketch follows these results). I now see the rbd...
  6. Failover and high availability network bonding

    If your switches support 802.3ad, then that is what you should use. Also make sure you change the hash policy on the bond to layer2+3, which should give you maximum throughput across all the bonded interfaces (see the bond sketch after these results).
  7. Proxmox UI reports wrong memory usage

    So I feel that is wrong, since the buffer/cache memory is reclaimable for use by applications and is not really "used", only temporarily used. Looking at the UI now, it looks like all of my VMs are bumping up against the red line, but in reality they still have a good amount of available memory.
  8. Proxmox UI reports wrong memory usage

    We have a few Linux VMs (Debian Buster) where Proxmox is reporting much higher usage than what the VM is actually using. In the UI I get this: 93.96% (15.03 GiB of 16.00 GiB) $ free -h total used free shared buff/cache available Mem: 15Gi 6.4Gi... (see the free -h note after these results)
  9. Another iscsi/multipath question

    @wolfgang I use LVM on the LUN since that is WAY easier :) and it's also very similar to the ESXi setup using VMFS. Our SAN backend is an EMC VNXe, and from my testing so far these settings work great, after I figured out that the MTU on Linux wasn't set because I tried to set it to 9216. After setting...
  10. Another iscsi/multipath question

    I "think" I have this configured correctly but wanted to see if others might know better. Step 1: in the GUI, add a new iSCSI storage pointing to the iSCSI portal and adding 1 target (also uncheck "Use LUNs directly"). Step 2: add node.startup = automatic to /etc/iscsi/iscsid.conf (a multipath/iscsid sketch follows these results) (restart iscsi...
  11. [SOLVED] network configuration recommendation

    I will answer my own question, I guess... What I decided to do was to create 2 more bridges, each bridge having 1 NIC and 2 VLANs: bridge1 -> nic3 -> vlan 1/2 (SAN), bridge2 -> nic4 -> vlan 1/2 (SAN). See the bridge sketch after these results. This way I can set the MTU to something higher for the storage...
  12. [SOLVED] network configuration recommendation

    We are currently testing Proxmox as a replacement for our ESXi farm (no license) where we have 20+ servers. Currently our ESXi hosts have 1 bond (2x1Gb) and 2x1Gb of multipath iSCSI (2 paths each) to our storage. The bond has VLAN tagging and all VMs connect to this "bridge". I'm wondering the...
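
For results 2 and 5, a minimal sketch of the external-Ceph storage setup, assuming a hypothetical storage ID "ceph-ext", pool "vm-pool", and placeholder monitor addresses; the keyring filename must match the storage ID used in the storage definition.

    # Keyring copied from the external cluster, at the path Proxmox expects
    # (storage ID "ceph-ext" is an assumption):
    #   /etc/pve/priv/ceph/ceph-ext.keyring

    # /etc/pve/storage.cfg entry (monitor IPs, pool and user are placeholders)
    rbd: ceph-ext
        monhost 192.0.2.11 192.0.2.12 192.0.2.13
        pool vm-pool
        username admin
        content images
        krbd 0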
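
For results 3 and 4, a hedged sketch of pulling Nautilus client packages onto a Proxmox 6 (Buster) host from the Proxmox Ceph repository and re-checking the compat setting; verify the repository line against current Proxmox documentation before using it.

    # Assumption: the Proxmox ceph-nautilus repo for Debian Buster
    echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" \
        > /etc/apt/sources.list.d/ceph.list
    apt update && apt install ceph-common   # upgrades the client tools to 14.x
    ceph --version                          # should now report a Nautilus version
    ceph osd get-require-min-compat-client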
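
For result 6, a minimal /etc/network/interfaces sketch of an LACP (802.3ad) bond with the layer2+3 transmit hash policy; the NIC names and addresses are placeholders, and the switch ports must be configured for LACP as well.

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2              # placeholder NIC names
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24              # placeholder management address
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0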
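
For results 7 and 8, the point is that the "available" column of free already subtracts reclaimable buffer/cache; an illustrative reading inside the guest, where total and used come from the snippet and the remaining figures are made up to show the relationship:

    $ free -h
                  total        used        free      shared  buff/cache   available
    Mem:           15Gi       6.4Gi       0.4Gi       0.1Gi       8.2Gi       8.3Gi
    # used + free + buff/cache adds up to total; the hypervisor counts the
    # cache as consumed, but "available" (free plus reclaimable cache) is
    # what applications can still allocate.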
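
For results 9 and 10, a sketch of the open-iscsi, multipath and LVM pieces mentioned there; device names are placeholders and any EMC VNXe-specific multipath device section is deliberately omitted.

    # /etc/iscsi/iscsid.conf -- log in to known targets automatically at boot
    node.startup = automatic

    # /etc/multipath.conf -- minimal defaults, vendor-specific tuning omitted
    defaults {
        user_friendly_names yes
        find_multipaths yes
    }

    # LVM on top of the multipath device (device name is a placeholder)
    pvcreate /dev/mapper/mpatha
    vgcreate san-vg /dev/mapper/mpatha

    # /etc/pve/storage.cfg -- shared LVM storage over the iSCSI LUN
    lvm: san-lvm
        vgname san-vg
        content images
        shared 1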
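
For results 11 and 12, a sketch of one of the per-NIC storage bridges with a raised MTU (the second bridge mirrors it on nic4); NIC names, VLAN IDs and the MTU value are assumptions, and the MTU must also be raised on the physical switch ports and the SAN side.

    auto nic3
    iface nic3 inet manual
        mtu 9000

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports nic3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2 3                    # the two SAN VLAN IDs are placeholders
        mtu 9000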