redundancy

  1. A

    [SOLVED] Disk format and configuration 2x Proxmox HV with NetApp SAN Storage via FC

    Hi all, we are in the process of updating our production servers and plan to start with two HPE servers equipped with 32G 2P FC HBA adapters, connected to a NetApp MetroCluster SAN storage. The storage setup provides approximately 3TB on each of the two LUNs, which are configured with...
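
    For shared FC LUNs like this, the usual pattern is dm-multipath on top of each LUN and a shared LVM volume group on the multipath device. A minimal sketch, assuming the LUN shows up as /dev/mapper/mpatha and the storage/VG names are placeholders:

        apt install multipath-tools
        multipath -ll                      # verify both FC paths per LUN
        pvcreate /dev/mapper/mpatha
        vgcreate vg_san1 /dev/mapper/mpatha
        pvesm add lvm san-lun1 --vgname vg_san1 --shared 1 --content images,rootdir

    Run the pvcreate/vgcreate on one node only, then add the storage cluster-wide; note that thick LVM on a SAN gives no thin provisioning or snapshots.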
  2. C

    Poor man's redundancy options

    Hi all, I set up a cluster with 2 nodes running Proxmox 8.2 + 1 qdevice. I have a VM that provides critical network services (DNS, proxy server, etc.). This VM rarely changes, so live replication is not required. What I'm looking for: a way to have this VM shut down, but available to start up...
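
    One low-effort approach is a scheduled vzdump of the VM to storage both nodes can reach, restored manually on the survivor when needed. A sketch, assuming VMID 100 and storage names that are placeholders:

        vzdump 100 --storage backup-nfs --mode snapshot --compress zstd
        # after a node failure, on the surviving node:
        qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs

    If both nodes run ZFS, Proxmox's built-in storage replication (pvesr) is another fit, since a rarely-changing VM replicates almost for free.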
  3. Y

    Ceph - 4 Node NVME Cluster Recommendations

    I need to build a 4-node Ceph cluster and I need an effective capacity of 30TB. Each node is configured like this: Supermicro 2U Storage Server, 24 x NVMe, X11DPU, dual 1600W, 2 x Intel Xeon Gold 6240 18-core 2.6GHz processors, 768GB RAM, 2 x 100G network, 6 x Samsung PM9A3 3.84TB PCIe 4.0 2.5...
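
    A rough capacity check for this build, assuming a replicated pool with the default size=3 and only the six listed drives per node:

        raw per node : 6 x 3.84 TB        = 23.04 TB
        cluster raw  : 4 x 23.04 TB       = 92.16 TB
        after 3x rep : 92.16 TB / 3       = 30.72 TB
        at ~80% fill : 30.72 TB x 0.8     ≈ 24.6 TB usable

    Since Ceph warns at 85% full by default and needs headroom to rebalance after a node loss, hitting an effective 30TB likely means more drives per node or size=2 (with its durability trade-off).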
  4. L

    Redundant network links to storage

    Proxmox cluster consisting of 3+ nodes and TrueNAS storage. NFS share for VM disks and CT volumes. Dedicated network for storage access, connected via a 10G fiber switch and interfaces. We experienced a failure when the link between a node and the storage went down. All the Linux guests continued to...
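
    The common fix is to give the storage network two links per node in a bond, so one failed link or switch port doesn't stall NFS. A sketch for /etc/network/interfaces, with interface names and addressing as placeholders:

        auto bond1
        iface bond1 inet static
            address 10.10.10.11/24
            bond-slaves ens1f0 ens1f1
            bond-mode active-backup
            bond-primary ens1f0
            bond-miimon 100
        # dedicated storage network; TrueNAS reachable at 10.10.10.1

    active-backup needs no switch support; 802.3ad (LACP) adds aggregation but requires matching switch configuration, and the TrueNAS side needs the same treatment.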
  5. F

    Cold Redundancy of a VM in a cluster of two nodes

    Hello, I need help managing the redundancy of a VM. At the moment I have two physical machines, MAIN and BACKUP, in a cluster (Proxmox 7.3), both configured with ZFS RAID1 (directly from the Proxmox installation). The two machines each have two 1TB SSDs. I know I can't have HA since I don't have 3 nodes...
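
    With ZFS on both nodes, Proxmox's built-in storage replication can keep a near-current copy of the VM's disks on the other node. A sketch, assuming VMID 100, a zfspool storage with the same name on both machines, and the node name BACKUP:

        pvesr create-local-job 100-0 BACKUP --schedule "*/15"
        pvesr list

    After MAIN fails, the VM can be started on BACKUP from the last replicated state (at most 15 minutes old here). Note a two-node cluster loses quorum when one node dies, so a qdevice or a deliberate pvecm expected 1 is part of the recovery story.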
  6. L

    CEPH: osd_pool_default_size > # nodes possible?

    Hello all, I have allocated two nodes in my PVE cluster for storage. The idea is to use CephFS wherever possible, and possibly set up Ganesha NFS or Samba file-share exports in containers on the storage nodes for applications that need them. I don't particularly trust the disks I have now...
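
    It is possible: the replica count is limited by the CRUSH failure domain, not the node count, so with two storage nodes you can keep size=3 by placing replicas per OSD instead of per host. A sketch, with rule and pool names as placeholders:

        ceph config set global osd_pool_default_size 3
        ceph osd crush rule create-replicated rep_osd default osd
        ceph osd pool set cephfs_data crush_rule rep_osd

    The trade-off is real: with an osd-level failure domain, two or three replicas of a PG can sit on the same host, so losing one node can still make the pool unavailable.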
  7. K

    1 drive internal and 1 external. How to provide redundancy?

    I'm using an Intel NUC 12 i5 with 64GB of RAM, a single internal NVMe 2TB drive (where Proxmox is installed with ZFS) and a USB-C NVMe 2TB external drive. In Proxmox I have 1 VM with Proxmox Backup Server, which backs up some remote CTs/VMs that should take around 300GB of disk space in total...
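
    Since the pool lives on the single internal NVMe, the external drive can be attached to the existing top-level disk to convert it into a mirror. A sketch, with device ids as placeholders:

        zpool status rpool                              # note the current data device
        zpool attach rpool <internal-nvme-id> /dev/disk/by-id/<usb-nvme-id>
        zpool status rpool                              # watch the resilver

    Caveats: mirror writes complete at the speed of the slower side, and USB enclosures that drop out will repeatedly degrade the pool, so many people prefer treating the external disk as a backup target instead (see the next thread).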
  8. K

    How to provide redundancy with an external drive and ZFS?

    I'm using Proxmox on an Intel NUC using ZFS. It has a single disk (a fast NVMe) and an external disk attached (a slow HDD, which I could replace with a faster NVMe). I'm using Proxmox Backup Server virtualized in a VM, which backs up some local and remote VMs and CTs. Other than that, on the same...
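
    The backup-target alternative: keep the external disk as its own pool and push snapshots to it, so a flaky USB link degrades backups rather than the live pool. A sketch, with pool and dataset names as placeholders:

        zpool create -o ashift=12 extpool /dev/disk/by-id/<external-disk>
        zfs snapshot -r rpool/data@nightly
        zfs send -R rpool/data@nightly | zfs receive -F extpool/data

    Subsequent runs can use incremental sends (zfs send -R -I rpool/data@previous rpool/data@nightly) to move only the changes.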
  9. L

    [SOLVED] Redundancy fails when node fails?

    We use Ceph as the filesystem on a cluster with 7 nodes. This cluster is used for testing, development and more. Today one of the nodes died. Since all the LXC and KVM guests are stored on Ceph storage, their disks are all still there, but the configuration of the guests is not available since it's stored on the node...
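
    The guest configs live in the cluster-wide pmxcfs, so they can be reassigned from the dead node by moving the files on any surviving node. A sketch; node names and VMIDs are placeholders, and this is only safe once the failed node is confirmed down:

        mv /etc/pve/nodes/deadnode/qemu-server/100.conf /etc/pve/nodes/livenode/qemu-server/
        mv /etc/pve/nodes/deadnode/lxc/101.conf /etc/pve/nodes/livenode/lxc/

    The guests then appear on the surviving node and can be started against their intact Ceph disks. Starting a guest twice this way corrupts it, which is exactly the double-start problem HA fencing exists to prevent.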
  10. G

    Proxmox + Ceph 3 Nodes cluster and network redundancy help

    So I just joined a startup as their sysadmin; my role is to build the server from scratch, both hardware and software. My experience is setting up a single-node Proxmox homelab with things like OMV, Emby, nGinx Reverse Proxy, Guacamole, ... so I'm a noob, but I'm learning. The use case is...
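
    One piece worth planning up front for a 3-node Ceph build is network separation: Ceph's public and cluster traffic on their own subnets, each ideally on redundant links. A sketch for /etc/ceph/ceph.conf, subnets being placeholders:

        [global]
            public_network  = 10.15.15.0/24
            cluster_network = 10.16.16.0/24

    For redundancy, each of those networks would sit on a bond (or a full-mesh setup between the three nodes), kept separate from corosync and VM traffic.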
  11. S

    Proxmox in the cloud as a backup system

    I have Proxmox running on my own server and would now like to run a second Proxmox instance, possibly in the cloud, so that I can switch over quickly in the event of a hardware failure. The criteria are that the cloud Proxmox should be cheap while it is only on standby (i.e., no rented...
  12. B

    mixing vlans and bonding?

    Does anyone know if it's possible to mix bonding and VLANs? Say I would like to set up a bond over two interfaces, eno1 and eno2, plus the VLAN interfaces eno1.3000 and eno1.3001. The main purpose is to use iSCSI multipath while using the bond for Ceph.
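
    It works if the VLANs ride on top of the bond rather than on an enslaved interface: eno1.3000 stops making sense once eno1 is a bond slave, but bond0.3000 is fine. A sketch for /etc/network/interfaces, addresses as placeholders:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100

        auto bond0.3000
        iface bond0.3000 inet static
            address 10.30.0.11/24

        auto bond0.3001
        iface bond0.3001 inet static
            address 10.30.1.11/24

    One caveat for the stated goal: two VLANs over the same bond share the same physical links, so iSCSI multipath this way adds load balancing but no path redundancy beyond what the bond already gives.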
  13. aasami

    [SOLVED] multipath on iSCSI

    Hello all! I would like to ask for help with the configuration of multipath on an iSCSI disk in Proxmox 6.2. I have configured multipath on the server: [hp12 ~]# iscsiadm -m session tcp: [1] 10.1.100.112:3260,22 iqn.2000-05.com.3pardata:20220002ac005aab (non-flash) tcp: [4] 10.1.100.113:3260,121...
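
    A minimal /etc/multipath.conf sketch for a setup like this, whitelisting the LUN by wwid; the wwid shown is a placeholder, read the real one with /lib/udev/scsi_id -g -u -d /dev/sdX:

        defaults {
            find_multipaths yes
        }
        blacklist {
            wwid .*
        }
        blacklist_exceptions {
            wwid "3<wwid-of-the-3par-lun>"
        }
        multipaths {
            multipath {
                wwid  "3<wwid-of-the-3par-lun>"
                alias mpath-3par
            }
        }

    After editing, systemctl restart multipathd and check the paths with multipath -ll.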
  14. U

    Need help with configuring drives for redundancy

    I have an old AMD Phenom X6 PC and figured I'd tinker around with it. My disks are as follows: 3 x 1TB (currently in RAIDZ), 1 x 2TB, 1 x 8TB (newly acquired and shucked from a WD Black). I don't need more than 5-6TB of storage at the moment. I'm thinking there is a way to mirror the three 1TBs and the 2TB on the...
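
    One possible layout under those constraints: pair the disks into two mirror vdevs and keep the 8TB out of the pool as a backup target. Mixed-size mirrors only use the smaller disk's capacity, so this gives roughly 2TB usable. A sketch, disk ids as placeholders:

        zpool create tank \
          mirror /dev/disk/by-id/<1tb-a> /dev/disk/by-id/<1tb-b> \
          mirror /dev/disk/by-id/<1tb-c> /dev/disk/by-id/<2tb>    # only 1TB of the 2TB is used

    If 5-6TB is the real target, the 8TB has to join the pool, and then there is nothing of similar size to mirror it against; that's the core trade-off to decide first.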
  15. A

    Best practices for setting up Proxmox

    Hi everyone, I'm currently setting up a NAS/virtualization system and have already bought the following hardware: ASRock Rack X470D4U, Ryzen 5 3600X, 1 x 16GB ECC, 4 x 2TB Seagate IronWolf NAS (RAID 5), 2 x 128GB SSD (either hardware RAID 1 or 1 x 128GB system...
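
    For the four IronWolf disks, the usual Proxmox answer is ZFS RAIDZ1 instead of hardware RAID 5, and a ZFS mirror of the two SSDs for the system, both straight from the installer. A sketch for creating the data pool afterwards, pool name and disk ids being placeholders:

        zpool create -o ashift=12 tank raidz1 \
          /dev/disk/by-id/<ironwolf-1> /dev/disk/by-id/<ironwolf-2> \
          /dev/disk/by-id/<ironwolf-3> /dev/disk/by-id/<ironwolf-4>
        zfs set compression=lz4 tank

    RAIDZ1 of 4 x 2TB yields about 6TB usable and survives one disk failure, same as the planned RAID 5, but with scrubs and checksums included.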
  16. K

    multiple NFS storage setup for HA cluster & simulate a failed node

    Hi all! I have 3 nodes in an HA cluster using Proxmox 5.4. The nodes use the public IP for cluster communication and the private network for the storage network. Each node has an NFS storage shared with the other nodes. The disks in all 3 nodes are in RAID 1 (mirroring) (the reason I can't use Ceph...
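
    Each node's export can be registered once as cluster-wide storage; the server addresses and storage names below are placeholders:

        pvesm add nfs nfs-node1 --server 10.10.10.1 --export /srv/nfs --content images,rootdir
        pvesm add nfs nfs-node2 --server 10.10.10.2 --export /srv/nfs --content images,rootdir
        pvesm add nfs nfs-node3 --server 10.10.10.3 --export /srv/nfs --content images,rootdir

    To simulate a failed node, cutting its cluster network link (or stopping corosync on it) is the usual test: with HA enabled the isolated node should self-fence via watchdog and its HA services restart elsewhere. Note that a VM whose disk lived on the failed node's NFS export won't come back, since the export died with the node; that is the weak point of per-node NFS versus true shared storage.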