Search results

  1. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I have a strange issue setting up an iSCSI connection between PVE 8.3.2 and a Huawei OceanStor Dorado 3000 V6 and then configuring multipath. Because this is a test environment, I created only one LUN. PVE has 4x10Gb NICs (Intel) and I set up two networks with 2 different VLANs...
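
    A minimal sketch of what the multipath side might look like, assuming the LUN's WWID has been read with /lib/udev/scsi_id; the WWID and alias below are placeholders, not values from the thread:

        # /etc/multipath.conf (sketch)
        defaults {
            user_friendly_names yes
            find_multipaths     yes
        }
        blacklist {
            wwid .*
        }
        blacklist_exceptions {
            wwid "36001234567890abcdef000000000001"    # placeholder: WWID of the Dorado LUN
        }
        multipaths {
            multipath {
                wwid  "36001234567890abcdef000000000001"
                alias dorado-lun0
            }
        }

    After reloading multipathd, `multipath -ll` should show one map with paths over both iSCSI VLANs; that map can then be used as an LVM physical volume.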
  2. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hello, I have a cluster of 6 nodes with 4 x 3.2 TB NVMe disks per node. Now I want to add a node, but I have 4 x 6.4 TB NVMe disks. I would like to keep the cluster balanced, and therefore I would like to use only 3.2 TB on the disks of the new node. The question is: how should I partition 6.4...
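
    One way to cap the capacity an OSD uses is to carve a fixed-size LVM volume and hand that to ceph-volume instead of the whole device; this is a sketch under that assumption (device and VG/LV names are placeholders), not necessarily the approach the thread settled on:

        # create a 3.2 TB logical volume on the 6.4 TB NVMe and use it as an OSD
        pvcreate /dev/nvme0n1
        vgcreate ceph-nvme0 /dev/nvme0n1
        lvcreate -L 3.2T -n osd-block-0 ceph-nvme0
        ceph-volume lvm create --data ceph-nvme0/osd-block-0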
  3. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
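
    For reference, the flag handling and a manual deep scrub of the lagging PGs look roughly like this (a sketch of the usual commands, not the thread's final resolution):

        ceph osd set noout && ceph osd set noscrub && ceph osd set nodeep-scrub       # before the upgrade
        ceph osd unset noout && ceph osd unset noscrub && ceph osd unset nodeep-scrub # after the upgrade
        ceph health detail | grep 'not deep-scrubbed'   # list the affected PGs
        ceph pg deep-scrub <pgid>                       # kick off a deep scrub per listed PG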
  4. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, is there an official procedure to update a PVE7 cluster with Ceph 16.2? I have a cluster of 5 PVE 7.0.10 nodes with Ceph 16.2.5. Up to now this is the procedure I have used (for example to update node 1): 1. Migrate all VMs on node 1 to other nodes 2. apt update 3. apt dist-upgrade 4...
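
    A per-node sketch of the usual rolling pattern, assuming one node at a time and waiting for cluster health between nodes (not an official procedure):

        ceph osd set noout          # optional: avoid rebalancing while the node is down
        # migrate the guests off the node first
        apt update
        apt dist-upgrade
        reboot                      # if a new kernel or ceph packages require it
        ceph -s                     # wait for HEALTH_OK before moving to the next node
        ceph osd unset noout        # after the last node is done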
  5. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I have a cluster of 5 PVE7 nodes with Ceph 16.2.5. The hardware configuration of 4 of the 5 nodes is: CPU: 2 x EPYC Rome 7402; RAM: 1 TB ECC; 2 x 960 GB SSD in ZFS RAID 1 for Proxmox; 4 x Micron 9300 MAX 3.2 TB NVMe for Pool 1, named Pool-NVMe; 2 x Micron 5300 PRO 3.8 TB SSD for Pool 2, named Pool-SSD...
  6. Restore single Virtual Disk from PBS

    Hi, my configuration is: a cluster of 5 PVE nodes (PVE 7.0-10) with Ceph 16.2.5 and a Proxmox Backup Server 1.0-5 (I will update it next month). I have some backups of a Windows Server 2019 guest with 2 virtual disks (scsi0 and scsi1) and I want to restore only one virtual disk (scsi0). How can I...
  7. PVE7.0.11 and InfluxDB v2

    Hi, I would like to send PVE 7.0.11 metrics to an InfluxDB v2 server and I tried the native InfluxDB plug-in in PVE7. Now the problem is that my InfluxDB v2 server uses a self-signed certificate, and when I try to "create" an InfluxDB instance with the HTTPS protocol I receive the following...
  8. PVE 7 and Mellanox EN driver

    Hi, I have a cluster of 5 PVE7 nodes with Mellanox NICs (MCX516A-CCAT) and I would like to install the Mellanox EN driver, but on their site there is no Debian 11 package, only Debian 10.8, Ubuntu 20.04 or 21.04. Now the question is: which one should I use? Thank you
  9. Proxmox 7 - RRD Graph empty

    Hi, I installed Proxmox 7, upgrading it from Proxmox 6.4 due to the problem described in this post: https://forum.proxmox.com/threads/proxmox-7-stable-installation-aborted.92443/#post-403342 Everything went well, except that all RRD graphs are empty, totally white! I installed chrony...
  10. Proxmox 7 stable installation aborted

    Hi, I just tried to install the Proxmox 7 ISO on two nodes of a Supermicro BigTwin server with these specs: Server: 2124BT-HNTR; Motherboard: H12DST-B; CPU: 2 x AMD EPYC Rome 7404 24C; RAM: 1 TB DDR4 3200 ECC REG; Graphics: Aspeed AST2500 BMC (on the motherboard). The installation stops...
  11. Uninstall Proxmox VE from the host where is installed PBS

    Hi, I have a host on which I first installed PBS and then PVE. Now I would like to uninstall only PVE without causing damage to PBS. How should I proceed? Thank you
  12. Proxmox and native (original) Ceph Octopus

    Hi, we are planning to create a Proxmox cluster (8 nodes) for computing and use a native (upstream) Ceph cluster (Octopus, always updated to the latest version) installed on Debian Buster. Now I have some questions: for the Ceph "client", which repository should I use? The original Proxmox repository...
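
    For the client packages there are two candidate APT sources; this is only a sketch of the two repository lines for a Buster-based PVE node, not a recommendation of one over the other:

        # Proxmox-built Ceph Octopus packages
        deb http://download.proxmox.com/debian/ceph-octopus buster main
        # or the upstream ceph.com packages
        deb https://download.ceph.com/debian-octopus buster main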
  13. Ceph Single Node vs ZFS

    Hi, a client of mine currently has only one server with the following characteristics: Server: Supermicro AS-1113S-WN10RT; CPU: 1 x AMD EPYC 7502 32C; RAM: 512 GB ECC REG; NIC 1: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S); NIC 2: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S); SSD for OS ...
  14. Proxmox QinQ

    Hi, I would like to start using QinQ... This is a typical scenario: bond0: eno1 + eno2; bond0.10: VLAN 10; bond0.20: VLAN 20; vmbr10: bridge assigned to VMs of customer A; vmbr11: bridge assigned to VMs of customer B. Customer A has 10 VMs, 4 of them have to "ping" each other but not with...
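
    A sketch of how that layout could be expressed in /etc/network/interfaces with ifupdown2, where bond0.10 carries customer A's outer (S) tag and the inner (C) tags are chosen per VM NIC on a VLAN-aware bridge; the interface names and VLAN IDs come from the scenario above, everything else is an assumption:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100

        auto bond0.10
        iface bond0.10 inet manual      # outer tag for customer A

        auto vmbr10
        iface vmbr10 inet manual
            bridge-ports bond0.10
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes       # inner VLAN is then set on each VM's NIC
            bridge-vids 2-4094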
  15. Ceph Public and Cluster networks with different configuration

    Hi, within a few weeks I will have to configure a 5-node Proxmox cluster using Ceph as storage. I have 2 Cisco Nexus 3064-X switches (48 x 10Gb SFP+ ports) and I would like to configure the Ceph networks, for each node, in the following way: Ceph Public: 2 x 10Gb Linux bond in Active/Standby...
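
    For orientation, an active-backup bond for the public network plus the matching ceph.conf entries could look like this; addresses, interface names and the cluster-network choice are placeholders, not the configuration discussed in the thread:

        # /etc/network/interfaces
        auto bond1
        iface bond1 inet static
            address 10.10.10.11/24
            bond-slaves enp65s0f0 enp66s0f0
            bond-mode active-backup
            bond-miimon 100

        # /etc/pve/ceph.conf
        [global]
            public_network  = 10.10.10.0/24
            cluster_network = 10.10.20.0/24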
  16. Proxmox 5.4 stops to work (zfs issue?)

    Hi, I have a single node with Proxmox 5.4-13, and tonight it stopped working. I had to hard reboot the node... I have 3 ZFS pools (one for Proxmox in RAID 1, one for my HDD disks in RAIDZ2 and one for my SSD disks in RAIDZ2) and all the pools are online and the scrubs are OK. The PVE version is...
  17. Proxmox 6.1 with Ceph and RoCE

    Hi, in a few weeks I will have to set up a 4-node cluster with Ceph. Each node will have 4 NVMe OSDs and I would like to use RoCE for the Ceph public network and the Ceph cluster network. My network card is a Supermicro AOC-M25G-m4s, essentially a Mellanox ConnectX-4 Lx, with 4 x 25Gb ports. I would like to...
  18. Firewall multicast rules

    Hi, I created a cluster of 4 nodes, and now I would like to know which rule I have to add, in the firewall GUI, to permit multicast traffic on the management subnet (192.168.15.0/24, iface vmbr0)... Thank you very much
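
    Assuming the multicast traffic in question is corosync's (which, on older PVE releases, used multicast on UDP 5404-5405 plus IGMP), datacenter-level rules could look roughly like this; the syntax is the /etc/pve/firewall/cluster.fw form of what the GUI creates, and only the subnet comes from the post:

        [RULES]
        IN ACCEPT -source 192.168.15.0/24 -p udp -dport 5404:5405   # corosync multicast
        IN ACCEPT -source 192.168.15.0/24 -p igmp                   # IGMP membership traffic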
  19. OVS Bridge and jumbo frame (mtu=9000)

    Hi, I have read several posts about configuring an OVS bridge with jumbo frames (MTU 9000) but I am still confused, so I have some questions: Is it possible to set mtu=9000 in the GUI when I create/modify an OVS Bridge? Is it possible to set mtu=9000 in the GUI when I create/modify an OVS IntPort...
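
    Independently of what the GUI exposes (which varies by PVE version), OVS itself lets you request an MTU per interface at runtime; a minimal sketch, assuming a bridge named vmbr1 with physical port eno1:

        ovs-vsctl set Interface eno1  mtu_request=9000   # physical port
        ovs-vsctl set Interface vmbr1 mtu_request=9000   # the bridge's internal interface
        ip link show vmbr1 | grep mtu                    # verify the effective MTU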
  20. FreeNAS 11.1 as SAN (iSCSI)

    Hi, I have a project to use a FreeNAS 11.1 box as a SAN for a cluster of 3 Proxmox 5.2 nodes. FreeNAS will have 12 disks for VM and container storage: 6 x 8TB HDD; 6 x 480GB SSD. I will use a 960GB NVMe for ZIL and L2ARC. 2 SuperDOMs are for boot (RAID 1). In FreeNAS I will create two RAIDZ2 volumes: one...