Search results

  1. Replace faulted disk

    Hello, I have a faulted disk in a mirror pool. The spare automatically resilvered the pool (autoreplace=on is set). See below: pool: Datastore_7.2k_2 state: DEGRADED status: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue...
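Once a hot spare has resilvered in, the usual follow-up is either to replace the faulted disk or to promote the spare. A minimal command sketch, assuming the pool name from the excerpt; the device paths are hypothetical placeholders (`zpool status` shows the real identifiers):

```shell
# Check which device is FAULTED and which spare is INUSE
zpool status Datastore_7.2k_2

# Option A: replace the faulted disk with a new one;
# the spare returns to AVAIL once the resilver finishes
zpool replace Datastore_7.2k_2 /dev/disk/by-id/old-faulted-disk /dev/disk/by-id/new-disk

# Option B: promote the spare permanently by detaching the faulted disk
zpool detach Datastore_7.2k_2 /dev/disk/by-id/old-faulted-disk
```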
  2. [SOLVED] Renewal PVE licences

    Hello, I renewed four licences for my cluster (10 PVE nodes). University of Littoral - France. My reseller hasn't received any response from you. Reseller => Teclib SAS Paris. Can you do something? Thanks in advance. Regards.
  3. Adding hard disks to a PBS datastore

    Hi, I need to add some hard drives to upgrade to RAID6 on PBS. Currently, I am on RAID5 with three hard disks. I will "break" the RAID5 and recreate a RAID6 with six hard disks (hardware RAID). My PBS is already in production with a datastore. To recreate it, should I delete the file...
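Since the thread asks about recreating a datastore after rebuilding the array, a hedged sketch using the PBS CLI `proxmox-backup-manager`; the datastore name `store1` and the mount path are assumptions, not values from the thread:

```shell
# List existing datastores and note the path of the one to recreate
proxmox-backup-manager datastore list

# Remove only the datastore *configuration* (this does not delete backup data)
proxmox-backup-manager datastore remove store1

# After rebuilding the RAID6 array and restoring the data at the same path,
# re-add the datastore pointing at that path
proxmox-backup-manager datastore create store1 /mnt/datastore/store1
```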
  4. [SOLVED] Email report for backups

    Hi, I have a 10-node cluster and a PBS server. I would like to get a single email with the results of all the backups. Is this possible? Currently, I have configured emails to be sent per node. Thank you for your answers. Regards
  5. [SOLVED] Changing network configuration of nodes

    Hello, I have a cluster with 7 nodes and two network legs: 1 => 192.168.38.x/24 => VLAN routed for web output; 2 => 192.168.46.x/24 => for corosync. I want to change the web output (for each node) to another VLAN => 192.168.39.x/24. Will this have an impact on my cluster? Especially to access the...
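For a VLAN change like the one described, the bridge stanza in each node's /etc/network/interfaces is what gets edited; the corosync network (192.168.46.x) stays untouched, so quorum should be unaffected. A sketch with assumed interface names (vmbr0, eno1) and a hypothetical host address on the new VLAN:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.39.21/24
        gateway 192.168.39.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Apply per node with `ifreload -a` or a reboot, one node at a time.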
  6. Problem after upgrade 5.4 => 6.4

    Following the update of one of my nodes, I encountered a problem with the ZFS rpool (cf. screenshot). I followed the procedure: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 During the update, I encountered a label problem on my mirror: 3127700216878447358 ? I performed the operations...
  7. Tips/Good practice for large RAID/ZFS pools

    Hello, I currently have a Dell R540 blade with 256GB RAM and 6x 7.2TB 7.2K HDDs (2x Xeon Bronze 3106 @ 1.7GHz). I have created two 7.2TB mirror pools with a spare, in order to host two virtual NAS (1.5TB / 1TB). This node consumes 160GB of RAM. The idea is to reduce RAM consumption. Do...
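The RAM consumption the poster describes is typically dominated by the ZFS ARC, which can be capped with a module parameter. A sketch of the usual PVE approach; the 8 GiB value is an example, not a recommendation:

```
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592
```

After editing, run `update-initramfs -u` and reboot so the limit applies from boot.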
  8. [SOLVED] Quorum: 3 Activity blocked

    Hi, My cluster consists of five nodes. Following a sudden change in the VLAN IPs of my nodes, I modified the configuration files (/etc/network/interfaces). For four nodes, 'corosync/pvecm status' is OK. However, on the last node I ran the command => pvecm expected 1. Since then, it's not...
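When `pvecm expected 1` has been issued on a node after an address change, the usual recovery is to fix the address, restart the cluster stack, and restore the expected vote count. A command sketch using standard PVE tools (the vote count of 5 assumes the five-node cluster from the excerpt):

```shell
# On the affected node, after correcting /etc/network/interfaces
systemctl restart corosync pve-cluster

# Verify membership and quorum
pvecm status

# If expected votes were lowered, reset them to the real node count
pvecm expected 5
```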
  9. Cluster PVE 5.4 network strangeness

    Hi, My cluster consists of 4 nodes (Dell R440). Networks: 192.168.38.0/24 (private network for VMs), 193.49.x.y (public network for VMs). The VMs on the first three nodes communicate perfectly with an external physical server (public network). But the physical server fails to "ping" a VM that...
  10. ZFS Best Practices

    Hello, I have a blade in my cluster with a 32TB pool (ZFS raidz1-0), 5x 7.2TB 7.2K HDDs. Can the whole pool be used directly, or is it better to "cut it up" (3x 10TB?)? "Don't create massive storage pools 'just because you can,' even though ZFS can create 78-bit storage pool sizes." I'm thinking...
  11. Network access to Proxmox nodes

    Hello, After adding two nodes to my production cluster and entering the license keys, I updated all nodes with apt dist-upgrade. root@ipmpve3:~# pveversion pve-manager/5.4-7/fc10404a (running kernel: 4.15.18-16-pve) My problem is...
  12. Corosync configuration

    Hello, Can this be a problem when the node name is different from 'ring0_addr' in corosync.conf? node { name: ipmpve2 nodeid: 2 quorum_votes: 1 ring0_addr: cluster-ipm2 } To properly shut down Proxmox...
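On the question in this thread: `name` and `ring0_addr` may differ, but `name` should match the node's hostname, and `ring0_addr` must resolve identically on every node (e.g. via /etc/hosts), or be an IP address. The node block from the excerpt, annotated with those assumptions:

```
node {
    name: ipmpve2             # should match the node's hostname
    nodeid: 2
    quorum_votes: 1
    ring0_addr: cluster-ipm2  # must resolve to the cluster IP on every node,
                              # or use the IP address directly
}
```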
  13. ZFS pool configuration same name

    Hello, When creating a ZFS pool (raidz) on the command line, it does not appear in the dashboard. If I create it through the graphical user interface, I get an error message informing me that the pool already exists on the other node. Cluster: 2 nodes, PVE 5.3-8 / local-zfs + datastore raidz...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
