Search results

  1. Replace faulted disk

    Now it's solved. I had to do a zpool clear, then a zpool offline and a zpool detach. pool: Datastore_7.2k_2 state: ONLINE scan: resilvered 1.03T in 04:18:14 with 0 errors on Thu Nov 23 16:02:56 2023 config: NAME STATE READ WRITE CKSUM...
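
    For reference, the resolution described above corresponds roughly to this command sequence (a hedged sketch: the pool name is taken from the post, but the device path is a placeholder, not the poster's actual disk):

        # Clear the pool's error state once the spare has finished resilvering
        zpool clear Datastore_7.2k_2
        # Take the faulted disk offline (placeholder device path)
        zpool offline Datastore_7.2k_2 /dev/disk/by-id/FAULTED-DISK
        # Detach the faulted disk; the hot spare then becomes a permanent mirror member
        zpool detach Datastore_7.2k_2 /dev/disk/by-id/FAULTED-DISK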
  2. Replace faulted disk

    Hello, thanks for your answer. However, the faulted disk won't go offline; it stays tagged as FAULTED... Do you have any ideas? Thanks in advance.
  3. Replace faulted disk

    Hello, I have a faulted disk in a mirror pool. The spare automatically resilvered the pool (autoreplace=on is set). See below: pool: Datastore_7.2k_2 state: DEGRADED status: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue...
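
    For context, the spare/autoreplace setup the poster describes can be sketched as follows (pool name from the post; the device path is an assumed placeholder):

        # Let ZFS activate a hot spare automatically when a disk faults
        zpool set autoreplace=on Datastore_7.2k_2
        # Add a hot spare to the pool (placeholder device path)
        zpool add Datastore_7.2k_2 spare /dev/disk/by-id/SPARE-DISK
        # Check pool health and resilver progress
        zpool status Datastore_7.2k_2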
  4. [SOLVED] Renewal PVE licences

    Hi, my reseller has resolved the problem. Regards.
  5. [SOLVED] Renewal PVE licences

    Hello, I renewed four licences for my cluster (10 PVE nodes). University of Littoral - France. My reseller hasn't received any response from you. Reseller => Teclib SAS Paris. Can you do something? Thanks in advance. Regards.
  6. Adding hard disks to a PBS datastore

    Hi, I need to add some hard drives to upgrade to RAID6 on PBS. Currently I am on RAID5 with three hard disks. I will "break" the RAID5 and recreate a RAID6 with six hard disks (hardware RAID). My PBS is already in production with a datastore. To recreate it, should I delete the file...
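
    If the datastore really has to be removed and recreated around the array rebuild, the CLI steps would look roughly like this (a sketch: the datastore name 'backup' and the mount path are assumptions; removing a datastore deletes only its configuration entry, not the backup data on disk):

        # Unregister the datastore from the PBS configuration (on-disk data is kept)
        proxmox-backup-manager datastore remove backup
        # ... break the RAID5, build the RAID6, restore the datastore contents ...
        # Register the datastore again at its mount point
        proxmox-backup-manager datastore create backup /mnt/datastore/backup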
  7. [SOLVED] Email report for backups

    Hi, thanks for your answer, I understand. This PBS licence already gives me great performance in terms of backup time and volume :) Regards
  8. [SOLVED] Email report for backups

    Hi, I have a 10-node cluster and a PBS server. I would like to get a single email with the results of all the backups. Is this possible? Currently I have it configured to send emails per node. Thank you for your answers. Regards
  9. [SOLVED] Changing network configuration of nodes

    Hi, thank you for your reply. Corosync is on a different network, so it was no problem for me to change the routed VLAN for the web output. Regards
  10. [SOLVED] Changing network configuration of nodes

    Hello, I have a cluster with 7 nodes and two network interfaces: 1 => 192.168.38.x/24 => routed VLAN for web output 2 => 192.168.46.x/24 => for corosync. I want to change the web output (on each node) to another VLAN => 192.168.39.x/24. Will this have an impact on my cluster? Especially for access to the...
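
    The change described would typically be made per node in /etc/network/interfaces, for example by retagging the web-output bridge while leaving the corosync leg on 192.168.46.x untouched (a sketch with assumed interface, bridge, and address names, not the poster's actual configuration):

        # /etc/network/interfaces (excerpt) - web-output bridge moved to the new VLAN
        auto vmbr0
        iface vmbr0 inet static
            address 192.168.39.10/24   # was 192.168.38.10/24
            gateway 192.168.39.1       # gateway in the new routed VLAN
            bridge-ports eno1.39       # VLAN tag changed from 38 to 39
            bridge-stp off
            bridge-fd 0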
  11. problem after upgrade 5.4 => 6.4

    Following the upgrade of one of my nodes, I am encountering a problem with the ZFS rpool (see screenshot). I followed the procedure: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 During the upgrade, I encountered a label problem on my mirror: 3127700216878447358 ? I performed the operations...
  12. Tips/Good practice for large RAID/ZFS pools

    Perfect, thank you for your valuable advice. I will implement all of this little by little. Yours sincerely. PS: would you be interested in feedback?
  13. Tips/Good practice for large RAID/ZFS pools

    Thank you, now I understand what I have wanted to implement for several months. In my case, could a 256GB SSD be beneficial for the ZFS ARC (as an L2ARC cache)? If so, is a "zpool add -f rpool cache /dev/xxx" sufficient? Best regards
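
    For reference, an L2ARC cache device is indeed added with the command the poster quotes; a stable /dev/disk/by-id path is generally preferable to a bare /dev/xxx, and -f is only needed to override warnings (a sketch, device path assumed):

        # Add the SSD as an L2ARC (read cache) device to rpool
        zpool add rpool cache /dev/disk/by-id/ata-SSD-SERIAL
        # A cache device can be removed again at any time without data loss
        zpool remove rpool /dev/disk/by-id/ata-SSD-SERIAL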
  14. Tips/Good practice for large RAID/ZFS pools

    Thank you very much for your response. However, the file /etc/modprobe.d/zfs.conf does not exist on my node. I have seen the concept of the ZFS ARC on https://pve.proxmox.com/wiki/ZFS_on_Linux and my values are (arc_summary): zfs_arc_min 0, zfs_arc_max 0. Do I need to create the file and...
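
    Per the ZFS_on_Linux wiki page linked above, the file can simply be created if it does not exist. A minimal sketch capping the ARC at 16 GiB (the value is an example, not a recommendation for this host):

        # /etc/modprobe.d/zfs.conf - limit the ZFS ARC to 16 GiB (16 * 2^30 bytes)
        options zfs zfs_arc_max=17179869184

    On a system booting from ZFS, the initramfs must be refreshed afterwards (update-initramfs -u) and the node rebooted for the limit to take effect.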
  15. Tips/Good practice for large RAID/ZFS pools

    Hello, I currently have a Dell R540 server with 256GB RAM and 6x 7.2TB 7.2K HDDs (2x Xeon Bronze 3106 @ 1.7GHz). I have created two 7.2TB mirror pools with a spare, in order to host two virtual NAS (1.5TB / 1TB). This node consumes 160GB of RAM, and the idea is to reduce that RAM consumption. Do...
  16. [SOLVED] Quorum: 3 Activity blocked

    Hi, after a fresh install of the node, I applied the network configuration advised by 'spirit'. Many thanks to him. Best regards,
  17. [SOLVED] Quorum: 3 Activity blocked

    Thank you very much, I will apply this. However, the cluster communicates on 46.0/24. Also, 'ipmpve6' still appears in the file '/etc/pve/priv/known_hosts'. Could this be a problem when adding the node to the cluster?
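
    A stale entry can be removed from that file before re-adding the node (a sketch; 'ipmpve6' is the hostname from the post):

        # Drop the stale host key entry from the cluster-wide known_hosts file
        ssh-keygen -R ipmpve6 -f /etc/pve/priv/known_hosts
        # Alternatively, regenerate the cluster's SSH/SSL material on a cluster node
        pvecm updatecerts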
  18. [SOLVED] Quorum: 3 Activity blocked

    Here is the network interfaces file. What do you think of it? What modifications should be made? Thank you, sincerely
  19. [SOLVED] Quorum: 3 Activity blocked

    Hello, thank you for your response. I understand about the network interface names, but I am in a hurry to get back into production. The interface names (with VLAN tags) work on another node. Do you think I can reintegrate this node without any problem? Thank you, yours sincerely,
