Now it's solved. I had to do a zpool clear, then a zpool offline and a zpool detach.
pool: Datastore_7.2k_2
state: ONLINE
scan: resilvered 1.03T in 04:18:14 with 0 errors on Thu Nov 23 16:02:56 2023
config:
NAME STATE READ WRITE CKSUM...
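For reference, the command sequence was roughly the following (the device path is a placeholder, not the actual disk id from my pool):

```shell
# clear the pool's error state
zpool clear Datastore_7.2k_2

# take the faulted disk offline (placeholder device path)
zpool offline Datastore_7.2k_2 /dev/disk/by-id/FAULTED-DISK

# detach the faulted disk so the spare becomes a permanent pool member
zpool detach Datastore_7.2k_2 /dev/disk/by-id/FAULTED-DISK
```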
Hello,
I have a faulted disk on a mirror pool.
The spare automatically resilvered the pool (autoreplace=on is set). See below:
pool: Datastore_7.2k_2
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue...
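For context, autoreplace is an ordinary pool property; a minimal sketch of setting and checking it:

```shell
# enable automatic replacement with the hot spare when a device faults
zpool set autoreplace=on Datastore_7.2k_2

# verify the property
zpool get autoreplace Datastore_7.2k_2
```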
Hello,
I renewed four licences for my cluster (10-node PVE).
University of Littoral - France
My reseller doesn't have any response from you.
Reseller => Teclib SAS Paris.
Can you do something?
Thanks in advance.
Regards.
Hi,
I need to add some hard drives to upgrade to RAID 6 on PBS.
Currently, I am running RAID 5 with three hard disks.
I will "break" the RAID 5 and recreate a RAID 6 with six hard disks (hardware RAID).
My PBS is already in production with a datastore.
To recreate it, should I delete the file...
Hi,
I have a 10 node cluster and a PBS server.
I would like to get a single email with the result of all the backups.
Is this possible?
Currently, I have configured emails to be sent per node.
Thank you for your answers.
Regards
Hi,
Thank you for your reply.
Corosync is on a different network, so it was no problem for me to change the routed VLAN for the web access.
Regards
Hello,
I have a cluster with 7 nodes.
Two network legs:
1 => 192.168.38.x/24 => routed VLAN for web access
2 => 192.168.46.x/24 => for corosync
I want to change the web access (on each node) to another VLAN => 192.168.39.x/24
Will this have an impact on my cluster?
Especially to access the...
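A sketch of what the change would look like in /etc/network/interfaces, assuming vmbr0 carries the web/management traffic (addresses, bridge and port names below are illustrative, not taken from my nodes):

```shell
# /etc/network/interfaces fragment -- example values only
auto vmbr0
iface vmbr0 inet static
    address 192.168.39.11/24      # was 192.168.38.11/24
    gateway 192.168.39.1          # was 192.168.38.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

If the node's hostname resolves to the old address in /etc/hosts, that entry would have to be updated as well.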
Following the update of one of my nodes,
I encounter a problem on the ZFS rpool (see screenshot).
I followed the procedure:
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
During the update, I encountered a label problem on my mirror: 3127700216878447358?
I performed the operations...
Thank you, now I understand what I have wanted to implement for several months.
In my case, could a 256GB SSD be beneficial for the ZFS ARC (as an L2ARC cache device)?
If yes, is a "zpool add -f rpool cache /dev/xxx" sufficient?
Best regards
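If a cache device is the right choice, a hedged sketch of adding it (a stable by-id path is preferable to /dev/sdX; the device name below is a placeholder, and -f should only be needed to override a warning):

```shell
# add the SSD as an L2ARC cache device to rpool (placeholder device id)
zpool add rpool cache /dev/disk/by-id/ata-EXAMPLE_SSD_256GB

# the device should now appear under a "cache" section
zpool status rpool
```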
Thank you very much for your response.
However, the file /etc/modprobe.d/zfs.conf does not exist on my node.
I have seen the concept of ZFS ARC on https://pve.proxmox.com/wiki/ZFS_on_Linux and my values are (arc_summary):
zfs_arc_min 0
zfs_arc_max 0
Should I create the file and...
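For what it's worth, the file can simply be created by hand; a sketch with illustrative limits (8 GiB max / 2 GiB min here; the right values depend on the node's workload):

```shell
# create /etc/modprobe.d/zfs.conf with example ARC limits (values in bytes)
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_min=2147483648
options zfs zfs_arc_max=8589934592
EOF

# rebuild the initramfs so the limit applies at boot (Debian/PVE)
update-initramfs -u

# or apply the max immediately without rebooting
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```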
Hello,
I currently have a Dell R540 server with 256GB RAM and 6x 7.2TB 7.2K HDDs.
(2x Xeon Bronze 3106 @ 1.7GHz)
I have created two 7.2TB mirrored pools with a spare,
in order to host two virtual NAS (1.5TB / 1TB).
This node consumes 160GB of RAM.
The idea is to reduce RAM consumption.
Do...
Thank you very much.
I will apply this.
However, the cluster communicates on 46.0/24.
Also, 'ipmpve6' still appears in the file '/etc/pve/priv/known_hosts'.
Could this be a problem when adding the node to the cluster?
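A hedged sketch of cleaning this up (the hostname is the stale entry mentioned above):

```shell
# remove the stale entry for the old node name from the cluster known_hosts
ssh-keygen -R ipmpve6 -f /etc/pve/priv/known_hosts

# regenerate node certificates and the cluster-wide known_hosts
pvecm updatecerts
```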
Hello
Thank you for your response.
I understand about the naming of the network legs, but I am in a hurry to get back into production.
The network leg names (with VLAN tag) work on another node.
Do you think I can reintegrate this node without any problem?
Thank you,
Yours sincerely,