I'm coming back to this issue. I forced NFS v4.2 in the storage config in PVE manager and rebooted the node, but the grey question mark is still present on the storages.
I also tried with another browser.
There are no statistics for the storage size (free or used), but access to the store and to the VMs works...
mount shows, for example:
IP:/zpool-129262/vmstore11 on /mnt/pve/vmstore11 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=IP,local_lock=none,addr=IP)
It seems to be using NFSv4.
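For reference, forcing v4.2 on the config side looks roughly like this in /etc/pve/storage.cfg (a sketch reusing the export and mount names from the mount output above; the server address and content types are placeholders):

```
# /etc/pve/storage.cfg -- sketch; server address is a placeholder
nfs: vmstore11
        export /zpool-129262/vmstore11
        path /mnt/pve/vmstore11
        server <NAS-IP>
        content images,rootdir
        options vers=4.2
```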
Hi,
This question is already open in the forums, but no solution has worked for me.
On a cluster of 5 nodes (pve-manager/8.4.11/14a32011146091ed and Linux 6.8.12-13-pve (2025-07-22T10:00Z)) running in OVH infra with NAS storages mounted over NFS, I have a grey question mark on these storages, however everything is functioning...
Hi,
May I delete this file to free up space on my main partition?
-rw------- 1 www-data www-data 6085509120 Dec 10 2024 pveupload-cf0a58fc86ce7e778f4a3f998f179e2f
Thx
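If it helps others: the pveupload-* files are temporary files from GUI ISO/template uploads (typically left behind, e.g. under /var/tmp, when an upload was interrupted), and they are safe to delete once no upload is running. A minimal sketch for spotting stale ones, simulated here in a scratch directory so the paths are only illustrative:

```shell
# Simulate a stale leftover upload file in a scratch directory
# (on a real node you would point find at the directory where
# the pveupload-* file actually sits, e.g. /var/tmp).
scratch=$(mktemp -d)
touch -d '3 days ago' "$scratch/pveupload-cf0a58fc86ce7e778f4a3f998f179e2f"

# List pveupload-* files untouched for more than a day;
# append -delete only after reviewing the list.
find "$scratch" -name 'pveupload-*' -mtime +1

rm -r "$scratch"
```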
Hi
I have an existing and fine-running cluster in OVH.
I use OVH's vRack for the interconnection.
The cluster nodes run PVE 6.4-13 and have 2 NICs:
eno1: public IP for management
eno2: connected to the private vRack with different VLANs
vmbr1 (eno1.100): heartbeat for the cluster
vmbr2 (eno1.200) for...
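Written out in /etc/network/interfaces, a layout like the one above would look roughly like this (a sketch: the address is a placeholder and I'm assuming ifupdown-style VLAN sub-interfaces as bridge ports; adjust the names to your node):

```
# cluster heartbeat network
auto vmbr1
iface vmbr1 inet static
        address 10.0.100.11/24
        bridge-ports eno1.100
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno1.200
        bridge-stp off
        bridge-fd 0
```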
Hi,
I'm trying to choose between these two solutions: VMs on NAS-HA or VMs on CDA (Cloud Disk Array, i.e. Ceph).
Currently we have some clusters with NAS-HA only that run fine, but from time to time (once or twice per year) there is a network failure lasting a few seconds, the VMs lose their disks, and we must...
Hi,
I'm trying to set up a virtual Sophos as a firewall for my VMs in a private VLAN, and also to give access to remote users/sites using VPN.
Proxmox 7
eno1 -> vmbr0 (public IP as management on OVH infrastructure)
eno2 (connected to the vRack service in OVH)
vmbr1 -> eno2
vmbr2 -> eno2.100 (private LAN...
Thx,
I have read this... But in the GUI, the HA status shows entries with the "master" and "lrm" types.
What happens if I definitively remove the node whose type is "master"?
Hi,
I must remove an old node which is the master for HA.
How do I transfer this role to another node?
Do I simply stop the old node and remove it from the cluster, and the system will select a new master?
Or, as I have read in the forum:
use the following?
force a node to a master: pveca -m...
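Note that pveca comes from the very old PVE 1.x tooling and no longer exists on current releases; with the current HA stack the "master" is simply whichever node currently holds the manager lock, and a new master is elected automatically when that node's CRM goes away. So an illustrative (untested here) sequence would be:

```
# On the old node, before removing it from the cluster:
systemctl stop pve-ha-crm pve-ha-lrm

# On any remaining node, watch another node take over the master role:
ha-manager status
```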
Hi,
I also had the same issue.
All nodes are in /etc/hosts with the correct IPs (in my case, the public IP with Let's Encrypt certificates and a private IP for the quorum in a dedicated VLAN).
Joining the cluster using the short name failed;
using the FQDN resolved the issue and my new node is now part of the cluster.
Thx...
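For anyone hitting the same thing, the entries look along these lines (every name and address below is a placeholder, not my real setup):

```
# /etc/hosts -- sketch with placeholder names/addresses
203.0.113.11   pve1.example.com   pve1
203.0.113.12   pve2.example.com   pve2
10.10.10.11    pve1-corosync
10.10.10.12    pve2-corosync
```

and the node was joined with the FQDN (pvecm add pve1.example.com) rather than the short name.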
Hi, I have the same issue on a fresh new server trying to update from V6 to V7 (the image template in V6 from OVH runs fine, but when I upgrade to V7 I also encounter this error).
Where in IPMI do you change the boot order?
Thx